I had a real nightmare trying to render this video, took half a day and I finally had to cut it at the problem area 20:23 which is why there's a little repeat. ;)
Bruh... Ive watched every single video since that original polaris video sequentially. And i never once noticed a shift in your accent since i watch videos right after upload. I'm shook at your old accent. This made my day. keep up the good work.
Ya I didn't realize I had been following him since the very beginning of his tech analysis. I remember finding that video and being impressed with the analysis and I have watched every video since. Adoredtv and Gamers Nexus are far and away the best tech channels on this site!
@@CaveyMoth Yep I used to speak in a high pitch because I thought my voice was too deep for most to handle it. Even now I probably speak at a slightly higher pitch than my natural voice.
@@adoredtv Whoa, so you weren't speaking in your normal voice? That's fascinating. My voice is low AF, too, and people have trouble understanding it in real life.
@@CaveyMoth I think I'm easier to understand now, also my pacing has improved a lot as well. Been living in Sweden for 3 years so I have to try to make myself understood on two fronts. ;)
14:04 - Actually it's probably running mostly out of L2/L3, because AMD is using a more complicated version of the test which should not fit into L1, namely the sphfract scene at a much higher resolution and a lot of oversampling. It may well be accessing main memory too, I'm not sure; on SGIs I deliberately ran C-ray/sphfract at higher resolutions and with high oversampling in order to ensure the test would hit main RAM to at least some degree, but on these modern architectures with all the caching and other stuff going on, grud knows what Rome is doing here, but it sure as heck isn't dependent on any relevant core to core communication. Presumably there's a management thread which collates the scan line render results back from the separate threads, but the returned data chunks are pretty small, just a few K even at a high resolution, so inter-core bandwidth is not a factor. My guess is, much like the tiny "scene" test benefits from residing entirely in L1, the sphfract test benefits on modern CPUs from being able to sit mainly in L2/L3, which these days is very fast. On SGIs one could use precise monitoring tools to discern what the CPU was doing, but I guess x86 doesn't have this (I don't know, beyond my knowledge). 17:39 - The "scene" test definitely won't touch it, but sphfract at high res, etc. probably does (honestly not sure tbh). It's definitely hitting L2, but it could be that the way its using L3 (if at all) is still very favourable for the demo. Jim, thankyou so much for highlighting my comments on your earlier video, I'm glad that people will likely now have a better understanding of the nature and limitations of C-ray. It's an appealing test for AMD because it scales so well with threads (in a manner which the old CB R15 does not), but it's far from a real-world scenario, especially for rendering. For example, someone at a major US movie company told me that for their modern productions, a single rendered frame may involve pulling in many tens or even hundreds of GB of data over their SAN (hence the rendering on the CPU cores themselves involves a lot of data and thus accessing main memory), which means bandwidth and latency on their renderfarm is important, factors C-ray doesn't test at all. A different movie company in the UK told me their SAN can do about 10GB/sec, performance that is absolutely essential now they are frequently working with uncompressed 8K (can you imagine the memory demands of that? The guy told me they're about to move up to 48GB Quadro RTX 8000 cards because the 24GB of their existing Quadro M6000s is no longer enough). There's lies, damned lies, and statistics. Or as my old stats book says, people use statistics as a drunk uses a lamp post, for support rather than illumination. C-ray is *interesting* (that's why John wrote it), but my jaw completely hit the floor when I watched the Rome demo and there it was on the screen. That was like... Bugatti promoting their latest Veyron based on how fast one could fill the petrol tank. :D I did try to contact AMD to ask for more details of exactly how they ran their test, since disclosure of the precise compile command used to create the binary is supposed to be part of the test process (in order to be sure there's been no cheating), but I was unable to get a response (I'm a comparative nobody in the x86 space). I even added a new Test 5 to my C-ray page to match what I gather is the settings they used for the test: www.sgidepot.co.uk/c-ray.html but I'm loathe to flesh out the currently empty table with any entries until I can be certain the settings are correct. Jim, if you have any contacts at AMD, can you give them a nudge? I'd love to hear from them, would be great to have Rome in the #1 spot and see how things pan out from there. 8) What's hillarious about all this though is that on the one hand if AMD keeps using C-ray in its PR then Intel will copy them and use it too (and where that rabbit hole leads is anyone's guess; is it believable that neither side will ever try to cheat?), while on the other hand as long as they do use the test then by definition it cannot be used to promote whatever advantages Zen2 may have over Intel a la improved AVX performance. Why didn't they use CB R20? Perhaps because it incorporates Intel's raytracing engine and using Win10 might not be optimal on CPUs with as many cores as Rome (assuming it's possible to use Win10 on Rome at all atm). Hopefully, those to whom Rome may be appealing (as with any CPU) will be more discerning in their buying decisions and wait for proper relevant reviews that correctly reflect their intended workload. Thanks! Ian.
So...you're literally saying that AMD's Zen is Bugatti Veyron of CPU platforms for much less money than Inturd. Got it. Tank you very much for publicly and openly confirming/clarifying that Zen absolutely BTFOs Intrash on all fields while staying much cheaper and way more efficient at the same time (and also having great forward/backward compatibility). P.S. >Why didn't they use CB R20? Zen's THREADRIPPER (not even Epyc) already completely BTFOd Inturd's overpriced garbage CPoos in Cinebench R20, according to latest tests. So there's that. See here for example: www.imagebam.com/image/d6137d1167288784. Yes, as you can clearly see by that pic, Zen ALREADY (not even in it's much more improved Zen 2 state, but "mere" Zen+) utterly destroys *FOUR* extremely overpriced Xeons *in a 4S* and got very close to owning TWO *most expensive* "platinum" Inturds in a 2S configuration, and that's in a heavily Intel-biased benchmark that was made SPECIFICALLY only with one sole purpose of making Intel """look good""" in comparison with AMD's Zen on the worldwide scene. Irony is T H I C C, lol.
@@defeqel6537 Yes, hence the 32-CPU SGI Origin3K results on my C-ray page, along with my own 24-CPU POWER Challenge. Just as cores don't need to talk to each other much for these tests, sockets therefore don't either, except for returning render results to whichever core holds the management thread (but the data is miniscule).
@David haldkuk I think perhaps you're looking at it too much from the perspective of how complexity works in real-time 3D scenarios such as games, ie. more polygons means lower performance. For C-ray, one can make the test a lot more complicated merely by increasing the output resolution and using higher oversampling. From what I've been able to find so far, AMD for its Rome test used the sphfract scene at 4K res with 8x oversampling, so it isn't even really that complex a configuration, and from what Jim says it could well be that AMD chose settings which would ensure the test would remain within L2/L3 (there's no "standard" C-ray test, so they can choose whatever they like). For something tougher one could move up to 8K with 16x oversampling, but I don't know if the changes in relative performance would be that useful. Yes one could create a newer scene file with many more objects and surfaces, but I don't know if that would increase the compute complexity in a manner that's any different to simply rendering at a higher resolution and/or deeper oversampling level. Worse, the longer runtime might allow people to infer that the test is more relevant somehow to real world performance, when it really isn't. This is why, with SGIs, I was interested in comparing how a popular benchmark scene in Maya differed so greatly to a real-world scene rendered in Alias, which means the renderers are different aswell (the Alias scene came from a digital artist who designed magazine adverts, large advertising billboard posters, etc.): www.sgidepot.co.uk/perfcomp_RENDER4_maya1.html www.sgidepot.co.uk/perfcomp_RENDER3_alias1.html The Maya test is very simple and (on the same CPU arch) scales pretty much just with clock speed, whereas the Alias test is sensitive to system architecture and especially L2 cache size, eg. the dual-R14K/600MHz Octane2 is only slightly faster than a single-CPU R16K/800MHz Fuel (the latter has 2x more L2, higher mem bw and lower mem latency). And crucially, the Alias test very much reflects the kind of real daily work the artist in question has to deal with, so it's genuinely useful (or was back then); the guy does the same sort of work today, but of course he's long since moved onto PCs: www.johnharwood.com/ Teasing apart these issues has always been messy, as Jim's excellent videos convey. Sometimes companies do not want to delve into exactly what's going on in a public manner too deeply as it might reveal issues with their systems which don't look so good. Jim shows how the power efficiency curves of Zen/Zen2 may be related to the choice of test settings used by AMD to present their products in the most +ve light possible (I guess that's marketing; people do the same thing, the clothes we wear, hair, makeup, jewelry etc., all designed to convey something we want to project that's relevant to the context, eg. romantic appeal. professional presence, imposing military force, eco tree hugger, etc.) Heck, the entire plant and animal world for half a billion years has been an exercise in deceptive advertising. :D These days though, with modern social media, etc., trying to spin things in such a manner might be counter productive. The kind of people who would be interested in Rome are less likely to be fooled by such shallow practices, ditto (one would hope) the enthusiasts interested in a 16-core Ryzen. The danger is AMD over hypes the product but then delivers disappointment. In the context of SGIs I looked into an example of this sort of thing. After SGI released their final InfiniteReality4 graphics product, Onyx350/Onyx3900 gfx supercomputers and their Tezro workstation, a natural question to ask was, how good would these be products be with a maximum spec for running Discreet Inferno or Flame? How much better than existing configurations with lesser CPUs, or older SGIs with earlier architectures? eg. a quad-1GHz Tezro V12, likewise the equivalent node boards stuffed into the Onyx systems with V12 or IR4 gfx (max 32 CPUs for Onyx350, max 1024 CPUs for Onyx3900). None of SGI's PR contained this information, which I thought was a bit weird. Discreet wasn't talking about it either. Thus, with the help of some key people I was able to run some proper tests: www.sgidepot.co.uk/perfcomp_DISCREET1_FlameTests.html I never got round to testing Inferno, but the conclusion for Flame on SGIs was startling: for various real-world tasks running on systems using V12 gfx, performance can be severely held back by the V12's 128MB VRAM limit (SGI should have increased the VRAM for V12 in O3K-class systems to at least 512MB, preferably 1GB, but perhaps by then they couldn't afford to). It meant that having much faster CPU options such as the quad-1GHz barely made any difference in many cases, the CPUs were waiting on the V12 to get a move on. IR4 (released in 2002 btw) running Inferno would not suffer from this because it has a lot more VRAM (10GB, with 1GB texture RAM). Point being, even though SGI risked annoying customers by potentially selling them products or upgrades that may not provide a useful gain in performance, the marketing and PR still did it anyway (at least in the case of those using Flame, which for SGI was a critical market by then). Epicurus said 2300 years ago that advertising was the greatest evil. Nothing has changed, marketing/PR still poses tech products in the best light if it can, regardless of whether doing so might make the product designers and engineers want to tear their hair out in frustration. Ian.
@@Kawayolnyo :D My Bugatti analogy was just to convey the idea that AMD boasting about C-ray isn't telling relevant potential customers anything they want to know. I could just as easily have referenced something more mundane, like promoting the latest TV by boasting about the number of buttons on the remote control. :) It's a mismatching of concepts, like the Suez Crisis popping out for a bun (and I'll nick that line from Adams as often as possible). The kind of customers who might be interested in Rome would I am certain not care about C-ray numbers. I'm no expert on the whole Intel/AMD competitive position in Enterprise btw, not my field. Also remember that TCO is often more important than raw hw performance, which includes other aspects such as system support, maintenance, staff salaries, software licensing and 3rd party sw optimisation, etc. A Cinebench score, just like C-ray, does not for this class of hw tell one anything useful in terms of making a buying decision. It makes for great PR and headlines, but I doubt it helps much with relevant buyers who are more likely to be interested in directly representative benchmarks, or indeed inhouse testing on loan systems. Ian.
Well, AMD may show only their best at presentations... But isn't it also true that Intel only show their FAKEST at presentations? (Industrial Chillers w/ pre-overclocked no-show CPUs, VLC video playback of "live" iGPU gameplay etc)? *LOL* Glad you mentioned that tho. :)
This is very true! But you should never measure your own success based on the failures of others! Kinda the same as saying that if you are fleeing from a bear, you only need to run faster than the slowest person ;) I'd prefer seeing AMD run faster than the fastest person, that would be something to brag about!
Be sure to also check out Intel's GRID Autosport (or was it GRID 2?) iGPU demonstration... That was actually a VLC video playback from a few years back. The chiller fiasco may have been disingenuous as fuck, but the fake iGPU demo was a flat out LIE. And there are other examples. Let's not ever forget these things. Intel is the textbook definition of cronyism and failure of free markets. We need better technology leaders than these crooks.
Jim, another excellent video. I have had multiple people on my channel call me crazy for suggesting 5GHz chips, and state that it's specifically crazy because VII proves 7nm only brings 20% higher clocks. Then I point out that a 20% clockspeed increase over the 2700X would mean a 5.2GHz 3850X.... and they stop arguing lol. The fact is it is conservative to expect 4.5GHz 16-core parts from AMD, and if you are optimistic honestly the sky is the limit on this launch (but I recommend conservatism). Like you say Jim - the most likely downside to Zen 2 is probably a segmented roll-out of parts over all of 2019, but I don't see that as horrible at all. Oh, and yeah wow your old accent is hilarious. Cheers!
GF's 7nm was supposed to have a pretty consistent frequency-power curve up to well over 5GHz, but it was also a superior process, so who knows where the limit is with TSMC's 7nm.
Thank you for another extremely good video. Some thoughts: 1) Those voltage curves are terrible for Polaris, no wonder people think it is such a power hog. Just doing some napkin math here, but power consumption typically scales linearly with clockspeed and the square of voltage. So looking at relative power consumption we have Efficient Polaris = 850Mhz x 0.815V x 0.815V = 565 relative power consumption Shipping Polaris = 1266Mhz x 1.120V x 1.120V = 1588 relative power consumption 1266Mhz / 850Mhz = 1.489 and 1588/565 = 2.81. So AMD got 50% higher clocks for close to three times the power consumption. I think that shows just how hard they are pushing Polaris past it's efficiency point. (dunno if this is right, someone correct me if it is not). 2) It sounds like the 1.4GHz 64core epyc couldn't possibly have been what they showed at next horyzen, unless they somehow got truly ridiculous ipc gains (which I doubt). So maybe the 1.4Ghz is a lower end model designed for efficiency and very low clocks, and there will be a 64 core part with much higher clock speeds, or AMD chose a particularly well clocking chip for the event and overclocked it to frequencies shipping parts won't have. 3) It wouldn't surprise me if TSMC 7nm has some voltage wall somewhere, so ryzen would clock well up to a point, and then just refuse to budge without extreme measures. I am saying this because it seems to me that more modern processes are that way, clocking well up to a point and then stopping in their tracks, whereas older an older process would just bump the voltage slightly for every clock bump. Global foundries 14nm stops at ~4Ghz and Intel 14nm++ at ~5.1 GHz, for example. So I think ryzen 3000 will have a similar clock wall, but if it occurs at 4.3Ghz or 5.3Ghz, time will tell. (again, people more knowledgeable than me, correct me if this is wrong).
1)The Equation you used comes from dennard scalling which due to complexity of modern MOSFET scaling is no longer valid. More at:(en.wikipedia.org/wiki/Dennard_scaling)
@@master_andreas1202 I know that dennard scaling broke down a little over a decade ago, so smaller transistors are not necessarily more power efficient or faster. It basically takes away Moore's law's teeth. But why would that invalidate P = CV^2f?
@@coopergates9680 I honestly do not know. It could very well be finfets causing the wall. The only thing I know for certain is that the voltage/frequency curve keeps getting steeper.
@@thomashayes1285 I almost think it might be nice to plot max stable clock against voltage (swap the axes), so that users know beyond what point increasing voltage yields insignificant or insufficient potential clock speed increases. The 'wall' would then be a horizontal line.
That polaris power video was the first video of yours I saw. I was lucky enough to come across your channel just as you changed from a lets play channel. Its been great to see you improve the quality of your channel to where it is now. Keep up the good work Jim.
I've been around since then as well. I get excited when there is a new video from Jim, he holds nothing back when it comes to his criticism of computer hardware. Jim and I are both GenXers who have grown up with all of this wizardry that is modern computer chips. Having 2MB of memory in your setup usually set you back $1000.
The reason they picked an 8C is straightforward, they wanted to do an apples-to-apples comparison to the 9900K. A clocked-down 16C chip would obviously be faster in MT performance, but everyone knows that, so that wouldn't have been very impressive. The handwave here is in the selection of Cinebench as the benchmark, while using power-limited chips. Like the Polaris demo, they are clocking down to their efficiency point, while using a benchmark in which a 2700X already outperforms the 9900K's IPC. It's not an unfair test, but it is one where they are putting their best foot forward.
16 core against a 9900K wouldn't be as telling about the technology. AMD's point is that they're winning on IPC, cores and possibly clock speed, certainly power. If they beat them down with a 12 core we wouldn't know as much about single threaded performance. Apples to apples gave us far more info than a 16 core vs 8 core shutout would have.
@Chiriac Puiu sure it is enough to make ppl exited but not to be sure about the speed of the rest cpus. As far as everybody knows right now that may be the best they have in terms of clockspeed "that matters for gaming alot" and the rest may just be more cores lower clock most likely. Not sure on prices and the performance of the full cpu range to be to exited since last I checked the word was about radeon vii that will be around 400$ for the same performance as a 2080... the performance ok but whit less features and more heat and noise , but the price... well lets just say its not what ppl wanted. I just hope they don't do the same on the cpu side and keep the prices decent.
@Chiriac Puiu Yes but my problem whit it is that we do not know the clockspeed and ipc improvements to make any sort of claim about performance. Until reviews whit benchmarks and official prices its kind of pointless to make claims , unless u work for amd and know something the rest don't I'l believe it when it comes to market.
Man I couldn't wait for another one of your videos. Honestly alot of what you have I find with heavy research (your leaks are unparalleled though) but I just love how you explain the information. It reinforces and gives me a more complete understanding of the subjects I was studying. Okay guys stop reading this and get back to the video!
@@pec1739 Just 6 hours ago someone wrote an article. www.google.com/amp/s/wccftech.com/amd-ryzen-3000-valhalla-cpus-x370-x470-motherboard-bios-support/amp/
CROSSHAIR VI HERO BIOS 6808 Update AGESA 0070 for the upcoming processors and improve CPU compatibility. ASUS strongly recommends installing AMD chipset driver 18.50.16 or later before updating BIOS. So i have updated my bios so yeah
As always Jim, another video that was well put together, and one I was especially looking forward to. Since the whole Ryzen 3000/Zen 2 hype train started rolling, the majority had been very optimistic. But bringing in some healthy skepticism is necessary I'd say. Nonetheless, it looks like even in the worst case scenario, things don't look too bad. I wouldn't be disappointed seeing the clock speeds taper off at around 4.6GHz or so, with an IPC boost of around 10%. That was my initial expectation anyway. Therefore even at their worst, they'll still be ahead of Intel. So as long as they keep the prices right, I can see Ryzen 3000 still being a success. I can see that you did have to dig down deep to find the concerning parts of everything that has been shown. I'm just annoyed now that we've heard so much, and in such variation, but with no clear release date in sight. I'm getting really anxious about it. I want to trust they'll use their better judgment to not botch the launch and hopefully they release a 12C/24T CPU in the first line-up.
the worst is a decent upgrade not bad but not earth shattering the best is another godzilla set loose all over the industry that people are going to struggle to tame.
@@sharkexpert12 the most important things is still it's awesome efficienty at lower clockspeeds. Clockspeed does not matter all that much in enterprise situations. The big xeons have been in the 2ghz range for forever because power usage and heat generation are far more important. And amd succeding in the datacenter is far far more important then succeding in the enthausiast gaming space for their survival. It is possible that zen2 at the very high end could again be a disapointment. In the end it will be still be a win for amd as marketshare in the datacenter should be their main priority anyway.
34:16 Would it? In Zen/Zen+, the speed of Infinity Fabric is tied to the memory clock. Isn't Zen 2 supposed to decouple that link? So isn't Infinity Fabric running as fast as it can regardless of memory clock speed on Zen 2? Could they have used that memory speed to demonstrate that fact?
BTW the biggest problem of zen2 (Ryzen 3000) is and most likely will be availability, not only for the CPU´s but the boards too! Thats also another reason to delay the 16 Core 1-3 Months.
There might be the possibility of the 12c/24t chips running in the prior boards as well. It all depends on the power draw. My guess is the 12c CPU will be about the same usage as the old 8c part, at worst.
@@TheCgOrion Yeah, that would depend on the exact model of board, and how good the power delivery system is. For reasons I don't understand, apparently MSI seems to offer better power delivery with their mid-ranged AM4 boards, and at good prices. I've seen a few different reviews which talk about this, but it's typically not easy to get good info about power-delivery, as motherboard makers usually don't make this clear, with specs that are often misleading at best, if not outright lies. Gigabyte has even gone as far as to load up some of their boards with components it doesn't need to make it LOOK like the power delivery system is better, without actually providing a better than average power system.
Cinebench does benefit Zen because it does mostly run out of the L1/L2 cache. That has been obvious since Zen 1. The things that you do not mention in this video and maybe did not consider or fully understand are the following: L1/L2 and L3 cache on Zen are all running at the CPU Clock speed, not the speed of the installed system memory. The L1/L2 and L3 are all limited to their own CCX module within the die and communication between CCX Modules is reliant on the Infinity Fabric "on die network" which is of a hub and spoke design that has equal speed connections between a central switch, the two CCX modules, the Memory Controller and PCIE bus with the maximum connection being limited to the max speed of dual channel memory installed in the system. Likewise a single CCX module also has the same bandwidth available to it that the dual channels of memory have. As with an office Lan, when too many users want access to a central server the network connection on a hub and spoke network that connects the server to the switch becomes a bottleneck. That is the reason why typically a central server might be connected to the switch with a 10GB/s link and the workstations all run at 1GB/s. The Infinity Fabric that transports the data between cores and between the system memory is performing the same functions as the Intel Ring Bus. However ring bus is more like a token Ring network where every device is guaranteed access to the network and is clocked at roughly double the rate of the Infinity Fabric to allow throughput on the ring to exceeds the capacity of the system memory itself and the PCIe bus. Im sure that power considerations were the likely reason for the compromised design Like with Intel, faster Ram is beneficial to a small extent. The major benefits to Zen though, is the increased throughput on the IF because of the higher frequencies allow more data transfers per second and push the bottleneck to system memory that inherent to the zen1/1+ architecture higher up the cpu performance curve. as demonstrated by teh 9900K, The memory chips themselves are not the bottleneck it is the shared transport in between cpu cores, pcie devices and memory. The Ringbus doesn't have the same bottleneck as it allows roughly twice the throughput between cores, PCIe controller and memory controller to start with. You can test it yourself on an Intel system by setting the cache multiplier to half of what the stock settings are and try playing a 1080p game. The 9900K will play games with performance more like a 2700X. While I do not have any hard facts other than what has been discussed about Zen 2 here, based on my knowledge of Zen 1 architecture and its inherent design limitations together with what we have seen so far in AMD demos of Zen 2. I am pretty confident that we will see: 1. Zen 2 CCX modules now contain 8 cores with the L1/L2 and L3 doubling in size compared to Zen1 4 core CCX modules, the L3 Cache will be shared by all 8 cores. 2. The infinity Fabric clocked separately from the memory speed, most likely clocked at or near CPU frequency. 3. The IO die will contain some level of L4 cache, that has not yet been disclosed by AMD, that is shared by all the installed CCX modules. The L4 Cache would allow the CPU cores to switch between modules without the need to continually go back and access relatively slow system memory when switching threads between the different modules working something like the 128mb edram cache on broadwell and the Iris pro mobile haswell chips.
I was thinking the same !!! Specialy with L4 cache . With the infinity fabric , i remember the Tofu 2 in the fujitsu spark 64 fx , they have less pin conections but a faster speed and reduce latency.
@@cristiansalazar6622 I certainly think that the 12 core part coming first makes the most sense. Thread ripper chips are likely to follow Ryzen by some time so a 16 core Ryzen eats into their HEDT business when It doesnt have to be that way right now. Intel look like they have a 10 core product coming next so a higher IPC 12 core Ryzen, even if the IPC doesn't quite match Intel at single core performance, should compare favorably with the top end mainstream Intel SKU
brad morris This backs up AMDs claim that Zen 2 will be very good for gaming; if your points are accurate, thats all targeting to lower latency which games are so sensitive of. Question: do you think the 8C ryzen cpu will be better for gaming than the 12C? 8C needs only one 8C chip, the 12C will have 6C+6C which may still introduce latency, but the 12C should have more total cache. Thats important also in terms of comparing to its rival, and still king of gaming Intel, the best 8C ryzen expected to cost around half of the 9900k bringing the value envelope to another level, and intel will be in trouble because singlethread performance is the last crown intel still has. If thats in jeopardy they will really have to grind and innovate and lower prices, which is all we consumers want
@@sergiomadureira9985 the L4 Cache is complete speculation on my part but I do believe that it mitigates some of the core to core latency issues that Zen has demonstrated to date. Similarly, separately clocking the Infinity fabric even if it is only at say - x1.5 memory frequency will go a long way to provide a more Intel like gaming experience. Rumored PCIe 4.0 will also help as it suggests that IF bandwidth will double as well. Getting a 25% IPC gain over zen 1 by more efficiently parallelizing instructions (floating point and integer calcs running in parallel for example) together with a more efficient internal data transport architecture should combine to allow for that possibility. If I was buying something today and money was no object I would go with a 9900K solution so I don't care about this brand being better than the other. Having said that, I am quietly confident that the 3rd Generation of Ryzen is the one that really pulls everything together and makes a name for the Ryzen series of chips. Thunderbolt 3 going royalty free and usb4 coming also bodes well for including it with AMD platforms. Competition is a good thing. It pushes innovation which is something that we have not really seen much of for some time
@@sergiomadureira9985 I think that we should all take manufacturer's claims at face value, they want you to get excited and buy their product. I truely hope that they are genuine claims but that will have to wait until the chips can be tested out in the wild. with regards 8c vs 12 for gaming, I honestly dont know. If AMD learns from the past and if they migitage the deficiencies of teh design then both should perform pretty well. If Jim's leaks are real, then the 12 core looks like it will provide the best single core performance. The new generation does stand a chance of reducing the latency to close to what Intel is doing now. An 8c will not have to deal with the same dual level thread switch that the current chips do. a 12 core will have the divide in the middle. The Windows scheduler has shown to not be all that smart when it comes to multi die or multi CCX based chips so maybe we will see benefits there as well.
I actually limited my Vega 64 a bit on purpose to make it run less hot, they put way too much power into these reference cards. My friend who runs a Vega 56 showed me an undervolting and overclocking guide for Vega and I thought he was smoking something the first time I heard him until I read it. It's one of those blowers and it runs far more stable with the power usage reduced in the Radeon software and doesn't hit it's thermal limit to shut down like my old Nvidia card did often. God Maxwell was a dumpster fire. This card in my experience behaves a lot better than cards I've used in the past. It's a good happy little GPU.
what ever comes, comes, enthusiasm isn't a crime. I still bought a vega 56 and did what you showed in your video on it.....happy as hell with my 1600X and power color red dragon vega card. this is still the best tech news i watch, probably always will be. the efforts made to bring us this information is outstanding and i think we that watch consistantly know to give thanks to you as an honest as can be given veiw.
I'm optimistic, I think there will be a nice boost in IPC and a decent boost in frequency. add the two together and throw in the tweaks they surely did to the memory controller and it should be very competitive on a per core basis and much better in a price/performance scale.
These are the types of videos that I like. You stepped back from your Primary analysis and looked at the Zen 2 Architecture from a different perspective. Perhaps not everything is all as it seems at AMD, and this is what will keep the discussion going for us all. Keep up the great work!
People who get professional voice training for broadcasting are taught to speak in a deeper voice, I believe. Of course, I don't think Jim has actually gotten any professional training, not unless it's from one of those inexpensive educational video websites.
I'd say most gains outside of the obvious from the 7nm die shrink could be down to the I/O and IF2. There hasn't been much talk about that I/O die and what it may or may not contain as well as any improvements to Infinity Fabric 2 through the switch to 7nm on the Zen Core IFOP's as well as 7-14nm cross compatibility and any performance gains from that and the IFIS links across the CPU package. Great video again, appreciate your analysis Jim.
@@flcnfghtr tbf, the fact that it has the same size doesn't mean it has the same content. You would need to do quite a bit of redesign of what is "left" after you remove cores to accommodate this change to chiplet architecture. We'll see. Release should be a few month from now
Hearing that old recording it shows how much you've changed your accent to make it more understandable. As someone with english 2nd I apreciate that very much, thank you.
I'm just stunned by the amount of work you put in your videos -- collecting information from various sources, running benchmarks yourself and doing the analysis, to reveal the these tech companies' business models, and saving us from being fooled. Respect. Great accent though ;)
They are keeping ZEN 2 very tight lipped. There's aspects in the processor design that will not be revealed till days before official launch. That's when people will go WOW.
@@gabriellucena6583 What's interesting is that ZEN2 may be a complete design overhaul versus the original ZEN. Really can't wait to see more details on it.
One benchmark that _could_ be L3 heavy is LuxMark with Hotel scene. It is intended to bench raytraced rendering on OpenCL GPUs, but also has a plain C++-on-the-CPU rendering mode. Would've been interesting to see how its samples per second change with 2+0 and 1+1 core arrangements.
I have the 2400G APU and rarely buy / need the power of a discrete GPU so I'm holding off upgrading until Zen 2 + Navi comes to APUs or AMD release a chip similar to the Intel+Vega used in Hades Canyon.
The Shape Very different, no doubt. But a Scottish accent is a lot closer to an Irish accent than anything else, so the confusion is understandable. Both Gaelic/Celtic in origin. - American who is very familiar with the varied accents of 🇬🇧
I saw somewhere- perhaps even here, though I think it was anandtech- was that the power consumption on threadripper scaled exponentially with utilization due to the infinity fabric power utilization. It mean that as it clocked up, nearly all the power and thermal headroom went into inter core communication and not into increased clocks. It’s why that had such low boost clocks. I wonder if that could continue to be the problem. Is the infinity fabric dooming these otherwise good chips?
8:32 this is what triggers me the most about mainstream benchmarking YT channels who compare CPUs at ultra setting - 60 fps at best and stable 99% GPU utilisation... and than they base the winner on 2-4 fps difference
Unless he knows you personally, no, he doesn't know, and why should he care? He lives likely in a different timezone and uploads based on what is convenient for him, not you. Now, all seriousness aside, I just have to say, good luck on your tests. :P
Its hard to think that Zen2 will suck. Intel, and many people basically thought Ryzen will be DOA. It was not. quite opposite in fact. Block diagram alone of Zen let to believe, that it was unfinished product, an early taste of what they been cooking since 2012. Designing good architecture takes many years, and Ideas which was not incorporated into Zen, and all the tweaks found in the meantime, should be incorporated in Zen2. My Wallet is ready.
Also @ 20:31, I think you missed the opportunity to bench the Radeon VII with the same "maxed out" setting on a stock or overclocked 9900K, to see IF a supposedly faster gaming CPU could yield the extra 10+ frames per second the Key Note Forza demo showed... Well, if you have a 9900K at hand that is. But still, I think such a bench could have given a few more PRACTICAL answers to your endeavour.
3:40 HE'S SOO YOUNG and adorable sounding. lmao Soooo cool to see how much you've matured as a person along with your content as well. Good shit mate :)
Jim tries to find a way to be negative about 3xxx Ryzen. And can't. I almost feel bad here. Really looking forward to dropping a 12 core/24 thread "x" series ( 3700x? We think? ), in to mY Crosshair 6 after a bios flash and being a happy camper. (1600x currently).
MrDaChicken we could really feel his effort to be negative when all his senses are going the opposite direction. But I understand why he made this video, he has been accused of being a AMD shill and getting a lot if hate lately (as he talked about in his last video), and wanted to give us a different perspective
My theory is that the 1.4/2.2 isn't base and boost, but rather it's boost over base, so 2.2 ghz base and 3.6 ghz boost, or 1.4 ghz over base....I could be wrong, but that could explain things well as 3.6 ghz is ~40% higher clockspeed than 2.2 ghz and combined with the 10-15% ipc improvement it could explain the 60% performance boost....…………..but that's just my theory.
As it being an epyc part a 2.2 turbo is actually pretty acceptable. Efficienty and heat output are far more important then core clocks. In the datacenter computations per watt rule and if you can run 3 64c cpu's at 2.2ghz for the same power usage as 2 64c cpu's at 3.6ghz the 2.2ghz cpu's will be the obvious choice. Lifetime power consumption costs are far higher then initial purchase costs. So spending a bit extra on hardware in order to save big money on power over the lifetime will always make much more financial sense.
@@needausernamesoyeah I think the latter is more likely ... the change wasn't quite an accent change as the speech patterns are pretty much the same... he just uses a much more assertive, deeper tone now.
When I came to this channel for the first time, I thought your voice was weird. After listening to the old video, I have to say, keep up the good work, you have improved so fucking much
Hey Jim, just wanted to let you know that this was a really interesting and well done video. It's very interesting to hear a more meta discussion comparing tactics and "best case" scenarios and the like for what AMD has done previously vs with Zen 2. I have to wonder though, even with the 7nm boost in clock speed, do you REALLY think that they can do 5ghz on 16 cores on desktop? I'm a hopeful person but I have a hard time seeing them getting even 12 cores up to 4.8 or 4.9. 5ghz just seems like too much of an ideal situation to get without cranking up voltage, which we know from ryzen 1 and 2 hits a wall really fast with clock speeds (4.1 on ryzen 1, and like 4.4 on ryzen 2). Anyways, I lloved your analysis and thoughts, and I can't wait to upgrade my 1600 to a shiny new 12 core zen 2 CPU later this year. Cheers!!
The heat shouldn´t be the deal breaker here, even if its like 20C° hotter than zen+ its still only as hot as an intel CPU... so yeah we shouldnt have worries there! :)
? It would be a deal breaker for me. I didn't buy new i7 or i9 as those chips are freaking barbeques. I tried Acer Nitro laptop, and after 1 hour of gaming 4 core 8 thread Intel chip in it reached 94 degrees Celsius! I don't want new Ryzen CPUs to turn into that.
@@myroslav6873 for some reason people think 80c is the max,despite having laptos with intel cpus running at 100c amd throttling for years with no problem.I also remember a test someone did(cant remember)who run a desktop intel cpu at 100c for a year and had no problem at all,there is a reason intel puts 100c the point of throttling and not 80c they probably know what they are doing,i think its mostly in your head since older chips needed to run colder and people have continued with the same mentality
I'm not surprised at the clock speed for Rome on the mega-core version. That's normal even for Intel's Xeons. Rome is for servers where power/performance is the main purchasing criteria. I don't think Zen 2 in the desktop Ryzens will disappoint unless people have unrealistic expectations. As usual, pricing has a big impact on success.
Dubious headline mate, very dubious. AMD have all the info on their competition, so they're taking time while stocks run low... to make sure their new tech kicks ass. :)
Holy cow, your Polaris video was the first video I ever watched from you, I had no idea it was your first video after switching from a let's play channel. Also you do sound a lot different now, I just never noticed because you've changed over time. And of course, amazing video as always. :)
Hey guys a dumb question right here: Would it be possible to use HBM on an APU with a Ryzen Chiplet and navi graphic? The coolest part of that would be if you could use the HBM as a L4 Cache/RAM, does anyone know if thats technically feasible? If that would work that would be a crazy good product!
@@cheescake98 Not with appropriate prefetching. I mean yeah it's no L3 (in speed I mean) but it's so much better than RAM that it's worthwhile, especially for APUs which are very constrained by memory bandwidth (it's only not all that apparent because they are so low end) Re: use as RAM adjunct, could attach via PCIe but it would take up so many lanes to be effective. Better to use a dedicated interface. Maybe we'll go back to co-processors sockets haha (except for memory)
They are doing some research about 3D stacking and doing something similar to what you said. But there's not much about it, just rumors and little to no info news articles
Technically possible. Yes certainly a single hbm2 stack of 4GB would be an immense high bandwidth cache (HBC) and bring allot of improvements to be igpu performance (which is going to be needed with better navi igpu being ALLOT faster then traditional small gcn igpus currently used). Plus HSA where the igpu is used as a floating point unit accelerator with data being flagged by cpu and gpu for use by both would be awesome. We've already seen tiered storage become a thing. We now have l1 cache then l2 then l3 cache then a big gap to system ram then optane caches and/or ssd then a big gap to hdds. The industry is constantly trying to fill the gaps at the moment cost effectively and going from a 16mb l3 cache then to 16gb of ram...a 4gb HBC does that very well. The big problem is cost. A hbm2 stack is close to the same size as an 8 core chiplet so costs the same to make. navi based apus will be monolithic style single dies like current apus are with 8 cores in one half of the die and navi20 gpu being on the other side and a price of £150. to add a hbm2 stack to that would make it £200 and the performance increase wouldn't scale with price. It would still be cool to see and some propriety system still might get it. There are apus out there with 4c8t already and 2560 vega cores which would benefit but they're not socketed. techgage.com/article/a-look-at-amd-radeon-vega-hbcc/
Holy shit! I can't believe how much you have changed your voice over time! That's crazy! I never had an issue with your accent, so it has gone totally unnoticed to me.
16:31, that's not quite right. Not having to go main memory usually increases the load on the core because the instructions and data are readily available and therefor the lesser the chances of a pipeline stall. This in turn should theoretically result in a reduction of clock speed but in actual fact it probably doesn't make that much of a difference either way.
Terrible timing for this Jim. :P AMD just unveiled there next step. It was obvious they would take it, but they are also going down the 3D stacking route. They had to even without Intel announcing its 3D stacking technology. You know what is expected of you now. No more sleep. :P
At @23:32 what do you mean by R9 2700x? LE: Nvm I think it was just a typo. For a second I actually believed there was a new processor I didn't know about :D
Why AMD is sandbagging? Simple, no hype pre launch is good thing. Remember bulldozer, it was not bad CPU, but hype was way too high. It was better to blow away competition on lunch.
@Chiriac Puiu It go well against first gen i7 and little behind second gen after which progress basically stopped. I have both i7-920 and fx8350, so can tell you that in real world single/dual threaded engineering applications bulldozer run why smoother and bit faster then overclocked first gen i7.
@@theonetruelenny9883 cpu.userbenchmark.com/Compare/Intel-Core-i7-2700K-vs-AMD-FX-8150/1985vs2006 2/3 performance for $332 vs $245 7/10 price. Is it that bad? It is if all you need highest benchmark score, for the rest it is good value for money.
I always appreciate your analysis. I think you've been on the right track this entire time. Navi... I think AMD is having problems because of the size of the die. Nothing more. I fully expect the moment AMD can properly chiplet their GPU's they'll be on a far stronger footing even WITH the limitations of chiplet design on their GPU software. Software can be reengineered well enough, even if AMD's efforts haven't been at the best there. As far as intel, I'm praying that they can produce one heck of a GPU. Because 1) Intel Integrated Graphics sucks and people who have to use intel because there's no decent competition (such as mobile) deserve better and 2) someone's got to compete with nVidia. And Intel taking nVidia off their game will force AMD to redouble their efforts. Because once nVidia mindshare starts dropping that's an opening for AMD to retake some of that for themselves. As for your audio quality from your earlier recordings... Sounds to me you've learned to enunciate a bit better and got better audio equipment and acoustics where you do your recording. You'd be shocked how the room acoustics and audio equipment can really change the way people sound.
Great video Jim! Just a comment for those who don't know. As voltage increases so does current and power is voltage times current so the real power usage increase is far more dramatic than implied by that voltage graph for those who don't understand how voltage effects current. Being that this isn't a Linus Tech Tips video I assume this doesn't really need to be mentioned though.
That's common knowledge for middle school graduated people. If they don't know that, they must be doing something other than learning while in school. 😁
Thank you for your insight on everything that you do. I love your reviews, and the knowledge that you bring to the tech world. I'm glad I got to your patreon discord as well!
Hmm. You do a pretty good job of playing devils advocate to yourself. Might be time to team up with another youtuber that does analysis. Maybe rapid fire debate-like videos being launched on each of your channels. I don't know who that would be, but people like conflict, and the adversarial system is very good at helping others arrive at a conclusion. Hell, even if someone agrees with your analysis, they can go all-in devils advocate like a lawyer. I like that you tested the 2+2 and 4+0 core configuration. I was wondering in a previous video how that would affect performance in different workloads. Since that could be a method of segmentation. (This chip has 8+0 and is $110, this one has 4+4 and is $100 etc).
I waited 6 months after zen1 launched before picking up a R7 1700. Wasn't a high-clocking sample so I built a new system around it for my dad. Jumped early on the 2700X and am much happier now. With 7nm heat density, having dual chiplets, and likely a bucketload of binning I can't help but wonder if your average 12 core part will be the better performer for the cooling and for the money. I can't see those halo high clockspeed 16 core chips being $300. I'll try and wait until the end of the year to make a decision to upgrade (for fun) or to hold off until the next refresh.
It's so strange, hearing your voice from back then. It sounds so different, and yet, I didn't even notice how much it's changed over the years. Talk about a flash from the past, that was certainly quick trip down memory lane. I have a feeling we'll get around to mentioning this specific time and video sometime in the future, maybe. Anyways, cheers mate.
I didn't realise how squeaky you used to be haha, how times have changed! Great video mate, considering how hard it is to say if it's gonna suck right now, you made some valid points :)
AdoredTV : Cinebench R15 has always favored Intel cpu-s since it detects AMD FX 8350 as 4 core 8 thread cpu which is just not a proper detection :). I remember reading somewhere around 2013-2014 that Intel made a donation in developing Cinebench back then....and I am pretty sure that they made them put in a compiler to hinder AMD cpu-s like Intel did in many small time companies that developed bechmarking tools. I remember also when Ryzen was hit with - 30 % performance degradation switching from CPU-Z v1.78 to v1.79 while Intel cpu-s stayed pretty much the same despite the fact that KabyLake was an older revision of Intels architecture at the time. Any common sense used here hints to another of Intels anti-competitive tactics using "donations" to CPU-Z developers in exchange them putting in a compiler to hurt AMDs performance yet once again. So to say that Cinebench R15 favors AMD cpu-s is an illiterate statement because Cinebench R15 is a benchmark that is older than ZEN architecture and is completely nonupdated since it was released and therefore cannot be anticipating any kind of ZEN performance as a favored one.
I make templates for Davinci Resolve and what AMD gave me is 8 core 16 thread cpu for 200$. I could only dream of that 3 years ago! Also RX 580 8GB for 120$? Yes! Thank you, AMD.
Back then when I wasn't subbed to this channel, I watched occasionally one of your videos from time to time, but man, your accent did actually change drastically since the Polaris vs Maxwell video.
well, der8auer sounded pretty positive about it, right? i bet he knows a couple of guys with some scoops too, seeing what he does, and gets cpus shipped to him 6 months before release and the such.
Wow the voice difference is like a Scott and an Irish of a difference lol. Also why is everyone complaining about the time? It's late afternoon for me so perfect timing. :-)
I had a real nightmare trying to render this video, took half a day and I finally had to cut it at the problem area 20:23 which is why there's a little repeat. ;)
Keep on doing awesome content Jim!
Love your videos
Maybe you need a 64 core Rome CPU to render it
@@Knowbody42Nah 2950x and R VII are the shit at the moment when it comes to Adobe at the moment , with some plugins you need that much VRAM.
@AdoredTV Seems to be a problem with Adobe. L1T has a video on the subject.
Bruh... Ive watched every single video since that original polaris video sequentially. And i never once noticed a shift in your accent since i watch videos right after upload. I'm shook at your old accent.
This made my day. keep up the good work.
Ya I didn't realize I had been following him since the very beginning of his tech analysis. I remember finding that video and being impressed with the analysis and I have watched every video since. Adoredtv and Gamers Nexus are far and away the best tech channels on this site!
Same here
Same here also!
How time has passed
Yeah, didn't realise I'd been a viewer for that long
My 3rd daughter was born today...and Jim posts a video on zen... Good day😎
Ayyy congrats!
Congratulations Kirk
Congrats my guy!
Three daughters, yikes. Stop eating so much soy.
Grats dude!
Your voice in that old video sounds like a dwarf from The Witcher 3 xD awesome ^^
I think that Jim got too much helium or something.
@@CaveyMoth Yep I used to speak in a high pitch because I thought my voice was too deep for most to handle it. Even now I probably speak at a slightly higher pitch than my natural voice.
@@adoredtv Whoa, so you weren't speaking in your normal voice? That's fascinating. My voice is low AF, too, and people have trouble understanding it in real life.
@@CaveyMoth I think I'm easier to understand now, also my pacing has improved a lot as well. Been living in Sweden for 3 years so I have to try to make myself understood on two fronts. ;)
@@adoredtv You live in Sweden?! :O I do too! I'm Swedish! Pratar du svenska? Hur låter det?! Kan du visa hur det låter i någon video?
14:04 - Actually it's probably running mostly out of L2/L3, because AMD is using a more complicated version of the test which should not fit into L1, namely the sphfract scene at a much higher resolution and a lot of oversampling. It may well be accessing main memory too, I'm not sure; on SGIs I deliberately ran C-ray/sphfract at higher resolutions and with high oversampling in order to ensure the test would hit main RAM to at least some degree, but on these modern architectures with all the caching and other stuff going on, grud knows what Rome is doing here, but it sure as heck isn't dependent on any relevant core to core communication. Presumably there's a management thread which collates the scan line render results back from the separate threads, but the returned data chunks are pretty small, just a few K even at a high resolution, so inter-core bandwidth is not a factor. My guess is, much like the tiny "scene" test benefits from residing entirely in L1, the sphfract test benefits on modern CPUs from being able to sit mainly in L2/L3, which these days is very fast. On SGIs one could use precise monitoring tools to discern what the CPU was doing, but I guess x86 doesn't have this (I don't know, beyond my knowledge).
17:39 - The "scene" test definitely won't touch it, but sphfract at high res, etc. probably does (honestly not sure tbh). It's definitely hitting L2, but it could be that the way its using L3 (if at all) is still very favourable for the demo.
Jim, thankyou so much for highlighting my comments on your earlier video, I'm glad that people will likely now have a better understanding of the nature and limitations of C-ray. It's an appealing test for AMD because it scales so well with threads (in a manner which the old CB R15 does not), but it's far from a real-world scenario, especially for rendering. For example, someone at a major US movie company told me that for their modern productions, a single rendered frame may involve pulling in many tens or even hundreds of GB of data over their SAN (hence the rendering on the CPU cores themselves involves a lot of data and thus accessing main memory), which means bandwidth and latency on their renderfarm is important, factors C-ray doesn't test at all. A different movie company in the UK told me their SAN can do about 10GB/sec, performance that is absolutely essential now they are frequently working with uncompressed 8K (can you imagine the memory demands of that? The guy told me they're about to move up to 48GB Quadro RTX 8000 cards because the 24GB of their existing Quadro M6000s is no longer enough).
There's lies, damned lies, and statistics. Or as my old stats book says, people use statistics as a drunk uses a lamp post, for support rather than illumination. C-ray is *interesting* (that's why John wrote it), but my jaw completely hit the floor when I watched the Rome demo and there it was on the screen. That was like... Bugatti promoting their latest Veyron based on how fast one could fill the petrol tank. :D
I did try to contact AMD to ask for more details of exactly how they ran their test, since disclosure of the precise compile command used to create the binary is supposed to be part of the test process (in order to be sure there's been no cheating), but I was unable to get a response (I'm a comparative nobody in the x86 space). I even added a new Test 5 to my C-ray page to match what I gather is the settings they used for the test:
www.sgidepot.co.uk/c-ray.html
but I'm loathe to flesh out the currently empty table with any entries until I can be certain the settings are correct. Jim, if you have any contacts at AMD, can you give them a nudge? I'd love to hear from them, would be great to have Rome in the #1 spot and see how things pan out from there. 8)
What's hillarious about all this though is that on the one hand if AMD keeps using C-ray in its PR then Intel will copy them and use it too (and where that rabbit hole leads is anyone's guess; is it believable that neither side will ever try to cheat?), while on the other hand as long as they do use the test then by definition it cannot be used to promote whatever advantages Zen2 may have over Intel a la improved AVX performance. Why didn't they use CB R20? Perhaps because it incorporates Intel's raytracing engine and using Win10 might not be optimal on CPUs with as many cores as Rome (assuming it's possible to use Win10 on Rome at all atm). Hopefully, those to whom Rome may be appealing (as with any CPU) will be more discerning in their buying decisions and wait for proper relevant reviews that correctly reflect their intended workload.
Thanks!
Ian.
I guess C-Ray would also be a test that would scale well on multi-socket systems?
So...you're literally saying that AMD's Zen is Bugatti Veyron of CPU platforms for much less money than Inturd. Got it.
Tank you very much for publicly and openly confirming/clarifying that Zen absolutely BTFOs Intrash on all fields while staying much cheaper and way more efficient at the same time (and also having great forward/backward compatibility).
P.S.
>Why didn't they use CB R20?
Zen's THREADRIPPER (not even Epyc) already completely BTFOd Inturd's overpriced garbage CPoos in Cinebench R20, according to latest tests. So there's that. See here for example: www.imagebam.com/image/d6137d1167288784.
Yes, as you can clearly see by that pic, Zen ALREADY (not even in it's much more improved Zen 2 state, but "mere" Zen+) utterly destroys *FOUR* extremely overpriced Xeons *in a 4S* and got very close to owning TWO *most expensive* "platinum" Inturds in a 2S configuration, and that's in a heavily Intel-biased benchmark that was made SPECIFICALLY only with one sole purpose of making Intel """look good""" in comparison with AMD's Zen on the worldwide scene. Irony is T H I C C, lol.
@@defeqel6537 Yes, hence the 32-CPU SGI Origin3K results on my C-ray page, along with my own 24-CPU POWER Challenge. Just as cores don't need to talk to each other much for these tests, sockets therefore don't either, except for returning render results to whichever core holds the management thread (but the data is miniscule).
@David haldkuk I think perhaps you're looking at it too much from the perspective of how complexity works in real-time 3D scenarios such as games, ie. more polygons means lower performance. For C-ray, one can make the test a lot more complicated merely by increasing the output resolution and using higher oversampling. From what I've been able to find so far, AMD for its Rome test used the sphfract scene at 4K res with 8x oversampling, so it isn't even really that complex a configuration, and from what Jim says it could well be that AMD chose settings which would ensure the test would remain within L2/L3 (there's no "standard" C-ray test, so they can choose whatever they like).
For something tougher one could move up to 8K with 16x oversampling, but I don't know if the changes in relative performance would be that useful. Yes one could create a newer scene file with many more objects and surfaces, but I don't know if that would increase the compute complexity in a manner that's any different to simply rendering at a higher resolution and/or deeper oversampling level. Worse, the longer runtime might allow people to infer that the test is more relevant somehow to real world performance, when it really isn't. This is why, with SGIs, I was interested in comparing how a popular benchmark scene in Maya differed so greatly to a real-world scene rendered in Alias, which means the renderers are different aswell (the Alias scene came from a digital artist who designed magazine adverts, large advertising billboard posters, etc.):
www.sgidepot.co.uk/perfcomp_RENDER4_maya1.html
www.sgidepot.co.uk/perfcomp_RENDER3_alias1.html
The Maya test is very simple and (on the same CPU arch) scales pretty much just with clock speed, whereas the Alias test is sensitive to system architecture and especially L2 cache size, eg. the dual-R14K/600MHz Octane2 is only slightly faster than a single-CPU R16K/800MHz Fuel (the latter has 2x more L2, higher mem bw and lower mem latency). And crucially, the Alias test very much reflects the kind of real daily work the artist in question has to deal with, so it's genuinely useful (or was back then); the guy does the same sort of work today, but of course he's long since moved onto PCs:
www.johnharwood.com/
Teasing apart these issues has always been messy, as Jim's excellent videos convey. Sometimes companies do not want to delve into exactly what's going on in a public manner too deeply as it might reveal issues with their systems which don't look so good. Jim shows how the power efficiency curves of Zen/Zen2 may be related to the choice of test settings used by AMD to present their products in the most +ve light possible (I guess that's marketing; people do the same thing, the clothes we wear, hair, makeup, jewelry etc., all designed to convey something we want to project that's relevant to the context, eg. romantic appeal. professional presence, imposing military force, eco tree hugger, etc.) Heck, the entire plant and animal world for half a billion years has been an exercise in deceptive advertising. :D These days though, with modern social media, etc., trying to spin things in such a manner might be counter productive. The kind of people who would be interested in Rome are less likely to be fooled by such shallow practices, ditto (one would hope) the enthusiasts interested in a 16-core Ryzen. The danger is AMD over hypes the product but then delivers disappointment.
In the context of SGIs I looked into an example of this sort of thing. After SGI released their final InfiniteReality4 graphics product, Onyx350/Onyx3900 gfx supercomputers and their Tezro workstation, a natural question to ask was, how good would these be products be with a maximum spec for running Discreet Inferno or Flame? How much better than existing configurations with lesser CPUs, or older SGIs with earlier architectures? eg. a quad-1GHz Tezro V12, likewise the equivalent node boards stuffed into the Onyx systems with V12 or IR4 gfx (max 32 CPUs for Onyx350, max 1024 CPUs for Onyx3900). None of SGI's PR contained this information, which I thought was a bit weird. Discreet wasn't talking about it either. Thus, with the help of some key people I was able to run some proper tests:
www.sgidepot.co.uk/perfcomp_DISCREET1_FlameTests.html
I never got round to testing Inferno, but the conclusion for Flame on SGIs was startling: for various real-world tasks running on systems using V12 gfx, performance can be severely held back by the V12's 128MB VRAM limit (SGI should have increased the VRAM for V12 in O3K-class systems to at least 512MB, preferably 1GB, but perhaps by then they couldn't afford to). It meant that having much faster CPU options such as the quad-1GHz barely made any difference in many cases, the CPUs were waiting on the V12 to get a move on. IR4 (released in 2002 btw) running Inferno would not suffer from this because it has a lot more VRAM (10GB, with 1GB texture RAM). Point being, even though SGI risked annoying customers by potentially selling them products or upgrades that may not provide a useful gain in performance, the marketing and PR still did it anyway (at least in the case of those using Flame, which for SGI was a critical market by then).
Epicurus said 2300 years ago that advertising was the greatest evil. Nothing has changed, marketing/PR still poses tech products in the best light if it can, regardless of whether doing so might make the product designers and engineers want to tear their hair out in frustration.
Ian.
@@Kawayolnyo :D My Bugatti analogy was just to convey the idea that AMD boasting about C-ray isn't telling relevant potential customers anything they want to know. I could just as easily have referenced something more mundane, like promoting the latest TV by boasting about the number of buttons on the remote control. :) It's a mismatching of concepts, like the Suez Crisis popping out for a bun (and I'll nick that line from Adams as often as possible). The kind of customers who might be interested in Rome would I am certain not care about C-ray numbers.
I'm no expert on the whole Intel/AMD competitive position in Enterprise btw, not my field. Also remember that TCO is often more important than raw hw performance, which includes other aspects such as system support, maintenance, staff salaries, software licensing and 3rd party sw optimisation, etc. A Cinebench score, just like C-ray, does not for this class of hw tell one anything useful in terms of making a buying decision. It makes for great PR and headlines, but I doubt it helps much with relevant buyers who are more likely to be interested in directly representative benchmarks, or indeed inhouse testing on loan systems.
Ian.
Well, AMD may show only their best at presentations... But isn't it also true that Intel only show their FAKEST at presentations? (Industrial Chillers w/ pre-overclocked no-show CPUs, VLC video playback of "live" iGPU gameplay etc)? *LOL*
Glad you mentioned that tho. :)
LOL, that cooling system was insane. Intel should have been punished for that >:(
This is very true! But you should never measure your own success based on the failures of others! Kinda the same as saying that if you are fleeing from a bear, you only need to run faster than the slowest person ;) I'd prefer seeing AMD run faster than the fastest person, that would be something to brag about!
Be sure to also check out Intel's GRID Autosport (or was it GRID 2?) iGPU demonstration... That was actually a VLC video playback from a few years back. The chiller fiasco may have been disingenuous as fuck, but the fake iGPU demo was a flat out LIE. And there are other examples.
Let's not ever forget these things. Intel is the textbook definition of cronyism and failure of free markets. We need better technology leaders than these crooks.
LOL I think they should be sued for lying like that.
Please go somewhere with your whataboutism.
Jim, another excellent video. I have had multiple people on my channel call me crazy for suggesting 5GHz chips, and state that it's specifically crazy because VII proves 7nm only brings 20% higher clocks. Then I point out that a 20% clockspeed increase over the 2700X would mean a 5.2GHz 3850X.... and they stop arguing lol.
The fact is it is conservative to expect 4.5GHz 16-core parts from AMD, and if you are optimistic honestly the sky is the limit on this launch (but I recommend conservatism). Like you say Jim - the most likely downside to Zen 2 is probably a segmented roll-out of parts over all of 2019, but I don't see that as horrible at all. Oh, and yeah wow your old accent is hilarious. Cheers!
Cheers bud.
GF's 7nm was supposed to have a pretty consistent frequency-power curve up to well over 5GHz, but it was also a superior process, so who knows where the limit is with TSMC's 7nm.
Thanks, enjoyed this very much! Interesting times indeed and looking forward to your reviews and analysis when the products hit the market!
Cheers bud.
Thank you for another extremely good video. Some thoughts:
1) Those voltage curves are terrible for Polaris, no wonder people think it is such a power hog. Just doing some napkin math here, but power consumption typically scales linearly with clockspeed and the square of voltage. So looking at relative power consumption we have
Efficient Polaris = 850Mhz x 0.815V x 0.815V = 565 relative power consumption
Shipping Polaris = 1266Mhz x 1.120V x 1.120V = 1588 relative power consumption
1266Mhz / 850Mhz = 1.489 and 1588/565 = 2.81. So AMD got 50% higher clocks for close to three times the power consumption. I think that shows just how hard they are pushing Polaris past it's efficiency point. (dunno if this is right, someone correct me if it is not).
2) It sounds like the 1.4GHz 64core epyc couldn't possibly have been what they showed at next horyzen, unless they somehow got truly ridiculous ipc gains (which I doubt). So maybe the 1.4Ghz is a lower end model designed for efficiency and very low clocks, and there will be a 64 core part with much higher clock speeds, or AMD chose a particularly well clocking chip for the event and overclocked it to frequencies shipping parts won't have.
3) It wouldn't surprise me if TSMC 7nm has some voltage wall somewhere, so ryzen would clock well up to a point, and then just refuse to budge without extreme measures. I am saying this because it seems to me that more modern processes are that way, clocking well up to a point and then stopping in their tracks, whereas older an older process would just bump the voltage slightly for every clock bump. Global foundries 14nm stops at ~4Ghz and Intel 14nm++ at ~5.1 GHz, for example. So I think ryzen 3000 will have a similar clock wall, but if it occurs at 4.3Ghz or 5.3Ghz, time will tell. (again, people more knowledgeable than me, correct me if this is wrong).
1)The Equation you used comes from dennard scalling which due to complexity of modern MOSFET scaling is no longer valid.
More at:(en.wikipedia.org/wiki/Dennard_scaling)
@@master_andreas1202 I know that dennard scaling broke down a little over a decade ago, so smaller transistors are not necessarily more power efficient or faster. It basically takes away Moore's law's teeth. But why would that invalidate P = CV^2f?
Is the voltage wall a FinFET thing (less of an issue for planar transistors) or increases with shrinking node side?
@@coopergates9680 I honestly do not know. It could very well be finfets causing the wall. The only thing I know for certain is that the voltage/frequency curve keeps getting steeper.
@@thomashayes1285 I almost think it might be nice to plot max stable clock against voltage (swap the axes), so that users know beyond what point increasing voltage yields insignificant or insufficient potential clock speed increases. The 'wall' would then be a horizontal line.
That polaris power video was the first video of yours I saw. I was lucky enough to come across your channel just as you changed from a lets play channel. Its been great to see you improve the quality of your channel to where it is now. Keep up the good work Jim.
Same here. The improvement over time in technical analysis from Jim is great. I really appreciate what this channel has become!
I've been around since then as well. I get excited when there is a new video from Jim, he holds nothing back when it comes to his criticism of computer hardware.
Jim and I are both GenXers who have grown up with all of this wizardry that is modern computer chips. Having 2MB of memory in your setup usually set you back $1000.
Haven't finished watching the video but baby Jim's voice is adorable
*khm* Adored
The reason they picked an 8C is straightforward, they wanted to do an apples-to-apples comparison to the 9900K. A clocked-down 16C chip would obviously be faster in MT performance, but everyone knows that, so that wouldn't have been very impressive. The handwave here is in the selection of Cinebench as the benchmark, while using power-limited chips. Like the Polaris demo, they are clocking down to their efficiency point, while using a benchmark in which a 2700X already outperforms the 9900K's IPC. It's not an unfair test, but it is one where they are putting their best foot forward.
16 core against a 9900K wouldn't be as telling about the technology.
AMD's point is that they're winning on IPC, cores and possibly clock speed, certainly power. If they beat them down with a 12 core we wouldn't know as much about single threaded performance.
Apples to apples gave us far more info than a 16 core vs 8 core shutout would have.
@@glenwaldrop8166 Its hard to tell the performance increase without clock speed.
@Chiriac Puiu sure it is enough to make ppl exited but not to be sure about the speed of the rest cpus.
As far as everybody knows right now that may be the best they have in terms of clockspeed "that matters for gaming alot" and the rest may just be more cores lower clock most likely.
Not sure on prices and the performance of the full cpu range to be to exited since last I checked the word was about radeon vii that will be around 400$ for the same performance as a 2080... the performance ok but whit less features and more heat and noise , but the price... well lets just say its not what ppl wanted.
I just hope they don't do the same on the cpu side and keep the prices decent.
@Chiriac Puiu Yes but my problem whit it is that we do not know the clockspeed and ipc improvements to make any sort of claim about performance.
Until reviews whit benchmarks and official prices its kind of pointless to make claims , unless u work for amd and know something the rest don't I'l believe it when it comes to market.
Man I couldn't wait for another one of your videos. Honestly alot of what you have I find with heavy research (your leaks are unparalleled though) but I just love how you explain the information. It reinforces and gives me a more complete understanding of the subjects I was studying. Okay guys stop reading this and get back to the video!
Asus have started releasing BIOS updates for Ryzen 3000.
dude i'd love to read about that, any links to it ?
@@pec1739 Just 6 hours ago someone wrote an article. www.google.com/amp/s/wccftech.com/amd-ryzen-3000-valhalla-cpus-x370-x470-motherboard-bios-support/amp/
@@Nianfur thanks man !
CROSSHAIR VI HERO BIOS 6808
Update AGESA 0070 for the upcoming processors and improve CPU compatibility.
ASUS strongly recommends installing AMD chipset driver 18.50.16 or later before updating BIOS.
So i have updated my bios so yeah
As always Jim, another video that was well put together, and one I was especially looking forward to. Since the whole Ryzen 3000/Zen 2 hype train started rolling, the majority had been very optimistic. But bringing in some healthy skepticism is necessary I'd say. Nonetheless, it looks like even in the worst case scenario, things don't look too bad. I wouldn't be disappointed seeing the clock speeds taper off at around 4.6GHz or so, with an IPC boost of around 10%. That was my initial expectation anyway. Therefore even at their worst, they'll still be ahead of Intel. So as long as they keep the prices right, I can see Ryzen 3000 still being a success.
I can see that you did have to dig down deep to find the concerning parts of everything that has been shown. I'm just annoyed now that we've heard so much, and in such variation, but with no clear release date in sight. I'm getting really anxious about it. I want to trust they'll use their better judgment to not botch the launch and hopefully they release a 12C/24T CPU in the first line-up.
SO even if we asume the worst its still pretty awesome - sounds like a deal to me! :)
Thanks for that outstandig research and analysis.
the worst is a decent upgrade not bad but not earth shattering the best is another godzilla set loose all over the industry that people are going to struggle to tame.
@@sharkexpert12 the most important things is still it's awesome efficienty at lower clockspeeds.
Clockspeed does not matter all that much in enterprise situations. The big xeons have been in the 2ghz range for forever because power usage and heat generation are far more important.
And amd succeding in the datacenter is far far more important then succeding in the enthausiast gaming space for their survival. It is possible that zen2 at the very high end could again be a disapointment.
In the end it will be still be a win for amd as marketshare in the datacenter should be their main priority anyway.
Yep, been holding out for some time now waiting on the Zen 2 chips to upgrade my ole 4790K.
last time i was this early intel i7 had 4 cores
My Intel CPU has 4 cores.../cry
But my AMD CPU has 6^^
So yeasterday?
Last time i was this early Intel glued 2 cores together.
Last time I was this early Intel had a water chiller
Great video. The only bad part is knowing that I now have to wait 7 days or so for the next one. Keep up the good work! :D
34:16 Would it? In Zen/Zen+, the speed of Infinity Fabric is tied to the memory clock. Isn't Zen 2 supposed to decouple that link? So isn't Infinity Fabric running as fast as it can regardless of memory clock speed on Zen 2? Could they have used that memory speed to demonstrate that fact?
Still unknown. ;)
I love my 1800X but if the Zen 2 reports are true AMD might be forcing me to upgrade. I cannot wait.
i have a 1700x and im upgrading its been a few years and its time regardless of how good zen 2 will be
@@pig666eon8 I have the 1700 (non x), time to upgrade.
I'll keep my 1700 for a good while. Nothing wrong with it :) (Locked at 3.7)
@ yea mine locked to 3.7 too. can push but nahh. it works, nothing wrong with it.
They're going to twist your rubber arm.
BTW the biggest problem of zen2 (Ryzen 3000) is and most likely will be availability, not only for the CPU´s but the boards too!
Thats also another reason to delay the 16 Core 1-3 Months.
The 3600x and below can use the old AM4 boards. it'll be a problem on the higher end CPUs though.
There might be the possibility of the 12c/24t chips running in the prior boards as well. It all depends on the power draw. My guess is the 12c CPU will be about the same usage as the old 8c part, at worst.
@@TheCgOrion i saw somewhere they said that the ryzen 3000 wont run on b350. But will run on X370 and up.
@@NBWDOUGHBOY Nice. Thank you for the information. My Ryzen 7 is on X370, so hopefully I'll be able to upgrade it in the future.
@@TheCgOrion Yeah, that would depend on the exact model of board, and how good the power delivery system is.
For reasons I don't understand, apparently MSI seems to offer better power delivery with their mid-ranged AM4 boards, and at good prices. I've seen a few different reviews which talk about this, but it's typically not easy to get good info about power-delivery, as motherboard makers usually don't make this clear, with specs that are often misleading at best, if not outright lies. Gigabyte has even gone as far as to load up some of their boards with components it doesn't need to make it LOOK like the power delivery system is better, without actually providing a better than average power system.
Cinebench does benefit Zen because it does mostly run out of the L1/L2 cache. That has been obvious since Zen 1. The things that you do not mention in this video and maybe did not consider or fully understand are the following:
L1/L2 and L3 cache on Zen are all running at the CPU Clock speed, not the speed of the installed system memory. The L1/L2 and L3 are all limited to their own CCX module within the die and communication between CCX Modules is reliant on the Infinity Fabric "on die network" which is of a hub and spoke design that has equal speed connections between a central switch, the two CCX modules, the Memory Controller and PCIE bus with the maximum connection being limited to the max speed of dual channel memory installed in the system. Likewise a single CCX module also has the same bandwidth available to it that the dual channels of memory have. As with an office Lan, when too many users want access to a central server the network connection on a hub and spoke network that connects the server to the switch becomes a bottleneck. That is the reason why typically a central server might be connected to the switch with a 10GB/s link and the workstations all run at 1GB/s.
The Infinity Fabric that transports the data between cores and between the system memory is performing the same functions as the Intel Ring Bus. However ring bus is more like a token Ring network where every device is guaranteed access to the network and is clocked at roughly double the rate of the Infinity Fabric to allow throughput on the ring to exceeds the capacity of the system memory itself and the PCIe bus. Im sure that power considerations were the likely reason for the compromised design
Like with Intel, faster Ram is beneficial to a small extent. The major benefits to Zen though, is the increased throughput on the IF because of the higher frequencies allow more data transfers per second and push the bottleneck to system memory that inherent to the zen1/1+ architecture higher up the cpu performance curve. as demonstrated by teh 9900K, The memory chips themselves are not the bottleneck it is the shared transport in between cpu cores, pcie devices and memory. The Ringbus doesn't have the same bottleneck as it allows roughly twice the throughput between cores, PCIe controller and memory controller to start with.
You can test it yourself on an Intel system by setting the cache multiplier to half of what the stock settings are and try playing a 1080p game. The 9900K will play games with performance more like a 2700X.
While I do not have any hard facts other than what has been discussed about Zen 2 here, based on my knowledge of Zen 1 architecture and its inherent design limitations together with what we have seen so far in AMD demos of Zen 2. I am pretty confident that we will see:
1. Zen 2 CCX modules now contain 8 cores with the L1/L2 and L3 doubling in size compared to Zen1 4 core CCX modules, the L3 Cache will be shared by all 8 cores.
2. The infinity Fabric clocked separately from the memory speed, most likely clocked at or near CPU frequency.
3. The IO die will contain some level of L4 cache, that has not yet been disclosed by AMD, that is shared by all the installed CCX modules. The L4 Cache would allow the CPU cores to switch between modules without the need to continually go back and access relatively slow system memory when switching threads between the different modules working something like the 128mb edram cache on broadwell and the Iris pro mobile haswell chips.
I was thinking the same !!! Specialy with L4 cache .
With the infinity fabric , i remember the Tofu 2 in the fujitsu spark 64 fx , they have less pin conections but a faster speed and reduce latency.
@@cristiansalazar6622 I certainly think that the 12 core part coming first makes the most sense. Thread ripper chips are likely to follow Ryzen by some time so a 16 core Ryzen eats into their HEDT business when It doesnt have to be that way right now.
Intel look like they have a 10 core product coming next so a higher IPC 12 core Ryzen, even if the IPC doesn't quite match Intel at single core performance, should compare favorably with the top end mainstream Intel SKU
brad morris This backs up AMDs claim that Zen 2 will be very good for gaming; if your points are accurate, thats all targeting to lower latency which games are so sensitive of. Question: do you think the 8C ryzen cpu will be better for gaming than the 12C? 8C needs only one 8C chip, the 12C will have 6C+6C which may still introduce latency, but the 12C should have more total cache.
Thats important also in terms of comparing to its rival, and still king of gaming Intel, the best 8C ryzen expected to cost around half of the 9900k bringing the value envelope to another level, and intel will be in trouble because singlethread performance is the last crown intel still has. If thats in jeopardy they will really have to grind and innovate and lower prices, which is all we consumers want
@@sergiomadureira9985 the L4 Cache is complete speculation on my part but I do believe that it mitigates some of the core to core latency issues that Zen has demonstrated to date.
Similarly, separately clocking the Infinity fabric even if it is only at say - x1.5 memory frequency will go a long way to provide a more Intel like gaming experience.
Rumored PCIe 4.0 will also help as it suggests that IF bandwidth will double as well.
Getting a 25% IPC gain over zen 1 by more efficiently parallelizing instructions (floating point and integer calcs running in parallel for example) together with a more efficient internal data transport architecture should combine to allow for that possibility.
If I was buying something today and money was no object I would go with a 9900K solution so I don't care about this brand being better than the other. Having said that, I am quietly confident that the 3rd Generation of Ryzen is the one that really pulls everything together and makes a name for the Ryzen series of chips. Thunderbolt 3 going royalty free and usb4 coming also bodes well for including it with AMD platforms.
Competition is a good thing. It pushes innovation which is something that we have not really seen much of for some time
@@sergiomadureira9985 I think that we should all take manufacturer's claims at face value, they want you to get excited and buy their product. I truely hope that they are genuine claims but that will have to wait until the chips can be tested out in the wild.
with regards 8c vs 12 for gaming, I honestly dont know. If AMD learns from the past and if they migitage the deficiencies of teh design then both should perform pretty well. If Jim's leaks are real, then the 12 core looks like it will provide the best single core performance.
The new generation does stand a chance of reducing the latency to close to what Intel is doing now. An 8c will not have to deal with the same dual level thread switch that the current chips do. a 12 core will have the divide in the middle. The Windows scheduler has shown to not be all that smart when it comes to multi die or multi CCX based chips so maybe we will see benefits there as well.
Holy crap, a downclocked Polaris GPU sounds great for a media player PC.
Just get a 2400G
I was thinking similarly but for a CNC machine that I want passive cooling on so it doesn't clog up with sawdust. ;D
@@prototype3a get a 2400G with one of those passive coolers that derbauer showed off
Imagine what Vega could do downclocked, I already known.
I actually limited my Vega 64 a bit on purpose to make it run less hot, they put way too much power into these reference cards. My friend who runs a Vega 56 showed me an undervolting and overclocking guide for Vega and I thought he was smoking something the first time I heard him until I read it. It's one of those blowers and it runs far more stable with the power usage reduced in the Radeon software and doesn't hit it's thermal limit to shut down like my old Nvidia card did often. God Maxwell was a dumpster fire. This card in my experience behaves a lot better than cards I've used in the past. It's a good happy little GPU.
what ever comes, comes, enthusiasm isn't a crime. I still bought a vega 56 and did what you showed in your video on it.....happy as hell with my 1600X and power color red dragon vega card.
this is still the best tech news i watch, probably always will be. the efforts made to bring us this information is outstanding and i think we that watch consistantly know to give thanks to you as an honest as can be given veiw.
I'm optimistic, I think there will be a nice boost in IPC and a decent boost in frequency. add the two together and throw in the tweaks they surely did to the memory controller and it should be very competitive on a per core basis and much better in a price/performance scale.
These are the types of videos that I like. You stepped back from your Primary analysis and looked at the Zen 2 Architecture from a different perspective.
Perhaps not everything is all as it seems at AMD, and this is what will keep the discussion going for us all.
Keep up the great work!
What has made your voice change so much? Your accent is different but also you speak with a much deeper voice, I definitely prefer it now!
Alcohol and cigarettes xD
drugs and sex
People who get professional voice training for broadcasting are taught to speak in a deeper voice, I believe. Of course, I don't think Jim has actually gotten any professional training, not unless it's from one of those inexpensive educational video websites.
He got older.
Different microphones e.t.c
Some people have half-hour TV programmes they watch week-in week-out. I have AdoredTV :P
I'd say most gains outside of the obvious from the 7nm die shrink could be down to the I/O and IF2. There hasn't been much talk about that I/O die and what it may or may not contain as well as any improvements to Infinity Fabric 2 through the switch to 7nm on the Zen Core IFOP's as well as 7-14nm cross compatibility and any performance gains from that and the IFIS links across the CPU package.
Great video again, appreciate your analysis Jim.
FYI the IO die on Matisse is the exact size you get from taking a Zeppelin chip and removing two CCXs. There isn't any space for an L4 cache.
@@flcnfghtr tbf, the fact that it has the same size doesn't mean it has the same content. You would need to do quite a bit of redesign of what is "left" after you remove cores to accommodate this change to chiplet architecture.
We'll see. Release should be a few month from now
Hearing that old recording it shows how much you've changed your accent to make it more understandable. As someone with english 2nd I apreciate that very much, thank you.
AMD may be showing their best hand... but at least they aren't using 1.6kw refrigeration units.... :D
35:30 why the decaped Winbond EPROM?
My biggest concern with Zen 2 is motherboard compatibility with older boards.
x470 should be fine with a bios update at least
If that is your biggest concern in life, you should consider a reevaluation of your life priorities.
@@lancewhitchurch512
In his post.
I'm just stunned by the amount of work you put in your videos -- collecting information from various sources, running benchmarks yourself and doing the analysis, to reveal the these tech companies' business models, and saving us from being fooled. Respect.
Great accent though ;)
🙂 yay, a new release means better discounts on last gen 🐢😍
They are keeping ZEN 2 very tight lipped. There's aspects in the processor design that will not be revealed till days before official launch. That's when people will go WOW.
@@buzzworddujour LOL
@@buzzworddujour more than likely.
Hopefully it's like the Zen launch tight lip. Which resulted on a good WOW and not a bad WOW.
@@gabriellucena6583 What's interesting is that ZEN2 may be a complete design overhaul versus the original ZEN. Really can't wait to see more details on it.
One benchmark that _could_ be L3 heavy is LuxMark with Hotel scene. It is intended to bench raytraced rendering on OpenCL GPUs, but also has a plain C++-on-the-CPU rendering mode. Would've been interesting to see how its samples per second change with 2+0 and 1+1 core arrangements.
2.2ghz bost clock on the 64core epyc cpu seems to be true, a new leak came out. so does this mean that the ipc gains must be massive?
I have the 2400G APU and rarely buy / need the power of a discrete GPU so I'm holding off upgrading until Zen 2 + Navi comes to APUs or AMD release a chip similar to the Intel+Vega used in Hades Canyon.
What a beautiful voice to hear on St. Patrick’s Day
🤔 It's a Scottish accent though, not Irish accent.
The Shape Very different, no doubt. But a Scottish accent is a lot closer to an Irish accent than anything else, so the confusion is understandable. Both Gaelic/Celtic in origin.
- American who is very familiar with the varied accents of 🇬🇧
I would still be happy with an 8 core chip at 4.5ghz, if I can clock it to 4.8ghz+, regardless of efficiency doing a Bonny and Clyde.
I saw somewhere- perhaps even here, though I think it was anandtech- was that the power consumption on threadripper scaled exponentially with utilization due to the infinity fabric power utilization. It mean that as it clocked up, nearly all the power and thermal headroom went into inter core communication and not into increased clocks. It’s why that had such low boost clocks. I wonder if that could continue to be the problem. Is the infinity fabric dooming these otherwise good chips?
8:32 this is what triggers me the most about mainstream benchmarking YT channels who compare CPUs at ultra setting - 60 fps at best and stable 99% GPU utilisation... and than they base the winner on 2-4 fps difference
Which ones?
What's the name of the game in 2:22???
Please why do you upload right when I have to sleep. I have tests tomorrow you know.
Unless he knows you personally, no, he doesn't know, and why should he care? He lives likely in a different timezone and uploads based on what is convenient for him, not you. Now, all seriousness aside, I just have to say, good luck on your tests. :P
@@matilija yomama! :D
What browser extension are you using that shows you subscriber count for each commenter at 13:31 ?
VidIQ
Thanks!
Its hard to think that Zen2 will suck. Intel, and many people basically thought Ryzen will be DOA.
It was not. quite opposite in fact. Block diagram alone of Zen let to believe, that it was unfinished product, an early taste of what they been cooking since 2012. Designing good architecture takes many years, and Ideas which was not incorporated into Zen, and all the tweaks found in the meantime, should be incorporated in Zen2.
My Wallet is ready.
Also @ 20:31, I think you missed the opportunity to bench the Radeon VII with the same "maxed out" setting on a stock or overclocked 9900K, to see IF a supposedly faster gaming CPU could yield the extra 10+ frames per second the Key Note Forza demo showed...
Well, if you have a 9900K at hand that is. But still, I think such a bench could have given a few more PRACTICAL answers to your endeavour.
I don't have a 9900K or I would have. I might try to get hold of one though.
@@adoredtv Figured, awesome nonetheless! Thanks for the reply!
you sounded waaaay younger at 2:00 :))
He sounds like a strapping young lad
3:40 HE'S SOO YOUNG and adorable sounding. lmao
Soooo cool to see how much you've matured as a person along with your content as well. Good shit mate :)
If Intel releases 10 core mainstream CPU then AMD would probably have to launch at least 12 core ones.
Well, Lisa did say in techjournalist interview @ CES 19. "If you look at the evolution of Ryzen, we've always had an advantage in core count."
Nice, another quality video, its really good the way you explain everything so clearly. Thanks.
Jim tries to find a way to be negative about 3xxx Ryzen.
And can't.
I almost feel bad here.
Really looking forward to dropping a 12 core/24 thread "x" series ( 3700x? We think? ), in to mY Crosshair 6 after a bios flash and being a happy camper. (1600x currently).
MrDaChicken we could really feel his effort to be negative when all his senses are going the opposite direction. But I understand why he made this video, he has been accused of being a AMD shill and getting a lot if hate lately (as he talked about in his last video), and wanted to give us a different perspective
Daaamn almost a week passed and youtube didn't generate an english subtitles yet :(
You are a great analyst, excellent content and great channel
Keep it up Jim :)
My theory is that the 1.4/2.2 isn't base and boost, but rather it's boost over base, so 2.2 ghz base and 3.6 ghz boost, or 1.4 ghz over base....I could be wrong, but that could explain things well as 3.6 ghz is ~40% higher clockspeed than 2.2 ghz and combined with the 10-15% ipc improvement it could explain the 60% performance boost....…………..but that's just my theory.
As it being an epyc part a 2.2 turbo is actually pretty acceptable.
Efficienty and heat output are far more important then core clocks.
In the datacenter computations per watt rule and if you can run 3 64c cpu's at 2.2ghz for the same power usage as 2 64c cpu's at 3.6ghz the 2.2ghz cpu's will be the obvious choice.
Lifetime power consumption costs are far higher then initial purchase costs. So spending a bit extra on hardware in order to save big money on power over the lifetime will always make much more financial sense.
Was the accent shift intentional? I remember this video, but I never once noticed a change in your accent in all the videos since.
It could be a different microphone. Or maybe jim has developed his "commentors voice" a little.
@@needausernamesoyeah I think the latter is more likely ... the change wasn't quite an accent change as the speech patterns are pretty much the same... he just uses a much more assertive, deeper tone now.
When I came to this channel for the first time, I thought your voice was weird. After listening to the old video, I have to say, keep up the good work, you have improved so fucking much
hay mate, could you make some historic course about S3 Graphics, IBM CPUs, or about VIA 🤔
as always, your content is da best
and cyrix
I could get behind that
Hey Jim, just wanted to let you know that this was a really interesting and well done video. It's very interesting to hear a more meta discussion comparing tactics and "best case" scenarios and the like for what AMD has done previously vs with Zen 2. I have to wonder though, even with the 7nm boost in clock speed, do you REALLY think that they can do 5ghz on 16 cores on desktop? I'm a hopeful person but I have a hard time seeing them getting even 12 cores up to 4.8 or 4.9. 5ghz just seems like too much of an ideal situation to get without cranking up voltage, which we know from ryzen 1 and 2 hits a wall really fast with clock speeds (4.1 on ryzen 1, and like 4.4 on ryzen 2).
Anyways, I lloved your analysis and thoughts, and I can't wait to upgrade my 1600 to a shiny new 12 core zen 2 CPU later this year. Cheers!!
The heat shouldn´t be the deal breaker here, even if its like 20C° hotter than zen+ its still only as hot as an intel CPU... so yeah we shouldnt have worries there! :)
Ah and if it is like 40C° hotter (which it isnt) they would just let only 1-2-4-6 of 8 Cores boost high up.
? It would be a deal breaker for me. I didn't buy new i7 or i9 as those chips are freaking barbeques. I tried Acer Nitro laptop, and after 1 hour of gaming 4 core 8 thread Intel chip in it reached 94 degrees Celsius! I don't want new Ryzen CPUs to turn into that.
to be fair intel cpus can run at 100c for a long time with no issue and i dont know if the same can be said for ryzen
@@TheBilaras97 perhaps, but I'm still uncomfortable with those temps. Never had parts run hotter than 80 degrees in my systems.
@@myroslav6873 for some reason people think 80c is the max,despite having laptos with intel cpus running at 100c amd throttling for years with no problem.I also remember a test someone did(cant remember)who run a desktop intel cpu at 100c for a year and had no problem at all,there is a reason intel puts 100c the point of throttling and not 80c they probably know what they are doing,i think its mostly in your head since older chips needed to run colder and people have continued with the same mentality
I'm not surprised at the clock speed for Rome on the mega-core version. That's normal even for Intel's Xeons. Rome is for servers where power/performance is the main purchasing criteria. I don't think Zen 2 in the desktop Ryzens will disappoint unless people have unrealistic expectations. As usual, pricing has a big impact on success.
Dubious headline mate, very dubious. AMD have all the info on their competition, so they're taking time while stocks run low... to make sure their new tech kicks ass. :)
AMD is definitely not in a hurry.
Holy cow, your Polaris video was the first video I ever watched from you, I had no idea it was your first video after switching from a let's play channel. Also you do sound a lot different now, I just never noticed because you've changed over time.
And of course, amazing video as always. :)
Hey guys a dumb question right here:
Would it be possible to use HBM on an APU with a Ryzen Chiplet and navi graphic?
The coolest part of that would be if you could use the HBM as a L4 Cache/RAM, does anyone know if thats technically feasible?
If that would work that would be a crazy good product!
@@cheescake98 Not with appropriate prefetching. I mean yeah it's no L3 (in speed I mean) but it's so much better than RAM that it's worthwhile, especially for APUs which are very constrained by memory bandwidth (it's only not all that apparent because they are so low end)
Re: use as RAM adjunct, could attach via PCIe but it would take up so many lanes to be effective. Better to use a dedicated interface. Maybe we'll go back to co-processors sockets haha (except for memory)
They are doing some research about 3D stacking and doing something similar to what you said. But there's not much about it, just rumors and little to no info news articles
Technically possible. Yes certainly a single hbm2 stack of 4GB would be an immense high bandwidth cache (HBC) and bring allot of improvements to be igpu performance (which is going to be needed with better navi igpu being ALLOT faster then traditional small gcn igpus currently used). Plus HSA where the igpu is used as a floating point unit accelerator with data being flagged by cpu and gpu for use by both would be awesome.
We've already seen tiered storage become a thing. We now have l1 cache then l2 then l3 cache then a big gap to system ram then optane caches and/or ssd then a big gap to hdds.
The industry is constantly trying to fill the gaps at the moment cost effectively and going from a 16mb l3 cache then to 16gb of ram...a 4gb HBC does that very well.
The big problem is cost.
A hbm2 stack is close to the same size as an 8 core chiplet so costs the same to make. navi based apus will be monolithic style single dies like current apus are with 8 cores in one half of the die and navi20 gpu being on the other side and a price of £150. to add a hbm2 stack to that would make it £200 and the performance increase wouldn't scale with price.
It would still be cool to see and some propriety system still might get it. There are apus out there with 4c8t already and 2560 vega cores which would benefit but they're not socketed.
techgage.com/article/a-look-at-amd-radeon-vega-hbcc/
"Positivity and negativity are fluid."
Good, now I can feel better having an AMD-Nvidia build again.
lol the AMD rebellion marketing is accurate though. a rebellion from intel and nvidia empires
Holy shit! I can't believe how much you have changed your voice over time! That's crazy! I never had an issue with your accent, so it has gone totally unnoticed to me.
Now amd managed to get 50%ipc gain by zen so theres nothing impossible for amd
16:31, that's not quite right. Not having to go main memory usually increases the load on the core because the instructions and data are readily available and therefor the lesser the chances of a pipeline stall. This in turn should theoretically result in a reduction of clock speed but in actual fact it probably doesn't make that much of a difference either way.
Terrible timing for this Jim. :P
AMD just unveiled there next step. It was obvious they would take it, but they are also going down the 3D stacking route. They had to even without Intel announcing its 3D stacking technology.
You know what is expected of you now. No more sleep. :P
AMD planning on 3D stacking?
source pls
Sleep? What's that...
@@tobiassteindl2308 www.tomshardware.com/news/amd-3d-memory-stacking-dram,38838.html
At @23:32 what do you mean by R9 2700x?
LE: Nvm I think it was just a typo. For a second I actually believed there was a new processor I didn't know about :D
AMD has great products,.. and some of the worst marketing know to humanity.
Sony ,Panasonic and Philip too
one question, jim
who doesnt sandbag?
Why AMD is sandbagging? Simple, no hype pre launch is good thing. Remember bulldozer, it was not bad CPU, but hype was way too high. It was better to blow away competition on lunch.
@Chiriac Puiu It go well against first gen i7 and little behind second gen after which progress basically stopped. I have both i7-920 and fx8350, so can tell you that in real world single/dual threaded engineering applications bulldozer run why smoother and bit faster then overclocked first gen i7.
@@theonetruelenny9883 cpu.userbenchmark.com/Compare/Intel-Core-i7-2700K-vs-AMD-FX-8150/1985vs2006
2/3 performance for $332 vs $245 7/10 price. Is it that bad? It is if all you need highest benchmark score, for the rest it is good value for money.
Is there any info on whether b350 motherboard will work with zen 2 chips. I saw a video from dannyzplay saying it won't. Kinda worried.
Don't worry, you guys are overthink it to much...
I always appreciate your analysis. I think you've been on the right track this entire time. Navi... I think AMD is having problems because of the size of the die. Nothing more. I fully expect the moment AMD can properly chiplet their GPU's they'll be on a far stronger footing even WITH the limitations of chiplet design on their GPU software. Software can be reengineered well enough, even if AMD's efforts haven't been at the best there. As far as intel, I'm praying that they can produce one heck of a GPU. Because 1) Intel Integrated Graphics sucks and people who have to use intel because there's no decent competition (such as mobile) deserve better and 2) someone's got to compete with nVidia. And Intel taking nVidia off their game will force AMD to redouble their efforts. Because once nVidia mindshare starts dropping that's an opening for AMD to retake some of that for themselves.
As for your audio quality from your earlier recordings... Sounds to me you've learned to enunciate a bit better and got better audio equipment and acoustics where you do your recording. You'd be shocked how the room acoustics and audio equipment can really change the way people sound.
Do Intel's integrated GPU's really suck though? Mainly, how is their performance/watt or performance/area when compared with AMD?
Great video Jim!
Just a comment for those who don't know. As voltage increases so does current and power is voltage times current so the real power usage increase is far more dramatic than implied by that voltage graph for those who don't understand how voltage effects current. Being that this isn't a Linus Tech Tips video I assume this doesn't really need to be mentioned though.
That's common knowledge for middle school graduated people. If they don't know that, they must be doing something other than learning while in school. 😁
Hence why I wrote this: "Being that this isn't a Linus Tech Tips video I assume this doesn't really need to be mentioned though."
That awkward moment when you publish the best debunker of your own initial theory xD.
Whats the game 20:00?
Forza Horizon 4, great game I've really enjoyed playing it.
@@adoredtv I was just searching for nice racing game, this looks perfect, thanks for response! Keep up the good work! :)
I am excited for zen 2 news and information
Thank you for your insight on everything that you do. I love your reviews, and the knowledge that you bring to the tech world. I'm glad I got to your patreon discord as well!
Cheers.
Hmm. You do a pretty good job of playing devils advocate to yourself. Might be time to team up with another youtuber that does analysis. Maybe rapid fire debate-like videos being launched on each of your channels. I don't know who that would be, but people like conflict, and the adversarial system is very good at helping others arrive at a conclusion. Hell, even if someone agrees with your analysis, they can go all-in devils advocate like a lawyer.
I like that you tested the 2+2 and 4+0 core configuration. I was wondering in a previous video how that would affect performance in different workloads. Since that could be a method of segmentation. (This chip has 8+0 and is $110, this one has 4+4 and is $100 etc).
I just wanted to say i love your analysis vids (i fount your cash vid vary informative) keep up the great work.
I waited 6 months after zen1 launched before picking up a R7 1700. Wasn't a high-clocking sample so I built a new system around it for my dad. Jumped early on the 2700X and am much happier now.
With 7nm heat density, having dual chiplets, and likely a bucketload of binning I can't help but wonder if your average 12 core part will be the better performer for the cooling and for the money. I can't see those halo high clockspeed 16 core chips being $300.
I'll try and wait until the end of the year to make a decision to upgrade (for fun) or to hold off until the next refresh.
It's so strange, hearing your voice from back then. It sounds so different, and yet, I didn't even notice how much it's changed over the years. Talk about a flash from the past, that was certainly quick trip down memory lane. I have a feeling we'll get around to mentioning this specific time and video sometime in the future, maybe.
Anyways, cheers mate.
Holy shit your voice has improved considerably haha.
As did his mocrophone and recording software.
I didn't realise how squeaky you used to be haha, how times have changed! Great video mate, considering how hard it is to say if it's gonna suck right now, you made some valid points :)
AdoredTV : Cinebench R15 has always favored Intel cpu-s since it detects AMD FX 8350 as 4 core 8 thread cpu which is just not a proper detection :). I remember reading somewhere around 2013-2014 that Intel made a donation in developing Cinebench back then....and I am pretty sure that they made them put in a compiler to hinder AMD cpu-s like Intel did in many small time companies that developed bechmarking tools. I remember also when Ryzen was hit with - 30 % performance degradation switching from CPU-Z v1.78 to v1.79 while Intel cpu-s stayed pretty much the same despite the fact that KabyLake was an older revision of Intels architecture at the time. Any common sense used here hints to another of Intels anti-competitive tactics using "donations" to CPU-Z developers in exchange them putting in a compiler to hurt AMDs performance yet once again.
So to say that Cinebench R15 favors AMD cpu-s is an illiterate statement because Cinebench R15 is a benchmark that is older than ZEN architecture and is completely nonupdated since it was released and therefore cannot be anticipating any kind of ZEN performance as a favored one.
Did you ever get BTC or ETH donations?
Yes I do, not frequently but every so often.
@@adoredtv huh, neat
Is it ERC20 compatible?
I have some (pretty useless) tokens hahaha
@@TechDunk Just Eth, BTC and BTC Cash. ;)
Yes! New AdoredTV vid. Upvote, then watch. Silky Scottish voice here we come.
Hey, I've always thought you were extra fair on your video's, m8. Good to see you're sticking to that fairness in the best way possible.
I make templates for Davinci Resolve and what AMD gave me is 8 core 16 thread cpu for 200$. I could only dream of that 3 years ago! Also RX 580 8GB for 120$? Yes! Thank you, AMD.
Back then when I wasn't subbed to this channel, I watched occasionally one of your videos from time to time,
but man, your accent did actually change drastically since the Polaris vs Maxwell video.
I never knew that Jim used to be a hobbit! XD OMG, his voice was so much more high-pitched back then.
How do you explain your accent change? On a side note... excellent video as always.
Jeez Jim. Did you have a... "bit of a change" between your first tech video to today?
He's 42 or around that today.
well, der8auer sounded pretty positive about it, right?
i bet he knows a couple of guys with some scoops too, seeing what he does, and gets cpus shipped to him 6 months before release and the such.
Don't get wheel for forza horizon. By far best experience is with pad (i have both it suits pad way better as an arcarde racing game)
i second this, the xbox one pad is awesome for forza pc, the trigger rumble is amazing for feeling what your wheels are doing
Wow the voice difference is like a Scott and an Irish of a difference lol.
Also why is everyone complaining about the time? It's late afternoon for me so perfect timing. :-)
Look at the time in UTC+1
Creepy