Bah, got the Spider-Man charts mixed up. My bad 😢
Also forgot the 3.7GHz Overlay on MSFS. Still a good video, happens :)
@@nitram0103 Lol great, rub it in why dontcha 😉
@@IcebergTech Didn't bother me at all but if u start pointin out mistakes, gotta do it lol
I HAVE THE SAME MOTHERBOARD HAHAHA
I'm using it with my 5900x ~
Hey brother, is the HD 630 better than the GT 710 2GB GDDR5? In your video you compared the 1GB version of the 710???
11:52 - correction here. They didn't switch to Unreal Engine 5 for Witcher 3 on next-gen consoles. They simply updated and improved the assets and visual quality, added RT, etc., using the existing REDengine that powers the older version of The Witcher 3.
CD Projekt RED will be switching to UE5 for brand new Witcher & other games in the future.
The transition at 7:33 was smooth.
🫡
So smooth that I didn't even notice it
It's around 14:15, a little before. You're welcome
I think a lot of people forget (or never fully realized) that early Zen really was never particularly fast; its advantages were lots of cores for cheap, and then they added in efficiency when they hit 7nm with Zen 2, but were still behind the Intel ST performance curve. It wasn't until Zen 3 that AMD really started to challenge Intel in ST.
"challenge", lol yeah okay that's what they're doing challenging
@@victorkreig6089 I'm confused, are they not challenging Intel for the ST performance crown? I know they were behind in ST through the first 3 generations of Zen (especially Zen/Zen+), but the gap has largely closed starting with Zen 3 taking a contested lead, with some see-sawing back and forth ever since.
@Zefra except that is objectively not true. AMD managed to build up a massive multithread lead as early as Zen 2, but they were still behind in single thread and never established a significant lead in ST even when Zen 3 was new.
And now that HEDT has been largely abandoned by both companies (although it appears to be coming back later this year), even AMD's massive multithread lead has largely disappeared on the mainstream platform, with Intel arguably having the faster parts for single-user MT tasks.
Right now, with Zen 4, the best thing AMD has going for it is efficiency. Otherwise, performance is largely a wash (AMD needed X3D again to take back the gaming performance crown), and prices even favor Intel. If that is Intel being "destroyed," then I have no idea how AMD survived from about 2006 until 2019.
Right now, we are in a golden era of CPU competition that hasn't been this close since the early days of Athlon.
@@bojinglebells the single thread gap is a slim margin now; choosing Intel over AMD is literally doing yourself a disservice out of brand loyalty to a corporation that's bent everyone over a barrel for two decades because they basically have a monopoly on the market, just like Nvidia
Man, I will never forget going from a Core 2 Quad and a 750 Ti to a Ryzen 1200 and a 750 Ti. It was so much faster and really cheap.
Yes, I would like to see the RX 6700/PS5 comparison.
The small cache is probably compensated for somewhat by the insanely faster memory the CPU on the Xbox has access to, thanks to unified memory.
No. That memory has incredibly higher latency, which is awful for gaming. We've tested the 4700S and its latency hit 145ns! The console CPU actually performs worse than the 4700G with regular cheap DDR4 3200, even at JEDEC spec CL22.
@@CasualGamers exactly, and people underestimate cache's importance, even for GPUs. Comparing just memory bandwidth is old-school late-2000s talk; look how little memory bandwidth Nvidia's 4000 series and AMD's RDNA 2 and newer have, but they're still faster.
This is likely why Sony chose to go with VRAM instead. Unified memory is a bit of a flop compared to modern RAM
@@niks660097 if bandwidth was not highly important in GPUs, then they would have been using simple DDR, not GDDR. GDDR has other advantages over DDR, not just the bandwidth. No one underestimates cache in either CPUs or GPUs.
@@MajorMacGyver The PS5 also has unified GDDR6. The small DDR4 RAM, 512MB if I'm not mistaken, is only for background tasks.
It'll never happen, but a Ryzen 5000 series CPU with RDNA2 Graphics would be so cool as a cheaper option vs the 7000 series
A 7600G with a cheap motherboard is gonna be the best hope here; the 5000G series is probably discontinued.
@@technologicalelite8076 AM5. Cheap motherboard. Lol.
The last part about it being a cheaper alternative is exactly why it wouldn’t happen. I am not convinced it should happen. Now imagine a 7000 series with double the rdna2 units.
The 7600X is basically a 5700G, a little bit slower, but if you put in a GPU it will mop the floor with the 5700G - but then again, that defeats the purpose of buying a 5700G.
@@ILoveTinfoilHatsthat's a big brain moment right there lol
My guy really did a "dongle in other peoples mother" joke at 14:18 🤣🤣
That's how you know it's quality content, when you slide those in
I must say that I feel like this video was one of the most worked on (if not, I'm sorry). Jokes, transitions and everything is getting better as time goes on. It's great to see the progress.
R5 5600 is an excellent recommendation. If you're one of those people already considering a 5500, just put the extra money towards the 5600. At stock speeds it's only 200MHz slower than the 5600X (this applies to base and boost clocks); however, with PBO support this gap can easily be closed, resulting in the same performance at almost the same power draw - depending on the "silicon lottery", of course. You can also manually OC (like myself @ 4.7GHz all-core) for even better performance, at the cost of slightly higher power draw and temps due to it being lower binned than the 5600X. Otherwise the rest of the CPU is identical, from cache to PCIe 4.0 support. I'm not saying the 5500 is bad, but with half the L3 cache of the 5600/5600X along with only PCIe 3.0 support, you might as well get the 5600G in that case, as at least the inclusion of the iGPU can be useful for various purposes - such as being a display output if your discrete GPU is giving you trouble and you need to do some driver repairs, or gaming at low settings while saving up for a better GPU.
The reason people consider 5500 is because it's cheap. If they could put extra money, they absolutely would.
@@vinylSummer yeah, I know, it's due to prices being so different all over the world. For example, the 5600 is $129 USD at Micro Center currently; when I got mine there it was $149 and the 5600X was $199, back in May 2022
@@couriersix2443 here in Russia 5500 is ~$100, while 5600 starts at $150. 5600 is 50% more expensive but it doesn't provide nearly 50% more performance, so the choice for budget conscious buyers is obvious. I imagine it is the case for many other countries
@@vinylSummer
Same here in Indonesia. Tho it's only about 35% - 40% more expensive. And the crazy thing is, even a new R5 3600 is slightly more expensive than R5 5500.
@@vinylSummer 5600 is $129 on Newegg (of all places lol) for the next day apparently. Not sure if they ship internationally, just thought it was interesting. And it honestly depends on the title, but that lower L3 cache really cripples the 5500 in games that rely on it. Most of the time it struggles against the 12100F (in both performance and price) despite having a core/thread advantage, and is closer to Zen 2 CPUs like the 3600/3700x in performance despite having Zen 3 cores. As I said, it's not a bad CPU. Just doesn't really belong with Ryzen 5 series in my opinion. They should've made it 4c/8t and called it the R3 5100 or 5300x
Great video as always, really love your content! Just would like to point out that 11:53 Witcher 3 update is still running on REDengine 3, not UE5 :) (Witcher 1 Remake is going to be on UE5)
And regarding the Nvidia driver CPU overhead problem, wouldn't you consider changing the CPU testing methodology to AMD GPUs? Results might be higher, especially for older/weaker CPUs.
For example, for this video a nice GPU pair might have been the RX 6700 / 6700 XT, which are roughly the same as the Xbox/Playstation GPUs.
EDIT: Wrote this comment before watching till the end 😂
Yeah, someone just told me about REDengine. As for driver overhead, I haven't got any fixed plans yet, but at some point in the future I will probably trade in my 3070 for an RDNA 3 card or maybe a 6900XT. I mentioned in the outro that I am thinking about picking up an RX 6700 to review for a "PS5 equivalent" build video, too.
Spot on!
The 6800 is closer to a Series X than a 6700 XT; the 6700 XT is closer to a PS5 GPU. But the consoles also have some specific stuff for a more efficient geometry pipeline, while the Series systems are basically just PCs at this point, using DirectX with specific hardware.
@wnxdafriz The PS5 benefits due to the SSD being soldered on the board.
Hence fewer teraflops were needed.
The Xbox Series X shines due to its high teraflops; the GPU and CPU work hand in hand, so the faster SSD wasn't needed, nor was it soldered to the board...
Interesting video when looking at the 4700g in Windows, just to be clear though the CPU as it's being used here, is significantly more powerful than the counterpart in the Series X.
You touch on the clock speeds being locked at 3.6 GHz when SMT is enabled on console. However the Xbox OS permanently reserves 1 core at all times, for the Xbox interface. That means even with SMT enabled by the individual games' developer, the CPU would only be running at 3.6 GHz, on 7 cores 14 threads.
The consoles have dedicated decompression chips to assist their CPUs. You and the video creator are comparing apples to oranges. In real world scenarios, the PS5 and Xbox Series X CPUs are obliterating their PC counterparts. This APU he is using is flawed because it lacks DirectStorage and Kraken, in addition to the RDNA 2 compute units that the consoles use to limit the burden on their CPUs.
Also the Xbox series X CPU is running at 3.8Ghz not 3.6Ghz
@@Rican856 I very clearly said that it runs at 3.6Ghz when SMT is enabled, which it does. It can only run at 3.8Ghz without SMT. So that would be 7 cores, 7 threads, in total, at 3.8Ghz.
@@RydarGames DirectStorage is an API made by Microsoft... Microsoft, who makes the Xbox and Windows. It's on both, but requires devs to implement it, which none have yet (because why would a game developer make a game and a technology for prebuilt plastic PCs but intentionally leave it out of the standard PC version?).
As for "dedicated hardware", the VCE used for compression/decompression IS present on their parts and separate from the shader array, so whether it's GCN or RDNA compute units is irrelevant.
PC is already on RDNA 3, and when the PS5 Pro and Xbox Series X Elite launch, devs are going to have to do a tiny amount of optimisation for the difference, but it's still preferable to just sticking with RDNA 2 (and a lot of titles will run fine on both).
So more like Golden Delicious versus Braeburn apples.
14:16 lmaoooooooo I burst out laughing so hard man.
Your dongle is dongling out of control
Remember that the Series X doesn't utilize all 8 cores; it's only allowed to use 6.5 cores, with the other 1.5 reserved for software and that kind of stuff.
I thought pc kinda does the same thing
@@frankmundo4300 A PC uses all of its cores, so you're limited only by what type of CPU you have and how many cores it has; the Xbox strictly uses 6 cores for gaming because the other two are hard coded to operate the OS.
@@TheRedRaven_ does the pc not dedicate cores for its operating system?
@@frankmundo4300 Not dedicated, the OS utilizes all cores on a CPU when not gaming, but when you launch a game the CPU is prioritized for the game and allows the game to use as many cores as it needs (it's different depending on the game engine, but generally 6-8 cores). On PC, when you launch a game the OS is basically put on a low priority mode, especially now with "game mode" which stops unnecessary background tasks like Windows update etc.
So basically, the Xbox will always be locked at 6 cores for gaming, that's why this video is a little misleading. If you ask me why Xbox locked out the 2 cores for OS, it's probably a RAM issue because consoles don't have large RAM sticks like on PC so the Xbox CPU has to do more tasks instead of storing them in memory (RAM). The two extra cores would then need to be prioritized for the OS so there aren't any issues that would freeze the console up or cause other issues.
@@frankmundo4300 not sure how the consoles do it, but a CPU core can power both the OS and some programs at the same time (of course, processor time is a limited resource that will be split between the two tasks). This becomes apparent in lower-end CPUs with 2 or 4 cores, because leaving only one core for most games wouldn't end well.
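For anyone curious what the two approaches described in this thread look like in practice, here's a minimal sketch using Python's psutil: "console style" hard-pins a game to a subset of cores, while "game mode style" leaves affinity alone and only shuffles scheduler priorities. The PIDs are placeholders and the priority-class constants are Windows-specific; this is just an illustration, not how the consoles actually implement it.

```python
# Minimal sketch of the two approaches discussed above (Windows, psutil).
# "Console style" pins the game to cores 2-7, leaving cores 0-1 for the OS;
# "game mode style" keeps all cores available and only adjusts priorities.
# The PIDs below are placeholders - look real ones up with psutil.process_iter().
import psutil

GAME_PID = 1234         # placeholder PID
BACKGROUND_PID = 5678   # placeholder PID

def console_style(game_pid: int) -> None:
    # Hard-reserve cores 0 and 1 by restricting the game's affinity to cores 2-7.
    psutil.Process(game_pid).cpu_affinity(list(range(2, 8)))

def game_mode_style(game_pid: int, background_pid: int) -> None:
    # No reservation: just raise the game's priority and lower a background task's.
    psutil.Process(game_pid).nice(psutil.HIGH_PRIORITY_CLASS)
    psutil.Process(background_pid).nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
```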
Very professional video man, I like the content a lot
Cycloon?
14:18 most professional
Testing the Intel Ryzen 7 4700G 🤔 I didn't know that Intel bought the CPU division from AMD.
Sadly the full APU used in consoles was never sold on the retail market. It would have been a blessing during the 2nd crypto mining craze, the lockdowns and the chip shortage.
Even at "high" prices like $300 or something.
I run a 6700 non-XT, snagged it new for $300 around Christmas time. Paired with my 5600G, it runs most games at 1080p ultra settings (aside from some ray tracing) at 80+ fps
The comical comments were spot on 👌🏻😂 keep up the good work!
Wow. I really thought people forgot - the 4000 series got completely forgotten.
Nice video!
11:53 The Witcher 3 Remaster isn't using UE5. It's still on RedEngine. The remake of the first game that's currently in development is the one that's using UE5
Ah! My bad, I must have misheard.
yes yes, i use the ryzen 4070. it is real and you guys should get one
No, that would be the 4700S kit that you can buy at some places that also includes 16GB of GDDR6 RAM
Still pretty incredible three years later how good current gen consoles are, thanks to Nvidia and AMD ruining the GPU market
To recap 😄 this was one more glorious Iceberg Tech content 👍 that suspiciously involved too many 69 incidents while testing Cyberpunk 2096, which arguably would comply with other people's mothers' console dongles if it wasn't so stiff about it. Alas, at some point YouTube has to boost this type of spicy content with more viewers. No "pun" intended.
The fascinating thing about consoles is the cost reduction that is required to pack so much computational power in a cheap package.
They sell at a loss to recoup on game and subscription sales.
@@KC-lg8qf Yes, but they still cost reduce things so that the product loss isn't TOO high.
@@qwertykeyboard5901 absolutely
Had a friend kill my 4350g when overclocking ram...he set the IMC voltage to 1.3v.....not the core voltage...needless to say...the system never made it to post.
My condolences 😔
Is your friend still alive ? 😆
The RTX 3070 is more powerful than the GPUs in the current gen consoles, but it has less available VRAM. To do this test properly, you really need to use a Radeon RX 6700 (10GB), or possibly 6600/6650 XT for the PS5.
I enjoy all sorts of RX 6000 series content. I have a 6900XT, so it's nice to know how its siblings are doing.
my RX 6800 is happily playing LoU1 😁
As always, excellent video bro. Can't wait for the next.
You got me as a subscriber at that dongle bit. 😂
I wish there were benchmarks of the 4800S desktop kit (the real XSX SoC), but I can't find any. And no, the 4700S is the PS5 SoC, in case someone is wondering.
I bet some of the performance problems have to do with the fact that the Series X has dedicated decompression hardware taking loads off of the cpu.
Those benchmarks make me love my 5600X even more lol
Same, it's a great CPU.
Big props for testing the iGPU. The DDR4 4000 was great for that; for the CPU, maybe not so much. I would enjoy a follow-up vid where you test the Tomahawk's out-of-box PBO settings. The manual 4.3GHz is likely less performant in most games than the max PBO boost obtainable. I'd like to see the CPU performance tested with lower clocked, lower CL RAM too. I presume you decoupled the Infinity Fabric from the RAM speed for 4000MT/s. If so, the fastest 1:1 you can hit with the lowest CL is probably going to be optimal, especially if the RAM is dual rank.
The USB issues on older AM4 chipsets are well known. If you are running the latest BIOS, and all drivers are the latest, try a different port. If no joy, welcome to the club. I never did get some of my USB stuff to work right with an ASRock X370 with the latest everything on Win 10 or 11.
Anyways, if you read this reply thanks for your time.
Overclocking in my opinion has been a waste of time since the 9th gen Intel processors. I've been instead undervolting my CPUs. As you demonstrated the performance gains are usually negligible, but the power consumption % increase is usually enormous by comparison to the performance gained. I know it's a little late but I found a game that might be a good addition to the CPU benchmark. I started playing Dead Rising 2, it's DX9 and therefore single-threaded and it maxed out my 11900k. I can barely hit the engine cap of 120fps due to the processor bottleneck.
Another question about testing: how do you select areas for your benchmarks? I've been trying to optimize a game's settings lately and I'm struggling to find an area of the game that uses all of the graphical effects. Do you have a method of finding these areas, or do you just run around and find where performance is the lowest? I'm trying to test things like volumetric lighting and screen space reflections, but I'm never quite sure if there are any in the scene I'm using. Any advice?
I'm curious to know your method to undervolt a 9900K
@@MuzdokOfficial Change the voltage mode to adaptive, then lower the voltage in 25mV steps and then test the stability and performance in cinebench. You can use XTU to speed up the process by not needing to restart every time you want to change the voltage. Then restart and enter your numbers at the end.
@@bardofhighrenown Never tried xtu for undervolting. Good idea. Happy easter thank you.
@@MuzdokOfficial Yeah it saves a ton of time. Happy Easter!
@man with a username It wasn't technically bottlenecking. I was hitting the engine cap, but at 98% utilization on a single thread.
But I upgraded from a 2060 to a 6950xt and the game now runs in the 70s, which makes no sense. The game just might be broken
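As a rough sketch of the undervolting routine described a few replies up (drop the offset 25mV at a time, stress test, keep the last stable value): the two helper functions here are hypothetical placeholders, since in practice the offset is applied in the BIOS or a vendor tool like XTU and the "test" is something like a Cinebench run you watch for crashes or clock drops.

```python
# Sketch of the undervolting routine described above: keep lowering the offset in
# 25 mV steps until a stress test fails, then keep the last stable value.
# set_voltage_offset_mv() and run_stress_test() are hypothetical placeholders -
# the real offset is set in the BIOS or a vendor utility, not from Python.

def set_voltage_offset_mv(offset_mv: int) -> None:
    print(f"[placeholder] apply adaptive voltage offset of {offset_mv} mV")

def run_stress_test() -> bool:
    """Return True if the run completed without crashes, throttling or WHEA errors."""
    return True  # placeholder result

def find_stable_undervolt(step_mv: int = -25, floor_mv: int = -200) -> int:
    last_stable = 0
    offset = step_mv
    while offset >= floor_mv:          # offsets are negative, so '>=' walks toward the floor
        set_voltage_offset_mv(offset)
        if not run_stress_test():
            break                      # first unstable step: fall back to the previous offset
        last_stable = offset
        offset += step_mv
    set_voltage_offset_mv(last_stable)
    return last_stable

if __name__ == "__main__":
    print(f"Last stable offset: {find_stable_undervolt()} mV")
```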
Those pins look beautiful close up.
They don't look beautiful when they are stuck to the bottom of a CPU cooler though lol
Shame we will no longer see them on new cpus
This was a good benchmark, would be interested if you compared the 4700G to a 3700X capped at 3.7Ghz to do an apples to apples test.
I'm glad that other people's mothers don't complain about your dongle
When a €500 console cpu is better than your €2000 pc
The channel is growing! (I love the @zWORMzGaming references)
Maybe like the Ryzen 3 4350G that is sold in my country - not in prebuilts though, since prebuilts are extremely expensive.
Those are sold without a box and only include the CPU cooler.
Well, the 4700 in the XSX has really fast but high latency G6 memory as unified memory. That should offset the poor cache amount.
Can it be used simultaneously?!
I don't think that one has more than 8MB of L3 already, which is nothing compared even to the 24MB on old Haswell
No, it doesn't offset the poor cache amount. The consoles have shared memory, not unified memory, because they are APUs. You don't know what you're talking about at all.
Def interested in that other video you teased😊
Love to see the 6700 test mate! Great vids pal... 🙏👍
Ty for this video. Some people did not believe me; now I can show them this video.
Geralt’s horse was actually trotting, not cantering :p
I bow to your superior horsemanship 🐎
Great stuff as usual Iceberg, thumbnail looks fuckin great!
Witcher 3's next-gen update doesn't even use any form of UE. They kept using literally the same engine, but they updated a few assets and graphical effects, making the game more demanding and run worse, too.
Interesting video. Thank you!
I always like objective analysis comparing console to PC hardware
>9:51
>managing 69, no comment
There is minimum one comment :3
Haswell-E is strong with its L3 cache and quad-channel memory.
Great video as always ^^
Civ 6 has an issue with constant crashing on PC too, ever since they added the launcher
nice to see clock speeds displayed. cheers
If you listen closely the song playing around 17:00 kind of sounds like oath to order from Majora's Mask
Based on the thumbnail, we now know the optimal place to put the CPU is on top of the Xbox; it's even got the holes for the pins to go in
Yes, I was quite surprised how big the 4700G was. The heatsink looked very silly perched on top.
Hi
Great video as always. I wanted to ask how you got MSI Afterburner (specifically RTSS) working on Valorant? For me and many people on Reddit, it just crashes the game or doesn't launch it.
Yes, it’s weird. I can’t explain it, really. It stopped working for me on this motherboard when I did my R5 3400G video, I had to use FRAPS and a spreadsheet to work out the frame rates in Valorant. Then, lo and behold, Afterburner worked just fine on every video since, including this one.
No idea why 🤷♂️
I believe you need to update to the latest beta version of Afterburner for it to work with the latest Valorant version
I think you would have got better performance out of the 4700G with slower RAM, because the Infinity Fabric will not be able to run 1:1 with the RAM at 4000MHz
APU has a different I/O die as far as I know. The Zen 3 ones can sometimes hit 2300mhz+ (DDR4 4600+), so the Zen 2 4700G probably runs 4000 in 1:1 as well
I run my 4750G with exactly the same speed and spec RAM, it's 1:1
@@ЛюбомирДинков Then you are very lucky. 99% of Zen2 Cpus can not do more than 1800Mhz on the Infinity Fabric.
@@marcroyale13 The integrated memory controller in the monolithic Zen CPUs is better than the one in the chiplet variants
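To put numbers on the 1:1 question discussed in this thread: DDR4's MT/s rating is twice the actual memory clock, so DDR4-4000 needs a 2000MHz FCLK for 1:1. A small sketch follows - the FCLK ceilings used here are rough rules of thumb for typical samples, not guaranteed silicon limits.

```python
# Illustrates the memory-clock / Infinity Fabric coupling discussed above.
# The FCLK ceilings are rough rules of thumb, not guaranteed limits.

def fclk_for_1_to_1(ddr_rating_mt_s: int) -> int:
    """DDR transfers twice per clock, so the memory clock (and 1:1 FCLK) is half the MT/s rating."""
    return ddr_rating_mt_s // 2

TYPICAL_FCLK_CEILING_MHZ = {
    "Zen 2 chiplet (e.g. 3700X)": 1800,   # assumption: common practical ceiling
    "Zen 2 APU (e.g. 4700G)":     2000,   # assumption: monolithic IMC usually clocks higher
}

for ddr in (3200, 3600, 4000):
    needed = fclk_for_1_to_1(ddr)
    verdicts = ", ".join(
        f"{part}: {'1:1 ok' if needed <= ceiling else 'falls back to 2:1'}"
        for part, ceiling in TYPICAL_FCLK_CEILING_MHZ.items()
    )
    print(f"DDR4-{ddr} needs FCLK {needed} MHz -> {verdicts}")
```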
14:15 LMAO that was good.. that was really really good
I can say I've never played any of these things like Battlefield 5, Fortnite or modern COD. I don't play any MMOs, any online multiplayer, driving, farming or goat-flying sims etc., etc., and I stay far, far, far away from any GaaS. So for the gaming that I do - offline, story-driven, action-RPG style, and I only need 60fps - the GPU and CPU (more the GPU) are still not quite enough on PS5 or Series X at higher fidelity. I find it crazy that the Xbox One X and the PS4 Pro did the same thing in a lot of games, and here we are with a PS5 and Series X still getting higher fidelity at 30fps and lower fidelity at 60. Seemingly the only things that really changed were a slightly higher pixel count and faster load times. LOVE RDR2 OFFLINE - everybody needs to go to Rockstar's support page and send them a message about giving us a real update of RDR2 for current consoles. I honestly couldn't care less about GTA or the 6th one; I'm not going to buy it and have no desire to play it. But an upgrade of RDR2 on PS5 and XSX (ONLY)? That I would be into.
UPDATE
*Xbox Series X
The Xbox Series X GPU is a high-end gaming console graphics solution by AMD, launched on November 10th, 2020. Built on the 7 nm process, and based on the Scarlett graphics processor, the device supports DirectX 12 Ultimate. The Scarlett graphics processor is a large chip with a die area of 360 mm² and 15,300 million transistors. It features 3328 shading units, 208 texture mapping units, and 64 ROPs. AMD includes 10 GB GDDR6 memory, which are connected using a 320-bit memory interface. The GPU is operating at a frequency of 1825 MHz, memory is running at 1750 MHz (14 Gbps effective).
Its power draw is rated at 200 W maximum. The console's dimensions are 301 mm x 151 mm x 151 mm, and it features an IGP cooling solution. Its price at launch was 499 US Dollars.
Graphics Processor - GPU Name: Scarlett | Architecture: RDNA 2.0 | Foundry: TSMC | Process Size: 7 nm | Transistors: 15,300 million | Density: 42.5M / mm² | Die Size: 360 mm² | Chip Package: BGA-2693
Graphics Card - Release Date: Nov 10th, 2020 | Generation: Console GPU (Microsoft) | Production: Active | Launch Price: 499 USD
Clock Speeds - GPU Clock: 1825 MHz | Memory Clock: 1750 MHz (14 Gbps effective)
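As a quick sanity check on the memory figures in the spec paste above (320-bit bus, 1750MHz memory clock, 14 Gbps effective), a few lines of arithmetic recover the usual peak bandwidth figure; the only assumption is that GDDR6 moves 8 bits per pin per memory-clock cycle.

```python
# Back-of-the-envelope bandwidth check for the quoted Series X memory specs.
# Assumes GDDR6 moves 8 bits per pin per memory-clock cycle (1750 MHz * 8 = 14 Gbps effective).

memory_clock_mhz = 1750      # from the spec paste above
bits_per_pin_per_clock = 8   # GDDR6: effective rate / memory clock = 14000 / 1750
bus_width_bits = 320         # 320-bit interface

effective_rate_gbps = memory_clock_mhz * bits_per_pin_per_clock / 1000   # per pin
bandwidth_gb_s = effective_rate_gbps * bus_width_bits / 8                # whole bus, GB/s

print(f"Effective data rate: {effective_rate_gbps:.0f} Gbps per pin")
print(f"Peak bandwidth: {bandwidth_gb_s:.0f} GB/s")   # ~560 GB/s for the 10 GB fast pool
```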
It's the Rx 5700 xt
@@MtFoxt this is what Xbox series x uses. What I commented
@@Unique1Media yeah it's the Rx 5700 xt similar performance.
That’s “SCARLET?”
Is it similar to SCARLET, or is it SCARLET they're talking about? There's a difference.
CAUSE I posted what XBOX SERIES X is using.
Good video; my only critique is I would have tested with FSR over DLSS. It is closer to most games' dynamic res settings and more likely to be implemented on the 9th gen consoles, given their AMD-based hardware.
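For reference on the FSR point above, the commonly quoted FSR 2 quality modes scale each axis by a fixed factor, so the internal render resolution is easy to work out. The sketch below uses those published factors; the output resolutions in the loop are just examples.

```python
# Internal render resolution for the commonly quoted FSR 2 scale factors
# (applied per axis). Output resolutions and mode list are just examples.

FSR2_SCALE = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0, "Ultra Performance": 3.0}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    s = FSR2_SCALE[mode]
    return round(out_w / s), round(out_h / s)

for out_w, out_h in ((2560, 1440), (3840, 2160)):
    for mode in FSR2_SCALE:
        w, h = internal_resolution(out_w, out_h, mode)
        print(f"{out_w}x{out_h} {mode:17s} -> renders at {w}x{h}")
```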
I’ve heard leaving comments on videos does mystical, positive things for creators and their channel, something to do with engagement I think? So I try to leave a comment on all of your vids if I can think of something to say.
That 69 to 96 joke was funny, I used to wonder what 96 would be, but now I know, thank you Iceberg Tech!
The real mystery is, what's an 88?
My dongle went in very smoothly with my new MSI motherboard, for my new shiny Xbox XS controller of course.😁
Gotta love AM4, well AMD in general. I'm content with my 1080p set up, RX 5700 paired with a Ryzen 7 2700x. Doesn't dip below 60fps on high in most titles
I really love the Civ brand... just not Civ 6, because it always crashes at a certain point (a fixed number of turns) on my X470-based system, and loading an older save will have the game crash at exactly the same turn again. As far as I know Firaxis did not really care about it at all and never even tried to fix the bug.
May I ask you what CPU you are using? And how many turns are happening before you experience crashes?
My gaming PC has a 5800X on a B450 board (the GPU is a 6750XT) and I play the Windows version of the game through Proton on Linux (Fedora 37). I've never had any crashes even when playing online with a friend with the turn limit of 250 turns removed (in our current save we reached turn 450-ish).
Did you actually run the DDR4 at 4000 with FCLK 2000? Not running memory/FCLK at 1:1 gives huge performance losses on Ryzen.
Are my ears doing something weird, or were you using a vocoder in the second half? Maybe it's YouTube compression against the music lol...
Please list the specific tracks you use so I can find the ones I like.
The Witcher 3's next-gen update still uses REDengine 3, not Unreal Engine 5.
The limited L3 cache of the console Zen 2 CPU is substantially mitigated by the exceptionally fast and low-latency GDDR6 UMA (roughly 6-8 times the bandwidth and half the latency versus higher-spec dual channel DDR4). From a practical standpoint, the Zen 2 cores will function at performance levels equivalent to higher-spec AM4 parts, clock for clock (obviously, for overall power envelope reasons, the console CPU is clocked lower).
Overall, both the Series X and PS5 have very balanced architectures, where everything is specced around targets and tradeoffs in performance and cost. But, they are purpose built, and assumptions are made around memory consumption and purpose (thus the asymmetric bandwidth of the Series X), which makes performance comparisons difficult. One analogy for the core continuing difference between consoles and general purpose PC’s is a console architecture can generally be thought of as very wide rivers connecting lakes of memory and compute resources, whereas the PC is relatively narrow rivers or streams, connecting oceans of compute and memory.
"half the latency versus higher spec dual channel DDR4"
GDDR6 does not have lower latency than DDR4.
@@nathangamble125 gddr6 is in quad channel...
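To put rough numbers on the bandwidth comparison in this thread, here's a quick sketch of peak dual-channel DDR4 bandwidth against the Series X's ~560GB/s GDDR6 pool. A 128-bit (2 x 64-bit) DDR4 bus is assumed; these are theoretical peaks, and the console pool is also shared with the GPU, so the CPU's realizable share is considerably smaller.

```python
# Rough comparison of dual-channel DDR4 peak bandwidth vs the Series X's 560 GB/s GDDR6 pool.
# Assumes a 128-bit (2 x 64-bit) DDR4 bus; real sustained numbers are lower than these peaks.

def ddr4_bandwidth_gb_s(transfer_rate_mt_s: int, bus_width_bits: int = 128) -> float:
    """Peak bandwidth in GB/s: transfers per second * bytes per transfer."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

console_gddr6_gb_s = 560.0  # 320-bit bus at 14 Gbps effective

for mt_s in (3200, 4000):
    pc = ddr4_bandwidth_gb_s(mt_s)
    print(f"DDR4-{mt_s} dual channel: {pc:.1f} GB/s "
          f"(console pool is ~{console_gddr6_gb_s / pc:.1f}x higher)")
```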
good vid, keep up the good work :)
I'd love to see a mega APU in the future
My CPU on my VR puter is a 4600G. It was from a HP POS. I ended up replacing the motherboard. It slaps for what I need. It will even run Beat Saber (min settings, 72FPS), but barely.
Paired with a 1660 TI, I can play a lot of VR games that are high quality and 120fps, where available. And I only need a 500 w continuous power supply.
11:50 Witcher 3 was not switched to Unreal Engine 5.
My 4750G cost me 50% more than what I paid for a 5700G a year later. Madness! Still beats wasting money on a GPU.
Love it. This guy is the best tech YouTuber
AMD = A Massive Dynamite... it will probably blow up like everything they do... like the new 7000 series and 7900 GPUs...
I would love to see an RX 6700 video. I got one for a friend because at the time it was way less than the 6700 XT and only a little more than the 6650 XT. With the extra 2GB and x16 PCIe, I thought it would be a better choice for a PCIe 3.0 system.
Bro they are 3 generations behind ain't no way
That cache is tiny AF, no wonder Gotham Knights is 30fps only
I keep telling people that the series x is running a ryzen 7 4700G. But people call me crazy saying I don't know what I'm talkin about. Well looky here....but fun fact: if you don't care about your console's warranty, and you know how to use a soldering iron, you could swap the base AMD from your series x with a ryzen 7 4700G AMD and get that extra GHZ out of it.
69 will now be referred to as 96 since the cost of eating out has gone up so much.
Low cache makes sense for a console; you don't care about high latency since you rarely run above 60 fps. Latency and performance are not directly linked, but low cache will limit your max response time - FPS, in that case. It lowers the cost quite a bit too. One good example of how cache can affect performance.
The PS5 lacks L3 cache, however. So, I think it's more comparable to a 6600XT, especially when you consider the lower clock speed of the PS5, despite having more shader cores.
yeah when the ps5 came out people were comparing it to the rtx 2070 so should be around the 5700xt or 6600xt for an amd equivalent
The PS5 is clocked higher
Hard to compare - game developers optimize around the consoles' never-changing hardware, which is well known to them.
It's why the PS4, after 10 years, can still play many new AAA games decently, and Elden Ring can cause a 4090 to dip to 10 fps.
Was the 4000MHz DDR at a 1:1 ratio with the infinity fabric clock?
I think I saw something about your videos not monetizing well. What you need to do is insert an additional ad every 7 minutes to triple revenue vs pre-roll only.
Thanks for the suggestion, but the screenshot I linked was of my Shorts earnings. I made a few Shorts last year to test out the platform a little, back before it was officially monetized, and some of them are still getting views. Based on the RPM, though, it's probably not going to be a priority for me.
The Witcher 3 (Remastered) did not switch the engine to UE5! It still uses REDengine 2. CDPR only announced that all projects after Cyberpunk 2077: Phantom Liberty will use UE5, and they will put their proprietary in-house engine to rest.
Bro, is the GT 710 2GB GDDR5 better than the i3-7100's HD 630? Please reply - in your video you compared the 1GB version???
It's at the same level of performance; you will not get that much of an improvement
The 710 has much faster VRAM with more bandwidth. The Intel integrated graphics have a trivial amount of dedicated VRAM and mostly rely on your system memory.
the hd630 is faster
Is the RAM running at 1:1 or 1:2 (i.e. gear 2) when you display the 4000MT/s spec?
I'd also like to know the timings as well
Fortnite only really needs quad cores unless you are screen recording then the hyperthreading comes in handy. It's the only game I've found that can still run ok with an Intel Arc A770 without resizeable bar. Fortnite doesn't like slow clock speeds and it likes a decent amount of memory.
I wonder if there is any contractual reasons why we don’t see console style apus (big integrated graphics) on the market, or is it just a perceived (or actual) lack of demand.
Imo it has to be contractual. Bc the demand is there for high performing APUs and iGPUs. Or it might be economics. Sony and MS are (or were) selling consoles at a loss so in the consumer mkt, those APUs would be too expensive. Hella bang for buck this gen! 😀
How on earth did you get the RAM running with 4 sticks at 4000????? Most people can't run dual sticks at 4000
You should use 1080p with DLSS Performance to avoid a GPU bottleneck, for example in Cyberpunk
Can we then upgrade the cpu and GPU in the xbox with desoldering?
You got my attention.
hi @IcebergTech and everyone else!
ps, how well does this cpu play games like hearts of iron 4, rimworld, stellaris and victoria 3?
Hi! Sorry, but of those games I'm afraid I only own Stellaris, and I've never played it. Sorry!
@@IcebergTech honestly, stellaris, rimworld, hoi4 and vicky 3 are all absolutely worth playing as all of these games are more story generators than anything. no two runs are the same. this is also true of vicky 3 and hoi4 but based more on the real world than sci fi stuff. hoi4 is a world war 2 sim and vicky 3 deals with 19th century.
"Ryzen 7 3700X
Xbox and PC Hardware Capabilities
Xbox Series X specs are almost equivalent to a Ryzen 7 3700X desktop processor paired with Radeon RX 6800 or Nvidia RTX 3070."
From Google search.
I'm looking at getting a R7 5700X for £110 from CEX so I can use my Intel Arc with the resizeable bar which I can't do on my 8700K.
With the RAM at 4x8GB, 4000MHz, you must have set the Infinity Fabric ratio to 2:1. Wouldn't dual-rank 2x16GB 3600MHz CL16 be measurably better? It's not a difficult kit to find.
IF was running at 1:1. APUs typically have better memory controllers than Ryzen CPUs, and can run the IF at higher frequencies too.
@@IcebergTech I was unaware of that. Good choice then.