The problem is:
1. There is always something better in the future you could wait for (hardware-wise).
2. I'm assuming Strix Halo, if it's as fast as they claim, will be VERY expensive, and laptops will cost as much as gaming laptops with dedicated GPUs in the same price category...
@@Hubwood I agree with the 1st point, but my reason to wait isn't for the next best thing; it's for hardware capable of running the AAA games at 60 fps on medium settings with sustained battery performance, which just isn't on the market yet! A focused chip from AMD could cut the power requirement and increase performance drastically, especially for the Steam Deck OLED. I can pay the $200-300 extra, but buying a product late in its life cycle with (to me) unsatisfactory performance doesn't sit right with me!
@@Hubwood Crossing my fingers, as my old PC's mobo died and I might just grab an upgrade kit and run these APUs for fun. I mean, my second PC is a Ryzen 5 5500 + RX 6700 XT combo, so yeah.
@@807800 Could try plugged in later, but I guess my wattmeter isn't precise at such low wattages. On battery it's around 3-5 W at idle if I'm using the Silent and Battery Saving modes, resulting in 16-24 hours (still figuring out the precise duration, as the system started a screensaver to protect the OLED from burn-in; need to run it again). Watching YouTube at 50% brightness gives an insane ~11 hours.
Yes, all from the Vivobook S14. In maximum performance mode it reaches around 50 W, like the examples at the end of the video where I compare it with the Ryzen laptop.
@@Hubwood Awesome, can't wait to see what price this comes at in my country. Do you think it's worth buying, or should I go for a cheaper version with a Ryzen 7 8845HS + RTX 3050 6 GB, like the Lenovo Yoga Pro 7? Just subbed. :D
@@mariussm7797 Really depends if it's in your budget. But to be honest, I haven't been this excited about a laptop in a long time. It is absolutely amazing in my opinion. Silent. Ultra powerful. Super lightweight. Ultra-long battery life. Great display. Amazing speakers. Good build quality. Just stunned... I will try to make a review in about a week or so.
No, it's equal when the wattage is the same. The gaming handhelds' Z1 Extreme is basically a laptop 7840HS. If you compare them at the same wattage, they perform identically.
@@Hubwood Yep, but handhelds suffer more often from CPU/GPU throttling because of the lack of cooling, and notebooks are generally better optimized. Moreover, I don't think they will release a handheld with the AMD Ryzen AI 9 HX 370, just a cut-down version of it.
Yeah, that's why you can usually (not always) run these iGPUs at higher wattages in laptops. But again, at the same wattage they are basically equal :) (That's what the initial post and my answer were about 😅) Surely, a 780M in the Ally can't compete with a 780M in a 60 W Ryzen 9... (even though the difference might be marginal even in that case)
Will this laptop (the Ryzen AI 9 HX 370 & Radeon 890M model) be able to handle architecture and design 3D modelling and rendering software such as AutoCAD, 3ds Max, V-Ray and Lumion? Has anyone tried it, and how were the results?
No point. Many people have already shown it's not worth the money. Like, no improvement at all. You'll get the same results at a cheaper price with the old Zenbook 14 with the 8840HS.
@@leothehuman_9476 Yes, of course I do. 🤷 (No sarcasm here.) Even if I would, usually not being able to update them or play them online makes them a no-go for testing anyhow. AND even if that were possible, spending so much time getting each game running... Sorry, don't have that time 😬
This is insane. The new chip is way more power efficient, and we've already got the Ally X with an 80 Wh battery; by the end of the year we should have 90-100 Wh batteries. So even a 35 W turbo in handheld mode would be doable, and that should still give us 2+ hrs of gameplay. Those 7-10 more fps at native resolution translate to 15-20 fps more using FSR. FSR is getting better and better, and Frame Generation 2.0 will also be more consistent. An 890M device with 32 GB RAM/VRAM, a 99.9 Wh battery and an OLED with VRR is not far away!
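A quick sketch of the runtime math behind that claim; the ~5 W overhead for display and board power is my illustrative assumption, not a measured figure:

```python
# Back-of-envelope handheld runtime: battery capacity divided by total draw.
# The 5 W screen/board overhead is an illustrative assumption.

def runtime_hours(battery_wh: float, soc_w: float, overhead_w: float = 5.0) -> float:
    """Estimated runtime in hours for a given battery and power draw."""
    return battery_wh / (soc_w + overhead_w)

print(round(runtime_hours(80, 35), 1))   # Ally X 80 Wh at a 35 W SoC TDP -> 2.0 h
print(round(runtime_hours(100, 35), 1))  # a hypothetical 100 Wh battery  -> 2.5 h
```

So a 35 W turbo on an 80 Wh pack lands right around the 2-hour mark the comment mentions, and a 100 Wh pack buys roughly another half hour.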
Insane? A 15% gain for you is insane? Man... you have such low expectations. Laptops with the 890M cost 1300+ euros, and for that money you can get a 4060 laptop. I am not impressed at all.
Strix Halo was leaked by Tom from the Moore's Law Is Dead (MLID) channel. IIRC it will feature only Zen 5 cores (up to 16) and a more powerful RDNA 3.5 iGPU. Of course, Halo will require more power. The high-end model will have 16 cores and a 40 CU iGPU (aiming for RTX 4070 mobile performance). Not too shabby, but pricey no doubt.
It's NOT that impressive. The only impressive thing is the fact that they got its power efficiency fixed. And that's not even impressive; that's just a relief. This is what the 780M could have, and should have, been: 33% more CUs, 80% of the power, 25% more performance, while still on the same node. That's a fix, especially considering a 780M is only about 5-10% faster than a 680M, which was an entire node behind it. That was a pathetic increase, plain and simple.
Totally pathetic. Same thing with RDNA 3 discrete GPUs: newer node, new architecture and an increased bus size, with a measly 35% performance increase. The 6nm 7600 had an increase of less than 5% over the 6600 XT. AMD is trash and has been for such a long time. I hate them. Their incompetence and inability to bring competition are the reasons for Nvidia's skyrocketing prices. AMD fanboys blame it on "Nvidia fanboys" like myself for being willing to buy Nvidia, while AMD is just as terrible a value with their horrible ray-tracing performance.
I guess my comment got deleted. AMD and Intel are incompetent as hell these days. I get that it's getting tougher to squeeze performance out of these chips, especially on the same node, but other chip designers manage to do it. Look at Nvidia with Maxwell and Turing, or more recently the Snapdragon 8 Gen 3, which had massive performance gains on the same node. I mean, what the heck have they been doing the past 4 years? RDNA 3 hardly performs better than RDNA 2. Same thing with Zen 5. Actually, now that I think about it, AMD, more specifically their GPU division, has been extremely meek starting with the 290X almost 10 years ago.
@@Hubwood I'm not talking about efficiency. The bandwidth issue comes up when you shift your focus away from efficiency gains. The 890M has 33% more cores, but the gains in turbo mode aren't even 30% on average. I understand that at the end of the day it's a desktop architecture at its core, and since APUs are not a big factor on desktops, the iGPU component in these APUs is much smaller than the CPU. No Infinity Cache, small memory bus, etc...
0:17 Get your stuff right, man, the HX 370 ALWAYS has 32GB of RAM!!! 4:46 The 890M uses 16 CUs!!! What TF are you presenting us? Or are you really that bad at reading technical specifications? I mean, your tests are nice and fine and everything, but when I can't be sure what you are actually testing, the value is lost. OK, so 8:30: it is an HX 370 = 12 cores/24 threads... And this chip ALWAYS has a 16 CU iGPU, the Radeon 890M, and ALWAYS has 32GB of RAM. So yeah, it would be nice to make a correction or at least a pinned comment.
1. Yes, you are right, it's 16 CUs of course. I mixed that up. Sorry. 2. No, you are wrong; it does not always have 32GB, as this laptop has 24GB... Look it up. 3. Watch your language or get banned from the channel. Last warning.
@@Hubwood Don't give me warnings; rather, give me an example of an HX 370 with only 24GB (because it doesn't exist...). Only the HX 365 has 24GB. But hey, we are 50% closer to perfection... (50%: you admitted 50% of my corrections :P) So yeah, I'd be glad to watch you next time as well.
I love how power efficient these chips are. 28 watts for this performance is crazy. Hopefully RDNA 4 apus will be the ultimate upgrade for handhelds
even crazier at 16-20W tbh... :)
Let's add the fact that the Ally X got an 80 Wh battery, which is insane, and by the end of the year there should be new handhelds with 100 Wh batteries; so even a 35 W turbo mode would be possible in handheld mode.
The RTX A2000 is more powerful and has DLSS and better RT.
@@gejamugamlatsoomanam7716 Nope, and it has terrible power efficiency.
@lechistanskiswit320 It's a 70 W TDP and equal to a desktop RTX 3050.
Next gen mobile Ryzen hype!
Wow, RDNA 3.5 is so good. RDNA 4 will be crazy. I don't need anything past PS4-level performance; can't wait.
Intel Arc has also improved significantly with their Ultra chips.
@@X862go When RDNA 4 releases, you'll say that you'll wait for RDNA 5.
That's consumerism in a nutshell...
Yes, of course, but the temps make me think a little. 10° and more for 4-5 fps.
10% more powerful with 10W less power
YES PLEASE
But not for 30% higher price.
It's not more than 10% in most of these cases.
I would rather have the option to get 20% better performance even at 50% more power.
One issue here: the laptop has a far greater thermal mass that can handle far greater heat before throttling. So it's likely not comparable to a handheld, even at the same wattage. Especially relevant when considering running short benchmarks.
The performance increase is the same at equal wattage; at 15 W, which is usually the maximum you can run with reasonable battery life, it should show the same percentage performance increase.
Thanks for the comparisons! 👍
You're welcome 😌
Great performance for RDNA 3.5, but I'm hyped for RDNA 4; it's going to be a breakthrough because we're finally getting a 256-bit bus.
Strix Halo is still using RDNA 3.5, but yes, it is a 256-bit bus.
Great as always! Thx Hub!
I've been using laptops for 20 years, since I had an HP Pavilion with Intel Pentium M (Centrino) in 2004.
Always Intel until 2021 when I switched to AMD APUs with Ryzen 7 5800h.
I am amazed by its compromise between power and efficiency, and the Radeon Vega 8 iGPU, although obsolete compared to the 890M, still offers plenty of gaming with very little power consumption.
Personally, I can't conceive of a laptop PC without an AMD APU...
I'm not interested in ARM PCs at the moment. I think AMD can win with Ryzen.
It's not bad, but it's more of an evolution than a revolution. After watching multiple reviews, it looks like the biggest improvements are on the CPU side, and the 365 is overall better value than the 370 given how ridiculously efficient it is while still offering good CPU performance. By all means, the iGPU improvements are not bad given that RDNA 3.5 is mostly a bug fix; it will be interesting to see how Medusa Point performs in around a year and a half with RDNA 4.
I agree. (Also the CPUs are insane really)
I think Medusa will still feature RDNA 3.5 save for improvements here and there. RDNA4 is meant to fight Nvidia's architecture.
@@PurpleWarlock Medusa will feature Zen 6 and RDNA 4.
From the handheld POV, it's safe to say they went too hard on the CPU side and too light on the GPU.
Plus, if you take a real look at the iGPUs that made a big jump in the past, it was almost always alongside a new DDR standard.
The 680M has the same GPU power as the RX 6400, yet it loses to it by a lot.
The 780M barely improved over the 680M but has 2x the compute power (they went dual-issue shader).
The 890M bumps the spec to 16 CUs, the same as the 6500 XT but on a much newer architecture, yet from the numbers it looks really close.
The 680M/780M had 100 GB/s; the new 780M/890M have 120 GB/s, both shared with the CPU.
The RX 6400 has 128 GB/s and the 6500 XT has 144 GB/s, and they have it all to themselves... (and they have 16 MB of Infinity Cache.)
I think the next big jump will come when:
1. we move to DDR6, or
2. we get more than a 128-bit memory interface, or
3. AMD finally adds Infinity Cache / 3D V-Cache to an APU.
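To put rough numbers on that bandwidth argument, here's a quick sketch of the peak-bandwidth math; the memory speeds and bus widths are the commonly quoted configurations for these parts, assumed rather than measured:

```python
# Rough theoretical-bandwidth math behind the numbers above:
# bandwidth (GB/s) = transfer rate (MT/s) * bus width (bytes) / 1000

def peak_bandwidth_gbs(mt_per_s: int, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s for a given transfer rate and bus width."""
    return mt_per_s * (bus_bits // 8) / 1000

# iGPUs share a 128-bit LPDDR bus with the CPU:
print(peak_bandwidth_gbs(6400, 128))   # LPDDR5-6400  -> 102.4 GB/s (680M-class)
print(peak_bandwidth_gbs(7500, 128))   # LPDDR5X-7500 -> 120.0 GB/s (890M-class)

# Small dGPUs get a narrower but dedicated GDDR6 bus:
print(peak_bandwidth_gbs(16000, 64))   # RX 6400 (16 Gbps x 64-bit) -> 128.0 GB/s
```

Which is the point: even the newer iGPU's bus tops out below a bottom-tier dGPU's dedicated one, and the iGPU has to share it with 12 CPU cores on top.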
Everyone is waiting for the 40 Compute Unit APU. It's possible, so... why not put the RX 7900 XTX (96 CUs) into the CPU?
Yes, the multi-chiplet package gets bigger, and no one cares. It's just 2x the CPU size but would combine the best CPU with the best GPU.
Lower the frequency to half and power goes down by up to 90% (most efficient spot on the voltage curve). Bandwidth isn't an issue; instead of dual-channel, they can support octa-channel.
If they finally make the best CPU+GPU fusion, I will buy it. Why? It's smaller, it's more efficient, the VRAM is settable. It suits all devices.
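On the "half the frequency, 90% less power" claim: dynamic power scales roughly as P ∝ f·V², so the saving depends heavily on how far the voltage can drop with the clock. A sketch with made-up illustrative voltages (not real silicon data):

```python
# Illustrative sketch of why downclocking saves so much power.
# Dynamic power scales roughly as P ~ f * V^2; lower clocks allow a
# lower voltage, so power falls much faster than frequency does.

def relative_power(f_ratio: float, v_ratio: float) -> float:
    """Dynamic power relative to baseline: P/P0 = (f/f0) * (V/V0)^2."""
    return f_ratio * v_ratio ** 2

# Halving the clock and dropping voltage from e.g. 1.0 V to 0.70 V:
print(relative_power(0.5, 0.70))  # ~0.245, i.e. roughly a 75% power saving
```

So a ~75% saving at half clock is plausible with that voltage drop; the 90% figure would need an even steeper voltage reduction, and leakage power doesn't scale down this nicely either.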
Nice work man
45 watts seems to be the sweet spot for this APU; decent uplift IMHO.
Oh prefect 24gb vs 24gtb ETA move over.
the best GPU benchmark content creator , good job mate
Thanks mate 🙂🤞
It's looking pretty solid to me!
Either it's a ~20% more efficient chip at the same performance, for example when bandwidth limited, or it's 20-30% faster in a bunch of cases.
I call that a win.
Buddy, it's an APU; there is no scenario where it's not bandwidth limited.
@@juice7661 The Radeon 610M would like to have a word with you...
@@ChuwiEnjoyer Look up the 610M on TechPowerUp, and then the 780M.
@@juice7661 What am I supposed to find there?
The 610M is entirely compute limited, unless you somehow put it on a 32-bit channel like in Mendocino.
21,000 in Cinebench R23 is insane 😮. 👌
With driver updates, this new-gen APU is going to be a beast.
True, it's expected to reach optimum performance once the desktop version goes on sale, although it usually takes longer for Radeon; they usually need around 18 months to reach fully stable, optimum performance.
@@iikatinggangsengii2471 Yeah, the best thing about it is the power efficiency. So laptops and handhelds are going to be much more powerful and less battery-draining.
Remember, this one (the 890M) is on the HX-version Ryzen 9 (HX 370 and above)... The Ryzen AI 9 365 uses the 880M (12 CUs).
The 890M is closer to the 2050 than it is to the 3050.
Oh, yeah, I wouldn't be surprised if Valve were to use a similar iGPU in the next Steam Deck, but of course they would do something more like 6 cores/12 threads instead of the 8 cores/16 threads of other systems, since the first Steam Deck models are 4 cores/8 threads with iGPU performance between the Radeon 660M and 680M, making it more like a "670M".
I have done a test between my Deck OLED and my mini PC, which uses a Ryzen 9 6900HX, also running Bazzite on the mini PC to compare performance better at 720p, and my mini PC's performance is only slightly better when playing games that stress the GPU more than the CPU.
I was really hoping for that 33% increase in overall performance. Hope new drivers help it reach that RTX 3050 mobile tier.
It won't until they use faster RAM. 8533 MT/s should be standard for these iGPUs, not 7500 MT/s.
No, Asus locked the whole chip at 33 W.
What makes you think it's locked at 33 W? I definitely see better performance above 33 W 🤔
@@Hubwood I think that is the Zenbook.
This is the vivobook, right?
@@PurpleWarlock yes ;)
Great review, please provide a summary page for all the games, with an average speedup at 20w, 30w, and 50w.
As much as I'd really love to do that, these pages are a hell of a lot of work to make. My time is very limited, but I understand why you'd want that :( Sorry!
(Running this channel alongside a full-time job and a family with 2 kids)
Thanks for ur efforts @Hubwood
The 128-bit RAM bus shared between CPU and GPU is the bottleneck, and clearly why it's not a straight 2x performance bump.
Crazy performance from the 890M, but it seems like the performance gap is not as expected; or did driver issues cause that?
Hm, well... it always comes down to cherry-picking. Some games gain more... some less.
As you can see, at 20 W the gain is pretty high. It really depends. I don't know how much drivers can potentially improve things, though AMD is pretty solid with driver updates over time. ("Fine wine" policy)
Don't forget there are 12 cores also consuming the power envelope.
@@Hubwood Considering the 780M was a trash improvement over the 680M, this is more of a fix than an improvement. Plus, it's 4 more CUs; that's really where your performance comes from. The improvements were just fixes to issues in the efficiency curves.
One of the reviews I have seen showed a nice uplift from locking the game to only the 4 P-cores... (P-core CCX to E-core CCX latency is awful on that chip.)
I would love to see what happens if you lock the game to only the 8 E-cores, only the 4 P-cores, and then the full chip at 20/30 W :)
Oh, I forgot: great video :)
Currently looking into that with Process Lasso...
@@Hubwood It will be amazing to see:
1. 8 slow cores vs 4 fast cores at fairly low FPS with an iGPU in 2024.
2. how badly the latency actually affects games (interesting for both iGPU and dGPU).
3. whether disabling some cores will help the GPU reach higher clocks.
4. It looks like testing this now will be the groundwork for testing Lunar Lake soon; I feel like it will have the same problems.
@@gametime4316 It's due to the resource-allocation system not being reasonable, but after updating the BIOS, the problem of E-cores being used to play games was fixed.
I'm thinking of returning the Ally X. I wish Asus had released the Ally X with an upgraded CPU/APU. The performance difference is noticeable.
Do it, and wait for Strix Halo to come out in a handheld.
Sad that you weren't able to segment the video with timestamps, unlike everyone else doing benchmark videos.
So do you recommend the Ally X or is the difference worth waiting for a next gen successor?
I think gradually decreasing the TDP alongside moderate speed boosts is the right way to go. As someone who bought his first PC in 2002, it's ridiculous that high-end gaming and hardware YouTube has become somewhat of a climate technology scene (which nobody is interested in). And with modern energy prices, the next hardware upgrade is basically 30-50% cheaper if you can save 100 watts.
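To put a number on that energy-cost argument, a quick sketch; the electricity price (0.35 EUR/kWh, roughly German household rates) and daily gaming hours are my assumptions:

```python
# Rough yearly savings from drawing 100 W less at the wall.
# Electricity price and usage hours are illustrative assumptions.

def yearly_savings_eur(watts_saved: float, hours_per_day: float,
                       eur_per_kwh: float = 0.35) -> float:
    """Money saved per year by reducing draw by watts_saved during use."""
    return watts_saved / 1000 * hours_per_day * 365 * eur_per_kwh

print(round(yearly_savings_eur(100, 4)))  # 4 h gaming/day -> ~51 EUR/year
```

Over a typical 3-4 year upgrade cycle that's 150-200 EUR, which is indeed a meaningful chunk of a mid-range GPU's price.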
Yeah, I really hate that; especially on desktops it's more and more and more watts.
I love Hardware that is efficient.
🤔 I am getting almost 3500 on my Ally X (3487 to be exact) in Time Spy: Graphics 3208 and CPU 6888.
We went from GTX 1650 performance to RTX 3050 performance in one generation. That's insane. Strix Halo, I'm guessing, will be about 4050 speeds for the low-end version. Likely to be called the "990M". Then we'll have the monster unbinned 4070-class one. I'm guessing it'll be called something like the "995M".
Nah, not really! Watch out: this is the low-powered 35 W version! A regular 3050 mobile is another 30% faster.
Will Strix Halo come to laptops?
funnily enough, the 35W 3050 performs within single-digit percentage of the GTX 1650 Mobile GDDR6 (which is a 50W part)
Thanks for the video.
I really like the improved performance for lower power consumption. Very promising for when the drivers are optimized and perhaps when there are even better systems with it.
It looks like the 890M could reach the performance of a 1650 Super at times?
As for the 40 CU APU... I don't know when it is coming, but if it is as good as it could be, why buy a lower-tier GPU? Perhaps that will force prices for future GPUs down a bit too, as they are still ridiculous for the higher tiers.
I think it can reach 1070 levels with overclocking, which is quite an achievement from AMD: such graphics power in an iGPU.
Nice tests..!
Strix Point is like a pre-statement before Strix Halo really says "Hello.."
Let's see how much damage a 40 CU iGPU does to the discrete GPU world.
I agree! It’s going to be the M4 max vs the AMD strix halo 40CU.
I mean that’s why AMD is making that because that’s going to be the competition.
gains may not be that high as even now the 780m and 890m are begging for faster memory.
@@GlobalWave1 M3 max is outputting 4070 laptop performance so that strix halo igpu can only compete if it can output a 70 tier laptop dgpu.
@@fidelisitor8953 I partially agree! In many tests the M3 Max performed between a 4070 and 4080 mobile, and in some games came within about 6% of a 4080. I guess it really depends on the task.
It's going to be an interesting performance/efficiency battle between the M4 Max and a maxed-out Strix Halo.
If Apple were smart, they'd add about 10 more GPU cores to the Max so it can at least catch up to a 4090 mobile. Who knows how powerful the 50 series will be on mobile next year.
It would be great news if AMD Strix Halo made its way to the desktop PC realm as a Ryzen G part; maybe it would crush the RTX 4050, 4060, up to the 4070, in terms of portability.
Hmm, that's a bit disappointing, isn't it? I might be a bit biased because I've got a mini PC with a 780M, but I still expected a little more. On paper, 15-25% more sounds great, but if that amounts to 5 to 8 fps... I don't know.
The difference is way too small. It's not worth buying the new CPU if you have a 780M; not worth buying new at all. Just wait for the rumored monster APU.
Strix Halo or one of the Medusa Point ones?
Let's go!
Where'd you buy the Asus Vivobook S14? I can't find it anywhere on the market.
Germany...
@@Hubwood Where? In a local or online shop? What shop is it?
@@leothehuman_9476 Here's a list of shops that sell it: geizhals.de/asus-vivobook-s14-oled-m5406wa-qd126ws-90nb14p1-m007d0-a3234906.html
Could you run tests at low wattage (10-15 W) to see the performance difference?
Will you also be comparing them with the Ultra 7?
I'm waiting for this processor in a desktop version; if you raise the memory frequency, the result will be another +25%.
Where are the RX 7000M GPUs?
The 20 W benchmarks are pretty good compared to the previous gen. Setting the wattage manually above that is rather meh; as you said, there may be other things going on as well, and the 35 W mode may be better than manually setting it to 30 W. The full-speed test isn't bad at all: with ~16% less SoC consumption on average, it can perform as well as or better than the 780M.
Now, I have a theory about why the 890M performs as it does. Yes, there is the whole bandwidth issue. That is one factor, but at this point it's a hardware limitation. Then there are drivers and BIOS. I am leaning towards BIOS simply due to one thing: the Zen 5c cores, or "efficiency cores". I think AMD is having the same issue Intel had when it released its 12th-gen CPUs. If you set the game to prioritize the 4 performance Zen 5 cores, it won't have to suffer and stall when it lands on the slower cores. Maybe use something like Process Lasso, or something easier like Windows Task Manager: set the game's affinity to CPUs 0-3.
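That affinity experiment can be sketched in a few lines of Python; `pin_to_cores` is a hypothetical helper using the Linux-only stdlib API (on Windows you'd use Process Lasso or Task Manager as described above), and the assumption that logical CPUs 0-3 map to the fast Zen 5 cluster should be checked against your actual topology (e.g. with `lscpu`):

```python
# Sketch of pinning a process to a subset of cores (Linux-only stdlib API).
# Which logical CPUs belong to the Zen 5 vs Zen 5c cluster is an
# assumption here - verify your CPU topology before relying on it.
import os

def pin_to_cores(pid: int, wanted: set[int]) -> set[int]:
    """Restrict a process to the given logical CPUs (intersected with
    what is actually available) and return the resulting mask."""
    target = wanted & os.sched_getaffinity(pid)
    os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)

# Pin this process (pid 0 = self) to the hypothetical 'fast cluster':
print(sorted(pin_to_cores(0, {0, 1, 2, 3})))
```

In practice you'd pass the game's PID instead of 0, then compare frame times with the mask set to the Zen 5 cores, the Zen 5c cores, and the full chip.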
I agree with the bios and driver optimisation needed, but also don't go call Zen 5c efficiency cores as AMD isn't happy with people calling it that, as it's very different to Intel's approach. Zen 5 vs Zen 5c is only cache size and actual size difference, and with Zen 5c being on more advanced 3nm vs Zen 5 4nm. That's why the "c" stands for compact. Otherwise exact same architecture unlike Intel's E-Cores which have a big performance difference to the P-Cores.
If AMD does a "Z2 Extreme" based on Zen 5, I see them shaving off more power by taking out the NPU and only using Zen 5c cores. You don't need more than 8c/16t in a handheld, as more cores means more power draw, and efficiency is king for handhelds.
Another point to applaud Hubwood for is testing with the power allocation increased by 5W. The whole point of a Z1 Extreme over a Ryzen 7 7840U, both being APUs, is that if we hypothetically feed both 15W, a Z1 Extreme might split it 5W+10W as opposed to 7.5W+7.5W between CPU and GPU. So with the 35W test on the AI 9 HX 370 vs the 30W Z1 Extreme, at the end of the day the 890M is probably getting the same power budget as the 780M.
Yeah, I'm currently playing around with Process Lasso, allocating either the Zen 5 or the Zen 5c cores to a game, and it actually seems to make a small but measurable difference. More on that in a few days.
@@Silversurfer9821 I know 😅. I used the term as it makes more sense to ppl, hence the quotes. ISA compatible and similar IPC.
@@Hubwood Cool, thanks for testing! Whether or not it makes a difference, not everyone has tested or taken notice of this. Also, it would be really informative
How does the 890m compare to the Intel Arc IGPU?
I have this 780M and an Nvidia 4060 in my laptop; I was so confused about why I had two
How many want to hold off buying a unit for 6-plus months, maybe longer?
It's better to wait it out until Strix Halo chips or the new custom handheld chips enter the market... they are going to have at least 70-80% more CUs
The problem is:
1. There is always something better in the future you could wait for (hardware wise)
2. I'm assuming the Strix Halo, if as fast as they claim, will be VERY expensive and laptops will cost as much as gaming laptops with dedicated GPUs of the same price category...
@@Hubwood I agree with the 1st point, but my reason to wait is not the next best thing; it's hardware that is capable of running the AAA games at 60 fps on medium settings with sustained battery performance, which is just not there in the present market!
A focused chip from AMD can reduce power requirements and increase performance drastically, especially for the Steam Deck OLED. I could pay the $200-300 extra, but buying a product in the later part of its life cycle with (in my view) unsatisfactory performance doesn't sit right with me!
@@Baddmann1931 Fair enough ;)
I'm hoping we get desktop APUs that can match a GTX 1080 one day; thing is, it's seeming very unlikely right now...
It's actually kinda guaranteed that we will get there not that far in the future....
@@Hubwood Crossing my fingers, as my old PC's mobo died and I might just grab an upgrade kit and run these APUs for fun. I mean, my second PC is a Ryzen 5 5500 + RX 6700 XT combo, so yeah.
Can you measure the laptop power consumption at idle with screen on/off?
You mean plugged in or on battery?
@@Hubwood plugged in.
or both, I guess.
@@807800 I could try plugged in later, but I guess my wattmeter isn't precise at such low wattages. On battery it is around 3-5W at idle if I'm using the Silent and Battery Saving modes, resulting in 16-24 hours (still figuring out the precise duration, as the system started a screen saver to protect the OLED from burn-in, so I need to do it again). Watching YouTube at 50% brightness gives an insane ~11 hours.
@@Hubwood That really looks promising!
Thank you for the reply!
@@807800 it is. Crazy CPU really. Absolutely love it.
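For what it's worth, those runtime figures are consistent with simple watt-hour arithmetic. The sketch below assumes a roughly 75Wh pack; the actual capacity is an assumption, as it isn't stated in the thread.

```python
ASSUMED_CAPACITY_WH = 75.0  # assumption: pack size is not stated in the thread

def runtime_hours(avg_draw_watts: float) -> float:
    """Ideal runtime ignoring conversion losses and battery wear."""
    return ASSUMED_CAPACITY_WH / avg_draw_watts

print(runtime_hours(5.0))        # 15.0 h at 5W idle (lower end of the 16-24h claim)
print(runtime_hours(3.0))        # 25.0 h at 3W idle (upper end)
print(ASSUMED_CAPACITY_WH / 11)  # ~6.8W average draw implied by 11h of YouTube
```

So an 11-hour video-playback figure implies only about 6-7W of total system draw, which is plausible for an OLED panel at 50% brightness.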
So all these results are from this Vivobook S14? What's the max TDP it reaches?
Yes, all from the Vivobook S14.
In maximum performance mode it reaches around 50W, like in the examples at the end of the video where I compare it with the Ryzen laptop.
@@Hubwood Awesome, can't wait to see what price this comes in at in my country. Do you think it's worth buying, or should I go for a cheaper version with a Ryzen 7 8845HS + RTX 3050 6GB like the Lenovo Yoga Pro 7? Just subbed. :D
@@mariussm7797 Really depends on your budget. But to be honest, I haven't been this excited about a laptop in a long time. It is absolutely amazing in my opinion: silent, ultra powerful, super lightweight, ultra long battery life, great display, amazing speakers, good build quality. Just stunned... I will try to make a review in about a week or so.
@@Hubwood I think you will get even better results with AMD Fluid Motion Frames 2. I've seen some people already using it on the new Ally X.
Of course, that's an option as well.
APU performance with notebook processors isn't equal to that of handhelds.
No, it's equal when the wattage is the same. The gaming handhelds' Z1 Extreme is basically a laptop 7840HS. If you compare them at the same wattage, they perform identically.
@@Hubwood Yep, but handhelds suffer more often from CPU/GPU throttling because of the lack of cooling, and notebooks are generally better optimized. Moreover, I don't think they will release a handheld with the AMD Ryzen AI 9 HX 370, just a dumbed-down version of it.
Yeah, that's why you can usually (not always) run these iGPUs at higher wattages on laptops.
But again, at the same wattage they are basically equal :) (that's what the initial post and my answer were about 😅)
Surely a 780M in the Ally can't compete with a 780M in a 60W Ryzen 9... (even though the difference might be marginal even in that case)
So basically this is a new era where iGPUs outperform my GTX 750 Ti?
It's on GTX 1050 Ti level. So yes.
At 20W the performance difference is huge; at 30W it's disappointing. That's a shame.
Will this laptop (the Ryzen AI 9 HX 370 & Radeon 890M model) be able to handle architecture and design 3d modelling and 3d rendering software such as AutoCAD, 3D Studio Max, V-Ray and Lumion? Has anyone tried and how were the results?
Steam Deck has the chance to do a proper upgrade with RDNA 4
Yes I believe in 2026
Waiting for the Ryzen AI 9 365 (880M) vs 780M comparison
No point. Many people have already shown it's not worth the money. Like no improvement at all. You will get the same results at a cheaper price with the old Zenbook 14 8840HS.
@@mariussm7797 Isn't that because there's no driver update yet?
The 8840HS uses the same iGPU as the 7840HS
So 3 years and no improvement... so sad.
The only major difference between the 7840HS, 8840HS and the new 365 is AI-related. The 370 is the big thing to hunt.
It's 12 CUs vs 12 CUs, and there should be more power available for the iGPU since there are only 10 cores.
Everyone's gangsta until AMD pushes an 8- or 12-big-core HS chip out at 65W; I think that chip would even murder the previous Zen 4 desktop parts.
Possibly Intel...
Yeah, I'm sooo curious right now how Intel's answer, Lunar Lake, will turn out 😁
I'll find out in exactly 31 days from now.
The 890M is starved for memory bandwidth since it uses the same memory speed as the 780M; hence it doesn't reach its full potential.
It doesn't; it has a higher memory clock
@@iikatinggangsengii2471 I see 7500 MT/s for both
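The bandwidth figures quoted around this thread follow directly from transfer rate × bus width. A quick sketch, assuming the usual 128-bit memory bus on these APUs:

```python
def peak_bandwidth_gbs(transfer_mts: float, bus_bits: int = 128) -> float:
    """Peak memory bandwidth in GB/s: MT/s * (bus width in bytes) / 1000."""
    return transfer_mts * (bus_bits / 8) / 1000

print(peak_bandwidth_gbs(6400))  # 102.4 GB/s -> the ~100 GB/s quoted for the 680M/780M era
print(peak_bandwidth_gbs(7500))  # 120.0 GB/s -> the 890M figure, still shared with the CPU
```

So LPDDR5-6400 vs LPDDR5X-7500 is where the ~100 GB/s vs 120 GB/s numbers come from; a dGPU like the 6500 XT gets a similar raw figure all to itself, plus Infinity Cache on top.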
Can you try latest resident evil game with this iGPU ?
Don't have them, sorry....
@@Hubwood Sailing the seas is always an option; do you actually buy all the games you test?
@@leothehuman_9476 yes, of course I do. 🤷
(No sarcasm here)
Even if I did, usually not being able to update them or play them online makes them a no-go for testing anyhow. And even if that were possible, spending so much time getting each game running... sorry, I don't have that time 😬
Why do I see 30 fps and 32 fps average on the monitoring overlay, but you tell me it's 31 and 40? 3:34
Look again. At the beginning it's 30 and 32, but later in the benchmark run it's 31 and 40.
This has to be an 880M since you have 24GB RAM. The 890M is in the 32GB RAM model.
No. See:
ibb.co/Gcrsfxp
I should wait even longer
This is insane.
The new chip is way more power efficient, and we've already got the Ally X with an 80Wh battery; by the end of the year we should have 90-100Wh batteries. So even a 35W turbo mode in handheld form would be doable, and that should still give us 2+ hours of gameplay.
Those 7-10 extra fps at native resolution translate to 15-20 more fps using FSR. FSR is getting better and better, and Frame Generation 2.0 will also be more consistent.
An 890M device with 32GB RAM/VRAM, a 99.9Wh battery and an OLED with VRR is not far away!
Insane? A 15% gain is insane to you? Man... you have such low expectations. Laptops with the 890M cost €1300+, and for that money you can get an RTX 4060 laptop. I am not impressed at all.
What's the max TGP of the laptop?
55-65W for the APU
This is an interesting apu
What is that new 40-CU GPU APU?
Not yet released. Will probably take a few more months.
Strix Halo was leaked by Tom from the MLID channel. IIRC it will feature only Zen 5 cores (up to 16) and a more powerful RDNA 3.5 iGPU. Of course, Halo will require more power.
The high-end model will be 16 cores and a 40 CU iGPU (aiming for RTX 4070 mobile performance). Not too shabby, but pricey no doubt.
@@PurpleWarlock very kewlll 😎🆒😎🆒😎
It will be in the Flow Z13, comparable to a 4070.
Yo, 40 CUs would be great; that could indeed run ray tracing.
Yeah, but the price... the cheapest R9 laptop costs 2x more than 4060 laptops.
It's not a gaming laptop. Gaming laptops will get you better value if gaming is your main concern.
So what can it do? No CUDA, no compute, no AI training, no graphics... only a marginally faster CPU.
@@panjak323 You could just check out my review, which I published a few seconds ago.
7940HS is R9 not R7 🚓🚔
Just a little step... nothing big here.
It's NOT that impressive. The only impressive thing is that they got its power efficiency fixed, and that's not even impressive; that's just a relief. This is what the 780M could have, and should have, been: 33% more CUs, 80% of the power, 25% more performance, while still on the same node. That's a fix, especially considering the 780M is only about 5-10% faster than the 680M, which was an entire node behind it. That was a pathetic increase, plain and simple.
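For reference, the comment's own numbers (+25% performance at 80% of the power) imply the perf-per-watt change:

```python
perf_gain = 1.25    # +25% performance, per the comment above
power_ratio = 0.80  # at 80% of the 780M's power

perf_per_watt = perf_gain / power_ratio  # 1.5625
print(f"{(perf_per_watt - 1) * 100:.2f}% better perf/W")  # 56.25% better perf/W
```

So the efficiency gain is substantial even if the raw throughput gain at fixed high wattage is not, which matches the 20W vs 30W results discussed elsewhere in the thread.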
Totally pathetic. Same thing with the RDNA 3 discrete GPUs: newer node, new architecture and an increased bus size, with a measly 35% performance increase. The 6nm 7600 had an increase of less than 5% over the 6600 XT. AMD is trash and has been for such a long time. I hate them. Their incompetence and inability to bring competition are the reasons for Nvidia's skyrocketing prices. AMD fanboys blame it on "Nvidia fanboys" like myself for being willing to buy Nvidia, while AMD is just as terrible a value with their horrible ray tracing performance.
I mean they don't have much to work with
I guess my comment got deleted. AMD and Intel are incompetent as hell these days. I get that it's getting tougher to squeeze performance out of these chips, especially on the same node, but other chip designers manage to do it. Look at Nvidia with Maxwell and Turing, or more recently the Snapdragon 8 Gen 3, which had massive performance gains on the same node. I mean, what the heck have they been doing the past 4 years? RDNA 3 hardly performs better than RDNA 2. Same thing with Zen 5. Actually, now that I think about it, AMD, more specifically their GPU division, has been extremely meek starting with the 290X almost 10 years ago.
Wow thanks professor
These iGPUs are pointless without dedicated GDDR memory.
Same old, same old... bandwidth starved.
Nah, well, up to 50% more at 20W is amazing for future gaming handhelds :)
@@Hubwood I'm not talking about efficiency. The bandwidth issue comes up when you shift your focus away from efficiency gains. The 890M has 33% more cores, but the gains in turbo mode aren't even 30% on average.
I understand that at the end of the day it's a desktop architecture at its core, and since APUs are not a big factor in desktops, the iGPU component in these APUs is much smaller than the CPU. No Infinity Cache, small memory bus, etc...
@@Deeptesh97 All true, but again: great improvement at lower wattages for future gaming handhelds 🤷
0:17 Get your facts straight, man; the HX 370 ALWAYS has 32GB of RAM!!!
4:46 The 890M uses 16 CUs!!! What are you presenting to us? Or are you really that bad at reading technical specifications?
I mean, your tests are nice and fine and everything, but when I can't be sure what you are testing in reality, the value is lost.
OK, so at 8:30 it's an HX 370 = 12 cores/24 threads... And this chip ALWAYS has a 16 CU iGPU (the Radeon 890M) and ALWAYS has 32GB of RAM. So yeah, it would be nice to make a correction or at least a pinned comment.
1. Yes you are right, it's 16 CUs of course. I mixed that up. Sorry.
2. No, you are wrong; it does not always have 32GB, as this laptop has 24GB... Look it up.
3. Watch your language or get banned from the channel. Last warning.
@@Hubwood Don't give me warnings; rather, give me an example of an HX 370 with only 24GB (because it doesn't exist...). Only the HX 365 has 24GB.
But hey, we are 50% closer to perfection... (50%, since you admitted 50% of my corrections :P) So yeah, I'd be glad to watch you next time as well.
@@AdalbertSchneider_ ibb.co/Gcrsfxp
@@AdalbertSchneider_ So?...