If Lunar Lake can really deliver what they showed at the announcement yesterday, then this is Intel's Ryzen moment for this decade.
Always good being sceptical and testing yourself 😅
Reminder: AMD's Zen started out on a not-so-advanced 14nm node
and only started enjoying the TSMC buff at Zen 2,
while Intel has already included the benefit of switching to a TSMC node 😂
I hope this is true, but based on the track record i am doubting it
Apple has the fastest iGPU: the M3 Max 40c is almost on par with a 150W 4080M.
Around that 130W 4080M level.
Deal with it, Windows sheep, and stop hating on what's objectively amazing
@@PKperformanceEU 1. You are the one hating ("Windows sheep")
2. The Apple chip IS impressive indeed! No doubt about that. But it costs more than two 4090 desktop GPUs... We're talking $5000. With display latencies of 80ms. Aren't we?
That iGPU performance is fantastic even if it's nitpicked.
From what I can see, Intel seems to have a very nice lead with raytracing on iGPUs, and with the "infinite" VRAM on an APU, it could be pretty awesome news for enthusiasts!
That being said, we've got to wait for it to be tested and validated, but if they're in the general ballpark and it's not overly expensive... I'll take it!
Yeah, looking forward to testing and comparing it myself. I think the 45 games aren't necessarily cherry-picked, as they cover many popular ones :)
@@Hubwood that is already a promising start! The moment I heard RT was on, though, I wondered if they can match Ryzen in raster or just beat them in RT. A bit like Nvidia vs AMD.
I'll be keeping my eye out for your videos! Hopefully you can show both ends of the performance spectrum; both the Ultra 5 and 9 are intriguing
@@roqeyt3566 according to the guy at the booth with F1 24 running on both systems, the difference is even a bit bigger without RT. They showed it this way only to demonstrate its RT capability in combination with XeSS
@@roqeyt3566 iGPU raytracing is probably a dumb idea; even a 3050 or 3060 can't do serious RT. It is a fun tech demo and not much more
45 games isn't nitpicking.
That Dota2 demo was very impressive.
With the RAM being on package, surely it must be possible to achieve some insane RAM speeds. We've already seen people hit >10,000 MT/s on DIMMs with OC. Allowing memory OC on Lunar Lake would be pretty cool, especially for the iGPU performance, but I assume that won't be possible
I've heard from someone outside of Intel who personally tested the graphics... that it seems to be real.
It has to be. Intel cannot get caught lying about them now after their recent fiasco. That would be the final nail in the coffin.
This makes me very excited for the future. Next year the Celestial Arc Xe3 iGPU will be manufactured on Intel 18A. Intel 18A seems to be on track and very impressive.
I don't know how long we have to wait, but I want gaming at native 1080p 120fps on a handheld device that lasts a whole day on battery.
@@EnochGitongaKimathi Xe3 with 12 Xe cores will be manufactured on TSMC N3E; the compute tile is the one using 18A. N3E should be great especially for GPUs, there is nothing to worry about.
@@Neo-a7 Intel 18A is miles better than TSMC N3E
@@EnochGitongaKimathi Intel recently downgraded 18A by 10 percent. It's an N3 competitor, and it's unlikely to be miles better. For the GPU tile, 18A was never planned anyway.
That interview was fantastic! Hopefully more opportunities like this
Thanks man :)
Impressive. We'll have to see what AMD's upcoming Strix Halo can offer up to the fight 👍
Nothing much. iGPUs are bandwidth-constrained, so the bottleneck is memory: LPDDR5 8533 is reasonably available, but more like 7500 MT/s will be the norm. The number of cores does not help iGPUs much...
@@lubossoltes321 Strix Halo is supposed to have higher bandwidth...
@@lubossoltes321 We'll find out in a few months. I happen to think there will be a very noticeable performance jump over Strix Point. The number of CUs has more than doubled (Halo = 40 CUs). We can see how that increases performance in Hubwood's video comparing the 760M (8 CUs) vs the 780M (12 CUs). The memory bandwidth of Strix Halo will also double with the 256-bit bus.
@@Hubwood What about Arrow Lake Halo, is that still coming?
@@lubossoltes321 I beg to differ. Compare the lower M3 Pro with fewer than 20 cores to the M3 Max with 40 cores. The difference in GPU performance is insane. Yes, of course there is more CPU and bandwidth in the Max, but core count is a big factor.
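To put the bandwidth argument in this thread into numbers, here is a rough back-of-envelope sketch. The 128-bit and 256-bit bus widths and the Strix Halo memory speed are assumptions based on the figures quoted above, not confirmed specs:

```python
def peak_bandwidth_gbs(mt_per_s: int, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfers per second times bytes per transfer."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

# Lunar Lake: LPDDR5X-8533 on an assumed 128-bit bus
print(round(peak_bandwidth_gbs(8533, 128), 1))  # ~136.5 GB/s
# Strix Halo: assumed LPDDR5X-8000 on the rumored 256-bit bus
print(round(peak_bandwidth_gbs(8000, 256), 1))  # 256.0 GB/s
```

Under these assumptions, doubling the bus width roughly doubles peak bandwidth, which is why the 256-bit rumor matters at least as much for iGPU scaling as the CU count.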
But the price seems expensive, which is likely caused by TSMC N3 cost. The Zen 5 and Qualcomm versions of the HP OmniBook start at 1149, while the Lunar Lake version starts at 1499.
If this is true, then it's not great for Intel
The OmniBook with Zen 5 was 1449 for starters. Bet that's the 365. The HX 370 models will cost more for sure. 😢
There were listings in France for 226V and 258V (i5 and i7 equivalents) laptops costing 996 and 1359 euros post-VAT. The pricing seems to be mostly in line with other manufacturers.
Core Ultra 200 series laptops will start much cheaper than 1500. I think 999 dollars is realistic in the US.
@@PurpleWarlock starts at 1149 for Zen 5 and Qualcomm, as shown on HP's website
Wow. Really looking forward to the reviews.
Did you find out anything more about the future of the i3s?
Okay, but I hope it's not gonna still be a freaking furnace on our laps. It had better be silent and long-lasting like Apple's M-series laptops.
I believe it will be close to an M3 in efficiency and also in performance. I doubt it will be a furnace this time, given the very limited TDP
We'll see about that. Can't trust their claims till we've seen tests done by regular users
The Lunar Lake laptops are out… when will you be testing them? Eagerly waiting
I'm most interested in the new MSI Claw 8 with this chipset, although I suspect that it will get one of the lower-tier chips with only 7 Xe2 cores instead of the top-of-the-range 8-core parts used in these benchmark results.
We will see soon :)
The Ally and other premium handhelds have the 780m. The original Claw had the full Meteor Lake iGPU. The Claw 8 will almost certainly have the 140V.
4 days ago I bought a notebook with the Ultra 5 125H, but I have some questions about the NPU AI TOPS, because I don't know whether 31-39 TOPS is sufficient for use with Copilot. With respect to the GPU of the 200 series, it seems extremely good, and that is a big change in terms of efficiency and of results in FPS and in editing or decoding videos. A lot of money, but it is worth it with all this equipment, plus support for an external graphics card
Off topic: an eGPU dock for a Mac mini M2 Pro via Thunderbolt 4... and (sadly) running Windows on the Mac... for the drivers... to then run an RTX 4090 card? Ever thought about creating a video on such a combo?
Let's not forget that the first gen looked better on paper as well compared to AMD, but in reality it wasn't, thanks to the crappy Intel drivers
Note that XeSS "Performance" mode renders from a significantly lower native resolution than FSR or DLSS Performance modes....
To be honest, in F1 24 the XeSS looked waaaay better than FSR on the AMD system.
It's 2.3x vs 2.0x. It's not a huge difference in internal resolution, and since XeSS still looks better than FSR with the lower resolution, it doesn't matter.
@@ryanspencer6778 People should look at all of this data and compare it against the more mature drivers for this platform later this month. Intel has been cooking very hard on drivers just for Battlemage for over 2 years
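For reference, a quick sketch of what those per-axis scale factors mean for internal render resolution. The 2.3x and 2.0x figures are the ones quoted in this thread:

```python
def internal_res(out_w: int, out_h: int, scale: float) -> tuple:
    """Internal render resolution for a given per-axis upscaling factor."""
    return round(out_w / scale), round(out_h / scale)

# 1080p output: XeSS Performance (2.3x) vs FSR/DLSS Performance (2.0x)
print(internal_res(1920, 1080, 2.3))  # (835, 470)
print(internal_res(1920, 1080, 2.0))  # (960, 540)
```

So at 1080p output the XeSS Performance preset would render roughly 835x470 versus 960x540 for the 2.0x presets, which is the "significantly lower native resolution" being debated above.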
Will we see these on budget gaming laptops ?
No I don't think so. They are for thin and light ultrabooks. Budget gaming laptops still use dedicated GPUs
Great work!
Keep it up
Thanks for the video! Your videos feel more and more like a celebration ❤
As for Intel's new CPUs, I never really understood why manufacturers make such a big deal about "highest performance possible in a thin and light". How many people *actually* buy thin and light laptops for highest performance possible?
We need performance improvements in *gaming* laptops. Battery life in that segment has also been terrible so improving idle power consumption is very much needed there.
About RAM - it's a misconception that non-soldered RAM consumes a lot of power. In my current gaming laptop, sensors report just 0.13 W for each stick during idle and basic tasks like browsing, office work, etc., so just 1/4 of a watt combined. The CPU, motherboard, display, etc. consume way more power than RAM does. And CAMM2 will help upgradable RAM become even more efficient.
I'll pass your statement right to them and see what they will have to say
I love more performance. I also want good battery life
@@kartikpintu yeah, for some reason there is this stereotype that "gamers don't care about battery life". Just because we like gaming doesn't mean we don't also use our laptops for watching videos, office work, browsing, etc. The Ryzen H and HS series have excellent idle power consumption, but they are almost absent from mid-range to high-end gaming laptops. If they really wanted to, they'd make 10-hour battery life and high-end gaming performance when connected to the wall. But for some reason, they insist on using desktop HX chips instead.
Can't wait for those 4lbs, 0.7", 16" 16:10 1600p laptops to be coming with these things. 200FPS on Roblox sounds incredible, but I bet that they tested it on the Roblox Starter place and not a more intensive game like Titanic: S.O.S. V2
yea, they 100% just tested it on an empty baseplate. i doubt it can get such high FPS in a game like arsenal or jailbreak
I hope they will give you a sample unit 🤩
I'm sure they will. Question is when 😅
Yes, the CPU runs at 37 watts. Yes, the battery is 74 Wh. Yes, battery life is 20 hours. How is that possible?
20 hours is possible with the silent modes that use less power. These chips have been designed with 9W in mind!
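The claim is easy to sanity-check with the numbers quoted above:

```python
battery_wh = 74       # battery capacity quoted above
claimed_hours = 20    # claimed battery life

# Average whole-system draw required to hit the claim
avg_draw_w = battery_wh / claimed_hours
print(f"{avg_draw_w:.2f} W")  # 3.70 W
```

So the 20-hour figure implies an average whole-system draw of about 3.7 W, far below the 37 W peak; it can only hold in light-load scenarios like video playback, not under sustained load.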
3:05 why is there no 64GB alternative???
I can ask that for you later :)
These are CPUs for light notebooks. Intel is preparing Arrow Lake for higher-end machines.
Though Intel has a bad reputation, it's better to have competition; otherwise you get the crazy price points of Nvidia when they dominated the dedicated GPU market. Competition gives better pricing. Only Apple is the exception, lol; they build their own ecosystem and the devices cost a fortune.
Yeah benchmark seems ok , but does it suffer oxidation ?🤫
Thanks for not being obsessed with ai 😄
🙄 I swear the crowd at the Acer press event today was VERY quiet every time something with AI came up, and hyped once real laptops and gaming stuff was the topic...
Waiting for your LUNAR LAKE vs AMD AI 300 gaming review.
Soon... Still waiting for the review sample 😔
The only major downside to this will be MT performance, and that is what they did not show.
I think it will be equivalent to the M2, 8845HS and Snapdragon X Plus, so last-gen performance in MT, which ain't gonna cut it imo.
Sure, the iGPU might be good on this, but they kind of missed the target on multicore performance.
I do understand that they plan to also launch a more performance-oriented lineup, but it's gonna sit in a different power/performance class to these.
Yeah I actually wondered why they didn't show Cinebench results. We will see ;)
Makes sense; you are not building a server out of a thin laptop, so single-core performance is what you should be looking for.
Chromebook Lake, with 4 cores and 8 threads, is shaping up to be lunarcy; the CPUs are just like 2011. I don't think people realize it, but an E-core is essentially as fast as a 2nd thread, and Intel's new generation of laptop chips runs at slowpoke speed. The only thing they can do is play games; most other things are slow, slow, slow.
Nah, that's not true. The P-cores are REALLY fast in Lunar Lake and the E-cores are MUCH faster than the old ones. The only thing this lacks is great multicore performance. Still, it will do for whatever those laptops are meant for. This is a chip for thin-and-lights. But it's enough for 4K or even some 8K video editing. 1080p gaming at medium settings. Blender. Architecture, etc.
In some cases you would need to get the 32GB versions though. Definitely.
@@Hubwood haha you drank the Intel koolaide. Enjoy it!!
@@dgillies5420 No. I've just seen it. Myself. Live. In reality.
Where is the desktop cpu?
That's Arrow Lake. It was previewed today and info on it is under embargo!
Are you sure the power reduction is 50% ? Because I think it's more like 5% on average.
@@Matlockization depends on the task. Sometimes it's less... They showed us the Procyon office benchmark run (the PCMark follow-up) and it was significantly lower than on Meteor Lake. They used special USB-C wattmeters.
Also the part with Dota 2... You can see the power draw there as well.
@@Hubwood Well, I guess we'll find out by the end of the year from independent reviewers. However, what I find disappointing is that laptop manufacturers tend to reduce the Wh capacity of their batteries as a result of these more efficient CPUs! Makes no sense.
@@Matlockization I don't think they will do that in general.
I will also test the wattage myself. Independently and unpaid for ;)
Considering AMD has had Radeon for so long, it's surprising that Intel has overtaken them in iGPUs.
Intel has had iGPUs for a long time
I'm waiting for AMD's response to this; their Ryzen AI is great, but battery life is still far from matching the X Elite or Apple M series.
8:39 Why didnt u ask him about the Royal core architecture???????
Not his field ;)
Does this chip beat Apple's M4????
Does the Apple M4 beat the Apple M3? We don't know, right? At least wait for the M4 to actually be available on the market. Besides, people who like Apple would never buy an Intel because of ecosystem restrictions
The idle of this CPU stays at 0.4 W; insane power consumption
As far as I've seen from reviews, the new Xe2 is still slower than the new AMD 980M
You mean 890m? It actually really seems to be a bit faster...
@@Hubwood Yes 890
Lets wait for the actual hands on testing and reserve our judgement
Back in 2023, Intel had also claimed that Meteor Lake was about 15% faster than the 780M in gaming.
I know. I mean, they showed real benchmarks here. We will see if it was cherry-picking or if it holds over a larger number of games :)
Meteor Lake had driver issues, though in synthetics it did perform better than the 780M. Xe2 is supposed to release with fully fleshed-out drivers
So you are actually taking Intel's marketing releases as being even potentially indicative of reality? That's .. brave of you.
No, I'm not. I'm gonna test it myself. But I've SEEN the performance myself in some scenarios. And they made it absolutely clear that the results are legit. I also spoke to someone outside of Intel. He has already tested it and confirmed it.
Lol he said intl instead of intel
Sorry, but when did Intel hold the iGPU crown? To my knowledge, never...
Actually, it always has; iGPU means integrated graphics, not really a discrete GPU. AMD never had a good iGPU, but they compensated with their own discrete GPUs or by using Nvidia's. But I guess things are about to change for good now.
@@syedshaqutub613 do you think the Steam Deck's GPU is discrete, or bad?
Well, you have a point there. I deleted the "back", as that statement was at least arguable...
@@Hubwood speaking of which, I just had a look into the disclosure for the HX 370 vs Core 9 288 iGPU benchmark: they tested the Intel with 8533 MT/s memory and the AMD with 7500 MT/s memory, claiming an average 16% advantage. I wonder what that Core 9 would do given the same memory constraints.
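As a rough sanity check on that 16% claim, assuming iGPU performance scales close to linearly with memory bandwidth in bandwidth-bound titles (a simplification, not a measured result):

```python
intel_mts, amd_mts = 8533, 7500   # memory speeds from Intel's disclosure, per the comment above
claimed_lead = 0.16               # Intel's claimed average iGPU advantage

bandwidth_gap = intel_mts / amd_mts - 1
print(f"bandwidth gap alone: {bandwidth_gap:.1%}")                # ~13.8%
residual = (1 + claimed_lead) / (1 + bandwidth_gap) - 1
print(f"lead left after normalizing for memory: {residual:.1%}")  # ~2.0%
```

Under that crude assumption, most of the claimed lead could be explained by the memory configuration alone, which is exactly why the same-memory comparison would be interesting.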
I would have a very hard time trusting Intel right now after their 13th and 14th series FIASCO and their handling of the entire situation, as reported by many such as Gamers Nexus. They handled customers' requests very poorly, especially those who had issues way before the massive reporting of said issues. Intel will need to prove itself before I consider any products from them. And no, this is not to say AMD is without its share of blame, given the way their marketing department handled the 9000 series, but at least their CPUs are not breaking down at the moment. I guess the best approach with these companies, who seem bent on releasing stuff super fast just to gain more market share and meet their end-of-year financial expectations over quality, is to wait and see.
Intel expects a ton of money for an 8-thread CPU which is not comparable to the 24-thread Ryzen 370.
They have chosen the charts to win by a few percent, having a 3nm compute chiplet vs 4nm (Qualcomm and AMD). Geekbench MT is missing...
Let's see the 2025 Kraken and Halo chips from AMD with the same 8533 MT/s memory and 8/16 cores to compare.
I wonder if they patched Windows 11 or not before testing, because the Zen 5 gain is above 10% for gaming...😁
HX 370 laptops ship with 24H2. That's why you don't see many OS-upgrade-related retests of the HX 370
Zen 5 tests were lackluster for gaming from what I saw, and in some cases even worse than the previous generation.
Tests are usually specifically chosen to demonstrate advantages while hiding the existing flaws. We need independent tests on almost the same hardware (RAM, SSDs, Windows power settings, etc.) and just native resolutions. Unfortunately, Windows is only starting to adapt to handheld gaming, and doing this waaaay too slowly...
Sure, I believe Intel... not… they keep shipping defective chips
Give me a break, dude - Intel can't even make competent DISCRETE video cards.
Pathetic.
Useless Intel benchmarks are useless. Intel hasn't released an honest set of benchmarks in my memory.
I understand that. To be fair, all manufacturers cherry-pick and optimize benchmarks. Though what they showed here live to the audience seemed legit (F1 24 and Dota 2), as always we'll have to wait for review units and test ourselves.
@@Hubwood Even that doesn't mean it's real, because they have also been proven to test with bottlenecked hardware and settings, or to compare using different test systems
@@tendosingh5682 that's always possible yes. Again: we will need to see ourselves 😅
I don't trust Intel even if it were the last chip brand in the world. Decades of deceiving: patches decreasing speed, high energy consumption, rebranding of old chips, monopoly, etc., etc. Even if they have a little REAL edge with these new chips, I won't buy it: it's very likely that in 4-5 years they'll need a patch that will reduce speed by 20-30%. Or the chip throttles down and they say the computer/board manufacturer is the guilty one…
Well I guess about that we'll have to measure and judge them ourselves ;)
I buy Intel.
I'm not an AMD fanboy
@@owen99412 are you an Intel fanboy? I'm not anybody's fanboy, FYI.
@Hubwood definitely, but I assess a person's or company's behavior over the long run, not in one single moment. If Intel has shown us a behavior for ages, I won't trust them immediately just "because". See what happened with the last generations regarding the instability. That is "Intel-typical".