💵 Save money on your next gaming laptop with our daily deals: gaminglaptop.deals
Updates & Corrections: 1. I've been advised that the 28W power limit for Lunar Lake includes the on-package memory too, which is not the case for Meteor Lake and Strix Point. This may explain the big power draw difference at "9:20 Power Draw Difference". In theory this means that for the comparison to be properly fair, Meteor Lake would need some higher power limit. Currently I'm not sure how to work that out, but I also can't test it with these laptops anyway, because the S 14 maxes out at 28W while neither G16 can drop below 28W. So the difference between LL and SP may be smaller if you're somehow able to get them both running at the same power level - perhaps by normalizing total power draw at the wall, despite LL needing a higher TDP in software to do that.
2. The "10:46 Cost Per Frame (Best Value in Late Sept 2024)" graph is incorrect. I accidentally used the cheapest Ryzen laptop without realizing it's the 365, which has a lower-tier GPU, not the 370. Here is an updated cost per frame graph using the lowest-priced 370 chip - though it's sort of a moot point when it's packing an RTX 4050 anyway, I probably shouldn't have bothered: jarrods.tech/wp-content/uploads/2024/09/cost-per-frame.png
3. Although I said not to use the upscaling results for comparing, I did this for one game, Black Myth: Wukong, because it's kind of the only option - I don't think you can turn upscaling off. Anyway, XeSS on Meteor Lake and Strix Point will use the DP4a version of XeSS while Lunar Lake uses the XMX version, which should look better visually. I guess it kind of is what it is, unless I used FSR on all three instead, or just dropped one of the most popular games currently from the comparison. In any case it's one game, and I doubt the difference in FPS would be big enough to affect the 20 game average much. Also, same deal with what I said for Starfield and Shadow of the Tomb Raider: just assume upscaled results aren't comparable.
4. Some complaints about using the 258V instead of the 288V - I couldn't get the 288V for this video, unfortunately I can't just get whatever I like. The 288V only has a 100MHz higher GPU clock, which is about 5% higher. Even if we assume game FPS will scale 1:1 with this (it won't), it's not enough of a difference to affect the conclusion. Some people also mentioned the 288V has a higher power limit - but then we'd just run the other laptops with more power too to match, and given LL seems to do better at lower power levels, again I don't think this would change the conclusion.
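To put a rough number on that last point, here's a minimal sketch; the 1950/2050MHz figures are assumptions based on Intel's listed max GPU frequencies for these SKUs:

```python
# Best-case estimate of the 288V's GPU clock advantage over the 258V.
# Clock figures are assumptions based on Intel's listed max GPU frequencies.
gpu_clock_258v_mhz = 1950
gpu_clock_288v_mhz = 2050

best_case_gain = (gpu_clock_288v_mhz - gpu_clock_258v_mhz) / gpu_clock_258v_mhz
# Even if FPS scaled 1:1 with GPU clock (it won't), the ceiling is ~5.1%.
print(f"{best_case_gain:.1%}")
```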
Can't wait for Strix Halo. It should blow all of these out of the water in iGPU performance.
@@dream8699 I can only work with what I've got, and if you think the Core Ultra 9 288V with its slightly higher single core turbo and 100MHz higher GPU clock is enough to put it ahead, you're dreaming bro.
As for your comment on the size differences, I mention it in the video and point out they're all power limited the same, so the size shouldn't really affect the performance, and the S 14 didn't seem to throttle in the game I checked - but yeah, it isn't ideal. Kind of is what it is, I'm not able to just snap my fingers and get whatever hardware I like.
Hey Jarrod! Quick question, will you ever test the ASUS Flow X13 2023 model with the RTX 4060 or 4070 Ryzen 9 7940HS models??
I have the X13 4060, got it new for $1,199 and it's been amazing, super quiet under load even at full power. The GPU consumes only around 35W to 41.44W, and can be taken up to 60W or 75W.
I would love to see how it compares in performance to a normal 90W/115W 4060 laptop. Greatly appreciated if you can do that for me, thank you.
@@Gab2456_ ASUS never lent it to us and it's a bit old at this point for us to try and buy it, maybe we can check out the next refresh
Hey Jarrod, love the videos, but you know when you set the CPU to 28 watts? Pretty sure the iGPU power is separate from that on the non-SoC designs like Strix Point - you can tell if you chart it in HWiNFO, I think
Finally a proper games benchmark instead of the useless 3Dmark nonsense.
Yeah, I got kind of sick of seeing all the Time Spy results showing LL must be better for gaming lol
3DMark also sucks, they haven't made a new benchmark in almost half a decade
@@wile123456 this is straight up a lie.
@@wile123456 does Steel Nomad just not exist?
What do you mean? They still update it every few months. @@wile123456
The 8C Lunar Lake matching the 16C Meteor Lake in multithreaded workloads is mind blowing.
It doesn't even have multi-threading
It's only really in Geekbench though; the Meteor Lake laptop still comes out ahead in multi-threading in Cinebench and some other multicore tasks. Meteor Lake, overall, wasn't really a good architecture for multi-threading. If I'm not mistaken, the new AMD Strix Point laptop is like 25-30% better than a Meteor Lake laptop at around the same wattage.
If that 4 E-core + 4 Xe2 variant were made into a chip instead of the 8 Xe2 one - 4 Xe2 cores are equivalent to the 8 CUs of AMD in the Steam Deck - we would have a Steam Deck killer at a cheaper price while being far more efficient...
Intel:
1 Xe2 core = 1*8*16 = 128 shader cores
4 Xe2 cores = 4*8*16 = 512 shader cores
AMD:
1 CU = 64 shader cores
8 CUs = 512 shader cores...
FYI, Intel XeSS uses AI cores like Nvidia's DLSS, and it's better than FSR...
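A minimal sketch of that lane math; the per-unit figures (8 vector engines x 16 FP32 lanes per Xe2 core, 64 lanes per AMD CU) are the commenter's assumptions, not verified specs:

```python
# Reproducing the comment's shader-lane arithmetic.
def intel_lanes(xe2_cores: int) -> int:
    return xe2_cores * 8 * 16  # engines per core x lanes per engine (assumed)

def amd_lanes(cus: int) -> int:
    return cus * 64  # lanes per compute unit (assumed)

print(intel_lanes(1), intel_lanes(4))  # 128, 512
print(amd_lanes(8))                    # 512 -> same lane count as 4 Xe2 cores
```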
Kudos Jarrod for effectively reviewing the most intricate tech category, all while testing insane amounts of games, keeping it approachable, and even shouting out another channel in the outro. You're killing it!
I normally love his reviews, but this one was very flawed for many reasons. Clearly the Intel was in Standard mode. Even besides that, comparing against 16" class laptops with a dGPU is kinda dumb (I know it was disabled).
Why not against another Strix Point 14" iGPU in their performance mode?
Performance like that with the power limit at 28 watts is seriously impressive 😮
@JarrodsTech you said several times that you can compare the upscaled results when you used XeSS; that is not true.
Both the HX 370 and 185H use the DP4a version of XeSS while Lunar Lake uses the XMX version that looks much better.
True, fortunately I only did it for one game - because it's the only option, unless I use FSR on Intel, which I think isn't as preferable? Anyway updated the pinned comment to mention that, thanks.
@@JarrodsTech I think that, if possible, it's always better to use the upscaler from the company who made the GPU, i.e. Intel-XeSS, Nvidia-DLSS, AMD-FSR.
But since there are many times when even DP4a XeSS looks better than FSR, I can see some demand for XeSS tests on AMD GPUs.
@JarrodsTech I do think an apples-to-apples comparison is "fair" to do for the performance numbers, especially since the more GPU-agnostic DP4a version of XeSS is quite hard on the GPU compared to running the "native" XMX version or using FSR.
Practically, one only opts for XeSS on hardware without Intel XMX when:
a) no other upscaler is available & the result still increases performance.
b) image quality is insufficient with FSR & there's still desirable performance with XeSS even if it has a lower performance than running FSR.
I prefer XMX-based XeSS image quality at a lower frame rate over FSR image quality at a higher frame rate.
@@JarrodsTech Also, while I'm sure you were aware of this but it may have escaped your attention, I'd like to refresh everyone's memory of the fact that some titles have XeSS 1.3 support (or it can be forced to use 1.3 algorithm via a DLL swap) and the scaling of 1.3 is different from the previous XeSS iterations. Namely, Performance XeSS to Performance FSR 2.0+ does not have identical scaling rate per dimension. XeSS Performance has 2.3x scaling, whereas FSR 2.0+ has 2x scaling.
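A quick sketch of what those scale factors mean for render resolution, assuming a 1080p output and the 2.3x/2.0x per-axis factors quoted above (exact values vary by game and SDK version):

```python
# Per-axis render resolution implied by each upscaler's Performance mode.
def render_res(out_w: int, out_h: int, scale: float) -> tuple[int, int]:
    return round(out_w / scale), round(out_h / scale)

print(render_res(1920, 1080, 2.3))  # XeSS 1.3 Performance -> (835, 470)
print(render_res(1920, 1080, 2.0))  # FSR Performance      -> (960, 540)
```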
4 FPS more at 30 FPS is more than a 10% performance difference (for the Starfield example). You need to talk more in percentages, as small FPS differences mean big differences in frametime and relative performance. For example, 40 FPS feels almost twice as smooth as 30 FPS because the frametime is reduced massively.
Cool. Make your own videos with the information you desire.
@@sdcraig no. Provide feedback so reviewers improve. Some of us have actual jobs to tend to.
what a braindead take. @@sdcraig
Why? It would just make it more confusing. If Intel gives you 30fps and AMD gives 4 more fps, what will 30% tell you? That it's still unplayable?
@@ImDembe First of all, 3D games start being "playable" at 18fps. Ideally you want 120+, but if you really want to play something, 18 is the bare minimum where you can still extract fun. And that's for 3D action games; there are genres of games you can easily play at 8fps. If you think 30 is unplayable you're a horrendously spoiled brat.
Each single frame per second improvement is felt dearly at these low rates. 30% more than 30fps tells you that it's running at almost 40fps, which is MASSIVELY better, like 30% implies.
And most importantly, saying it's "4 fps faster" is absolutely meaningless, as the value of the improvement is tied to the total frame rate and is thus a relative measure. That could be four times better or 0.1% better depending on how the game is running. 4 fps more in a game running at 500 fps is less than margin of error; in a game running at 30 fps it's a GPU tier upgrade.
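A minimal sketch of that argument, converting the same absolute FPS delta into a relative gain and a frametime saving:

```python
# The same +4 FPS expressed as a relative gain and a frametime saving.
def describe(base_fps: float, extra_fps: float) -> str:
    new_fps = base_fps + extra_fps
    pct = 100 * extra_fps / base_fps
    saved_ms = 1000 / base_fps - 1000 / new_fps  # frametime reduction in ms
    return f"{base_fps:g} -> {new_fps:g} FPS: +{pct:.1f}%, {saved_ms:.2f} ms less per frame"

print(describe(30, 4))   # +13.3%, ~3.92 ms less per frame: very noticeable
print(describe(500, 4))  # +0.8%, ~0.02 ms less per frame: margin of error
```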
Currently own a 2023 Strix 17 with a 4090 and 7945HX, but I'm mainly using the laptop for office work while gaming maybe 4-5 hours a month. I definitely see myself switching to those kinds of laptops in the next 2-3 years if they can run the latest game titles at FHD on medium-high with 100+ fps. Portability is becoming a damn valuable thing.
Lunar Lake's TDP includes the on-package LPDDR5 memory. It makes no sense to compare 28W Lunar Lake to other processors at the same power
the low-power memory doesn't consume that much, maybe around 2-3W. And it is an architecture decision Intel made to improve performance; they are probably gaining more from doing this than that little power difference costs (especially for GPU tasks that are bandwidth starved).
@@mariuspuiu9555 Check 9:29 and see how much the difference is. Also there's no reason to test a chip designed for thin and lights at 28W. Lunar Lake shines at 17W.
@@green_block it's power measured at the wall, not chip power draw.
@@mariuspuiu9555 The AMD chip doesn't have RAM on the package while Intel Lunar Lake does. Hence, overall power draw is more relevant. Also, Lunar Lake's sweet spot is 7W to 17W; a 28W limit is too high for it. Within a 17W limit it will offer almost the same performance as at 28W, and hence it will be far more efficient. But Jarrod chose to use 28W since Strix is too power hungry and doesn't go that low.
@@green_block LPDDR consumes very little power, around 2-3W. It doesn't make any meaningful impact with regards to "power limits" for performance (the 288V is proof of this), but it does help Intel achieve higher performance because they placed it so close to the CPU. So your entire argument kinda falls apart.
This is quality work! Now we just need more Strix Point laptops.
but isn't the 288V the competitor of HX370?
The 258V and 288V are extremely similar, with only very minor clock speed differences.
Yes, Intel claimed the 288V is 16% faster than the 370. The 258V is 2 steps below the 288V and has less gaming power according to Intel's official website
@@JD4fun "2 steps below" - the difference is just 100MHz on the GPU and 300MHz on the max CPU boost. It's just a small single-digit difference. AMD still wins.
Intel doesn't have a competitor yet as much as they like to pretend otherwise.
I think the same goes the other way around, since these aren't AMD's most efficient laptop chips, i.e. Kraken.
The 258V costs $200 more, so it's not an equal comparison
890M vs 780M at 28W
What a time to be alive. Both AMD and Intel offer great performance
Glad to see Intel doing well with their integrated graphics.
Hi Jarrod, please compare Apple vs AMD vs Intel iGPUs. Can the M4 Pro keep up with AMD and Intel in emulated games?
Let's go. Recently bought the LOQ because of your video, but still watching because it's interesting.
Also, if you want AFMF on Intel laptops, you can use Lossless Scaling, a program on Steam that is basically the same thing. You can also upscale with it, but it isn't as good as FSR 2/3 or DLSS as it doesn't have access to game code
Cool, hope it's going well 👍 Yeah, maybe I should have checked that out, I guess I was just considering first-party options here.
Not to mention Lossless Scaling is way better than AMD AFMF2
@sivaxiosu It isn't free though, but it's pretty cheap for what it can do. Although, depending on the system, it is a bit resource heavy, so it requires some headroom on both CPU and GPU, and it only works in windowed or borderless mode
@@shikikanaz4624 It's resource heavy because it's an AI-based upscaler; because of that, it's very good at visual and motion quality.
It could even have a lower performance impact by using DirectML for better performance on GPUs with AI hardware like Nvidia and Intel, or a separate NPU, but the developer chose not to due to "compatibility reasons"
@@JarrodsTech Can you power limit to 17W instead of 28? Lunar Lake is for thin and lights and doesn't scale well beyond 17W.
10:58 this is the most important thing for our money.
Is your LNL laptop in Standard mode? These numbers are not what other reviewers got in games
No, full performance. Like I said, 28W, which is the maximum.
He used XeSS 1.2 instead of the newest 1.3 for Intel (so worst case for Intel) and FSR for AMD (so best case for AMD), plus added frame generation which was just released like a week ago for AMD. So yeah, the test is biased in all ways possible.
Except the FG was listed completely separately from the rest of the results, to show what to expect from it. It wasn't used in the actual comparison of the results. And to be clear, FG is dumb and I personally don't care for it no matter what company is adding it.
@@tj_2701 It still looks biased, as to most viewers it will look like "RED GOEZ FASTA"
@@FLKRM To be fair, in most of the game benchmarks Red was faster anyway, even without FG. And like I said, I don't care for FG and think it's mostly dumb. I'm sure if Intel had an equivalent feature he would show it as well. (Not sure what happened to my post you replied to, it's no longer showing for me lol)
How's Lunar Lake the most efficient if it's always behind in power-normalized benchmarks?
Sour grapes... Intel is better, deal with it.
I am curious if the 288V in a bigger laptop would perform a bit better
FSR Performance mode renders at a higher native resolution than XeSS Performance mode.
This is why I said not to compare the upscaling results :)
Hey man, really cool video, but Ghost of Tsushima has FSR 3 frame generation implemented in-game, so I think you should have used that instead of AFMF2?
Hello, I was wondering if it would be worth waiting for the ASUS S 14 with the Intel Core Ultra 9 288V instead of the current Ultra 7 258V.
I don't know if the difference will be noticeable considering it's a 14" thin laptop?
I feel like iGPU is better suited for handheld PC gaming rather than notebook and/or laptop gaming
I wouldn't mind getting a new thin laptop with just an iGPU, but it's hard to find a well-priced 17" one like I'm using now.
@@raptor1672 LG Gram has them, great laptops also
9:29 this is the most important thing.
It's just not enough to come out on top. Intel is still behind on raw performance, and costs more to make.
AMD offers double the performance with a little more power. Intel doesn't even have a chance here.
I'm so confused, how do you choose between FSR and XeSS?
Use whatever is best for your device among what the game offers
Battlemage should be great if priced the same as AMD
Hey, choosing between same-priced 2023 models: Strix G16 (4060/i7-13650HX/1920x1200), Lenovo Legion 5 (4060/i5-13450HX/2560x1600), or TUF 2024 (4060/i7-13650HX/1920x1200)?
Hi Jarrod, I am going for a laptop, but am a bit confused between the Lenovo Legion 7i with 4070 GPU and the Acer Predator Helios 16 with 4080 GPU. Can you suggest which one I should go with?
Lunar Lake includes DRAM in its SoC power; did you account for that when setting the Strix/MTL power limits?
Hi Jarrod, just wondering if AMD discrete cards like the 6850M XT support AFMF?
Do we know what the average GPU clocks were for each iGPU? I’d be interested to know how Battlemage compares to RDNA 3+/3.5 in IPC, to get a feel for what kind of performance we can expect from discrete cards.
Really waiting for the TUF A14 review. Getting the 4050 for 125k INR (~$1,500), should I go for it?
Zenbook S14 or S16, or Zephyrus G14/4050, if they have the same price???
For a laptop for all the jobs: editing, travelling, browsing...
Thank you.
Zenbook if you are editing, traveling, browsing and light gaming (especially CPU intensive games). If you are gaming with GPU intensive games then you'll want that Nvidia graphics card. The portability of the Zenbook is nice.
Best review of these new iGPUs I have seen so far, thanks for the great work!
10:39 Pretty sure the 365 only has an 880M....
Yes see pinned comment.
Is it possible to test Strix Point with only the Zen 5c cores enabled for efficiency focus? Or maybe the other way around with only the regular Zen 5 cores enabled?
So GUYS I haven’t really gotten a grip yet just from basing my research from forums which one should I get ultra 9 288v or the amd Ryzen 360 hx just for battery life and everyday use for productivity
Just for battery life, then go for intel
what about cost per performance, power draw?
These benchmarks are what users need the most right now; adding a battery life comparison between them could've been helpful, because it could be a determining factor vs a 5 fps increase
Bit pointless when they have different chassis with different screens and batteries, but power consumption could be a good indicator for battery life.
@@eliadbu Well, the protocol for battery testing calibrates the same nits on both screens and gives mAh/hours of battery life, and since AMD was always king in the latest generations, it's good to see how things are now
I am thinking about buying one of these now and docking it as the iGPU becomes outdated. Can you test how well they handle the same eGPU and compare the results with both a PC and a gaming laptop?
Excellent video! If the power draw difference translates to less fan noise and heat on the 14” lunar lake, I’d happily give up the extra frames for it. I’d love to know what the comparative noise and heat experience was under gaming load for each laptop.
Jarrod, this is an off-topic question, but I want to know if there's a big difference between 1080p and 1440p laptops. I am going to buy my first gaming laptop and I am wondering if I should go with a 1080p display or increase my budget a little bit and go for 1440p
Increase your budget even more and go for 2160p, that's way better than 1440p
But it doesn't make sense comparing with the Intel Core Ultra 7 258V instead of the Ultra 9 288V.
I wonder what the results would be with the latter...
Can you include Lossless Scaling next time with integrated graphics benchmarks?
Which budget laptop would be best to buy now to play all AAA games (like Black Myth: Wukong) smoothly? I'm also thinking of buying a gaming PC maybe 3 or 4 years later
RTX 4080; the minimum is an RTX 4070
@@dantebg100 Oh... I never thought of the RTX 4070 as a budget option
There are solutions with a Ryzen 8845HS, 16GB RAM, 1TB SSD and a 4070 for €800 (that's a killer price). I've seen the HP Omen, but you should be able to find something from other OEMs too.
AMD on top as expected, but Intel is catching up very fast which is good. I've always been impressed with Intel's progress ever since they launched the Core Ultra series. Especially considering how bad their 13th & 14th gen chips are in both laptops and desktops, compared to AMD counterparts.
Is the i9 15900HX a part of Lunar Lake for laptops?
One thing you can use is G-Helper on ASUS devices to see how many watts the system is using. Simply unplug the charger and monitor the battery level; it will tell you how many watts you are discharging
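For reference, a rough software equivalent of that reading (Windows only; assumes the third-party "wmi" package and a battery driver that exposes the standard BatteryStatus class in the root\wmi namespace):

```python
# Read the battery discharge rate, which approximates total system power
# draw while running on battery.
import wmi  # pip install wmi (assumed third-party dependency)

w = wmi.WMI(namespace="root\\wmi")
for battery in w.BatteryStatus():
    if battery.Discharging:
        # DischargeRate is reported in milliwatts
        print(f"System is drawing ~{battery.DischargeRate / 1000:.1f} W from battery")
```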
The problem is that laptops with dedicated GPUs will be only like $100-$150 more expensive but give way better gaming results.
Probably not the target demographic, as these are lightweight, low-noise, low-heat, high-battery-life laptops consuming less power.
So probably for mobile/business work whilst also being able to do some light gaming here and there.
And then there is GeForce Now.
All 3 have a higher single core Cinebench 24 score than my overclocked desktop 12600k with a DeepCool AK620. Very impressive
I really have high hopes for the Claw 2. They really fixed the LB/RB buttons, and the new chips should be great at these low wattages...
Great review - but the situation regarding these new iGPUs is a bit confusing. How does gaming performance translate into content creation performance (image post-processing, video editing, 3D)? There are synthetic benchmarks according to which Intel's Lunar Lake iGPU should not be more performant than the previous Intel Arc iGPU, but apparently they are by a pretty solid margin.
There’s some incorrect testing methodology.
By putting the Intel laptop in Standard Mode, you essentially are choking it down to 15W long term compared to Strix’s higher wattage.
Full Performance on Lunar Lake yields at least 13% better performance, by NotebookCheck’s testing, at 28 watts sustained.
Memory package takes 2 watts, so factoring that into performance to power ratio would be incorrect regardless.
Kind of disappointed, NotebookCheck’s numbers are really off from your conclusion because of these mistakes
He literally said at the start that the 28W limit is equal to the highest performance mode on Lunar Lake.
And he measured the power consumption at the wall, so the lower consumption of Lunar Lake can be explained by 3 things:
1. The Lunar Lake laptop has a smaller screen, and it is not a gaming laptop, so its screen is probably more optimized for low power consumption.
2. The charger in the Lunar Lake laptop is smaller and thus operating at a higher load, making it a bit more efficient. (Remember that PSUs run most efficiently at around 50% load, losing about 2% efficiency at 20%; this applies to ATX PSUs but should be true for power bricks too.)
3. Lunar Lake's memory is on the CPU package, so those 2W are included in the 28W limit, making the real chip consumption around 26W.
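A toy model of that reasoning; every number below is illustrative, not measured:

```python
# Wall watts = total DC load divided by charger efficiency (all illustrative).
def wall_watts(chip_w: float, screen_w: float, other_w: float, eff: float) -> float:
    return (chip_w + screen_w + other_w) / eff

# Lunar Lake: ~2W of memory already inside the 28W limit, smaller screen,
# smaller charger running closer to its efficient load point.
print(f"LL-ish: {wall_watts(28, 4, 3, 0.90):.1f} W at the wall")
# Strix Point: memory power sits outside the 28W chip limit, bigger screen.
print(f"SP-ish: {wall_watts(28 + 3, 7, 3, 0.88):.1f} W at the wall")
```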
@@rj7250a Strix Point also has its memory directly on the processor package.
I even think Notebook Check's review is faulty though. In the efficiency section of their review it shows the Intel as being 30% more performant per watt in Witcher 3, but then when you check how they compare in fps in their Witcher 3 benchmark, it shows the AMD chip being a bit faster... 🤷
Same problem in this review: he showed faster performance on the AMD for cyberpunk then proceeded to show Intel winning the efficiency, yet they were supposedly tested using the same wattage. Literally makes no sense....
@@djayjp Maybe relevant, I dunno:
NBC sometimes tests Witcher 3 on both v1.32 and v4+; you lose upscalers and RT in exchange for like 30% more performance (indoor lighting is worse, I guess)
@@djayjp Because the Strix Point chip most likely used more wattage to get the FPS it did during the test even when put on a wattage cap.
Should do a 3-game battery rundown.
Do these output to Thunderbolt 5 ports?
There's no TB5 available on any product
Holy crap. It's been a while since I've checked out one of your videos and I was instantly slapped in the face with 60FPS video! WOOO! Top work Jarrod! I'll be watching more for sure just thanks to the eye candy :D
If you are comparing the chips, use an external monitor with the laptop display turned off for power efficiency testing. A 16-inch OLED consumes quite a bit more than a 14-inch.
Haven't tried any of the Intel Arc stuff yet... but I'm sure they have loads of potential gains to unlock through driver optimisations.
@jarrodstech What gaming laptop should I get? My budget is 800, 900 at the highest. Please give me some suggestions
Strix Point is a good CPU for ultrabook+gaming laptops like the Zephyrus G14/G16 or any 14-inch gaming laptops. But just for ultrabooks, the new Intel CPUs have their own edge, which is battery life without sacrificing too much performance. But since the prices for laptops with those CPUs are high af (nearly unaffordable for many students), I guess it's just gonna be for those top-tier machines.
Wasn't Strix Point supposed to use 120W?
Is there any laptop with a glossy OLED screen and a 4060?
Hello, I've been looking for a gaming laptop so I can play The Sims. I need a good processor and a lot of storage for all the custom content I get. I'm not looking to spend over $1000, what do you suggest? I know you would know, and I love your videos.
For those curious about power scaling on the AMD iGPU: I have done extensive testing, and turning it down to about 28W to have AMD match Intel isn't really reducing its performance that much.
AMD is able to achieve 100% performance at about 28 to 30W.
In my ProArt video I show the power scaling of the Ryzen AI HX 370 as low as 15W (as low as the OEM software let me go). There is a pretty spicy drop when you start to go under 20W
Jarrod I’ve been binge watching all your laptop video reviews.I can honestly say you are by my far favorite content creator.
You give an incredible and unique take on every video.With honesty at the center of everything.
Thank you for not being biased and not letting the big companies buy you with fake reviews.
Keep it up you’re killing it.
Thanks!
Perhaps you should redo this with Lossless Scaling, which works on every platform.
So a Strix Point ROG Ally will have better graphics, but an MSI Claw with Lunar Lake will have better battery life?
Isn't the Helios Neo 16 a watered-down version of the Helios 16?
Yes
Considering the fact that these processors are mostly being used in thin and light laptops, where the main focus is productivity, note taking, a little bit of light video editing for TikTok etc., watching Netflix, Prime Video or Disney+, a lot of web browsing and some casual gaming, I'd rather choose Intel over AMD. This is not a processor for gaming laptops, video editing or CAD machines, so why treat them like one? Only a fool would treat them like gaming laptops.
Battery life is the most important criterion for this category. I'd be able to binge-watch an entire season of Stranger Things or work in Excel for 8 hours.
I got myself an MSI A15 Thin B7UCX
Hey Jarrod, I really need your help. I just bought an HP Omen 16 gaming laptop and I want to set a max charge limit of 80%, how do I do this?
Great video.
If only you had a laptop based on the 7840HS (or similar) to see how they all stack up together
Comparing a Ryzen 9 with an Ultra 7, is that fair? We need a rematch on level ground
Shoulda used Lossless Scaling instead imo
Brother please also include Valorant in your benchmarks
This is very objective and makes a lot of sense. But I would have wished for you to compare with Intel's top dog 288V instead of the 258V; however, I don't know if there is a significant difference in performance between the two.
Why did you use the AMD 365 laptop instead of the 370 when comparing prices from Best Buy?
Sigh, I didn't notice.
Updated pinned comment with fix.
Very interesting data! It would be interesting to see which frame gen tech would be better in terms of added latency.
DLSS, FSR, AFMF or lossless scaling application🤔
Could be tested on a laptop with AMD CPU/iGPU + Nvidia GPU.
This year is a great year for iGPUs, can't wait to see them in handhelds and tablets...
I thought you wouldn't be covering Lunar Lake since it's not really 'gaming'
But thanks for the comparisons!! Definitely interested in getting one of these when they put it in a Surface, if
We weren't going to, but decided to see how they compare in iGPU performance
@@JarrodsTech cheers
Hey Jarrod, when do you think we would see the Ryzen 9 AI HX series on other laptop brands (other than ASUS: Lenovo, MSI, Acer etc.)? I would like to see and buy a proper 4080 + AI HX 370 combo... is that even going to happen??
Love this channel. Super detailed.
eGPU! eGPU! eGPU! 😉
Would be interested in the impact of different TDP SKUs when there is also a PCIe 4.0/5.0 x4 bottleneck.
Also, if a lower-end GPU could be tested, that'd be great.
Video I have been looking for. Cheers.
70% to way higher performance with far fewer CPU cores, half the CUs, and significantly lower clocks is pretty insane. If this at all translates to gains on desktop GPUs and the prices stay relatively the same, the competition is gonna get heated.
How was I getting more FPS in Baldur's Gate than you? I was running FSR2 Balanced at Medium and was getting 55-60fps
Should have tested the 288V. It has a higher power limit.
Yesss. I get these are the laptops out right now, but this isn't even the top 140V iGPU! 😮
Exactly, and Intel claimed the 288V is faster than the HX 370, not the 258V
Nope, it's just a higher-clocked version of the 258V, and within the thermal limits you are barely using the increased clocks in gaming. Within a 30W package you would require about 15W for maximum boost on a single core. Granted, the Xe cores are also higher clocked, but they are barely increased from the 258V to begin with.
@@shk0014 No, thermal headroom is still there. They aren't thermal throttling all the time. A higher power limit will benefit.
It's still slower, and the 258V already costs $200 more. The 288V costs $400 more
Thank you Jarrod. I know they're not gaming laptop CPUs, but is there any way, ANY WAY, to make a video comparing Meteor Lake and Lunar Lake CPUs against the same-league AMD CPUs?
what do you mean? there's only a 5% difference between the 258V and the 288V.
@@mariuspuiu9555 I'm debating between Lunar Lake, Meteor Lake or an AMD CPU in an ultrabook.
@@ou1814 Just buy whatever fits your budget :)
I may like AMD, but I still buy the best value I can find, which is why I went with a 12700H laptop 2 years ago.
Please compare the Lenovo Legion 7i RTX 4070 and the ASUS ROG Scar 16 G614 RTX 4070
Can you test the Ryzen 7 8845HS / Radeon 780M with AFMF 2?
AMD iGPUs win here. Can't wait for a Z2 Extreme inside the ROG Ally 2 & Legion Go 2 ❤❤❤
Thank you so much Jarrod
You might want to add FFXIV to your benchmark list, especially since it comes with an easy benchmark.
It would have been nice to mention, as a side note, how much memory these iGPUs take from system memory or RAM, and whether this can be maximised in the BIOS or not. Also, you could have tested how these run a bag of office and education apps for a real-world appraisal 😀.
It's on screen at 0:24
@@JarrodsTech Yes of course, but sometimes this memory can be increased (or used to be increasable in past models), or is it all automatic on the Windows side? That's what I was asking. It's a little something that no one talks about anymore.
I explicitly set them to 8GB in BIOS.
@@JarrodsTech I wouldn't know how to do that, but I'm sure you're going to explain it in a future video. Is there a reason why you prefer 8GB of shared system memory for iGPUs and not more?
How do they fare in Lightroom AI Denoise? Intel showed a 79% improvement over 155H in the presentation graphic, AMD has an "AI sticker" on everything, but I don't see actual tests. Is it at least as good as the Macbook Air M2 or mobile RTX 3050?
Lunar Lake does have a bright spot after all... that's really significantly better iGPU performance.
I think Intel might have spent a lot of their engineering this gen on improving the iGPU and that dubious NPU rather than general compute performance. I honestly wonder how the NPU taking up that much die space might be affecting things. But it is kind of nice seeing some competition in the iGPU space; I think this flew under the radar in a lot of reviews.
I was able to score a G16 with the new Ryzen 9 AI 370 HX chip for $1,700 at Best Buy, and I found myself playing more on battery with the integrated graphics than with the 4070. I just tweaked things a little through G Helper and get almost 5 hours of playtime
How long does the battery last on the P16? From Jarrod's tests, the G16 with the same specs has 3 hours longer battery life on the iGPU
Lunar Lake is everything and more of what I was hoping it'd be. RIP ARM on Windows/Linux. Hyped to see the low-end SKUs for Arrow Lake.
Except that Intel claimed it was 16% faster than the 890M, but it's slower. It could definitely be because of drivers too, but it's still disappointing that they weren't able to get as much as they claimed. Perf/watt is really impressive though, I'll give them that.
@@auritro3903 I don't think Arc is better than the 890M; we know these iGPUs gain significant performance from faster RAM. If the 890M had this 8533 speed too, it would be way better.