If you took a shot every time AMD and Intel mentioned AI, you'd be certified dead.
You forgot Nvidia
@@NameUserOf if you include that, you are in the afterlife
So you think AI was not important? What is your point?
Dammit youtube why are you deleting my comments again?
If I got a dollar for every time I heard "AI" from the tech industry I'd have paid off my student loans by now lol
I foresee many CVEs with the new edge 'AI' hardware.
How many CVEs, could you see, if the CVEs from AI, could see eye to AI?
Very happy to see you so excited!
I'm OK with the laptops having RAM on-chip for the NPU and such, but I definitely want to see a move away from soldered memory and onto the CAMM2 standard
Not sure whether this would somewhat degrade the memory bandwidth. M4 and Snapdragon have 120-135 GB/s and they all have the memory right next to the SoC. Very wide buses might help (M2/M3 Max has a 512-bit bus with 400GB/s), but I saw only 2 chips in the videos. I also would LOVE to be able to upgrade RAM again, but am skeptical.
For AI inference token generation, memory bandwidth is the key limiting factor, not the shiny TOPS/FLOPS - the SoC needs to pump GBytes of parameters to the cache for each token generated. NPU/GPU peak performance is only necessary for prompt processing, where AI workloads can batch the token processing. AI on the edge will become quite a standard workload for PCs, requiring >100GB/s as a baseline.
P.S: CAMM2 in theory seems to have 160 Bit wide access at 7.5 GT/s - if the chip uses 2x64-Bit=128 Bit, this gives 100GB/s.
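If anyone wants to redo that arithmetic, here's a minimal sketch of the peak-bandwidth estimate (the transfer rates are the rough figures from this thread, and the derating remark is my assumption, not a spec):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_gt_s: float) -> float:
    """Peak theoretical bandwidth: bytes per transfer x transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_gt_s

# 512-bit LPDDR5 at 6.4 GT/s -> ~410 GB/s, the "M2/M3 Max ~400 GB/s" class mentioned above
print(peak_bandwidth_gb_s(512, 6.4))   # 409.6

# 2x64-bit = 128-bit at 7.5 GT/s -> 120 GB/s theoretical peak; the ~100 GB/s quoted
# above is plausible once you derate for real-world efficiency
print(peak_bandwidth_gb_s(128, 7.5))   # 120.0
```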
@@andikunar7183 M4 has 100GB/s at 6400 MHz
@@andikunar7183 Well, that's only true for LLMs. True, nonetheless.
@@andikunar7183 the real question is: why are we trying to run these workloads on laptops? Run that shit on a server at home like God intended.
@@theglowcloud2215 you might be living a bit in the past. The whole idea of this Copilot+ PC push (which I like), which these chips are built for, is that some information is better kept on your machine and AI-processed locally - "inference on the edge", alongside cloud inference. It's not about beefy machine learning, just single-user inference (good TOPS + memory bandwidth).
This has to be in the top 3 of the tech streams that I view. Love hearing what Wendell has to say about stuff!
always love the wide-eyed photos, never fail to put a smile on my face
I find them disturbing, deeply.
Yours is the only take I trust and like, Wendell. I love your positivity; you love tech like some of us like tech.
Also, it's so interesting that the actual P-cores don't even take up the most space.
AI this, AI that, AI thus, AI so... what about making hardware that would enable laptops with 14-20 hour battery life? Running an LLM on a machine is such a niche use case versus having a generally useful power envelope.
Snapdragon X Elite is promising that kind of battery life, but translation performance with Windows on Arm remains to be seen in the real world.
I mean, you don't need cutting edge hardware advancements to reach 14 hours of battery life.
Just bring back the secondary pack design from the old think pads. The ones that you could hot swap while the laptop was still on.
We could have 14 hour laptops. If only laptop manufacturers weren't cowards who are afraid of laptops with some Thiccness.
I really don't need a laptop with over 7-8 hours of battery life because power outlets are everywhere. I'd rather pay for performance or lightness.
@@5600hp but we need manufacturers to claim 12+ hour battery life because when they say 12 hours, in reality it's like 3 hours under any moderate load (even just a Zoom meeting or a PowerPoint with some animations).
@@5600hp Yes. For your use case hours and hours of battery life is less valuable.
But for people who work in environments without outlets, long battery life is a godsend.
I still remember the Lenovo my work gave me back in the day. It had a main battery, plus a hot-swappable backup battery.
So whenever you ran low on power you could hot swap in another battery pack with a full charge.
I'd carry around two extras in my backpack with me. Just in case.
I appreciate your delivery on this. It's equally engaging and relaxing.
Are you engaged in relaxing, or is it your rifle you are polishing, giving it a good waxing?
@@mrhassell You seem a bit rifle obsessed, ngl
@@PropaneWP it's you who's relaxing and demonstrating your appreciation champ! Something on your nose..
Maybe I missed it, but I heard nothing about memory bandwidth. AI inference token generation is all about memory bandwidth, and this is where the M-series Max/Ultra shine, with their wide buses - TOPS/FLOPS only matter during prompt processing, where you can batch the token processing and don't have to pump GBytes of AI parameters into the caches for each new token generated.
Wait for the hands-on. The NPU has 8 MB of SRAM for a buffer, so the pieces are here at least. But a lot can happen between now and launch.
I believe the on-die RAM will help maximize bandwidth & minimize latency.
@@PhazerTech this will help, but with 2 chips (16/32GB RAM) it probably will get a similar bandwidth to the M4 or Snapdragon X - probably 120-135 GB/s. Whereas the M-series Max has 400GB/s and the Ultra 800GB/s.
P.S: CAMM2 in theory seems to have 160 Bit wide access at 7.5 GT/s - if the chip uses 2x64-Bit=128 Bit, this gives 100GB/s.
@@andikunar7183 that's really useful. Thanks for sharing the maths :)
It's more than 544 GB/s of bandwidth. Which is a real number, unlike Apple's 800 GB/s lies.
Super hyped for the Xeon 6 video
This might sound ironic; I don't want my CPU to have AI capability. I don't want to pay for features I won't ever use.
“AI” in a CPU name is a good reason for me not to buy it.
If you work with Microsoft Office, the AI functionality will power the Copilot product.
If you are a developer, the AI functions will assist in Visual Studio development.
If you game, the AI functions will assist in NPC language and actions.
If you do graphics work, the Adobe suite leverages them for filters and effects.
All search will be AI assisted.
Why are so many people getting tired of AI? I get that it is overhyped, but it is moving everything forward; it is everywhere. I also suggest that if you don't like Microsoft Copilot, then move to Linux. I've got llama3 running on Fedora and it's open source, so many possibilities. I can build an AI security bot for my system and network, and Discord bots, and on and on. We are going through a paradigm shift and it is difficult to comprehend.
Adorable that you think your life won't be forcibly augmented by AI within the next 12 months.
Thanks, Steve.
3:12 Totally not an NSA backdoor :P
I think for me the cautious optimism comes from their willingness to rethink the whole SoC layout. It felt like they were endlessly copy-pasting the old Haswell-era ring bus architecture, but this looks like a pretty radical departure from that with the memory-side cache. I'm curious what they are using for the global interconnect though; they have been quite hushed about it. Time and reviews will tell...
Good point about the RAM. Going to need bigger buses to get those possible 128GB+ RAM machines properly fed. Will they make the silicon and price leap for it (with Arrow Lake)?
What are you talking about?
I already have 128GB RAM on my Desktop RiG at home with an X670E Mobo + a 7950X3D + an RX 7900XTX
@@Gielderst What speed?
@Overseer476 So, currently the RAM DIMMs which I have in my RiG are:
4x 32GB " G.SKILL Trident Z5 RGB White 6400MHz CL32-39-39-102 1.40V XMP "
All 4 of them are exactly the same!
And currently I have left them at my Mobo's BIOS defaults; my Mobo is a GIGABYTE X670E AORUS MASTER Rev.1.x with the latest BIOS version F30!
And the RAM is running @ 3600MHz CL30-30-30-58-88 1T at the moment at this BIOS default!
I have NOT yet attempted to tweak my RAM.
Because I'm saving up for the newly announced GIGABYTE X870E AORUS XTREME Mobo when it comes out, and after that for a Ryzen 9950X3D when it comes out.
Because I'm a Desktop PC Enthusiast and this is my one and only hobby I'm passionate about. 🤓😅😀
@@Gielderst We're talking about laptops and their RAM bus width as well as total RAM. People are paying $4.7k to buy Apple laptops with 128GB of unified RAM so they can work with bigger LLMs. If AMD, Intel, and Qualcomm want to compete with that, they are going to need wider laptop buses with more RAM.
Desktop is a little different but similar, with people paying $5.6k for 192GB of unified memory on the Mac Studio. With much higher bandwidth than you can get on a consumer desktop.
@tomis181 I'm not familiar with anything Apple, because I've never been even slightly interested in Apple hardware and products.
Due to hearing from others how expensive and overpriced Apple products are, be it iPhones or their Mac computers, and also how they're isolated almost entirely from the PC gaming scene by not supporting PC games on macOS.
But I'm aware that in the PC scene, you have computers which sit above the desktop PC.
Such as the Workstation/High-End Desktop PC.
Which utilize AMD's Threadripper PRO 7000WX or Intel's Xeon W-3400 CPUs, with up to 8-channel DDR5 DIMM slots, up to 96 CPU cores with AMD, and up to 128 PCIe 5.0 lanes.
My point is that I think any of those machines fitted with the flagship CPUs, the maximum amount of supported DDR5 RAM DIMMs, and the flagship GPUs from either AMD or NVIDIA can pretty much beat any Apple Mac computer and wipe the floor with it. 😎😁
Kind of pointless until software exists that uses these NPUs… and I don't mean some useless built-in Copilot or Bard equivalent. Apple Silicon has had a Neural Engine for years now, and when I monitor its usage on macOS it's idle 99% of the time.
“If you build it they will come” gotta have the hardware available if people are going to write software for it. Here’s hoping the silicon doesn’t go to waste.
They could at least release something like a low-power sound equaliser or something like that which shifts the load away from the CPU
But nope
@@tomis181 🤔, how's that going for Apple's Neural Engine now after 4 years of Apple Silicon?
@@carbongrip2108 I don’t have a MacBook so can’t comment there but the neural engine on the iPhone does have a handful of tricks I enjoy. Fair points though and I don’t buy into AI everywhere hype but it will certainly have a solid place in computers from now on.
The ANE is very hard to use; its access is not very open and it's hit-and-miss for programs to get to it - Apple's software ultimately decides whether your program runs on the ANE or not. With the NPU the developers are in control.
It's interesting to see race-to-sleep in the context of NPU operations. I wonder what the expected duty cycle would be on the thermal design of the chassis. It only saves energy per task; if a user then responds by running more tasks, the total power usage can get much higher.
This has been an issue in past Xeons, where race-to-sleep behaviours led to unpredictable performance as the actual power draw under load could quickly become a problem. It caught us out big time in a previous job where systems under high load could pull 2-5x the stated TDP for short periods and get us in trouble with our co-location partner.
I’m also curious how a blade based box with 32 of these would work for a mixed workload multi-tenant host. Not the fastest at any one thing, but can run almost anything fairly well within a reasonable power budget.
For "Ice Lake" (2020), Intel addressed this by disabling C1E auto-promotion and exposing C1E as a separate idle state, allowing better control over power-saving features (see drivers/idle/intel_idle.c).
The Intel Xeon X5xxx race-condition issue was patched years ago in both Windows and Linux, which had a similar problem - your vendors really should know about it in either case and be able to patch or work around it, as it is not exactly a mystery or ancient tai-chi mastery. A workaround has probably existed, as it goes for known problems, for equally as long. Co-located... good luck with that! Much fun... so much! Oh, how I do not envy thee, good day to you sir and God speed - boom-tsh!
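If you want to see which idle states the kernel exposes, or keep it out of a specific one, here's a minimal sketch using the Linux cpuidle sysfs interface (paths assume a reasonably recent kernel; writing the 'disable' knob needs root):

```python
from pathlib import Path

# Each CPU lists its idle states under cpuidle/stateN with a human-readable
# name and a 'disable' knob; writing "1" keeps the kernel from entering that state.
cpu0 = Path("/sys/devices/system/cpu/cpu0/cpuidle")

for state in sorted(cpu0.glob("state*")):
    name = (state / "name").read_text().strip()
    disabled = (state / "disable").read_text().strip()
    print(f"{state.name}: {name} (disabled={disabled})")

# Example (root only): disable whichever stateN reports the name you care about (e.g. C1E).
# (cpu0 / "state2" / "disable").write_text("1")
```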
Something new from Intel eventually!
You mean promises? Yes, new and original.
@@dgo4490 cynical
It really feels like we've hit a new era of competition that is pushing them all to innovate. Hopefully they all survive and keep the competition going.
0:50 what do you mean "before Q3"? We are literally LESS THAN A MONTH AWAY from Q3!
Yeah, he means Q4, no reason to yell
@@Workaholic42 oh, understandable. Have a nice day
Q3 released decades ago, bro. 😉
Maybe intel will do something other than just pumping more wattage into it
Is Intel testing the laptop CPUs in Linux, or am I wrong? Because they have a +/- 10% in the footnotes about the claimed 15% IPC.
That blank space is a mislead. Something will be there. They gotta hold something for launch day.
6:50 You must have one in your lab or lap?
I'm excited for the change. And when will Intel implement backside power delivery?
Thunderbolt share at 20Gbps is way faster than syncing over OneDrive or copying files to and from an external SSD.
Any other information released for the press, other than the pictures that everybody is showing? Like performance? Regards
Do you know if Lunar Lake will be implemented on mobile workstations? TY 🙂
Damn. Wish I didn't buy a Meteor Lake laptop back in March. I think I'm gonna start buying Framework laptops from now on.
Laptops suck.
That's why I have built my own RiG with an X670E Mobo + a Ryzen 7950X3D with a 420mm AIO + an RX 7900 XTX + 128GB RAM + 1250W PSU + 4x M.2 NVMe SSDs + a Full Tower big a$$ Case. 😎
I asked for an NPU dev kit a while ago. More like a PCIe card... But that's kinda interesting.
You da best Wendell - my guess is September, end of Q3
Anyone else notice "More Telemetry" is built into the chip, and marketed in the slide as a "Feature"?
Wait, was Intel's event in a side room of the conference center? Looks like the room was the size of a high school classroom.
Thank you. Looking for info about LGA1851. Hoping for more PCIe lanes (empty hopes) and bifurcation, which AMD has been doing for generations in regular desktops (and I hope they will go to a 2x2x2...x2 scheme, which is interesting with PCIe 5). There can be interesting configurations with 2-4 core cheap and efficient chips in the DIY area. For example, if I want to build something like a simple NVMe NAS, I will use AMD (maybe the new EPYC 4004 with 4 cores), which delivers bifurcation of at least 4x4x4x4 that can give me 4+2 on-board NVMe without any complicated PCIe switches. Hope to see Intel in this area.
ARL will be the same as MTL. The IO, SoC, & iGPU tiles are all the same as MTL & all are made at TSMC.
I have serious trouble grasping why the f*** we don't have 10Gbps Ethernet as a default nowadays, or even 5Gbps. 1Gbps Ethernet got mass-marketed so much faster and became the standard so much quicker than 10Gbps has; even 100Mbps and proper switches became the standard faster, and that required both better NICs and hubs becoming "smart", turning them into switches from stupid collision-ridden hubs.
Given the subject matter, I'm really curious as to how you feel about AMD's keynote. I was actually really impressed personally. They're leaning heavily into AI as well.
You sound (pleasantly) like the Hank Hill of tech!
Striking similarity - lol
Great breakdown of LL.
So on the 15900K, are we getting full PCIe 5.0 x4 NVMe to the CPU and x16 GPU at the same time?
No, we’re not getting that!
@@tringuyen7519 Godammit 🙄
Mate, what are you on???
It won't be called a 15900K.
It will be called an Intel Core Ultra 9 285K, on LGA1851 socket motherboards with 800-series Intel chipsets.
Of course not! You need a $5000 Xeon for that 🙄
Could you do a similar breakdown of Strix Point, the new AI 300 mobile chip from AMD?
Nothing is more pure engineering than Lunar Lake - like a moon shining on the surface of a lake. I'm also hoping for Meteor Lake too. Intel could be the first CPU vendor to bring Copilot+ to the desktop, especially since AMD missed their chance with their Zen 5 desktop CPUs.
I really like the window sill setting you're talking in - a really beautiful and freeing background compared to the indoor settings tech YouTubers always have. Also, I'd like to know what camera you're using, because it balanced the exposure on both sides of your face very well!
there's a lot going on... Snapdragon X Elite, AMD 9000, and I'm starting to get lost in it...
Snapdragon needs emulation to run x86 software. AMD Strix Point doesn’t need emulation. Lunar Lake has only 4 P cores & no SMT!
The whole NPU thing will most likely be as useless as Intel GMA was vs dedicated GPU.
Actually it will be even more useless.
@user-yd1qk9kw9n uhh, isn't a 4090 doing like 20 tokens/s on 30B LLMs? And this is doing 30?
@user-yd1qk9kw9n I tried to find the memory bandwidth on that; it looks like it will be 170 GB/s max, which would mean around 3 tokens/s for 30B models at 8-bit.
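That estimate follows from the fact that every generated token has to stream roughly the whole weight set through memory once. A back-of-the-envelope sketch (the efficiency factor is my assumption; real numbers depend on the runtime and quantization):

```python
def rough_tokens_per_s(bandwidth_gb_s: float, params_billions: float,
                       bytes_per_param: float = 1.0, efficiency: float = 0.6) -> float:
    """Token rate is capped by memory bandwidth divided by the bytes read per token
    (roughly the whole model); 'efficiency' is a hand-wavy real-world derating."""
    model_gb = params_billions * bytes_per_param
    return bandwidth_gb_s * efficiency / model_gb

# ~170 GB/s feeding a 30B model at 8-bit (1 byte/param): a few tokens per second at best.
print(rough_tokens_per_s(170, 30))  # ~3.4
```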
An all-efficient-cores chip will be interesting
not really.
I must admit that I still hold a grudge against Intel for all its anti-consumer anti-competitive practices in the 2010s.
It should be 2.5Gb Ethernet. I will always prefer a more stable cable over WiFi, whenever possible.
Please, for the love of all things, don't say AI! A small piece of me dies every time I hear those two letters said together now..... ahhh, he said it!
Apple’s been baking neural engines into their custom silicon for a long time now.
So I’m sure they’re just a tap away from something revolutionary.
😂
Apple NPUs are not very fast. M4 will have a big upgrade on NPU supposedly.
Yes, but Apple's base-technology software is limited. E.g. no GPU/ANE access from Docker/VMs (unlike native Linux/Windows GPU/NPU access), ANE access is also very shaky (as developer you can only hope your code runs on the ANE, Apple decides on the fly). I really hope that they fix this with WWDC24 announcements.
@@Fordance100 I'm not sure about the M-series chips, but I know as far back as the iPhone X (10) they were touting generational improvements in their neural engines. Clearly some degree of foresight was present and perhaps has been simmering in the background.
Their reputation for not being the first mover, but coming later with a trimmed down and polished goof-proof mass market implementation, may be in play here.
@@andikunar7183 I feel they have a lot riding on iOS 18 and this WWDC. The grievances are starting to add up, and they really do need to turn a fresh corner. They must be feeling some pressure to reveal AI stuff. But I think it's actually probably better to wait, allowing all the other players to blunder and discredit their own reputations while Apple hones whatever they're up to.
With their recent regulatory setbacks with the App Store and USB-C, I would think AI regulation would be top of mind. Maybe they're waiting to read the room before sticking out their neck.
Thanks for the video
Intel still referring to their CPUs as x86, a 32-bit architecture... how cute lol.
Seems like so much space for the NPU - that's for AI, right? Kind of a waste if you're not gonna use Copilot or stuff like that.
Did you say a 1W screen? Where, when, how much?! I really hope you're right about most of this. At least the Xe guy Gamers Nexus had on a few episodes actually knew his stuff and wasn't blowing smoke about some nebulous ideas or concepts: he seemed to be quite aware that Xe was in need of software, and had been one of, if not the, mind behind the past year of driver improvements for the GPUs. If Intel would just un-clinch their tight... grip... on the insane segmentation of every stupid little thing, they would quickly find birds leaving the bush and voluntarily flying into their hand. My money says not Pat.
So excited competition is heating up so much that even Qualcomm is coming to laptops - hopefully one day also custom desktops.
AMD and Intel, I'm so happy. I first got into PCs when Intel was just shitting on AMD in the Sandy Bridge era - we got screwed.
Can anyone explain to me how Intel Arc graphics compare to Intel Xe2?
Where is thunderbolt 5 🤷🏼
10 watts while watching Netflix??????????????
2:16 - I am only slightly ashamed to have found "Stiffener" funny...
If you're excited Wendell then i am too! Computing isn't dead...yayyy 😊
AMD Strix Point seems to do better without such sweeping changes.
I agree. AMD went a much easier route.
they should call it Bacon Bake instead of Lunar Lake
Run intel run
Yeah Intel, keep giving TSMC all of your business. Lunar Lake, Meteor Lake, & Gaudi GPU all use TSMC. US gave Intel $20 billion for this?
Zilog discontinued the z80. Computing is dead.
Not switching to Windows 11
Not buying a shitty desktop CPU from Intel with mobile cores
Do you think Thunderbolt 4 will create issues for Lunar Lake? Thunderbolt 5 is not widely used except on high-end systems. It would be nice to have; I'm sure they will figure it out next gen. PCIe 5 may start to take off next year - 15,000 MB/s read/write, maybe more.
I find it interesting. Despite the last 10 years of relative fuckery, I'm still rooting for Intel on the CPU side to do good work.
Nvidia though? Fuck Nvidia? XD
They do amazing work and there's some really likeable people that work there. However.....
AI: asinine intrusion, absolute incoherence, all information... any other takers?
Abominable Intelligence
Algorithmic Incontinence
AI is the new VR ready
lol, no.
NPUs are here to stay and unlike VR, they are actually useful and practical.
@@shmookins oh I know, but AI mobos just show they are cramming it into everything like they did with VR
Could be here to stay, but also way overhyped. Those of us who actually run models realize this.
Maybe it’s better for those that can spend 6-7 figures on vram.
But for commoners it’s totally overrated
@@shmookins just because you're not paying attention to VR/AR doesn't mean there's no innovation with them.
@@shmookins Businesses will ban this for PCI and HIPAA compliance, and criminals would love to watch you enter passwords and hide eternal self-installing malware inside Recall, since AV software can't find it.
I miss the Pentium 4 and Athlon days, PCs were exciting... Now it's all AI fluff 🥱
I think all this NPU silicon was a last minute addition (relatively speaking)
AMD has had an NPU since their Phoenix mobile APUs, so it's not as "last minute" as you think.
AI, the new cryptoooooooooooooooooo
IMO they killed GPU crypto mining so they could have the resources for AI.
If it was about the environment, like they said, then they wouldn't be doing AI training and inference. Llama3 training alone is using tens of thousands of H100 GPUs (possibly hundreds of thousands of GPUs) at 700W each.
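For scale, a quick sketch of the power draw those figures imply (the GPU counts are the commenter's range, not confirmed numbers, and this ignores cooling and networking overhead):

```python
# Rough cluster power estimate from the figures above.
gpu_power_w = 700  # per-H100 board power, as quoted in the comment

for gpu_count in (10_000, 100_000):
    total_mw = gpu_count * gpu_power_w / 1_000_000
    print(f"{gpu_count:>7,} GPUs x {gpu_power_w} W = {total_mw:.0f} MW of GPUs alone")
```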
Ai can do everything! Except get people to use it...
A step in the right direction, but I still can't buy into the Intel platform - not with AMD as a better option in both power and efficiency, not to mention I don't have to change my mainboard every generation as we do with Intel. Also, these AI neural engines are too NICHE to take up that much precious silicon space, which could instead have been used for more cores, SRAM, or just a smaller footprint (cheaper chip).
I am more interested in Intel. I have a Ryzen 7840HS laptop and I still can't access the NPU, which is really disappointing. Intel's OpenVINO can easily use the CPU, GPU and NPU.
AMD's APU is still better than Intel's, especially the NPU.
You know, I used to think like that too, until I learned what a use case was... and actually tried a bunch of hardware, instead of gleaning my opinions from stock tech-tuber testing.
The chip was designed in Israel.
Wonder when YouTube is going to fix the thumbs-up icon. I noticed even though it isn't present, it still works - but one can't really tell if there are already 1K-plus likes. Anyways, click on the left side of the thumbs-down and you are good to go...
AMD seems to be draining Intel's swamp... lol
Anyone want to believe that there is actually dummy filler die space?
After my failed 13900K I will never buy Intel again. Screw Intel and the $1200 of lost mobo and CPU they caused 😡. I am now an AMD fanboy for life.
What happened? Did it burn??? I bought it 1 week ago 😯
@@chesslive2714 My I/O went out. It was undervolted too! So my USB ports went in and out, and it sometimes failed to boot with a USB drive plugged in. My GPU kept reinstalling itself next. In Linux the driver didn't reinstall and required a reboot for the second monitor?? I realized my PCH was overvolted by 0.5 volts, which damaged the ports. I then decreased clocks to 5.2 GHz and upped the voltage to stabilize it, which degraded it further. In the end my new 7950X3D outperforms my 13900K even with all the undervolts and clocks I had.
I just wish there were a chip with an Intel CPU and an AMD iGPU.
AI is a fad, and this too will come back to bite them.
AI has no real world value. It's a fancy chatbot function
AI is the HD of 2008.
I'm so triggered that you said ARM would take over the world. x86 will never die.
Unscuffed Barnacles
filler smh add more cores and npus wow sheesh
Keep your eye toward the coast, watch out for that invasion!!
Intel is toast; this can't compare to AMD Strix or AMD 9000 processors.
Intel is dead because they may be able to go from 100 watts to 20 watts but with ARM & Apple they will be able to get under 1 watt of power with new process nodes in the future. This is what is necessary for glasses as we move into the "Post-PC Era".
All this so Microsoft can better spy on you....
I'm still waiting for Gamers Nexus to do the testing on these new chips and find out just how much Intel is lying. I understand the silicon lottery can have something to do with it, but still...
There are lawyers, you know. Intel showed comparisons with their own products, but it is true they mentioned Qualcomm X Elite processors that have already hit the market. I don't think Intel is allowed to just claim whatever they want in an event like this.
The sum of the Intel CPUs' power vs. performance is jacked up by too much Java and not enough chill pill. The Nvidia GPUs have the same recipe as well. Can't understand why anyone would justify burning up 1,000 watts for their Minecraft rig. LoL!!
Ah, I see you've never touched hardware before.
"What are Intel fabs busy with if not this?" Churning out RMA replacements for their failing CPUs as it turns out
Who cares, AMD new chips are going to smoke Intel.
Not that much. Moore's Law Is Dead just went over it and pointed out that pricing is what will make or break the new AMD CPUs, as they are going to be just behind or equal to Arrow Lake, but they may be way more energy efficient - and if they price it decently then they can smoke Intel as you say, but if they decide to get greedy instead, well, time will tell.
New AMD chips are 15% faster. No real new technology. Intel finally moves to 3nm and below after 3 generations on 10nm. I think AMD's power efficiency advantage is gone.
@@Fordance100 AMD actually has new technology, and Intel only ups the clocks without any sense. AMD CPUs are waaaaaay more efficient. Compare the 7800X3D vs the 14900K. The 14900K eats more than 300W 😂.
@@Fordance100 But the only thing we want is faster... power and energy, 90% don't give a F...
@@KillaGorilla-l7z The point is that this time it's a completely different architecture for Intel. What you are doing is like saying "all AMD does is make hot messes like Bulldozer".
Times change. Intel will also be the first to have backside power delivery.
Whoever is behind generally creates hotter CPUs to try to make them look more competitive. It used to be AMD when Intel was ahead. Then it was Intel when AMD was ahead. And now we are in a new paradigm.
Only 4+4 cores. AMD's upcoming mobile offerings are stronger and more compelling even in TOPS (50), and that with August availability. Intel needs to have a big uplift with their Arrow Lake desktop CPUs over 14th gen to compete with Ryzen 9000, or they're in big trouble.
bruh... still TB4
*devolution
We don't care about Intel. Will never buy one.
I don't trust Intel with innovation. They are too lazy. There's no way I will buy this crap chip over ARM chips.