I guess I didn't explicitly say this, but the difference is that a GPU is more general purpose and power hungry: it can execute FP32 and FP16, has frame buffers, and generally has a wider set of specialized math functions and more control-flow operations. NPUs are primarily a power-saving mechanism so the GPU isn't being taxed for less complex operations.
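If it helps to see it from the software side, here's a minimal Core ML sketch (my own illustration, not anything shown in the video; the model filename is made up) of how an app asks for the Neural Engine rather than the GPU:

```swift
import Foundation
import CoreML

// Hypothetical compiled model path, purely for illustration.
let modelURL = URL(fileURLWithPath: "MyModel.mlmodelc")

let config = MLModelConfiguration()
// Prefer the lower-power ANE and keep the GPU free for rendering;
// .all would instead let Core ML split work across CPU/GPU/ANE per layer.
config.computeUnits = .cpuAndNeuralEngine

do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print(model.modelDescription)
} catch {
    print("Could not load model:", error)
}
```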
So, are they just binned GPUs used to avoid throwing out imperfect dies?
@@RogerZoul No, because with Apple silicon the NPU, GPU, and CPU are all integrated into the same die.
@@SalivatingSteve Yes, but some of them may be disabled, just like some binned chips have fewer CPU cores than others. They don't have to toss the entire die if they disable some cores. Perhaps they can just take the disabling deeper to make a GPU into an NPU.
In general, the Apple Neural Engine (ANE) is designed to handle tasks that require extensive mathematical computations, such as image and speech recognition, natural language processing, and other AI-driven functionalities. It offloads these tasks from the main CPU and GPU, allowing for more efficient processing and improved performance.
An NPU is like a GPU, but instead of graphics it's optimized for processing machine learning tasks.
This new modern computing stuff is just too advanced, I’ll stick to my PowerMac G5, it’s already 64-bit! Great video though, very informative!
I have to ask, do you actually still use your G5 regularly? I have my G4; it's more of a thing I have that I rarely use but do like having.
Yes and no, at least not yet. I've had the PowerMac for a little while now but I haven't had much time to really dive deep into it. I plan on filming the cleaning and installing some upgrades; for starters, a hard drive would be nice as it doesn't have one. My iTunes setup is also currently less than ideal, so I want to change it up and accommodate the G5 in it. It currently just has my 2008 black MacBook on Lion. While I do use the 2012 Mac Mini with upgraded RAM/SSD, I only really use it as a work machine for TH-cam, although I would like to try and use an older system/Mac OS for that also. Don't know when that'll happen but it'll be fun to try out older Final Cut and other old Pro apps.
@@TheOriginalCollectorA1303 Speaking as someone who had a G5 right about the time I was wrapping up college, it was in that strange era of early HD video. 720p was pretty brutal on FCP5. I had a dual 2 GHz PCI-X variant. I think the reason my channel is so Intel Mac heavy, despite my being from the mid-to-late PowerPC era, is that those machines were always having to make compromises.
I know PowerPC isn't perfect, and even the early Intel machines can go beyond the best of the G5, but it's still an interesting architecture and a cool system to use. Plus upgrades don't cost what they once did, especially larger storage drives or maxing out the RAM. I do like the Intel systems and I would like to get some more early models. The fact that a Mac Pro which only came out a couple of years after the G5 can still run a modern OS speaks to just how powerful (and expensive) it was back when it was new. It's just a shame that the golden era of Snow Leopard/Windows 7 is over; it's just not the same. However, if I can utilize those versions once again, why not? For PowerPC though, I'd definitely try and get a good MiniDV or similar camcorder, way better than trying to force modern video in terms of speed. Plus FireWire is still amazing.
Me too I have financial difficulties...
Great video, this cleared it up for me!
And I've not heard the words "mantissa" and "exponent" since college haha 😂
Likewise, I don't hear the term "back propagation" used in all the AI/Neural Network/Machine Learning/LLM videos.
Very long and drawn out video.
That was because the material was over my head. But in the end I have a good basis for what the ANE, Apple Intelligence, and ML are for.
I did learn why my iPhone seems to take forever to take images now. Yes, these processors are fast, but when tasked with 100 tasks for an image, that'll take some time to complete.
Thank you for your candid thoughts.
I appreciate the feedback. I probably should have jumped in with a very basic explanation of a CPU vs GPU vs NPU with a graphic on the screen before diving into the tech details.
My audience skews fairly geeky, but as my channel expands I need to try and "onboard" a bit without slowing down the flow when doing these sorts of videos.
@@dmug I am a Geek and proudly so but I don’t know 🤷 everything. Geez 😒. Joke BTW.
@@WarriorsPhoto Never know if people are joking or not, so I try to be open to feedback. I was not offended or anything like that, just looking to make the best content I can.
@@dmug That's why I mentioned the Joke. Figured it would assist with understanding what was intended.
What I imagine would set the NE apart from CPU and GPU is the NE’s need to continuously update its log and not clear that log accidentally with updates. If it had to start from scratch every time it encountered a complicated solution, it would not in fact be operating like a ‘Neural’ anything. We learn and remember, setting our minds free from the task of doing that again every time the same brick wall looms.
Awesome overview! I really like these breakdowns; loved your CUDA vs Apple GPU videos.
A very fun topic. Thanks for the video 👌🏼
Thanks for watching
You did not, in fact, link the memory management video in the description.
Technically, each packed tensor works the same; the hardware just disables overflow carrying between the packed elements. Which is useful because the CPU/GPU works on 32- or 64-bit words, so packing quantised values can multiply speed by 2-4x.
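Roughly what that packing looks like (my own Swift illustration of the idea, not how the ANE actually implements it): four 8-bit lanes share one 32-bit-wide value, the lane-wise add keeps overflow from spilling between neighbours, and you get four results for the width a single FP32 value would use.

```swift
// Four 8-bit lanes packed into one 32-bit-wide SIMD value: one add yields
// four results, and &+ wraps per lane so overflow never carries into a neighbour.
let a = SIMD4<Int8>(1, 2, 3, 4)
let b = SIMD4<Int8>(10, 20, 30, 40)
let packedSum = a &+ b              // SIMD4<Int8>(11, 22, 33, 44)

// The same 32 bits spent on one FP32 value gives only a single result per add.
let single: Float = 1.0 + 10.0
print(packedSum, single)
```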
Best explanation. Thank you!!!
very good video.
Gave you a LIKE just for the title 😊
Just to add another wrinkle, there are TPUs and 'edge accelerators' also available from companies like Coral and Hailo that can work with desktop systems (or even Pis). Also, Patrick Boyle is great.
I'm still a bit confused how exactly NPUs are different from GPUs.
GPUs are more general purpose and power hungry: they can execute FP32 and FP16, have frame buffers, and generally have a wider set of specialized math functions and more control-flow operations.
Until ARM, desktop PCs have leaned heavily on the GPU for these sorts of operations, at the cost of power.
@@dmug got it thanks! awesome video :)
NPUs are less fun than GPUs.
@@darylcheshire1618 Yep.
@@dmug To put it another way, GPUs were meant for graphics but happen to be good at the math that ML uses; NPUs are optimized for the math and don't need to accelerate graphics, which saves a lot of power.
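For anyone wondering what "the math that ML uses" mostly boils down to: multiply-accumulates over big matrices. A toy Swift sketch (illustration only, nothing to do with Apple's actual hardware) of the inner loop an NPU essentially hardwires at low precision:

```swift
// Toy matrix multiply: the multiply-accumulate (MAC) loop behind most ML layers.
// NPUs dedicate silicon to running huge numbers of these in parallel at
// low precision (FP16/INT8), which is where the power savings come from.
func matmul(_ a: [[Float]], _ b: [[Float]]) -> [[Float]] {
    let rows = a.count, inner = b.count, cols = b[0].count
    var c = Array(repeating: Array(repeating: Float(0), count: cols), count: rows)
    for i in 0..<rows {
        for j in 0..<cols {
            var acc: Float = 0
            for k in 0..<inner {
                acc += a[i][k] * b[k][j]   // one multiply-accumulate
            }
            c[i][j] = acc
        }
    }
    return c
}

let result = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(result)   // [[19.0, 22.0], [43.0, 50.0]]
```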
Before: Patrick Boyle
After: RAPTRICK BOYZZ
My man Patrick goes hard.
Useful video
Saved it
Dope background music.
Thanks, I like to reflip the samples used in classic hip hop for background music and original stuff.
I'd like to see how the M1 and M4 laptops compare with each other doing AI tasks, as the TFLOPS from M1 to M4 is a massive increase. I'm guessing M1 AI will be mostly done on Apple servers compared to M4 laptops. I was looking at getting an M1 laptop at some point as I'm generally PC, but like to dabble.
I'll be curious too. The issue is that a lot of the more avant-garde stuff, like running Stable Diffusion locally, thus far doesn't really use much of the neural engine. On my M1 Max with Draw Things, the neural engine doesn't do a lot when enabled.
Who actually manufactures the GPU and the memory for the Neural Engine for Apple?
TSMC. The neural engine design is in-house, or at least not listed as anything other than that. The GPUs up to the A10 were Imagination Technologies PowerVRs. I believe Apple licensed that as the starting point for their current architecture, again fabbed by TSMC.
You still didn't say what an NPU is.
It's a math coprocessor for FP16 tasks, designed for power savings. I should have been more explicit.
I have the Asus Zenbook Duo; Asus refers to the NPU in a vague way.
Links to other videos are not currently included in the description as claimed. 🙂
Good catch, added and thanks!
Added these two:
What are Apple's GPU cores?
th-cam.com/video/-TOdEjcFldI/w-d-xo.html
Mac Mini M1 vs Mac Pro 2013 Benchmark Battle!
th-cam.com/video/Gza7ebuT7DA/w-d-xo.html
@@dmug I enjoy your videos and I know it’s a lot of work to make them. My pleasure.
It is not called an "ane". It is an NPU/TPU. And it will mainly just use the NPU if it has one.
Not sure if you watched the entire video, but the ANE is Apple's marketing term, and I explain that the ANE is a type of NPU around the 6-minute mark. Apple's ML research papers use the "ANE" acronym.
Here are a few examples of Apple using the term ANE:
machinelearning.apple.com/research/neural-engine-transformers
machinelearning.apple.com/research/vision-transformers
So in short, it is called the ANE, akin to how Apple refers to motherboards as "logic boards". More viewers are likely familiar with the term Neural Engine from Apple's marketing than they are with Neural Processing Units.
Patrick Boyle is in da house
I can’t believe I watched the whole ad about the charger… Lol
I put chapters in so people can skip around it, but it really is a good charger.
"Generative AI for uninspired images" & 11:30 PREACH! The generated images they showcased at the event were horrendous, bland, and creepy
The one image of the woman who looked frazzled was pretty bad, and they decided to use it during the presentation, suggesting that it was a "good" example.
Generative emoji strikes me as an acceptable thing, as it'd be trained on limited data and we've already allowed for options on emoji like gender/skin tones, so might as well let people combine them. The other stuff is just kinda gross.
Matrix multiplication, Fin.
Neural engine = 3 trillion dollar company.
If they just made a computer that is nothing but neural engines, they'd be worth 4 trillion.
Ok, so now, as of yesterday, Nvidia is the world's most valuable company.
Did Elon really block you?
I love that this is being asked but no, all of those emails are jokes.
Apple absolutely F'd up shipping 8 GB Macs for the last several years! Dead-end computers. Not usable in an AI world. 😂
It’d be hilarious if they didn’t charge through the nose for RAM… which I did for my M1 Max
Nvidia bought PhysX
motor head
It's a special chip that steals your ideas and sends them to central so they can be patented. Very useful for industrial espionage.
That’s certainly a take…
Horribly explained
Yes.
kbye
Yea, the antisemitic caricature wasn't needed. Go watch John Oliver and leave YT alone.
What?
@@dmug Sam Altman