Vector extensions on ARM are awesome. Tenstorrent is taking a similar approach with RISC-V, but with a sole focus on vector instructions and data transfers.
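For anyone wondering what these vector extensions actually look like in code, here is a minimal sketch of a NEON dot product (my own toy example, assuming AArch64 and <arm_neon.h>, not code from the video):

#include <arm_neon.h>

/* Dot product of two float arrays using 128-bit NEON vectors.
   vfmaq_f32 does four fused multiply-adds per instruction. */
float dot_neon(const float *a, const float *b, int n) {
    float32x4_t acc = vdupq_n_f32(0.0f);    /* four-lane accumulator */
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);  /* load 4 floats from a */
        float32x4_t vb = vld1q_f32(b + i);  /* load 4 floats from b */
        acc = vfmaq_f32(acc, va, vb);       /* acc += va * vb */
    }
    float sum = vaddvq_f32(acc);            /* horizontal add across lanes */
    for (; i < n; i++)
        sum += a[i] * b[i];                 /* scalar tail */
    return sum;
}

Newer Arm extensions like SVE2 and SME apply the same idea to wider vectors and matrix tiles, which is where the claimed CPU inference speedups come from.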
5:02 Nice showing of actual llamas there 😂 Please add more funny material like this in the videos so that they're entertaining as well as informative to watch.
🤣
I would like to see AI used to go over all uploaded videos on YouTube and normalise the volume levels, so that I don't have to adjust them per video. Hint hint! :-)
Thanks for your video, but I can't see the links in the description?
Oops, sorry about that. I have added them now. The main Arm developer hub is here: arm.com/dev-hub
@GaryExplains Thank you, but how do we install/run the Android version you showed in the video?
Nice to see an application running locally instead of always in the cloud. I am all for giving people the power, not big tech companies.
The video seems like an advertisement.
So which mobile CPU is more future-proof in 2025: the Snapdragon 8 Gen 3 or the Apple A18?
How do I get this on my phone like that? Thanks.
Nothing wrong with a sponsored video, Gary, but with respect, you have to keep editorial control. ARM's marketing team makes you sound like a marketing AI. There wasn't much of the "explains" in this one.
He didn't say it was faster on the CPU than the GPU or NPU. 🤦🏽
The future is looking brighter.
I'm pretty sure that running on a CPU is pretty much the definition of not being accelerated 💀 Vector extensions like AVX have been on x86 for a long time, yet there are massive differences between something like a 7950X and a 4090. Your video is selling a false promise: running AI will always be slow on a phone, and running it on the CPU will be even slower.