Where did it say they send most stuff to ChatGPT? Did you watch some imaginary keynote on Elon's X?
Please don’t do tech videos when you don’t understand tech. Or in that case, don’t do AI videos when your knowledge is at best good enough for a little small talk with friends.
If you don't like my videos, just don't watch. Nobody is forcing you to be here.
Obviously you didn't listen very carefully to the keynote, or didn't understand it.
Apple Intelligence runs on 3 hierarchical tiers:
1. Edge AI on-device: an LLM running on the SoC's Neural Engine.
2. Apple's Private Cloud, running in Apple data centers on Apple Silicon-based servers.
3. Finally, if the Private Cloud determines that the query would be better served by OpenAI, the user is asked whether they would like the query forwarded to ChatGPT, and only _if_ the user assents is a bare-bones version of the data and query forwarded to OpenAI to complete the request.
Any time data or a query travels over the internet, it is encrypted and anonymized, and _not_ logged, stored, or profiled.
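To make the tiering concrete, here's a toy sketch of the escalation logic those three tiers imply. Every function name and heuristic here is invented for illustration; Apple hasn't published this routing logic as an API, and the real capability checks are obviously far more sophisticated than these stand-ins.

```python
def route_query(query, user_consents_to_chatgpt):
    """Try the three tiers in order; return (tier, response)."""
    # Tier 1: the on-device LLM on the SoC's Neural Engine.
    if can_answer_on_device(query):
        return ("on-device", f"on-device answer to: {query}")
    # Tier 2: Apple's Private Cloud on Apple Silicon servers.
    if private_cloud_can_answer(query):
        return ("private-cloud", f"private-cloud answer to: {query}")
    # Tier 3: only with explicit per-request consent does a
    # stripped-down query go out to ChatGPT.
    if user_consents_to_chatgpt:
        return ("chatgpt", f"chatgpt answer to: {strip_personal_data(query)}")
    return ("declined", None)

def can_answer_on_device(query):
    # Stand-in heuristic: short, simple queries stay local.
    return len(query.split()) <= 5

def private_cloud_can_answer(query):
    # Stand-in heuristic: anything not needing broad world knowledge.
    return "world knowledge" not in query

def strip_personal_data(query):
    # Placeholder for the anonymization step described above.
    return query.replace("my", "the user's")
```

The point of the sketch is just the ordering: each tier is tried only after the one below it declines, and tier 3 additionally gates on user consent.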
For Tech Heads, you guys aren't very tech savvy.
Exactly, the model they explained seems to have gone over the heads of almost all these tech YouTubers / influencers. I guess it's just too tempting to frame it differently to get clicks.
@@davidzwitser Not to mention Elon …
Tech Heads is actually just one person - me. I'll overlook the patronising tone as I think there is actually some interesting content worthy of discussion here...
So yes, I understand the tiered model you've outlined. However, it still strikes me as a convoluted architecture that's likely to lead to increased latency. I could be wrong here, but since we never saw a real-time demo, we can't know for certain.
We also have no real detail on what data was used to train Apple's model. My assertion is that they've likely been cautious in their approach to data scraping, that the quality of their model has suffered as a result, and that it is therefore inferior to GPT-4o in terms of performance.
If this is not the case, then it's not clear to me why they would partner with OpenAI at all. If they've created a high-quality model that can run entirely on-device or within Private Cloud, why even give anyone the option to use ChatGPT at all? Why would *any* query be better served by OpenAI?
@@tomedwardstechnews Sorry … when you started _sending so much of your data to ChatGPT …_ it was apparent that you didn't understand that most queries would be satisfied by the first two tiers - actually going to OpenAI's ChatGPT is an exceptional case.
Most queries will be satisfied by the on-device LLM, and a request won't go to Apple's Private Cloud at all if the local LLM can satisfy it.
ChatGPT will be consulted only if the user agrees and Apple's Private Cloud can't satisfy the request … and OpenAI will get as little data as possible to fulfill it. Most likely this will contain no user data … so no, ChatGPT will _not_ be getting your personal data.
You can think of tier 3 as an external LLM provider, much like Google is a search provider. Apple will most likely allow a whole host of 3rd-party chat providers in the future, provided they agree to encrypted, anonymized connections that are neither logged nor profiled.
Apple has enumerated their training data, and I believe they are using high-quality news services that Apple is paying for - they don't want to get stuck with low-quality X posts or accusations of IP theft. They don't even appear to want to go near ChatGPT's voices, probably since OpenAI has, quite questionably and probably unethically, sourced voices from god knows where - see the Scarlett Johansson case.
That's probably why there's a clear demarcation between Apple's LLMs and 3rd parties like OpenAI … they want to make it clear when answers are coming from a 3rd-party source.
3rd-party AI providers have a wider scope of training, since they scrape the entire internet, ripping off information from any number of unsuspecting sources. Because of that, they can answer diverse queries - like which recipes can be made from potatoes, onions, garlic, and mayonnaise - and it won't be Apple ripping off Aunt Sally's copyrighted potato salad recipe from the Home Cooking site.
Really, Siri is mostly meant to be a personal assistant … the example of Craig getting an email about a meeting being moved is the best example.
th-cam.com/users/liveRXeOiIDNNek?si=cCsAfoWWm79fIZl3&t=4240
He gets an email about a meeting being moved to later in the afternoon and wants to know if he can still make it to his daughter's play in time. The biggest strength of the on-device LLM is its deep knowledge of his personal context: it can update the meeting time in the calendar, know who his daughter is, search for messages and emails from her, open the PDF attachment about the play, determine where and when the play is, and use Apple Maps to project the travel time from the end of his meeting to the venue.
_That's_ the kind of information that only his phone can know since all his personal data is stored on his iPhone.
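Stripped of the AI, the final check in that demo reduces to a simple scheduling test. Here's a toy sketch of it - all data, function names, and the fixed travel estimate are invented for illustration; the real system would pull these values from Mail, Messages, and Maps on-device.

```python
def find_play_details(messages):
    """Scan local messages for the play's venue and start time."""
    for msg in messages:
        if "play" in msg.get("attachment_pdf", ""):
            return msg["venue"], msg["start_min"]
    return None

def can_make_it(meeting_end_min, venue, start_min, estimate_travel):
    """True if travel from the meeting reaches the venue before curtain.
    Times are minutes since midnight; estimate_travel stands in for a
    Maps-style routing query."""
    return meeting_end_min + estimate_travel(venue) <= start_min

# Made-up local data: one message with the play's PDF attached.
messages = [
    {"attachment_pdf": "school_play.pdf", "venue": "Town Hall", "start_min": 18 * 60},
]
venue, start = find_play_details(messages)
# Meeting now ends at 16:30; assume a fixed 45-minute drive estimate.
print(can_make_it(16 * 60 + 30, venue, start, lambda v: 45))  # → True
```

The interesting part isn't the arithmetic, of course - it's that every input to it (the rescheduled meeting, the daughter's identity, the PDF, the venue) lives only on the device.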
@@tomedwardstechnews I think this part of the keynote where they introduce ChatGPT explains that a bit: keynote 1:36:27: "Still, there are other artificial intelligence tools available that can be useful for tasks that draw on broad world knowledge, or offer specialized domain expertise. We want you to be able to use these external models without having to jump between different tools. So we're integrating them right into your experiences. And we're starting out with the best of these, the pioneer and market leader ChatGPT from OpenAI, powered by GPT-4o."
In context, I understood that their own models are specialised in intent recognition and in these operating-system-level personalised tasks (which are of course very privacy-sensitive). That is what I think they want you to think of as "Apple Intelligence". Besides that, they also seem to have their own text and image generation / manipulation models for tasks like rewriting text and making their very AI-looking images. No idea how much they can do on-device and how much needs to go to the cloud.
Then besides that, they're building in support for third-party AIs (hence the "starting with ChatGPT"). We don't have AGI (yet, I guess), so not every model will be good at everything. And we already use AIs like ChatGPT via other interfaces. They're just tools to use.
I do think your idea is interesting that they might have "outsourced the messiness" because they didn't have enough data. That could be somewhat true under the presupposition that they wanted to make an in-house AI like GPT-4o but couldn't - though that is not a very sturdy presupposition. I see this more as a way to support third-party software via a nice interface, which is what most computing platforms are about.
I think you didn't watch the keynote.
Apple Apple Intelligence
Hoping I can get it to direct me to the nearest ATM machine
@@crae_s that is amazing
John Oliver
Oliver John 😂
I’ve had that one before. Also get Louis Theroux a lot
Ringo Starr