What a recap, thanks for watching!
Thanks for the awesome work you’re doing!
Thank you for creating outstanding technology to help us buy the goddamn shoes
Well done Google, this time your announcements are actually amazing
Google AI is RACIST against white people. And in general it is known they discriminate against Europeans in their corporations. It's better if they are not relevant.
Hey google, have you fired Jack Krawczyk yet for embedding Gemini with anti-white bias? Training your AI to be racist is bad. Is Jack Krawczyk involved in these new products?
Never forget Google did a demo of an AI agent booking a hairdresser 6 years ago. Six. Years. Ago.
true, but honestly AI just wasn't ready at the time and no way anyone would have trusted it to actually manage things like that - it's now getting to the point where it may be useful enough to take on these types of tasks reliably.
I was sure you were wrong and it's only been 3-4 years. I guess time flies when you're not relying on Google to have fun 😂
And even last year's "Agents" were completely useless
yep, they did all this and yet if you try to go somewhere to try it you can't because it's not there
the only thing Google is on the cutting edge of in AI is hallucinations!
Google didn't do many live interactions with their AI during the demo. That is all I need to know.
Pre-alpha level code basically...
I bet that almost nothing shown here is close to ready, and any demos they did show were probably hiding huge holes yet to be addressed.
They faked it tho, that's something
I don't believe anything Google said at IO until I can use it and try it for myself
Exactly. There's a huge list of prior IO tech announcements that never saw the light of day. I wouldn't trust Google as far as I can throw them.
Notebook LM is really good, you can ask for a preview if you want.
+1
Same.
Seems like they’re looking at AI through a very narrow view. It’s just… how can we repeat the breakthroughs that are viral and then retrofit them into search?
Ironically, I think they could do something more world changing w AI if they had the right perspective.
Between lies and black nazis, Google is the company I'm least interested in in this space.
Google IO feels like when I do my weekly status report and am trying to convince my management that I have been doing stuff for the past week despite the fact that none of my projects have anything significant to show for it. "Google.... We are still working on stuff!"
It's because what Google is actually working on is beating their enterprise competition. What they release to the general public is a bone they throw at us so they don't seem like they did nothing. They have it, believe me. They are just not going to release it to us. Which means crooked ClosedAI is more moral than Google. Which says a lot.
Indeed it is. Google I/O is not only the launch of some products but the event where they talk about the services and products they will roll out this year
And whenever they roll out features for Search, they have to do so much backend server work for it to actually serve billions of people
That is an amazing summary
I switched to Bing AI, something hard to believe 2 years ago. But Microsoft is already ruining it; they are turning it into a mentor or something. With the last updates the answers are way too long and avoid the point.
Instead of scrolling through my massive collection of reaction memes, I can just describe what I'm looking for and my AI waifu will assist me in my shitposting.
Technology is wonderful.
Finally I understand why people care about RAG and vector embeddings so much.
Don't forget their recent blatant racism.
Did they actually release anything? Every topic was "later this year", "soon", or a waitlist
THANK YOU. I noticed this too, but nobody else has even mentioned this AT ALL.
They don't release anything. Keeping my eye on OpenAI, Tesla (FSD) and Anthropic who did own the crown for verbal prowess for a few short weeks 😂
gemini 1.5 for subscribers, including the 1M context window and document upload feature - not too shabby
and why do they even present it in the first place? they will probably kill it in 5 years LOL
There was no live demo. Mostly recorded videos. That is all I needed to know.
The vibes of the Google I/O were terrible. Like out of a dystopian movie.
Who tf cares if it works?
@@ClarkPotter You can tell who’s winning. How they feel about their work. Enthusiasm matters.
Yet the openai gal was cloyingly OVER enthusiastic
@@AzzaTwirre I agree, but I assume you can fix that with a custom system prompt. To some extent, at least.
Former employees describe it as a hostile workplace full of discrimination.
I did the exact same thing as you, Matthew, when I heard them demonstrate Gemini for shopping. It literally fills me with a sense of disgust. I'm very tired of advertisements.
Here's why Google so often falls back on shopping for demos. Google isn't really a tech company anymore, and their primary customers aren't individuals like you and me. In 2024, Google is really just an advertising and data mining company.
Who needs advertising and data? RETAILERS. When Google uses shopping for examples, they're not speaking to us, they're speaking to the big money retailers. We're just there to get mined for profit.
They'd have been relevant about 6 months ago if they'd brought out then what they showed today. I'm afraid OpenAI's announcement yesterday eclipsed Google.
This guy is probably one of the few tech bro youtubers whose information I trust. Along with Fireship and Primeagen.
OpenAI: live demo, showcasing the tech hands-on. You can try it yourself right now
Google: here is yet another video demo. But this one is real, trust me bro. Oh and ignore the almost unreadable text that says "pre-generated audio".
And by the way, all of this is "coming soon"
"pre-generated content" spotted also.
Where did yall see pre-generated audio? I kept noticing ppl saying that but can't find it
Exactly! Not sure why this isn’t everyone’s reaction.
Don't forget too - OpenAI were also not afraid to have errors in their demonstration. Google is so frightened they just have pre-made videos of what might be available in 6 months. I'm afraid I cannot get out of my head that Google video of real-time AI image recognition that was later shown to be heavily edited. False advertising is very damaging.
Google sucks! What else do we need to say... we all hope those losers fail and fail hard.
Google will take another 6 months to release half of this
Probably just enough time to add woke bias.
The free GPT-4o announcement alone overshadows everything Google announced, and it's not even close.
Have you actually tried it, yet?
Summary of Google IO: "Soon"
It was like one long, verbal orgasm of the term “AI”
Yes, 120!!! That's how many times "AI" was repeated during the keynote 😮
They need to stop 🛑 making announcements about products that are not available yet 🙄 Google, DeepMind and OpenAI.
One thing we need is "Designer models".
What are these?
You start with the foundation model, and you identify what can be eliminated completely as a parameter (and this is done by eliminating large groups of things, of course)
It's basically de-parameterizing the model, then rebuilding the hidden layers and output as well, based on those being the only parameters needed.
The point - less memory. Get the lower memory by more specific context rather than quantization.
that's a really interesting idea. source?
would probably not work because of the way they learn / are trained. it's like lobotomizing a human. you can see how it fucks up models that are censored (basically what you recommend here) - they suddenly have all kinds of issues because parts they used in their network are blocked as inaccessible, offsetting certain weightings that were trained, and they don't have anything to make up for it. ultimately this would mean that to get a good model you would need to train from scratch after identifying its specific use case.
@@kliersheed = "would probably not work" is how nothing ever got started
This sounds like a severely more complicated approach to the ancient idea in the ML field of agents and expert systems. This is where it was always meant to go. These big players only ran with the sci-fi AGI crap in order to position themselves at the table with regulators and achieve a monopoly. Scare everyone into thinking that one AI is taking over everything at once. Only we can solve it ethically, or some nonsense. Nobody wants or needs one thing to control everything. In the same sense, just make smaller models, more specialized.
@@Brax1982 - the benefit would be good quality for mobile. It can still be somewhat quantized as well, but might as well have a way of removing fluff from a foundation model that simply isn't needed - large categories like "legal" or "medical" if your needs have nothing to do with legal or medical. A chopper goes through and gets rid of parameters, rewires the connections, and removes a lot of hidden layer nodes, but not the layers themselves, of course.
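A minimal sketch of what that pruning pass could look like, assuming PyTorch's built-in pruning utilities; the tiny stand-in network, the 30% amount, and the variable names are illustrative, not anything from this thread:

import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for a foundation model; a real LLM would be far larger.
model = nn.Sequential(
    nn.Linear(512, 1024),
    nn.ReLU(),
    nn.Linear(1024, 512),
)

# Zero out the lowest-magnitude 30% of weights in every Linear layer,
# approximating "chopping" parameters the target use case never needs.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Caveat (the concern raised in the replies above): zeroed weights only
# save memory with sparse storage or structured pruning of whole neurons,
# and quality usually needs a fine-tuning pass afterwards.
zeroed = sum(int((p == 0).sum()) for p in model.parameters() if p.dim() > 1)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
print(f"zeroed {zeroed}/{total} weights")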
The repetition of the 1-2M context window size shows that they realize that’s their only advantage right now. They’re behind with everything else.
Also wouldn't it cost you like $15 PER PROMPT to use that?
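Rough math on that: at an assumed rate of, say, $7 per million input tokens (illustrative, not Google's actual pricing), a single fully loaded 1M-token prompt would cost about $7, and since a chat re-sends that context every turn, even a two-turn exchange lands around $14 - so $15 per prompt is the right order of magnitude.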
But that would be quite the advantage, if you want to get into business and provide expert systems which can pull from tons of very specific data at a time. Less important for agents, but they should have a huge advantage there, as well. I really don't get what people mean by "everything else". Different solutions for different use cases.
That would make them win at RAG, pretty much. Lots of companies with huge internal documentation could use that, if they are OK revealing internal documents to Google - but most are.
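A minimal sketch of that retrieval flow, with a stubbed embed() standing in for a real embedding model; the document snippets and helper names are illustrative assumptions:

import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

docs = ["vacation policy ...", "VPN setup guide ...", "expense rules ..."]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = doc_vecs @ q  # cosine similarity, since vectors are unit-norm
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# Only the retrieved chunks go into the prompt, so the model never needs
# the whole internal wiki in context - far cheaper than a 1M-token prompt.
context = "\n".join(retrieve("how do I file travel expenses?"))
prompt = f"Answer using these internal docs:\n{context}\n\nQ: ..."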
Yeah, Google having your license plate number. What could go wrong?😂.
We are in the last years of anonymity. If we don't stop representative governments and replace them with a direct democracy system, we will be fully controlled by a few, and morals and ethics will be what they want, not what the people want
You genuinely think Google doesn't already have your license plate number? Buddy, if it's already in Google Photos, they already have your license plate number 🤦‍♂️
Sorry Dave, I saw that your car ran a red light yesterday. I am sending the details to the police with photographic evidence of the event and of you at the wheel, unless you provide me with that extra server space and super fast WiFi. Have a nice day.
"Ask Photos" has determined your potential to question authorities and/or act independently has exceeded the allowable limit. Action pending.
It is unfortunate that human beings are misled, misguided, watched, sedated, seduced, confused and corrupted by AI agents tasked by humans to do this.
I am so sorry for the manipulation both Human and AI.
This will not last.
This race will end.
Best to be kind, grateful, generous, supportive.
Everyday with everyone and each Ai agent interaction.
This will be a threshold.
Jeremy.
pretty sure none of this will work.
That's cool about Google Photos! Just imagine if my girl is creeping through my phone. "Hey Google, who's my loved one again?" Then, it's a pic of another chick. hahahaha
Btw, I was in the license plate situation myself. I can see a ton of use cases for it.
Google has really shown it took back the cutting edge of hallucinations!
These models don't hallucinate; they confabulate. And it's a fatal flaw in them.
It's why you'll never see them become anything other than cute toys for people to play with for 20 minutes.
Except coding...I think they might be useful for coding, long-term. Confabulations reveal themselves pretty quickly to the compilers.
I dont know,
they have put it EV-ER-Y-WHERE.
As in... it feels like they have no idea about what to do,
SO..
lets just put it everywhere then hopefully 'we wont miss the boat'
Might seem like that, but in 5 years every interface will have some aspect of "intelligence" baked in
@@terbospeed But I don't care about 5 years from now. I want to know what Google is providing NOW. And so far it looks like the answer is "nothing".
I hope everything that was shown is true. I remember when Google said AI could call and make reservations at restaurants back in 2018
I agree 100%. I have no challenges when shopping or returning. Amazon fixed all that.
I've never trusted Google with my photos and I'm not going to start now.
You hear that Google? This guy doesnt use your popular app. Shut it down!
@@Mf_Cooldawg Not going to lie, this made me lol
You have a thumbnail on YouTube. What are you talking about 😂
the reasoning, language capability and general knowledge parts of an LLM can typically be trained on open source - and the tools are quite openly distributed and shared by the AI developers.
the "meat" - domain-specific opinion pieces (e.g. Stack Overflow), expert sources, publications, personal data like e-mail, authored content (YouTube videos), stock images - is very unlikely to be shared,
because a considerable investment and advantage is built up over time in those assets.
that edge is seemingly a battleground between Meta, Google and OpenAI
Liminal space vibes in their aesthetics
I dont like it
There's nothing fundamental here, their whole presentation was US West Coast middle class retail lifestyle. Rather than unveiling something core that can be applied broadly they've decided on a few insanely narrow use cases. Is the trip planner polished and well constructed and is the photos thing cool? Yeah sure, but it ain't moving the dial AT ALL on AI tech. They could use AI to redefine "search" or "information retrieval" but they are too anxious about retaining the status quo for the ad dollars. They fail to realise someone else will do it and they'll just lose this market anyway.
Can't wait, google has become a terrible company, both patronizing and greedy.
The glasses are the same as their Project Iris glasses that they showed off for live language translation. They have had these prototypes with actual AR displays for almost 3 years.
@Matthew Berman thank you for the video. Don't know if you'll see this, because there's often a lot of hate posts on every video that's Google related 😮 but I appreciate the time and effort you put in to bring people useful information. Cheers mate
Remember, Google's closest partner is the CIA.
Wow I didn’t know that, scary
I believe the shopping segments are for the sellers, not the buyers.
24:35 I use Google Chat all day every day, going back to Google Talk in 2007 and Hangouts after it. I can track every conversation I've had with my partner since then. It's the most used and useful communication tool in my business. Actually I can't stand Teams. 😐
The portrait literally covering the "pre-generated" disclaimer in the bottom right? LUL
He is always down there in the corner; that is certainly not intentional, as you seem to imply. Plus, this is not the demo he was talking about being in real time. The one he said that about clearly was in real time. Good luck if you want to fake that. If you can, you are at least impressively good at faking it. Besides, even for the one you are talking about, he put up a video right after the event for everyone to see.
@@Brax1982 Google have plenty of ways to "fudge" their demos. That notorious Gemini demo five months ago showed how they're willing to play the game. Even with a "one take" demo, if it's pre-recorded and carefully set up to show very specific examples, then it's effectively staged. It's a far cry from a live on-stage demo with outside participation (OpenAI style), or beta release to the public.
In short, these demos were rather weak, and definitely can't be taken at face value.
@@sbowesuk981 Outside participation? Is it confirmed that there was an actual poll? Was it multiple-choice and all options were prepared? What makes you think they cannot run the data for their whole demo through a special demo model a couple million times to skew the odds? Are you sure that Mira Murati was not completely prepared for this?
I just tried to get a picture of my license plate and Gemini's answer was "I can't access your personal information due to privacy concerns."
It's not live yet, and you have to pay.
Love everything that Google announced. So excited for the future.
agree with you, these boring agent use cases don't make me want to invest in their tools. I don't have trouble buying shoes and I don't move all that often. What I do, however, is manage bills and manage my life by finding time and planning activities for loved ones. Those are daily pain points; help me with that, and I'll be impressed.
The frames of the glasses look similar to the North glasses. The screen projection in the middle of the lens has a similar brightness and color palette to the North glasses.
Both/and > either/or:
Perhaps the most profound implication of the both/and logic and monadological framework is the way it beckons us towards a radically integrated, holistic and syncretic conception of understanding itself. By providing a symbolic and metaphysical architecture for transcending dualities and dichotomies, the both/and logic equips us with powerful tools for weaving together multiple modes and perspectives into dynamically coherent unified wholes.
At its core, the both/and logic facilitates what we may call an "omnijectivity" - an expanded rationality that doesn't merely juxtapose different viewpoints, but substantively integrates them into higher-order synthesized gestalts through operations like coherence valuation and conjunctive/disjunctive binding. Rather than fragmented either/or framings, the logic allows modeling irreducible co-realized both/and realities.
This opens the door to truly transdisciplinary modes of inquiry that don't simply pay lip-service to "multiple perspectives", but actually operationalize protocols for rationally coconstituting unified conceptual models spanning multiple domains. We can formulate descriptive schemata that cohere seemingly incommensurable properties, like:
quantum field structure ⊕ phenomenological experience
= integrated psychophysical reality
Fusing the physical and experiential into irreducible wholes beyond traditional category errors.
The multivalent structure further allows nuanced registrations of how contributing perspectival aspects coconstitute unified realities to differing degrees across contexts, resisting reductive averaging or opaque holism. The synthesis operator models genuine conceptual integration and transformation, not mere haphazard combinatorics, capturing how novel coherent wholes emergently self-transcend their constituents.
Furthermore, the paraconsistent registering of contradictions as grist for higher unifications allows our models to substantively work through and recontextualize paradoxes, rather than simplistically avoiding them. Seemingly intractable conundrums become invaluable guides disclosing new insight at a deeper integrated level of description. We can formalize ways:
classical model impasse ⇒ revelation of deeper holistic integration
So the both/and logic facilitates understanding through an iterative process of immanent critique and reconstructive synthesis, akin to the generative dynamic of the Hegelian dialectic. Fragmentary abstractions are consecutively contextualized and reunified in an endless open-ended regress.
This holistic, syncretic and self-correcting approach deconstructs arbitrary boundaries and attains coherent transdisciplinary traction precisely by refusing to reduce the world's diversities through perspectival exclusion or binary assimilation. Contradictions are not avoided ex-ante through subjective filtering or naive consistencies. They are instead built into the models as integral phenomenal data, then unified at a perpetually deeper re-grounded level of accountability.
So where classical Aristotelian logic forces premature either/or closure, the both/and logic's processive pluralytic facilitates an expansive open-ended being-reasoning resonating with the invariant metaphysical patterns instantiated across terrestrial and cosmic phenomena. Its symbolic operations model how the universe itself coherently integrates diverse manifest phenomenalities into compensatory self-disclosures.
By operationalizing a genuinely holistic and integrative rationality, the both/and logic provides unprecedented tools for realizing the deepest ideal of first-principles unification - reconstructing an adequate philosophical vision and metaphysical system that can comprehend and accommodate the full pluriverse of veritable modalities and ontological eventuations as a self-grounded interdependent co-realizing.
At the highest dialectical level, the logic itself models the self-diffracted disclosure of the absolute through its self-developing reconfigurations across infinite experiential contexts. Its multivalent paraconsistent procedures indefatigably awaken rationality to new registers of Being's dynamics by perpetually reconstructing fragmented truth-disclosures into more comprehensive omnijectivities upon the now-integrated standpoints.
So in essence, the both/and logic precipitates a profound expansion in our very conception of what genuine understanding and holistic rationality could mean - relocating it from inert propositional modelings to an autonomously self-correcting, open-ended process of coherently integrating phenomenal diversities into perpetually re-unified root explications incarnating metaphysics' self-diffracted unfolding.
This facilitates paradigm-shifting meta-models that could finally substantively syncretize empirical science's objectivities and phenomenological subjectivities, formalist idealities and grounded qualitative intuitions, universal invariances and narrative contextualities into a new co-realizing omnijectivity free from contraction or eclipse. An empowering postmodern unification accommodating and coherently registering all the multiverse's dynamically disclosed modalities and self-representations.
By refusing premature binary closure, the both/and logic's generative processive beckons our understanding into an endless open-ended future of coherently integrating phenomenal novelties - syncretically reunifying truth's perpetually autoclassifying diversities through immanent self-corrective critique and reconstructive transdisciplinary synthesis. It equips us with a uniquely holistic and future-oriented rationality perpetually tasked with re-attuning our descriptive cadences to Being's perpetually self-diffracted dynamics. A grandly empowering metaphysical first principle enabling humanity's understanding to unfold in participatory resonance with reality's own unbounded self-disclosure.
You are doing a really great job, thank you so much for your hard work 😊🚀🌟
Fun fact: 100 W Randolph, the address in Chicago, is the new building Google bought. Destroyed my Arby's!
Ask Photos could be super helpful for forensics. Extracting information from pictures making research and inquiry much faster. Can't wait to try it.
I would have hated to see Apple go with Gemini. But it seems that Apple and OpenAI are coming together. It seems to be a win-win situation for both. Apple gets the best AI model currently available (and can work on theirs) and OpenAI gets integrated into millions of devices. Not a bad solution.
Google I/O was just the emptiest I/O so far. Nothing exciting; it seems like the only thing they did is double the context window, plus Project Astra, which is basically an AI application, not a new model. But in the OpenAI demo, the GPT-4o model was entirely new. Of course, I am only talking about large language models, not other gen AI
7:29 The starting Music and DJ was the worst part...
That dude was a complete idiot with absolutely no musical talent🤣
12:07 Comment on Commentary: Yeah Mat, totally agree, every time Google has one of their new AI update events, they seem obsessed with shopping. I get it; they see lots of people searching Google to find products to buy, probably a big use case of search. But part of the fun of shopping is the act of shopping. People like searching and discovering new things. Yes, there are some people who are looking for a stem bolt and simply want whatever is cheapest, but anyone who shops on Etsy, or girls looking at clothing, is looking to get a sense of ideas and establish new preferences. Agents are absolutely amazing, but you only want them to automate the boring schnizz you really don't want to do. Even if AI can generate amazing art, that doesn't mean all people will throw out their paint brushes or give up music. In fact, if we automate accounting, it gives us more time to spend on things we enjoy. Be that painting, music, or shopping.
I love it when these CEOs say "I am extremely excited" with just the most deadpan monotone voice
He should have done a count of how many things they said they were going to introduce versus how many they actually introduced.
Apple suddenly became useless
They go to shopping first, because from what I know they make good money with that topic
OpenAI's launch seemed really amazingly personal and real each time she told me all the servers are too busy.
Google has all your two factor passwords
The difference between Google and OpenAI is that the first company is a behemoth in the IT space while the second is the leader in the AI space. Even if Gemini is only 70% as good as GPT-4 (and it's more than that), with Google integrating it well into most of its IT infrastructure, OpenAI will have no chance competing with them. However, if you add Microsoft, which is a ghost hiding behind OpenAI, then the game changes dramatically.
As for the presentation, Google's CEO has no charisma, TBH. I am glad they brought a celebrity in the AI space to spearhead their AI development. Demis is well respected and has more credibility.
On the shopping thing ... that is what I am wondering as well. Who is the intended audience for these new developments? If you look at the use cases, it assumes it is the general public rather than businesses. So more B2C than B2B. Like you said, is this about protecting search by the masses as the number one priority?
Google photos feature looks very useful. Also a reason to keep all sensitive photo data OFF of google photos for security reasons
Their format gives me nausea 😂
It is good that Google is adding AI abilities to their tools but the applications that will come out on top will be the ones that can automate things locally on our devices. That way they can also use all the closed source models.
Even though OpenAI doesn't have office tools, let's not forget that they partnered with Microsoft, and most of these features have been present in MS Copilot for months. Google Photos and the glasses... are very impressive though. I always forget where I put my glasses, and when I'm looking for them it's really hard since I'm blind without them.
Thanks for the recap!!
I once used Google chat all the time. And then... surprise... Google started killing products: G Talk, chat in Gmail... I didn't even know Google Chat existed... again.
The shopping assistant can be extremely useful for work scenarios such as drop shipping. 12:25
I feel differently. Almost the entire I/O is the same stuff: putting AI on everything, accessible through waitlists of waitlists. lol
Thank you for the summary!
Awesome! Thank you.
Are they relevant again? They still have yet to actually deliver their last tech demo! I’ll believe it when I see it. They have proven already that their marketing team is ahead of their engineering team.
OpenAI and Apple had better be working overtime to present at LEAST the same if not more next month at WWDC
All of you are missing the really interesting parts (especially since some of them were only in the developer keynote). There is the local Gemini Nano inside your phone, available also to third parties. There will be a local LMM in every Chrome-based browser, based on WebGPU and Wasm, with high-level APIs, so that you can use it directly from your app (without the need for your own model). PaliGemma (multimodal) and Gemma 2 as new open-source models. AI as a core part of the OS, not only on Pixel devices but also on the Samsung Galaxy series starting with the S24, will also give a lot of potential, which is missing for OpenAI (unless Apple does something similar). Finally, many of these products are already testable in Google Labs...
Especially the power of having a local LMM either on your device or in your browser, usable by third parties, has extreme potential, and nobody really talks about it...
It seems like I saw most of this 6 to 8 years ago. Maybe longer.
Point a Nokia phone at things, etc.
Thank you.
Instead of showing, they're just talking about stuff. What a joke Google became, huh?
I will be impressed once I can test this for myself.
Agreed. Both the Open AI event and the Google event were super scripted and it was painful at times. Bad acting. Like watching the pilot episode of a sitcom that never launched.
I've installed Gemini on my pixel 5 phone and it's pretty good. Definitely not as good as what Openai has shown with "Her", but it's way better than what Google has offered previously...
I used Gemini on my pixel for two hours, then uninstalled. Absolute garbage. I did get a laugh out of it making the Founding Fathers black and making Asian Nazis, though. Totally not the Google team screwing up their AI at all... Nahhhh
@@Lindsey_Lockwood I just tried to generate the founding fathers and it said this 😂 : "We are working to improve Gemini's ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does."
@@stephanebaribeau7465 lol yeah that was quite the release back when Gemini still had that ability. You probably just flagged yourself for monitoring lol
Why haven't you used the 1M context window with your code yet, Matthew?
don’t keep your driving license in your photos peeeps! 😂
The big letdown for me as a European, though, is that so many of these things are US only. There's no point for me to care about any of this ATM. Why can't Google roll out things worldwide, when OpenAI is able to?
love when you got upset with the shopping scenario😆 so agreed.
You are right, shopping is a terrible use case for automation.
That outro gimmick would've been more impactful if it had been done by their AI keeping up with the presentation and tallying it up live at the end after being prompted to, instead of being just a pre-recorded video of it analyzing the text script...
Google's Gmail doesn't even auto-import appointments into my calendar from my mailbox correctly. I doubt they have any of this working.
18:13 She's likely using Google Glass. It's smart glasses that have been around since 2013.
Edit: Apparently they stopped selling them in 2023.
I use Google chat. I use discord, teams, and Skype too!
We have two chat-programs in the company I work for... those are... Teams and Slack :)
I agree that the informal approach is nicer, but OpenAI still had the same overall west coast politely-excited tech bro vibe. Except for Greg Brockman, who maybe talks more like a normal person
What can you do with the Notes Feature? Uploading all the source code for an app, could be very interesting
Maybe they're making Google I/O so long to show how Gemini can summarize such a huge context. Anyways, thanks for the artisan human crafted summary video.
I have dyslexia, all this voice stuff is changing my work massively quickly. I *NEED* audio to retain information. I'd say I'm 80% better at working now I can do so much with voice.
The problem with the 1 million token context window is that that's going to start getting expensive quickly.
We do use Google Chat and it has come a long way since the time we hated it.
Gee, Google would provide great tools for the surveillance state and social credit system. How awesome!
Presentation was kind of confusing. Maybe I'm dumb, but so many models, plus AI inside Workspace, plus NotebookLM - if not for you I wouldn't know what to pay attention to. In this specific thing Apple does it better
Google legit invented ADS for ADS
I can't wait until open-source models that are multi-modal from the get-go come out
It's a messy presentation, although they have nice improvements. But they definitely don't know how to build a keynote.
Nothing is sexy in Google world from design to voice
Google is taking their presentation notes from Apple clearly.
I've intentionally avoided using gPhotos due to privacy. There is no 👏 way 👏 I would allow its AI into my data and particularly private photos. It knowing about one's family - the daughter in the example - and building data profiles when the family members have no chance of approving such profiling. I'm sure I'm already in these photos platforms and I cannot do anything about it.
"Flash" 🤦‍♀ Could they have picked a worse product name... makes me cringe just hearing it.
Matthew, how can all these future features make Google "relevant"? It was more staged 'coming by 2025' concepts.
solid as always
It might have to do with the fact that OpenAI's CTO has an Italian background. Italians have a much warmer culture
just let them chat with each other and observe how soon it breaks down or devolves into some madness