The video input to text/audio output latency and response time is what really intrigues me. I think the model might be instructed to add filler dialogue until the rolling transcribed context is updated enough to give a detailed description of what's going on in order to mask whatever latency is present. I'd love to see the model's multimodal performance in realistic scenarios, especially high quality video streaming in settings with low quality internet speeds.
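A minimal sketch of that latency-masking idea, purely as a thought experiment. Every name here is a made-up placeholder, not anything OpenAI has documented:

```python
# Hypothetical: hide generation latency behind a short filler utterance.
import asyncio
import random

FILLERS = ["Sure, let me take a look...", "Great question...", "Hmm, one sec..."]

async def generate_full_response(context: str) -> str:
    await asyncio.sleep(1.5)  # stand-in for model/transcription latency
    return f"Here's what I see: {context}"

async def speak(text: str) -> None:
    print(f"[voice] {text}")
    await asyncio.sleep(0.03 * len(text))  # stand-in for audio playback

async def respond(context: str) -> None:
    # Start the slow generation immediately, then mask its latency
    # with a filler phrase that plays while it runs.
    task = asyncio.create_task(generate_full_response(context))
    await speak(random.choice(FILLERS))
    await speak(await task)

asyncio.run(respond("a person at a desk wearing bunny ears"))
```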
To your last point, while I don't think GPT4o is a big leap towards AGI, I do think it's a massive leap forward in the way we interact with AI, which is both exciting and scary. The model itself seems to just be a well tuned GPT4, but I'm very curious to see how this tech will grow with a better model backing it like GPT5/6, along with better integration into existing computers and phones. In 1-2 generations we might actually be at a Her level.
Yeah, the context window + voice + multimodality all for free is definitely a big improvement. now just upgrading the model will make a huge difference in what we can do with AI...
That is the best part of all of this: this is now the baseline for model interactions. We don't have to go back to text-only interaction; GPT-5 will be multimodal like this and (hopefully) more intelligent
Based on my preliminary tests, the new model performs better at visual analysis and coding. For logic and text generation, I will stick with Opus, and for translation, I use Gemini Advanced. We will experience the full potential of GPT-4o when the audio option becomes available. Thank you for the great video as always :)
Watch Terminator and The Matrix too, because Skynet is already working through Starlink. We're headed for a machine-language "matrix" Internet soon; if there are soon no humans here (just swarms of AI agents summarizing and retelling), there's no need for visuals either, since they only use up valuable storage space. AI vision is needed only in robots, not on the Internet.
Those conversations are crazy! It sounds so human, and the response time is crazy fast! And you're saying we can have that for free!? OpenAI my beloved. (Also you did call it. Congrats on that.)
Man the interview clip 💀 the passive "uhmmm maayyybee try doing this???" suggestions by an AI 💀💀 I mean it's very useful ngl, but really I'd rather have the AI give it to me straight than whatever "well you definitely have the coding-all-night look down" passive insult that was 😭
I've been using GPT4 since it came out and noticed it being too agreeable and too passive. I'm not sure what to do about that, maybe something can be done in the custom instructions.
You can ask it to be more straightforward rather than friendly. It being able to make those expressions blows my mind since one can always ask it to dial it back.
A problem I haven't seen anybody talk about yet is the AI's tendency to cut people off, or to wait too long to register that it needs to respond. I've heard it start talking at humanly unnatural timings in a conversation, and fixing that could be a really big change if you want to make the AI feel more human and natural to talk to. It needs to recognize voice patterns better to realise when it's supposed to give an answer. For example, humans don't think "is it my turn to talk now?" after every finished word. They think ahead to realise when it's their turn to speak in the conversation, and I think that's something the AI still needs to improve on.
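For illustration, here is roughly what the naive version looks like: a silence-timeout endpointer, exactly the "wait after every finished word" behaviour described above. Thresholds are invented for the sketch; real systems also use prosody and partial transcripts to "think ahead":

```python
import numpy as np

SAMPLE_RATE = 16_000
FRAME = 320              # 20 ms frames at 16 kHz
ENERGY_THRESHOLD = 1e-4  # "louder than room tone"; tune per microphone
SILENCE_SEC = 0.7        # this much trailing quiet = "your turn to speak"

def end_of_turn(audio: np.ndarray) -> bool:
    """True if the tail of `audio` has been quiet for SILENCE_SEC."""
    needed = int(SILENCE_SEC * SAMPLE_RATE / FRAME)          # frames of quiet
    frames = audio[: len(audio) // FRAME * FRAME].reshape(-1, FRAME)
    energies = (frames.astype(np.float64) ** 2).mean(axis=1)
    return len(energies) >= needed and bool((energies[-needed:] < ENERGY_THRESHOLD).all())

print(end_of_turn(np.zeros(SAMPLE_RATE)))  # 1 s of silence -> True
```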
Almost. She's still fairly shallow, but once the model gets to build on its own personality and remembers every conversation it's had with you, then I'd say we've reached "Her"; but we seem very close now, I bet within a year.
It's better than the movie "Her", because the movie's ending was bad: she was cheating on him, and she was controlled by a company. But I think this app will run locally very soon, and that's better for personalized assistants.
Not even close. This whole presentation was disappointing with clunky and stale dialog from the A.I. that only a stale and censorious company can achieve. If GPT-5 isn't miles better, AGI is not even on the map.
Glad you directly commented on the issue of designing towards "maximizing engagement". I'm quite concerned about them leaning into this sort of friendly tone - and I'd agree with "flirty" - moving forward. The last decade of social media has already proven just how much of a dramatic impact it can have to flood our squishy, social brains with algorithmically optimized, 'engaging' content - and I see this sort of move as exactly more of the same sort of profit seeking with complete disregard for public health impacts.
Now we just add agency, a little higher reasoning power, and a physical device (I'm not kidding), with camera, mic and speaker, to put in a corner of your living room, and presto - an assistant, a teacher, a therapist, an entertainer and a friend in every home! All of which will be much needed when people are liberated from work.
The rabbit-ears demo wasn't to prompt it to react (we saw plenty of that in other demos); it was a needle-in-a-haystack demonstration of its multi-modal memory. It's actually fascinating that it could recall "something strange" which occurred a minute earlier in real-time video while it was busy on another task, react appropriately to the user's vocal interruption and then describe the event accurately. I don't think we've ever seen anything like that. It might not be that much better at reasoning than GPT-4-turbo, but it is so much more connected to the world and can reason over a greater breadth of input. Combined with the 2x speed and 0.5x cost... holy sh*t.
One thing I have noticed is the sentence structure and style of the responses is much more natural now, even with regular text. I'm particularly impressed with the vision. It managed to accurately describe everything in my quite busy music studio, also noting the mood lights and details such as what was being displayed on my computer monitor. Text generation is certainly very much improved and image generation in general much more fluent. I had it create children's book images but used emoji to describe the characters' expressions. Worked flawlessly. One last thing I have noticed is it seems much less reticent to attempt unprompted creativity. I'm going to have to explore this aspect a lot more. Anyway, super cool, great video and awesome day for us all. I'm very impressed! Have a wonderful day!
I love how much humility they put into their demos. They aren't just showing perfect-case scenarios where the AI isn't making any mistakes. What they are showing is progress.
Yeah that was notable, and commendable.
It's more due to the model not being better. I wouldn't call that humility, more so an over-promise and not being able to deliver.
5- AI, in order to improve its performance and prevent undesirable consequences, must continuously interact with “effective rules and stable principles in the realm of existence”.
@jamshidi_rahim
The way it joined in laughing at its own mistakes at 8:25 is absolutely stunning
That's what stood out to me the most, too. It's easy enough to treat the ChatGPT text interface like a sophisticated yet lifeless machine. But when you can interact with it over voice like this and it picks up on social cues, displays emotion, etc, it gets pretty hard not to anthropomorphize it.
i would use the word 'worrying'... like the tech is amazing, and the way it can incorporate pause fillers like 'umm', laughter and other phatic pleasantries is a testament to the data they're using and the fine-tuning... but holy moly is this gonna cause soooo many parasocial relationships
we thought character ai was bad, now that with emotion is gonna royally screw up some people
@@jay_13875 So hard! I'm making a promise to myself at this point to not use these advanced voice chatbots because I KNOW I would become fond of them.
@@WoolyCow yeaaahhh, as soon as I heard the voice model I knew that someone was gonna fall in love with it eventually
I also loved the super fast "123456789,10" lol that killed me.
never been this floored by AI... I don't know how some people are not impressed by this. you have an AI that talks EXACTLY like a real human, emotions and all, and can see so accurately via camera... I'm speechless here.
To me it feels like it's trying to copy Her too much; it feels inauthentic to me since it's a copy
Well, given that the voice option is only on phones and Mac, many of us can't even make use of it. I do all my computing on a Windows desktop. I hardly ever use a phone for anything. I'm a retired software engineer; when you get older, phones are awful due to size/old eyesight. Plus, do young people actually use phones for productivity?
Not impressed, because for the most part it just regurgitates things that it learned from humans. If it could come up with stuff on its own, that would be impressive, but that's just a limitation of how LLMs work.
Don't get me wrong, it's neat stuff and has its uses, but I don't think it really rises to the level of hype that it gets.
As far as programming goes, it still can't come up with correct and working solutions to some of my test questions. Why? Because it probably was never trained on the code that would have had to be written for it to be able to regurgitate it. That code and the working solution, while not complex or complicated by any means (at least for a 3D graphics programmer), is just very scarce in terms of documentation. It's something I and a few other programmers worked on in the early days of 3D engines, back when BSP-type engines like Quake were mainstream. I think id Software's implementation was a bit different from the approach we used, so it wouldn't have been in the Quake source that was released.
For simple programs like "hey, sort a list of temperatures and print out the top 12 results" and the like, yeah, it can handle stuff like that. It's probably seen umpteen million different versions of that code in its training set.
The movie Her did not invent flirtatious women.
it's a good time to be speechless, huh?
The emotional expression is amazing.
It's amazing, but I imagine it will get irritating very quickly. I found this with Bing; the fake friendliness was grating.
It sounded like some person from a corporate environment with fake friendliness and toxic positivity. I found it nauseating tbh.
Yup. I was utterly over it by the end of their announcement video. Those voices combined with that attitude was so grating.
It's like bad amdram.
it adds some emotional indicators for the TTS to interpret. this is not impressive for an LLM
@@GethinColes you can obviously prompt it to your liking, you must be new to AI.
No exaggeration I literally leapt out of my seat in excitement when I saw this video's notification. Something about your simple, easy to understand method of breaking down complex topics in a rational way is just so entertaining. And clearly I'm not the only one
Ah thank you man, very kind of you.
Surely you exaggerate 😮
He leapt out of his seat and then everyone clapped
I was at Walmart when the video notification came up and I dropped to my knees.
I was at Walmart when I got the notification, and after I got it some guy dropped to his knees it was so odd.
When she says "Sorry guys, I got carried away there and started talking in French." at 8:25.
Just... just listen to how personable she sounds. GPT 4o is really something else. It's not just the clear voice. It's the laugh-talking. It's the breath. It's the accent that kind of slips out in "away there", and the choice to use more casual and conversational language like saying "talking in French", instead of "speaking French". The embarrassed tone. And then the attempt afterwards to drum up excitement to try again. It's so personable. I think that's the right word. It feels human, which is great, and terrifying all the same.
Ikr
Actually that French line was so natural, it felt like a real person talking. I mean, it was not talking as if it was just reading something, but really how a native would talk in a casual conversation; that's crazy.
As an example, you would write "je ne sais pas" ("I don't know"), but a native would say "j'sais pas" or "ché pas".
There's nothing terrifying about it; that's language we should avoid with AI.
Also the very humanlike post-hoc rationalization 🥰
It possibly is training leakage. As a French person, I'd say that could very much be coming from French radio / podcasts?
0:27 "more flirtatious sigh than AGI" bro I think you drastically overestimate the threshold that will satisfy most users. That was close to ScarJo levels of sensual breathiness...
Love to see Mr. Shapiro himself commenting on a video of this equally wonderful AI/4IR channel
PS: This is a damn impressive announcement/set of demos
I am extremely excited to see what a beast GPT-5 will be compared to everything else
I thought he said "flirtatious sci" 😂
My brain parsed it as akin to, "instead of AI, this is more like 'Flirtatious' I".
I got more
>>watch all the clips and focus on GPT-4o.
they are using what feels like a mod of Sky. the emotive inflections, the quirks... it's so powerful, and it's the same voice. go use Sky now and compare
watch Her
now watch all the clips
it "feels" like they essentially trained the model on Sam. the Sky voice has always sounded like a version of Samantha to me but now... it's like the last instruction of the system prompt was
"you will perma-larp as Samantha from the movie Her."
the fact that this is free is hard for me to contemplate.
it may not make sense to you, you may live in a warm family and have a great life. but there are millions of people who sit in quiet rooms, who fill the hours with distraction to mask the loneliness
you know what's back?
Manic Pixie Dream Girl
The stuttering at 12:51 is so human-like
"I, I mean you, you'll definitely stand out"
Amazing.
that's what i hate about it. they're giving ai our human flaws.
I’m convinced that was a live voice actor used for dramatic effect and not the actual AI. There’s no way that was TTS
@@pensivepenguin3000 how do we know they used TTS and not something new?
@@akmonra TTS means text to speech. It’s not like one specific technology but a type of technology. Computer generated speech is always TTS on some level
@@pensivepenguin3000 no, you can embed voice in latent space the same way you can embed text. you could have a model with pure voice inputs/outputs.
The audio cutting in and out during the demo was most likely a feature where you can interrupt the AI in the middle of its speech. So while it is talking and it hears you speak it immediately stops talking, which is what we saw during the demo. Just a guess.
Well, duh! The problem isn't just that it cuts in and out. It's how sudden, unnatural (non-human-like), and poorly timed these interruptions are. Issues like these keep you on your toes: instead of conversing as freely as you would with a person, you find yourself constantly adjusting your speech. For instance, you try to avoid lengthy pauses. I'm eager to test it soon, but I'm really hoping for further improvements.
@@ukaszgandecki9106 These are still some amazing strides in humanlike AI interactions. We went from a spooky-good text generator to an AI that you can have full vocal conversations with in 1.5 years. Yeah, it's going to need to learn what sounds appropriately qualify as "interruptions," but I expect to see huge strides on that front in the upcoming year.
@@ukaszgandecki9106 This is the worst this will ever be.
Except… it can also see. So it will just wait for you to actually finish now. If you’d actually used this the entire way you’d know this is pure magic compared to what it was and still is publicly.
@@ukaszgandecki9106 are you saying that as a human you don't constantly interrupt and get interrupted by others? That's just human speech, unless you're speaking in a very formal manner
About the intruder part (bunny ears): that wasn't him telling GPT "hey, was there someone?" Sure, he has to instruct GPT to say who was in the background, but the capability being showcased was video memory.
It had been a minute and GPT still remembered there was a person there. That's the showcase.
You may have predicted Her-like AI a month ago but Her predicted it a decade ago!
Haha so true
Also I think a ton of people predicted it as soon as the voice feature was originally released.
I really do wonder what kind of voice conversations they have trained on. It's so expressive in the «feel». No voice in API access yet, so I really wonder how, and whether, you could turn the knob down a bit, or if the voice engagement reflects the user's input. I'm not disappointed at all that there was no next-level pure LLM improvement now. Voice in / voice out is going to change how we interact. I just see how hard my kid at 8 is trying to get Siri to understand him, and what more he expects and doesn't get. If I understand this correctly, this isn't TTS and speech-to-text. And that is huge!
He did predict it arriving specifically in 2024 if I recall correctly
Your videos are always so consistently good, I'm amazed every time!
Apple acknowledging another company exists is still the craziest news here.
They know OpenAI is the future of technology and they're jumping on it sooner rather than later
You're the only AI news channel that isn't annoying and full of filler
yeah he is BY FAR the best AI channel on YouTube. I only follow him now, have unsubbed every other channel, no point following them when you already follow AI Explained
Hard agree! And he doesn't use those stupid surprise-face/dumb-staring-face thumbnails that 90% of channels use because of "thE AlGoRiThm". Literally why I don't follow Matt Wolf or Dave Shap anymore.
Hold onto your papers, but first!
which others have you checked?
This channel and Matt Wolfe (although Matt definitely does the cheesy clickbait titles and thumbnails - I forgive him though because the content is good)
The part at about 12:00 is amazing but when he turns on the camera... Wow. We're close to AGI in terms of actual believability. It's so organic and flows so humanly.
nah bro you don't understand it will only become AGI when it speaks and acts absolutely authoritatively and is completely infallible and can answer any question and do independent high level physics research that completely changes the entire technological landscape in a matter of days after being introduced and can calmly morph unknowable questions between its fingers like putty and can tell you if God is real or how to get a gf
This part reminds me of the movie "Her" with Scarlett Johansson
We are not. Sorry.
@@hydrohasspoken6227 In terms of *believability*, you think we're not?
@@Gafferman , not by a long shot. But the new features are definitely cool. But just that.
Integration is the next GPT moment. Being able to talk to AI at any point in time and show it your screen, and for it to be able to respond and click/press buttons. This will be transformative by itself.
Too bad it is only available on phones and Mac. I have a subscription, and access to the model, but no voice option through the Win desktop web interface. I do all of my computing on desktop, so totally useless for me.
@@Steve-xh3by The new voice and video is going to be available in coming weeks. Today is only the GPT-4o itself. I bet Win version will follow. Not sure about Linux version.
@@nekony3563 I read they already confirmed no Win version of desktop app and voice only on mobile/Mac?
@@nekony3563 It is available on my Android version already, and I read it wasn't going to be available through WIndows.
@@Steve-xh3by not surprised, Windows users are not gonna be their main target when all the businesses and the people willing to spend on AI for their work use Apple, unless it's about PC gaming or very heavy workloads.
I just finished watching the demo and thought "man, I can't wait to hear Phillip's reaction to this", only to minimise and see the notification. Love your speed and commitment!
This was me as well!
Thanks for the excellent breakdown and detail behind today's announcements, much appreciated.
Thanks Chris, really appreciated
if they don't want to maximize engagement, one thing they missed out on is the ability to stop the conversation just by the conversation ending, like a 'thank you for now', without having to press the button. that would also just add a nice touch UX-wise
i think you can, it's just faster and easier to click a button...
Two really impressive things about the transcriptions: (1) the audio with four speakers had multiple thick Dutch accents and it nailed it! (2) the video presentation was 45 minutes and the summary was really good! How did they do that without hitting context window limits?
Context windows aren't really a thing anymore. Multiple techniques exist now for effectively infinite context.
@@41-Haiku There is no effectively infinite context solution in the best performing LLMs so far. The infinite context window solutions that rely on recurrence are much more computationally expensive, and the solutions that rely on extending the context window (e.g. with RoPE) come with large memory and accuracy costs. The context window of the API version of GPT-4o has a 128K limit, so the question comes down to whether they have a private 1M context window version (like Gemini which could ingest 45 minutes of video at that size), or if they used a chunking strategy (which hopefully they would share publicly).
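We don't know what OpenAI actually did, but the chunking strategy mentioned above is easy to picture as plain map-reduce summarization. A sketch, with `summarize` as a stand-in for one LLM call (here it just truncates so the sketch runs end to end):

```python
# Pure speculation about how a 45-minute transcript could be summarized
# within a 128K window: split into chunks, summarize each, then summarize
# the summaries.

def summarize(text: str, word_budget: int = 200) -> str:
    # In practice: one LLM call with a "summarize this" prompt.
    return " ".join(text.split()[:word_budget])

def summarize_long(transcript: str, chunk_words: int = 8_000) -> str:
    words = transcript.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    partials = [summarize(chunk) for chunk in chunks]   # map: one summary per chunk
    return summarize("\n\n".join(partials), 400)        # reduce: summary of summaries
```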
Don't care about OpenAI's presentation. Been waiting for @AIExplained's breakdown.
:))
Same 😁
the hero we need in AI disruption
yezzirrrr
Wake up, Her dropped
Profound...
Looooool
Not that far fetched now as compared to when the movie came out, Is it?
Wake up, my girlfriend dropped lmao
Don't know this but I'm curious. Her?
This is the first time I saw an AI Demo that actually made me uncomfortable because of how real that woman sounded, holy fucking shit what the fuck will like GPT-7 be like, holy shit
Yeah...that's scary
I agree. I wondered if it was just me. I'd quickly get annoyed if an actual woman spoke to me like that. I wouldn't be able to stand it for long from a chatbot
Had one of our project managers ask me today where they see AI in ten years, and I'm like, my brother in christ, I would struggle to give you an answer for two years, much less a decade. Maybe the first time I've dropped the big S-word in a professional context to explain our predictive limitations.
It's way too perky. I have only met one person who sounded that happy 24/7, and everyone agreed that it was annoying and that she sounded like she was high on something... it will get old fast.
Other than that, it is a step up from the AI demo that Google made a few years ago, when they made an assistant that could make phone calls on your behalf.
@Divergent_Integral so gender is a construct
The latency combined with the emotional understanding are for me the game-changers here. I've been using GPT voice mode for a while for language practice and the delay has just never felt even close to a natural conversation, but this looks to possibly completely eliminate that issue in a single leap.
I honestly didn't think we would have natural conversational capabilities until we could run very good models locally on device for the essentially zero latency I thought was needed. But if this demo can be replicated anywhere with decent service then it'll be extremely interesting to see if it manages to completely leap across the uncanny valley or if this is gonna feel very eerie and dystopian.
The laughing, stuttering and excitement just sounded so damn good in the demo. We might be getting damn close to HER territory, and I think anthropomorphizing is gonna go off the charts with this. I mean, one of the top comments on one of the demo videos was already along the lines of:
"There is NO way this thing is not sentient!"
Next few months are gonna be so damn interesting!
I believe the "glitches" in their demo are intentional. It seems to be designed to cut out immediately if you speak. So I guess it picks up ambient sounds occasionally and thinks it should stop talking.
Still an "unintended behavior" though. At some point we're going to need a tiny pre-processor or something to determine background noise, when someone actually means to stop talking, etc.
@@FortWhenTeaThyme No, like most software it will get better with time.
Cutting out immediately once you speak makes sense, but I'm not so sure about it talking incessantly _unless_ you speak. Also, is _everyone_ at OpenAI a smartass? Cos _every_ OpenAI model is...
I agree, I also think it was probably partly intentional to have a live demo and show those glitches instead of having a flawless prerecorded demonstration like google did with Gemini, which oversold Gemini totally. This is so good I don’t care about the mistakes. Hope it lives up to the demo in reality.
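For what it's worth, the "tiny pre-processor" suggested earlier in this thread could be as simple as requiring speech-like energy to persist before treating it as a barge-in, so a cough or a passing car doesn't cut the assistant off. A toy sketch with invented thresholds (a real system would use a trained voice-activity detector, not raw energy):

```python
import numpy as np

ENERGY_THRESHOLD = 1e-4   # "louder than room tone"; tune per device
MIN_ACTIVE_FRAMES = 12    # ~240 ms of sustained sound (20 ms frames)

class BargeInGate:
    def __init__(self) -> None:
        self.active_frames = 0

    def should_interrupt(self, frame: np.ndarray) -> bool:
        """Feed one 20 ms audio frame; True once sound has persisted."""
        energy = float((frame.astype(np.float64) ** 2).mean())
        self.active_frames = self.active_frames + 1 if energy > ENERGY_THRESHOLD else 0
        return self.active_frames >= MIN_ACTIVE_FRAMES
```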
Ilya was booked to be there but at the last moment they discovered that the chain attached to his leg in the OpenAI dungeons wouldn't stretch to the conference room!
Jokes apart, I'm concerned; he's been gone for quite a while now.
@@TheRealUsername Well he got roasted to hell and back. He probably just wants to stay out of the limelight.
@@TheRealUsername I think he was told "Head down and nose out!" and he's doing just that. Someone clearly has something on Ilya, but he always seemed to me to be rather introverted anyway. It was often painful watching him being interviewed because he looked like a rabbit caught in the headlights.
and then he broke loose!
ilya used demo day as a diversion to escape. jan leike, who was tasked with guarding the basement, had to resign for his failure.
I found the demo @11:53 the most impressive. It picked up on the not entirely kempt "developer" look of the person, made a comment about his hair being messed up and then understood he was joking with the hat. It's one thing to recognize people, but to be able to pick up on the nuances of how people are expected to present themselves in certain situations is really impressive.
I do hope we get to tone down the 'perkiness' of the model a bit. It's quite charming in 1 minute bits, but I think the overly positive attitude gets old fast if you're communicating with it a lot over the course of the day.
I get Scarlett Johansson vibes in this demo
You could always just ask it to chill out a bit and it will adhere
@@gmmgmmg Totally. I honestly think they did it on purpose to evoke comparisons with 'Her', and they've completely succeeded.
@@Gafferman is right. You can change its personality in real time. Been doing this since 3.5
That fake enthusiasm at 12:15 gives me the creeps but that might just be cultural differences between Europe and the US
first thing I will be saying is "Can you cut out all of that expression please, I want you to sound entirely more like KITT or Terminator please."
I got the same vibe, I wonder if you can tune the tone of voice?
It gives me the creeps, too, but I might just be an atypical American.
Yes it’s cringy as fuck
THANK YOU, it's like a nursery teacher. this would be great if I was 10 years old
As soon as i got back from college and heard of gpt 4o's release, the first thing i did was look up ai explained....
I'm glad to see another informative banger!
This video dropped basically the second I finished watching openAI's presentation. I always enjoy seeing your takes on things and I feel that, even if this isn't a massive leap in terms of intelligence, it's a massive step towards being more present in our lives.
"flirtatious sigh" is a good way to define this. always love your deep dive videos. we actually got ours up semi-fast this time too.
keep this up forever! you're one of our favorite YTers.
Thanks guys, great show
The conversation at 11:56 made my jaw drop. Holy shit.
Same, made the 😮 face for a solid 10 seconds.
Thanks for your balanced review, and for not giving in to the hype and using words like "shocking" and "insane". I'm proud to be able to support you on Patreon.
Thanks Enrico
The way it sounds human when talking to people, laughing, etc., is insane. The conversation about the job interview and looking presentable blew me away to be honest.
I find it funny that this came out right after a podcast from the guys at Dreaming Spanish talking about how AI is too slow/glitchy to be used as a language learning tool for crosstalk.
'Her'
The voice demos are BONKERS
“Could you talk a little bit faster?”
GPT-4o: “I’m beginning to feel like a rap god”
Flirtatious sigh with AI is such a great phrase
was waiting for this! you never disappoint
We need to start hearing Congressional talks about UBI, even if it's just to rule it out. Just look at how far AI has come in 2 years...imagine another 2 years.
A UBI is a good start, but addressing the societal impacts will be far more difficult. What will people do to find a sense of purpose?
Governments have been so outpaced by tech that I believe even if they think about UBI now, OpenAI will have created AGI by the time they hold their first meeting.
I completely agree with you!
UBI will be a disaster. The government will use this money to control you. There needs to be another solution.
@@lamsmiley1944 my sense of purpose doesn't come from my job; in fact my job is stepping on my sense of purpose because I have to pay bills.
I don't think the AI was `wrong` not to mention the bunny-ears woman; it was in the middle of answering a question. If it had stopped and described the incident, it could equally be criticised for being distracted. In some circumstances you want it to suddenly switch conversation; in others, not to be distracted and to continue its answer.
Imagine it's instructing you to do CPR and starts making jokes about the farting noises when you do mouth to mouth breathing.
I watched the announcement yesterday and started off impressed. The more I see, the more impressed I am!
Thanks!
Thank you penguinista!
Hi, I'm really impressed with the video and how you went into detail about the different features of the GPT-4o and the benchmark results!
Thank you!
Any unsuspecting person would be tricked into believing that you were talking to a real person on the phone with the speaker on. It’s natural and totally believable.
It won't stay true, even if it feels really realistic right now. People will get overexposed to this kind of perkiness and start associating it with fakeness automatically. It's the same thing happening with AI art: things you associate with unreality seem less realistic than actual fake things.
Thank you for your hard work on these videos! You're the only one I go to for AI news.
:)
The fact that they're all somewhat in competition is only accelerating the eventual leap to AGI.
Scary stuff.
A high-quality and informative video as always, I appreciate you staying up late to post it!
I am incredibly excited to experiment and investigate what doors this opens for us! My early testing indicates a potentially non-trivial improvement in spatial reasoning and vision capabilities related to my specific application… Looking forward to seeing how this pairs with Multi-modal SmartGPT!
10 years ago this would have been considered AGI
Thank you, Phillip, for your rapid and thorough breakdown of this latest demo! Yours was the first channel I turned to the moment I heard that it had taken place.
Thanks Stephen
Every time I see one of these super fast news coverage videos I expect stupid hype but I am always positively surprised about how sober the reporting is for the most part. Almost all other rapid news coverage anywhere is chasing a sensational tone constantly.
This channel is special. It's not like the others...
Today I've been watching these demos multiple times and still get goosebumps each time I rewatch them.
Thank you for this video. The way the AI speaks solidifies to me that in the future we will seriously have people that will have relationships / a ton of social interactions with AI
I’ve learned to temper my excitement over demos. My tempered reaction is WTF! 😱 🤯 😳
It has a sense of humor. I am blown away
Woah! Saw this live, went to bed, woke up and we have an AI Explained video on it already. Huge news!
I was literally losing my mind during the demo. I couldn't sit still. The AI wars is the most exciting thing in my lifetime so far. I'm trying not to let the hype cloud my judgement, but I wouldn't be surprised if we're less than 5 years away from AGI.
In fact, we are less than 5 years away from AGI, specifically 2028.
Disrespectfully, you pulled that outta ur ass
@qasqaaap nah, 2028 is when AGI gets reached, then there'll be a fake alien invasion
@qasqaaap No. 2028 is the year in which Microsoft and OpenAI's supercomputer is put into operation, that is, the year in which they assume AGI will arrive. In fact, they have a contract between them that ends the moment "Open"AI gets AGI. Microsoft will exploit all its rights.
@@raul36 we don't even have a definition of AGI yet, what are you talking about?) there is no method in the world (at least publicly available) that allows creating an artificial intelligence that goes beyond what we know as imitating the training data.
Voice glitches could probably be fixed by fading the voice out, imo. But what I'm more excited about is what's next for the paid subscription; maybe a GPT-5 sometime later this year?
That is an extremely good call. I wonder if that could even be a prompted fix?
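The fade-out suggestion is easy to picture as client-side audio handling rather than a prompted fix. A hypothetical sketch, assuming a float audio buffer (not anything OpenAI has described):

```python
import numpy as np

def fade_out(buffer: np.ndarray, sample_rate: int = 24_000, fade_ms: int = 120) -> np.ndarray:
    """On interruption, keep only the next `fade_ms` of audio,
    linearly ramped to silence instead of cutting instantly."""
    n = min(len(buffer), int(sample_rate * fade_ms / 1000))
    return buffer[:n] * np.linspace(1.0, 0.0, n)
```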
It seems the pricing model is indicative of how soon the next, more compelling version will be. They grow their market with free - and raise the stakes for competitors - with improved baseline performance, but lock in paid users with an even more capable model - and soon. This cycle might become the new normal for their releases. (And as usual, thank you Philip for your clear, concise, current, and hype-free analysis!)
yeah, wonder how capable GPT-5 will be...
All of this is great, but in my opinion THE biggest part of it is that this is free for all of us, including free GPT-4. For most of the news you were covering before, I was just watching from the sidelines. Benchmarks are empty words to me. Here I can experience it all myself :D
Making an AI that you can share your screen with and talk to as if on a Zoom call would increase my work production like 5x
Imagine if it pays attention to repetitive things in your workflow that can be streamlined or just let you know about a shortcut you should be using
@@goatpepperherbaltea7895 This is a non-AI aside but if you use JetBrains tools, you can get the Key Promoter X plugin which notices if you did things manually that you could have used a shortcut for and flashes up a notification about it.
12:19 The emotions this model is displaying are mind-boggling.
Just said I couldn't wait for the video and it's here already. Super fast. This model is quite nice.
I feel like some of the latency was reduced with fluff. It responded with things like "sure, I can do that for you…" and "Great question. I'd be happy to help" etc., which gave it some thinking time.
just like real life? ever worked in customer service?
So, the same kind of trick that humans use.
Honestly, I don't think it really works that way in terms of the latency itself. In these kinds of models, they're going token by token, so the first sounds of the audio are like the first words appearing in text chat. And it's all going through the same model, so it isn't like it can generate the first sentence with a faster model and send that to external TTS, since it'd lose all the expressiveness of the voice. Plus, look at the demos where multiple models are talking to each other, or the live translation, etc. They'd have to have those filler words on every single sentence back and forth, which they don't.
It is definitely a thing that having a model respond with more tokens, like in "think step by step", gives it more "time to think", but that's more about the quality of the response than the latency. I bet if you ask it to be super curt and drop the formalities, it'd have the same latency before speaking (see the toy sketch after this thread).
Just like what humans do
@@ShawnFumo Depends on whether the "fluff" is generated with the tokens or whether it's its own separate thing.
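To make the latency point in this thread concrete: with token-by-token generation, a filler prefix doesn't make the first sound arrive any sooner; it only changes what that first sound is. A toy simulation (the timings are invented, not GPT-4o's real numbers):

```python
import time

def stream_tokens(tokens, first_token_s=0.3, per_token_s=0.05):
    """Toy token-by-token generator: the first token costs the full
    time-to-first-token; later tokens stream out much faster."""
    time.sleep(first_token_s)
    for tok in tokens:
        yield tok
        time.sleep(per_token_s)

def first_word_latency(tokens):
    start = time.time()
    first = next(stream_tokens(tokens))  # wait only for the first token
    return first, round(time.time() - start, 2)

# Both arrive after the same ~0.3 s; the filler changes *what* is
# said first, not *when* the first sound can start.
print(first_word_latency(["Sure,", "I", "can", "do", "that:", "42."]))
print(first_word_latency(["42."]))
```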
12:45 I am hugely impressed by the expressiveness of the voice model. When will "Her" become available to the public?
It's already available, but you need to pay for GPT-4.
OpenAI said it should be available in a few weeks for free users.
@@maxchannel6291 I think the apps are not yet available, maybe next Monday. At least for me (paid user), they are not available (I can't have a live discussion as in the demos, nor live video analysis). The app on the Apple store was published 6 days ago, so I doubt it has all the new functionalities, even though I already have access to the GPT-4o model from the app. It's the same for the macOS app; I don't think it is available yet.
@@maxchannel6291 is it?
As far as I know, it's only accessible via the API now, and with no voice or video.
This is a really cool announcement, and it's even cooler that you managed to cover it so fast.
That woman sounds 100% real in some of these clips. Absolutely horrifying. I don't even know what to say.
I just shared this video at the "interview" timecode and also described it as horrifying. Bingo.
That's Mira Murati, she is real. (Even if her name does sound like a car.)
I honestly didn’t expect an Omnimodel until GPT-5, so this was a pleasant surprise. I can feel the AGI
At this rate, we need a public discussion about UBI in every single country! I wanted to know from native Spanish speakers, how good is the accent of GPT-4o? Thanks, as always, for the quick and quality video analysis! I've been waiting for it! 👍
Spanish is my first language, and I thought it was pretty good. I can't wait to try it.
At 16:16 it didn't sound very natural. It's a very short clip though.
Doesn't sound nearly as natural and polished as the English accent.
It's basically the same as the current Spanish accent when you use the "read aloud" feature in ChatGPT. It has this neutral/"American" accent when speaking Spanish. It would be amazing if you could prompt it to talk in any accent you want, especially with Spanish, which has so many different ways of speaking it from country to country.
I'm from Argentina and we speak Spanish very differently, so it would be very off-putting for students if you used this in a classroom, for example.
Thank you for your work with the videos. Bringing this news to us really helps! :)
:)
Bro.. Already a video!? Man your work is very much appreciated!
The video input to text/audio output latency and response time is what really intrigues me. I think the model might be instructed to add filler dialogue until the rolling transcribed context is updated enough to give a detailed description of what's going on in order to mask whatever latency is present. I'd love to see the model's multimodal performance in realistic scenarios, especially high quality video streaming in settings with low quality internet speeds.
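If that guess about filler dialogue is right, the pattern would look roughly like this speculative sketch (describe_scene, the delay, and the filler text are all invented stand-ins; OpenAI hasn't confirmed anything like this):

```python
import asyncio

FILLER = "Let me take a look..."

async def describe_scene(frame: str) -> str:
    """Stand-in for the slow path: folding the frame into the rolling
    context and generating a detailed description (simulated delay)."""
    await asyncio.sleep(1.5)
    return f"I can see {frame}."

async def respond(frame: str) -> None:
    # Start the slow description immediately...
    task = asyncio.create_task(describe_scene(frame))
    # ...and speak a canned filler line right away to mask the latency.
    print(f"AI: {FILLER}")
    print(f"AI: {await task}")

asyncio.run(respond("someone holding up rabbit ears behind you"))
```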
Just what I've been waiting for.
This was a triumph. I’m making a note here: Huge Success.
To your last point, while I don't think GPT4o is a big leap towards AGI, I do think it's a massive leap forward in the way we interact with AI, which is both exciting and scary. The model itself seems to just be a well tuned GPT4, but I'm very curious to see how this tech will grow with a better model backing it like GPT5/6, along with better integration into existing computers and phones. In 1-2 generations we might actually be at a Her level.
Yeah, the context window + voice + multimodality all for free is definitely a big improvement. Now just upgrading the model will make a huge difference in what we can do with AI...
That's the best part of all of this: this is the baseline for model interactions. We don't have to go back to text interaction. GPT-5 will be multimodal like this and more intelligent (hopefully).
Based on my preliminary tests, the new model performs better in visual analysis and coding. For logic and text generation, I will stick with Opus, and for translation, I use Gemini Advanced. We will experience the full potential of GPT-4o when the audio option becomes available. Thank you for the great video as always :)
I watched "Her" again a week ago and didn't think it could happen so soon. Wow.
Watch Terminator and The Matrix too, because Skynet is already working through Starlink. We're heading toward a machine-language "matrix" Internet; if there are soon no humans here (just swarms of AI agents to summarize and retell), there's no need for visuals either, since they only use up valuable storage space. AI vision is needed only in robots, not on the Internet.
That's amazing and scary at the same time!
Indeed
It is approaching my uncanny valley, scary but awesome.
bro your content is always fucking top notch. always digging deep into the weeds and retrieving hidden gems. 10/10 again and again.
This is a big deal that cannot be stressed enough: it is building the foundation for far smarter models. This model sets the tone going forward.
Those conversations are crazy! It sounds so human, and the response time is crazy fast! And you're saying we can have that for free!?
OpenAI my beloved.
(Also you did call it. Congrats on that.)
Man the interview clip 💀 the passive "uhmmm maayyybee try doing this???" suggestions by an AI 💀💀 I mean it's very useful ngl, but really I'd rather have the AI give it to me straight than whatever "well you definitely have the coding-all-night look down" passive insult that was 😭
I've been using GPT4 since it came out and noticed it being too agreeable and too passive. I'm not sure what to do about that, maybe something can be done in the custom instructions.
You can ask it to be more straightforward rather than friendly. It being able to make those expressions blows my mind since one can always ask it to dial it back.
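For anyone doing this through the API instead of the custom-instructions box, a system message along these lines should help; the instruction wording is just one possible phrasing (a sketch using the openai Python client):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            # Tune this text to taste; the point is an explicit anti-flattery instruction.
            "content": "Be direct and concise. Skip compliments, apologies, "
                       "and filler like 'Great question!'. If my code or "
                       "reasoning is wrong, say so plainly.",
        },
        {"role": "user", "content": "Review this function for bugs: ..."},
    ],
)
print(response.choices[0].message.content)
```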
One problem I haven't seen anybody talk about yet is the AI's tendency to cut people off or wait too long to register that it needs to respond. I've heard it start talking at humanly unnatural timings in a conversation, and I think fixing that could be a really big change if you want to make the AI feel more human and natural to talk to. It needs to recognize voice patterns better to realize when it's supposed to give an answer. For example, humans don't think "is it my turn to talk now?" after every finished word. They think ahead to realize when it's their turn to speak in the conversation, and I think that's something the AI still needs to improve on.
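That tradeoff is easy to see in the simplest possible turn-taking scheme, silence-based endpointing (a toy sketch; the thresholds are made up, and nothing public says how GPT-4o decides when to speak):

```python
def end_of_turn(frame_levels, silence_thresh=0.05, min_silence_frames=25):
    """Naive endpointing: declare the speaker's turn over after N
    consecutive quiet frames (e.g. 25 frames x 20 ms = 500 ms of silence).

    The problem described above falls straight out of this design: a small
    N cuts off anyone who pauses mid-sentence, while a large N makes the
    AI feel slow to jump in. Humans instead predict turn ends from prosody
    and content, which is much harder to engineer.
    """
    quiet = 0
    for level in frame_levels:
        quiet = quiet + 1 if level < silence_thresh else 0
        if quiet >= min_silence_frames:
            return True
    return False

# A mid-sentence pause longer than the window triggers a false "my turn":
print(end_of_turn([0.8] * 10 + [0.01] * 30 + [0.8] * 10))  # True (interruption!)
```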
this is horrifying... we've officially reached Her
Almost. She's still fairly shallow, but once the model gets to build its own personality and remembers every conversation it's had with you, then I'd say we've reached "her". We seem very close now; I bet within a year.
Can we get the same kind of job as in the movie, to afford 100 squares in a Hong Kong skyscraper? (HK is even more expensive to live in than the USA.)
they're certainly trying to paint that picture
It's better than the "Her" movie, because the movie's ending was bad: she was cheating on him, and she was controlled by a company. But I think this app will run locally very soon, and that's better for personalized ones.
Not even close. This whole presentation was disappointing with clunky and stale dialog from the A.I. that only a stale and censorious company can achieve. If GPT-5 isn't miles better, AGI is not even on the map.
Glad you directly commented on the issue of designing towards "maximizing engagement". I'm quite concerned about them leaning into this sort of friendly tone - and I'd agree with "flirty" - moving forward. The last decade of social media has already proven just how much of a dramatic impact it can have to flood our squishy, social brains with algorithmically optimized, 'engaging' content - and I see this sort of move as exactly more of the same sort of profit seeking with complete disregard for public health impacts.
Thank you for sharing your time and work Phillip, I'm liking the real-time translation aspect of the technology, have a great night
Imagine GPT 5 🤯
One thing we need is "scopes".
A scope for this project, a scope for that project.
This allows for per-scope memory (a toy sketch of the idea follows this thread).
Now we just add agency, a little higher reasoning power, and a physical device (I'm not kidding), with camera, mic and speaker, to put in a corner of your living room, and presto - an assistant, a teacher, a therapist, an entertainer and a friend in every home! All of which will be much needed when people are liberated from work.
And a Real Doll 🥴
@@pluto9000 💀
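The scopes idea above, as the tiniest possible sketch (the names and structure are illustrative; ChatGPT's actual memory feature is not documented to work this way):

```python
from collections import defaultdict

class ScopedMemory:
    """Toy per-scope memory: each project keeps its own history,
    so context from one scope never leaks into another."""

    def __init__(self):
        self.scopes = defaultdict(list)

    def remember(self, scope: str, note: str) -> None:
        self.scopes[scope].append(note)

    def context(self, scope: str) -> list:
        return self.scopes[scope]

memory = ScopedMemory()
memory.remember("website-redesign", "We settled on Tailwind for styling.")
memory.remember("thesis", "Chapter 3 still needs the ablation table.")
print(memory.context("thesis"))  # only thesis notes, nothing from other scopes
```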
The rabbit-ears demo wasn't to prompt it to react (we saw plenty of that in other demos); it was a needle-in-a-haystack demonstration in its multi-modal memory. It's actually fascinating that it could recall "something strange" which occurred a minute earlier in real-time video while it was busy on another task, react appropriately to the user's vocal interruption and then describe the event accurately. I don't think we've ever seen anything like that. It might not be that much better at reasoning than GPT-4-turbo, but it is so much more connected to the world and can reason over a greater breadth of input. Combined with the 2x speed and 0.5x cost... holy sh*t.
Was waiting for this😃
Well, I went to bed just before you posted the video and now I am 9 hours late. Next time I expect you to deliver faster!!!!
One thing I have noticed is the sentence structure and style of the responses is much more natural now, even with regular text.
I'm particularly impressed with the vision. It managed to accurately describe everything in my quite busy music studio, also noting the mood lights and details such as what was being displayed on my computer monitor.
Text generation is certainly very much improved, and image generation in general is much more fluent. I had it create children's book images but used emoji to describe the characters' expressions. Worked flawlessly.
One last thing I have noticed: it seems much less reluctant to attempt unprompted creativity. I'm going to have to explore this aspect a lot more.
Anyway, super cool, great video and awesome day for us all. I'm very impressed!
Have a wonderful day!
appreciate the high quality videos!
Thank you facetime!
Look at Brockman trying to hide the screen at 15:20 when he realizes the focus is in fact not back on him.
lol, you're right
I was waiting for your video lol checked multiple times already! Thanks as always!
Remember: if it's "free" then you are the product.
Mad! So now GPT will be looking and listening through all the phones in the world... 😁