I'm screaming. As a visually impaired person, this is what I was eagerly waiting for. Still screaming! Thank you, Sam, Kev and the entire team over at OpenAI.
Me Too!
did they say if it's coming to Be My AI?
is it only on the Plus plan?
I will pay if it is
But what's the usage time limit?
Same here
I'm also visually Impaired and it's a delight to see this finally coming out! I'm screaming too haha
@amritsingh6987 they said it's coming over the course of the week to all Team users and most Plus and Pro users. I don't know if "most" just means people in countries without restrictions; they should have been clearer about it. I'm sure Sam Altman would not have been so vague. If it's anything like the other releases, I'd assume all of the US is getting it within the week, but I don't like that they said "most", especially if you don't have Plus already and are wondering whether to get it.
People will watch this in 20 years and they will chuckle the way we do when we see computers from the 80’s.
Most likely
This is a crap version of the Apple Knowledge Navigator concept video from the late 80s
Google already is
@@brianhopson2072 People just glazing all over google recently...
It’s 40 years
Imagine that 2 years ago we were all freaking out about Chat GPT-3.5
For this reason alone, they should open source every deprecated version...
So true
What do you think version 8 will be like?
@@baconsky1625 why? What's the reason people were freaking out about GPT-3.5?
Now imagine what we will be freaking out about 2 years from now.
These videos add a lot of sincerity and humanity where they could have been ultra clean faceless corpo-core launch videos
they clearly know what they are doing
they are still really cringe and forced tho
I like cringe
Gotta have a healthy mix of humans and robots.
Its intentional.
Okay, merging Advanced Voice with vision capabilities so it can process what's in your video or on your screen is an impressive update, showing how fast they're improving AI's capabilities.
Santa Mode is a fun release - it'll be especially great for kids!
You have just made the biggest breakthrough in blind accessibility tech in god knows how many years. Great job, guys. I am going to be extensively testing this out
2:43 even the people at OpenAI are tired of the standard response at the end. He cut it off!! 😂
Nice release!
can’t wait to have it walk me through stuff step by step with visuals!
at the rate they are going they might get to that in 3 years or less. Of course, video is hard, since the AI has to be analyzing about 24 frames per second. It's possible, but it has to be quick, which is gonna take a while to develop
@@dereklopezalvarez7168 what do you mean 3 years or less? They are rolling it out today
I just tested it out and this stuff is 100% finished software, mind blown beyond bits
@@v1nigra3 wow! what use cases did you test and what really blew your mind? I’d love to hear more!
@@airlesstermite4240 there is ZERO lag on this thing, like it can maintain a conversation while identifying everything instantly, it's just crazy. It can PERFECTLY read text and fully describe paintings. Most importantly, if it can see your code, it saves you the time of copying it over. I have o1 pro so I can keep it running all day. I'm just amazed, bro, that such technology exists, it's something else. And all the voice models sound different now.
Rowan starting to pour in a circular motion right after ChatGPT failed to point out the fault in his pouring pattern really got me
Chat told me that Rowan has the neuralink implant.
It also got me when he didn't discard the rinse water
Noticed that too, but it did at least tell him to use a circular motion initially. It didn't pick up that he wasn't actually doing it. Maybe failing on their live announcement will push them to ship fixes that take in more seconds of video before answering questions about reviewing actions, rather than giving instant responses based only on general knowledge.
Same, ChatGPT probably can't process small details like that. It's probably looking at each frame for hundredths of a second and moving to the next. The latency is low, but the ability to process information is hurt by that low latency. I can only assume that if it took a full second to process each frame (24 seconds of processing per second of video input), it would more than likely have caught the mistake. Since the video is stored temporarily, it likely can recall the mistake if asked.
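The guess above can be sanity-checked with quick arithmetic. A minimal sketch (every number here is an illustrative assumption, not a published spec) of how a fixed per-frame processing budget limits how much of a 24 fps stream a model can actually inspect:

```python
# Hypothetical numbers: how much of a 24 fps video stream can a model
# inspect if each frame it looks at costs a fixed processing budget?
VIDEO_FPS = 24             # typical camera frame rate
BUDGET_PER_FRAME_S = 0.1   # assumed 100 ms of processing per inspected frame

frames_inspected_per_s = 1 / BUDGET_PER_FRAME_S    # frames the model can look at per second
coverage = frames_inspected_per_s / VIDEO_FPS      # fraction of frames actually seen

print(f"{frames_inspected_per_s:.0f} of {VIDEO_FPS} frames inspected per second "
      f"({coverage:.0%} coverage)")

# With the 1 second/frame figure from the comment, processing
# 1 second of video would itself take 24 seconds:
processing_time_per_video_s = VIDEO_FPS * 1.0
```

Under these made-up numbers the model sees fewer than half of the frames, which is consistent with the idea that a brief mistake can fall entirely between sampled frames.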
it got you what??
You've pushed humanity into the next era
THE MATRIX AND TERMINATOR COMING IN A FEW YEARS, AND THE END OF HUMANITY, DESTROYED BY AI ...😬😬😬😱😱😱💀💀💀☠️☠️☠️🩸🩸🩸
These daily releases are fabulous; they're keeping me excited for what you'll roll out next
screen share finally, the end of tutoring is near
Fr
I already screenshotted the fuck outta my screen to let it help me with tasks (math) and it was amazing.
Total Tutoring Death
DIY is about to get a lot easier too
Just to add to my last comment: it is so refreshing as a viewer to sense the authenticity and integrity of the various people who work at OpenAI. They come across as real people who deeply care about what they do. An example of a great company culture where the very best shines through. It must be great to work there, and inspiring to the many young folks who see this!
LOL that awkward moment at 5:24 😂 He asks ChatGPT what's wrong, expecting it to say 'use circular motions' (since it just said that 10 seconds ago), but ChatGPT goes 'your technique is fine!' Then he just goes ahead and pours in circles anyway, knowing ChatGPT messed up. Definitely didn’t go as planned! 😆
(it might be because ChatGPT takes only a few frames from the video and isn't able to figure it out based off those? or is something else the reason?)
It's probably because he only made that mistake during frames it didn't catch. Though it's a negligible issue that will surely be ironed out as processing speed naturally increases.
It might also be that usually GPT tries to be encouraging and didn't want to criticize him.
@@r.m8146 That's possible. Sometimes it needs to get through its encouraging fluff before it really says anything.
You have no idea how badly I wanted this for my near-blind grandmother. I hope you guys know how useful and helpful this will be, if done well, to handicapped individuals.
Thank you.
I also have a near-blind grandmother. She can launch the ChatGPT app with Siri, but how would she start Advanced Voice when she can't see the small icon? And AFAIK the ChatGPT app does not (yet) support basic Siri control. (Apple's VoiceOver is not an option for her, as it's way too complicated.)
@@rdeckard66 you can create a Shortcut that opens Advanced Voice Mode, but idk if it will work with video
Love how they did this right after Google's release (live now) of video in Project Astra
Google didn't really release it. They just showed what it could do potentially.
@@emmanuelr710 They'll release it 7 months later, like ChatGPT's vision, ha.
@@emmanuelr710experimental version is here
It's great! But Google's is free and doesn't have any limitations
this is actually out, and was a live demo. that's a huge difference.
0:18 - we know what happened. ASI escaped the lab and it is using Sora to simulate you guys, it’s game over!
How so?
Love this 😂
I love how ChatGPT suggested rinsing the paper to get rid of the papery flavour and then didn't see any issue with pouring the coffee into the same mug with the same water that was used to rinse the filter.
I thought I was the only one who cringed when he didn't dump out the water first.
It will remember that you silenced it at 2:45.
😂😂😂
Kids are gonna believe in Santa until they're 25 soon enough lol
Well, there are people who believe in all sorts of gods until they're 90. Not that different tbh.
The audio guy was out sick today once again, I see
They all have lav mics; I'm guessing it's their streaming settings
String and a tin can guy is back
What was the joke? Explanation?
He was busy bringing down chatgpt
@@adammccoy1 possibly they are commenting on the times when the audio continued when it shouldn't have? I.e. when the humans were speaking over ChatGPT (sounding like it didn't have time to finish what it was saying)
How about just being able to upload PDFs into o1 and o1 pro?
No shit, you're right, you can only upload photos. I know in Bluebeam you can save a PDF down as a series of PNGs. I'm sure most other PDF readers could do the same. Maybe you could upload those and GPT could OCR them.
Nobody is going to pay for o1 unless they are mentally challenged, so that feature is not needed. Gemini 2 is coming out too
The first thing I’ll be asking that new Santa voice is when he’s bringing Sora to the UK
never, thanks regulations
So much for Brexit...
Don't worry, I'm in the US and it's claiming it's not available in my country. And no, I don't use a VPN.
This has been a pretty botched rollout overall. They built the hype up way too early and now they're underdelivering because of it.
@@Patrick-gm3fb it said over the course of the week, and for all Team users and most Plus and Pro users. I'm not sure if "most" means only people in countries without regulations. But even if you haven't gotten it yet, you may just have to wait out the week. Have a little patience.
Why don't you get a VPN that says you're in the United States? I even know people in Russia who have access to American AI
Need the ability to screen-share individual windows on Windows and macOS.
A lot of interfacing with AGI/ASI will come down to simple UI/UX issues.
Perfecting the basics of input, with technology that has existed for even 20 years, is all that's needed.
The AI just needs to 'see' what is going on.
You can try it for free on Google AI studio. It's actually amazing to use
Ya I’m hoping that they do this
When I first heard Sam talking about this 12 Days of Christmas thing, I thought it would be ho-hum... but every day has been nothing short of amazing. Big congrats to the team!
Advanced voice: ‘My guidelines do not allow me to talk about this.”
As long as you are not a pervert asking it to be your GF and do disgusting stuff you should be fine.
@@zinthaniel9913 I've had it flag completely normal things, like talking about self-driving cars. It also seems to mishear words more often than the speech-to-text does.
@@quantuminfinity4260 make sure you're speaking close to your phone and in a low-noise environment and it should work perfectly. Haven't found a better voice chat for AI than this, but if you have any recommendations please let me know!
@@zinthaniel9913 What a moronic statement. It states that randomly. People have been complaining about its nonsensical censorship for months now
@@quantuminfinity4260 yeah, it glitches a lot, which is annoying
But helpful tbh
More feature requests, reply below to add yours:
Here is my list:
- Ability to organise chats into folders for easy access and management.
- Ability to store and read a PDF within the ChatGPT interface and talk with it at the same time, to make better notes, key points, etc.
Both are much needed
Agents!
New features available in Europe 😅
Honestly I've been wanting a way to organise our chats so badly!
In Advanced Voice Mode, the ability for ChatGPT to stay silent until I tell it to respond, so I can take my time asking my question!
@@billmccarvell997 yes. 100% agree. the interruptions are rude and unproductive.
Maybe right now this seems like a small update, but I like it as a step into the future. Imagine building some Ikea stuff: you just film all the leftover parts and ChatGPT gives you exact instructions. I am excited for the future!
I was even thinking of building a home, or something complex like a car. Having something hold your hand and tell you the steps is a game changer
I was trying to think of practical stuff I could do with this… well, I know now.
If you can download the OpenAI app, you can follow the ikea instructions ;-)
@@SALSN bro don't say that out too loud 😂😂😂☠
That will take so many GPUs 😂
OpenAI, you are making history!
God Damn! All of the features are AMAZING!!!! I am a ChatGPT Pro user, but I live in Germany where we do not get lots of AI features like the rest of the world.
VPN
This....is HUGE! The perfect birthday present for me today 🔥🔥🔥
I really appreciate how they present this in such a casual and 'first try' way. For those critiquing the audio, it's clear they prioritize substance over unnecessary clutter. They're genuinely making a difference in the world!
Finally, we get AVM with vision. What a wonderful Christmas gift; just what I asked for. Thank you! 😍🎄
This is literally the biggest advancement for the human experience right now. Thank you open ai and the magic you're bringing and have brought the world 🌎
Thank you so much for all of the hard work you and your team have done thus far. Astounding work.
Great update! I love these daily videos with the AI development team. I’ve been a regular user of ChatGPT Advanced Voice, so happy to see this update. Advanced Voice (through the mobile app) allows me to work from anywhere. I often start composing articles for my AI newsletter while out walking the dog. Keep up the great work.
This is history in the making.
I just tried the video interaction and screen sharing feature, and it works really well!
Can't wait to check out the new video mode!
oh my god, the vibes they create are so warm and family-like. I love both OpenAI's culture and the cozy Christmas feelings they evoke
We're about to have a personal tutor with endless patience and understanding. I can't even imagine what else you can do with this, hopefully a lot of impaired people will have their lives made much easier now. What a time to be alive.
This update is like Christmas come early for ChatGPT users, literally, with Santa in tow. Let's break this down:
1. Video and Screen Sharing: Finally, Advanced Voice Mode gets the visual upgrade it needed. Teaching coffee brewing? Screen-sharing your Mall Santa rejection letter? This is where ChatGPT goes from "useful assistant" to "full-blown partner-in-crime."
2. Santa Mode: Talking directly to Santa? Brilliant for parents and kids, or anyone who's overly curious about reindeer politics. The snowflake icon feels like a cute touch, even if Santa's jokes are straight out of a dollar-store pun book.
3. Global Rollout: Rolling these features out to Plus and Pro users first makes sense; it's a nice thank-you for the folks already invested. Enterprise and EDU plans getting access later means the functionality has the potential to go pro, literally.
This feels like OpenAI flexing on every other virtual assistant. Between the advanced voice features, the screen sharing, and even Santa Mode, they've managed to make ChatGPT even more dynamic. But also… let's not pretend we're not all going to spend the first week asking Santa increasingly absurd questions.
More memory for Christmas? It's been saying "Memory Full" for months... This is a great new feature, A11Y-friendly and useful.
You launched Sora two times and it's still not available. So frustrating
It is available. A VPN is the key if you live in the EU or UK.
This is the most amazed I’ve been in years
2:03 chatgpt is talking to her parents actually
What do you mean?
We are watching history unfold. Amazing.
Just fabulous, attending to so many aspects (like visually impaired folks), bloody well done OpenAI!! 😍😍
GREAT JOB, TEAM!!! I’m really enjoying these 12 Days videos!! Love meeting the people behind the curtain! I’m looking forward to seeing each day’s latest video. Congrats!!! 🎄🎅🏻☃️
bet the access for Plus users is gonna be like 10 mins/day
It’s one hour
It should be unlimited.
@@BionicAnimations the API costs roughly $15 per hour to use in your own apps or production use cases, which means it's likely very computationally intensive, as this is the first public iteration of the model. Just like GPT-4's pricing dropped ~95% down to GPT-4o's pricing, Advanced Voice Mode will follow suit over the next 12 months. Personally, I think 1 hour per day is a bargain, considering the API is $15/h and a Plus subscription is just $20 per month.
i'm so hyped for the last 3 days. this is going real good so far.
Yet another brilliant feature for ChatGPT. Well done, guys.
Yooo this is insane. I love it so much. Can you imagine what you can do and learn!!
I am overly happy and grateful, but a little afraid at the part where she said "SOME" plus users. So, not all of us Plus users get it even though we equally pay $20 per month? Or am I missing something here? 🤔
This was the first innovation that excited me in this process.
Jackie, don't worry for these little awkward pauses - the presentation was amazing and you're doing great!
OpenAI is fr getting me into the Christmas spirit; I wasn't in one before I started watching 12 Days of OpenAI... I could honestly see this becoming a tradition
At 2:14, Rowan's facial expression in response to ChatGPT's statement, "multi-modal research sounds fascinating", was simply priceless! 🤣🤣🤣🤣🤣
ChatGPT is said to be good at coding. Now that it's equipped with visual recognition to interpret real-world situations (much as it responds to verbal and text prompts), I wonder how efficient an Agile/Sprint iteration cycle would become if it participated in a Scrum meeting (it should be able to read the yellow sticky notes updated on the whiteboard). I presume its action would be almost instantaneous upon recognizing its assigned to-do tasks (e.g. revising program code each Sprint iteration). It would just need a large screen to display its work, i.e.:
• input - via camera lens and microphone
• processing - via cloud (e.g. Windows 365 Link)
• output - via display screen
Will this be brought to Be My AI?!
I'm blind.
I can't express enough in a comment how much these features have helped me so far,
and how useful they'll be with video!
My kids immediately noticed that Santa’s voice sounds suspiciously like a certain elderly porcine gentleman. They repeatedly asked him why he is Peppa Pig’s grandpa 💀
Santa voice is top tier
It’s a Christmas miracle. This is what we’ve been waiting for!
It feels like OpenAI and Google just want to sync up together and not miss on each other's releases
The kings of launches without making the product available
1:50 we can now finally say "Hey chat is this real?"
Well, for me screen sharing is the best part. Now I can get real-time assistance from it, and it will have better context for what I am doing in real time.
I can already imagine the vision capabilities being used for AR applications. Imagine just wearing a pair of glasses and all that important information popping up in front of you!
Finally something I can use to help me remember people's names at a party! 😎🤖
6:55 for the nerdiest laugh in the history of nerd laughs. I checked with GPT and it agrees.
you first gotta discard the water you rinsed the paper with :)
This is super exciting, looking forward to trying it out!
It's great to live in Europe with the AI Act.
I tried it and despite using video mode, chat insisted it didn't have video capabilities and can't see through my camera or my screen. Although it did make one slip up: it mentioned the 'electronics' in my room, but when questioned insisted (incorrectly) that I had told it about them. Weird. The Santa option worked and could see fine.
Same issue! What is going on?
Give a student discount, man!!!
Bro, maintain respect. The ChatGPT team has given life-changing updates. It's mind-blowing. It would be nice if you gave them respect
Nah, it’s such a joke people even get discounts for being a student. Pay $200 like I have to it.
Some of you guys really don't know the hardships of being a student if you're not from a well-off family; that's exactly why student plans exist in the first place.
Speaking like real brats here (or illiterates, whichever suits you) :)
They can't do that when most of their users are students.
The Santa module is pretty awesome! My cousin absolutely loved it!
The OpenAI team looks so fun and chill ❤😂
So I pay $200 and I still don't have access to Sora. What about that apology at 0:34?
Blame the country you live in, not openai. And download VPN.
@@Jaroslav-f9o firstly, it's not the country, it's the company. I'm in Ghana and there are zero regs against AI here. Secondly, WTF, why would I have to pay for a VPN after paying $200? Dumbass
You guys are freakin amazing!!! Can't wait to see more :)
This is what I have been waiting for forever. It's finally a reality ❤❤
I paused the video at 3:46 to test it all out and I just have to say it’s amazing upon preliminary testing
Lol, literally the best joke from this entire series so far, and it came from an AI
IMO these features are more important than Sora. Think of all of the use cases for this in the education and training space. . . This is transformational technology.
Wow !! Advanced voice is just incredible
not me checking the app store for an update every 5 seconds lol
I wonder if it can help with car maintenance
Gemini 2.0 Flash (Project Astra) vs GPT-4o live video:
Which one is better?
GPT!
The fact that they tried to keep the first thing secret was kind of funny. Have they not heard of a video title? 😂
there was no title during the live stream; it came afterwards, as far as I remember
@@melonenkopfchen6318 Oh, I see. That makes more sense 🤔
Been waiting for this since May
The ChatGPT app needs to show an outline or something while talking, like the Rabbit R1 does.
I think the best approach is for ChatGPT not to read back every single piece of text on the screen, and instead give a simple outline that is clear and understandable.
It should go into detail, in words that aren't on the screen, only if needed.
Available yet? Just updated the app but still not there.
If you're in Europe you're going to have to wait due to European regulations
@@TrollAndPlay In California.
I wish Santa could send the gifts too..
Love these daily updates :)
The faux happiness in these videos is triggering, but the features are great.
Santa with a British accent is known as Father Christmas.
Very well done, congratulations 👏🏼👏🏼👏🏼
If you’re in Europe just use a VPN set to US or Canada to get access to this. It’s freaking awesome
YES!!! All I wanted for Christmas!
Video will roll out in the coming weeks
inb4 6 months
This is starting to feel like we're the language models, working through the kinks