This does not appear to be the new voice model. It will be rolling out in the next few months. I’d like to see another analysis once you gain access.
Me, too! I look forward to making that video!
I asked ChatGPT today and it told me it should be coming in alpha mode to Plus users in the next 2 weeks. Omg how exciting, what will you try first? I want to test it as a tutor :)
Yes! ChatGPT is already an excellent tutor, but it will be so much better with these new features. I'd like to increase my understanding of quantum computing. But, currently, I can't interrupt the model hands free when it starts going off on a tangent with something I already know. Hands-free is important to me because I'm usually painting or sanding when I'm in my shed. (So, my hands are not "free.") This new functionality will make the model a lot more useful. I can imagine myself asking ChatGPT just about everything from now on.
One note: just remember to ask ChatGPT: what was your source for that info? So, if ChatGPT told you alpha mode is coming in two weeks, then push back at the model and say: How did you know that? Where did that information come from? If the model is making it up, it will (usually) apologize. If it provides you with a source, ask for a link that you can verify yourself. I always push back at the model to make sure it's not making stuff up. It is getting more and more obvious to me these days when it is, but that is probably because I use AI so much.
That version is not the one in the demo (GPT-4o). That's only GPT-4.
Hi Lights and Colors- It's GPT4o, since I'm a Plus subscriber. But, they just haven't added the new functionality yet. They said the new functionality is coming for Plus subscribers in the next couple of weeks. It's been two weeks, so I've been checking every day. At one point in the video, I hold up my phone so you can see it says GPT4o on the top of the screen. If I do another video, I'll make this point a lot clearer. Thank you so much for the feedback!
@@DeepLearningDaily I see, so it's just the delay. I bet you're excited as everybody else. I am. I find it so out of this world, fiction-come-true stuff. 🥰❤
@@theeyes-fx6ld Yes! You captured it exactly!
This update is going to be one of the most advanced futuristic things that I have witnessed, judging by the GPT4o ChatGPT and Microsoft Copilot videos. Almost doesn't even seem real. I'll believe it when I can finally test it. I'm excited as well
ahhhh, how did you miss "Tap To Interrupt"?
The new model shouldn't need tap to interrupt. You can overlap your voice to interrupt it like in a real conversation. She didn't miss the text on screen.
The text model you are using is GPT-4o, but Voice Mode is using GPT-4.
OpenAI will start the alpha phase (not beta) of the new Voice Mode with a small group of Plus users at the end of July. In the fall, all Plus users (including me) should have access to the new Voice Mode, based on the GPT-4o model.
Thank you, Manuel. I appreciate the clarity, and I am very much looking forward to trying the new version. Admittedly, even in the current version, I find Voice Mode very useful. Getting rid of the latency will be nice, but I don't need any of the other features. I rarely ask my Alexa unit to whisper to me. And, I can't imagine a use case where I need ChatGPT to sing to me.
Did you not pay attention to the release video at all? They literally said the features would be rolled out incrementally over the next few months. The only thing that's available to free users is ChatGPT 4.
I got access to the desktop model on Monday.
Actually, they said it would be a few weeks for their Plus subscribers. I've been checking every day. I am very excited for the new features.
@@surfercouple they changed it to months now
@@surfercouple you have to stay updated with interviews that leadership at OpenAI do. That’s the best way to stay updated.
OpenAI clearly stated on May 13, 2024 (the day of the livestream keynote ChatGPT 4o launch) that ChatGPT 4o was available to "Pro", "Team" and "Enterprise" subscribers as of May 13th, but would not yet have the new "speech" functions. However, it stated that the text and image features were already working. This is not true. If anyone takes the time to go to OpenAI's webpage that showcases examples of what this new "omni" model can do, scroll down below the video examples to a section called "Explorations of Capabilities," where there is a drop-down menu of 16 examples showing the exact input "Prompt" and "Output" results. I tried replicating these "amazing, dazzling" results by copying and pasting the exact "Prompt," and instead of getting a mind-blowing result, my "Pro" ChatGPT 4o output gibberish: illegible long-form handwritten text where the letters were malformed; I was lucky if even 2 or 3 words of a 12-line poem were recognizable. ChatGPT 4o failed miserably at replicating the example!! Go ahead, try it yourself and let me know if you are able to replicate any of the 16 examples accurately.
@@satoriscope Wise advice. I will update my description to make it clear I have a Plus subscription and these features are coming to Plus subscribers in the upcoming weeks. Thank you for the clarification. You make a valid point. I realize now that the readers of my newsletter all tend to be Plus subscribers, but that wouldn't be the case here on TH-cam since it is a much broader audience.
"I can't interrupt the model"
The model: "Tap to interrupt" 😂
(Yes, I know that you wanted to interrupt it with your voice.)
You're not wrong. :) I'll admit I was being a total Karen to the poor model while testing it.
It will roll out in the coming weeks. It is still the same model.
It happened to me as well. I paid for a singing, whispering sort of version of ChatGPT-4o, but it does nothing of the sort. I keep wondering when it'll change to the demo video's version, and I cannot find the answer anywhere... Your video is 4 weeks old (currently), and still there is no change. It does a fine job, but not as shown on TH-cam.
I still don't have the singing, whispering version. But I still use ChatGPT Voice every day, sometimes multiple times a day. I refer to it as my "Oracle." I'm sure it will be even more useful with the updates, but even in the current form, I find ChatGPT Voice speeds up my workflow. For example, I brainstorm article ideas with ChatGPT Voice while walking the dog. To the other dog walkers, I probably look like I'm on the phone. But, I am actually "on the phone" with ChatGPT- getting some work done.
@@DeepLearningDaily Beware: when I use ChatGPT-4o, I have to tell it it's wrong numerous times. It agrees with me and fixes its answers, but it is I who must know that something is wrong, in historical facts, in cultural ones, etc. So I'm not saying it's all bad, but I'm not saying it has no faults, either. So do some fact-checking, for your own good. That is before the advanced features...
@@XRos28 I agree. Thank you for pointing this out. You have to know your subject material. I appreciate the heads-up, though.
When I ask GPT4o to do research, I will often say: "Do your research" in my prompt. Before I publish anything, I run it through Perplexity and say: "Please fact-check this." (Why do I say 'please?' I don't know. I'm a crazy person.) Perplexity finds an actual source I can verify for everything GPT4o says. I can also ask 4o to provide sources, but often, the links are fictitious. It saves time to ask Perplexity to do it.
Thank you for the video. One question: is it the free version or the paid version?
👀
People can be such suckers
This is an interesting general observation of humanity. Do you want me to pose that question to the Ember voice in my next video?
@@TOMTOM-zj5xj I love it! KAREN AI-that is brilliant. And, yes, you are right. I put the voice model through its paces in this video.
They have given a button to interrupt; you failed the interview lol
Oh no! You mean I have to stay home and play with awesome Star Wars stuff? I'm heartbroken! LOL. You're not wrong, though. What I was trying to do was imitate all of the tests done by OpenAI during the launch day. If you haven't seen the video, two of the developers ran the Sky voice through several tests. One of the tests they frequently did was interrupt the model. Thank you for the feedback.
th-cam.com/video/DQacCB9tDaw/w-d-xo.html&ab_channel=OpenAI
@@DeepLearningDaily aside from that, you did a good benchmark of what it can do with GPT-4o, before the new update, since many people didn't know how extensive this feature already has become.
I think the comparison is good, to know exactly what changes in capabilities when the update does come.
@@stateportSound_wav Thank you! That is very kind. I had fun doing the video. It's a great feature, and I use it all the time. I'm looking forward to the update. They are a week late, as they said it would be two weeks (for the Plus subscribers). They are likely delayed due to the "Sky" incident.
@@DeepLearningDaily yeah, another comment said they gave an update somewhere, I think blog post delaying it from the original “weeks” to “months”, but I hope that’s not the case
@@stateportSound_wav Where is this blog post that says months? I never heard or read that. I think if they changed it to months, there would be more of us Plus subscribers angry and talking about it all over the internet.