Yeah, no basement-dweller devs are gonna be messing with that API until the costs drop by at least 100x, which I honestly only see as a near-term incentive for Meta to get a Llama voice model cookin'
I'll use it, but I can't wait for an uncensored open-source version. Text only is too boring, and I don't have the patience to stick with it for the tasks I want, like learning languages.
Well said. 3 tests today, ~2 mins each conversation: $1.50. Yikes!
I don't think this is the same model as advanced voice mode.
Great work! You must have had a busy couple of days getting it working
you can really implement it in a few hours
I just integrated it with Twilio; it changes everything, but it took me a bit of time.
Looking forward to seeing your alleged "spaghetti" code! (Right now your latest repo is from 2 weeks ago.)
Which function controls the interruption?
VAD (voice activity detection)
@gaijinshacho Even when I use turn detection, it continues talking until it finishes.
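For reference, a minimal sketch (Python, `websockets` library) of how interruption is usually wired up in the Realtime API beta: enable server-side VAD via `session.update`, and when the server reports that the user started speaking again, send `response.cancel` to stop the in-flight answer. Event and field names follow the October 2024 beta docs, so verify them against the current reference:

```python
import asyncio
import json
import os

import websockets  # pip install websockets (v13 uses extra_headers; v14+ renamed it additional_headers)

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}

async def main():
    async with websockets.connect(URL, extra_headers=HEADERS) as ws:
        # Let the server's VAD decide when the user's turn starts and ends.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {
                "turn_detection": {
                    "type": "server_vad",
                    "threshold": 0.5,            # speech-probability cutoff
                    "silence_duration_ms": 500,  # silence that closes a turn
                },
            },
        }))
        async for raw in ws:
            event = json.loads(raw)
            # VAD says the user started talking over the assistant:
            # cancel the current response instead of letting it finish.
            if event["type"] == "input_audio_buffer.speech_started":
                await ws.send(json.dumps({"type": "response.cancel"}))
            # Note: response.cancel only stops generation server-side;
            # you still have to stop your local audio playback yourself.

asyncio.run(main())
```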
Can we have speech/voice as input to this app over websockets and get the result back as text?
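Apparently yes: the beta lets you restrict the session to text output while still streaming audio in. A hedged sketch of the relevant events (same beta event names as above; the PCM capture itself is up to you):

```python
import base64
import json

# Restrict responses to text while still sending audio up.
session_update = json.dumps({
    "type": "session.update",
    "session": {
        "modalities": ["text"],         # responses come back as text only
        "input_audio_format": "pcm16",  # format of the chunks you append
    },
})

def audio_append_event(pcm_chunk: bytes) -> str:
    # Microphone audio is sent base64-encoded in input_audio_buffer.append.
    return json.dumps({
        "type": "input_audio_buffer.append",
        "audio": base64.b64encode(pcm_chunk).decode("ascii"),
    })

# The reply then streams in response.text.delta events and is finalized
# in response.done, rather than arriving as audio deltas.
```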
5:58 - I felt that 😂 Currently having the same conundrum with the Anthropic API! (Claude 3.5 Sonnet is so good...)
Great video, thanks Kris! I'm interested in the function calling and structured output returned over the voice websocket. Can you use agents or agentic flows with constrained and structured outputs in voice mode 🤔
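Function calling does work over the voice websocket in the beta: you declare tools in `session.update`, the model streams back function-call arguments, and you return the result as a `function_call_output` item. A hedged sketch (the `get_weather` tool is made up for illustration; event names from the beta docs):

```python
import json

# Declare a tool for the session; the model can then call it from voice turns.
tools_update = json.dumps({
    "type": "session.update",
    "session": {
        "tools": [{
            "type": "function",
            "name": "get_weather",  # hypothetical example tool
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }],
        "tool_choice": "auto",
    },
})

async def on_event(ws, event):
    # The model streams arguments; the .done event carries the complete JSON.
    if event["type"] == "response.function_call_arguments.done":
        args = json.loads(event["arguments"])
        result = {"city": args["city"], "temp_c": 21}  # call your real API here
        await ws.send(json.dumps({
            "type": "conversation.item.create",
            "item": {
                "type": "function_call_output",
                "call_id": event["call_id"],
                "output": json.dumps(result),
            },
        }))
        # Ask the model to continue (speak) with the tool result in context.
        await ws.send(json.dumps({"type": "response.create"}))
```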
Could you achieve these results in an app using just the native iOS text-to-speech and speech-to-text features alongside OpenAI's NON-realtime APIs?
yes I can, I have already implemented that
Happy to be the first to comment. Kris, you are always up to date. Once again, cool stuff from you. Spaghetti code... 🤣. Great that you talked about the costs as well. I like your creative and often really funny ideas. Please keep up the great work! Regarding your phone call: I saw a video from a guy in the US weeks ago (no Realtime API) - he had his AI order a pizza and it worked great. Latency even back then was good enough, so it should work perfectly. Maybe try it with an Italian accent 😉. Thx from Tom!
Can it work for other languages such as Urdu or Hindi?
Where can I get the repo?
Does everyone have access to this beta? Anything we have to do?
When will you upload it to GitHub?
Where can I find the code?
Why wouldn't you share the repo?
This is mindblowing...
What is AVA?
Would love to bankrupt myself with your code; I won't judge the spaghetti. Tried 20 prompts with the new Claude to get it up and running - no dice. Examples would be much appreciated :)
What code did you use?
It's not out yet
@khalifarmili1256 How long will it take?
I tested it yesterday, but got: "Error connecting: 403. Access denied. Check your API key and your permissions for the Realtime API."
Try checking your API key, or just make a new one.
@elprox1290 Thanks again!
Where is the code?
I don't quite understand what realtime means here, especially in the text version. In the voice version, yes, you can interact with it like you're really talking to a person - you can interrupt the conversation, and maybe OpenAI can pick up extra information from your tone, intonation, or accent. But in the text version, I don't see any difference from just using the OpenAI chat completions API.
Can you provide the code, please?
I still don't have access to it :/
Thanks :)
The Realtime API cost is high, so here's a cheaper way: 1. Use Google STT to transcribe the user's speech. 2. Send the text to GPT. 3. Get the response from GPT. 4. Send the response to Google TTS. 5. The user gets the AI response as both text and voice. The response time is longer, but it costs less.
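A minimal sketch of that five-step pipeline in Python, assuming the `google-cloud-speech`, `google-cloud-texttospeech`, and `openai` packages with working credentials for both services (the chat model name is illustrative):

```python
from google.cloud import speech, texttospeech  # pip install google-cloud-speech google-cloud-texttospeech
from openai import OpenAI                      # pip install openai

stt = speech.SpeechClient()
tts = texttospeech.TextToSpeechClient()
llm = OpenAI()

def reply_to(wav_bytes: bytes) -> tuple[str, bytes]:
    # Steps 1-2: transcribe the user's speech with Google STT.
    stt_response = stt.recognize(
        config=speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code="en-US",
        ),
        audio=speech.RecognitionAudio(content=wav_bytes),
    )
    text = stt_response.results[0].alternatives[0].transcript

    # Step 3: get the assistant's reply from a non-realtime chat model.
    chat = llm.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{"role": "user", "content": text}],
    )
    answer = chat.choices[0].message.content

    # Steps 4-5: synthesize the reply with Google TTS; return text and audio.
    tts_response = tts.synthesize_speech(
        input=texttospeech.SynthesisInput(text=answer),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    return answer, tts_response.audio_content
```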
In that case, you don't need the Realtime API; I think the OpenAI chat completions API works just fine. The key point is that the Realtime API doesn't miss any information in your voice (tone, intonation, accent), which means it can understand you like a real person - at least it's trying to.
Yes, you said it yourself: the response time is longer and the cost is lower… thank you, Sherlock… case solved.
Can't you just prompt it to be less talkative so you don't have to cut off its response that often? That would make a big difference and everything more seamless :)
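That should help: the beta exposes both session `instructions` and a `max_response_output_tokens` cap, so you can keep replies short instead of cancelling them. A hedged sketch (field names from the beta docs; verify before use):

```python
import json

# Keep replies short so there is less to interrupt in the first place.
less_talkative = json.dumps({
    "type": "session.update",
    "session": {
        "instructions": (
            "Answer in one or two short sentences and stop as soon as "
            "the question is answered."
        ),
        "max_response_output_tokens": 200,  # hard cap per response
    },
})
# Send over the open websocket, e.g. await ws.send(less_talkative)
```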
Doesn't it have emotions?
How do I get the repo?
No one is even going to be able to develop at these prices other than those with deep pockets. Just testing and figuring things out would be too expensive to even try.
If you tell it it's playing a game with the user, it might be failing on purpose to let you win!
I'm waiting to hear the Irish accent to be sure.
Can you share the repo link?