The video was very helpful. With its help I was able to build my own application, but I'm having trouble deploying it. Could you please make a video on how to deploy this application? For context, when deploying on Render, pyaudio cannot be used. How do I solve this? Please help.
Could you check the git repo? It's not working properly. The text config works okay, but the audio config isn't working.
Could you open the Inspect console of your browser and let me know what it prints? I was using Chrome; which browser are you using?
Isn't Gemini 2.0 Flash already multimodal out of the box? Or was your dev for local purposes?
I decoupled the API usage from Google AI Studio to allow customization.
How do I allow interruption?
You don't have to intentionally "allow" it; the voice data sent to and received from the Multimodal Live API is chunked and handled asynchronously, so new input can arrive while output is still streaming.
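To illustrate the idea, here is a minimal sketch (not the actual Live API client; the queue names and chunk data are hypothetical) of the asyncio pattern: sending and receiving run as independent concurrent tasks over a shared channel, so neither direction blocks the other. This overlap is what makes interruption work without any explicit flag.

```python
import asyncio

# Hypothetical sketch of duplex chunked audio: the sender and receiver run
# concurrently, mirroring how audio flows to and from the Live API socket.

async def send_audio(mic_chunks, outbound):
    """Push microphone chunks onto the channel as they become available."""
    for chunk in mic_chunks:      # stand-in for a live microphone stream
        await outbound.put(chunk)
    await outbound.put(None)      # end-of-stream marker

async def receive_audio(inbound, played):
    """Drain chunks off the channel and 'play' them (collected in a list)."""
    while True:
        chunk = await inbound.get()
        if chunk is None:
            break
        played.append(chunk)      # stand-in for speaker playback

async def main():
    outbound = asyncio.Queue()
    played = []
    # Both directions run at the same time; new input never has to wait
    # for playback to finish -- which is why barge-in "just works".
    await asyncio.gather(
        send_audio([b"chunk1", b"chunk2", b"chunk3"], outbound),
        receive_audio(outbound, played),
    )
    return played

chunks = asyncio.run(main())
```

In the real application the queues would be fed by pyaudio capture on one side and the Live API websocket on the other, but the concurrency structure is the same.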