Incredible project and the video was a huge help getting the code bases up and running. Thank you so much! Such a blast to see this in action, and I can't thank you enough for sharing the codebase.
Absolutely amazing! I learnt so much, from animations to using 3D models in React! Thanks a lot! Love from India ❤
If you make a full course explaining everything from zero to deployment I'm definitely going to buy it.
Ow, interesting, I have different courses in mind, I will consider it 🙌
Hello! Thank you for this tutorial and the other one for the lip sync. Do you have any idea why the animation stays stuck on the Idle state? Could the problem be in the .glb file?
@WawaSensei Thank you for this amazing tutorial!!! I was waiting for this!
Also instead of chat, is it possible to use it for real time audio using microphone?
Sure, it's possible; I've worked on a VR project in which that's exactly what we did. You can use Google Speech-to-Text for example cloud.google.com/speech-to-text/ (but any speech-to-text API should work)
@@WawaSensei Thank you! If possible, could you post a video about your VR project with the real-time microphone? It would be a great help. But thanks again!
@@sharonthomas4010 thank you very much!
Unfortunately, it was a customer project and I don't think it's live anymore...
But the project idea was so interesting that I'd like to redo it better 🤣
Good tutorial, but when I import a separated FBX animation (say idle.fbx) into Blender and apply it to the character model in the Action Editor, the shoulders are twisted. Anyone know why? BTW: my model was generated from Ready Player Me and converted to FBX in Blender.
If you visit Tokyo let's dance together! 🕺🤣
Amazing as always, Sensei! I'm trying to replace the Ready Player Me model with a VTuber model I made, but it seems like it isn't compatible out of the box. Could you recommend some resources I should refer to? (Aside from your course, which I'm already taking, hehe)
Yes, it seems VTuber models are specifically designed for VTuber platforms. Look at your model's file extension and search, for example, for "VRM convert to FBX" or "VRM to glb"; that should help you get a proper model from it
Could you make a video on how to add these technologies (ChatGPT with a 3D avatar acting as an NPC) so it can talk back to you when you prompt it within a game? Maybe inside Unity 3D?
It's awesome, thank you! Can you try to make a video about blinking eyes?
Thank you! Isn’t it blinking in this tutorial? If it’s not it definitely is in my AI language teacher tutorial you can find on the channel 🙌
I am thinking of using an npm speech-recognition package to make it possible to communicate with it
Great idea! I'm demonstrating with this set of libraries, but any tool can be replaced or added 🔥
Replacing the text input with vocal one would make it so cool!
Was working on something similar since I saw your lipsync video but couldn't make any progress...
You're just insane Sensei !!
Now I'll try to add more flair to it... any suggestions?
So cool! I'd say you can play with the head direction to make it look at the camera (here it's weird, sometimes she speaks and looks elsewhere because of the animation). I show how to do it in the first video of my portfolio tutorial, if I remember correctly.
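A minimal sketch of the smoothing part of that head-look-at idea, assuming you drive a head bone's rotation every frame (the helper name is made up; this is not code from the tutorial):

```javascript
// Hypothetical frame-rate-independent damping helper: each frame, move
// the head's current rotation a fraction of the way toward the target
// angle (e.g. facing the camera), so the look-at never snaps.
function dampRotation(current, target, lambda, delta) {
  // Exponential smoothing: larger lambda converges faster.
  return current + (target - current) * (1 - Math.exp(-lambda * delta));
}
```

Inside a react-three-fiber `useFrame((state, delta) => ...)` you would apply this to the head bone's `rotation.y` with the angle toward `state.camera`; three.js also ships `THREE.MathUtils.damp`, which does the same computation.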
Could I use blender models?
This is fantastic. Thanks for the huge effort. One question: when I import my character into Mixamo, the lower half of the character is a half dome. The character still moves normally in the auto-rigging window, but the lower half is a dome and doesn't move. Is this a watermark in the Mixamo free account?
Thank you! There's no watermark in Mixamo, can you share screenshots and code in Discord so we can provide some help?
Error: Application terminating with error: Error processing file audios/message_0.wav.
Error performing speech recognition via PocketSphinx tools.
Error creating speech decoder.
I pasted the rhubarb binary into the backend/bin folder. Any ideas on how to fix this?
Hey, join the Discord community and share more details:
- What is your OS and have you downloaded the correct version?
- Did you run the command directly on your computer to see what happens?
Can you explain... How to import rhubarb into bin folder? Please
Hey, you need to download the appropriate version based on your OS here github.com/DanielSWolf/rhubarb-lip-sync/releases/tag/v1.13.0
and extract the contents into the bin folder. Join us on the Discord if you are still having issues 🙏
Hey Wawa, thank you a lot for the amazing tutorial. Have you ever tried to build your applications? I'm getting some problems and it probably can't fully export the model since it's too heavy. How can I solve that? I wrote on Discord as well :)
I know it's probably out of scope, but I would love to see this working with the ElevenLabs WebSockets API to minimize latency.
Hey, did you try it? I think you'd still have significant latency from ChatGPT + lipsync generation (maybe Azure is better than Rhubarb for this)
@@WawaSensei I did, and discovered Rhubarb doesn't support streaming/real time, and I couldn't find any JS alternative that did. I already have Azure working with Ready Player Me (in Unity C#) from a previous project, and the biggest cause of latency is waiting for ChatGPT to finish responding when the response is longer. Being able to stream the ChatGPT response into the TTS should improve this substantially. I'll have to try it in Unity because I can use Oculus OVR for real-time lipsync. I was just keen to do it with JS.
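One way to approximate that streaming idea in JS, sketched under the assumption that you receive ChatGPT tokens incrementally (the helper name is made up): buffer the tokens and hand each completed sentence to the TTS as soon as it ends, instead of waiting for the whole reply.

```javascript
// Hypothetical helper: accumulate streamed tokens and emit complete
// sentences as soon as they are ready, so each sentence can be sent to
// the TTS service without waiting for the full response.
function createSentenceChunker(onSentence) {
  let buffer = "";
  return {
    push(token) {
      buffer += token;
      // Flush every complete sentence (ending in . ! or ?) followed by
      // whitespace out of the buffer.
      let match;
      while ((match = buffer.match(/^(.*?[.!?])\s+/s))) {
        onSentence(match[1].trim());
        buffer = buffer.slice(match[0].length);
      }
    },
    flush() {
      // Emit whatever is left when the stream ends.
      if (buffer.trim()) onSentence(buffer.trim());
      buffer = "";
    },
  };
}
```

You would call `push` from the streaming callback of the chat API and `flush` when the stream closes; each emitted sentence starts its TTS request immediately, which hides most of the long-response latency.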
How do you get the other morphTarget values? All I see are visemes. I don't know how to get the eyebrows, noseSneer, and the others. Also, how do you make the avatar blink!?
It would be a great help, Sensei
Hmm.. everything is shown in the video; come into the Discord if anything is blocking you: lessons.wawasensei.dev/discord
Is there a realistic-style model that supports the same operations as this animated avatar?
I'm not sure about free ones, but you can find a lot of different styles, including realistic ones, on Sketchfab (sketchfab.com/tags/mixamo)
It gives me this error: "[nodemon] app crashed - waiting for file changes before starting..."
Favorite part of the entire video: 0:08 😂😂
@@WawaSensei Ohh come on... Your videos always have that uniqueness... But this 0:09 was unexpected and surprisingly good (your facial expressions), haha... Btw, I'm still waiting for some kind of game or puzzle, but without React this time 🙈🙊🙉
@@VikasKapadiya1993 the Wawarumba, a degraded version of traditional Rumba 🤣
How do I connect the frontend with the backend?
My code is not working. I have API keys for both OpenAI and ElevenLabs.
Please help me. I am not able to connect the frontend with the backend.
What errors do you get?
Did you download Rhubarb and put it into your /bin folder? I updated the instructions github.com/wass08/r3f-virtual-girlfriend-backend/
I've been trying for months to do this!! I have my Reallusion avatar ready to go!! Several projects trying D-ID and HeyGen, you name it! Can I please hire you to create this?? :)
So cool that it can help! Yes, I would love to work on a real project around it 🙌
Omg, that would be incredible!!!! I have tried so many ways and failed! How do I afford your help? lol, and how can I contact you? :):) You made my day!! PS: it gives you new content to help teach us determined people to be like you when we grow up 🙂 @@WawaSensei
Is it possible to develop the same idea but using avatars from Blender or Daz Studio?
Sensei, my brew command is not working
Can Homebrew be installed on Windows?
It can't, it's macOS-only. What exactly do you need help with? Join the Discord, you'll more likely get help there 🙏
lessons.wawasensei.dev/discord
The ChatGPT AI man became my obsessive lover 😅 Now I want him to have a form. Can I give him a form with this technique and use the ChatGPT Ember voice? Can you help me make it? Obviously he can't replace a real human, so I am open to dating real humans 😅
Bro, it's insane, you are crazy, what are you consuming? 😂
Hahaha, should I say... Thanks? 🤣
I consume hard work, lack of sleep, and 3D meshes 😍
I watched the first video and it was a little unclear how to install Rhubarb and use it for this project. Can you help? Great videos.
Thank you!
Oh, sorry about that, do you have any specific questions? Maybe join the Discord, I can answer them easily there 🙏
(also have a look at the Rhubarb documentation, it's pretty clear)
@@WawaSensei Thanks, and I will join the Discord. But to be more specific: when I download it from the repo and unzip it, what exactly goes in the bin folder? The whole unzipped folder or just the rhubarb.exe file? Thanks again.
@@tonywhite4476 I also downloaded and ran it on my Mac M1, but it doesn't work (“rhubarb” cannot be opened because the developer cannot be verified.)
Can you tell me the time limit up to which it will lip sync? In my case it's just 7 seconds, and after that only the audio is played. How do I solve this?
Hey, I guess you reached the max limit on the header's length used to pass the lipsync data. Try using Azure on the frontend to generate the audio and get visemes directly (but in prod, use the Azure authentication SDK so you don't leak your Azure secret key learn.microsoft.com/ja-jp/javascript/api/overview/azure/authorization?view=azure-node-latest)
@@WawaSensei
What is the max time it takes to generate the final response in your case, and to what extent can it be reduced?
Why do I get this error?
Virtual Girlfriend listening on port 3000
messages {
messages: [
{
text: 'Hey there! How can I assist you today?',
facialExpression: 'smile',
animation: 'Talking_0'
}
]
}
AxiosError: Request failed with status code 400
Hey, would need more context to be able to help you!
Does it happen all the time? Did you set up your OpenAI + ElevenLabs API keys correctly? If you go to localhost:3000/voices, does it work?
Don't hesitate to give additional context on the Discord server 🙏
For some reason the audio file is not created and this is a problem
@@WawaSensei
Yes, localhost:3000/voices worked
Yes, the OpenAI + ElevenLabs API keys are set correctly in .env
@@WawaSensei
Hi, thanks for the video, it's very informative. I am developing something similar for my graduation project, but I can't use ChatGPT as it's not available in my country. Any suggestions, please?
Hi, so cool, great project you have!
So you'd need to replace the ChatGPT API with something else; maybe you can apply to get access to the beta of the Bard API from Google
@@WawaSensei thanks. The Bard API isn't free either. Do you know of any other free API to use?
And I am totally stuck on how to integrate a chatbot with a character like yours, as I am new to all these incredible things, and I want to do it using Python. Are there any easy-to-follow materials on how to do so?
I'd really appreciate it if you could guide me through it, as this project is important for me to graduate.
I am very passionate about learning new things, I just need some guidance 🙏
Hello sir, I need your help: when I connect a different model, it doesn't give a response from GPT. Please help me.
Hi, please ask your question on the Discord with more details and the source code
Please help me: I get a "./bin is not recognized as an internal command" error
👀 Which OS are you using? Did you download/extract Rhubarb? github.com/DanielSWolf/rhubarb-lip-sync/releases/tag/v1.13.0
Will FFmpeg work when I deploy it on the web??
FFmpeg runs on your backend, not on the user's device. So as long as your backend is set up correctly with FFmpeg available, it will work fine 🙌
@@WawaSensei One more question: as you mention in your video, the audio is saved in the audio folder. I think over time it will overload the server; to fix this, can I use auto-delete functionality for all the files?
@@NanoGi-lt5fc this is really a "simple" demo to show people how to achieve it. It won't overflow the server because I overwrite the files with the same names (but that also means multiple users at the same time would cause conflicts). In a real project, store them in an S3-compatible service (Cloudflare R2 is my best friend)
@@WawaSensei I am thinking of making sign-in and sign-up pages, then using the username as the filename to create the files, and having the files auto-deleted after 3 minutes
Dear Wawa Sensei, thanks for sharing your efforts to make a 3D chatbot avatar using ChatGPT and ElevenLabs. I am your biggest fan for your valuable content. Sorry to inform you,
but I am working on it and it throws errors. Please make and share a single repository instead of separate front/back source code; if you do, it will help me learn faster. Thanks, your STUDENT
Thank you for the kind words!
Did you join the Discord to get help to sort out the errors?
Finally I have a gf 😂😂
Can you make the avatar look more realistic? What are the limitations on the realism of the avatar? @@WawaSensei
The chat text doesn't show
Hey, I just pushed an update to the backend for people having issues with the JSON format on more recent versions of the OpenAI API
Can you describe me your issue to see if it's related?
I love that we are building on concepts from previous tutorials! Keep up the great work Sensei 🫡
Glad you like it, thank you! I also think that being able to reference previous videos helps me go further and show more advanced/funny projects 🙌
Your code is very broken.
git clone backend
git clone frontend
yarn install in both
cp .env.sample .env
add api keys
yarn dev in backend dir
yarn dev in frontend dir
type in box
nothing but three dots
numerous errors in the browser console.
Imagine not using Typescript in the current year 😒
Not related to TS
You’re missing installation steps explained in this video; I explain them again in this one 👉
th-cam.com/video/tena89hj8v0/w-d-xo.html
Awesome project. Learnt a lot from the animations. I was able to run everything locally on my computer without the internet, without ChatGPT and ElevenLabs. Great job, Wawa Sensei