Thank you so much for your hard work!❤
Looking forward to the release of the tutorial on model training for EmotiVoice! 😊
Natlamir, if you read this comment, can you 🙏please answer if you are going to do this release? If yes, when?
I would be very grateful for an answer ✊
This was great, thanks for all the work and the demonstration. ^_^
thank you 🙏
Great my General!
if you type SAD or HAPPY in the prompt box (it has to be in capitals), it sort of changes the voice, but not too much. It depends on the voice.
interesting, thanks for the info.
@@Natlamir Are you going to make a new one with the demonstration of the emotions?
@@Mavrik9000 i will keep an eye out when that functionality is implemented. they created a milestone for its implementation: github.com/netease-youdao/EmotiVoice/milestone/3
The emotions are in the data\youdao\text\emotion file. Open that in a text editor and you should see several lines containing Chinese characters. Each line is a different emotion. Copy/paste into the prompt field in the web UI.
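If you would rather list those prompts from a script than open the file by hand, here is a minimal Python sketch. It assumes the emotion file is plain UTF-8 text with one prompt per line, as described above; the path is the default location in an EmotiVoice checkout.

```python
from pathlib import Path

def load_emotions(path):
    """Return the non-empty lines of the emotion prompt file,
    one prompt per list entry."""
    text = Path(path).read_text(encoding="utf-8")
    return [line.strip() for line in text.splitlines() if line.strip()]

# Usage (path assumes a default EmotiVoice checkout -- adjust as needed):
#   for i, emotion in enumerate(load_emotions(r"data\youdao\text\emotion"), 1):
#       print(f"{i}: {emotion}")
```

Printing each prompt with an index makes it easy to copy the one you want into the web UI.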
Ah, thanks for the info!
Thank you.
Please make the next video on this topic
Which voice did you use for the tutorial (high energy male)?
Can you use models from other tts tools on this one?
From where can i download the models?
👍
Do you know if there is an AI that can remove a microphone or something else obstructing a face? Would be very useful to remove boom arms and such.
might be able to use that with segment anything / or inpainting. there is probably an automatic 1111 extension that does that, or it might be built in. i haven't explored automatic 1111 much, but that might be a way to do it
how can i add more languages
I'm awaiting the next video
too bad it only supports english and chinese and they don't have a training procedure yet
do you know if there are any other high-quality, multilingual emotive voice TTS like this one (maybe coqui)?
Yeah, I think the training procedure is on their TODO list, so we might get that soon. They have listed some projects at the bottom of their github page with credits to what they used; those linked projects may provide similar functionality, perhaps with multilingual support. Would need to research.
Sorry for the silly questions, but I'm a newbie.
How do I run it a second time? I somehow can't.
no problem, you would just need to run these 3 commands:
1. first activate the environment:
conda activate emotivoice
2. cd to the folder it is installed in:
cd c:\ai\emotivoice
3. run the web app:
streamlit run demo_page.py
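Those three steps can also be collapsed into one launch command via `conda run`, which executes a command inside a named environment without a separate activation step. A small Python sketch; the environment name `emotivoice` and the install path are assumptions from the steps above, so adjust them to your own setup.

```python
def build_launch_command(env_name, script="demo_page.py"):
    """Build the command line that runs the Streamlit app inside a
    conda environment via `conda run`, skipping manual activation."""
    return ["conda", "run", "-n", env_name, "streamlit", "run", script]

# Usage (install dir and env name are assumptions -- adjust to your setup):
#   import subprocess
#   subprocess.run(build_launch_command("emotivoice"),
#                  cwd=r"c:\ai\emotivoice", check=True)
```

Keeping the command as a list (rather than one shell string) avoids quoting issues on Windows paths with spaces.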
@@Natlamir Thanks
Hello sir, please make a video on how to run RVC GUI V2 on Kaggle or SageMaker. Please
i am not familiar with RVC GUI V2 and haven't used Kaggle or SageMaker yet
@@Natlamir But can't you make a video on this
@@Natlamir I thought you had mastered all these things, but now it seems I just have to wait for a new creator or a new video on TH-cam. Sorry for wasting your time.
@@gamewithlegand6626 no problem. 👍
Gather up children, and I'll tell you a story, from long, long ago...about a man who dared to make a text thumbnail for YouTube and mock the great Greta.
A renegade. A pioneer. A rebel with a cause. They don't make them like that anymore...
🤣🤣🤣 when i first heard the quote from the DINet audio samples, i thought it might be some quote from a harry potter movie or something. but now it is engraved into my brain after using it for the test between wav2lip vs videoretalking vs DINet, so it was the great Greta that said this and not harry potter! 🤣
Hi! Thanks for your work, it goes a long way in helping people dive into the world of neural networks.
I would like to offer you cooperation, the thing is that I am a blogger from Russia, who also talks about neural networks, but I make portable builds, so that people who are far from Python and everything like that can run and use different programs.
How about doing a collaboration? If you're interested, give feedback. Thanks.
thanks. that is great, portable builds without needing to go through the python package installation process sounds great. i just do this for fun and share what i learn while i learn new machine learning concepts along the way. if you have a github that i can contribute to in any way, feel free to share that and we can see about how we can streamline the installation process.
Is there any point in waiting for a portable build from you? I tried to install it following the video guide, but I'm a wild monkey and nothing worked out.
@@yuduz367 possibly, yes, but for now I'm digging through the backlog of things I started and didn't finish, plus I'm making updates to already existing programs
Please
streamlit run demo_page.py
and result
ModuleNotFoundError: No module named 'yacs'
  File "C:\Users\derpy5\miniconda3\envs\EmotiVoice\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\derpy5\EmotiVoice\demo_page.py", line 18, in <module>
    from yacs import config as CONFIG
not sure why it errors with that message. for those kinds of errors, i just install the missing module, for example with: pip install yacs
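That pattern (import fails, install the module, retry) can be sketched in a few lines of Python. One caveat, and an assumption in this sketch: the pip package name usually matches the module name (as with `yacs`), but not always.

```python
import importlib

def try_import(module_name):
    """Try to import a module. Returns (module, None) on success,
    or (None, suggested_pip_command) when the module is missing."""
    try:
        return importlib.import_module(module_name), None
    except ModuleNotFoundError:
        # Assumes the pip package shares the module's name,
        # which is common but not guaranteed.
        return None, f"pip install {module_name}"

module, hint = try_import("yacs")
if module is None:
    print(f"Missing module -- try: {hint}")
```

Run inside the activated conda environment so the suggested `pip install` lands in the same environment Streamlit uses.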
Please
Please
Please