Hi, Thorsten, the community thrives because of people like you - thanks for your work!
Thank you for your very kind words 🥰
that's really really good quality for open source
Absolutely 👍🏻😎
OMG, you are life saver for me!! Awesome!!
Wow, thanks for your kind feedback 😊.
Hey @ThorstenMueller, great work as always! One thing that caught my eye: you mention that the code is released under the MIT licence, which is right. But I think it's also important to note that inference code and models usually have different licences (which you covered in other videos!). Here the model itself has a different licence: at 3:13 you can see at the top middle, and in the text under it, that the model files are CC-BY-NC-4.0 licenced, which means no commercial use. This means you cannot use generated voices for anything commercial, like voiceovers for YouTube channels or in companies. It would be great to have this information in your videos as well, since using this in any commercial environment, or even on a simple monetized YouTube channel, can get people in trouble if the owner enforces the licence. It would also be great if you could make a video with an overview of fully open and free TTS/cloning models that also allow commercial use. I haven't seen such a list anywhere and I'm sure lots of people would be interested.
Thanks for the clarification 😊. I've seen another comment asking about usage as a voiceover - did you reply to it? I added your hint to the video description and linked you - hope that's okay with you.
I added your video topic suggestion to my list, as I think it is a great idea 👍.
Thanks for your video. F5 TTS is absolutely stunning!
Let's hope they will include other languages (GERMAN) soon. ;)
Additional question: does the model "re-learn" the voice every time I want it to generate a sentence? Is there a way to learn the voice once and then reuse the trained model over and over again?
According to their community, they are working on additional languages, including German 😊
There is a German checkpoint on Hugging Face: "marduk-ra/F5-TTS-German"
That was great!! Thanks for your content! I've got this running now and it is amazing!!
Thanks for your nice feedback 😊.
I tried this out on an RTX 3060 12 GB model and it's fast. Quicker than speaking, maybe 2x faster to process than to listen to. Sounds really good to me.
Thanks for your helpful comment and the performance indication on a 3060 👍🏻.
@ThorstenMueller I should have said it's paired with a Ryzen 2700. It's a pretty cheap rig now; I think you could buy both parts used for about 300 pounds on eBay: 30 pounds for the CPU and 270 for the GPU.
Or wait a year and pick up a 3090 24 GB for the same price; they're currently sitting around 500. I did pick up a 24 GB Tesla (I forget the model number) from China for 300, which is good for really large LLMs.
Thank you for showing me this, I have a project I can purposefully upgrade now.
I enjoyed the intro it made me laugh.
I'm happy you liked it 😊.
I tried it this morning and the cloned voices are the best I have ever used. I wonder if I can use the cloned voices in some way with Home Assistant, though I don't know how... maybe via Piper? I can't find whether this is possible with this software. Is it only TTS? Is it possible to synthesise a dataset with this? Thanks
Hi Thorsten, thank you for another excellent tutorial. I have installed F5 on a Raspberry Pi 5 and it generates very good quality output, but, as is to be expected, it is very slow. I am trying to understand how F5 works: does it take a standard model and modify it in some way using the ref_text & audio before generating the desired output (gen_text)? Is there an intermediate stage that could be executed separately? Thanks, Ernie
Thanks for your nice feedback 😊. As I can't answer your question, you might want to ask it on their GitHub repo to get (useful) responses.
great stuff!
Haha, the F5 joke 😂.
The progress is amazing, right?
Still waiting for German support for F5...
Anyway, in English it is now already easy to create synthetic voice datasets, for Piper for example. Just an idea 😊
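To make the Piper idea above concrete: Piper training expects an LJSpeech-style dataset layout (a `wavs/` folder plus a pipe-delimited `metadata.csv`). Here is a minimal sketch of writing that layout; the `synthesize()` call is a placeholder for whatever F5-TTS invocation you use, and the function name is my own, not part of either project:

```python
import csv
import pathlib

def write_metadata(sentences, out_dir="dataset"):
    """Create a Piper/LJSpeech-style layout: wavs/ plus pipe-delimited metadata.csv."""
    out = pathlib.Path(out_dir)
    (out / "wavs").mkdir(parents=True, exist_ok=True)
    rows = []
    for i, text in enumerate(sentences):
        utt_id = f"utt_{i:04d}"
        # synthesize(text, out / "wavs" / f"{utt_id}.wav")  # plug in your F5-TTS call here
        rows.append((utt_id, text))
    with open(out / "metadata.csv", "w", newline="", encoding="utf-8") as f:
        csv.writer(f, delimiter="|").writerows(rows)
    return rows
```

Each row pairs an utterance id with its transcript; Piper's preprocessing then reads `metadata.csv` and looks up the matching wav files.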
H(ei) 👋,
thanks for your nice comment 😊 and yes, the progress is really impressive.
That whisper at the beginning really sounded like Stephan Molyneux?!!!
Thanks! Is it feasible to do all of that through scripted Python code?
Good point 👍🏻. I took a quick look but did not see an obvious solution for a native Python integration.
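One possible workaround until there is a documented Python API: wrap the project's command-line inference tool in a small Python helper. Note that the CLI name `f5-tts_infer-cli` and the flag names below are assumptions based on the F5-TTS README at the time of writing; verify them against your installed version.

```python
import subprocess

def build_f5_command(ref_audio, ref_text, gen_text, out_dir="out"):
    """Assemble the inference command without executing it (useful for inspection).

    CLI name and flags are assumptions from the F5-TTS README; check your install.
    """
    return [
        "f5-tts_infer-cli",
        "--ref_audio", ref_audio,
        "--ref_text", ref_text,
        "--gen_text", gen_text,
        "--output_dir", out_dir,
    ]

def synthesize(ref_audio, ref_text, gen_text):
    """Run the actual synthesis; requires a local F5-TTS installation."""
    subprocess.run(build_f5_command(ref_audio, ref_text, gen_text), check=True)
```

This is only a sketch; calling the tool via `subprocess` means each run pays the full model-loading cost, which is exactly the overhead discussed elsewhere in this thread.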
May I ask what gpu you are using, or if it is using a gpu?
When you start Gradio the first time and the model is downloading, it shows that PyTorch is loading the models onto the CPU; I'll investigate that.
Correction: I'm running it on a 1080 Ti; it takes 16 seconds to synthesise 4 seconds of speech. I don't know whether it's always re-analysing the reference as well.
Okay, further investigation: I kept the output text the same but uploaded a longer reference, and it then also took longer to synthesise. So the total time comprises reference processing as well as synthesis. It would be interesting to see how much time mere synthesis would take...
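The timing above is easiest to compare across machines as a real-time factor (RTF), i.e. processing time divided by the duration of the generated audio. A quick sketch using the 1080 Ti numbers from this thread:

```python
def real_time_factor(processing_seconds, audio_seconds):
    """RTF > 1 means slower than real time; RTF < 1 means faster than real time."""
    return processing_seconds / audio_seconds

# Numbers from the comment above: 16 s of processing for 4 s of speech on a 1080 Ti.
print(real_time_factor(16, 4))  # -> 4.0 (4x slower than real time)
```

By the same measure, the "2x faster to process than to listen to" report earlier in the thread corresponds to an RTF of about 0.5.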
If you use F5 on Hugging Face, it will use a random GPU that is available at that moment. If you use it locally without CUDA (an NVIDIA GPU), it will use the CPU.
I tried it and it works, but it did not sound like me. Nothing close to what you did. Not a fan at this time; it really should have done better. Thanks for sharing, you got my thumbs up...
Thanks for your thumbs up, and sorry to hear it didn't work for you as expected.
@ThorstenMueller Not your fault, you laid it out perfectly. It's probably the quality of my samples.
Thanks again
You made a reference to your computer's speed. Care to elaborate on its GPU, CPU, and RAM?
You're absolutely right. I forgot to add it to the description. Thanks to your hint, my computer specs are now in the description 😊.
Is the online Hugging Face version better than running it locally?
The TTS model is the same; it's just a question of your locally available compute power. In my case, Hugging Face has been more performant.
Great
Thank you 😊, i'm impressed by f5 too.
What GPU do you have on your computer?
An NVIDIA 1050 Ti in this case.
can this be deployed and hosted on a server?
Yes, absolutely 😊.
Hello Thorsten, thanks for your great channel. I came across these videos, which show how one can train F5 with different languages: th-cam.com/video/UO4usaOojys/w-d-xo.html th-cam.com/video/RQXHKO5F9hg/w-d-xo.html As you are experienced with training speech models, I am wondering how many hours of material would be required to train a German-language model in good quality, and what should be considered in regard to the training data. In the referenced video the creator simply takes audiobooks. Can one expect to get a good-quality model that way?
Hello Christoph, thanks for your nice feedback on my channel 😊.
F5-TTS can't currently be trained in German, but they are working on it: github.com/SWivid/F5-TTS/issues/87#issuecomment-2418043522
For my German "Thorsten-Voice" datasets I recorded over 30k audio files, but that shouldn't be required anymore.
Can we use it for making TH-cam videos and monetize them? I mean, is it legal?
I'm no expert, but from what I understand, no: although the F5 code itself is open source and available to use commercially, the license for the dataset on which the model was trained is restricted and does not allow commercial use. I would love someone to tell me I'm wrong about this, as I was getting really excited about F5 until I found this out...
I cannot give any legal advice. On huggingface.co/SWivid/F5-TTS it is written:
"2024/10/14. We change the License of this ckpt repo to CC-BY-NC-4.0 following the used training set Emilia, which is an in-the-wild dataset. Sorry for any inconvenience this may cause. Our codebase remains under the MIT license."
So I guess @PatrickAngwin is right.