Stop storing your secrets and API keys in your code!! Try Keeper, a password manager you can use in the terminal: (built for devs/admins): www.keeper.io/networkchuck
I did it…..after days of frustration, blood, sweat and coffee..I finally figured out a way to clone a voice to use with my fully local, AI voice assistant!!!! This isn’t using cloud-based products like ElevenLabs…no…we are using a fully-local, open-source project called Piper TTS. This works wonderfully with the Assist voice pipeline in Home Assistant.
📝GUIDE and WALKTHROUGH: blog.networkchuck.com/posts/how-to-clone-a-voice/
🔥🔥Join the NetworkChuck Academy!: ntck.co/NCAcademy
**Sponsored by Keeper
I like your content.
Why not just slow down your videos, have your AI hear it, then slowly train the AI to speed it up?
That way it can hear you enunciate.
I often have to slow down your videos to see and take notes on what you're doing, so why not have it do the same? (Not even fully done with the video yet and I'm very happy with this; my dad has wanted a Morgan Freeman AI assistant.)
That laptop 3080 is more like a 3070 or a 3070 Ti at best... but still better than my 3050 6GB running my Ollama, lol.
@@Yuriel1981 You're being very generous
It has probably been years since I watched a 37-minute video without skipping once, let alone a tech video. I feel like my attention span has been permanently increased.
Thanks for bringing that to my attention, I hadn't realized it was that long. Crazy.
18:14 "You have no idea how amazing it is to get to this point with no errors" -- really hit home
With all the dependency issues and fiddling around, someone should totally make this toolkit into a docker image!
The problem is you can’t access GPU from Docker…. Well, you can but you’ll end up doing all the same fiddling but with extra headache of the Docker layer
@@coffeegonewrong Don't remind me of this, it gives me PTSD. When I tried to do a similar project I almost pulled my hair out making it pick up my GPU.
@@josevaldoandredasilvajunio4691 I spent the better part of a day trying to make John the Ripper run from a snap so it could use OpenCL. Then I learned the only way was to mount the snap and run it directly from there.
Someone should make an editable shell script that asks the user for inputs, then just installs and runs everything. Not that hard. Crazy that no one has done it.
@coffeegonewrong yeah, but if you are not that worried about performance, and you are patient, you could just let it use the CPU.
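On the GPU-in-Docker point: recent Docker can pass an NVIDIA card through if the NVIDIA Container Toolkit is installed on the host. A quick sanity check looks roughly like this (the image tag is just an example):
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
A hypothetical training image would need the same --gpus all flag, so you still do the driver/CUDA fiddling, just once, on the host.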
Quick note -- instead of removing silence, you would have been better served splitting at silence. The output would have been more intelligible for transcription, would not have required as many mid-word cuts which cause issues, etc etc
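For anyone trying this, a rough sketch with pydub (the filename is hypothetical and the thresholds are guesses you'd tune per recording):
import os
from pydub import AudioSegment
from pydub.silence import split_on_silence

os.makedirs("clips", exist_ok=True)
audio = AudioSegment.from_wav("terry_raw.wav")   # one long source recording
chunks = split_on_silence(
    audio,
    min_silence_len=500,                 # split on pauses of 500 ms or longer
    silence_thresh=audio.dBFS - 16,      # "silence" = 16 dB below the average loudness
    keep_silence=200,                    # keep some padding so words aren't clipped
)
for i, chunk in enumerate(chunks):
    chunk.export(f"clips/clip_{i:04d}.wav", format="wav")
This keeps each clip on natural pause boundaries, which is friendlier to Whisper transcription than hard cuts.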
Bravo, this is the peak educational youtube content. Learning with a twisted bit of fun
Be careful with showing yt-dlp...
Linus had a strike for similar reasons, I think this video might receive the same "attention" from YouTube unfortunately
Honestly the chuck voice had me laughing so hard after 30 min of development 😂
The end results of all the methods were so cool. Worth watching the entire video.
Him: a CPU will work
Me: looking at my HP 540 g3
🥲
Yeah, naw dawg......I feel for you.
It will work - sooner or later
@@WWSchoof later, much much later
You can get free cloud computing that would do better btw if you've got a decent internet connection
I had flashbacks for the first 10 seconds, from being a kid yelling at those recorded talk-back hamster toys with that same audio playing back XP
That was intense. I can't imagine how much time, work, coffee and nerves you put into this project, but it really was worth it. Terry sounds great! I hope the next project is less nerve-wracking. xD
Just bought a new house and am currently working on setting up automation and localizing everything offline. The challenge I'm hitting right now is getting mics in every space that go back to the assistant instead of having Pis everywhere. Also trying to limit the response to the room from which the request came.
Thanks for all the content! You have definitely made the process way more understandable and fun.
I'm working towards that direction. Any tips with the progress you've made so far?
Hey thanks for all your videos on home automation. I've started my own home automation journey watching your channel and learning what's possible. Now looking forward to commanding my home like the USS Enterprise.... "Computer; make coffee" :D
Ah, finally the day has come where I'll be able to have Chuck's voice play whenever I come through the front door, greeting me with a "Welcome home, daddy ;)"
I'm so excited to try this out; with each video I've tried to keep up and implement the Home Assistant and local AI setup. The voice is a wild addition.
Awesome! Now I can put your voice to the life-size doll I have of you....
I made a Pi LED agent a couple of days ago. I can turn an LED on and off using Whisper (small model) to transcribe my voice for llama3.2:3b; llama then generates a response that triggers a condition based on the string it provides and toggles the LED. The model can also reply out loud through a Piper voice (small model), driven by a second prompt separate from the one that controls the LED. I use pre-prompts to guide it: explaining to the LLM what it is, the commands it should generate, and giving it a few examples of how it's done, as this improves its responses.
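The core loop is roughly this (a sketch only: the GPIO pin, filenames and prompts are made up, and it assumes the openai-whisper and ollama Python packages plus gpiozero on the Pi):
import whisper                      # openai-whisper, "small" model for speech-to-text
import ollama                       # client for a local Ollama server
from gpiozero import LED            # Raspberry Pi GPIO helper

led = LED(17)                       # hypothetical GPIO pin for the LED
stt = whisper.load_model("small")
SYSTEM = ("You control an LED. Reply with exactly LED_ON or LED_OFF, "
          "or a short spoken answer if no LED action is needed.")

def handle(wav_path):
    text = stt.transcribe(wav_path)["text"]              # voice -> text
    reply = ollama.chat(model="llama3.2:3b", messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": text},
    ])["message"]["content"].strip()
    if "LED_ON" in reply:
        led.on()
    elif "LED_OFF" in reply:
        led.off()
    return reply                                          # hand this string to Piper for TTS

print(handle("command.wav"))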
GUYS!!!!! I just had a genius idea: what if, for everyone, no matter what it had you say, you just yell, "TIMMY!!!!"
You were my hero just with the other video, and now at just 1:23 you are more hero than hero... LOL
OMG the Terry voice is AMAZING!!!!!
You should make your Terry clone voice available for purchase. LOL, my wife wants her voice assistant to sound like him. My daughter wants Adam Sandler. My son's preference is "Amy" and he prefers "OK NABU" as the wake word. I called mine Alfred, but it is using your voice. I might try to clone Morgan Freeman's voice, but this takes so much effort.

My wife says your cloned voice is a little too fast. I might see if it can be slowed down with a setting rather than having to retrain. I love the idea of taking my home automation local; this has consumed all of my free time of late, but with your guidance I have made leaps and strides. Mine is a bit too slow, so I will see what I can do to speed that up. I have 48 CPU threads and 128GB of RAM, but my GPU is a single RTX 3060 12GB model. As a proof of concept this has exceeded my expectations, but to take it to the next level I will need more. Upgrading the power supply solved my crashing issue. Keep up the awesome work.

I tried making a crude Adam Sandler voice with a low sample rate, but it just didn't work. I am surprised there is not some repository of these files. Maybe there is. Probably not for free, but for the right price, and to save a week's worth of my free time, I would probably pay. Getting your voice to work was awesome but very tedious, and the sample voice files you provided made it super easy. Getting those files myself is proving tricky.
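On slowing the cloned voice down without retraining: if I remember right, Piper reads a length_scale value from the voice's .onnx.json (under "inference"), and values above 1.0 stretch the speech out. The numbers below are just an example fragment of that file:
  "inference": {
    "noise_scale": 0.667,
    "length_scale": 1.2,
    "noise_w": 0.8
  }
Worth checking your own config before editing, since I haven't verified this against the latest Piper.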
Hey Chuck, awesome video! I’m working on image detection, and it gave me an idea for your next project. How about a video on training custom image detection models? Like recognizing specific objects (e.g., PET bottles, toys) to expand what a home assistant can do. It could add some cool features to your Raspberry Pi assistant. Would love to see your take on it!
Thanks, Chuck! I was really looking forward to this video. I absolutely love your content!
Had a lot of issues getting it running on macOS, but was able to successfully get it up and running on my Ubuntu machine with Python 3.10.12. After a few minutes of training, I tested it out and was surprised by the results. Pretty cool! If I have hours of quality recordings, roughly how much would I need to get a quality voice? Did you ever figure out why yours was a bit quirky?
Alright! I can get my Samuel L Jackson voice back! I was so annoyed when Amazon disabled my Ask Sam, I paid $2 for that! Time to ditch my echo devices.. lol!
I also wanted to train my local AI voice assistant with my voice and started using Piper Recording Studio in German.
It wanted me to say a lot of sentences that sound like they're from a software call center and could be used for software scam calls, e.g. "The activation key you've entered is invalid", and in combination with other sentences like "Then call the police and see how far you get there" it seems pretty strange to me.
Then I saw a disclaimer on the page that says "By clicking Submit, you agree to dedicate your recorded audio to the public domain (CC0)".
Is there any indication that audio recorded with the software gets distributed on the web and used for malicious phone calls?
29:21 The Vsauce music caught me off guard!
Please, can you talk about cash for servers, how it is done, what background I should learn this technique from, and whether you have courses about it?
I ended up trying a bunch of API LLMs, and OpenAI's conversation agent and TTS are awesome and fast if you don't want to use your own hardware.
Wow, the Terry Crews voice was amazing! A proper voice for your beefy Terry AI server! Congratulations.
This is awesome. Thank you for all your work. And special Thanks for sparing us the crying. :-)
The instructions on your blog are incomplete... stuff is missing and lots of libraries fail with torch and such. Can you please try a fresh Ubuntu WSL install, follow your own guide, and correct the errors that come up?
Ditto. Training is failing for me currently (and I'm trying to find answers to that), but to get there I also had to do the following to get some missing dependencies when running in Ubuntu on WSL:
sudo apt install gcc build-essential python3-dev
I remember when you had 100k subs years ago; so happy to see you with big success!
The topic is so crazy and fascinating, I think I'll do a home project like this. The only thing that bothers me is that I don't want to run my desktop PC 24/7.
Not sure if anyone has thought of this... But I just downloaded an Audible with a celebrity reading and now have 3 hours of perfect training material 😂
They're called audiobooks, audible is just the app that ruined the commercial audiobook landscape.
Do you need coffee? I like coffee 😊
Thanks Chuck, that vid had me wanting more, what a project! I hope some other shenanigans come about from this 😊
That's sick!!! Amazing content sir!
Lookin' forward to building a local digital assistant with multiple personality disorder, where Dr. Jekyll sounds like Morgan Freeman and Mr. Hyde sounds like Samuel L. Jackson...
If you encounter the error "Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory", then run the following commands:
cd /usr/lib/wsl/lib/
sudo rm -r libcuda.so.1
sudo rm -r libcuda.so
sudo ln -s libcuda.so.1.1 libcuda.so.1
sudo ln -s libcuda.so.1.1 libcuda.so
sudo ldconfig
Other than this one issue, this has got to be one of the coolest things I have done in a while, thanks for the great tutorial!
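After relinking, a quick way to confirm CUDA is visible again (assuming you installed a CUDA build of PyTorch in your venv):
python3 -c "import torch; print(torch.cuda.is_available())"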
Now we're talking. Been waiting for this one!
I now know what I'm doing when I get home, Thanks Chuck!
When Chuck asked, "Don't you want this in your home?" I was like, F YEAH I DO!
Absolute Beast Mode..!!! You Rock ...!! Cheers :)
Oh man, love it! Freakin' cool! Totally worth the effort! 😁
I nearly wet myself when you played your voice after the training! Technology, can't live without it 😂
Bro has Brad Boimler vibes! I'm here for it.
He just needs the Boimler scream!
@@ethanberg1 That could be the beard that Boimler has been growing all season.
Nice, I never recognized Mike as the voice of Mandark on Dexter's Laboratory before now. That's awesome.
I will try it and hope it works for me; it's the project I have been waiting for! Thanks for sharing.
Lmfao, you're 1000% becoming my voice assistant when I have the time!
Thanks so much for these, my favorite online classes. You are the best teacher. Please teach me how to make a Raspberry Pi that runs a local ChatGPT-style model for generating text for 3D game characters ❓
This is amazing, great that you figured everything out. And of course I want this in my Home Assistant 😮
Next video must be putting your voice in a Chuck the assassin doll with creepy phrases pleaaasse 😂
Lmao, the mike monologues were the best thing I've ever heard. I really need to buy a new Pi so I can set up home assistant... I have an old RPi2, but it doesn't have the specs needed to run home assistant :(
I can't sleep without watching your videos 🎉🎉
Re: training issues: garbage in, garbage out. AI transcriptions are not suitable for AI training.
Awesome! Love it! Are you going to release the Terry voice as well?
Fun fact: Demirkapı means iron door in Turkish (bill probably are)
Hey there Chuck, great video! One more request or suggestion, whatever seems right: make it talk with emotion. Right now the LLM gives the responses and it just reads them as-is. Maybe it should emphasize certain words, add some filler words, and actually talk like a human. For example, *talks intensely* shouldn't be read aloud; it should be adapted as emotion instead.
Thank you, this is one of the gem channels I have found which actually teaches cool stuff.
@NetworkChuck what kind of keyboard do you use? I'm dying to know because it just sounds SO good
That was amazing..I am currently building mine❤❤
Was it _really_ free, though? haha. Awesome job! Thanks for this!
I can recommend using stable whisper instead of whisper to get better timestamps.
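A minimal sketch, assuming the stable-ts package (which provides the stable_whisper module); the filename is hypothetical:
import stable_whisper                       # pip install stable-ts

model = stable_whisper.load_model("small")
result = model.transcribe("clip_0001.wav")  # refined word/segment timestamps
result.to_srt_vtt("clip_0001.srt")          # timestamped transcript you can turn into training metadata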
Thanks! This is going to make my life so much easier. Going to use a pi zero 2 W and a keyestudio 2 mic hat.
Chuck says "so many little things to remember"; all I hear is "take notes and write a script, as you will never remember them all".
NetworkChuck's voice with a Chinese accent sounds so funny 😆😆
I remember thinking ".wav" files were huge!
When this guy opens up his camera gear, bugs and errors completely stop existing... I wish reality was like that.
Thanks Chuck, I think I'm too dumb to do that, but it looks so cool and out of the cloud 👍
Hey Chuck, fantastic and clear video, thank you! However, you send mixed messages: when you mention Keeper you say it's good that it's cloud-based, but in your video it seems like you prefer local installations (1:11 mark).
What is good for an individual (local hardware) may not be good for a company. As an individual, I’m willing to accept the cost and pain of maintaining a local infrastructure because it’s fun. For a business, the highest value becomes reliability.
@NetworkChuck Hello my name is suck 😁🤣
"Hello, my name is suck, my voice has just been trained" 🤣🤣🤣🤣🤣🤣
Time to find me some Majel Roddenberry clips I guess (for respectful personal, non-commercial, non-distribution use of course)
I copied your last video and was like, damn, I wish I could make my own, and then you did it literally a bit later, thank you :)
The AI thing is running in a virtual machine in Proxmox with a GTX 970, so it's a little shit, but it works XD
I've literally been following this series you have been updating, from the start to now. I have Ollama with AlwaysReddy set up and running on my Ubuntu 24.04 OS, and I will be trying to implement this on a new Raspberry Pi 5 (quick question: would it be beneficial to add the AI HAT you can get for the Pi?). Really interested in this project, and thank you so much for the inspiration to follow along the journey.
Much respect,
Great Channel.
@NetworkChuck, the Terry Crews voice clone does sound great, but I feel like you must have left something out. You attempted to use an automated process to generate an onnx file from your recorded voice, but the results were poor. You went back to Piper Recording Studio to get a decent voice clone. You said Mike spent some quality time with Piper Recording Studio for good results. I don't imagine Terry used Piper Recording Studio. So what did you do differently to achieve such a good result from prerecorded audio?
Hey Chuck, at 31:40 you made some folders in that share directory; you can do that faster by hitting Ctrl+Shift+N. This will create a new folder, which is faster than using right-click > New > Folder.
Sir big fan ❤
ONNX is not a universal format for TTS; there are more .pth files for TTS readily available. Also, ONNX (Open Neural Network Exchange) is an open format built to represent machine learning models in general: any model, be it Stable Diffusion, GPTs... I'm no guru, I learned it all today.
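You can also poke at a voice file with onnxruntime to see what it expects; for example (the filename is hypothetical):
import onnxruntime as ort

sess = ort.InferenceSession("en_US-terry-medium.onnx")
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)    # Piper/VITS voices take phoneme IDs, lengths and scales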
We all know that Morgan Freeman is what Chuck is going to change it to after the video ends.
This was so cool!
So I asked my girl whose voice should I try this with. Peter Steele is my winter project.
Good job chuck
Hi Chuck! I want to integrate this into all the bedrooms in my soon-to-be home. I already plan to build Sonos speakers into the ceiling (Sonos in-ceiling speakers). Is it possible to use these speakers instead of the small speaker you are currently using? Thanks mate! Really enjoying your content! 🙌 (About to build my dream home and want to make it smart/AI.)
Hi Ya & best wishes. Thanks for work. Be Happy. Sevastopol/Crimea.)
I may end up spending a month clipping "The A-Team" to get mine to talk like Mr. T, fool!
Hey Chuck, you still didn't fix the longer-conversation issue... any way to fix it? Or summarise the context?
Holy clone!
Super cool video. Tried the steps and I get an error trying to install numpy 1.24.4:
"module 'pkgutil' has no attribute 'ImpImporter'."
Did you run into this as well? Can't find a solution just yet.
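In case it helps: that pkgutil.ImpImporter error usually means you're on Python 3.12, where ImpImporter was removed, so an old numpy tries to build from source and fails. The easiest workaround I know of is a venv on Python 3.10 or 3.11, roughly:
sudo apt install python3.10 python3.10-venv   # may need the deadsnakes PPA on newer Ubuntu
python3.10 -m venv .venv
source .venv/bin/activate
pip install numpy==1.24.4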
It doesn't work for 40xx-series GPUs, so everyone on a 40xx card should do the GitHub issue fix.
"Hi, my name is Chuck. My voice is my password. Verify me."
YOU LET OUT MAGIC SMOKE!
9 is better suited for you... you should definitely use 9 lol
Nice to see you're still uploading, I used to watch you after school every day ages ago through my window (we were neighbors)
WHAT
As creepy as it is, I kinda want that Mike voice, lol 😆😅...
Time for Chuck to get on board with uv for Python. It will change your life.
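For anyone curious, the uv equivalent of the usual venv/pip dance is roughly this (assuming uv is installed and the repo ships a requirements.txt):
uv venv --python 3.10
source .venv/bin/activate
uv pip install -r requirements.txt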
Bro, there are much easier ways to clone a voice locally, but it's still fun to watch this video 👌👌
Which python version are you using @NetworkChuck ?
Awesome 🥰🥰🥰🥰
great video
Hi Chuck, I really enjoyed your tutorial. Sorry if I am doing something wrong, but I have tried several times to add a lengthy comment which keeps disappearing; do you know why this might be? Ernie
29:20 oh no, not the Vsauce