I really appreciate all the enthusiasm here guys! That being said, any future videos are being published at youtube.com/@Jarods_Journey so be sure to head on over there as I might be working on the things you guys are suggesting but are published there instead! I didn't expect the vid to amass 2k+ views on my personal channel.
Top notch performance, the video flow was great, the music was perfectly soothing, and I love videos like these.
I found it a little difficult to understand exactly how the system was set up, but you probably explained that in one of your earlier videos, so I'll check those out.
Can't fathom that you only have 130 subscribers.
Keep up the good work!
Appreciate it! I'll admit, it's definitely not the most beginner-friendly, but I'm working on making some future vids to explain it better. All of those will be on my alt account 🫡
@@jarodmica Thanks, because for a beginner it's hard to follow
Hi Jarod! This is the best script I've seen so far! Congrats! I have a question, because I'm new to Python and AI. You are using the standard library to recognize your speech, right? And can this library only recognize English? Could you use Whisper from OpenAI to recognize other languages as well?
Appreciate it! Unfortunately, I'm not too sure what the standard library is; if that's SpeechRecognition, then yes, but I haven't tried it with other languages. This video is already outdated, and the current version uses the Whisper API as the standard speech-to-text model 🤟
1:50 You could put all of that in a requirements.txt file and have us type one command to install everything in that file
Very true, as I'm browsing more repos, this is the way to go
@@jarodmica cool cool
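For anyone following along, the requirements.txt approach suggested above would look something like this — the package names here are my guesses at this repo's dependencies, so check the actual imports before using it:

```text
# requirements.txt — names are illustrative; match them to the repo's imports
SpeechRecognition
pyaudio
openai
elevenlabs
```

With that file in the repo root, everything installs in one step with `pip install -r requirements.txt`.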
I am just a passerby who doesn't know programming at all, yet the video is still capable of leading me along. But there is one problem: I use a Mac, so there is no winsound. I wonder if it is possible to use Vivy on Mac by replacing winsound with something else.
Glad you're able to find some use in my repo! Since my repo already utilizes PyAudio, you could use it to play your own sound; you would just need to get the path to the sound and play it at that moment. If you ask ChatGPT, it could quickly spin up something for you on how you might replace winsound with it.
That being said, do the exe files also not produce a sound on Mac? I only have Windows, so I wouldn't be able to verify unless I install an emu
@@jarodmica I did some research and found that I can potentially replace winsound with pygame, but as for the coding part I have no clue lol. As for the question about the exe, I don't know. I couldn't even run the code to begin with because macOS doesn't have winsound.
@@nanuwu8895 Gotcha! Well I kinda hate the sound of winsound, so it's going to happen sooner or later that I replace it with something nicer sounding. I'll add it to the project todos
@@jarodmica Lets gooo
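For Mac users in this thread, here's a minimal, hedged sketch of a cross-platform replacement for the winsound call. To keep it dependency-free it falls back to the terminal bell rather than pygame or PyAudio playback, which would sound nicer, as discussed above:

```python
import sys

def beep():
    """Play a short notification sound.

    Uses winsound on Windows; elsewhere it falls back to the terminal bell.
    A nicer-sounding option would be playing a .wav via pyaudio or pygame.
    """
    if sys.platform == "win32":
        import winsound  # Windows-only standard-library module
        winsound.Beep(1000, 200)  # 1000 Hz tone for 200 ms
    else:
        print("\a", end="", flush=True)  # ASCII bell; most terminals beep
```

Swapping each direct winsound call in the repo for `beep()` should then work on both platforms, though I haven't verified this on macOS.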
Can you do a video on how to connect to a locally installed GPT4All instead of having to use a paid ChatGPT account's API key?
In the pipeline 🤟, slowly figuring out ways to incorporate both offline and online versions of the assistant. This one will probably stay online, and I'll fork it or create a new repo for the offline version
@@jarodmica Sweet, because I don't want to pay rent to OpenAI for ChatGPT
@@RobertJene Haha, I wouldn't either, but I can't deny that their product is far superior to these LLaMA models. GPT4All has some bugs, and my results from playing around with it still put it well below OpenAI, but I'm excited to see what comes out over these next few months
@@jarodmica I don't need it to be perfect for what I want to use for voice interaction, and you don't have to make the script as complex as the ones in your last couple of videos. Just show us how to make the connections so that we can update it as GPT4All gets better in the future
@@RobertJene ofc! It's being built with modularity in mind, all that needs to be swapped out should be a couple of names and functions and it'll still work as normal. A working prototype should be out by the weekend
Jarod, just came to this video. Man, talk about creating what should have been included on day one.
I haven’t attempted following your video yet, but I will. I’m going to take the vacuum out of a Roomba, mount a 5’-10” pole on it with a 2’ shoulder-high crossbar, and drape a sheet over it. On top of the pole is a camera mounted on a face-tracking base, with some sort of funny mask under the camera, so that when this Roomba drives after self-charging, it goes around like normal. If it spots a face, it tracks your face and films you, and even maintains tracking as the Roomba bumps around.
Your video is making this perfect! Thank you.
Hello dear Jarod. My question is: what software did you use to put your webcam video in the bottom right corner? I am asking because I also want to publish educational computer science videos on YouTube, and I want the viewers to see my face.
Edit: By the way, I also went to Chico state!
Small world huh! I use OBS for all of my recording needs. It's open-source and generally pretty good so you'd be able to set it up fairly quickly whenever you need to.
@@jarodmica Oh cool. Thanks!
Hey Jarod! Is there a way to change the answer language? I've tried to tweak a thing or two, but I'm not too experienced with programming AI text-to-speech ;-;
There is, but unfortunately, Vivy isn't set up for other languages yet. I know how you'd go about doing it: you need to already have another voice installed on your system and then match the voice index to that one. Everything else would follow, and it would handle a different language.
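To make the voice-index idea above concrete, here's a small hypothetical sketch. The (name, language) pairs stand in for whatever your TTS engine actually reports (for example, pyttsx3's `engine.getProperty("voices")` list); the voice names are illustrative:

```python
def find_voice_index(voices, language):
    """Return the index of the first installed voice matching a language tag."""
    for i, (name, lang) in enumerate(voices):
        if lang == language:
            return i
    return None  # no matching voice installed on this system

# Hypothetical list of installed system voices
installed = [("Microsoft David", "en-US"), ("Microsoft Hortense", "fr-FR")]
print(find_voice_index(installed, "fr-FR"))  # → 1
```

The returned index is what you'd plug in as the voice index Jarod mentions; if it comes back `None`, the language simply isn't installed on your system yet.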
If I run assistant.exe, the console says "No API Key set for Eleven Labs", but I entered my API keys in the keys.txt file
Most likely your extensions are hidden, so you gotta go to the menu bar, check the option that says show extensions, and make sure your file isn't actually keys.txt.txt
I'm having the same problem. And the file is not called keys.txt.txt...
It is because of these lines:
self.voice = self.user.get_voices_by_name(voice_name)[0]
voicename = "Rem" (there is no Rem)
But changing the voicename to Rachel fixed the problem
Hope that helps.
@@samueloslorasmussen6239 Ahh, correct, I totally forgot that would throw errors on your end. I'll have to change the logic, as my default variable isn't being pulled
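A hedged sketch of the fallback logic described above — returning a default voice when the requested name isn't installed, instead of indexing `[0]` on an empty match list. `pick_voice` and the voice names are illustrative stand-ins, not the repo's actual API:

```python
def pick_voice(available_names, requested, default="Rachel"):
    """Return the requested voice name if it exists, else fall back to the default."""
    matches = [name for name in available_names if name == requested]
    if matches:
        return matches[0]
    return default  # avoids the IndexError from get_voices_by_name(...)[0]

voices = ["Rachel", "Domi", "Bella"]
print(pick_voice(voices, "Rem"))    # → Rachel ("Rem" isn't installed)
print(pick_voice(voices, "Bella"))  # → Bella
```

The same shape would apply inside the assistant: look up the configured voice, and only fall back to the default when no match is found.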
Where did you train those voice sample models?
Eleven Labs to train the voices
@@jarodmica thank you
The explanation is good, but yes, actually creating one is much harder
It looks interesting and the steps are clear, but for those of us who have no idea about coding, we easily get lost. I tried to follow the guide, but there are things that I don't understand.
Definitely understandable. I think I created it with the intention that you could set up the code by following exactly what I had done. I am working on a follow-up that will be easier for everyone to follow, but it'll be on the other channel
Great. An example of my problem: at the beginning, I entered the git clone command, but my computer didn't recognize it. I did some research and found out that I needed to install Git. Then, at 2:07, you had pre-opened code, and in my terminal I didn't have code/Chatbot written, only Desktop, and I got stuck there. Could you perform the procedure on a clean PC? That way we would all be in the same situation.
@@nvadito7579 Ah, gotcha! Yeah, all the dependencies etc. will be in the video using a VM. It'll most likely be a multi-part series, though, so that you guys can install everything step by step
Your tutorial is out of date.
First off, it's no longer "from gpt_assistant import ChatGPT". It's now "from package.gpt_assistant import ChatGPT"
Second, I tried to run it from the command line in VS Code, and it complains about needing some additional keys for me to implement, even though I have my keys set. Because of that, I can't even make my bot work.
Also how come I have to define get_user_input()? And why do I have to import a module for get_file_paths?
Sorry if I sound critical, which I am, but this project took me nowhere but through a roller coaster of pain and tears, with no success on my end.
And yes I did read through the documentation on github.
Lol, I appreciate the criticism. The more recent tutorials are on my main channel; that's where I have future updates for all of these (in the title). You need the keys.txt file in the same dir as the assistants. As well, make sure it's not keys.txt.txt, so you need to show your file extensions if you're on Windows.
get_user_input is no longer used; it seems I still have it in my readme, but it is not in the code. Path directories can be defined in the same script, but it's cleaner to have them in a utils module.
Also, YouTube is a shabby place to debug, so if you wanna open up an issue on GitHub, it might be easier
I am using your repo, but my assistant only speaks French or Malay for some reason
🤔 If your default system voice is Malay or French, it'll choose one of those. To my knowledge, it natively outputs English