Create a GPT4ALL Voice Assistant in 10 minutes
- Published Aug 2, 2023
- Use Python to code a local GPT voice assistant. In this video we learn how to run OpenAI Whisper without an internet connection, detect voice in the background with Python, and implement the GPT4ALL Python library to access any GPT4ALL language model through your Python programs.
DISCORD: / discord
GITHUB: github.com/Ai-Austin/GPT4ALL-...
GPT4ALL DOWNLOAD: gpt4all.io/index.html
ANACONDA: www.anaconda.com/download
VoiGPT: voigpt.com
Official Ai Austin Discord: / discord
Help me make more videos:
www.buymeacoffee.com/aiaustin
#python #ai #programming - Entertainment
Should I make a tutorial on how I make my videos?
Yes
Yes, been meaning to ask this for a while. Your talking head is extremely well done
Yes
Yes please
Sure
Damn, this is really cool. Could easily enable API on a networked computer (dedicated for AI) and just take this same concept and build either a WebApp or Android/iOS app and you have a personal assistant like Alexa, Google, or Siri, but better... and way more customizable. AI is so freaking awesome and we are still in the early days, amazing. Thanks!
Perfect, I've been in dire need of a good, simple voice-listening Python script. I can use this with an OpenAI assistant API to generate data a lot more easily than having to type everything out on a keyboard. I'm working on a dataset for a 1B LLM model to run my own smart-home Alexa alternative, and this is perfect.
Are you still able to load the Whisper model file directly in the terminal? I keep getting "path not found"; I'm trying to find the right cache location, or to figure out whether it works at all.
This is great ... I'm going to use it as a base template, I want to add chat context management and basic tool calls.
That's awesome to hear. My goal is exactly that you guys take these and build upon them to improve your own assistant. There are so many useful features that can be added onto these.
How would you compare this with your bard voice assistant?
Can this give answer based on previous prompts?
Hi, everything went well except I couldn't retrieve the model. Please help.
Can you build a requirements.txt and .md file for your instructionals?
Bro, in your previous video on the Bard voice assistant I'm facing an error: 'NoneType' object is not subscriptable. Help me please 😭
Install portaudio before installing PyAudio, not after as in the video; otherwise you get an error. Great video, keep up the good work!
Sorry, but "get an error"? Which system? Doing what?
@@TiborBerki On macOS, as stated in the video
Hi there, for me the problem is that I have a gpt4all folder but no GPT4ALL folder, and my Falcon file is not a .bin and I cannot find it anywhere. Please HELP!
I tried to follow along, but it won't let me load the model you have on line 12 at 3:01. When it finally saw the model in the right spot, it tried to download it and gave an error.
Wow, this is great! Would love to see the prompts closely. Please do a tutorial on it, thanks!
we can use winget as well right?
I followed your guide and the code seems fine, but when prompted to say my wake word it does nothing... My microphone is working, but is there a setting in VS Code I need to enable for audio input/output?
If you are on Windows I would recommend joining the Discord. There is an issue with new versions of Windows and the listen-in-background function. Multiple users have worked through it and provided their code solutions in the programming-chat channel.
Could it be possible to use an RVC-based TTS service for custom voices with this? I cloned Jarvis's voice from the movies and I wanna use it as my personal assistant's voice.
whenever you figure this out please tell me TwT
is this tutorial available in PDF format? text version instead of video--i like to copy/paste :)
This is what I wanted. Definitely doing this soon, as soon as my broken shoulder heals, because it's pretty crap to use right now.
kinda want to use it with tortoise but i guess i can use any tts system and just give it an RVC pass
How can I change the voice input language to another one?
Awesome 🎉🎉🎉
Hi Austin, I took your code and went another route: instead of having the voice assistant, I am trying to create a local instance of GPT4All on a local server and then have my home devices (ESP32 boards with sensors) communicate with this AI instance, so that it can make decisions based on what they report (sample project: an AI-driven autonomous small greenhouse).
I have the core of the project working; I am stuck, however, on the keeping-context part. Could you make a video about this? I can run GPT4All in the CLI and start a conversation with it via Node.js, but the conversation won't keep context. The final idea is to have an Express.js API so each device can talk to GPT4All.
Hey man, I am working on a few AI-based projects and looking for a partner who is as passionate as I am. I like what you are trying to do; let me know if you'd like to link up and have a chat.
Great video! Just one question: can the assistant only run when we open Python? Unlike Siri or Alexa, which can run even when our PC or phone is off?
You have to leave the script on in the background.
What do I do if I don't have the tiktoken_ext folder?
Can you do a video on animating an AI voice assistant, giving them a body and face?
Awesome video :D
"I don't know why they thought it was a good idea to program this shit into this library, but we will modify the library to hack around this inconvenience" @ 4:35
Caught me off guard tbh but this was the best part, lol.
Question: can the voice recognition distinguish the boss's voice from other people's voices if there are many people speaking in the background?
It cannot. If you have a Mac you can enable voice isolation for the running terminal microphone. If you are speaking and someone is speaking further from the mic, it will completely isolate the intended voice just like on an iPhone.
@@Ai_Austin Thanks for your kind and on-point reply.
Please help, I'm facing a problem while trying to run the program: "Python was not found; run without arguments to install from the Microsoft Store." But I already installed Python. Help me please.
@@ndellejoel2705 You need to make sure that you have the full path of python.exe in your system environment variables for cmd/powershell to be able to pick it up. For example, mine is in 'C:\Users\willi\AppData\Local\Programs\Python\Python39'. Yours may not be in that location, or even the same version.
To add the location to your path, in the Start menu search for "Edit the system environment variables"; it should appear as the first hit, with "Control Panel" in small letters underneath the title. Click that. Once it opens, in the bottom left of the window is a button for "Environment Variables...", click it.
A new window will appear with two panes; the top one is for the current user. You'll see in the list an entry for "Path"; select it and hit the Edit button below the top pane.
Yet another window will appear with a list of all of the locations that are searched when running programs in cmd/powershell. In the top left is a button for "New"; click that and you'll see the cursor appear in the first blank row. This is where you enter the path to the Python folder on your system. Once done, click OK to close this screen, again on the previous screen, and finally once more to close the System Properties window.
You should now be able to try running python again, and if everything went well it should work.
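For readers who prefer to sanity-check this from Python rather than clicking through the GUI, the same idea can be sketched in a few lines. Note that this only changes PATH for the current process (the Control Panel steps above are the permanent fix), and the folder name below is a hypothetical example, not your actual install location:

```python
import os

def add_to_path(folder: str) -> str:
    """Append a folder to this process's PATH if it isn't already there.

    This only affects the running process; the Control Panel steps above
    are what change PATH permanently for your user account.
    """
    current = os.environ.get("PATH", "")
    parts = current.split(os.pathsep) if current else []
    if folder not in parts:
        os.environ["PATH"] = (current + os.pathsep + folder) if current else folder
    return os.environ["PATH"]

# Hypothetical example -- replace with your real folder, e.g.
# C:\Users\<you>\AppData\Local\Programs\Python\Python39
add_to_path("example-python-folder")
```

After fixing PATH for real, shutil.which("python") from a fresh shell is a quick way to confirm the interpreter can actually be found.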
Hello,
I'm trying to get GPT4All working on my computer. However, there has been an update from ggml files to gguf. The version of GPT4All that I have can only detect GGUF files, but the problem is that it cannot read them. Do you have any solution so that I can successfully load models such as Falcon?
Yes. You are attempting to use a newer version of GPT4ALL's Python module than the one shown in the video. Whenever following a tutorial while using a newer version of a library than what was shown, I always recommend going to the developer documentation for the newer version of the library I want to implement. In this case I would use the GPT4ALL Python Bindings Official Documentation to understand any changes to their library.
@@Ai_Austin I found my solution. My download path included a non-ASCII character; that's why the model could not be read. Now I'm trying to get the code to work.
could you figure it out?
Wonderful project, thanks a lot. It works so nicely. Now, can I run it on a Raspberry Pi 4 B 8GB?
Hi; I'm on a Mac m1 when I run the script, I get this error:
Traceback (most recent call last):
File "/Users/dennis/Desktop/AI Assistant/GPT4all.py", line 16, in
tiny_model = whisper.load_model(tiny_model_path)
^^^^^^^^^^^^^^^^^^
AttributeError: module 'whisper' has no attribute 'load_model'
Did I miss a step?
I found the problem: I had the incorrect whisper package. You need: pip install -U openai-whisper
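For anyone hitting the same AttributeError: the "whisper" package on PyPI is not the same as "openai-whisper", even though both import as whisper. A generic way to check whether the module you actually have exposes what you need — demonstrated here on a stdlib module, since which packages you have installed will vary:

```python
import importlib

def module_has(name: str, attr: str) -> bool:
    """Return True if the installed module `name` exposes `attr`.

    After 'pip install -U openai-whisper', module_has("whisper", "load_model")
    should return True; with the wrong 'whisper' package it returns False.
    """
    try:
        mod = importlib.import_module(name)
    except ImportError:
        return False
    return hasattr(mod, attr)

# Stdlib demonstration of the check itself:
print(module_has("json", "loads"))  # True
```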
Can someone tell me which file path, and where I can find it?
I am using Windows 10. There is an error message:
"Failed to load llamamodel-mainline-cuda-avxonly.dll: LoadLibraryExW failed with error 0x7e
Failed to load llamamodel-mainline-cuda.dll: LoadLibraryExW failed with error 0x7e"
Has anyone else seen this error message?
Thanks for the video, but I have a question: when I install GPT4All, the downloaded model files don't end in .bin, only .gguf, and because of that I'm having issues when calling the model. I even renamed it to include the .bin ending, but the Falcon model would not load into the GPT4All software. Mistral had no issues with the name change, but both give me "ValueError: Unable to instantiate". Solutions? BTW, I'm using Linux.
GGUF is the new file format for GPT4ALL models. Simply provide the correct path to the GGUF file if using a newer version of GPT4ALL than I showed.
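To make the .bin/.gguf transition less painful, you can resolve whichever file actually exists before handing it to the library. The helper below is pure path logic; the commented GPT4All call at the end is a hypothetical sketch — check the current Python bindings documentation for the exact constructor arguments:

```python
from pathlib import Path

def resolve_model_file(folder: str, name: str) -> Path:
    """Find a GPT4All model file, accepting either the newer .gguf
    or the older .bin naming. Raises FileNotFoundError if neither exists."""
    base = Path(folder)
    for suffix in (".gguf", ".bin"):
        candidate = base / (name + suffix)
        if candidate.exists():
            return candidate
    raise FileNotFoundError(f"No {name}.gguf or {name}.bin found in {folder}")

# Hypothetical usage with the gpt4all Python bindings (argument names may
# differ between versions -- consult their official documentation):
#
#   from gpt4all import GPT4All
#   model_file = resolve_model_file("/path/to/models", "gpt4all-falcon-q4_0")
#   model = GPT4All(model_name=model_file.name,
#                   model_path=str(model_file.parent))
```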
How do I remove the GPT libraries? During installation I ran out of space on the partition, and it used up some extra space immediately. I don't know what to do now. Will you help?
pip uninstall {library_name}
why do you use whisper for wake word detection instead of a dedicated model like porcupine?
Just no need for it. The tiny model works fast for the wake word, and you would still need it for transcribing the prompt. It's for simplicity's sake: not using more libraries and code than necessary to achieve a practically unnoticeable performance difference.
@@Ai_Austin I agree, the latency is almost non-existent
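For anyone curious what "wake word via Whisper" amounts to in code: once the tiny model has transcribed a background audio chunk, the check itself is plain string matching. A sketch (the function name here is mine, not necessarily what the video's code uses):

```python
import re

def contains_wake_word(transcript: str, wake_word: str = "jarvis") -> bool:
    """Case-insensitive, punctuation-tolerant wake-word check.

    Whisper tends to add punctuation ("Jarvis." or "Hey, Jarvis!"),
    so strip everything but letters, digits and spaces before matching.
    """
    cleaned = re.sub(r"[^a-z0-9\s]", "", transcript.lower())
    return wake_word.lower() in cleaned.split()
```

In the assistant loop, every chunk captured by the background listener gets transcribed by the tiny model and run through a check like this; only on a hit does the full transcription-and-generation path fire.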
Wow dude, how hard was it to figure all those steps out? Can you make a video about system prompts and how to use them in the most powerful way, or how to get GPT4All to work only with our local documents?
I will definitely make a video about system prompts soon. I don't think the GPT4ALL python library has the local docs plugin like the application but I will look into that more.
Is there a possibility to implement the custom-instructions feature from GPT-3/4 into this?
You can absolutely add system messages in GPT4All's Python bindings. Refer to their official documentation; it's super simple to implement.
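If the bindings version you have doesn't expose a system-message option directly, you can always prepend one to the prompt yourself. The template markers below are illustrative only — each model expects its own format, which the GPT4All documentation lists:

```python
def build_prompt(system_message: str, user_prompt: str) -> str:
    """Prepend a system message to a user prompt as one string.

    The '### System/User/Response' markers are an illustrative template;
    real models each have a preferred format (see the GPT4All docs for
    the one matching your model).
    """
    return (f"### System:\n{system_message}\n\n"
            f"### User:\n{user_prompt}\n\n### Response:\n")

prompt = build_prompt("You are a concise voice assistant.",
                      "What should I pack for a rainy hike?")
```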
I'm still a beginner. Can you show me how to run the command prompt?
Hello! Running Fedora here; I can't find ggml-model-gpt4all-falcon-q4_0.bin anywhere on my computer, I only have gpt4all-falcon-q4_0.gguf. Is it ok to replace the former with the latter in the .py file?
Me too, if you found the solution please tell me.
Error-- File "C:\Users\ucpat\AppData\Local\Programs\Python\Python310\lib\site-packages\whisper.py", line 69, in
libc = ctypes.CDLL(libc_name)
File "C:\Users\ucpat\AppData\Local\Programs\Python\Python310\lib\ctypes\__init__.py", line 364, in __init__
if '/' in name or '\\' in name:
TypeError: argument of type 'NoneType' is not iterable
I got this working but it won't respond to my prompt after responding to the wake up. It's as if the listen_in_background() thread stalls and never makes the callback more than once.
If you are on Windows, that is exactly what is happening. Great intuition. For whatever reason it is no longer supported on Windows. My recommendation is to implement the listen() function instead, and it will solve this issue for Windows.
Lost me @3:20, I don't have a nomic folder on Mac
Could you do this but with Ollama rather than GPT4All?
I get an error and it doesn't work.
Hi @Austin, whenever I run main.py, it says: SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape.
I'm using a windows PC. Can you help me out?
Absolutely. So in this case you got a SyntaxError and don't know what that means. The next step you need to take is to go research what a SyntaxError is. Once you know what it is, you get a strong idea of what is causing it. You debug your error until you solve it. And that is how to program.
@@Ai_Austin Thanks Austin! I got the issue resolved. The reserved backslash symbol was giving this error; I prepended 'r' to make the path a raw string. However, there is a parameter that seems to remain uninitialized when CPU mode is active, and if the GPU is active, it shows "running out of memory". I believe the Falcon model that I downloaded is not compatible with GPT4ALL. Have you had any such issues?
Nope. All the GPT4ALL models are compatible with GPT4ALL. Again, your answer is in the error message: "You don't have enough memory to run this model," which would lead a software developer (someone who debugs their own code and is not a script kiddie) to look into the GPT4ALL documentation on model/hardware compatibility.
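The backslash fix mentioned above, spelled out — three equivalent ways to write a Windows path in Python (the path itself is a made-up example):

```python
# In ordinary string literals, backslashes start escape sequences
# ('\U' begins a unicode escape, hence the 'unicodeescape' SyntaxError).
p1 = r"C:\Users\example\models\falcon.bin"      # raw string: r"" disables escapes
p2 = "C:\\Users\\example\\models\\falcon.bin"   # doubled backslashes
p3 = "C:/Users/example/models/falcon.bin"       # forward slashes also work on Windows

assert p1 == p2  # same string, two spellings
```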
GPT4All Falcon now uses .gguf model files rather than .bin. I assume this method will only work with .bin?
Nope, everything is all the same. You'll just have to download the new model file format if you are going to use the new version of the Python library. You could use the same version I did and use the .bin, but those models may be updated, so my advice is to use the new ones and follow their official Python Bindings Documentation for how to download the GGUF models.
@@Ai_Austin Where to get the new model file format or how to get it?
In the description, the two "On Windows use Powershell to run" entries both have the same URI - just FYI, that threw me for a loop.
What tool are you using to animate your AI avatar?
I have a whole video showing that.
Need an updated version for Python 3.11 and above; playsound is for Python 3.9 and below.
Can you add all the dependencies in the description please?
This has not worked for me once.
Dude, great video. I have got an error running on Windows:
AssertionError: Audio source must be entered before adjusting, see documentation for ``AudioSource``; are you using ``source`` outside of a ``with`` statement?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\USER\Desktop\PY\GTP4ALL\gpt4all_voice\main.py", line 78, in
start_listening()
File "C:\Users\USER\Desktop\PY\GTP4ALL\gpt4all_voice\main.py", line 70, in start_listening
with source as s:
File "C:\ProgramData\anaconda3\Lib\site-packages\speech_recognition\__init__.py", line 189, in __exit__
self.stream.close()
^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'close'
Any idea what it is?
I installed pyttsx3==2.90, not just pyttsx3. The chatbot is not reacting to the wake word.
I'm working on solving this but it still has some problems: the program gets stuck after the speak method. I also want to make it talk in Spanish; well, if I get the solution I'll post it.
This is awesome, is there a way to maintain persistence in conversations? i.e. make it remember past conversations, who i am etc?
Yep yep. All things you can do with Python. Great reason to keep learning!
@@Ai_Austin Error during wake word detection: errno 13, permission denied 🥺 Please give a solution
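To make "remember past conversations" concrete, here is a minimal rolling-history sketch you could bolt onto the assistant. All names here are illustrative, not from the video's code:

```python
class ConversationMemory:
    """Minimal rolling chat history for multi-turn context."""

    def __init__(self, max_turns: int = 10):
        self.max_turns = max_turns   # cap so prompts don't grow forever
        self.turns = []              # list of (user, assistant) pairs

    def add(self, user: str, assistant: str) -> None:
        """Record one completed exchange, dropping the oldest past the cap."""
        self.turns.append((user, assistant))
        self.turns = self.turns[-self.max_turns:]

    def as_prompt(self, new_prompt: str) -> str:
        """Fold the stored turns plus the new prompt into one string."""
        lines = [f"User: {u}\nAssistant: {a}" for u, a in self.turns]
        lines.append(f"User: {new_prompt}\nAssistant:")
        return "\n".join(lines)
```

Feed memory.as_prompt(prompt) to the model instead of the raw prompt, then memory.add(prompt, response) after each reply; dump the turns list to a JSON file if you also want memory across restarts.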
Can I make this portable, like on a Raspberry Pi?
No. You need a computer capable of running a large language model.
This is truly awesome, thank you so much for taking the time to teach us programmatically challenged folks. It acknowledges me saying "Jarvis", then says "Listening". However, right after that my mic turns off and doesn't turn back on again for me to speak my prompt. Any ideas?
We've been trying to debug this in the Discord server. It seems that some Windows users are experiencing that issue, and some aren't. Either pyttsx3 or speechrecognition currently has a bug on Windows that isn't letting the program run. Both of those are supposed to be supported on Windows, but that is welcome to the world of Python programming on Windows. A lot of the time these open source developers kind of leave everybody hanging. I really advise anybody who is not a highly advanced Python programmer to use Linux to develop your projects, and then debug Windows' nightmare of an ecosystem afterwards.
@@Ai_Austin Well, I'll be on the look out for any changes to the github because after weeks and weeks of trying to build wheels and get things working on Windows this is absolutely the closest I've come to succeeding. Thanks to you. I appreciate very much how you explained the pathing in which to make it portable in such an intuitive way. Keep it up and thank you for being an asset to us all.
@tobetrashed Pro tip: learning to code on Windows is learning to code on hard mode. I will forever recommend new programmers create a Linux partition on their hard drive. Then you can code your projects, and tackle the added challenge of getting them running on Windows after you know your code works on an operating system that isn't a headache for development.
Is this because I'm using a Windows PC?
The term 'brew' is not recognized as the name of a cmdlet, function, script file, or operable program. Check
the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ brew install ffmpeg
+ ~~~~
+ CategoryInfo : ObjectNotFound: (brew:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
brew is only for Mac users.
Use scoop or choco instead.
Ye, hehe, sorry for laughing. The creator is a good geek and sometimes forgets that normal folks don't know basic OS stuff.
(brew is abbreviated from "Homebrew" :) 🤣
Good job Austin 👌 I gotta ask you:
Is it using the GPT2?
Is it still working today?
Can any other model be used without the use of api?
Thanks for the video 👍🏻
It isn't using GPT2, it's using GPT4All. I don't know if it still works, but it should. I'm not sure what you mean by "any other model", but GPT4All has a few models.
Does Whisper work on 3.11?
Yes
Thanks a lot bro 😊😊❤.
Help please.
pip install playsound
Collecting playsound
Using cached playsound-1.3.0.tar.gz (7.7 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [23 lines of output]
Is this because I need to create a virtual environment? I don't know how to as yet, but thanks to YouTube, by tomorrow I will.
solution:
pip install wheel
pip install playsound
@@colin6345 That didn't work for me, but I was able to use:
pip install "ai-core-sdk[aicore-content]"
If anyone else has that problem
Had the same issue after I tried your solution; doing
"pip install --upgrade setuptools wheel"
THEN "pip install playsound"
did the trick
Wow, great. Do you know how to train GPT4ALL with your own data? Let's say with study material, and then I can learn it via chat? Or is this not possible with GPT4ALL?
If you have a fine-tuned model, you can use it with GPT4ALL. Training a model on study material would probably be a more complex task than studying it yourself. You might want to look into embeddings instead of fine-tuning for this task.
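To show what "look into embeddings" means in miniature: embed your study notes once, embed each question, and retrieve the nearest note to include in the prompt. The vectors below are hand-written stand-ins — a real setup would get them from an embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, docs):
    """Return the text whose embedding is closest to the query.
    `docs` is a list of (text, embedding) pairs."""
    return max(docs, key=lambda d: cosine(query_vec, d[1]))[0]

# Toy stand-in embeddings for two study notes:
notes = [
    ("Photosynthesis converts light into chemical energy.", [1.0, 0.1, 0.0]),
    ("Mitosis is the process of cell division.",            [0.0, 0.2, 1.0]),
]
best = most_similar([0.9, 0.2, 0.1], notes)  # retrieves the photosynthesis note
```

The retrieved note then goes into the model's prompt alongside the question, so the LLM answers from your material without any fine-tuning.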
Is there a way to add our own Voice model to this project? Awesome stuff btw
Absolutely, there are many, many ways you can implement TTS in this program instead of just using the built-in TTS engine. I am researching the best open-source alternatives to Eleven Labs we could use to generate more realistic and customizable TTS voices. I have a long list to test, so I can't say what I recommend there quite yet.
Could you update your Bard API voice assistant video?
I am not going to be doing a full video on it soon. But I am updating the BardVoice GitHub repo for changes to the bard API tonight.
I don't know if I am missing something, but I find your videos hard to follow. For example, not only am I on Windows while you're on Mac, I also feel like there's some setup missing. What is gpt4all_voice? All I know is that we downloaded Anaconda and GPT4All from the website, and then we start punching in random code in our command prompt that gives me errors, because I am a smooth brain.
gpt4all_voice is the folder I said to create at the very start to put your Python file in. Programming is a learning curve. Even Einstein would have felt smooth-brained starting out. It's not a natural skill, but it is achievable.
@@Ai_Austin I hope I didn't come off as rude, I am just enthusiastic about this project and frustrated at my shortcomings
Totally empathize with it. Do expect my programming tutorials to be very fast paced; you'll prefer that once you get past the basics. The terminal looks scary at first, but it will become your best friend that lets you have way more power with a computer in your hand than 99% of people, if you keep learning! I also have a public Discord server where you can ask others for help on these tutorials. I respond in there as much as possible too!
@@Ai_Austin "gpt4all_voice is the folder I said to create at the very start to put your python file." You never said that in the video.
@@Roulade123 I 100% did lol
amazing, most underrated youtuber
You: will be using VS Code
Me: on Linux with a text editor
i'm with you on this, finally getting around to learning vim instead of just using nano
3:38 - the tiny model and base model are not working
Did you find a fix bro? Same
On Windows 10 it runs, but unfortunately it doesn't correctly understand a single word.
User error. Whisper is excellent at speech transcription. Consider the possibility that something is incorrect besides the code I show; that would explain why others are running without issue on Windows 10.
Hello, I really like your work, good job. Please, if you have time, can you do a video on how to use LLaMA, ideally on the Raspberry Pi? I couldn't get GPT4All to work on my Raspberry Pi.
Running a large language model on a tiny computer is not a task that is currently realistically possible. You would have to run llama on a cloud server or faster computer. These open source models do have models that were optimized for commercial hardware but that does not mean they were optimized for the slowest commercial hardware on the market.
@@Ai_Austin Thanks for your reply. All in all I like your work: motivating and easy to follow.
🎯 Key Takeaways for quick navigation:
00:00 🚀 *GPT4All Voice Assistant Introduction*
- Introduction to coding a GPT4All voice assistant.
- Overview of the voice assistant's capabilities.
- Emphasis on it being open source and offline-friendly.
01:23 🛠️ *Setting Up Development Environment*
- Installing the GPT4All desktop app and selecting a language model.
- Recommendations for using VS Code as the code editor.
- Installing Python through Anaconda and required libraries.
02:31 🧩 *Loading the GPT4All Model and Dependencies*
- Creating the necessary folder structure and Python file.
- Importing libraries, setting up the wake word, and loading the GPT4All model.
- Installing and configuring additional libraries for audio processing.
04:21 🌐 *Running GPT4All Offline*
- Modifying the OpenAI Whisper and GPT4All libraries for offline functionality.
- Handling dependencies and loading model files locally.
- Ensuring the voice assistant works without an internet connection.
05:43 🔊 *Text-to-Speech Functionality*
- Implementing text-to-speech functionality using different methods for Mac and non-Mac systems.
- Setting up the pyttsx3 library for non-Mac systems.
- Creating a function for the voice assistant to speak.
06:34 🎤 *Voice Recognition and Wake Word Detection*
- Developing functions for detecting the wake word in audio input.
- Utilizing the Whisper model for transcribing audio and detecting the wake word.
- Integrating background listening for improved efficiency.
07:58 🗣️ *Communicating with GPT4All*
- Handling user prompts, transcribing audio prompts, and generating responses with GPT4All.
- Managing program flow based on detected wake word and received prompts.
- Ensuring the voice assistant can engage in a conversation.
08:55 🎙️ *Continuous Background Listening*
- Implementing continuous background listening using the 'listen in background' function.
- Maintaining a background process for the microphone while other code runs.
- Ensuring the voice assistant remains responsive to wake words.
09:51 ⚠️ *Handling Warning Messages*
- Addressing and mitigating specific warning messages related to M1 or M2 Max CPUs.
- Improving the readability of the program's output.
- Ensuring smooth execution of the voice assistant.
10:16 🤝 *Community Engagement and Conclusion*
- Announcing the official Discord server for community support.
- Sharing the GitHub repository for the project's source code.
- Encouraging contributions and providing a platform for discussions.
Made with HARPA AI
I really wanted this to work, but it needs updating for newer python/models etc. Pretty please. :)
The code still runs for me, using the same python and library versions I made the tutorial in. This is the last voice assistant I am open sourcing. Use VoiGPT, my free to use website voice assistant, if this tutorial is too advanced. thanks for the suggestion.
would have been good to see a demo at the end of this video, not sure if I want to go through the whole process if I don't know what the end result looks like
Thanks for the tip. Considering I did a demo from 30 to 60 seconds in, how would seeing the demo after the code tutorial be better than before diving into the technical parts of the video? It seems like for audience retention you would prefer to see the demo, then the tutorial. But maybe you're onto something and I should go straight into the code (which is where 50% of the audience leaves), and then if they watch the whole video, they get a demo at the end instead of the intro. Interesting concept for sure.
@@Ai_Austin my bad, I re-watched the video and saw the demo at the start, apologies
Hi everyone! First of all, thanks AI Austin for this great gift! Your tutorial on how to install and configure GPT4All is really great! Superb job! God bless you. I have a Lenovo ThinkPad i7 with 8 GB RAM, running Ubuntu Linux 22.04.x. I got GPT4All up and running; the only thing is that it uses a very mechanical, robotic voice. I'm not getting a voice like yours here in the tutorial, or any voice close to yours... any suggestion?
In my newer video, the Gemini assistant I show how to stream TTS with OpenAI's API. I'd watch that and just implement OpenAI TTS into this GPT4ALL voice assistant if you want the best quality voice.
So this is completely offline, run at home, open-air mic, keyword listening trigger. Now I have my summer project for a weekend some time.
Me: "Prime"
Prime:"Freedom is the right of all sentient life".
Me: "Calculate the force needed to punch Megatron's heart out the back of his torso"
Prime: "I did that last week. The force needed is..."
Please create ai assistant for me
Please code my Vector bot; the servers shut down a while ago.
What a scam.
This is interesting, but it's too basic. It needs streaming, and the chunks should be queued for an asynchronous daemon to speak them.
There are a thousand features that can be added to this program.
Sounds like you found a neat feature to better serve your personal needs.
Have fun building on this program, and let us know how it goes, bud.
Tony Stark built this in a cave… with a bunch of scraps…
I built this in a van down by the river, from thin air. Tony Stark gets his code from me
there must be an easier way ..jk lol .. thanks for making sure it works offline
Anyone else getting this?
(base) C:\Users\KKouts\Desktop\gpt4all_voice>python cachedmain.py
Traceback (most recent call last):
File "C:\Users\KKouts\Desktop\gpt4all_voice\cachedmain.py", line 80, in
start_listening()
File "C:\Users\KKouts\Desktop\gpt4all_voice\cachedmain.py", line 72, in start_listening
with source as s:
^^^^^^
NameError: name 'source' is not defined
Is this paid or free?
free
I don't know why you would make the video so cringe, but, good info anyways!
I don't know why you would make this comment so cringe, but, another comment i guess!
I hate dudes and kill em all in Skyrim normally but your AI Sith is making me get a stiffy. I'm running winders and got as far as -> ImportError: blobfile is not installed. Please install it by running `pip install blobfile`.
99% of the population would consider your guide totally useless, not everybody is an AI nerd.
There are 8.1 billion humans on planet Earth. If 1% of humans find this video useful, that's about 81 million people I would have helped.
I'd rather be a nerd than someone who feels smart, because I can't do math. 🤔
Junk....too much work.... you nerds need to make thinks much easier to install.
The title said create, not download. Nerd.
I'm getting a wheel error for playsound even after I install wheel.
pip install --upgrade wheel
man I am so lost😓
ai generated tutorial is waay lamer than it sounds in your head.
that would be so dope if ai could generate my videos. would save so much time. maybe in 10 years ai can edit videos like this, create code like this and write it into a tutorial script like i did. one day bubba, one day
When I download the GPT4All Falcon model from the GPT4All desktop application, I don't find any .bin file. It shows gpt4all-falcon-q4_0.gguf instead. How can I use it?
Me too. If you found the solution, please tell me; it's been a while and I'm still trying.
@@hassanissa5644 Use gpt4all-falcon-q4_0.gguf file instead of bin file
Did you find a solution? @@hassanissa5644
Bro, in your previous video on the Bard voice assistant I'm facing an error: 'NoneType' object is not subscriptable. Help me please 😭
I have the same problem