There are lots of tuts on this and I think all of them are incomplete for non-programmer folks. Conda, Poetry, Brew, etc. etc. Installing Python should be enough, and I didn't find such a tutorial. That is why AI is very important lol. It will save us a lot of time in the future for sure.
Thanks for sharing this video~! I like the idea of private GPT~! But there is one question that needs to be asked: how do I make sure PrivateGPT has the most up-to-date Internet knowledge? For example, GPT-4 Turbo has knowledge up to April 2023. This is important to many users, as having access to the latest almost everything makes life easier.
Great video and great work! I'm developing an ML model for smart homes using all the data collected from sensors, etc. at home. What would be a good model to use for this purpose?
Thank you. Do you think privateGPT can be used as a training assistant for study? For example, I have a lot of .doc files about a specific topic and would like to train the LLM (and myself). How can I do that? How can I create my own LLM with my data? Is there a how-to available? Thank you.
So I went through the install on my Mac, and it worked the first time. The PDF review wasn't great... but how do I start it up again without redoing the installation?
Is there a way to have the system ingest all of the documents on your computer, so that it could look across everything for data held in disparate systems?
Hey Matt, thanks for all your videos! How do you feel about the performance of privateGPT? I find it to be really slow and not really ready for production, no matter if I'm using the GPU or CPU.
I also found it to be a little slow on an i9, 3090, 128GB. When I go into process monitor the consumption is minimal. Is there any way to have it use more machine power?
Can I use this as a base to train different GPTs? I have a library of legal docs, for instance. I would like a GPT to draft an NDA or a short contract; that would be one GPT. Another could be: I have a list of term sheets (which I would grade), then insert a new one and ask the GPT to tell me what should be improved in the new one vs. the older ones that were judged to be very good. Thanks!
So far PrivateGPT works pretty well with these file types: .txt, .pdf, .docx, .mp3, .csv, .epub, .json. It does not work with .jpg, .jpeg, .png, or .mp4 files. Anyway, it is an amazing tool for analyzing documents very fast.
@matthew_berman Congrats on the video ;-). What parameters do I have to change to use it in other languages, such as Chinese, Portuguese, or French? Thanks
Should I do another deeper dive into PrivateGPT and actually build something with it?
The answer to questions like this is always YES!!!
Thanks. :)
Hello, thanks for great videos about AI. Especially how to install them on local machine. Sadly there is lack detailed info about how to run local LLMs on Windows machines with Radeon cards (for ex series RX 6800 XT). It would be sweet to use power of GPU instead of waiting for CPU to process queries... Looking forward for Radeon team tuts.
That would be great.
How about combining privateGPT with MemGPT?
Absolutely!
So great to invite the authors of all these awesome open source tools. This really gives a face to the technology we use.
Glad you agree!
@matthew_berman Please cover how to set up a pipeline that does the actual embeddings on some edge cloud and then downloads them back to your ChromaDB. Assuming fast internet speeds, it should make RAG completion much faster for new, unseen documents.
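The pipeline idea above can be sketched end to end. Note that `embed_remote` is a stand-in, not a real API: a real version would POST the texts to whatever hosted embedding service you pick; it is stubbed here with a toy deterministic embedding so the flow is runnable.

```python
import hashlib

def embed_remote(texts):
    # Placeholder for the edge-cloud call: a real implementation would
    # POST `texts` to the hosted embedding service and return its vectors.
    # Stubbed with a deterministic toy "embedding" so the flow runs.
    return [[int(hashlib.md5(t.encode()).hexdigest(), 16) % 1000 / 1000.0]
            for t in texts]

def build_local_index(docs):
    """Embed remotely, then keep {id: (text, vector)} ready to persist
    into a local vector store such as ChromaDB."""
    vectors = embed_remote([d["text"] for d in docs])
    return {d["id"]: (d["text"], vec) for d, vec in zip(docs, vectors)}
```

The only design point that matters is that embedding (compute-heavy) is separated from storage and retrieval (local), so the round trip is just text up, small vectors down.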
Can it be done without Anaconda?
For those getting an error on the poetry install, it has recently changed to this:
poetry install --extras "ui"
You are my hero, thank you very much
But does this include the "local" part? When I do poetry install --extras "local" it doesn't get anything.
I love that you set it up locally without Chat GPT!
For on-screen installation guides, it's best to start with a clean virtual system. This ensures that no lingering dependencies from past installations affect the process. A fresh start helps avoid complications that can arise from a cluttered and outdated system environment, which may contain residual files and settings from earlier configurations.
IMPORTANT FOR WINDOWS USERS: the last command, "PGPT_PROFILES=local make run" DOES NOT WORK. Instead, use "set PGPT_PROFILES=local" followed by "make run"
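For context, the reason the one-liner fails is that the `VAR=value command` prefix is a POSIX-shell feature that Windows cmd lacks. A minimal sketch of the difference (POSIX side shown; the cmd equivalent is in the comment):

```shell
# POSIX shells accept a one-shot variable prefix like `PGPT_PROFILES=local make run`;
# Windows cmd does not, so there the variable must be set first, then the command run.
export PGPT_PROFILES=local      # cmd equivalent: set PGPT_PROFILES=local
echo "$PGPT_PROFILES"           # child processes (like make) now see the variable
```

PowerShell users would instead use `$env:PGPT_PROFILES="local"` before `make run`.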
Thank you
I am running into trouble trying to install Poetry on Windows. Need some assistance getting past:
• Installing llama-cpp-python (0.2.13): Failed
Same @robertgoughnour9328
I followed above, and "make run" doesn't appear to be recognized. What am I missing?
On a Windows 10 Machine :)
@@robertgoughnour9328 Im stuck here as well. Been trying to work it out on stack overflow and even GPT4 to no avail.
Best thing in these videos is the enthusiasm Matthew brings to them! Love it.
Thanks!
Great intro and although it's 5 months old now, a very useful starting point for those of us with clients who are asking about this technology. Much appreciated and cheers from Sydney - Dave
One of my fav channels. Great topics, perfect pacing. Keep it going!
Dude, this was incredible. Great content and you've earned yourself a new subscriber.
Same here
Just wanted to say thank you for all of your excellent insights, commentary, and instruction. If all your videos represent what you give out for free, I can only imagine how effective your paid consultation would be for anyone delving into the new frontier of modern AI.
The legend's back with another awesome video!
Can you do a video of deploying this to production? THAT WILL BE SUPER USEFUL!
Question: is it possible to build the entire model, with source documents and everything, and distribute it as a single package download for others to use, including the source docs and everything else-i.e., for someone else to use with a specific data set, already chunked and ingested, and where the don't have to configure they just query?
I am a newbie, and have been trying for 1 week straight. Finally had to mix it up a bit. I used martinez pyrequirements and that resolved a lot of the problems with Poetry conflicts on Windows. So that was a huge step. I just managed to install the UI part. I need a life.
How did you do that? Can you please post the commands you used?
@mrudulasawant4677 Did you read the comment below? poetry install --extras "ui". I cannot remember now, but the guy below seems to have resolved it too.
I have been looking forward to this!
I misread the thumbnail title as "Chat With Dogs" and got excited. 🐕🐶
you the real dawg
I’m way too tired, my brain saw the title as “gpt to chat with your dog” and I got really excited.
😂
Use LM Studio and AnythingLLM instead. Both programs come with one-click installers, and both have beautiful interfaces where even a beginner can get around; no messing with terminals and scripts.
It's a great video, but too complicated for beginners, I would say; it's more for advanced or intermediate users.
Have you explored how to train privategpt to always be up to date with your Paperless-NG document repository? I'm pretty sure this would be a great pairing.
The poetry command gives an error on Ubuntu: the "--with" option does not exist. Tried to search for a solution on Google but no luck.
Kinda had same issue:
poetry install --with ui,local
Group(s) not found: local (via --with), ui (via --with)
What the creator mentions at the end, adding external information tools and data extraction, makes sense. We need more than just chat-with-PDF functions.
In what way is it better than AI PDF? I mean, when it comes to referencing and text extraction.
@@Sindigo-ic6xq
Can a chatbot be installed in the Python 🐍 console 🖥 of a program like ParaView?
ParaView is software for displaying 3D models, and it contains a Python 🐍 console for running scripts for repetitive tasks.
Super cool! Thank you.
Thanks for sharing this informative content with us. It works great on my local system.
Could you create a video tutorial demonstrating the setup process for the Mixtral 8x7B model on PrivateGPT?
Thank you. Clear, step-by-step and easy to follow. Up and running
Thanks for the great video!
Question: are the "Ingested files" cached and saved somewhere? Or do I have to upload them through the UI every time I run the project?
It's really great for my school!
Thank you for sharing the video.
Great channel, great vid... helps me keep up. Question: what about a local LLM that you can point at a directory instead of uploading files individually? If I had a library of PDFs it would be very time consuming to upload every single one... there's got to be a way to point it at a directory and just let it ingest, even if it sits for hours poring through all the docs. Thanks!
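A minimal sketch of the workaround people use in the meantime: walk the directory yourself and feed each file to the ingestion API. The collection helper below is runnable; the upload loop is commented out because the endpoint path is an assumption that varies between PrivateGPT versions, so check your install's API docs.

```python
from pathlib import Path

def collect_documents(root, exts=(".pdf", ".txt", ".docx")):
    """Recursively gather ingestable files under `root`."""
    return sorted(p for p in Path(root).rglob("*") if p.suffix.lower() in exts)

# Hypothetical bulk-upload loop against a locally running PrivateGPT
# (endpoint path and port are assumptions, not confirmed API):
# import requests
# for doc in collect_documents("my_library"):
#     with open(doc, "rb") as fh:
#         requests.post("http://localhost:8001/v1/ingest/file", files={"file": fh})
```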
I am working on my college major project.
The Multi-modal AI Research System (MARS) is an innovative tool that transforms how we handle professional digital files. It addresses the challenges of managing a variety of file types, like PDF, CSV, and Excel. Using advanced AI technologies, MARS will perform tasks such as answering questions about a file, modifying its content, creating presentations, analyzing data, and generating dashboards. Our goal is to simplify file management by combining AI with diverse functionalities, making tasks efficient and user-friendly. MARS will have a chat-based UI, allowing users to send and receive files, ask questions, and get help from the system.
Should I take this version of privategpt and enhance it? As it is open source, is it okay to do that?
Thank you for the video. Is there any local model you know that can be, instead of connected to documents, connected to a database? I'd love a tutorial about that.
I've never said this, and I've been on YouTube since it was first released in the 90s, but you've been a great and useful find, and your focus on your users and how your current videos work for them is just straight-out awesome.
YouTube released in the 90s, aye? Then why do ye sound like someone who was released in the 2000s?
90s???
YouTube is an American online video sharing and social media platform headquartered in San Bruno, California, United States. Accessible worldwide, it was launched on February 14, 2005, by Steve Chen, Chad Hurley, and Jawed Karim.
Great video. It's been a while since I tried privateGPT; I can't wait to get it going here again and see what's changed.
Thanks for the gist, I was struggling to find the instructions for linux/conda/nvidia installation.
Hi, just wanted to mention that the process to install Poetry on a PC is leagues more complex than on a Mac. Any chance you could explore expanding on that aspect of the tutorial?
thank you. This is why i read the comments first now due with more complex items. I am not tech savvy but tech aware. This saves me much heartache . Thank you so much.
Hmm.. I think you can just "pip install poetry" lol
@@maxwellmarovich2975 Can't be that easy ... WAIT .. It's working! ;-)
Great video! Super useful. Thank you for your work.
Would you recommend PrivateGPT 2.0 for a client's website chatbot when accuracy and speed are important?
I'd love to see the deployment configuration for a VPS e.g. DigitalOcean (on Ubuntu, say)
This in combination with AutoGen would be great!
I just discovered your channel and I immediately subscribed.
Could privateGPT be used alongside MemGPT in AutoGen?
I wonder if we can tweak it a little to use an entire codebase and chat with that,
like it would know what I have written, such as custom functions etc.
Also for rare languages like IBM EGL this could be very useful, because all LLMs will sometimes just make up functions or classes that don't exist, or get the types wrong.
Sorry for bad English :)
I wanted to ask: how much storage space will all of this take?
How about a hardware optimization guide? I am a Mac user just like you, but let's talk about privateGPT and making it work with local GPUs. M1/M3: will it work, or is it a waste of time?
Is there no ready-to-go install file? :(
I would like to see PrivateGPT with a small library of PDFs, in order to see what it can do with journal articles from a particular field. I have tried this with the earlier PrivateGPT and 3200 PDFs. Some PDFs, ~200 of the 3200, would crash the embedding process, and when this happened I had to remove that PDF and start the loading process over from the beginning each time. This was very time consuming, but eventually I was able to get ~3200 PDFs into the database. I was wondering if this is any easier now, and would like to be able to put thousands of PDFs into the database. This could be very helpful for literature review and for finding statements that you forgot where they came from.
How are you getting them into the database without using the upload option in the UI? Directly?
I really do not understand the workings of PrivateGPT too well, but you seem to be able to make embeddings for many PDFs, and it is all put in a database. Just provide PrivateGPT with a folder and it will take in all the PDFs, as well as other formats, and make a database which it then uses when asked a question. Sorry, I do not know the technical details better. @@akki_the_tecki
As of July 2024, is PrivateGPT still your go-to for chatting with local documents?
Can you have it just look at a project folder of code, for example? Now that the GPT-4 preview supports 128k tokens, what I want is context understanding of larger projects, rather than uploading and manually selecting a few small files. And edit-in-place options. (I wrote a short Python script that does this, but would love to see a full-featured product.)
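As a rough sketch of the kind of script the comment describes, here is one way to flatten a project folder into a single prompt context under a character budget. The ~4 characters per token rule of thumb is only an approximation, and the budget value is illustrative.

```python
from pathlib import Path

def gather_context(project_dir, exts=(".py",), char_budget=400_000):
    """Concatenate source files into one prompt context, stopping at a
    rough character budget (~4 chars/token is a common rule of thumb,
    so 400k chars is in the ballpark of a 128k-token window)."""
    chunks, used = [], 0
    for path in sorted(Path(project_dir).rglob("*")):
        if path.suffix not in exts:
            continue
        text = path.read_text(errors="ignore")
        piece = f"\n# --- {path} ---\n{text}"   # header so the model can cite files
        if used + len(piece) > char_budget:
            break
        chunks.append(piece)
        used += len(piece)
    return "".join(chunks)
```

The per-file header lets the model reference specific files in its answers, which is most of what "context understanding of larger projects" needs.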
4:50
getting this error: Group(s) not found: local (via --with), ui (via --with)
Did you get any solution to this? I am stuck at the same step.
I had the same problem. It looks like that setup step is different now. Using the command poetry install --extras "ui llms-llama-cpp embeddings-huggingface vector-stores-qdrant" worked for me. You can read more about it in the "Installation" part of the PrivateGPT docs, which are linked in the description.
This is awesome. Can you recommend any freelancers who can do this stuff for clueless business owners like me?
Are you planning a comparison of privateGPT vs. localGPT? Very interested to hear your input.
I am getting this error while running poetry install --extras ui on Windows 11: ModuleNotFoundError: No module named 'tomlkit'
You don't need conda if using poetry already
When you just record a video of your terminal window like this, the player options appear over the command you're currently typing. So if I pause to look at the command and type it out, it has stuff over it, making it hard to read. Maybe you could put a graphic or self-promo or something at the bottom of the screen to make it easier to follow along if one is just watching you and pausing.

Oh, and LOVE YOUR CONTENT!! Keep it up, my friend! I would love to get AutoGen running to show my boss at work, but it's pretty difficult without a straightforward, current Windows install tutorial. Don't suppose you plan on making an updated AutoGen installation guide for Windows? I have a hunch that I'm not the only user in my exact situation. Thanks again!
🎉 Thank you sir, from a village in India
The PGPT_PROFILES=local make run worked only after running the following command:
poetry install --extras "llms-llama-cpp ui vector-stores-qdrant embeddings-huggingface"
I'm guessing it is using a smaller model in the background... cpp
YES!! After several failed attempts and a lot of google searching I came upon the same command.
This should be updated in the video description.
I intend to make a content management website (mainly with documents), utilizing a Rust developer to make the editor side a desktop app.
There are some fillable fields that are tedious to do, and I was wondering:
1. Can you train the model on what constitutes the best results based only on local data?
2. Is it hard for the programmer to make it produce predefined fields, callable by the desktop app to fill the fields from the website?
3. Is it possible to bundle it with the desktop app for easier installation by a less knowledgeable person?
I am still new to this, so excuse my ignorance.
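On point 2, a common pattern is to prompt the model to answer in JSON and have the app validate the reply before filling any fields. A minimal sketch, where the field names are hypothetical placeholders for whatever the desktop app actually expects:

```python
import json

# Hypothetical field set; in practice this comes from the app's form schema.
REQUIRED_FIELDS = {"title", "author", "date"}

def parse_fields(model_reply):
    """Parse a model reply that was prompted to answer in JSON,
    keeping only the predefined fields and rejecting incomplete replies."""
    data = json.loads(model_reply)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"model omitted fields: {sorted(missing)}")
    return {k: data[k] for k in REQUIRED_FIELDS}
```

Validating on the app side like this means the desktop client never trusts free-form model output, which keeps the Rust/desktop integration simple: it only ever sees the predefined keys.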
Just starting out... What would be the recommended tutorial(s) to get a foundation in the prerequisites for this tutorial?
Poetry command has been changed. The one that worked for me was: poetry install --extras "ui"
Thanks Mark. Intel Mac here, and I hit a segmentation fault (11) following the failed loading of Metal. My Radeon has 4 GB VRAM and supports Metal 3, so I have that on. For some reason it is not finding the default.metallib. As follows:
ggml_metal_init: allocating
ggml_metal_init: found device: AMD Radeon Pro 5500M
ggml_metal_init: found device: Intel(R) UHD Graphics 630
ggml_metal_init: picking default device: AMD Radeon Pro 5500M
ggml_metal_init: default.metallib not found, loading from source
make: *** [run] Segmentation fault: 11
Any thoughts? I have tried no Metal, running on CPU, and even a variety of
model_kwargs={"n_gpu_layers": 100} (tried 0, 50, 100).
poetry install --with ui,local, NoSuchOptionException The "--with" option does not exist.
Yeah, mate, the new way to do this is with the following command: poetry install --extras "ui". It seems that "local" is not necessary anymore.
What was the solution? Did you add this under tool.poetry.extras? If so, what did you add?
how large is the context though?
Excellent video. Thanks a lot, but I have a question:
one thing is a script that asks questions of documents, and another is questions to the LLM, right?
Isn't there an LLM model that asks questions of documents persisted in a local ChromaDB, for example?
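The question above describes the retrieval-augmented pattern: retrieve the most relevant stored documents first, then hand them to the LLM as context. Below is a toy bag-of-words version of the retrieval step only; a real setup would store embedding vectors and call something like ChromaDB's collection.query() instead.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity of two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, store, k=1):
    """Return the ids of the k documents most similar to the question."""
    q = Counter(question.lower().split())
    scored = sorted(store.items(),
                    key=lambda kv: cosine(q, Counter(kv[1].lower().split())),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```

The retrieved documents would then be pasted into the LLM prompt as context, which is essentially what PrivateGPT does with its vector store under the hood.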
Is there a page maximum for PDFs?
I wonder how long it'll be before LM-Studio also supports embeddings models so we can 'chat with our docs' too.
Thank You for sharing.
Does this support other languages for prompting?
Thanks for the awesome video! A question: why do we need to install Poetry after activating the environment? If I am not wrong, brew installs Poetry system-wide, right?
I still can't understand why all this stuff is not containerized. Who wants to do all this, when spinning up a container makes so much more sense? I get that someone still needs to do it, but these project teams should take a container approach from the get-go. It's there, they already have the build and setup scripts; it's just one more simple step to do it inside a container and then release it. There is no excuse for it, and it can all be automated too.
It sucks that you did all this seemingly good work and I get "command not found: conda" basically at step 1 and cannot proceed past that. HATE IT when the demo/gist/etc does not follow what actually happens/works.
Great video, I have learned a lot from your channel. May I know which Mac machine (with the specs) you're using? Thanks!
Very informative video. Can you please describe the configuration of your Mac which you are using for this?
Sir, I did this but the GPT is so slow. Do I need to make sure of any other installations? I'm running this on Windows.
Please help me if anyone knows the solution.
What I need is the ability to do this. When I try, I get errors in the command prompt, like failures from commands not being recognized. Does this all require Windows 11 or Linux, or...? How do I uninstall Chocolatey or other command-line enhancers to start over and finally get this working? For the most part my problems are with Git. A video on how to clean out and then reinstall all the files needed, Git included, would be very helpful.
Poetry on Pop!_OS 22.04 LTS did not work.
poetry install --with ui,local failed with error:
NoSuchOptionException
The "--with" option does not exist.
I could not get poetry to work. I will have to learn it better before I try code in this video.
Hi... yours is the only step by step guide that actually worked for me, thanks! Where are the uploaded docs stored and can I just put files into that directory or does one have to manually add via the application? Is there some limit to the amount of files (in GB or whatever) the program can handle?
Wasn't able to get these step by step instructions to work on my Mac. Tried both methods, the one in the gist file (pyenv) and the method in the video (conda).
Awesome, one question: can you actually upload many documents and ask about all of their contents?
I tried following all the steps and managed to run it, but chat generation is so slow. I'm not sure if this is normal? It took me more than a minute to generate an answer.
Same for me. I wonder if there is a way to allocate more PC power to make it run faster?
Finally got it set up, but noticed it wasn't very fast. When I open Performance tab on Task Mgr, I noticed it's not really challenging the CPU or GPU. Is there a way to make it use more of the available power to be faster?
Also, to those asking setup questions: I'm very new to this and struggled a lot, then instantly felt kind of like a dummy when I realized I could plug my errors into ChatGPT and ask it for help! (Although I suspect it now knows I'm "looking around", heh.)
There are lots of tutorials on this, and I think all of them are incomplete for non-programmer folks. Conda, Poetry, Brew, etc., etc. Installing Python should be enough, and I didn't find such a tutorial. That is why AI is very important lol. It will save us a lot of time in the future, for sure.
How do I remove uploaded files from PrivateGPT?
Thank you for the guide. How long did it take for you to ingest the HP book? I have a 3080 and a 7800X3D and it's taking 2.5 hrs.
@matthew_berman please cover how to setup a pipeline that does the actual embeddings on some edge cloud, and then downloaded back to your chromadb. Assuming fast internet speeds, it should make the RAG completion much faster for new unseen documents
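While waiting for that video, a minimal sketch of the idea in plain Python, assuming a hypothetical batch-embedding service (`embed_remote` is a placeholder you'd wire to your provider). The output records match the ids/documents/embeddings shape that Chroma's `collection.add()` accepts, so the downloaded vectors can go straight into a local collection:

```python
import hashlib
import json

def embed_remote(texts):
    # Placeholder for one round-trip to an edge-cloud embedding service.
    # The name and call shape are assumptions; wire to your provider's API.
    raise NotImplementedError

def build_local_records(docs, embed_fn=embed_remote, path=None):
    """Embed unseen documents remotely, then persist the vectors locally.

    Returns {"ids", "documents", "embeddings"}, which can be fed into a
    local Chroma collection via collection.add(**records).
    """
    ids = [hashlib.sha1(d.encode("utf-8")).hexdigest() for d in docs]
    embeddings = embed_fn(docs)  # one network round-trip for the whole batch
    records = {"ids": ids, "documents": list(docs), "embeddings": embeddings}
    if path:  # optional local cache, so re-ingesting the same docs is free
        with open(path, "w") as f:
            json.dump(records, f)
    return records
```

Since retrieval and generation stay local, only the embedding hop touches the network, which is where the speedup for new documents would come from.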
I get "Group(s) not found: local (via --with), ui (via --with)" while running the poetry install --with ui,local command.
Thanks for sharing this video~! I like the idea of PrivateGPT~! BUT there is one question that needs to be asked: how do I make sure PrivateGPT has the most up-to-date Internet knowledge? ChatGPT-4 Turbo, for example, has knowledge up to April 2023. This is important to many users, as having access to the latest of almost everything makes life easier.
PrivateGPT has no knowledge from the beginning. Its knowledge is the texts that you load into it.
Great video and great work! I'm developing an ML system for smart homes using all the data collected from sensors, etc. at home. What would be a good model to use for this purpose?
Thank you. Do you think PrivateGPT can be used as a training assistant for study?
For example, I have a lot of .doc files about a special topic and would like to train the LLM, and myself along with it.
How can I do that? How can I create my own LLM with my data? Is there a how-to available? Thank you.
Awesome video, very informative. By the way, is there a workaround to install an LLM on an enterprise laptop where software installation is not allowed?
Hey, your content is so great. Can I ask for the specs of the PC you use to run PrivateGPT? It looks so smooth on your system.
So I went through the install on my Mac, and it worked the first time. The PDF review wasn't great... but how do I start it up again without redoing the installation?
Hey, is there a limit on how many MBs of docs I can upload to the LLM, or is there no limit? Please help me out with this, Matthew.
Hi, is it possible for you to make a new PrivateGPT video showing how to use the API with it?
Thank you, but I have an unrelated question: how can I save a model after I fine-tune it with Gradient AI in Google Colab?
I have documents with Images and text. If the LLM references text that has an image near it, how can I see that image or page of the document?
Is there a way to have the system ingest all of the documents in your computer so that it could look across everything for data held in disparate systems?
Hey Matt, thanks for all your videos! How do you feel about the performance of PrivateGPT? I find it really slow and not really ready for production, no matter if I'm using the GPU or the CPU.
I also found it to be a little slow on an i9, 3090, 128GB. When I go into the process monitor, consumption is minimal. Is there any way to have it use more machine power?
Can I use this as a base to train different GPTs? I have a library of legal docs, for instance. I would like a GPT to draft an NDA or a short contract; that would be one GPT. Another could be: I have a list of term sheets (which I would like to grade), then I insert a new one and ask the GPT what should be improved in the new one vs. the older ones that were judged to be very good. Thanks!
When do we get a query function with LLM chat added, so we can ask it to write new information relative to something we had it ingest?
So far PrivateGPT works pretty well with these file types: .txt, .pdf, .docx, .mp3, .csv, .epub, and .json, but it does not work with .jpg, .jpeg, .png, or .mp4 files. Anyway, it is an amazing tool for analyzing documents very fast.
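If you're scripting ingestion, a tiny pre-filter for the types listed above saves failed uploads (the extension set is copied from this comment, not from PrivateGPT's docs; adjust it for your version):

```python
from pathlib import Path

# Extensions reported to work in this thread; treat as an assumption.
SUPPORTED = {".txt", ".pdf", ".docx", ".mp3", ".csv", ".epub", ".json"}

def split_ingestable(paths):
    """Split file paths into (ingestable, skipped) by extension."""
    ok, skipped = [], []
    for p in map(Path, paths):
        (ok if p.suffix.lower() in SUPPORTED else skipped).append(p)
    return ok, skipped
```

For example, `split_ingestable(["book.pdf", "cover.png"])` keeps the PDF and sets the PNG aside instead of letting the upload fail.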
poetry install --extras "ui" is now correct; the "local" extra no longer exists. Please update the tutorial. Thanks for all the work!
@matthew_berman Congrats on the video ;-). What parameters do I have to change to use other languages, such as Chinese, Portuguese, or French? Thanks!