PrivateGPT 4.0 Windows Install Guide (Chat to Docs) Ollama & Mistral LLM Support!

  • Published Jun 19, 2024
  • 🚀 PrivateGPT Latest Version (0.4.0) Setup Guide Video April 2024 | AI Document Ingestion & Graphical Chat - Windows Install Guide 🤖 PrivateGPT using the Ollama backend and Mistral LLM! Easy Setup on Windows!
    Welcome to the April 2024 version 0.4.0 of PrivateGPT!
    🌐 New Features Overview.
    In this version the complexity of setting up GPU support has been removed: you can now choose to integrate PrivateGPT with Ollama and have it do all the heavy lifting! This version still features the web frontend!
    🔨 Building PrivateGPT on Windows is now Easy!
    PrivateGPT with Ollama Windows install instructions. Get PrivateGPT and Ollama working on Windows quickly! Use PrivateGPT for safe, secure, offline file ingestion and chat with your docs!
    👍 Like, Share, Subscribe!
    If you found this guide helpful, give it a thumbs up, share it with your friends, and don't forget to subscribe for more tech tutorials and AI insights. Stay tuned for future updates on StuffAboutStuff4045!
    🔗 Links.
    Ollama.
    ollama.com/
    PrivateGPT Install Instructions used.
    docs.privategpt.dev/installat...
    PrivateGPT Github Project Page.
    github.com/zylon-ai/private-gpt
    Make for Windows.
    gnuwin32.sourceforge.net/pack...
    📌 Timestamps
    0:00 Introduction: How PrivateGPT Evolved Through the 3 Main Versions
    2:25 Setup Ollama for PrivateGPT
    3:37 Private GPT Required SW v0.4.0 & System Setup
    6:28 Setup PrivateGPT on Windows
    10:15 Testing PrivateGPT and Ollama
  • Science & Technology

Comments • 199

  • @Offsuit72
    @Offsuit72 2 หลายเดือนก่อน +10

    I cannot thank you enough. I've been struggling for several days on this, it turns out I was using outdated info and half installing the wrong versions of things. You made things so clear and I'm thrilled to be successful in this!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      You're welcome! I am glad to hear the video assisted! Thanks so much for reaching out.

  • @radudamian3473
    @radudamian3473 2 หลายเดือนก่อน +6

    Thank you. Liked and subscribed. I most appreciate your patience in giving step-by-step, easy-to-follow instructions. Helped me, a total noob... so hats off.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Thanks for the sub and the great feedback. Appreciated!

  • @JiuJitsuTech
    @JiuJitsuTech หลายเดือนก่อน +1

    Thank you for this vid! I watched several others and this was the most straight forward approach. Super helpful !!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Thanks for the feedback, glad the video was helpful.

  • @chahrah.5209
    @chahrah.5209 หลายเดือนก่อน +1

    Huge thanks for the video, AND for taking the time to help solve problems in the comments; that was just as helpful. Definitely subscribing.

  • @DeTruthful
    @DeTruthful หลายเดือนก่อน +1

    Thanks man, I did a few other tutorials and couldn't figure it out. This made it so simple. Subscribed!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Thanks for the sub! Glad the video helped out.

  • @likanella
    @likanella 7 วันที่ผ่านมา +1

    Thank you, thank you so much. There were no detailed instructions anywhere. Everything worked out! You're great!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 วันที่ผ่านมา

      You're welcome! Glad to hear you are up and running. Thanks for the feedback!

  • @curtisdevault6427
    @curtisdevault6427 6 วันที่ผ่านมา +1

    Thank you for this! I've been struggling with this for a few days now, you provided up to date and clear instructions that made it super simple!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 วันที่ผ่านมา

      Great to hear! Glad you are up and running. Thanks for the feedback.

  • @OmerAbdalla
    @OmerAbdalla หลายเดือนก่อน +2

    This is a great installation guide. Precise and clear steps. I made one mistake when I tried to set up the environment variable in the Anaconda Command Prompt instead of the PowerShell prompt, and once I fixed my mistake I was able to complete the configuration successfully. Thank you very much.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      You're welcome! Thanks for reaching out. Glad the video helped.

  • @GrahamJefferson
    @GrahamJefferson 23 วันที่ผ่านมา +1

    Thank you for taking the time to make this video, it was just what I was looking for. 😎

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  22 วันที่ผ่านมา

      Glad it was helpful! Thanks for taking the time to reach out.

  • @bananacomputer9351
    @bananacomputer9351 หลายเดือนก่อน +2

    After two hours of research, I started over with your tutorial and finished in 10 minutes. Thank you, thank you!!!

  • @ilieschamkar6767
    @ilieschamkar6767 หลายเดือนก่อน +1

    It worked like a charm, thanks!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน +1

      Great to hear! Thanks for the feedback much appreciated.

  • @Lucas-iv6ld
    @Lucas-iv6ld หลายเดือนก่อน +1

    It worked, thanks!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Glad it helped, good to hear you are up and running.

  • @RyanHokie
    @RyanHokie หลายเดือนก่อน +1

    Thank you for your detailed tutorial

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      You’re welcome 😊 Glad the video assisted. Thank you so much for the feedback.

  • @Matthew-Peterson
    @Matthew-Peterson 2 หลายเดือนก่อน +1

    Brilliant Guide. Subscribed.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Welcome aboard! Thanks for the sub and feedback.

  • @cinchstik
    @cinchstik 2 หลายเดือนก่อน

    Got it to run on VirtualBox. Works great! Thanks

  • @aysberg9403
    @aysberg9403 2 หลายเดือนก่อน +1

    excellent explanation, thank you very much

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Pleasure! Glad the video assisted. Thanks for the feedback!

  • @rchatterjee48
    @rchatterjee48 26 วันที่ผ่านมา +1

    Thank you very much it works

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  26 วันที่ผ่านมา

      You're welcome! Thanks for making contact.

  • @maxxxxam00
    @maxxxxam00 2 หลายเดือนก่อน +1

    Excellent video, very clear step guides. Do you have or could you make a docker compose file that does all the steps in a docker environment?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, thank you so much for the feedback, let me look into it and I will revert soon! Thanks.

  • @rummankhan5499
    @rummankhan5499 14 วันที่ผ่านมา +1

    awesome ! best tutorial ever... can you please make a video on web deploy/upload of local/privategpt... without openai (if thats doable)

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  14 วันที่ผ่านมา

      Hi, thank you for the feedback! Noted on the video idea. Glad you are up and running.

  • @Quicksilver87878787
    @Quicksilver87878787 13 วันที่ผ่านมา

    Thanks! Is there any specific reason why you are using Conda as opposed to virtualenv?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 วันที่ผ่านมา

      Hi, I use Anaconda for most of my AI environments when working on Windows. I find it easy to work with and install required SW etc. Thanks for reaching out.

  • @feliphefaleiros9540
    @feliphefaleiros9540 21 วันที่ผ่านมา

    Very well explained, thank you for the videos. In every version you explained, you showed it step by step. You're awesome.

  • @erxvlog
    @erxvlog 25 วันที่ผ่านมา +1

    This was excellent. One issue that did come up was uploading PDFs... there was an error related to "nomic". I signed up for nomic and installed it. PDFs seem to be working now.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  23 วันที่ผ่านมา

      Thanks for reaching out. Glad to hear you are up and running.

  • @creamonmynutella2476
    @creamonmynutella2476 หลายเดือนก่อน +2

    is there a way to make this automatically start when the system is powered on?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Sure, it's possible with PowerShell scripts. Let me check it out and revert.

  • @Whoisthelearner
    @Whoisthelearner หลายเดือนก่อน

    Great thanks for the awesome video, I wonder whether you would know of any similar setup for the new llama3 LLM? If yes, it would be great if you could make a new video about it!!!! Great thanks!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  29 วันที่ผ่านมา +1

      Hi, sure you can. You can install llama3 on Ollama. You would need to change the config files. The link below should assist until I can update this video. Thanks for the feedback and the video idea.
      docs.privategpt.dev/manual/advanced-setup/llm-backends
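      As a rough sketch (an assumption on my part, since the exact key names can differ between PrivateGPT releases, so verify against the linked docs): pull the model in Ollama, then point the ollama section of settings-ollama.yaml at it.
      ollama pull llama3
      In settings-ollama.yaml:
      ollama:
        llm_model: llama3
      Then set the profile and start as usual ($env:PGPT_PROFILES="ollama" followed by make run).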

    • @Whoisthelearner
      @Whoisthelearner 28 วันที่ผ่านมา

      @@stuffaboutstuff4045 Great thanks for the prompt reply and the link. Looking forward to your new video as well!! You make it very easy for a beginner like me! Really appreciate your work

    • @Whoisthelearner
      @Whoisthelearner 28 วันที่ผ่านมา

      @@stuffaboutstuff4045 If you don't mind, allow me to ask a question: I am planning to adopt the Ollama approach but I don't know what part of the video I should turn to for the command PGPT_PROFILES=ollama make run. Great thanks!

  • @chjpiu
    @chjpiu 2 หลายเดือนก่อน +1

    Thanks a lot. Please let me know how to change the LLM model in PrivateGPT? For me, the default model is Mistral.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, sorry for the late reply. You can check out this page to change your LLM. Let me know if you came right with this. Thanks for reaching out! 🔗docs.privategpt.dev/manual/advanced-setup/llm-backends🔗

  • @fishingbeard2124
    @fishingbeard2124 หลายเดือนก่อน

    Can I suggest that next time you make a video like this you enlarge the window with the commands? 75% of your window is blank and the important text is small, so I think it would be helpful to have less blank space and larger text. Thanks

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Thanks for the input, agreed; I have started to zoom in on the command prompts in newer vids. Thanks for reaching out and I hope the video helped.

  • @anjeel08
    @anjeel08 หลายเดือนก่อน

    This is simply superb. I could install it and run it with your clear step-by-step instructions. Thank you so very much. However, I do notice that uploading the documents to be able to chat with my own set of data takes a very long time. Is there a way we can tweak this and make uploading documents faster? I am only using one Word doc of 30 pages with mainly text and one PDF document of 88 pages with text and images. The Word doc was uploaded in 10 min but the PDF runs endlessly. I'd appreciate it if you could make a video on how to use OpenAI or one of the online providers to get speed (when confidentiality is not important). Thank you in advance for your tip.

    • @firatguven6592
      @firatguven6592 หลายเดือนก่อน

      I also wrote a comment complaining about the same issue. I also have the 2.0 version from him, and as if that wasn't slow enough at uploading, the upload in 2.0 was still considerably faster. During upload I had 80% load on my 32-thread CPU, but now in 4.0 the CPU is just idling at 5%, which explains the slower upload. The parsing nodes are generating the embeddings much more slowly. Since I have more than 10,000 PDF files, it is unacceptable to wait endlessly during the upload. I have now been waiting 40 minutes for just 2 huge files with around 3,000 pages, which took only 20 minutes in total with the old one. I have no idea how long it will take to finish, and we are talking about only 2 files. The other 9,998 files will not even be uploaded in one year if this problem is not solved. I am disappointed to lose time with 4.0.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, thanks for reaching out. The new version allows you to use numerous LLM backends. This video shows how to use Ollama just to make the install easier for most, and it's now the recommended option. The new version can still be built exactly like the previous one; if you had better performance using a local GPU and LlamaCPP you can still enable this as a profile. If you really want high-speed processing you can send it to OpenAI or one of the OpenAI-like options. Have a look at the backends you can enable for this version in the link below. Let me know if you come right.
      docs.privategpt.dev/manual/advanced-setup/llm-backends

  • @drmetroyt
    @drmetroyt 7 วันที่ผ่านมา

    Hope this could be installed as a Docker container.

  • @Reality_Check_1984
    @Reality_Check_1984 2 หลายเดือนก่อน +1

    Looks like they released a 0.5.0 today. If you install this now and look at the version it will be 0.5.0. All of your install instructions still work as it wasn't a fundamental change like the last big update. They added pipeline ingestion which I hope fixes the slow ollama ingestion speed but so far I still think llama is faster.

    • @Reality_Check_1984
      @Reality_Check_1984 2 หลายเดือนก่อน +1

      so I ran it over night and ollama is still not performing well with ingestion. It definitely under utilizes the hardware for ingestion. Right now a lot of the local LLMs don't seem to leverage the hardware as well when it comes to ingestion. That is an improvement I would like to see in general. Not just of ollama or privateGPT. The ability to ingest faster through better hardware utilization/improved processing and storing ingest files long term on the drive along with the ability to query the drive and load relevant chunks into the vRAM would significantly expand the depth and breadth of what these tools can be used for. vRAM is never going to offer enough and constantly training models won't work either.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi thanks for the update. Had a bit of a scare with the update available moments after publishing this vid 😊. Thanks for the confirmation, I also checked and the install instructions remain intact. Appreciate the feedback. PS. I totally agree with the performance comment made.

  • @nunomlucio5789
    @nunomlucio5789 2 หลายเดือนก่อน +1

    In terms of speed, I feel that the previous version is way faster than this one using Ollama (by previous version I mean using CUDA and so on), in terms of answering and even loading documents.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน +1

      Agreed, it just increases the build difficulty a bit. 👨‍💻 Thanks for reaching out.

  • @abhiudaychandra
    @abhiudaychandra หลายเดือนก่อน

    Hi. Thanks for the great video, but uploading even just one document & answering is so slow that I just cannot use it any further. Could you please tell me how to uninstall PrivateGPT? The other applications I can of course uninstall, but is there some command I should enter to remove files?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, yes it's a bit slower against a local LLM, depending on the GPU available in the machine. Did you try using OpenAI or one of the online providers if you want it super fast? If confidentiality is not your main concern, maybe give it a go. If you want to remove it, just uninstall all the SW and delete the project folder you built PrivateGPT in and you should be fine. Thanks for reaching out.

  • @tarandalinux8323
    @tarandalinux8323 12 วันที่ผ่านมา

    Thank you for the great video. I'm at 9:48 and the command $env:PGPT_PROFILES="ollama" gives me an error: The filename, directory name, or volume label syntax is incorrect.
    (privategpt) C:\gpt\private-gpt>$env:PGPT_PROFILES="ollama" (I don't get the colors you get)

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 วันที่ผ่านมา

      Hi, can you confirm you are running this in your Anaconda PowerShell terminal? Check the steps I use from about 9:20 in the video. Let me know if you are up and running.
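      For reference, the $env: syntax only works in PowerShell; a plain cmd-style Anaconda Prompt sets environment variables differently, which is why the prompt in your snippet matters:
      Anaconda PowerShell Prompt: $env:PGPT_PROFILES="ollama"
      Plain Anaconda/cmd prompt: set PGPT_PROFILES=ollama
      The video uses the PowerShell form, so the Anaconda PowerShell Prompt is the one to run it in.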

  • @shashankshukla6691
    @shashankshukla6691 2 หลายเดือนก่อน

    Thank you, but how can we make use of an NVIDIA GPU if we have one in our device? I have an NVIDIA T600, for example.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, if you build with Ollama it will offload to the GPU automatically (Nvidia or AMD). It does not push it to its full potential, from what I have seen; utilization should get better with each evolution of the project. Let me know if you got the GPU to kick in when offloading.

  • @farfaouimohamedamine3288
    @farfaouimohamedamine3288 หลายเดือนก่อน +1

    Hi, thank you for your tutorial. I have followed the steps as you did but I get this error when I try to install the dependencies of PrivateGPT:
    (privategpt) C:\pgpt\private-gpt>poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"
    No Python at '"C:\Program Files\Python312\python.exe'
    NOTE: I did not create the virtual environment inside the system32 directory, I created it in the pgpt directory.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, did you get this resolved. Yes, I also created a pgpt folder in the root of the drive. Just to confirm are you running Python 3.11.xx? Let me know if you came right with this.

  • @lherediav
    @lherediav 2 หลายเดือนก่อน +2

    For some reason Anaconda doesn't recognize the CONDA command on my end and doesn't show (base) at the beginning of the Anaconda prompt, any solutions? I am stuck at the 7:46 part.

    • @lherediav
      @lherediav 2 หลายเดือนก่อน

      When I open the Anaconda prompt it shows this: Failed to create temp directory "C:\Users\Neo Samurai\AppData\Local\Temp\conda-\"

    • @thehuskylovers1432
      @thehuskylovers1432 2 หลายเดือนก่อน

      Same issue here, I cannot get past this in either v2 or this version.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, just checking if you came right with this? When you open your Anaconda Prompt or Anaconda PowerShell prompt they must open and load and show (base). Is this not showing in both Anaconda Prompt or Anaconda PowerShell prompt? Did you try and open both in admin mode? It seems there is a problem with the anaconda install on the machine.

  • @Stealthy_Sloth
    @Stealthy_Sloth หลายเดือนก่อน +1

    Please do one for llama 3.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Thanks for the idea. If you want you can try and get it up and running. You can install the 8b model if you use Ollama. (ollama run llama3:8b) The link below has the example configs that would need to change. Thanks for reaching out and for the feedback, much appreciated. docs.privategpt.dev/manual/advanced-setup/llm-backends

  • @workmail6406
    @workmail6406 หลายเดือนก่อน

    Hello, I have managed to follow the instructions up until 9:50 for running the environment with make run. However, when I initiate the command in an administrator Anaconda PowerShell after navigating to my private-gpt folder, I encounter the error "The term 'make' is not recognized as the name of a cmdlet, function". I have no idea how I can get Anaconda PowerShell to recognize the command on my Windows PC. What can I do to finally start the PrivateGPT server?

    • @workmail6406
      @workmail6406 หลายเดือนก่อน

      Now that I installed gitbash from the makeforwindows website it works. However, I now run into this error when running make run:
      Traceback (most recent call last):
      File "<frozen runpy>", line 198, in _run_module_as_main
      File "<frozen runpy>", line 88, in _run_code
      File "C:\gpt\private-gpt\private_gpt\__main__.py", line 5, in <module>
      from private_gpt.main import app
      File "C:\gpt\private-gpt\private_gpt\main.py", line 4, in <module>
      from private_gpt.launcher import create_app
      File "C:\gpt\private-gpt\private_gpt\launcher.py", line 12, in <module>
      from private_gpt.server.chat.chat_router import chat_router
      File "C:\gpt\private-gpt\private_gpt\server\chat\chat_router.py", line 7, in <module>
      from private_gpt.open_ai.openai_models import (
      File "C:\gpt\private-gpt\private_gpt\open_ai\openai_models.py", line 9, in <module>
      from private_gpt.server.chunks.chunks_service import Chunk
      File "C:\gpt\private-gpt\private_gpt\server\chunks\chunks_service.py", line 10, in <module>
      from private_gpt.components.llm.llm_component import LLMComponent
      File "C:\gpt\private-gpt\private_gpt\components\llm\llm_component.py", line 9, in <module>
      from transformers import AutoTokenizer # type: ignore
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "C:\Users\dmm\anaconda3\envs\privategpt\Lib\site-packages\transformers\__init__.py", line 26, in <module>
      from . import dependency_versions_check
      ImportError: cannot import name 'dependency_versions_check' from partially initialized module 'transformers' (most likely due to a circular import) (C:\Users\dmm\anaconda3\envs\privategpt\Lib\site-packages\transformers\__init__.py)
      make: *** [run] Error 1
      Any idea how I can resolve this?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, can you confirm loading all the required SW including all the Make steps I perform from 3:35 into the video. Let me know if you were able to resolve this. Also confirm you are running everything in the same terminals and admin mode where needed. Make sure you use Python within 3.11.xx in your Anaconda Environment.

  • @drSchnegger
    @drSchnegger 2 หลายเดือนก่อน +1

    If I make a prompt, I get an error: Collection make_this_parameterizable_per_api_call not found
    When I do another prompt, I get the error:
    'NoneType' object has no attribute 'split'

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน +1

      Hi, from what I can gather you will get this error if you prompt documents but no documents are loaded. Can you ensure you uploaded documents into PrivateGPT and selected them prior to prompting. Let me know if you come right with this. Thanks for reaching out! If the problem persists, check out these links and see if they help: github.com/zylon-ai/private-gpt/issues/1334 , github.com/zylon-ai/private-gpt/issues/1566

    • @JiuJitsuTech
      @JiuJitsuTech หลายเดือนก่อน +2

      From the git issues page, this resolved the issue for me. "This error occurs when using the Query Docs feature with no documents ingested. After the error occurs the first time, switching back to LLM Chat does not resolve the error -- the model needs to be restarted." Enter Ctrl-C in Powershell Prompt to stop the server and of course 'make run' to re-start.

  • @pranavmalhotra7635
    @pranavmalhotra7635 หลายเดือนก่อน +1

    ERROR: Could not find a version that satisfies the requirement pipx (from versions: none)
    ERROR: No matching distribution found for pipx
    I am receiving this error and hence I am unable to proceed with the installation, any tips?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, can you confirm you are installing pipx in a normal admin mode command prompt. Just to check if you followed the steps from 6:30 in the video onwards. If still not working can you confirm you have Python 3.11.xx installed with the pip package that ships with. Let me know if you came right with this. Thanks for reaching out.

  • @user-bg7zh7ub2h
    @user-bg7zh7ub2h หลายเดือนก่อน

    I have a question: how do I run it again if my system restarts? What steps or commands do I have to run again? Can we set it to autostart when my system starts?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน +1

      Hi, you can just run Anaconda PowerShell prompt again, activate the environment you created. Make sure you are in the project folder. Set the env variable you want to use and execute make run. Check the steps performed in the Anaconda PowerShell from 9:24 in the video. Let me know if you are up and running. Thanks for reaching out.
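      As a concrete sketch of that sequence (assuming the environment is named privategpt and the project lives in c:\pgpt\private-gpt as in the video), run the following in an Anaconda PowerShell Prompt, with Ollama running in the background:
      conda activate privategpt
      cd c:\pgpt\private-gpt
      $env:PGPT_PROFILES="ollama"
      make run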

  • @guille8237
    @guille8237 หลายเดือนก่อน +1

    I got it running but I want to change the model to DeepSeek Coder, how do I do it? Never mind.

  • @vaibhavdivakar4653
    @vaibhavdivakar4653 2 หลายเดือนก่อน +1

    I followed the steps and for some reason when I do the make run command, it is giving me "no Module called uvicorn".
    I installed the module using the pip command and it still says the same error..
    :(

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน +2

      Hi, does it not launch at all and stop with this error? It seems it's the web server that needs to start. When you launch it, I know it can display a uvicorn.error message, but when you open the browser you will see the site up and everything works.
      If you get this, uvicorn.error - Uvicorn running on http://0.0.0.0:8001, then it works. But from the comment it sounds like you have the whole module missing. PrivateGPT is a complicated build, but the steps in the video are valid; I would suggest retracing the required SW and versions (like Python etc.) and the setup steps just to make double sure no steps were missed. I also find more success running the terminals in admin mode to avoid issues. Let me know if you came right with this and thanks for making contact.

    • @SiddharthShukla987
      @SiddharthShukla987 หลายเดือนก่อน +2

      I also faced the same issue because I forgot to start the env. Check yours too

  • @user-vr5lg6mv2i
    @user-vr5lg6mv2i หลายเดือนก่อน

    Will Python 3.12 do the job or do I specifically need 3.11?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, You would need Python 3.11.xx. The code currently checks if the installed Python version is in that range. I got build errors with 3.12 installed in the environment. Let me know if you are up and running.

  • @JanaFourie-cm5eh
    @JanaFourie-cm5eh หลายเดือนก่อน

    Hi, when querying files only the sources appear after it stopped running (files ingestion seems to work fine). How can I fix this? Or is it still running but extremely slow...?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  29 วันที่ผ่านมา

      Hi, did you come right with this. There are some good comments on this video on speeding up the install including working with large docs that slow down the system. Check the link below, maybe this can assist. Also check the terminal when this happens for any hints on what might be hanging up.
      docs.privategpt.dev/manual/document-management/ingestion#ingestion-speed

    • @JanaFourie-cm5eh
      @JanaFourie-cm5eh 27 วันที่ผ่านมา

      @@stuffaboutstuff4045 Thanks, how can I contact you? I noted you are South African through the accent!

  • @dauwswinnen2721
    @dauwswinnen2721 20 วันที่ผ่านมา +1

    I did everything but installed the wrong model. How can I change models after doing everything?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  20 วันที่ผ่านมา +1

      Hi,
      If you are using Ollama you can just update your config file in your PrivateGPT folder to point to the model downloaded in Ollama. If you want multiple models on Ollama that's fine; I use my Ollama to feed numerous AI frontends with multiple LLMs running.
      Check the link below for the defaults (the default is Mistral 7b).
      On your Ollama box, install the models to be used: the default PrivateGPT settings-ollama.yaml is configured to use the mistral 7b LLM (~4GB) and nomic-embed-text embeddings (~275MB).
      Commands to run in CMD:
      ollama pull mistral
      ollama pull nomic-embed-text
      ollama serve
      docs.privategpt.dev/installation/getting-started/installation

  • @SuffeteIfriqi
    @SuffeteIfriqi 16 วันที่ผ่านมา

    Such a great video, which in my case makes it even more frustrating because I'm literally stuck at the last step.
    It says:
    make: *** Keine Regel, um "run" zu erstellen. Schluss.
    Which translates to:
    make: *** No rule to make target "run". Stop.
    Any idea what this might be caused by? I've restarted the entire process twice, no luck...
    Thank you so much.

    • @SuffeteIfriqi
      @SuffeteIfriqi 16 วันที่ผ่านมา

      I suspect it might be caused by GNU Make's path, although I did include it in the env variables...

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  14 วันที่ผ่านมา

      Hi, did you manage to resolve this issue? Please check from about 9:15 into the video. Are you completing these steps in an admin Anaconda PowerShell with the environment activated and from the correct folder? Let me know if you came right. Thanks.

  • @mohith-qm9vf
    @mohith-qm9vf หลายเดือนก่อน

    Hi, will this installation work for ubuntu? if not what changes do I need to make??? thanks a lot

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน +1

      Hi, just checking if you got this built on Ubuntu. If not you can follow the steps for Linux using the link below. Thanks for reaching out.
      docs.privategpt.dev/installation/getting-started/installation

    • @mohith-qm9vf
      @mohith-qm9vf 29 วันที่ผ่านมา

      @@stuffaboutstuff4045 thanks a lot!!

  • @travisswiger9213
    @travisswiger9213 2 หลายเดือนก่อน

    How do I restart this? I've got it running a few times, but if I restart I have a hell of a time getting it working again. Can I make a bat file somehow?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน +1

      Hi, when you launch it in the Anaconda PowerShell Prompt, just go back to that terminal when done and press "Control + C". This will shut it down. You can save the startup steps as a PowerShell script and start it that way, or as a bat file if you use cmd; see the sketch below. Thanks for making contact, let me know if you came right with this.
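      A minimal sketch of such a start script (a hypothetical start-privategpt.ps1; it assumes conda has been initialised for PowerShell and that the environment name and project path match your setup):
      # start-privategpt.ps1 - launch PrivateGPT with the Ollama profile
      conda activate privategpt
      Set-Location c:\pgpt\private-gpt
      $env:PGPT_PROFILES = "ollama"
      make run
      Run it from an Anaconda PowerShell Prompt (or a conda-initialised PowerShell) and stop the server with "Control + C" as described above.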

  • @patrickdarbeau1301
    @patrickdarbeau1301 2 หลายเดือนก่อน

    Hello, I got the following error message when running the command:
    " poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant" "
    " No module named 'build' "
    Can you help me ? Thanks

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, did you install all the required SW I install at the start.

    • @Matthew-Peterson
      @Matthew-Peterson 2 หลายเดือนก่อน

      Close both Anaconda Prompts and restart the process. Don't rebuild your project though. GPT-4 says it's a connection issue when creating, and sometimes a computer restart sorts the issue. Worked for me.

    • @guille8237
      @guille8237 หลายเดือนก่อน

      Open your TOML file and update it with the correct build version, then update the lock file.

  • @The_Gamer_Boi_2000
    @The_Gamer_Boi_2000 หลายเดือนก่อน

    whenever i try to install poetry on pipx it gives me this error "returned non-zero exit status 1."

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, just checking in if you resolved this issue? Just want to confirm that you are following the steps I use in the video to install poetry, please check from 6:28 in the video. I use a command prompt in admin mode to complete all these steps. From 7:36 we are back in the Anaconda and Anaconda PowerShell prompts. Also confirm you are using Python 3.11.xx for the Anaconda environment, otherwise you will get a bunch of build errors and failures. Let me know and thanks for reaching out.

    • @The_Gamer_Boi_2000
      @The_Gamer_Boi_2000 หลายเดือนก่อน

      @@stuffaboutstuff4045 im pretty sure i was doing those steps but im using webui now instead cuz its easier to do

  • @BetterEveryDay947
    @BetterEveryDay947 2 หลายเดือนก่อน +1

    can you make a vs code version?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, thanks for the idea; they release new versions so quickly, I will check how I can incorporate this in the next one. Thanks for reaching out.

  • @OscarPremium-ql5hh
    @OscarPremium-ql5hh 12 วันที่ผ่านมา

    How do I start it up again once I've finished all the steps in the video successfully? Just visit the address in the browser again?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 วันที่ผ่านมา

      Hi, you will have to activate your conda environment. Make sure you are in the project folder and launch Anaconda PowerShell again. Check the steps from 9:24 in the video. Let me know if you are up and running.

    • @OscarPremium-ql5hh
      @OscarPremium-ql5hh 3 วันที่ผ่านมา

      @@stuffaboutstuff4045 Wow, Thanks for your answer! Just amazing!

  • @user-jw1mz4et1e
    @user-jw1mz4et1e 2 หลายเดือนก่อน

    I installed it and it works, but it is very very slow to answer. Is it possible to speed it up?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, it is not the fastest with Ollama; the upside is it's relatively easy to get working. Should confidentiality not be an issue, using the OpenAI profile will increase speed exponentially. You could also build this locally if you have a proper GPU, but expect a more complicated install. Thanks for reaching out.

  • @mrxtreme005
    @mrxtreme005 2 หลายเดือนก่อน +1

    20 GB of space required?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, yes, if you load all the required SW. This ensures you don't get errors if you build the other non-Ollama options.

  • @vichondriasmaquilang4477
    @vichondriasmaquilang4477 21 วันที่ผ่านมา

    So confused, what is the purpose of installing MS Visual Studio? You didn't use it.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  20 วันที่ผ่านมา

      Hi, Visual Studio components are used in the background for compiling and building the programs. Hope the video helped and your PrivateGPT is up and running.

  • @firatguven6592
    @firatguven6592 หลายเดือนก่อน

    Thank you very much, it works like your previous guide for PrivateGPT 2.0. But compared to the previous 2.0, this one uploads files much more slowly, as if it wasn't slow enough already. With 2.0, all 32 threads of my CPU were working under 80% load during the upload process; you could see it was doing something important from the load. But now the CPU load is only around 5%, which takes considerably more time, because I guess the parsing nodes are now generating the embeddings much more slowly. This is unfortunately a deal breaker for me, since I have lots of huge PDF files which need to be uploaded. I cannot wait a week or more just for the upload. In the end a 4.0 version should be an improvement, but I cannot see any improvements here. Can somebody list the real improvements please, except Ollama, which for me is not a real improvement because version 2.0 also worked fine. I will switch back to 2.0, unless I can understand where the failure is?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน +1

      Hi, thanks for reaching out. The new version allows you to use numerous LLM backends. This video shows how to use Ollama just to make the install easier for most, and it's now the recommended option. The new version can still be built exactly like the previous one; if you had better performance using a local GPU and LlamaCPP you can still enable this as a profile. If you really want high-speed processing you can send it to OpenAI or one of the OpenAI-like options. Have a look at the backends you can enable for this version in the link below. Let me know if you come right.
      docs.privategpt.dev/manual/advanced-setup/llm-backends

    • @firatguven6592
      @firatguven6592 หลายเดือนก่อน

      @@stuffaboutstuff4045 Thanks for the advice. If I change anything in the backend it errors out, despite following the official manual and your explanation. If I set up both for Ollama then it works, but as mentioned the file upload is extremely slow. Now I found a solution by installing from scratch according to version 2.0 with LlamaCPP and Hugging Face embeddings, where I changed the ingest_mode from single to parallel and now it works much faster. There should be more options to increase the speed by increasing the batch size or worker counts. Since they did not work before, I will not change and corrupt the installation unless you can provide a manual on how to increase the embedding speed to the maximum, most probably with the help of the GPU like in chat. The GPU support in chat works well, but during embedding the GPU is not being used.

    • @firatguven6592
      @firatguven6592 หลายเดือนก่อน +1

      @@stuffaboutstuff4045 After changing to parallel the CPU utilization is at 100%, which explains the faster embedding. Since I have one of the fastest consumer CPUs, the result is now finally satisfying.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      @@firatguven6592 Awesome, glad you are running at acceptable speeds.

    • @firatguven6592
      @firatguven6592 หลายเดือนก่อน

      @stuffaboutstuff4045 In addition to that, I could change some parameters in settings.yaml with the help of an LLM. These are: batch size to 32 or 64, dimension from 384 to 512, device to cuda, and ingest_mode: parallel, which gave the most improvement. Now the embeddings are really fast. Thank you very much. I would also like to test the sagemaker mode, since I could not get that mode working. I will try it again later.
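      For reference, the ingestion-speed settings mentioned here live under the embedding section of settings.yaml and look roughly like this (a sketch only; key names and defaults vary by PrivateGPT version, so check your own settings files before editing):
      embedding:
        ingest_mode: parallel
        count_workers: 4
      The batch size, dimension and CUDA device tweaks above depend on the embedding backend in use, so treat those as backend-specific.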

  • @hasancifci1423
    @hasancifci1423 หลายเดือนก่อน +1

    Thanks! Do NOT start with the newest version of Python; it is not supported. If you did, uninstall it. If you have a problem with pipx install poetry, delete the pipx folder.

  • @FunkyZangel
    @FunkyZangel หลายเดือนก่อน

    Can I do this all completely offline? I have a computer that has no access to the internet. I want to see if i can download everything into a usb and then transfer it over to that computer. Can anyone help me please

    • @Whoisthelearner
      @Whoisthelearner 29 วันที่ผ่านมา

      I think you can once you have everything installed, at least that works for me

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  29 วันที่ผ่านมา

      Hi, as noted below, that's correct. Once installed you can disconnect the machine if you have the LLM local. Let me know if you come right.

    • @FunkyZangel
      @FunkyZangel 24 วันที่ผ่านมา

      @@stuffaboutstuff4045 Hi thanks for the reply. I am struggling a little understanding this. Do I have to download a portable version for everything or just a portable VSC? Meaning if I want the privategpt to work on another machine from the thumbdrive, do I just need to transfer the VSC files or must I transfer everything, such as git, anaconda, python etc?

  • @JeffreyMerilo
    @JeffreyMerilo หลายเดือนก่อน +1

    Great video! Thank you so much! Got it to work with version 5. How can we increase the tokens? I get this error File "C:\ProgramData\miniconda3\envs\privategpt\Lib\site-packages\llama_index\core\chat_engine\context.py", line 204, in stream_chat
    all_messages = prefix_messages + self._memory.get(
    ^^^^^^^^^^^^^^^^^
    File "C:\ProgramData\miniconda3\envs\privategpt\Lib\site-packages\llama_index\core\memory\chat_memory_buffer.py", line 109, in get
    raise ValueError("Initial token count exceeds token limit")

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, can you have a look at this post and see if it helps. Let me know if you come right.
      github.com/zylon-ai/private-gpt/issues/1701

  • @Rohin-gq7nx
    @Rohin-gq7nx 21 วันที่ผ่านมา

    im facing an error where it is not responding to any of my requests

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  20 วันที่ผ่านมา

      Hi, I take it everything is started and tested using the steps in the vid. Can you confirm your backend is up and running? Hope you got this resolved. Let me know.

  • @AstigsiPhilip
    @AstigsiPhilip 26 วันที่ผ่านมา

    Hi, can this PrivateGPT handle 70,000 PDF files?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  22 วันที่ผ่านมา

      Hi, I personally have not worked with massive datasets. I know some in the comments have. You might want to check out the link for bulk and batch ingestion.
      docs.privategpt.dev/manual/document-management/ingestion#bulk-local-ingestion
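      From that page, bulk ingestion is done outside the web UI with the bulk ingestion helper, roughly along these lines (treat the exact command and flags as an assumption and confirm them on the linked page):
      make ingest /path/to/folder -- --watch
      Ingesting from the command line like this scales far better than uploading 70,000 files one by one through the web UI.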

  • @alicelik77
    @alicelik77 2 หลายเดือนก่อน

    At 9:21 you opened a new Anaconda PowerShell prompt. Why did you need a new PowerShell prompt when you were already working in a PowerShell prompt?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน +1

      Hi, look carefully, I am in a normal Anaconda prompt at that stage and the next commands need to go into Anaconda PowerShell. 👨‍💻 Thanks for reaching out, hope the video helped..

  • @noneofbusiness9764
    @noneofbusiness9764 2 หลายเดือนก่อน +1

    What about a step by step linux installation?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน +1

      Hi, Thanks for the idea, let me look into that.

  • @VaporFever
    @VaporFever 2 หลายเดือนก่อน

    How can I add llama3?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, if you are using Ollama you can install it and test it out; I am currently downloading it to test.
      The 8B model can be installed on Ollama using ollama run llama3:8b, or you can install the 70B model with ollama run llama3:70b. Let me know if you get it working.

  • @SirajSherief
    @SirajSherief หลายเดือนก่อน

    Can we do this for Ubuntu machine ?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน +1

      Hi, yes you can, the packages and flow will be similar to the video but obviously following the Linux steps. You can check out what's involved in building on Linux by checking out the below link. Thanks for reaching out, let me know if you come right. 🔗docs.privategpt.dev/installation/getting-started/installation

    • @SirajSherief
      @SirajSherief หลายเดือนก่อน

      Thanks for your kind response. But now I'm facing a new problem while trying to run the private_gpt module:
      "TypeError: BertModel.__init__() got an unexpected keyword argument 'safe_serialization'"
      Please tell me how to resolve this error?

  • @Omnicypher001
    @Omnicypher001 2 หลายเดือนก่อน

    Using a Chrome browser to host a web app doesn't seem very private to me.

  • @anishkushwaha9973
    @anishkushwaha9973 2 หลายเดือนก่อน

    Not working, it's showing an error whatever prompt I'm giving.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, what error do you get? Let me know and maybe I can help you out.. Thanks!

    • @anishkushwaha9973
      @anishkushwaha9973 2 หลายเดือนก่อน

      @@stuffaboutstuff4045 It's showing: Error Collection make_this_parameterizable_per_api_call not found

  • @adamseng8514
    @adamseng8514 3 วันที่ผ่านมา

    I am getting this error every time I try to upload a file to ingest or when I type a message. Everything along the way installed normally and went well so far
    HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

  • @reaperking537
    @reaperking537 หลายเดือนก่อน

    private-gpt gives me blank answers. Any solution?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน +1

      Hi, can you confirm what LLM you are sending it to? Local Ollama like in the video? Are you getting no responses when you ingest docs and on the LLM Chat? Both not working? Anything happening in the terminal when it processes in the Web UI? Let me know and we can hopefully get you up and running.

    • @reaperking537
      @reaperking537 หลายเดือนก่อน

      @@stuffaboutstuff4045 I have difficulty with PROFILES="ollama" (LLM: ollama | Model: mistral). I followed the same steps indicated in the video. LLM Chat (no file context) doesn't work, it gives me blank responses; and Query files doesn't work either, it also gives me blank responses. The error I get in the terminal is the following: [WARNING ] llama_index.core.chat_engine.types - Encountered exception writing response to history: timed out

    • @reaperking537
      @reaperking537 หลายเดือนก่อน

      @@stuffaboutstuff4045 I have solved the problem by increasing the response timeout in the 'settings-ollama.yaml' file from 120s to 240s. Thanks for the well-explained tutorial, keep it up.
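      For reference, the relevant entry sits under the ollama section of settings-ollama.yaml and looks roughly like this (a sketch; confirm the key name in your version):
      ollama:
        request_timeout: 240.0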

  • @user-yr3xm1jk1q
    @user-yr3xm1jk1q หลายเดือนก่อน +1

    Does it support the Arabic language?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน +1

      Hi, you can have a look at these threads, you would need to have LLM support for the language. I hope they point you in the right direction.
      github.com/zylon-ai/private-gpt/issues/28
      github.com/zylon-ai/private-gpt/discussions/764

    • @user-yr3xm1jk1q
      @user-yr3xm1jk1q หลายเดือนก่อน

      @@stuffaboutstuff4045 🙏 thx

  • @JiuJitsuTech
    @JiuJitsuTech หลายเดือนก่อน

    To run git clone from the Anaconda Prompt, I had to install git with "conda install -c anaconda git". I was then able to run "git clone ...". Otherwise, the prompt window was just hanging for me when I tried to git clone.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Glad you are up and running. Thanks for sharing.

  • @methodssss
    @methodssss 24 วันที่ผ่านมา

    Running into an issue when chatting with PrivateGPT in the browser.
    Error
    HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

    • @methodssss
      @methodssss 24 วันที่ผ่านมา

      I am sorry, that was the error for querying files. The error I get when trying to use the LLM Chat is "'NoneType' object has no attribute 'split'"

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  22 วันที่ผ่านมา

      Hi, this is usually a document ingestion or documents not selected issue. Can you check out the below link and restart the model. Let me know if you came right with this.
      github.com/zylon-ai/private-gpt/issues/1566

  • @thakurajay999
    @thakurajay999 หลายเดือนก่อน

    Error
    Collection make_this_parameterizable_per_api_call not found

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, this issue would usually arise when the system does not see documents selected or ingested. Have a look at the below two posts. Let me know if resolved.
      github.com/ollama/ollama/issues/3052
      github.com/zylon-ai/private-gpt/issues/1334

  • @talatriaz
    @talatriaz 2 หลายเดือนก่อน +1

    Doesn't work for me - only difference is that I'm using Win 11. All versions of software installed are the same as in the example except for updated pip and poetry versions.
    Everything is smooth until I get to the very last step. After running make run I get the following output:
    poetry run python -m private_gpt
    Traceback (most recent call last):
    File "<frozen runpy>", line 198, in _run_module_as_main
    File "<frozen runpy>", line 88, in _run_code
    File "C:\pgpt\private-gpt\private_gpt\__main__.py", line 5, in <module>
    from private_gpt.main import app
    File "C:\pgpt\private-gpt\private_gpt\main.py", line 3, in <module>
    from private_gpt.di import global_injector
    File "C:\pgpt\private-gpt\private_gpt\di.py", line 3, in <module>
    from private_gpt.settings.settings import Settings, unsafe_typed_settings
    File "C:\pgpt\private-gpt\private_gpt\settings\settings.py", line 5, in <module>
    from private_gpt.settings.settings_loader import load_active_settings
    File "C:\pgpt\private-gpt\private_gpt\settings\settings_loader.py", line 9, in <module>
    from pydantic.v1.utils import deep_update, unique_list
    ModuleNotFoundError: No module named 'pydantic.v1'
    make: *** [run] Error 1
    Seems like the root cause is a missing Pydantic v1 module. I have checked using pip list, and pydantic 1.10.7 is clearly present. Problem with GNU Make???
    Has anyone else experienced this or is it just me?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, did you come right with this? Just checking: do you have the required SW and are you running everything in the correct terminals (CMD, Anaconda, Anaconda PowerShell etc.)? I usually also ensure I run in an admin mode terminal to avoid some issues. If you run make, can you confirm it's in the machine's path? After adding it to the path, ensure you open a new prompt window so it loads the path. Let me know if the above helps.

    • @talatriaz
      @talatriaz 2 หลายเดือนก่อน

      @@stuffaboutstuff4045 Apparently the problem was windows 11. I repeated the exact same steps on a Win 10 system and it worked perfectly.

  • @cookiedufour
    @cookiedufour หลายเดือนก่อน

    After executing "make run", i run into some problems : " ----LOGGING ERROR---- Traceback (most recent call last):
    File "C:\ProgramData\anaconda3\envs\privategpt\Lib\site-packages\injector\__init__.py", line 798, in get
    return self._context[key]
    ~~~~~~~~~~~~~^^^^^
    KeyError: During handling of the above exception, another exception occurred:
    Traceback (most recent call last):
    File "C:\ProgramData\anaconda3\envs\privategpt\Lib\site-packages\injector\__init__.py", line 798, in get
    return self._context[key]"
    etc, there is still a lot. I followed every instruction carefully so, I don't know from where comes the problem... pls help

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน +1

      Hi, can you confirm that when you set the environment variable you are doing this in a Anaconda PowerShell prompt in admin mode. Make sure your environment is activated in the PowerShell terminal, these are the steps from 9:17 into the video. Let me know if you come right. Thanks for reaching out.

    • @cookiedufour
      @cookiedufour หลายเดือนก่อน

      @@stuffaboutstuff4045 Yes, I opened a new Anaconda PowerShell prompt and ran it as admin. I am thinking of starting everything over again... What would you advise me to do? Uninstall everything and re-follow the steps of your video? Thanks for your answer!

  • @Cool_Monk-ey
    @Cool_Monk-ey หลายเดือนก่อน +1

    In the last step --- Logging error ---
    Traceback (most recent call last):
    File "C:\Users\1ub48\anaconda3\envs\privavtegpt\Lib\site-packages\injector\__init__.py", line 798, in get
    return self._context[key]
    ~~~~~~~~~~~~~^^^^^
    KeyError:
    I got this error and PrivateGPT didn't show up, please help me someone.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, just checking if you came right with this. Sounds like something is going wrong when you set the PGPT profile and execute make run. Can you confirm you are doing this in an admin mode Anaconda PowerShell prompt? Ensure the environment is active first and check the steps from 9:20 into the video. Let me know if resolved, thanks for reaching out.

    • @travs007
      @travs007 หลายเดือนก่อน

      @@stuffaboutstuff4045 I'm having the same problem access denied to mistral

  • @navaneethk7798
    @navaneethk7798 หลายเดือนก่อน

    (base) PS C:\WINDOWS\system32> conda activate privategpt
    (privategpt) PS C:\WINDOWS\system32> cd .\pgpt\
    (privategpt) PS C:\WINDOWS\system32\pgpt> cd .\private-gpt\
    (privategpt) PS C:\WINDOWS\system32\pgpt\private-gpt> $env:PGPT_PROFILES="ollama"
    (privategpt) PS C:\WINDOWS\system32\pgpt\private-gpt> make run
    make: *** No rule to make target 'run'. Stop. 🤥

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, just checking if you managed to resolve this? Did you follow all the make steps I use from about 5:25 in the video. Also just check the steps from about 8:05, I create my folder for the software in the root of the drive i.e. c:\pgpt. Just check those steps and confirm SW install location. Lastly make sure you load the $env variables in Admin mode Anaconda PowerShell prompt. Let me know if you were able to resolve..

  • @zackmathieu4829
    @zackmathieu4829 หลายเดือนก่อน

    I get an error after executing make run: KeyError:

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  หลายเดือนก่อน

      Hi, can you confirm you are setting the environment variable and using make run in an admin mode Anaconda PowerShell prompt? Check 9:16 onwards in the video. If the error persists, please confirm whether you are using Ollama like I do in the video or building for local Llama. Thanks for reaching out!

    • @zackmathieu4829
      @zackmathieu4829 หลายเดือนก่อน

      @@stuffaboutstuff4045 I've followed all of those steps correctly but I have found that after I run poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant", setuptools never completes installation no matter how long I wait.
      Also, even though I have completely uninstalled Python and then reinstalled only Python 3.11.0, when I check the version in the Anaconda Prompt it returns 3.11.9

  • @Cashemacom-ud8xb
    @Cashemacom-ud8xb 2 หลายเดือนก่อน

    How do I enable GPU for this?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 หลายเดือนก่อน

      Hi, if you follow the instructions and config for Ollama, Ollama will handle the GPU offload. Otherwise you have to build it fully local with Llama-CPP support. Let me know if you come right.