Llama-3 🦙 with LocalGPT: Chat with YOUR Documents in Private

  • Published on Jun 14, 2024
  • In this video, we will look at how to start using Llama-3 with localGPT to chat with your documents locally and privately.
    🦾 Discord: / discord
    ☕ Buy me a Coffee: ko-fi.com/promptengineering
    |🔴 Patreon: / promptengineering
    💼Consulting: calendly.com/engineerprompt/c...
    📧 Business Contact: engineerprompt@gmail.com
    Become Member: tinyurl.com/y5h28s6h
    💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
    Signup for Advanced RAG:
    tally.so/r/3y9bb0
    LINKS:
    Github: github.com/PromtEngineer/loca...
    TIMESTAMPS:
    [00:00] LLAMA-3 with LocalGPT
    [00:22] Setting Up the Environment
    [02:01] Installing Required Packages
    [03:22] Accessing Gated Models
    [05:45] Ingesting Documents and Running LocalGPT
    All Interesting Videos:
    Everything LangChain: • LangChain
    Everything LLM: • Large Language Models
    Everything Midjourney: • MidJourney Tutorials
    AI Image Generation: • AI Image Generation Tu...
  • Science & Technology

Comments • 40

  • @engineerprompt
    @engineerprompt  15 days ago

    If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag

  • @engineerprompt
    @engineerprompt  a month ago +1

    Want to learn RAG beyond basics? Make sure to sign up here: tally.so/r/3y9bb0

  • @deepharia4209
    @deepharia4209 a month ago

    OK, so I have Windows with a 6 GB VRAM GPU and around 64 GB of regular RAM. Which LLM could I run locally? I also need a UI for the text prompt so I can chat easily with the bot, with functionality such as text-to-text, speech, video, etc. Please tell me, sir.

  • @NovPiseth
    @NovPiseth a month ago

    Hello, thanks for the great video, it helped me a lot with this. Could you help me add pandas and PandasAI? That would help me analyze the data from Excel and/or CSV files. Thanks

  • @zahidahmad1894
    @zahidahmad1894 a month ago

    I want a specific conversational chatbot with a very small amount of data. How can I do it?

  • @TraveleroftheSoul7674
    @TraveleroftheSoul7674 a month ago

    There is a problem in the code. Even when I ingest new files, it still gives answers based on the last file I deleted and makes a mess. How do I handle this? I tried different prompts but it's not working for me.

  • @o1ecypher
    @o1ecypher a month ago +1

    A .exe or a GUI for Windows would be nice, something Gradio-based like Stable Diffusion, please.

  • @thegooddoctor6719
    @thegooddoctor6719 a month ago +3

    By far, LocalGPT is the most robust RAG system out there - thank you. But I'm running it on an i9-13900/4090 GPU system - are there any plans to make the RAG system a bit faster? It can take up to 5 minutes to come back with a response... Thanks again - very cool.

    • @engineerprompt
      @engineerprompt  a month ago +1

      Yes, I am experimenting with using Ollama for the LLM and I think that will increase the speed. Working on major updates, stay tuned :) (A rough sketch of calling Ollama follows this thread.)

    • @laalbujhakkar
      @laalbujhakkar a month ago +2

      On an M2 MBP with 16 GB, Ollama + Llama 3 8B + AnythingLLM is returning answers in seconds…

    • @thegooddoctor6719
      @thegooddoctor6719 a month ago

      @laalbujhakkar Then again, I'm having it search 300 MB of documents...
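      A minimal sketch of what calling a local Ollama server could look like, assuming Ollama is installed, running on its default port, and has a llama3 model pulled; this only illustrates the direction mentioned above, not localGPT's actual integration.

          # Sketch: query a local Ollama server's /api/generate endpoint.
          # Assumes Ollama runs at the default localhost:11434 and `ollama pull llama3` was done.
          import json
          import urllib.request

          payload = {
              "model": "llama3",  # assumed locally pulled model tag
              "prompt": "Summarize retrieval-augmented generation in one sentence.",
              "stream": False,    # single JSON response instead of a stream
          }
          req = urllib.request.Request(
              "http://localhost:11434/api/generate",
              data=json.dumps(payload).encode("utf-8"),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              print(json.loads(resp.read())["response"])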

  • @kingfunny4821
    @kingfunny4821 a month ago +1

    Can I use this offline?
    And can I save the conversation so that I can refer to it after a period of time or when creating a new conversation?

    • @sauravmukherjeecom
      @sauravmukherjeecom a month ago +1

      Yes.
      For memory, you will have to send the past conversation as context. Try looking into one of the RoPE-trained models with a longer context length.

    • @bobby-and2crows
      @bobby-and2crows a month ago

      Yeah fella

    • @engineerprompt
      @engineerprompt  a month ago +2

      This is for offline use. localGPT has a flag, save_qa, that will enable you to save your conversations, and you can load them later.
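      As a rough illustration of reloading those saved conversations: assuming save_qa writes question/answer rows to a CSV under local_chat_history (the path and column layout here are assumptions; check the repo for the actual format), they could be read back with a few lines of Python.

          # Sketch: read back Q&A rows saved by localGPT when run with the save_qa flag.
          # The file path and column layout are assumptions; check the repo for the actual format.
          import csv

          with open("local_chat_history/qa_log.csv", newline="", encoding="utf-8") as f:
              for row in csv.reader(f):
                  # Each row is assumed to hold one question/answer pair (plus possibly a timestamp).
                  print(" | ".join(row))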

  • @pablolbrown
    @pablolbrown 25 days ago

    Any idea when support for Apple Silicon M3 is coming?

    • @engineerprompt
      @engineerprompt  24 days ago

      It already supports Apple Silicon. Make sure you install the llama-cpp-python build correctly; instructions are in the README.
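      As a quick sanity check after installing, here is a small sketch of loading a GGUF model with llama-cpp-python and offloading all layers to the GPU (Metal on Apple Silicon, provided the package was built with Metal support); the model path is a placeholder.

          # Sketch: verify GPU (Metal) offload with llama-cpp-python on Apple Silicon.
          # Assumes the Metal build of llama-cpp-python; the model path is a placeholder.
          from llama_cpp import Llama

          llm = Llama(
              model_path="models/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # placeholder GGUF file
              n_gpu_layers=-1,  # offload all layers; the startup log mentions Metal if the build is correct
              n_ctx=4096,
              verbose=True,
          )
          out = llm("Say hello in one short sentence.", max_tokens=32)
          print(out["choices"][0]["text"])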

  • @azizjaffrey123
    @azizjaffrey123 a month ago

    Please keep this code version available for future use. If you update the code and people can't find the code from a video, they skip it, which I personally did with your old LocalGPT video before starting to watch this one. The old code was compatible with my GPU, but I can't clone it since that version no longer exists.

  • @colosys
    @colosys a month ago

    Could you help me configure localGPT with pgvector embeddings? :$ I'm seriously struggling

  • @zahidahmad1894
    @zahidahmad1894 a month ago

    4 GB GPU, 16 GB RAM. Will Llama 3 work fine?

  • @alexanderlevakin9001
    @alexanderlevakin9001 a month ago +2

    May I use Llama 3 with languages other than English?

    • @sauravmukherjeecom
      @sauravmukherjeecom a month ago +3

      Yes, you can. Around 5 or 10 percent of the total training data (I forget which) is in languages other than English, which is close to the total amount of training data for Llama 2.

    • @engineerprompt
      @engineerprompt  a month ago +1

      Yes, you can, as pointed out. You also want to make sure to use a multilingual embedding model.
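      As a small illustration of the multilingual-embedding point (assuming sentence-transformers is installed; the model name is just one example of a multilingual model, not a recommendation from the video):

          # Sketch: embed a non-English query with a multilingual model via sentence-transformers.
          # The model choice is an example; swap in whichever multilingual embedding model you prefer.
          from sentence_transformers import SentenceTransformer

          model = SentenceTransformer("intfloat/multilingual-e5-large")
          # e5-style models expect a "query: " / "passage: " prefix.
          vec = model.encode("query: ¿Cuáles son los plazos de entrega?")
          print(vec.shape)  # (1024,) for this model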

  • @Player-oz2nk
    @Player-oz2nk a month ago

    Very interested in how to correctly ingest CSV files, and in the supported formats and limitations.

    • @sauravmukherjeecom
      @sauravmukherjeecom a month ago

      CSVs are tricky. You can either add the data to a database and then query it, or create text chunks out of it (a rough sketch of the chunking route follows this thread).

    • @Player-oz2nk
      @Player-oz2nk a month ago

      @sauravmukherjeecom Assuming that for larger CSVs importing directly into a DB would make more sense, and for smaller files we could chunk.
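      A rough sketch of the chunking route mentioned above (turning CSV rows into text chunks before ingestion), assuming pandas is available; the file name, chunk size, and row formatting are illustrative.

          # Sketch: convert CSV rows into plain-text chunks that a RAG ingestor can index.
          # Purely illustrative; adjust the chunk size and row formatting to your data.
          import pandas as pd

          df = pd.read_csv("data/example.csv")  # placeholder file
          rows_per_chunk = 20

          chunks = []
          for start in range(0, len(df), rows_per_chunk):
              block = df.iloc[start:start + rows_per_chunk]
              # Render each row as "column: value" pairs so the text keeps its structure.
              lines = [", ".join(f"{col}: {val}" for col, val in row.items())
                       for _, row in block.iterrows()]
              chunks.append("\n".join(lines))

          print(f"{len(chunks)} chunks ready for ingestion")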

  • @adityamishra611
    @adityamishra611 20 days ago

    I am getting this error: "You are trying to offload the whole model to the disk."

  • @shaonsikder556
    @shaonsikder556 a month ago

    Which screen recorder do you use?

  • @Felipe_7076
    @Felipe_7076 a month ago

    I'm asking in Portuguese, but the answer comes out in English? Thanks

    • @engineerprompt
      @engineerprompt  a month ago

      Are the documents in Portuguese? You might have to tell the model to generate answers in Portuguese.
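      One way to do that is to bake the language instruction into the prompt itself; a minimal sketch (the wording and template are illustrative, not localGPT's exact prompt):

          # Sketch: steer the answer language through the system prompt.
          # The template text is illustrative; adapt it to the prompt format your model expects.
          system_prompt = (
              "You are a helpful assistant. Use the provided context to answer the question. "
              "Always answer in Portuguese, even if the context or question is in another language."
          )

          def build_prompt(context: str, question: str) -> str:
              return f"{system_prompt}\n\nContexto:\n{context}\n\nPergunta: {question}\nResposta:"

          print(build_prompt("O prazo de entrega é de 10 dias.", "Qual é o prazo de entrega?"))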

  • @FranchGuy
    @FranchGuy a month ago

    Hi, is there a way to contact you for a private project?

    • @engineerprompt
      @engineerprompt  a month ago

      There is a link in the video description, or email me at engineerprompt at gmail.

  • @EDRM-my5rd
    @EDRM-my5rd 27 days ago

    I tested the ingest and query flow with the PDF edition of Financial Accounting: International Financial Reporting Standards, Eleventh Edition, using default parameters, and the answers were 80% wrong, particularly with sample journal entries from the context:
    > Question:
    provide example of VAT journal entries
    > Answer
    * The sales revenue is recorded as a debit to the "Sales Revenue" account, which increases the company's assets.

  • @user-ut8zv6xr7t
    @user-ut8zv6xr7t a month ago +1

    Why use this over something like AnythingLLM?

    • @engineerprompt
      @engineerprompt  a month ago

      They solve the same problem. My goal with localGPT is for it to be a framework for testing different RAG components like Lego blocks.

  • @kunalr_ai
    @kunalr_ai a month ago

    😂 I don't understand anything... where do I start?

    • @engineerprompt
      @engineerprompt  a month ago

      There is a playlist on localGPT on the channel. That will be a good starting point :)