Reliable, fully local RAG agents with LLaMA3.2-3b

  • Published Nov 17, 2024

Comments • 44

  • @homeandr1 · months ago · +11

    Great explanation! It would be great to do one more tutorial using multimodal local RAG, covering different chunk types like tables, text, and images, where you can use unstructured, Chroma, and MultiVectorRetriever completely locally.

  • @leonvanzyl · months ago · +24

    The tutorial was "fully local" up until the moment you introduced Tavily 😜😉.
    Excellent tutorial Lance 👍

    • @sergeisotnik · months ago

      Any internet search, by definition, is no longer local. However, the embeddings here come from a third-party service (where only the first 1M tokens are free).

    • @starbuck1002 · months ago · +4

      @sergeisotnik He's using the nomic-embed-text embedding model locally, so there is no token cap at all.

    • @sergeisotnik · months ago · +7

      @starbuck1002 It looks like you're right. I saw that `from langchain_nomic.embeddings import NomicEmbeddings` is used, which usually means an API call. But in this case, the initialization is done with the parameter `inference_mode="local"`. I didn't check the documentation, but it seems that in this case the model is downloaded from HuggingFace and used for local inference. So you're right, and I was wrong.
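
      For reference, a minimal sketch of that local setup (the model name nomic-embed-text-v1.5 is an assumption; use whatever the notebook specifies):

      ```python
      from langchain_nomic.embeddings import NomicEmbeddings

      # inference_mode="local" downloads the weights and runs embedding
      # on-device, so no hosted-API token limits apply.
      embedder = NomicEmbeddings(
          model="nomic-embed-text-v1.5",
          inference_mode="local",
      )

      vector = embedder.embed_query("What is corrective RAG?")
      print(len(vector))  # dimensionality of the local embedding
      ```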

  • @ytaccount9859 · months ago · +3

    Awesome stuff. LangGraph is a nice framework. Stoked to build with it, working through the course now!

  • @ravivarman7291 · months ago · +1

    Amazing session, and the content was explained very nicely in just 30 minutes; thanks so much!

  • @joxxen · months ago

    You are amazing, as always. Thank you for sharing!

  • @SavvasMohito · months ago

    That's a great tutorial that shows the power of LangGraph. It's impressive you can now do this locally with decent results. Thank you!

  • @adriangpuiu · months ago · +4

    @lance, please add the LangGraph documentation to the chat. The community will appreciate it. Let me know what you think.

  • @sunderrajan6172 · months ago

    Beautifully done; thanks

  • @becavas · months ago · +4

    Why did you use llama3.2:3b-instruct-fp16 instead of llama3.2:3b?

  • @VictorDykstra · months ago

    Very well explained.😊

  • @henson2k · 25 days ago · +1

    You make the LLM do all the hard work of filtering candidates.

  • @thepeoplesailab · months ago

    Very informative ❤❤

  • @davesabra4320 · 24 days ago

    Thanks, it is indeed very cool. Last time you used 32 GB of memory; do you think this will run with 16 GB?

  • @Togowalla · months ago

    Great video. What tool did you use to illustrate the nodes and edges in your notebook?

  • @marcogarciavanbijsterveld6178 · 21 days ago · +1

    I'm a med student interested in experimenting with the following: I'd like to have several PDFs (entire medical books) from which I can ask a question and receive a factually accurate, contextually appropriate answer, thereby avoiding online searches. I understand this could potentially work using your method (omitting web searches), but am I correct in thinking this would require a resource-intensive, repeated search process?
    For example, if I ask a question about heart failure, the model would need to sift through each book and chapter until it finds the relevant content. This would likely be time-consuming initially. However, if I then ask a different question, say on treating systemic infections, the model would go through the entire set of books and chapters again, rather than narrowing down based on previous findings.
    Is there a way for the system to 'learn' where to locate information after several searches? Ideally, after numerous queries, it would be able to access the most relevant information efficiently without needing to reprocess the entire dataset each time, while maintaining factual accuracy and avoiding hallucinations.

    • @JesterOnCrack · 4 hours ago

      I'll take a minute to try and answer your question to the best of my ability.
      Basically, what you are describing are ideas that seem sound for your specific application, but are not useful everywhere. Whenever you restrict search results, there is a chance you're not finding the one correct answer you needed. Speaking from experience, even a tiny chance of not finding what you need is enough to deter many customers.
      Of course, your system would gain efficiency in exchange, completing queries more quickly.
      Bottom line is, there are ways to achieve this with clever data and AI engineering. I don't think there is a single straightforward fix to your problem, though.
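
      On the reprocessing point: with a vector-store index, the books are split and embedded once and persisted to disk; each later question only embeds the query and runs a similarity search, so the corpus isn't re-read per question. A rough sketch, assuming the local nomic embeddings discussed above and placeholder file paths:

      ```python
      from langchain_chroma import Chroma
      from langchain_community.document_loaders import PyPDFLoader
      from langchain_nomic.embeddings import NomicEmbeddings
      from langchain_text_splitters import RecursiveCharacterTextSplitter

      embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5", inference_mode="local")

      # One-time indexing pass: load, split, embed, and persist the books.
      docs = []
      for path in ["cardiology.pdf", "infectious_disease.pdf"]:  # placeholder paths
          docs.extend(PyPDFLoader(path).load())
      chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)
      Chroma.from_documents(chunks, embeddings, persist_directory="./med_index")

      # Later sessions reopen the persisted index; only the question gets embedded.
      retriever = Chroma(
          persist_directory="./med_index", embedding_function=embeddings
      ).as_retriever(search_kwargs={"k": 4})
      relevant_chunks = retriever.invoke("What is the first-line treatment for heart failure?")
      ```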

  • @developer-h6e · 20 days ago

    Is it possible to make an agent that, when provided with a few hundred links, extracts the info from all of them and stores it?

  • @beowes · months ago

    Question: you have operator.add on loop_step, but then increment loop_step in the state too… am I wrong in thinking that it would then be incorrect?
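
    For reference, the reducer semantics (a generic sketch, not necessarily the video's exact code): with operator.add annotated on the field, whatever a node returns for loop_step is added to the current value rather than overwriting it, so a node should return only the increment; returning an already-incremented total would double-count.

    ```python
    import operator
    from typing import Annotated
    from typing_extensions import TypedDict

    class GraphState(TypedDict):
        question: str
        # Reducer: a node's returned loop_step is ADDED to the existing value,
        # not used to overwrite it.
        loop_step: Annotated[int, operator.add]

    def generate(state: GraphState) -> dict:
        # Return only the increment; LangGraph computes old_value + 1.
        # Returning state["loop_step"] + 1 here would double-count.
        return {"loop_step": 1}
    ```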

  • @user-wr4yl7tx3w · months ago

    Could you consider doing an example of the contextual retrieval approach that Anthropic recently introduced?

  • @sidnath7336 · months ago

    If different tools require different keyword arguments, how can these be passed in for the agent to access?

  • @hari8568 · months ago

    Is there an elegant way to handle recursion errors?
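
    One common pattern, sketched with a deliberately looping toy graph (node names and the limit are illustrative): set recursion_limit in the run config and catch GraphRecursionError so the app can return a fallback instead of crashing.

    ```python
    from typing_extensions import TypedDict
    from langgraph.errors import GraphRecursionError
    from langgraph.graph import StateGraph, START, END

    class State(TypedDict):
        tries: int

    def step(state: State) -> dict:
        return {"tries": state["tries"] + 1}

    def keep_going(state: State) -> str:
        # Always loops here, to trigger the guard rail on purpose;
        # a real router would return END once the answer is good enough.
        return "step" if state["tries"] < 999 else END

    builder = StateGraph(State)
    builder.add_node("step", step)
    builder.add_edge(START, "step")
    builder.add_conditional_edges("step", keep_going)
    app = builder.compile()

    try:
        app.invoke({"tries": 0}, config={"recursion_limit": 10})
    except GraphRecursionError:
        # Hit the step cap: fall back gracefully instead of crashing.
        print("Stopped after hitting the recursion limit.")
    ```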

  • @AlexEllis · months ago

    Thanks for the video and sample putting all these parts together. What did you use to draw the diagram at the beginning of the video? Was it generated by a DSL/config?

    • @blakenator123 · months ago

      Looks like Excalidraw.

  • @johnrogers3315 · months ago

    Great tutorial, thank you

  • @andresmauriciogomezr3 · months ago

    Thank you.

  • @ericlees5534 · 21 days ago

    Why does he make it look so easy...

  • @fernandobarros9834 · months ago

    Great tutorial! Is it necessary to add a prompt format?

    • @skaternationable · months ago · +1

      Using PromptTemplate/ChatPromptTemplate works as well. The .format here seems to be equivalent to the `input_variables` param of those two classes.
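
      A small sketch of the two equivalent styles (the prompt wording is illustrative):

      ```python
      from langchain_core.prompts import ChatPromptTemplate

      template = "Answer using only this context:\n\n{context}\n\nQuestion: {question}"

      # Style 1: plain Python string formatting, as in the video.
      prompt_str = template.format(
          context="LangGraph builds stateful agent graphs.",
          question="What is LangGraph for?",
      )

      # Style 2: ChatPromptTemplate; the same placeholders become its input variables.
      prompt = ChatPromptTemplate.from_template(template)
      prompt_value = prompt.invoke(
          {"context": "LangGraph builds stateful agent graphs.",
           "question": "What is LangGraph for?"}
      )
      ```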

    • @fernandobarros9834 · months ago

      @skaternationable Thanks!

  • @jamie_SF · months ago

    Awesome

  • @serychristianrenaud · months ago

    Thanks!

  • @ephimp3189 · months ago

    Is it possible to add a "fact checker" method? What if the answer is obtained from a document that gives false information? It would technically answer the question, it just wouldn't be true.
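
    The graph in the video already grades whether the generation is grounded in the retrieved documents; a rough sketch of that general pattern with a local model in JSON mode is below (the prompt wording and model tag are assumptions). Note that a grounding check only catches claims unsupported by the sources; it can't tell you that a source itself is wrong.

    ```python
    import json
    from langchain_ollama import ChatOllama

    # JSON mode keeps the grader's output machine-parseable.
    llm = ChatOllama(model="llama3.2:3b-instruct-fp16", temperature=0, format="json")

    def is_grounded(documents: str, generation: str) -> bool:
        """Return True if the answer is supported by the retrieved documents."""
        prompt = (
            "You are a grader checking whether an answer is grounded in the facts below.\n\n"
            f"FACTS:\n{documents}\n\nANSWER:\n{generation}\n\n"
            'Reply with JSON: {"binary_score": "yes"} or {"binary_score": "no"}.'
        )
        result = llm.invoke(prompt)
        return json.loads(result.content).get("binary_score") == "yes"
    ```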

  • @aiamfree · months ago

    it's sooooo fast!

  • @ghostwhowalks2324 · months ago

    Amazing stuff that can be done with a few lines of code. Disruption coming everywhere.

  • @HarmonySolo · months ago · +10

    LangGraph is too complicated; you have to implement State, Node, etc. I would prefer to implement the agent workflow myself, which is much easier; at least I don't need to learn how to use LangGraph.
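
    For scale, a hedged sketch of roughly what the State/Node/edge boilerplate amounts to for a two-node flow (names and logic are illustrative, not the video's code):

    ```python
    from typing_extensions import TypedDict
    from langgraph.graph import StateGraph, START, END

    class State(TypedDict):
        question: str
        answer: str

    def retrieve(state: State) -> dict:
        # Stand-in for a retriever call.
        return {"answer": f"context for: {state['question']}"}

    def generate(state: State) -> dict:
        # Stand-in for the LLM call.
        return {"answer": f"answer based on: {state['answer']}"}

    builder = StateGraph(State)
    builder.add_node("retrieve", retrieve)
    builder.add_node("generate", generate)
    builder.add_edge(START, "retrieve")
    builder.add_edge("retrieve", "generate")
    builder.add_edge("generate", END)
    app = builder.compile()

    print(app.invoke({"question": "What does LangGraph add?"})["answer"])
    ```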

    • @generatiacloud · 21 days ago · +1

      Any repo to share?

    • @RazorCXTechnologies · 20 days ago

      Excellent tutorial! An easier option is to use n8n instead, because it has LangChain integration with AI agents built in and requires almost no code to achieve the same functionality. n8n also has an automatic chatbot interface and webhooks.

    • @kgro353 · 12 days ago

      Langflow is the best solution.

  • @_Learn_With_Me_EraofAI · months ago

    Unable to access ChatOllama.

  • @HELLODIMENSION · months ago

    You have no idea how much you saved me 😂 Salute 🫡 Thank you.