LangChain and Ollama: Build Your Personal Coding Assistant in 10 Minutes

Comments • 42

  • @AISoftwareDeveloper
    @AISoftwareDeveloper  several months ago +1

    Here is the source code repo: github.com/aidev9/tuts/tree/main/langchain-ollama

  • @arunbhati101
    @arunbhati101 several months ago +1

    Great explanation and an easy-to-understand example. Thanks for sharing your knowledge.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago

      Thank you @arunbhati101, I am glad you got something out of it. What videos would you like to see in the future?

  • @TimHoffeller
    @TimHoffeller 27 days ago +1

    Hi, pretty cool stuff and very helpful! You mentioned the RAG approach; a follow-up on it would be very cool 😊. Thanks a lot for your work!

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  27 days ago +1

      Hi @TimHoffeller, yes RAG with LangChain would provide a more scalable solution. Luckily, my next video will be covering that very topic with LangChain, Supabase and a local Ollama. Check back in the next few days and let me know your thoughts. Thank you for the comment.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  25 days ago +1

      @@TimHoffeller the RAG video is now available. Any feedback is appreciated.

  • @AaronBlox-h2t
    @AaronBlox-h2t several months ago +1

    Cool video, and the source code is great. Including the relevant URLs in the video would also be good. Thanks.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago

      Thanks, here are the links, also available in the video description now.
      github.com/pillarstudio/standards/blob/master/reactjs-guidelines.md
      gist.github.com/nlaffey/99fdb37c0ba286f38a0582564061dea8

  • @Zenith_pop
    @Zenith_pop several months ago +3

    One thing: bigger models need more memory, so VRAM becomes an issue and you need quantization, but then accuracy suffers at 8-bit or 4-bit and code errors creep in. Smaller models are useless because they aren't applicable to real-life projects. But good video.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago

      Yes, for bigger models, VRAM and a GPU can be highly beneficial. And you're right, bigger models will deliver better results for real-life projects. Thank you for the comment.
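The memory tradeoff described above can be put in rough numbers. A back-of-the-envelope sketch (weights only; real usage adds KV cache and runtime overhead, so treat these as lower bounds):

```python
# Rough VRAM needed just to hold model weights at a given quantization
# level; actual usage adds KV cache and runtime overhead on top.
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# An 8B-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gb(8, bits):.0f} GB")
```

This is why an 8B model that won't fit at 16-bit can run at 4-bit on a consumer GPU, at the cost of some accuracy.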

  • @adriangpuiu
    @adriangpuiu several months ago +1

    What would be nice is a meta agent that creates dynamic tools and reinserts them into the flow when needed.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago

      Yeah, that would be cool. Based on the user prompt, the meta agent creates its own tools and executes them as needed. Controlling the agent will be a beast

  • @umarkhan8787
    @umarkhan8787 several months ago

    Nice man 👍🏻, just wanted to know where I should append my tool/function response: in system or in tools 😅

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago

      You'd want to append the tool response into the messages array. You can then filter that array by instance type, passing ToolMessage as the filter. That will do the trick 👍

  • @genzprogrammer
    @genzprogrammer several months ago +1

    Thanks bro 🎉, perfect timing. I was looking for something like this.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago

      Thanks for the comment. What would you like to see next?

    • @genzprogrammer
    @genzprogrammer several months ago

    @@AISoftwareDeveloper How can we load a complete codebase from a git repo and implement RAG for that codebase?

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago

      @@genzprogrammer Easy. What would the RAG do - are you thinking a chatbot to talk to the codebase or something more advanced, like artifacts generation?

    • @genzprogrammer
    @genzprogrammer several months ago

    @@AISoftwareDeveloper Thinking of building something like a chat that takes my query and searches the vector DB for which file I should change and what I should change.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago

      @@genzprogrammer That's a great idea. So it would tell you which files need to be updated based on a feature change you're considering. Can you give me one or two examples of queries you'd ask?

  • @Aksafan
    @Aksafan several months ago +4

    Hey, this was surprisingly helpful!! Thank you so much!
    Can I ask for a source code please?

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago +1

      Here you go: github.com/aidev9/tuts/tree/main/langchain-ollama

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago

      Thanks for the comment. What topic would you like to see next?

    • @Aksafan
      @Aksafan several months ago

      @@AISoftwareDeveloper Some tuning would be great, because the quality of the code (those React components) is pretty bad, tbh. I asked several frontend engineers to look at it and it was not the best.
      Maybe it's because of the models I used (llama 3.2-3b and 3.1-7b).
      Also, it would be great to know how to prepare a proper guideline for a model to use, because links (even to a GitHub raw MD file) don't work when there are a bunch of other links on the page.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago

      Here is the repo: github.com/aidev9/tuts/tree/main/langchain-ollama

  • @AlexK-xb4co
    @AlexK-xb4co several months ago +1

    FYI, Ollama models are configured with a 2k context size by default.

  • @sathishbabu3322
    @sathishbabu3322 several months ago +1

    Awesome video. Can you show me how to convert this code to a .py file so I can run it on my local machine in the Visual Studio Code app and try other compatible models?

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago +1

      Thank you for the comment. Yes, you can save a Jupyter notebook and run it as a Python script locally. Watch this to learn how: Jupyter Notebooks in VS Code on MacOS
      th-cam.com/video/3pbFb7X2ObU/w-d-xo.html
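The notebook-to-script conversion can also be done programmatically, assuming the `nbformat` and `nbconvert` packages are installed (the `jupyter nbconvert --to script notebook.ipynb` CLI does the same for a file on disk). The notebook here is built in memory as a stand-in for the tutorial's .ipynb file, and "assistant.py" is an arbitrary name:

```python
import nbformat
from nbconvert import PythonExporter

# A tiny in-memory notebook standing in for the tutorial's .ipynb file;
# for a real file, use nbformat.read("notebook.ipynb", as_version=4).
nb = nbformat.v4.new_notebook()
nb.cells.append(nbformat.v4.new_code_cell("print('hello from the notebook')"))

# Export the code cells to plain Python source and write a .py script.
source, _ = PythonExporter().from_notebook_node(nb)
with open("assistant.py", "w") as f:
    f.write(source)
```

The resulting script runs with a plain `python assistant.py`, on Windows or macOS alike.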

    • @sathishbabu3322
      @sathishbabu3322 several months ago +1

      @@AISoftwareDeveloper I am using a Windows machine; hope that video covers it as well. And thank you so much for the quick response. Really appreciated.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago +1

      The video doesn't cover Windows, but once you have VS Code installed, running Python is the same. I hope that helps. Thank you for your comment.

  • @MubashirullahD
    @MubashirullahD several months ago +2

    Build it in 10 minutes, waste days in the future dealing with dependency issues

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago +2

      Not to worry, mate. We’ll have AI to fix all dependency issues 😉

    • @codelinx
      @codelinx several months ago +2

      Sounds like someone has SDE …. Small developer energy😂😂🫰

    • @MubashirullahD
      @MubashirullahD several months ago

      @codelinx
      XD

  • @agraciag
    @agraciag several months ago +1

    Next step? Using voice to give it the prompt :-)

  • @NanoGi-lt5fc
    @NanoGi-lt5fc several months ago

    I have one question: can I somehow create a GitHub Copilot that just gives me suggestions on what I need to do, using this?

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  several months ago

      Hey, definitely. You can use this to create your own GitHub Copilot as a VS Code extension. Here's how to get started: code.visualstudio.com/api/get-started/your-first-extension

    • @NanoGi-lt5fc
      @NanoGi-lt5fc several months ago +1

      @@AISoftwareDeveloper Thanks sir, I will try this. Will it give me suggestions like Copilot does? And can I publish that extension as well?