LangChain Templates Tutorial: Building Production-Ready LLM Apps with LangServe

  • Published on 30 Sep 2024
  • In this LangChain Templates Tutorial, we dive deep into building production-ready LLM (Large Language Model) applications with LangServe. 🚀
    📌 What You'll Learn:
    1. Discover the Power of LangChain Templates: Learn what LangChain Templates are and how they simplify LLM application development.
    2. Installation and Execution: Get hands-on guidance on installing and running LangChain Templates, making your development process smooth.
    3. Unleash the Potential of LangServe: Explore LangServe, a tool that helps developers effortlessly deploy LangChain runnables and chains as a REST API (see the sketch below this list).
    4. LangServe Playground Exploration: Take a tour of the LangServe Playground and delve into its endpoints, including invoke and batch, to unlock advanced capabilities.
    5. Customize Templates for Your Needs: Learn how to tailor LangChain Templates to your custom use cases, making them perfectly suited to your project requirements.
    6. Loading Data with Ease: Discover how to load documents using URLs and data directories to seamlessly integrate your data into your LangChain Templates.
    7. Python Client Access: Explore how to access deployed templates in a Python client application, enabling you to interact with your LLM apps effortlessly.
    Get ready to supercharge your LLM app development with LangChain Templates and LangServe. Watch the tutorial now and level up your skills! 🔥📚 #LLM #LangChain #LangServe #tutorial
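    To make points 3, 4, and 7 concrete, here is a minimal LangServe sketch (not the exact template from the video; the prompt, model, route path, and port are illustrative assumptions) of exposing a chain as a REST API with add_routes, which is what provides the invoke, batch, stream, and playground endpoints:
    ```python
    # Minimal LangServe server sketch (assumes langserve, langchain-openai, fastapi,
    # and uvicorn are installed and OPENAI_API_KEY is set).
    from fastapi import FastAPI
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI
    from langserve import add_routes

    app = FastAPI(title="LangChain Template Demo")

    # Illustrative chain; a real template ships its own prompt and model.
    prompt = ChatPromptTemplate.from_template("Summarize this text: {text}")
    chain = prompt | ChatOpenAI(model="gpt-3.5-turbo")

    # add_routes exposes /summarize/invoke, /summarize/batch, /summarize/stream
    # and the interactive /summarize/playground UI on the FastAPI app.
    add_routes(app, chain, path="/summarize")

    if __name__ == "__main__":
        import uvicorn

        uvicorn.run(app, host="0.0.0.0", port=8000)
    ```
    And a sketch of the Python client side (point 7), calling the deployed chain with RemoteRunnable; the URL assumes the local dev server above:
    ```python
    from langserve import RemoteRunnable

    # RemoteRunnable gives the deployed chain the same .invoke/.batch interface
    # as a local runnable.
    chain = RemoteRunnable("http://localhost:8000/summarize/")
    print(chain.invoke({"text": "LangServe exposes chains as REST endpoints."}))
    print(chain.batch([{"text": "First document."}, {"text": "Second document."}]))
    ```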
    If you're curious about the latest in AI technology, I invite you to visit my project, AI Demos, at www.aidemos.com/. It's a rich resource offering a wide array of video demos showcasing the most advanced AI tools.
    For even more in-depth exploration, be sure to visit my TH-cam channel at /@aidemos.futuresmart. Here, you'll find a wealth of content that delves into the exciting future of AI and its various applications.
    🚀 Top Rated Plus Data Science Freelancer with 8+ years of experience, specializing in NLP and Back-End Development. Founder of FutureSmart AI, helping clients build custom AI NLP applications using cutting-edge models and techniques. Former Lead Data Scientist at Oracle, primarily working on NLP and MLOps.
    💡 As a Freelancer on Upwork, I have earned over $100K with a 100% Job Success rate, creating custom NLP solutions using GPT-3, ChatGPT, GPT-4, and Hugging Face Transformers. Expert in building applications involving semantic search, sentence transformers, vector databases, and more.

Comments • 15

  • @joseluisbeltramone599 · 9 months ago +2

    Very good video. Thanks a lot for making it.

    • @FutureSmartAI · 9 months ago

      Glad you liked it!

  • @kaikai7702 · 6 months ago

    How to add memory in a LangChain server?

    • @FutureSmartAI · 6 months ago

      In this video I have shown how to add memory to a chain: th-cam.com/video/fss6CrmQU2Y/w-d-xo.htmlsi=2QWgHBkJ7eutw-vm
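      For reference, a hedged sketch of one way to add memory to a chain served with LangServe, using RunnableWithMessageHistory (a generic pattern, not necessarily the exact code from the linked video; the in-memory store and session handling are illustrative):
      ```python
      from fastapi import FastAPI
      from langchain_core.chat_history import InMemoryChatMessageHistory
      from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
      from langchain_core.runnables.history import RunnableWithMessageHistory
      from langchain_openai import ChatOpenAI
      from langserve import add_routes

      # In-memory store keyed by session_id; swap for Redis or a database in production.
      store = {}

      def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
          if session_id not in store:
              store[session_id] = InMemoryChatMessageHistory()
          return store[session_id]

      prompt = ChatPromptTemplate.from_messages([
          ("system", "You are a helpful assistant."),
          MessagesPlaceholder(variable_name="history"),
          ("human", "{input}"),
      ])
      chain = prompt | ChatOpenAI(model="gpt-3.5-turbo")

      chain_with_memory = RunnableWithMessageHistory(
          chain,
          get_session_history,
          input_messages_key="input",
          history_messages_key="history",
      )

      app = FastAPI()
      # config_keys lets clients pass {"configurable": {"session_id": "..."}}
      # so each user/session keeps its own history.
      add_routes(app, chain_with_memory, path="/chat", config_keys=["configurable"])
      ```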

  • @oluwaseunakinropo6318 · 6 months ago

    Hi Pradip, as usual, amazing content you put out there!
    I created a RAG app which reads each line from a txt file in the same folder and passes it through an API. The returned data is chunked and embedded, then passed to the retrieval chain. How best do you think I can do this as a large-scale process, i.e. reading the original txt file line by line, passing it to the LLM, and then appending the result to a final file? I would appreciate some insight 🙏🏾
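    One possible sketch of the large-scale step described above (not an answer from the video; file names, prompt, and batch size are assumptions): read the source file in batches, run each batch through the chain with .batch(), and append the outputs to a results file.
    ```python
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    # Illustrative chain; in the commenter's setup this would be the retrieval chain.
    chain = (
        ChatPromptTemplate.from_template("Answer based on this line: {line}")
        | ChatOpenAI(model="gpt-3.5-turbo")
        | StrOutputParser()
    )

    BATCH_SIZE = 20  # send lines to the LLM in parallel batches rather than one at a time

    with open("input.txt") as src, open("results.txt", "a") as out:
        batch = []
        for line in src:
            line = line.strip()
            if not line:
                continue
            batch.append({"line": line})
            if len(batch) == BATCH_SIZE:
                out.writelines(result + "\n" for result in chain.batch(batch))
                batch = []
        if batch:  # flush any remaining lines
            out.writelines(result + "\n" for result in chain.batch(batch))
    ```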

  • @abhineeth · 9 months ago +1

    Thank you for the quick tutorial, just wondering how this could be deployed on the web.

    • @FutureSmartAI · 9 months ago +1

      Hi, in the video I have shown how to run it as a FastAPI app, which can be deployed. If you want to know how to deploy FastAPI on a cloud like AWS, you can watch th-cam.com/video/7FVPn25mmEQ/w-d-xo.htmlsi=FAtDYHUduXugcN34

    • @abhineeth · 9 months ago

      @FutureSmartAI Thank you.

  • @sapnilpatel1645 · 9 months ago +1

    Nice tutorial.

  • @jillanisofttech2977 · 10 months ago +1

    great tutorial

  • @mohanvishe2889 · 5 months ago

    👍

  • @humayounkhan7946 · 9 months ago

    Hi Pradip, how do we make the input document dynamic? Meaning, if it's deployed as a web app, how can someone input their own documents and have the web app answer based on those new documents instead of something pre-loaded? Do we require another API/cloud storage, etc.?

    • @FutureSmartAI · 9 months ago

      We can store all uploaded docs in a folder and load the docs from that folder. If each user only wants to ask questions about their own files, you need to create a separate index for each user, or better, when inserting a doc into the vector database, add the user ID to its metadata, so that when that user asks a question you only fetch docs whose metadata contains that user ID.
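      A minimal sketch of the per-user metadata approach described above, using Chroma as an example vector store (the store choice and the user_id field name are assumptions; any vector database with metadata filtering works):
      ```python
      from langchain_chroma import Chroma
      from langchain_core.documents import Document
      from langchain_openai import OpenAIEmbeddings

      vectorstore = Chroma(
          collection_name="user_docs",
          embedding_function=OpenAIEmbeddings(),
      )

      # At upload time: tag every chunk with the uploader's user id.
      docs = [
          Document(page_content="Quarterly revenue grew 12%.", metadata={"user_id": "alice"}),
          Document(page_content="Support tickets dropped in Q3.", metadata={"user_id": "bob"}),
      ]
      vectorstore.add_documents(docs)

      # At query time: restrict retrieval to that user's documents only.
      retriever = vectorstore.as_retriever(
          search_kwargs={"k": 4, "filter": {"user_id": "alice"}}
      )
      print(retriever.invoke("How did revenue change?"))
      ```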

  • @suryakothamagiclist · 7 months ago

    Hello Pradip, what is the best way to get in touch with you?

    • @FutureSmartAI · 7 months ago

      You can message me on LinkedIn.