Better Agents with

  • Published Jan 20, 2025

Comments • 14

  • @AI-Makerspace
    @AI-Makerspace  4 months ago +3

    LlamaIndex Workflows - AI Makerspace: colab.research.google.com/drive/1p5L8pWzJG4KuopFNoc70OZiNpFkdYVCf?usp=sharing
    Event Slides: www.canva.com/design/DAGQd1a2yLY/5dKKRMR_f67OvUr-ADhnlg/view?DAGQd1a2yLY&

  • @mmasa1
    @mmasa1 2 months ago

    Dr Greg and the Wiz, what a dream team! Thanks for your wonderful, truly wonderful efforts. I am certainly getting better following you guys. God bless.

  • @mentorships3309
    @mentorships3309 2 months ago

    Dr Greg, you are an absolutely incredible teacher. Keep it up!

  • @bradhatch8302
    @bradhatch8302 3 months ago

    Love this. Thanks for explaining this step by step (pun not intended).

  • @johnini
    @johnini 2 months ago +1

    hello!! thank you a lot for the videos! What is the best way to interact in a sort of chat engine or chat loop with a workflow?

    • @AI-Makerspace
      @AI-Makerspace  2 months ago +1

      Can you expand on your request?

    • @johnini
      @johnini 2 months ago

      @AI-Makerspace thank you for answering! I'm curious about best practices for building a chat engine or chatbot that can interact in a continuous loop with a workflow.
      Currently, we receive one response at a time from the workflow, but I was wondering if we could enhance this by buffering the "chat memory" and continuing the conversation.
      Should this be achieved with a loop? I seem to remember a LlamaIndex or LangChain tool that kept the chat engine running, but I might be mistaken; maybe I was just re-querying. Also, how can I ensure other workflows share the same context?
      Additionally, is it possible to store interactions as vectorized semantic and episodic memories in a vector database, so the system can recall past conversations and later query those memories alongside the RAG index, perhaps with some type of reranking?

    • @johnini
      @johnini 2 months ago

      from llama_index import SimpleDirectoryReader, VectorStoreIndex
      from colorama import Fore, Style, init

      init(autoreset=True)

      def chat():
          print(f"{Fore.CYAN}Loading documents...")
          index = VectorStoreIndex.from_documents(
              SimpleDirectoryReader("./data").load_data()
          )
          chat_engine = index.as_chat_engine()
          print(f"{Fore.GREEN}Ready! Type 'quit' to exit\n")
          while True:
              query = input(f"{Fore.GREEN}You: {Style.RESET_ALL}").strip()
              if query.lower() == 'quit':
                  break
              if query:
                  print(f"{Fore.BLUE}Assistant: {Style.RESET_ALL}{chat_engine.chat(query)}\n")

      if __name__ == "__main__":
          try:
              chat()
          except Exception as e:
              print(f"{Fore.RED}Error: {e}")
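The snippet above keeps conversation state inside `chat_engine` itself. The buffering idea raised in the question (keep a rolling "chat memory" and feed it back on each turn) can be sketched generically in plain Python; `respond` below is a hypothetical stand-in for any workflow or LLM call, not a LlamaIndex API:

```python
from collections import deque

class ChatMemoryLoop:
    """Minimal chat-loop sketch: buffer recent turns and pass them as context.
    `respond` is a hypothetical callable(history, query) -> reply."""

    def __init__(self, respond, max_turns=10):
        self.respond = respond
        # Rolling "chat memory" buffer; old turns fall off automatically.
        self.memory = deque(maxlen=max_turns)

    def chat(self, query):
        # Hand the buffered history plus the new query to the responder.
        reply = self.respond(list(self.memory), query)
        self.memory.append(("user", query))
        self.memory.append(("assistant", reply))
        return reply

# Usage: an echo-style responder that reports how much history it received.
loop = ChatMemoryLoop(lambda history, q: f"({len(history)} prior turns) {q}")
print(loop.chat("hello"))   # → (0 prior turns) hello
print(loop.chat("again"))   # → (2 prior turns) again
```

A real setup would replace `respond` with a call into the workflow, and could additionally persist `memory` to a vector store for the semantic/episodic recall the question mentions.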

  • @cmagganas
    @cmagganas 3 months ago

    LOVE LOVE LOVE the Snatch Dags meme

  • @washedtoohot
    @washedtoohot 4 months ago +1

    No comments yet? I’m claiming first! ❤

    • @AI-Makerspace
      @AI-Makerspace  4 months ago

      Let's gooooooooooooooooo!! 💜

  • @shivafeaz882
    @shivafeaz882 3 months ago +1

    It looks like LlamaIndex's version of Microsoft Prompt Flow 😅

    • @AI-Makerspace
      @AI-Makerspace  3 months ago

      Perhaps! We haven't looked closely into that specific tool. We do our best to focus on the patterns that underlie the toolsets so we can make connections like this though - good stuff!