LlamaIndex Workflows - AI Makerspace: colab.research.google.com/drive/1p5L8pWzJG4KuopFNoc70OZiNpFkdYVCf?usp=sharing
Event Slides: www.canva.com/design/DAGQd1a2yLY/5dKKRMR_f67OvUr-ADhnlg/view?DAGQd1a2yLY&
Dr Greg and the Wiz, what a dream team! Thanks for your wonderful, truly wonderful efforts. I am certainly getting better following you guys. God bless.
Dr Greg, you are an absolutely incredible teacher. Keep it up!
Love this. Thanks for explaining this step by step (pun not intended).
LAWL GOTTEM
Hello!! Thank you a lot for the videos! What is the best way to interact with a workflow through a sort of chat engine or chat loop?
Can you expand on your request?
@AI-Makerspace thank you for answering! I'm curious about best practices for building a chat engine or chatbot that can interact in a continuous loop with a workflow.
Currently, we receive one response at a time from the workflow, but I was wondering if we could enhance this by buffering the chat memory and continuing the conversation.
Should this be achieved with a loop? I seem to remember a LlamaIndex or LangChain tool that kept the chat engine running, but I might be mistaken; maybe I was just re-querying. Also, how can I ensure other workflows share the same context?
Additionally, is it possible to store interactions as vectorized semantic and episodic memories in a vector database, allowing the system to recall past conversations and later query both those memories and the RAG index? And maybe do some type of reranking?
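For reference, a minimal chat-loop sketch using a LlamaIndex chat engine (which buffers chat memory between turns for you), assuming a ./data folder of documents; notes on shared context and vectorized memory follow the code: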
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from colorama import Fore, Style, init

init(autoreset=True)

def chat():
    print(f"{Fore.CYAN}Loading documents...")
    # Build an in-memory vector index over everything in ./data
    index = VectorStoreIndex.from_documents(
        SimpleDirectoryReader("./data").load_data()
    )
    # The chat engine keeps its own chat memory between turns
    chat_engine = index.as_chat_engine()
    print(f"{Fore.GREEN}Ready! Type 'quit' to exit\n")
    while True:
        query = input(f"{Fore.GREEN}You: {Style.RESET_ALL}").strip()
        if query.lower() == "quit":
            break
        if query:
            print(f"{Fore.BLUE}Assistant: {Style.RESET_ALL}{chat_engine.chat(query)}\n")

if __name__ == "__main__":
    try:
        chat()
    except Exception as e:
        print(f"{Fore.RED}Error: {e}")
LOVE LOVE LOVE the Snatch Dags meme
No comments yet? I’m claiming first! ❤
Let's gooooooooooooooooo!! 💜
It looks like LlamaIndex's version of Microsoft Prompt Flow 😅
Perhaps! We haven't looked closely into that specific tool. We do our best to focus on the patterns that underlie the toolsets so we can make connections like this though - good stuff!