Excellent explanations! Simple without additional fluff. Thank you.
You make the best LLM + RAG explainers in the world. Thanks so much.
Fantastic description. It seems using agents to help you route to potentially different databases is a game changer. Thank you.
Thanks for this well-explained video. This is the simplest explanation of agentic RAG that has sunk into my grey matter.
Thanks. This is pretty useful and much better than the naïve approach of overloading the model with lots of irrelevant data from the vector DB.
IBM's main task now is to create catch-up teaching videos. No innovation or breakthroughs are coming out of them. I see so many IBM old-timers watching with amazement as smaller open-source innovators move lightning fast in GenAI, and then they start recording teaching videos… I do not see any new things or new demos coming out of IBM.
Thank you!! One question: do you know how to evaluate an agentic RAG? Is there anything I have to take into account compared to a normal RAG evaluation (e.g. with RAGAs)? Best regards
Please do RAG + Big Data next
An interesting suggestion, any specific element or question you have about the topic?
Where can I learn this approach?
We need to know how to implement the agent and what it is. Is it another LLM that determines the context and routes to the right DB?
So the agent is also a pretrained LLM, with those two vector DBs???
I'm confused... Andrew Ng says plain RAG is agentic AI. But RAG does not need an LLM during the query phase. So what's correct here?
In the reflection I think that’s the teleprompter… or maybe I am hallucinating 🤣
“More responsible, more accurate, more adaptable…”
plus more secure as well?
What is the difference between this and semantic routing?
Semantic routing uses more straightforward methods, like cosine similarity and other predefined rules, to decide which route to take. The example I drew used an LLM agent, which can interpret more complex or nuanced queries and understand context, but is much heavier (in compute and latency) because it is using an LLM.
-David
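For anyone who wants to see the contrast concretely, here is a rough Python sketch of the two routing styles. It is not from the video: `embed` is a stand-in for whatever embedding model you use, `ask_llm` is a stand-in for an LLM call, and the route names are made up.

```python
# Hypothetical sketch: semantic routing vs. LLM-agent routing.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding function; swap in a real embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=384)

# Made-up routes, each described by an embedded description of what it covers.
ROUTES = {
    "hr_policies": embed("questions about vacation, benefits, payroll"),
    "product_docs": embed("questions about APIs, features, configuration"),
}

def semantic_route(query: str) -> str:
    """Semantic routing: pick the route whose description is closest by cosine similarity."""
    q = embed(query)
    scores = {
        name: float(q @ vec / (np.linalg.norm(q) * np.linalg.norm(vec)))
        for name, vec in ROUTES.items()
    }
    return max(scores, key=scores.get)

def agent_route(query: str, ask_llm) -> str:
    """LLM-agent routing: let a model reason about the query and name a route."""
    prompt = (
        "Pick the best data source for this question.\n"
        f"Options: {list(ROUTES)}\n"
        f"Question: {query}\n"
        "Answer with one option only."
    )
    return ask_llm(prompt).strip()
```

The semantic router needs only one embedding plus a cosine comparison, while the agent router spends a full LLM call just to choose a route, which is where the extra compute and latency David mentions come from.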
So the agent acts like a controller here.
Overkill for most consumer-facing applications, and, as an answer to generalist queries, not a specific enough system for internal tooling. Lots of noise introduced here.
This pipeline doesn't make a lot of sense.
skill issue
Got a particular question?
A standard RAG pipeline just takes your question, converts it into a vector, and searches the data source to find content that is similar to it.
By adding an agent in front of it, the question can be interpreted to figure out which data sources should be used to fetch that content.
It's like the difference between asking a single book a question or asking a question to a librarian who can find the right book for you before looking into the book for the answer to that question.
It's hard with all the unique fancy naming they give these architectures; it abstracts away the simplicity of what is really happening underneath. A sketch of that simplicity is below.
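To make the librarian analogy concrete, here is a minimal sketch. Everything in it is hypothetical (the function names, the `sources` mapping); it only shows where the agent step slots in front of an otherwise standard retrieve-then-generate pipeline.

```python
# Hypothetical sketch of the librarian analogy; all names below are made up.
from typing import Callable, Dict, List

def standard_rag(question: str, embed: Callable, search: Callable, generate: Callable) -> str:
    """Standard RAG: embed the question and search one data source directly ("one book")."""
    docs = search(embed(question))       # similarity search in a single vector DB
    return generate(question, docs)      # answer grounded in whatever came back

def agentic_rag(question: str, embed: Callable, sources: Dict[str, Callable],
                pick_source: Callable[[str, List[str]], str], generate: Callable) -> str:
    """Agentic RAG: an agent (the "librarian") first decides which source to consult."""
    chosen = pick_source(question, list(sources))   # e.g. an LLM call that names a source
    docs = sources[chosen](embed(question))         # then the usual similarity search, scoped
    return generate(question, docs)
```

The only structural difference is the `pick_source` step in front; everything after it is the same retrieve-then-generate flow as plain RAG.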
Jesus. He did not even say how the agent uses the LLM to select the right data source. He did not even say how the agent is implemented. So incomplete... this video is incomplete. Delete it and present a complete one.
Make your own
Might want to clarify your first word in the context of this discussion.