If you are interested in learning more about Advanced RAG systems, checkout my RAG Beyond Basics Course: prompt-s-site.thinkific.com/courses/rag
Why is the Hindi audio track not available? 😢
Nice. Wondering if we can use this on the Groq platform to speed things up?
I've tried this and unfortunately it's way too slow (30 seconds for a query with a two-sentence output!) and does not expose its 'logical form solver' transparently enough. We don't know which chunks were retrieved or from where. Unfortunately it's still quite a way from being usable in a practical app.
It depends on the use case! Sometimes it's acceptable to wait a few minutes, or maybe even hours.
How does continual updating work with KAG?
Same q!!
Is there any way to use "Medical Information Extraction" instead of OpenIE?
Very nicely explained, congrats mate! 👏👏👏
Thank you for this video. Please make a follow up on how to build this. 🙏🏻
How does it compare with LightRAG? Is it better?
Can graph RAG establish logical connections between them?
How does it compare to llm-graph-builder from Neo4j?
wasn't aware of it. Will need to check it out.
The premise of Knowledge Augmented Generation is promising, but the current KAG code bases failed to deliver today. The TL;DR version is that ultimately I saw no nodes or edges created in Neo4j. Even weirder is that, in spite of there being no graph, it was still giving me results in the UI. (The UI is not open source and appears to be locked down.)
Ultimately the config needs to become more solid and consistent -- and there needs to be agreement between the OpenSPG/openspg and OpenSPG/KAG development teams on whether Ollama is supported.
An odd mix of Java and Python. Hopefully this gets straightened up soon.
Prompt Engineering... normally love your stuff. What would be helpful as a starting point is a Jupyter Notebook from OpenSPG that walks through (and validates) the pipeline step by step. A follow-up would be a reproducible and well-documented evaluation against other solutions: LazyGraphRAG, LightRAG / nanorag, etc.
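For anyone who wants to verify the "no nodes or edges" observation themselves, here is a minimal sketch using the official neo4j Python driver; the bolt URI and credentials are placeholder assumptions, not KAG defaults.

```python
# Sanity-check whether indexing wrote anything to Neo4j.
# The connection URI and auth below are assumptions; adjust to your setup.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    nodes = session.run("MATCH (n) RETURN count(n) AS c").single()["c"]
    rels = session.run("MATCH ()-[r]->() RETURN count(r) AS c").single()["c"]
    print(f"nodes={nodes}, relationships={rels}")  # both 0 means nothing was written
driver.close()
```

If both counts come back 0 after indexing, then whatever the UI is answering from, it isn't the graph.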
Tried it. It's quite slow at answering simple queries. Also, indexing about 20 pages of a PDF took about 300k tokens, which is still quite cheap with DeepSeek, but it seems like a lot. Indexing also took about an hour. The user interface is partly Chinese, with quite a few bugs. Seems unfinished.
The answers that were outputted were mostly correct, though.
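To put the token count in perspective, a quick back-of-envelope calculation; the per-token price below is an illustrative assumption, not a quoted rate, so check your provider's current pricing.

```python
# Rough cost estimate for the ~300k indexing tokens mentioned above.
tokens = 300_000
price_per_million = 0.14  # assumed $/1M input tokens for a budget model (illustrative only)
print(f"~${tokens / 1_000_000 * price_per_million:.3f} to index ~20 pages")
# Cheap per document, but both the token cost and the ~1 hour indexing time
# scale roughly linearly with corpus size.
```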
Did you use an OpenAI API key?
Not to be confused with CAG
I got confused for a second. Thanks buddy ❤
🤣🤣
What about combining KAG and SPARQL ?
Why is the Hindi audio track not available?
Why would we need Hindi audio??
@stefanmisic7405 Because YouTube's added-audio-track feature is available. We could enjoy it more that way, like MrBeast's videos.
What do you think about ColPali vs KAG? Which is better?
I feel ColPali performs well on accuracy and is cost-effective, because image retrieval comes from ColPali and we host it ourselves. Thoughts?
@@tirushv9681 Is there anyone providing an API for ColPali usage? I want to use this in my app and give access via API to a limited set of users.
very interesting! thanks
Is this based on RDF?
It is ReAct prompting with a graph backend.
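To illustrate what "ReAct prompting with a graph backend" means in general terms, here is a minimal sketch of the pattern; the llm() and query_graph() helpers are hypothetical placeholders, not KAG's actual API.

```python
# A generic ReAct-style loop: the model alternates between proposing an action
# (a graph query) and reading the observation, until it emits a final answer.
def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (OpenAI, DeepSeek, Ollama, ...)."""
    raise NotImplementedError

def query_graph(query: str) -> str:
    """Placeholder for executing a query against the knowledge graph."""
    raise NotImplementedError

def react_answer(question: str, max_steps: int = 5) -> str:
    scratchpad = ""
    for _ in range(max_steps):
        out = llm(
            f"Question: {question}\n{scratchpad}\n"
            "Reply with either 'Action: <graph query>' or 'Final: <answer>'."
        )
        if out.startswith("Final:"):
            return out[len("Final:"):].strip()
        observation = query_graph(out[len("Action:"):].strip())
        scratchpad += f"\n{out}\nObservation: {observation}"
    return "No answer within the step budget."
```

Each loop iteration is another LLM call, which is also where much of the latency people mention in this thread comes from.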
All these things are still in an infancy stage for real use cases; it will take time for them to match up with traditional RAG.
I agree! I am happy to see the progress and the research in retrieval. Retrieval is still one of the most useful applications of LLMs.
Nice. Can you do one in Python using all local services/LLMs, please?
Which is better, KAG or Agentic RAG?
You can use KAG as a tool for an agent to do retrieval.
@@engineerprompt Like the agent part in the Agentic RAG?
@@surajjaiswal1371 exactly. RAG is just a tool that will be available to your agent.
@@engineerprompt Okay, thanks!
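To make the "RAG is just a tool available to your agent" point above concrete, here is a sketch using an OpenAI-style tools schema; kag_retrieve is a hypothetical stand-in for whatever retrieval pipeline you plug in.

```python
# Expose retrieval as one tool among others available to an agent.
# kag_retrieve() is hypothetical; wire it to your actual retrieval backend.
tools = [{
    "type": "function",
    "function": {
        "name": "kag_retrieve",
        "description": "Retrieve supporting passages from the knowledge base for a question.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string", "description": "The search query."}},
            "required": ["query"],
        },
    },
}]

def kag_retrieve(query: str) -> str:
    # Hypothetical: call the KAG (or any RAG) pipeline here and return retrieved context.
    raise NotImplementedError
```

The agent decides when to call the tool; retrieval is just one capability among many it can use.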
Thank you for sharing a great tutorial. Reading the other comments, it looks like the model is slow with responses, which kept me from trying it, but it was informative nonetheless. Thanks 🙏
amazing!
There is also CAG (Cache-Augmented Generation), which is much faster.
Very helpful tutorial for setup and a good explanation of KAG systems. However, I used it and did not like the answers: it takes too long to produce them and they are not good. Also, too much time is needed to make embeddings from the chunks.
Another day, another -ag 😂
Need a Python version.
No FAG? Fetch-Augmented Generation. Relax, everyone. It uses internet fetch over docs, that's why it's FAG not RAG.
I notice it tries to use MySQL. I already have MySQL and that default port is of course in use. This is BS! Use SQLite or something file-based that's separate from a server and doesn't require ports, FFS!
I would NOT recommend this BS at all! Terrible!
@@JNET_Reloaded why ??? Explain
@@matiasm.3124 They have basically been using SQL in the backend, and the biggest problem with the graph-based approach is that it adds an LLM in between, which increases cost and decreases speed in the indexing process. On top of that, they are using two LLMs for indexing, so double the cost and slower speed. In the retrieval process they also have query decomposition using NLP and such, so chunk retrieval is slower than usual because they are using an LLM in the middle as well. This approach might get you the best results and relevance so far, but it comes with high costs and slower speed. As another person said, you have to wait 30 seconds for a single answer that wasn't even complicated, so there is no practical use for this.
KAG, wow! I'm trying a project with mypufd4llm. Is it possible to combine it with this?
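For a sense of how several LLM calls in the retrieval path stack up, a rough latency sketch; all per-step numbers are assumptions chosen for the arithmetic, not measurements.

```python
# Illustrative only: sequential LLM calls in the retrieval path add up.
steps_seconds = {
    "query decomposition": 3.0,
    "logical-form / graph solving": 10.0,
    "chunk retrieval and re-ranking": 7.0,
    "final answer generation": 10.0,
}
print(f"end-to-end latency ~= {sum(steps_seconds.values()):.0f}s")  # ~30s, in line with reports above
```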