My heart goes out to anyone doing a live tech demo.
I've never commented on a tech video in my 8 years of experience. I have an interview in a few days and wanted to use GraphRAG; I knew about graph databases, but this video made me realize that it's not the number of technical terms you use to explain something, it's how you make such a complicated concept look easy. Kudos, Emil Eifrem.
Please release more videos on different GraphRAG concepts.
Best introductory video for GraphRAG
Best refresher starter on Graph RAG. Brilliant effort. Respect!
Wish I could have heard more from him
Thank you!
Clear, concise, yet compelling storytelling!
Thanks for sharing 👍
*Timestamps by IntellectCorner*
0:00 - Introduction: Leveraging Data and Relationships
0:49 - Evolution of Search: From AltaVista to Google
3:03 - Google’s Knowledge Graph: Concepts and Structure
4:50 - GraphRAG Era: Integrating LLMs with Knowledge Graphs
5:28 - What is GraphRAG?
7:06 - Example: Customer Service Bot with GraphRAG
8:46 - Benefits of GraphRAG: Accuracy and Development
10:27 - Easier Development with GraphRAG
12:00 - Comparing Graph and Vector Representations
13:40 - Explainability and Governance in GraphRAG
14:19 - Getting Started with GraphRAG: Data Sources
15:57 - Demo Introduction: Knowledge Graph Builder Tool
18:16 - Knowledge Graph Demo: Building and Visualizing Data
18:48 - Conclusion and QR Code for Resources
Great stuff. Now I need to rewrite and migrate my LLM apps to this for a try.
Great one. Super helpful for understanding the use of graphs in RAG.
Great presentation!
Amazingly well presented. Thank you, Emil!
Is this channel's YouTuber named "Emil"?
Knowledge graph as a labeled property graph, knowledge graph with OWL2 ontologies, data provenance, UUID creation. There is good scope in Neo4j to have both. Coexistence of RDF/OWL and the labeled property graph is what Neo4j should talk about.
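The coexistence point above can be made concrete with a tiny sketch: the same fact expressed once as a labeled property graph element (where provenance and a UUID sit inline on the relationship) and once as RDF-style triples. All identifiers and values here are hypothetical, invented for illustration; this is not a Neo4j or RDF library API.

```python
import uuid

# Labeled property graph style: nodes and relationships carry properties,
# so provenance and a UUID live directly on the relationship itself.
lpg_node = {"id": "n1", "labels": ["Person"], "properties": {"name": "Emil"}}
lpg_rel = {
    "type": "FOUNDED",
    "start": "n1",
    "end": "n2",
    "properties": {"provenance": "press release", "uuid": str(uuid.uuid4())},
}

# RDF/OWL style: plain subject-predicate-object triples; per-edge metadata
# needs reification or named graphs rather than inline properties.
rdf_triples = [
    ("ex:Emil", "rdf:type", "ex:Person"),
    ("ex:Emil", "ex:founded", "ex:Neo4j"),
]
```

The contrast in where edge metadata lives is the crux of the coexistence question the comment raises.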
Great presentation, thanks!
So, a global digital platform could create a shared graph representation of parts of millions of simultaneous conversations that people around the world select to be merged into a form of collective human and digital intelligence and problem solving.
This is wild
data is the new money
Now you can do a "meanings" comparison across user accounts and learn everything about them! Good for sales recommendations.
Sounds expensive with unclear value tbh, but cool idea
@@infraiai it's wild if you're a bot :-)
Well presented!!
Can it do downstream tasks like summarization?
You had me at "pagerank"
What are the applications of this technology in mapping out professional networks/relationships? In a way, forming the basis of a modern, relationship-based CRM?
Where do you think GraphRAG will be "really" needed, given that it requires more computation than normal RAG?
One thing that comes to mind is any kind of automated research, with a graph of papers, edges representing relations such as citations, shared authors, or topics. Computation or latency would hardly be an issue here.
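The research use case above can be sketched as a toy citation graph with a bounded traversal. Paper names, the edge structure, and the two-hop limit are all made up for illustration.

```python
from collections import deque

# Hypothetical citation graph: paper id -> list of papers it cites.
citations = {
    "paper_A": ["paper_B", "paper_C"],
    "paper_B": ["paper_D"],
    "paper_C": ["paper_D"],
    "paper_D": [],
}

def related_papers(start: str, max_hops: int = 2) -> set[str]:
    """Collect papers reachable within max_hops citation edges (BFS)."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        paper, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't expand beyond the hop budget
        for cited in citations.get(paper, []):
            if cited not in seen:
                seen.add(cited)
                frontier.append((cited, hops + 1))
    return seen - {start}

# related_papers("paper_A") -> {"paper_B", "paper_C", "paper_D"}
```

Edges for shared authors or topics would be added the same way, just with different relationship types; as the comment notes, a traversal this small is cheap, so computation is hardly the bottleneck here.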
Hm, maybe computation can even be lower while yielding better results than vector-only RAG. But the main thing is the quality of the 'R' content...
This is amazing.
Wow! this is so cool!
Great company
Aha, I like the GraphRAG concept. I am augmenting my LLM with RAG over PDF documents inside my enterprise, but the results were poor and weren't released to production. Seems like GraphRAG can help.
Hi, is it possible for you to share more on this? I would like to know about your experience. Let me know if it's possible for you to drop your email. Thank you.
NOTHING breeds ENTHUSIASM like... ENTHUSIASM.
"...check this shit out!"
It's fun when you're a PASSIONATE & SKILLED DATA SCIENTIST and you see a presentation by ANOTHER (better!) PASSIONATE & SKILLED DATA SCIENTIST !
LOVED the video...
That's pretty cool, but the title is a bit misleading, as it doesn't touch on the actual GraphRAG method (that is, a specific paper and project).
🎯 Key points for quick navigation:
00:00:00 *📊 Introduction and Background*
- Emil Eifrem introduces his dedication to helping developers build better applications by leveraging relationships between data points,
- Overview of the lecture's focus on the combination of LLMs and knowledge graphs.
00:00:45 *🔍 Evolution of Search Technology*
- Discussion on the history and evolution of web search engines from AltaVista to Google,
- Explanation of the challenges with keyword-based search and how Google's PageRank algorithm, a graph-based algorithm, revolutionized search.
00:02:48 *📚 Introduction of Knowledge Graphs*
- Google's shift from PageRank to Knowledge Graphs, emphasizing the difference between "things" and "strings,"
- Description of the visual structure of Knowledge Graphs, combining structured and unstructured data.
00:04:24 *🧠 The Advent of GenAI in Search*
- Google's recent advancements in AI-powered search, integrating LLMs and Knowledge Graphs,
- Explanation of "GraphRAG" and how it combines retrieval from Knowledge Graphs with GenAI technologies.
00:06:01 *🤖 Implementing GraphRAG in Applications*
- Detailed example of building a customer service bot using GraphRAG,
- Explanation of how GraphRAG enhances typical RAG-based applications by leveraging knowledge graphs for better retrieval and context.
00:08:20 *🌟 Benefits of GraphRAG*
- Higher accuracy of responses in AI applications using GraphRAG compared to traditional RAG,
- Easier development of applications once a Knowledge Graph is established, with examples from high-profile companies.
00:10:09 *🛠 Advantages in Development and Governance*
- Clarity and transparency in application development using graphs versus vectors,
- Improved explainability, auditability, and governance for business IT.
00:12:54 *🚀 Getting Started with GraphRAG*
- Steps and considerations in creating a Knowledge Graph,
- Demonstration of the Knowledge Graph Builder tool, showing the process of uploading and visualizing data.
00:18:01 *🎉 Conclusion and Final Thoughts*
- Final insights and practical example using the Knowledge Graph Builder,
- Encouragement to try out the tool and further explore the benefits of GraphRAG in applications.
Made with HARPA AI
How is this useful if there is still hallucination happening and the accuracy is only 70%?
LLMs have a preference for their internal knowledge over external knowledge coming from RAG or knowledge graphs.
Does anyone have tips to solve this?
I mean... if "traditional RAG", if you will, has proven immensely useful in various contexts, then surely a method reporting significantly higher performance will be useful, too.
@@hEmZoRz I would think so, but I do not understand how, if you cannot trust it 100% when using LLMs.
I asked this question once to Philip Rathle (CTO @ Neo4j) and he kindly replied:
“It comes back to what GraphRAG pattern you are using, and “Locus of Reasoning”. I’m planning a follow-up post that will cover this.”
While waiting for his post I asked ChatGPT to clarify:
Locus of Reasoning:
refers to the place where most of the reasoning or decision-making happens: inside the LLM (internal) or within the external knowledge source (external).
External Locus of Reasoning:
The LLM is designed to pull in external knowledge (like a knowledge graph) first and treat it as the primary source for generating its response, making it more likely to use the external knowledge over its own.
Different GraphRAG patterns can influence where the “Locus of Reasoning” is placed:
In Post-Retrieval or Iterative patterns, the LLM is more likely to be influenced by the external data.
Post-Retrieval: The LLM first retrieves relevant information from the knowledge graph and then integrates it with its internal knowledge to generate a response. In this pattern, the external knowledge is directly presented to the LLM before it starts generating.
Iterative: A dynamic, back-and-forth pattern where the LLM may ask for more information from the knowledge graph during the generation process. This pattern tends to have more active integration of external knowledge throughout the reasoning.
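The Post-Retrieval pattern described above can be sketched in a few lines: graph facts are fetched first and placed in the prompt before generation starts, which is what pushes the locus of reasoning toward the external source. Everything here is hypothetical; `fake_llm` is a stub standing in for a real model call, and the graph is a toy dict, not a real database API.

```python
# Toy knowledge graph: entity -> {relationship: object}.
knowledge_graph = {
    "Acme Router X2": {
        "SUPERSEDED_BY": "Acme Router X3",
        "KNOWN_ISSUE": "firmware 1.2 drops Wi-Fi",
    },
}

def retrieve(entity: str) -> list[str]:
    """Pull the facts attached to an entity out of the graph."""
    return [f"{entity} {rel} {obj}"
            for rel, obj in knowledge_graph.get(entity, {}).items()]

def fake_llm(prompt: str) -> str:
    # Placeholder: a real system would call a model here.
    return f"Answer grounded in: {prompt}"

def post_retrieval_answer(question: str, entity: str) -> str:
    facts = retrieve(entity)  # external knowledge is fetched first...
    prompt = "Facts:\n" + "\n".join(facts) + f"\nQuestion: {question}"
    return fake_llm(prompt)   # ...then generation, so the facts get priority
```

An Iterative pattern would instead loop, letting the model request more `retrieve` calls mid-generation; the point of the sketch is only that the retrieval step precedes and constrains the generation step.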
@@johannesdeboeck Admittedly, I'm not an expert in the RAG field and don't know exactly what goes into rigorous validation of these systems, but I would imagine that the benchmarks are very difficult (after all, a benchmark where current models would score >95% regularly wouldn't be particularly useful), hence the seemingly low accuracy scores. If that's the case, the accuracy in simpler tasks is likely to be significantly higher, even close to 100%.
Now, clearly real-life tasks also come in many flavours and difficulties. I'm sure there's a whole host of applications that are simple enough in nature that RAG performs sufficiently close to 100%, where it's not mission critical to achieve absolute perfect 100% reliability on the RAG system (after all, humans aren't perfect either).
edit: for what it's worth, I've played around with vector RAG in personal projects, involving straightforward tasks, and while I haven't performed any thorough validations, I'd guesstimate these systems perform somewhere in the territory of 90-95% accuracy.
@@hEmZoRz Thanks for your input.
There is a good video that explains what the exact issue is with using external knowledge such as RAG systems or knowledge graphs. It is on the Diffbot YouTube channel and is called "Trying to make LLMs less stubborn in RAG".
I am not affiliated with Diffbot. It just shows very well how an LLM can be stubborn and refuse to use external knowledge.
Give that man another minute!
Oh man, I had a few more things I did want to show in that demo. Maybe I'll try record another version of just the (full) demo!
I would really appreciate that and I assume others too!
@@emileifrem7726 If you have time, please do!
I would love a full version! We in the company are the "graph/ontology" people, and explaining the benefits and process of GraphRAG can be challenging
cool!
I certainly shall indeed be trying said QR code, goofing off for at least an hour on graphs of utterly unrelated topics... why, thank you...
Oh no, was the QR code wrong? Which one?
Would easily become a mess in PROD. My experience: keep it simple and make it work.
That's part of the rationale. Because graph space is explicit and visible, it can actually be EASIER in production. Vector space is opaque. You have no idea *why* two things are cosine-ish proximate. If you have a green apple and a tennis ball, they're probably close in vector space. But why? Is it the roundness of the things? Is it the green-ness? You have no idea. It's 0.738475 near in some Euclidean space. In graph space, you know they're related because of their color, or that an apple and an orange are similar because they're both fruits, etc. That visible and explicit nature of the graph frequently makes it easier in production, because you understand what's going on. On your higher-order comment, though, I *completely* agree: keep it simple, make it work, iterate from there.
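The apple-and-tennis-ball contrast above fits in a few lines of code: a cosine score gives one opaque number, while shared graph edges name the reason for the similarity. The embedding vectors and relationships below are made up purely for illustration.

```python
import math

# Made-up 3-dimensional embeddings for the two objects.
emb = {"green_apple": [0.9, 0.8, 0.1], "tennis_ball": [0.8, 0.9, 0.2]}

def cosine(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity: dot product over the norms."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Opaque: one number, no stated reason why they are close.
score = cosine(emb["green_apple"], emb["tennis_ball"])

# Explicit: each relationship names a reason.
graph = [
    ("green_apple", "HAS_COLOR", "green"),
    ("tennis_ball", "HAS_COLOR", "green"),
    ("green_apple", "HAS_SHAPE", "round"),
    ("tennis_ball", "HAS_SHAPE", "round"),
]

def why_related(a: str, b: str) -> list[str]:
    """Shared predicate-object pairs are the explanation."""
    facts_a = {(p, o) for s, p, o in graph if s == a}
    facts_b = {(p, o) for s, p, o in graph if s == b}
    return [f"both {p} {o}" for p, o in sorted(facts_a & facts_b)]

# why_related("green_apple", "tennis_ball")
# -> ["both HAS_COLOR green", "both HAS_SHAPE round"]
```

The vector answer is a float near 1.0; the graph answer is "both are green, both are round", which is exactly the auditability argument the reply makes.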
GraphRAG is outdated; it's way too computationally expensive. There's a much more efficient solution.
Can you share details of the better method?
GraphRag isn't a fitting name.
GraphRage!
Always enjoy Emil's framings. Graph is still the real deal. Soon maybe we'll talk about Experience Graphs -- KG _ xG _ AI/LLMs