NODES 2023 - Using LLMs to Convert Unstructured Data to Knowledge Graphs

  • Published on Jun 12, 2024
  • Discover how Large Language Models (LLMs) can unlock insights within text, social media, and web content. In this session, Noah will demonstrate how LLMs can be used for entity extraction, semantic relationship recognition, and context inference to generate interconnected knowledge graphs. This session will hopefully inspire you to harness LLMs for your own unstructured data.
  • Science & Technology
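
As a rough illustration of the pipeline the description outlines (entity extraction, relationship recognition, graph writing), here is a minimal sketch assuming the OpenAI Python client and the official Neo4j driver. The prompt wording, the `gpt-4o-mini` model choice, and the `Entity`/`RELATED` labels are illustrative assumptions, not the talk's actual code (that lives in the GitHub repo mentioned in the comments):

```python
import json

from neo4j import GraphDatabase    # pip install neo4j
from openai import OpenAI          # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical prompt; the talk's actual prompt is in the linked repo.
EXTRACTION_PROMPT = """Extract entities and relationships from the text.
Return JSON like {"entities": [{"name": "...", "type": "..."}],
"relations": [{"source": "...", "type": "...", "target": "..."}]}.

Text: """

def extract_graph(text: str) -> dict:
    """Entity extraction + relationship recognition in one LLM call."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": EXTRACTION_PROMPT + text}],
        response_format={"type": "json_object"},  # force parseable output
    )
    return json.loads(resp.choices[0].message.content)

def write_graph(driver, graph: dict) -> None:
    """MERGE the extracted nodes and edges so re-runs stay idempotent."""
    with driver.session() as session:
        for e in graph["entities"]:
            session.run("MERGE (n:Entity {name: $name}) SET n.type = $type",
                        name=e["name"], type=e["type"])
        for r in graph["relations"]:
            session.run("MATCH (a:Entity {name: $src}), (b:Entity {name: $dst}) "
                        "MERGE (a)-[:RELATED {type: $type}]->(b)",
                        src=r["source"], dst=r["target"], type=r["type"])

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
write_graph(driver, extract_graph("Neo4j was founded in Sweden by Emil Eifrem."))
driver.close()
```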

Comments • 17

  • @joshuacunningham7912 • 4 months ago • +3

    Thank you very much for this helpful and inspiring presentation!

  • @capri300 • 3 months ago • +2

    Nice talk. Concise, and providing just the right amount of information. Massive thank you for using animations in your slides; it helped tremendously with your flow. Trying the GitHub repo as we speak.

    • @neo4j • 2 months ago

      Thank you

  • @ahmed_hefnawy1811 • 2 months ago • +3

    Chunking is one of the most important steps in building a stable RAG flow, and KGs will change the RAG game (see the sketch after the last comment below).

  • @tacticalforesightconsultin453 • 2 months ago • +2

    I did and presented a project like this, with more transparency, over 5 years ago, and completed it within a few weeks' time. The only concern was with polysemy (words with multiple parts of speech).
    It really helped to condense the information down and easily see implications across the documents.

  • @kennethnielsen3453 • 4 months ago • +6

    Surprised you didn't use the Matrix movies instead :D

  • @user-bl4vm2nv8g • 2 days ago

    Can you provide information regarding seed-from-URI for the Azure Storage seed provider?

  • @user-fe6ns8mu5p • 4 months ago • +4

    Quick question: let's say we are working with maybe 100s of files to create a graph. Wouldn't it be too costly to use an LLM?

    • @MrRubix94 • 3 months ago • +1

      That's the real question

    • @user-fe6ns8mu5p • 3 months ago

      @MrRubix94 Any idea on how we can solve it?

    • @MrRubix94 • 3 months ago

      @user-fe6ns8mu5p No idea. I have yet to dive into the subject myself.

    • @MrGara1994 • 3 months ago • +1

      I think what you do there is pre-index the vector database and, before sending the request, preload the top-n chunks. Most likely you also optimize the knowledge graph by limiting the number of tokens per chunk to the optimal number for different tasks (a sketch of this follows the last comment).

    • @user-fe6ns8mu5p • 3 months ago

      @MrGara1994 If that's the case, this might be a dumb question 😅, but if we are using vectors to get the top-n chunks, then is there any difference between doing KG and normal vector search?
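
A minimal sketch of the cost-control idea in the thread above (and of the chunking step @ahmed_hefnawy1811 mentions), assuming the OpenAI embeddings API: split documents once, embed once up front, then at query time send only the top-n chunks to the expensive model. The chunk sizes, model name, and helper names here are illustrative assumptions, not from the talk:

```python
import numpy as np                 # pip install numpy openai
from openai import OpenAI

client = OpenAI()
documents = ["...file one...", "...file two..."]  # stand-in for the 100s of files

def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Fixed-size word windows with overlap; capping chunk size caps token cost."""
    step = size - overlap
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# One-time "pre-indexing" pass: embeddings are computed once, not per query.
chunks = [c for doc in documents for c in chunk(doc)]
index = embed(chunks)

def top_n(query: str, n: int = 3) -> list[str]:
    """Cosine similarity against the precomputed index; only these few chunks
    are ever sent to the expensive generation model."""
    q = embed([query])[0]
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(scores)[::-1][:n]]
```

On the last question in the thread: the retrieval step here can be identical for both approaches; the KG route in the first sketch additionally spends LLM calls to make entities and relationships explicit, which is what distinguishes it from plain vector search.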