Graph RAG: Improving RAG with Knowledge Graphs

  • Published Nov 25, 2024

Comments • 134

  • @aswarytiwari8806
    @aswarytiwari8806 4 months ago +43

    Would really appreciate the comparison of all the Graph based RAG frameworks, like LlamaIndex's Property Graph RAG, MS's GraphRAG etc.

  • @roguesecurity
    @roguesecurity 4 months ago +29

    Thanks for sharing this video. It was very informative. I would love to see a comparison between different Graph RAG solutions.

    • @danieljbukowski
      @danieljbukowski 4 months ago

      +1 to this -- very interesting and would love to hear about other approaches using graphs!

    • @andytan4978
      @andytan4978 4 months ago

      +1

    • @potatodog7910
      @potatodog7910 4 months ago

      Same

  • @alexisdamnit9012
    @alexisdamnit9012 several months ago +1

    Really good walkthrough of Microsoft's GraphRAG. The associated LLM API costs are definitely significant. But even if you want to use local or in-house LLMs, you'll still have to pay for the infrastructure and/or compute time.

  • @kennethpinpin9053
    @kennethpinpin9053 4 months ago +11

    Excellent, concise introduction to GraphRAG and why, how, and when it is needed. Please compare this with Neo4j and LlamaIndex.

  • @abhijitbarman
    @abhijitbarman 4 months ago +13

    Thanks for sharing this video. I would love to see a comparison between different Graph RAG solutions.

    • @aussie84tom
      @aussie84tom 4 months ago +3

      Would love this too, especially Neo4j and a focus on running it locally or with Groq.

    • @shubhamgattani5357
      @shubhamgattani5357 4 months ago

      @@aussie84tom Agree!

    • @TijsZwinkels
      @TijsZwinkels 4 months ago

      +1 - Would be very interesting to see whether these perform comparably, or whether one is better than the other.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 4 months ago +10

    This looks great. Yes, I would be interested in a comparison against LlamaIndex.

  • @VivekHaldar
    @VivekHaldar 4 months ago +5

    Love the focus on the cost tradeoff. Thanks!

  • @danielshurman1061
    @danielshurman1061 4 months ago +3

    Another great choice of an important topic, and a video that shows the current best implementations. Thank you!

    • @engineerprompt
      @engineerprompt  4 months ago

      Thank you. Always glad to see your comments.

  • @Independent_AI
    @Independent_AI 4 months ago +1

    Great job. Please publish more videos about GraphRAG and its competitors.

  • @aa-xn5hc
    @aa-xn5hc 4 months ago +9

    Great. Please do more on this topic.

  • @AyanKhan-dc3eu
    @AyanKhan-dc3eu 2 months ago +1

    I think using open-source models like Llama 3.1 locally would really fit the GraphRAG process. Companies that have their own infrastructure to run LLMs can take advantage of this.

    • @kk008
      @kk008 several months ago

      Yes, I used the same and it's good.

  • @robboerman9378
    @robboerman9378 4 months ago

    Very useful, and also good to share the associated cost. It would be very interesting to see a video comparing the different GraphRAG implementations, as you mentioned.

  • @mehmetbakideniz
    @mehmetbakideniz 4 months ago

    Very useful indeed! Please publish more videos comparing different frameworks in terms of accuracy and cost.

  • @rajpootatul02
    @rajpootatul02 4 months ago

    Thanks for sharing this video. It was highly informative. I'd be interested in seeing a comparison of different Graph RAG solutions.

  • @paulmiller591
    @paulmiller591 4 months ago +1

    Great video and very timely. Please do more like these.

    • @engineerprompt
      @engineerprompt  4 months ago

      Thank you, planning on doing that :)

  • @mr.anonymus6110
    @mr.anonymus6110 4 months ago

    Comparing them, and a short video showcasing it on a local LLM, would be nice for sure.

  • @CollinParan
    @CollinParan 4 months ago +11

    Interested in the Neo4j version.

  • @mehmetnaciakkk3983
    @mehmetnaciakkk3983 3 months ago

    Thanks for this. Yes, a comparison of Microsoft's GraphRAG with Neo4j's and LlamaIndex's implementations would be great!

  • @MPReilly2010
    @MPReilly2010 4 months ago

    Excellent! Outstandingly thorough and clear details. Subscribing now.

  •  2 months ago

    Awesome! Please do one with open source models too.

    • @engineerprompt
      @engineerprompt  2 months ago +1

      Here is the open-source version: th-cam.com/video/_XOCAVsr3KU/w-d-xo.html

    •  2 months ago

      @@engineerprompt thanks

  • @Ayush-tl3ny
    @Ayush-tl3ny 4 months ago +4

    Can you please explain how we can change the embedding model to use an open-source embedding model?

  • @jekkleegrace
    @jekkleegrace 4 months ago +2

    Yes, please compare with LlamaIndex.

  • @dawid_dahl
    @dawid_dahl 3 months ago

    Great overview. Thank you!

  • @IslandDave007
    @IslandDave007 4 months ago

    It would be very interesting to see the results of this in global mode compared to dropping the whole book into the prompt of Gemini or Claude, if it fits under their token limit. Obviously, once you get beyond those limits, RAG is required.
    It would also be great to see this run fully locally against a standard local RAG solution using the exact same LLM.

  • @ilianos
    @ilianos 4 months ago

    Regarding the cost implications (last part of the video): I think
    - a benchmark of different LLMs with GraphRAG
    and/or
    - a request router (but specialised for GraphRAG)
    would be suitable for picking the right LLM for the right kind of request.

    • @engineerprompt
      @engineerprompt  4 months ago

      Agree, I might do a video comparing different models when it comes to cost. It will be an interesting comparison.

  • @ilianos
    @ilianos 4 months ago +2

    One thing that I don't understand: why is it limited to only 300 tokens, when nearly all of the current best models have 128k context now?

    • @aaronhhill
      @aaronhhill 4 months ago

      Yeah, Claude has a 200,000 token limit and Gemini now has a 2,000,000 token limit.

    • @engineerprompt
      @engineerprompt  4 months ago +2

      During RAG, you have to chunk your documents into sub-documents. The idea is that you only want to use the parts of the document that are relevant to the user query, since each token costs money. You get to choose the chunk size. In the paper they used 300, but you can set it to any value you want. Larger chunks will incur more charges if you are using an API.
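
      For illustration, a minimal token-based chunking sketch of the idea described above; the 300-token size and the overlap are configurable values, and the tiktoken tokenizer used here is an assumption, not something the video prescribes:

      import tiktoken  # any tokenizer works; cl100k_base matches OpenAI models

      def chunk_text(text: str, chunk_size: int = 300, overlap: int = 50) -> list[str]:
          """Split text into overlapping windows of roughly chunk_size tokens."""
          enc = tiktoken.get_encoding("cl100k_base")
          tokens = enc.encode(text)
          chunks = []
          step = chunk_size - overlap
          for start in range(0, len(tokens), step):
              chunks.append(enc.decode(tokens[start:start + chunk_size]))
          return chunks

      # Larger chunk_size means fewer LLM calls but more (paid) tokens per call.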

    • @jefframpe5075
      @jefframpe5075 4 months ago

      Greg Kamradt @Dataindependent has a good video on the 5 levels of text splitting for RAG, which explains the trade-offs between different methods. th-cam.com/video/8OJC21T2SL4/w-d-xo.htmlsi=Go7eAYu0kkL_exiv

  • @tasfiulhedayet
    @tasfiulhedayet 4 months ago +1

    Interested to know about the differences between those frameworks.

  • @DayLearningIT-hz5kj
    @DayLearningIT-hz5kj 4 months ago

    Please share more about this topic 👍👍 So the most expensive part is creating the graph?
    Do you think it really improves the accuracy of the responses?

  • @MuhammadZubair-n7d
    @MuhammadZubair-n7d 4 months ago

    Yes, please make a comparison video. Do you have videos on using other techniques as well?

    • @engineerprompt
      @engineerprompt  4 months ago +1

      Will do the comparison. I don't have those videos up yet but am planning on them soon.

  • @dewanjeesoma
    @dewanjeesoma 3 months ago

    Thanks for sharing the video. I would love to see a comparison between different Graph RAG solutions. One question: for incremental document ingestion, how are the relationships built against the existing document set?

  • @sebastianmanns5391
    @sebastianmanns5391 4 months ago +1

    Would be suuuuper nice if you could compare the Microsoft versus LlamaIndex versus Neo4j implementations :)

  • @GoranSkular
    @GoranSkular 2 months ago

    Thank you! Great video. I would also like a comparison with Neo4j if possible. Thanks!

  • @unclecode
    @unclecode 4 months ago

    You know, I wonder if you could check the cost breakdown and token consumption for the indexing and inference phases. How much of the seven dollars is for inference? If it's low, we could just index once, right? But seven dollars is huge, so it doesn't work. Also, I saw that 900,000+ context tokens and around 160,000 generated tokens were used. There's room for improvement. I don't think we need 900,000 context tokens for that question. Maybe those context tokens are from the initial prompts used for indexing. If that's the case, rerunning it and noting the costs for indexing vs. inference would be useful.
    By the way, great content as usual. The knowledge graph concept really clicked for me; unlike linear, flat naive RAG, it builds on a longstanding computer science idea. Its connection to ontology, which interests me, is amazing. Remember WordNet and VerbNet? They were golden datasets for extracting relationships and similarities. Now, using large language models to generate knowledge bases from text lets us reuse those good old algorithms safely.
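
    For reference, a back-of-the-envelope check of those numbers; the per-token prices below are assumptions (roughly the GPT-4o list prices at the time), not figures from the video:

    # Rough cost estimate from the reported token counts (prices are assumed).
    input_tokens = 900_000      # context tokens reported
    output_tokens = 160_000     # generated tokens reported
    price_in_per_m = 5.00       # assumed USD per 1M input tokens
    price_out_per_m = 15.00     # assumed USD per 1M output tokens

    cost = input_tokens / 1e6 * price_in_per_m + output_tokens / 1e6 * price_out_per_m
    print(f"estimated cost: ${cost:.2f}")   # ~ $6.90, close to the ~$7 reported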

    • @engineerprompt
      @engineerprompt  4 months ago +1

      Just checked: there were only 25 calls to the embedding API and about 570 calls to GPT-4o, so probably 90-95% of the cost is coming from the GPT-4o calls. That's during the index creation process, so this part will be needed. In any case, I think this process runs only once to create the graph, and then the retrieval part is pretty much like normal RAG. I will try to run the indexing with something like Haiku, which will be much lower cost but hopefully still good enough for indexing.
      I agree with you. I think the LLM community needs to look back at some of the techniques built earlier for IR and reuse them. We will get much better performance out of these LLM-based applications.

    • @unclecode
      @unclecode 4 months ago

      @@engineerprompt Interesting. In that case, a viable production-level solution is to fine-tune a small model (~1B) specifically for generating knowledge graphs, with different flavors for topics like medicine, math, and business. This would create accurate knowledge graphs at a much lower cost. It's a great business opportunity. Honestly, I never thought of naive RAG as a whole pipeline! To answer complex questions, creating a knowledge graph is essential; perfect for a startup.
      Secondly, if inference is like normal RAG, that's good. But we need to dig into the GraphRAG library to see how much context it fetches from the knowledge graph and injects into the prompt. If it's too much, we'll have an issue. It's about heuristically picking the minimal context needed to produce answers, so there's an optimization opportunity here.
      Summary: use a small language model for generating the knowledge graph, and optimize how much context the GraphRAG library injects from the knowledge graph to answer questions. Please check the latter if you find time.

  • @andrewandreas5795
    @andrewandreas5795 4 months ago

    Great video! Could this be used with other models like Claude, or maybe even open-source models?

  • @armans4494
    @armans4494 4 months ago

    Great video! Thank you

  • @dcmumby
    @dcmumby 4 months ago

    Thanks. In what cases would traditional RAG be more effective? Would love to see a more detailed implementation, as well as the use of alternative providers. Also very interested in a comparison with the competition.

    • @engineerprompt
      @engineerprompt  4 months ago

      In cases where you have a well-structured QA dataset, e.g. HR policies. In those cases you don't really need advanced techniques like knowledge graphs.

  • @mohanmadhesiya3116
    @mohanmadhesiya3116 4 months ago +1

    We want comparison videos.

  • @GoldenkingGT101
    @GoldenkingGT101 4 months ago +3

    Can it be done with Ollama?

    • @CollinParan
      @CollinParan 4 months ago

      He said you can in the video, by changing the endpoint in your settings file.
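
      For illustration, a minimal sketch of the general idea: point an OpenAI-compatible client at a local Ollama server. The model name and the localhost endpoint are assumptions based on Ollama's OpenAI-compatible API; GraphRAG's settings file exposes a similar api_base/model override, though the exact keys may differ by version.

      from openai import OpenAI

      # Ollama serves an OpenAI-compatible API; no real key is needed locally.
      client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

      resp = client.chat.completions.create(
          model="llama3.1",  # assumes the model was pulled with `ollama pull llama3.1`
          messages=[{"role": "user", "content": "Extract the entities in: 'Scrooge met Marley.'"}],
      )
      print(resp.choices[0].message.content)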

    • @paulmiller591
      @paulmiller591 4 months ago

      Yes, he did mention that was an option.

    • @kk008
      @kk008 several months ago

      Yes, I did and it is possible.

  • @jeffsteyn7174
    @jeffsteyn7174 4 months ago

    Excellent. Going to subscribe for more info. Maybe I missed it, but it seems like there's no specialised DB used, as with Neo4j. So effectively we can use any data store?

  • @remyrflIt
    @remyrflIt 4 months ago +1

    Does it have to use an OpenAI API key to make this work, or will other OpenAI-compatible APIs (like Perplexity or Together) work too?

    • @engineerprompt
      @engineerprompt  4 months ago +1

      This works on OpenAI or compatible APIs.

  • @josephroman2690
    @josephroman2690 4 months ago

    Interested in a video comparing Neo4j, LlamaIndex and this one.

  • @qicao7769
    @qicao7769 4 months ago +2

    After seeing the cost caused by GraphRAG, I am going to change the embedding settings and switch to my local LLM... too expensive.

    • @engineerprompt
      @engineerprompt  4 months ago

      Agreed, but you definitely want to watch my next video on the topic.

  • @tushitapatel5782
    @tushitapatel5782 2 months ago +1

    What are "communities"?

  • @JatinKashyap-Innovision
    @JatinKashyap-Innovision 4 months ago

    Please create the comparison video. Thanks.

  • @DEEPANMN
    @DEEPANMN 4 months ago +1

    Is it possible to use a NetworkX graph instead of an LLM-generated graph? I have a ready-made graph on my private dataset.

    • @engineerprompt
      @engineerprompt  4 months ago +1

      I think it's possible. You will need the LLM to generate a query that can traverse the graph.
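
      For illustration, a minimal sketch of that idea with NetworkX: entities named in the question are matched against graph nodes and their neighborhood is serialized as context for the LLM. The graph contents and the match-by-name heuristic are hypothetical.

      import networkx as nx

      # A pre-built entity graph, e.g. loaded from a private dataset.
      G = nx.Graph()
      G.add_edge("Scrooge", "Marley", relation="business partner of")
      G.add_edge("Scrooge", "Bob Cratchit", relation="employer of")

      def graph_context(question: str) -> str:
          """Collect edges around entities mentioned in the question as LLM context."""
          mentioned = [n for n in G.nodes if n.lower() in question.lower()]
          lines = []
          for node in mentioned:
              for nbr in G.neighbors(node):
                  rel = G.edges[node, nbr].get("relation", "related to")
                  lines.append(f"{node} -[{rel}]-> {nbr}")
          return "\n".join(lines)

      print(graph_context("Who does Scrooge work with?"))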

  • @bayu_sm
    @bayu_sm 4 months ago

    Thanks for sharing, I like the explanation. Please compare this with LlamaIndex and Neo4j.

  • @gregsLyrics
    @gregsLyrics 4 months ago

    Always amazing vid and wisdom. I do wish there was a Star Trek Vulcan mind meld available. There is so much I do not understand, and I think I am jumping in way ahead of the learning curve, which makes this task that much more difficult. I have a vast amount of scanned PDFs that I think are perfect for this, except for the "scanned" part. These documents are not machine readable, and I greatly fear OCR (garbage in, garbage out). Plus I have tens of thousands of pages of bank statements, credit card statements, investment statements, and tax returns that I need to analyze and tie to the data. Currently doing all of this manually with printed pages, post-it notes and highlighters. Will your RAG course get me up to speed and teach a design process, with the various tools, that I will be able to grasp?

    • @engineerprompt
      @engineerprompt  4 months ago

      The RAG course is focused on different techniques for building better RAG applications. In this course, I don't go too much into the ingestion part. For your problem, you might want to look into multimodal models like Claude or Gemini, where you can directly process these scanned pages, but the cost can add up quickly.

    • @BACA01
      @BACA01 4 months ago

      For this you'll need RAG with local vision LLMs.

  • @dhruvparthasarathy2050
    @dhruvparthasarathy2050 several months ago

    How well do these techniques work for mapping relationships between various entities in our codebase?

  • @mrchongnoi
    @mrchongnoi 4 months ago

    Thank you for the video, very good. Very expensive solution. It seems to me that this solution would not work for a company that has thousands of documents.

    • @engineerprompt
      @engineerprompt  4 months ago

      Yes, in its current form it's going to be expensive. Now, if you were to replace GPT-4o with an open-weights or less expensive model, it could be a different story, IF they provide the same level of accuracy.

  • @ramielkady938
    @ramielkady938 2 months ago

    This video is for the IT folks that will set this up for users, correct? Can you do a similar video for users where you actually SHOW the graph and how to use it?

  • @mohammmedalmulhem622
    @mohammmedalmulhem622 4 months ago

    Thank you for the great video. Is there a maximum input size for creating a GraphRAG index? Every time I try to insert my documents, I get an error with "create_summarized_entities".

  • @DePhpBug
    @DePhpBug 4 months ago

    Been wondering, especially about the book.txt: let's say it is a .json object, will RAG work well with JSON objects when we feed that data into embedding etc. through the RAG process?

  • @joshlyn9041
    @joshlyn9041 4 months ago

    Thanks for the video. Could you explain how we can get to the actual data sources that GraphRAG cites, like [Data: Reports (63,65)]? Cheers

  • @ledinhngoc1102
    @ledinhngoc1102 4 months ago

    Is there any coverage of entity ambiguity or entity linking in Graph RAG? How does the system handle entities that have different forms but are synonyms?

  • @kk008
    @kk008 several months ago

    Is it possible to input an existing knowledge graph rather than text data?

  • @u.a3
    @u.a3 4 months ago

    Great video! When you mentioned the costs, was it the cost of implementing and creating the graph RAG, or does that also include the answers retrieved? If every answer costs that much, then surely the price is too high. But if it is just a one-time high cost of creating the graph RAG, and the actual answers don't cost as much, then I guess it is not really an issue.

    • @engineerprompt
      @engineerprompt  4 months ago +1

      This was the combined cost of the graph RAG creation and the query part. The major portion comes from the dataset creation part.

    • @u.a3
      @u.a3 4 months ago

      @@engineerprompt That clears it up, thank you!!

  • @henkhbit5748
    @henkhbit5748 4 months ago

    Thanks. If we use an open-source LLM, or use Groq, then the cost will be less, right? Are there examples using Python code instead of the command line?

  • @ganesh_kumar
    @ganesh_kumar 4 months ago

    Barring the cost, can we use it for a large number of docs, and will it have good accuracy? What about the time taken for the complete indexing? And latency?

  • @rockypunk91
    @rockypunk91 4 months ago

    If you index different documents at different points in time, you end up with multiple artifacts in the output folder.
    How should one search over all the outputs, like in a production-level application?

  • @KristianSchou1
    @KristianSchou1 4 months ago

    Have you experimented with a custom document loader? I'm working on a RAG system at work, and I've found that the prebuilt loaders are severely lacking when it comes to handling text extraction from PDF files from "the real world". Would be nice to hear your thoughts.

    • @IamMarcusTurner
      @IamMarcusTurner 4 months ago

      LlamaParse from LlamaIndex could be a good choice. They noticed the same.

    • @BACA01
      @BACA01 4 months ago

      You would need RAG with a local vision LLM.

  • @awakenwithoutcoffee
    @awakenwithoutcoffee 4 months ago

    Question though: can GraphRAG work with multi-modal data? I would actually think why not, as a picture can be classified/linked to a specific node.

    • @engineerprompt
      @engineerprompt  4 months ago +1

      I think it will be able to. Usually, for multimodal RAG, you use a vision model to generate descriptions for each image and preserve the links. Then you can embed the image descriptions just like you do for text.
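
      For illustration, a minimal sketch of that pattern: caption each image with a vision-capable model, keep the link to the original file, and embed the caption like any other text. The model names and the image URL are assumptions.

      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set

      def describe_image(image_url: str) -> str:
          """Ask a vision-capable model for a retrieval-friendly description."""
          resp = client.chat.completions.create(
              model="gpt-4o",  # assumed vision-capable model
              messages=[{
                  "role": "user",
                  "content": [
                      {"type": "text", "text": "Describe this image for retrieval."},
                      {"type": "image_url", "image_url": {"url": image_url}},
                  ],
              }],
          )
          return resp.choices[0].message.content

      def embed(text: str) -> list[float]:
          resp = client.embeddings.create(model="text-embedding-3-small", input=text)
          return resp.data[0].embedding

      # Keep the link back to the original image alongside its embedded description.
      image_url = "https://example.com/figure1.png"  # hypothetical
      record = {"image_url": image_url, "description": describe_image(image_url)}
      record["vector"] = embed(record["description"])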

    • @awakenwithoutcoffee
      @awakenwithoutcoffee 4 months ago

      @@engineerprompt Gotcha. How and where do you think the images would be stored/retrieved? Let's say we attach a summary of a picture + metadata about the page/chapter/subject/id: maybe we could store the images in a separate database and retrieve them at runtime. Cheers!

  • @BACA01
    @BACA01 4 months ago

    Can we use it with VLLMs?

  • @Rajesh-v1g
    @Rajesh-v1g 3 months ago

    What would the cost be if, instead of GPT-4o, we used the open-source Llama 3.1 model? Please share your insights. Thanks.

    • @mudamanagement
      @mudamanagement 3 months ago +1

      Cost comes down to tokens vs. compute. You can use hosted endpoints or run on your own GPU/LPU. While hosted endpoints are still lower cost than GPT-4, if you're going through ~60B tokens monthly, assuming a 50/50 input vs. output split, hosting your own model would be more advantageous.
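
      For illustration, a rough break-even sketch under assumed numbers; the per-token rate, GPU price, and throughput below are placeholders to show the comparison, not quotes:

      # Hypothetical monthly volume and prices; swap in real quotes before deciding.
      tokens_per_month = 60e9          # ~60B tokens, 50/50 input vs output
      hosted_price_per_m = 0.60        # assumed blended USD per 1M tokens on a hosted endpoint
      gpu_cost_per_hour = 2.00         # assumed rental cost of one GPU
      gpu_tokens_per_second = 1_500    # assumed throughput of a self-hosted model

      hosted_cost = tokens_per_month / 1e6 * hosted_price_per_m
      gpu_hours = tokens_per_month / gpu_tokens_per_second / 3600
      self_hosted_cost = gpu_hours * gpu_cost_per_hour

      print(f"hosted:      ${hosted_cost:,.0f}/month")       # ~$36,000 under these assumptions
      print(f"self-hosted: ${self_hosted_cost:,.0f}/month")  # ~$22,000 under these assumptions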

  • @fintech1378
    @fintech1378 4 months ago

    Cost will come down significantly in the near future; as long as it works, people should start building out the workflow.

  • @lavamonkeymc
    @lavamonkeymc 4 months ago

    How did you make the flowchart with Sonnet?

    • @engineerprompt
      @engineerprompt  4 months ago

      I provided the paper to it and then asked it to create a visual representation of the proposed method. Usually, it takes a couple of iterations for it to do a good job.

  • @muhammadazeembinrosli3806
    @muhammadazeembinrosli3806 2 months ago

    Can anyone suggest how I can deploy the indexer and retriever in the Azure serverless environment?

  • @MehdiAllahyari
    @MehdiAllahyari 4 months ago

    Great video for showing how GraphRAG works. However, as interesting as this GraphRAG approach may sound, it's not a practical one to use in production. For a fairly short text/book (65 pages) it cost $7. If you want to process hundreds of documents, it'll be prohibitively expensive!

    • @engineerprompt
      @engineerprompt  4 months ago +1

      Agree. I think with time the cost will go down, or other approaches will evolve based on this.

    • @MehdiAllahyari
      @MehdiAllahyari 4 months ago

      @@engineerprompt Yes, the cost is already very low and will go even lower. But even if the price per token becomes very low, the overall cost will still be high, because then people will upload more and more text and the same thing happens. For me GraphRAG really has nothing new. It's simply hype. There are better tools for extracting entities and relationships. We have a couple of videos about RAG and KGs on our channel. It's called twosetai. Check it out. 😊

  • @COC-ys5ir
    @COC-ys5ir several months ago

    Sir, I am getting an error like "number of columns must be the same as keys".
    Please help.

  • @aa-xn5hc
    @aa-xn5hc 4 months ago

    What about Ollama?

  • @sriramkommi7684
    @sriramkommi7684 4 months ago

    UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte. Bro, I'm getting this error while running the indexing pipeline. Can you please help me?
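
    A 0xff byte at position 0 usually means the input file starts with a UTF-16 byte-order mark rather than being UTF-8. One possible fix, sketched below, is to re-save the file as UTF-8 before indexing; the file path is hypothetical:

    from pathlib import Path

    path = Path("input/book.txt")  # hypothetical input file
    raw = path.read_bytes()

    # b"\xff\xfe" / b"\xfe\xff" are UTF-16 byte-order marks; otherwise assume UTF-8.
    encoding = "utf-16" if raw[:2] in (b"\xff\xfe", b"\xfe\xff") else "utf-8"
    path.write_text(raw.decode(encoding), encoding="utf-8")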

  • @JumpDiffusion
    @JumpDiffusion 4 months ago

    Nowhere in the paper ("From Local to Global...") do they mention the use of an embedding model.

  • @bigqueries
    @bigqueries 4 months ago

    Please compare all the Graph RAG frameworks.

  • @MoadKISSAI
    @MoadKISSAI 4 months ago

    Can you please make a comparison with Neo4j?

  • @malikrumi1206
    @malikrumi1206 4 months ago

    Why have knowledge graphs become all the rage? How is that any better than a traditional database? Note that PostgreSQL already has a vector store extension, and since it has been out for a while now, I would assume all the others do, too. Since a number of others are requesting a head-to-head comparison with LlamaIndex, why not include a comparison with a traditional RDBMS? In one version, the model uses an SQL tool to find (read) the query answer in Postgres, and in the second, it looks for vectors with pgvector?!
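
    For reference, the "traditional RDBMS" baseline described above is easy to sketch with pgvector: embed the query and order rows by vector distance. The table name, column names, and connection string are hypothetical; <=> is pgvector's cosine-distance operator.

    import psycopg2
    from openai import OpenAI

    client = OpenAI()
    question = "Who is Scrooge's business partner?"
    qvec = client.embeddings.create(model="text-embedding-3-small", input=question).data[0].embedding

    conn = psycopg2.connect("dbname=rag user=postgres")  # hypothetical connection string
    with conn, conn.cursor() as cur:
        # chunks(content text, embedding vector) is a hypothetical pgvector table.
        cur.execute(
            "SELECT content FROM chunks ORDER BY embedding <=> %s::vector LIMIT 5",
            ("[" + ",".join(map(str, qvec)) + "]",),
        )
        rows = cur.fetchall()
    print([content for (content,) in rows])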

  • @sanjaybhatikar
    @sanjaybhatikar several months ago

    When I heard Kamala talk of price controls, I was sad but when I saw people cheering her on I was heartbroken. USA has fundamentally changed. There is no coming back from this.

  • @svenandreas5947
    @svenandreas5947 4 months ago

    Ollama is not working for embeddings.

    • @engineerprompt
      @engineerprompt  4 months ago

      Were you able to use the LLM via Ollama? I think for embeddings there is not a standard API, so that's why it's not working.

  • @mogliff3414
    @mogliff3414 4 months ago

    I am not able to get RAG Beyond Basics because of the card payment requirement. Need another platform, say paid YouTube etc.

    • @engineerprompt
      @engineerprompt  4 months ago

      Can you please clarify? Are you not able to sign up because of the payment method? Please email me and we can sort it out. Thanks.

  • @j4cks0n94
    @j4cks0n94 4 months ago

    That is SO expensive for just a book.

  • @jayco10125
    @jayco10125 4 months ago

    Use the GPT-4o mini model.

    • @engineerprompt
      @engineerprompt  4 months ago

      Would be a good option.

  • @renandiniz2019
    @renandiniz2019 4 months ago

    Compare Microsoft's with Neo4j's.

  • @lesptitsoiseaux
    @lesptitsoiseaux 4 months ago

    How can I add metadata that could be valuable to use as entities? I don't mind buying your course if it helps my use case: I am building a recommendation engine for classes based on transcripts (multiple per class).

  • @marconeves9018
    @marconeves9018 4 months ago

    Are you not familiar with local hosting? Just wondering why you don't opt to showcase these tool integrations by going local instead of using paid APIs. You closed the video with the premise of this being more expensive, but that's greatly reduced if you're hosting it yourself.

    • @engineerprompt
      @engineerprompt  4 months ago

      Check out other videos on my channel :)

  • @VinMan-ql1yu
    @VinMan-ql1yu 4 months ago

    "It only supports plain text" kind of killed the video for my use case... I was looking for something more advanced...

  • @Sri_Harsha_Electronics_Guthik
    @Sri_Harsha_Electronics_Guthik 4 months ago

    th-cam.com/video/vX3A96_F3FU/w-d-xo.html
    Can we use a local model for GRAPHRAG_API_KEY?
