What is Retrieval-Augmented Generation (RAG)?

  • Published Dec 23, 2024

Comments • 550

  • @xzskywalkersun515
    @xzskywalkersun515 1 year ago +914

    This lecturer should be given credit for such an amazing explanation.

    • @cosmicscattering5499
      @cosmicscattering5499 10 months ago +8

      I was thinking the same, she explained this so clearly.

    • @tariqmking
      @tariqmking 9 months ago +4

      Yes this was excellently explained, kudos to her.

    • @brianmi40
      @brianmi40 9 months ago +19

      Or at least credit for being able to write backwards!

    • @victoriamilhoan512
      @victoriamilhoan512 7 months ago +3

      The connection between a human answering a question in real life vs how LLMs (with or without RAG) do it was so helpful!

    • @aguiremedia
      @aguiremedia 7 months ago +1

      Why? ChatGPT wrote it.

  • @vt1454
    @vt1454 1 year ago +539

    IBM should start a learning platform. Their videos are so good.

    • @XEQUTE
      @XEQUTE 1 year ago +10

      i think they already do

    • @srinivasreddyt9555
      @srinivasreddyt9555 9 months ago

      Yes, they have it already. YouTube.

    • @siddheshpgaikwad
      @siddheshpgaikwad 8 months ago +4

      It's a mirrored video; she wrote naturally and the video was flipped later

    • @Hossam_Ahmed_
      @Hossam_Ahmed_ 8 months ago

      They have SkillsBuild, but not videos, at least for most of the content

    • @CaptPicard81
      @CaptPicard81 8 months ago

      They do, I recently attended a week-long AI workshop based on an IBM curriculum

  • @geopopos
    @geopopos 9 months ago +92

    I love seeing a large company like IBM invest in educating the public with free content! You all rock!

    • @theupsider
      @theupsider 10 days ago

      Apparently there are scientists in charge who are pushing for such an agenda. Love to see it.

  • @ericadar
    @ericadar 1 year ago +103

    Marina is a talented teacher. This was brief, clear and enjoyable.

  • @jordonkash
    @jordonkash 10 months ago +90

    4:15 Marina combines the colors of the word prompt to emphasize her point. Nice touch

  • @ntoscano01
    @ntoscano01 11 months ago +35

    Very well explained!!! Thank you for your explanation of this. I'm so tired of 45-minute YouTube videos with a college-educated professional trying to explain ML topics. If you can't explain a topic in your own language in 10 minutes or less, then you have failed either to understand it yourself or to communicate effectively.

  • @ReflectionOcean
    @ReflectionOcean 1 year ago +31

    1. Understanding the challenges with LLMs - 0:36
    2. Introducing Retrieval-Augmented Generation (RAG) to solve LLM issues - 0:18
    3. Using RAG to provide accurate, up-to-date information - 1:26
    4. Demonstrating how RAG uses a content store to improve responses - 3:02
    5. Explaining the three-part prompt in the RAG framework - 4:13
    6. Addressing how RAG keeps LLMs current without retraining - 4:38
    7. Highlighting the use of primary sources to prevent data hallucination - 5:02
    8. Discussing the importance of improving both the retriever and the generative model - 6:01

  • @TheAllnun21
    @TheAllnun21 1 year ago +30

    Wow, this is the best beginner's introduction I've seen on RAG!

  • @natoreus
    @natoreus 7 months ago +25

    I'm sure it was already said, but this video is the most thorough, simple way I've seen RAG explained on YT hands down. Well done.

  • @AlexandraSteskal
    @AlexandraSteskal 4 months ago +3

    I love IBM teachers/trainers, I used to work at IBM and their in-house education quality was AMAZING!

  • @digvijaysingh6882
    @digvijaysingh6882 6 months ago +16

    Einstein said, "If you can't explain it simply, you don't understand it well enough." And you explained it beautifully, in the most simple and easy-to-understand way 👏👏. Thank you

  • @aam50
    @aam50 1 year ago +20

    That's a really great explanation of RAG in terms most people will understand. I was also sufficiently fascinated by how the writing on glass was done to go hunt down the answer from other comments!

  • @vikramn2190
    @vikramn2190 1 year ago +44

    I believe the video is slightly inaccurate. As one of the commenters mentioned, the LLM is frozen and the act of interfacing with external sources and vector datastores is not carried out by the LLM.
    The following is the actual flow:
    Step 1: User makes a prompt
    Step 2: Prompt is converted to a vector embedding
    Step 3: Nearby documents in vector space are selected
    Step 4: Prompt is sent along with selected documents as context
    Step 5: LLM responds with given context
    Please correct me if I'm wrong.

    • @judahb3ar
      @judahb3ar 8 months ago

      I'm not sure. Looking at the OpenAI documentation on RAG, they show a similar flow to the one in this video. I think the retrieval of external data is considered part of the LLM (at least per OpenAI)

    • @PlaytimeEntertainment
      @PlaytimeEntertainment 8 months ago +3

      I don't think retrieval is part of the LLM. The LLM is the frozen model you get at the end of training; it can't be modified at query time. Rather, after the LLM responds, you can always use that output to drive the next round of retrieval.

    • @velocityra
      @velocityra 7 months ago

      Thank you. So many people praising this even though it didn't explain anything that can't be googled in 2 seconds.
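
For readers following this thread: the five steps listed above can be sketched in a few lines of Python. This is a toy illustration (keyword-count vectors stand in for a real embedding model, and a stub stands in for the LLM), not the pipeline from the video.

```python
import math
from collections import Counter

# Illustrative content store, echoing the video's "most moons" example.
DOCS = [
    "Jupiter was long thought to have the most moons.",
    "Saturn has 146 confirmed moons, the most in the solar system.",
    "The Moon is Earth's only natural satellite.",
]

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

def embed(text, vocab):
    # Step 2: turn text into a vector (toy bag-of-words "embedding")
    counts = Counter(tokenize(text))
    return [counts[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rag_answer(prompt, docs, llm):
    vocab = sorted({w for text in docs + [prompt] for w in tokenize(text)})
    qvec = embed(prompt, vocab)                                       # Step 2
    nearest = max(docs, key=lambda d: cosine(embed(d, vocab), qvec))  # Step 3
    augmented = f"Context: {nearest}\nQuestion: {prompt}"             # Step 4
    return llm(augmented)                                             # Step 5

# Stub LLM: just echoes back the context line it was handed.
stub_llm = lambda p: p.splitlines()[0].removeprefix("Context: ")

print(rag_answer("Which planet has the most moons?", DOCS, stub_llm))
# The Saturn document wins the similarity comparison and grounds the answer.
```

Note that, consistent with the correction in this thread, the LLM itself stays frozen: retrieval and prompt assembly happen around it.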

  • @m.kaschi2741
    @m.kaschi2741 1 year ago +8

    Wow, I opened YouTube coming from the IBM blog just to leave a comment. Clearly explained, very good example, and well presented as well!! :) Thank you

  • @hamidapremani6151
    @hamidapremani6151 10 months ago +2

    The explanation was spot on!
    IBM is the go-to platform to learn about new technology, with high-quality content explained and illustrated with so much simplicity.

  • @maruthuk
    @maruthuk 1 year ago +22

    Loved the simple example to describe how RAG can be used to augment the responses of LLM models.

  • @AnjanaSilvaAJ
    @AnjanaSilvaAJ 26 days ago +1

    This is a fantastic video to learn about RAG in under 7 minutes. Thank you

  • @ltkbeast
    @ltkbeast 2 months ago +2

    Every time I watch one of these videos I'm amazed at the presenter's skill at writing backwards.

  • @kallamamran
    @kallamamran 11 months ago +5

    We also need the models to cross-check their own answers against the sources of information before printing the answer to the user. There is no self-control today; models just say things. "I don't know" is actually a perfectly fine answer sometimes!

  • @ghtgillen
    @ghtgillen 1 year ago +76

    Your ability to write backwards on the glass is amazing! ;-)

    • @jsonbourne8122
      @jsonbourne8122 1 year ago +35

      They flip the video

    • @Paul-rs4gd
      @Paul-rs4gd 11 months ago +12

      @@jsonbourne8122 So obvious, but I did not think of it. My idea was way more complicated!

    • @aykoch
      @aykoch 7 months ago +3

      They're almost always left-handed as well...

    • @7th_CAV_Trooper
      @7th_CAV_Trooper 6 months ago +11

      @@aykoch she is right-handed. When she writes, her arm moves away from her body; a left-handed arm would move toward the body. Because the video is flipped, it's a bit of a mind trick to see it.

    • @bikrrr
      @bikrrr 6 months ago +1

      ​@@jsonbourne8122 Nice attention to detail as they made sure the outfit was symmetrical without any logos and had a ring on each hand's ring finger, making it harder to tell it was flipped.

  • @jyhherng
    @jyhherng 1 year ago +6

    this lets me understand why the embeddings used to generate the vector store are a different set from the embeddings of the LLM... Thanks, Marina!

  • @javi_park
    @javi_park 11 months ago +67

    hold up - the fact that the board is flipped is the most underrated modern education marvel nobody's talking about

    • @RiaKeenan
      @RiaKeenan 10 months ago

      I know, right?!

    • @euseikodak
      @euseikodak 10 months ago +8

      They probably filmed it in front of a glass board and flipped the video in editing later

    • @politicallyincorrect1705
      @politicallyincorrect1705 10 months ago +1

      Filmed in front of a non-reflective mirror.

    • @TheTomtz
      @TheTomtz 9 months ago +2

      Just write on a glass board, record it from the other side, and laterally flip the image! Simple as that.. and please don't let thinking about the process behind the recording distract you from the content being lectured 🤣

    • @thewallstreetjournal5675
      @thewallstreetjournal5675 9 months ago +1

      Is the board flipped, or has she been flipped?

  • @kingvanessa946
    @kingvanessa946 11 months ago +1

    For me, this is the easiest-to-understand video explaining RAG!

  • @ssr142812
    @ssr142812 6 months ago

    I have a few questions here:
    1. When I prompt and the answer is not present in the content store, will I get text generated by the LLM alone?
    2. When my prompt matches embeddings in the content store, will I get content generated from both the LLM and the content store?
    3. How do I enforce the RAG framework in LangChain? Appreciate any answers
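
On questions 1 and 2 above: a common pattern is a similarity threshold, so a prompt with no good match in the content store falls back to the LLM alone (or to "I don't know"). A minimal sketch of that decision, using toy Jaccard scoring rather than real embeddings or the LangChain API:

```python
def jaccard(query, doc):
    # Toy similarity: word-set overlap between query and stored document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def answer(query, store, llm, threshold=0.2):
    best = max(store, key=lambda doc: jaccard(query, doc), default=None)
    if best is None or jaccard(query, best) < threshold:
        # Question 1: no usable match, so the LLM answers from its
        # weights alone (or you could return "I don't know" here).
        return llm(query)
    # Question 2: the match is folded into the prompt, so the response is
    # generated by the LLM *conditioned on* the stored content.
    return llm(f"Context: {best}\nQuestion: {query}")

echo_llm = lambda prompt: prompt  # stand-in LLM for demonstration

store = ["saturn has the most moons of any planet"]
grounded = answer("which planet has the most moons", store, echo_llm)
ungrounded = answer("what is the capital of france", store, echo_llm)
print(grounded.startswith("Context:"), ungrounded.startswith("Context:"))
```

Frameworks like LangChain package the same decision behind retriever and chain objects, but the underlying logic is this threshold-and-augment step.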

  • @Will-lg9ev
    @Will-lg9ev 6 months ago

    As a salesperson who actually loves tech: this was an awesome explanation, and the fact that it was visual helped a ton!!!! Thanks

  • @444Yielding
    @444Yielding 8 months ago +3

    This video has far too few views for how informative it is!

  • @sarangag
    @sarangag 2 months ago

    Nicely explained. My questions/doubts:
    1. Doesn't this raise questions about the process of building and testing LLMs?
    2. In such scenarios, will the test and training data used be considered authentic and not "limited and biased"?
    3. Is there a process/standard for how often the "primary source data" should be updated?

  • @GregSolon
    @GregSolon 10 months ago

    One of the easiest to understand RAG explanations I've seen - thanks.

  • @evaiintelligence
    @evaiintelligence 8 months ago

    Marina has done a great job explaining LLMs and RAG in simple terms.

  • @shreyjain3344
    @shreyjain3344 4 months ago

    The explanation is good and easy to understand for a student like me who is new to this topic; it gives me a clear idea of what RAG is.

  • @toenytv7946
    @toenytv7946 9 months ago +1

    Great down-the-rabbit-hole video. Very deep and understandable. IBM Academy worthy, in my opinion.

  • @paulaenchina
    @paulaenchina 11 months ago +1

    This is the best explanation I have seen so far for RAG! Amazing content!

  • @projectfocrin
    @projectfocrin 1 year ago +5

    Great explanation. I have never seen even the pros in the field explain it like this.

  • @redwinsh258
    @redwinsh258 1 year ago +23

    The interesting part is not retrieval from the internet, but retrieval from long-term memory, with a stated objective that builds on that long-term memory and continually gives it "maintenance" so it stays efficient and effective to answer from. LLMs are awesome because, even though there are many challenges ahead, they give us a hint of what's possible; without them it would be hard to have the motivation to follow this road

  • @ivlivs.c3666
    @ivlivs.c3666 6 months ago

    lecturer did a fantastic job. simple and easy to understand.

  • @RickOShay
    @RickOShay 7 months ago +1

    Less helium! How does this system resolve conflicting answers from the data store and the generative process? Does the data store answer always take precedence? And if so, is there a logic or reasoning layer that checks how reliable and up-to-date the data store is?

  • @EmmettYoung
    @EmmettYoung 3 months ago

    I really liked the analogy at the beginning! It was a very smooth explanation! Well done!

  • @rajeshseptember09
    @rajeshseptember09 6 months ago

    I have no "Data Science" background, but I completely understood. You simplified this unbelievably well. Thanks!

  • @ReelTaino
    @ReelTaino 11 months ago

    Please keep all these videos coming! They are so easy to understand and straightforward. Muchas gracias!

  • @AbhishekVerma-jw3jg
    @AbhishekVerma-jw3jg 4 months ago

    This was such a simple and clear explanation of a complex subject. Thanks Marina :)

  • @Aryankingz
    @Aryankingz 1 year ago +4

    That's what knowledge graphs are for: to keep LLMs grounded with a reliable and up-to-date source.

  • @LindsayRichardson-rv2wn
    @LindsayRichardson-rv2wn 4 months ago

    Thank you for providing a thorough and accessible explanation of RAG!

  • @neutron417
    @neutron417 1 year ago +2

    From which corpus/database are the documents retrieved? Are they up-to-date? And how does it know the best documents to select from a given set?

  • @HimalayJoriwal
    @HimalayJoriwal 9 months ago

    Best explanation so far out of all the content on the internet.

  • @mohamadhijazi3895
    @mohamadhijazi3895 8 months ago

    The video is short and concise, yet the delivery is very elegant. She might be the best instructor who has ever taught me. Any idea how the video was created?

  • @Linkky
    @Linkky 1 month ago

    Really comprehensive and well explained, Marina Danilevsky!

  • @neotower420
    @neotower420 9 months ago

    tokens as a [word] is what I'm working on right now (solo, self learning LLM techniques), this video helped me realize how the model doesn't know what it's outputting obviously, but AI-AI is different, so building tokens that have dimensional vectors that process in a separate model, can be used for explainable AI.

    • @neotower420
      @neotower420 9 months ago

      meaning a separate model processes the response itself, meta, it's for building evolution learning. AI-AI machine learning, you need a way to configure in between the iterations.

  • @janhorak8799
    @janhorak8799 9 months ago +24

    Did all the speakers have to learn to write in a mirrored way, or is this effect achieved by some digital trick?

    • @VlogBySKSK
      @VlogBySKSK 8 months ago

      There is a digital mirroring technique that is used to show the content this way...

    • @mao-tse-tung
      @mao-tse-tung 8 months ago +6

      She was right handed before the mirror effect

    • @Helixur
      @Helixur 7 months ago +1

      Writing on clear glass, with the camera behind the glass. It's like standing at the glass and looking at a person in an interrogation room

    • @vipulsonawane7508
      @vipulsonawane7508 6 days ago

      @Helixur you got my answer buddy!! Simple

  • @past_life_project
    @past_life_project 11 months ago

    I have watched many IBM videos and this is undoubtedly the best! I will be searching for your videos now, Marina!

  • @sawyerburnett8319
    @sawyerburnett8319 11 months ago +1

    Wow, having a lightbulb moment finally after hearing this mentioned so often. Makes more sense now!

  • @rsu82
    @rsu82 7 months ago

    Good explanation; it's very easy to understand. This video is the first one that comes up when you search RAG on YouTube. Great job ;)

  • @jean-charles-AI
    @jean-charles-AI 5 months ago +1

    This explanation is one of the best out there.

  • @gbluemink
    @gbluemink 10 months ago +1

    So the question I have here is: when my LLM has an answer but the RAG data doesn't, what is the response to the user? "I don't know", or the LLM response that may be out of date or lack a reliable source? Looks like a question for an LLM :)

  • @rujmah
    @rujmah 9 months ago

    Brilliant explanation and illustration. Thanks for your hard work putting this presentation together.

  • @rvssrkrishna2
    @rvssrkrishna2 9 months ago

    Very precise and exact information on RAG in a nutshell. Thank you for saving my time.

  • @rhitikkrishnani510
    @rhitikkrishnani510 4 months ago

    That's one of the best explanations I have gotten so far! Thanks a ton!

  • @damen238
    @damen238 5 days ago

    I spent all of the 1st watch talking with a friend who was watching it as well, trying to figure out if she is a robot because of the backwards writing. Good and fast info on the 2nd watch. Great job

  • @Jaimin_Bariya
    @Jaimin_Bariya 17 days ago +1

    Hey, JP here again,
    Thank You IBM

  • @batumanav
    @batumanav 2 months ago

    Amazing explanation. Starting from scratch, I gained a great perspective on this in a very short time.

  • @JonCoulter-u1y
    @JonCoulter-u1y 1 year ago +16

    The ability to write backwards, much less write cursive backwards, is very impressive!

    • @IBMTechnology
      @IBMTechnology 1 year ago +9

      See ibm.biz/write-backwards

    • @jsonbourne8122
      @jsonbourne8122 1 year ago

      Left hand too!

    • @NishanSaliya
      @NishanSaliya 1 year ago

      @@IBMTechnology Thanks... I was reading the comments to check for an answer to that question!

  • @ravirajasekharuni
    @ravirajasekharuni 4 months ago +1

    Outstanding explanation. It's very easy to understand. I like the way the video is made, with the presenter writing on the board. I want to know what SOFTWARE/TOOL is used to make this video/presentation. It's really cool.

  • @lewi594
    @lewi594 11 months ago +2

    This is brilliant and concise; it helped make sense of a complex subject.
    Can this be implemented in a small environment with limited computing, such that the retriever only has access to a closed data source?
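
On the question above: yes, nothing in the pattern requires the cloud. A hedged sketch of a retriever that only sees a closed folder of local text files (file names and the keyword scoring are illustrative; pair it with any locally hosted model):

```python
import tempfile
from pathlib import Path

def build_index(folder):
    # The closed data source: only the .txt files in this folder, loaded once.
    return {p.name: p.read_text(encoding="utf-8")
            for p in Path(folder).glob("*.txt")}

def retrieve(index, query, k=1):
    # Rank files by how many query words they contain (toy scoring).
    def score(text):
        return sum(1 for w in set(query.lower().split()) if w in text.lower())
    ranked = sorted(index, key=lambda name: score(index[name]), reverse=True)
    return ranked[:k]

# Demo against a throwaway folder standing in for a private corpus.
with tempfile.TemporaryDirectory() as folder:
    Path(folder, "hr_policy.txt").write_text("Vacation days accrue monthly.")
    Path(folder, "it_policy.txt").write_text("Passwords rotate every 90 days.")
    index = build_index(folder)
    print(retrieve(index, "how do vacation days accrue"))  # ['hr_policy.txt']
```

A real small deployment would swap the scoring for a compact local embedding model, but the shape stays the same: index, retrieve, then prompt the local model with the retrieved text.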

  • @AravindBadrinath
    @AravindBadrinath 1 year ago +4

    Very well explained. ❤
    But what happens if the RAG content and the LLM's trained data conflict? Say the LLM knows the answer as Jupiter and the RAG content store says the answer is Saturn. Does the RAG content always get higher weight?

    • @Famaash
      @Famaash 11 months ago

      Yes, I think that's what she implied as well.
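
In practice, the precedence this thread asks about is set by the instruction part of the three-part prompt the video describes around 4:13: the model is told to prefer the retrieved evidence over what it memorized in training. A toy version (the wording is illustrative, not from the video):

```python
def build_prompt(retrieved, question):
    # Part 1: the instruction, which establishes precedence of evidence.
    instruction = (
        "Answer using the retrieved content below. If it conflicts with "
        "what you already believe, prefer the retrieved content. If it "
        "does not contain the answer, say: I don't know."
    )
    # Parts 2 and 3: the retrieved content and the user's question.
    return f"{instruction}\n\nRetrieved content: {retrieved}\n\nQuestion: {question}"

prompt = build_prompt(
    "Saturn has 146 confirmed moons, the most of any planet.",
    "Which planet has the most moons?",
)
print(prompt)
```

Whether the model actually obeys is an empirical matter of instruction-following, which is why a well-behaved RAG system answers Saturn even if its training data said Jupiter.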

  • @kartikpodugu
    @kartikpodugu 1 year ago +4

    I understand the concept, but how does the LLM retrieve data from the source? Does this retrieval also involve some deep learning model? How is the data formatted inside the source? I have so many questions. Can you provide some good references?
    Also, if the LLM is providing information from the source, what value is the LLM adding here?

    • @CharlesMacKay88
      @CharlesMacKay88 1 year ago +2

      It can be stored in a secondary graph/vector database, or the system can use the internet to look things up for real-time information, e.g. sports scores for yesterday's game.

    • @hammadusmani7950
      @hammadusmani7950 1 year ago

      I would look up Document Question Answering models. OpenAI has an option to enable function calling. A given prompt can be part of a chain of LLMs; I would also look up LangChain. For the most up-to-date information, you basically just add recent data to the prompt.

    • @kartikpodugu
      @kartikpodugu 1 year ago +1

      @@CharlesMacKay88 But how does the LLM actually retrieve this information? Will it create a query by itself to retrieve data from a vector database like you mentioned? And if it is retrieving data from the internet, how will it make sure it is correct? Can you point me to some good resources to understand all of this better?

  • @mengyu-iv8wn
    @mengyu-iv8wn 8 months ago +1

    Hi, thanks for sharing. I have a question regarding the RAG framework: is the content of the answers solely retrieved from documents, or does the LLM integrate the retrieved content with its own knowledge before providing a response?

  • @paulw4259
    @paulw4259 8 days ago

    Thanks. Great video.
    I've had too many conversations where ChatGPT has apparently just made stuff up. I know that's not what is really happening, but it seems like it, and it still makes untrue statements.
    I'm glad researchers are working to improve things.

  • @WallyAlcacio
    @WallyAlcacio 1 month ago

    Loved this method of explaining concepts. Thank you!

  • @rockochamp
    @rockochamp 1 year ago +1

    very well executed presentation.
    i had to think twice about how you can write in reverse but then i RAGed my system 2 :)

  • @francischacko3627
    @francischacko3627 8 months ago

    Perfect explanation, understood every bit; no lags, kept it very interesting. Amazing job

  • @cionheart
    @cionheart 1 month ago

    That was a really good video. My question about the last part: besides improving the LLM and the retriever, how can the content be optimized?

  • @vipulsonawane7508
    @vipulsonawane7508 6 days ago

    Wow, simple, neat and clear explanation!!!

  • @AntenorTeixeira
    @AntenorTeixeira 1 year ago

    That's the best video about RAG that I've watched

  • @amirzakrishan
    @amirzakrishan 1 month ago

    Thank you for the explanation. I totally understood the Generation and Retrieval parts, but where did the Augmentation occur?

  • @Bikashics
    @Bikashics 5 months ago

    Thanks Marina!!! For such a simple explanation of such a complex topic!!!

  • @rafa1rafa
    @rafa1rafa 1 year ago +2

    Great explanation! The video was very didactic, congratulations!

  • @Anubis2828
    @Anubis2828 9 months ago

    Great, simple, quick explanation

  • @Kekko400D
    @Kekko400D 10 months ago

    Fantastic explanation, proud to be an IBMer

  • @bhaskarmothali
    @bhaskarmothali 6 months ago

    Exactly what I was trying to understand, great explanation!

  • @sudhakarveeraraghavan5832
    @sudhakarveeraraghavan5832 8 months ago

    Very well explained, and it is easily understandable to a non-AI person as well. Thanks.

  • @yeezhihao
    @yeezhihao 1 year ago +1

    Can someone help me understand why we are better off pulling from an updated content store vs. retraining the model on the content store data?

  • @prasannakulkarni5664
    @prasannakulkarni5664 8 months ago

    the color coding on your whiteboard is really apt here !

  • @stanislavzayarsky
    @stanislavzayarsky 10 months ago

    Finally, we got a clear explanation!

  • @421sap
    @421sap 1 year ago

    Thank you, Marina Danilevsky ....

  • @xdevs23
    @xdevs23 9 months ago +7

    The entire video I've been wondering how they made the transparent whiteboard

  • @bdouglas
    @bdouglas 8 months ago

    That was excellent, simple, and elegant! Thank you!

  • @limitlessrari1
    @limitlessrari1 6 months ago

    Great explanation. It's very helpful for my project as a Gen AI intern.

  • @pietruszqa-netguru
    @pietruszqa-netguru 7 months ago

    Hi. You mention that the retriever has to give the "best quality data" to the model. How is that quality assessed? Are there any parameters/markers or benchmarks to decide whether or not the data provided is of high quality?

  • @peterciank
    @peterciank 4 months ago

    Outstanding explanation and lecturer! Well done!

  • @JasonVonHolmes
    @JasonVonHolmes 9 months ago

    This was explained fantastically.

  • @mikezooper
    @mikezooper 6 months ago

    You’re an amazing teacher.

  • @cxiao-lz7tb
    @cxiao-lz7tb 5 months ago

    This is very cool. I would like to ask how this demonstration effect was filmed.

  • @datpandx
    @datpandx 10 months ago +1

    I have a question: for these RAG models, is it possible to run them locally? Like downloading my model and reading the data from my computer, or do we always have to have it running in the cloud?

  • @BartomiejGawron
    @BartomiejGawron 1 year ago +1

    Are there any simplified LLMs for RAG? They don't need to hold all the information themselves, so they could be much smaller and faster.

  • @VishalSharma-gp6dm
    @VishalSharma-gp6dm 9 months ago

    that reverse writing made me anxious, but a very smart explanation of RAG!!

  • @MraM23
    @MraM23 10 months ago

    Great lessons! Nice of you to step out 🙃 and make such engaging and educative content. This is very useful in helping us with critical thinking. Thank you for sharing this video. 👍
    Current AI models may impose neurotypical norms and expectations based on the data they were trained on. 🤔
    Curious to see more on how IBM approaches the challenges and limitations of AI

  • @lauther_27
    @lauther_27 1 year ago +1

    Amazing video, thanks IBM ❤

  • @khalidelgazzar
    @khalidelgazzar 1 year ago +2

    Great explanation. Thank you!😊

  • @ConernicusRex
    @ConernicusRex 1 year ago

    Why not have a temporary model layer between retrieval and response building, quickly trained in a transfer-learning capacity atop the normal LLM, and use that to generate the response? Taking up input tokens with the additional instructions and the retrieved data demands far more compute, and could lead to a faster-than-average loss of conversational context as retrieved data fills the context window with every query. A hot-swap approach would save a lot of that by "baking in" the new data to a model re-normalized for its updated information per query.

    • @ConernicusRex
      @ConernicusRex 1 year ago

      Oh, and I'm available for hire right now. I'm on LinkedIn.

  • @toddlilly4780
    @toddlilly4780 1 year ago +4

    Love this! How is she writing on the glass? Is she writing backward? Or how does that work?

    • @IBMTechnology
      @IBMTechnology 1 year ago +7

      See ibm.biz/write-backwards

    • @johncornell2498
      @johncornell2498 1 year ago +2

      Lol, so simple when explained @@IBMTechnology. Thanks, I can actually pay attention to what she's saying now 😆

  • @vnaykmar7
    @vnaykmar7 1 year ago +2

    Such an amazing explanation. Thank you, ma'am!