Correction
1. Vector databases use mathematical representations like vectors, not arrays. Array-like similarity oversimplifies.
2. Transparency reduces bias but cannot entirely eliminate it without robust training data.
3. Embeddings are typically static; new embeddings are created for updated data.
The most important step here is left to the end. You can only use RAG with a transparent or locally built LLM.
Very simple and clear explanation.. cheers to IBM
Shawn & Luv!!!! Awesome job!!!!
great quick session, thanks !!
Very Well explained! Thank you so much.
The guy on the left wanted to laugh out loud at that sketch 😂
Clean database, stable generator and clever retriever.
Thought Shawn was very flirty until I realised he didn't say "love" but "Luv"
Same Here XD
Clear and simple, thank you guys and thank you IBM
1. The vector DB may or may not store relevant information related to the question. 2. The LLM may already have the information, or more accurate information, on the question. So RAG may not always be helpful for GenAI applications.
There is a good point about hallucinations of AI, and the video unfortunately does not address it. Data governance does not address this issue: we can still have a scenario where the input is valid but the output generated by the AI is garbage.
What's the solution?
@@vintastic_ you have to make sure the relevant data is going to the model. Good info into the database is only half the battle: semantic chunking, size of chunks, types of search, type of vector database used. For example, pgvector is a Postgres extension and is usually not nearly as good at retrieval as something like Pinecone.
Then the prompt used can also tremendously affect the model. You have to put it in the right context and use industry-specific terms when prompting. Even a genius needs context or a bit of time to think. No matter how good the model, you have to know a bit about the specific industry to obtain great results. It's like explaining a noise to your mechanic, or telling them you have a misfire on cylinder 1.
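To make the chunking point above concrete, here is a minimal sketch of the simplest baseline, fixed-size chunking with overlap (the chunk and overlap sizes are illustrative assumptions, not recommendations; real pipelines often split on sentence or section boundaries instead):

```python
def chunk_text(text, chunk_size=120, overlap=30):
    """Split text into overlapping fixed-size character chunks.

    Overlap keeps context that straddles a chunk boundary from
    being lost; semantic chunking is usually better than this
    naive character-count baseline.
    """
    chunks = []
    step = chunk_size - overlap  # assumes overlap < chunk_size
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

doc = "RAG retrieves relevant passages from a vector database " * 10
pieces = chunk_text(doc)
```

Each chunk would then be embedded and stored; how you cut the chunks directly affects what the retriever can find later.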
@@vintastic_ human expert review for every response
Thanks for the straight forward description of RAG.
I suggest prioritizing a terminal rather than a drawing board.
Bias within LLMs is a topic that needs more light shed on it.
Good job. I still need to learn more about data accuracy in a LLM.
Neat and detail explanation.
Hey! If I ask a RAG-based language model, "Tell me the features of the iPhone 17," what will it tell me? Will it say it doesn't know or will it hallucinate? I understand that once the iPhone 17 is released, the database will be updated to provide the correct information. But what happens if I ask about it before its release?
I can see two scenarios here. If you have provided info about the yet-to-be-released iPhone 17 in your additional documents, then the LLM will respond based on that.
If you don't have it in your additional documents/vector DBs, then I'd recommend always adding something along the lines of "only answer with facts you have access to" to your system prompt, and setting Temperature to a low number. (Temperature is a parameter in LLMs that defines how "creative" the model can be.)
Great question and it highlights the importance of having experts in GenAI guiding enterprises on how to implement this in a way that suits their use cases.
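The guard-rail suggestion above can be sketched as assembling the request payload before it goes to the model. The message structure below is a generic chat-style shape, not any specific vendor's API:

```python
def build_rag_request(question, retrieved_chunks, temperature=0.1):
    """Assemble a chat-style request that tells the model to answer
    only from the retrieved context. Low temperature reduces (but
    does not eliminate) hallucinated answers."""
    if retrieved_chunks:
        context = "\n\n".join(retrieved_chunks)
    else:
        context = "(no documents found)"
    system = (
        "Only answer with facts present in the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}"
    )
    return {
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    }

req = build_rag_request("Tell me the features of the iPhone 17", [])
```

With an empty retrieval result, the system prompt explicitly steers the model toward "I don't know" rather than invention.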
Nice explanation. Well done boys 😁
You'd think with this being IBM the mic would be better
Exactly love.
Great explanation. Thank you
Thank for sharing !!
Hi, I have a few questions; please find time to answer:
0. Are we filtering the data in the vector DB? (If yes, then:)
1. How are we filtering the relevant data from our vector DB to augment our prompt for the LLM?
1.1 Who is doing this process: another LLM, our own code, or some different tool?
1.2 Are we feeding the complete data as a whole to the LLM?
1.3 If we are filtering the vector data using a rule-based mechanism, then what is the use case of the LLM? How is the power of the LLM being drawn on if we are the ones deciding what to feed to the LLM as relevant data?
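On the filtering questions above: in a typical RAG setup the filtering is plain similarity math in your own code (or in the vector DB's query engine), not another LLM. A minimal top-k sketch, with toy 3-dimensional vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, store, k=2):
    """Rank stored (vector, text) pairs by similarity to the query
    and keep only the k best -- this is the 'filtering' step; only
    these texts are appended to the LLM prompt."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

store = [
    ([1.0, 0.0, 0.0], "refund policy"),
    ([0.9, 0.1, 0.0], "return window"),
    ([0.0, 1.0, 0.0], "shipping rates"),
]
hits = top_k([1.0, 0.05, 0.0], store, k=2)
```

The LLM's power is drawn afterward: it reads the selected passages and composes a fluent, grounded answer, which the similarity math alone cannot do.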
Hi, this is my understanding:
Storing Data as Embeddings:
Correctness: Storing data (documents, images, etc.) as embeddings in a Vector DB is a valid approach. Embeddings represent the data in a high-dimensional vector space, capturing its semantic meaning.
Consideration: Ensure that the embedding model you use is appropriate for your data type. For images, you might use a different model (e.g., CLIP) compared to text embeddings.
Searching with Embeddings:
Correctness: Converting the search query into embeddings and then comparing these embeddings with those stored in your Vector DB is correct. This allows for semantic search, which is more effective than keyword-based search.
Consideration: Ensure that the conversion process and similarity calculations (e.g., cosine similarity) are implemented correctly. The returned plain text should be accurately relevant to the search query.
Summarization by LLM:
Correctness: Sending the retrieved plain text content to an LLM for summarization is appropriate. LLMs are designed to generate summaries and provide concise explanations based on the input text.
Consideration: Ensure that the LLM is correctly configured for summarization tasks. Provide clear instructions or prompts to achieve the desired summarization quality.
Returning Summarized Text to User:
Correctness: Receiving the summarized text from the LLM and returning it to the user is the final step in the process. This is standard practice for providing user-friendly summaries.
Consideration: Validate that the summarized content meets user expectations and provides accurate, meaningful information.
1. We store all our relevant data (documents, images, etc.) in the vector DB as embeddings.
2. When the user searches, it will not hit the LLM directly; the search is converted into embeddings and the result is returned as plain text.
3. Then we send that text to the LLM for summarization.
4. Then the LLM returns the summarized text back to the user.
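The four steps above can be sketched end to end. Everything here is a stand-in: `embed` is a toy bag-of-words vector over a fixed vocabulary, and `summarize` is a stub where a real system would call the LLM:

```python
def embed(text):
    # Stand-in for a real embedding model: count occurrences of a
    # tiny fixed vocabulary (step 1/2: text -> vector).
    vocab = ["refund", "shipping", "iphone"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def retrieve(query, store):
    # Step 2: embed the query and return the closest stored plain
    # text (dot product as a toy similarity measure).
    qv = embed(query)
    return max(store, key=lambda item: sum(a * b for a, b in zip(qv, item[0])))[1]

def summarize(text):
    # Step 3 stub: a real system would send `text` to the LLM here.
    return "Summary: " + text[:40]

store = [(embed(t), t) for t in [
    "refund requests are processed within 5 business days",
    "shipping is free on orders over 50 dollars",
]]
# Step 4: the summarized text goes back to the user.
answer = summarize(retrieve("how long does a refund take", store))
```

Note the query never "hits" the stored documents as keywords; only vector similarity decides what the LLM gets to see.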
nice work.
Interesting , thanks both
Excellent video, love it
The unhappiness in their eyes and frowns tell the pain of working for Artificial Intelligence jobs
good explanation
Awesome video
Thanks guys, very clear!
IMO, data governance management seems to be the same as a correct database data-input workflow. The prompt is another way to query the database: the LLM avoids the use of a query language to interrogate the DB, so ordinary people can query it. Fine-tuning a large LLM appears to be a good idea, but isn't fine-tuned training more similar to data embedding? In RAG, how does the vector database interact with the LLM? Does the vector database grow the LLM's latent space? Is there a possibility that the LLM parameters can overlap the vector parameters, making a mix of knowledge?
Cool explanation
Meh, they leave the biggest question unanswered: how are enterprises expected to govern the data that was used to train the LLM?
Very clear explanation, thanks! How do you manage to avoid using "blackbox" models?
Great backwards writing skills!
Informative
Thank you for sharing! Very helpful and easy to follow. Just one question: is there any way we can test or reinforcement-train the model to make sure the outputs are appropriate?
This was an excellent explanation! Thank you.
So with a RAG approach, can I say that we can update the original vector DB with our own processed data?
Nobody's ever been fired for buying RAGs from IBM.
... yet
Love the 1980s meme.
My left arm started tingling. Quite hard to concentrate now. 😅
exactly love
Gotcha.
Ironically, because transfers between banks and cards always go so smoothly, don't they? But seriously, it's all good.
ok, gotcha 👌
How do they make this video? Is the screen used as the board?
Do these guys write on the whiteboard backwards or how does that work?
Yes they learned to write backwards for this video because it's cheaper to do that than to run the algorithm to flip a video horizontally
@@JShaker 😂😂😂
Is there a pane of glass in front of them, or is this some other technology?
yes it's a glass
good
This does not address how you validate that the Q1 results returned are accurate. There should be a built-in process, parallel to querying the LLM, of actually verifying the results and training the LLM to address any discrepancies, if that is possible, or correcting them.
Do LLMs store our sensitive data when using RAG?
No,
the vector DB stores all the sensitive and confidential data.
Once the relevant data is obtained, it is sent to the LLM to get summarized, because the vector DB would have returned only the data matching the query string.
7:10 sounds like support for open source.
🤔I think Luv saw the connection the entire time
Did you need to learn to write backwards for these videos? Or is there a product that helps you with this nice board?
You record the video, then mirror the image. They simply write on the clear board.
th-cam.com/video/Uoz_osFtw68/w-d-xo.html
And then a wide spread global epidemic crisis is brought to light wherein our gold standard "books" (peer reviewed journals) are rife with bad and corrupt data due to mismatched incentivization and misalignment of directives; and we then realize...how much good data through science do we really have? Shame we polluted the books we are supposed to be able to trust now that we have this magnificent technology here. 😭
Dude, keep on topic. This isn’t the place for your grievances
I know, right? It's too bad we don't have unbiased data to make the most of this technology :(
👏👏
lol. must be annoying to talk to "Luv". "Hi, Luv", "Exactly, Luv"
kabhi haans bhi liya karo..
(Smile a bit bros)
How in the world is this dude writing inverted for us to read it straight, lol
They mirror the video. You can notice that most of the people writing on a glass board appear to be left-handed (in reality, about 90% of the planet's population is right-handed); that's also because they mirrored the video.
It's a skill only left-handed people have
Terrible analogy! When a journalist wants to do research, he goes to a library and asks the librarian??
As opposed to doing a Google search?
This scenario is from the last century, before Luv was born.
Single take artists
Boring