This is the best explanation of RAG ever given by anyone - detailed + beginner to advanced. One suggestion: we need a playlist for Gen AI and Agentic AI to follow in sequence; currently it's hard to navigate the YouTube channel!!
Great suggestion!
This is the best production-level RAG code. That is what I was looking for.
I really missed your videos, and now you have come back with a comprehensive and wonderful video that solved a lot of my problems, so thank you and I hope you continue.
Great to hear!
Please keep uploading exquisite content like this. Thank you!
Excellent work, brother. Very lucid explanation. It will even help seasoned developers refine their understanding of the concepts. Looking forward to more videos like these. You have earned my respect and a subscriber :).
Glad it was helpful!
Much needed. What an explanation. Thanks a lot 🙌🏻
Glad it was helpful!
Nice series. Please keep uploading.
Brother, what an explanation! Just wow.
Great video! Could you please create one on combining fine-tuning with Retrieval-Augmented Generation (RAG) for chatbots? Fine-tuning can be costly for certain use cases, but applying it selectively to establish the tone or behavior of RAG models could be highly efficient. This would be useful for instances where we want the model to follow a specific conversational style without extensive, full-scale fine-tuning.
Excellent
Timestamps:
RAG: 31:27
Embedding: 38:07
Nothing... just logging in to say "Merry Christmas and Happy New Year!" to Professor Pradip and the other "classmates" 🥳
Hello Pradip, could you make a video on how we can split access based on an employee's rank in an organization? For example, executives would have access to financials, but a junior would only have access to guide documentation.
thank you!!
Very important and great videos, please keep posting. How can we store unstructured data like tables and images in a vector DB?
Awesome, sir, thanks a lot!
I have a request, sir: please make a video on integrating a diagram-generation feature into this chatbot as well. It would help a lot.
Great explanation! Where can I find the code/notebook present in the video?
Link in the description.
It was such a great course. We can also deploy this application on Azure or AWS, not only Streamlit, right?
Yes, you can deploy this app anywhere.
Thanks
Please make videos on all topics in the same way.
Best explanation, and the code walkthrough is amazing. By the way, where can I get the documents that you have used in the code?
The link in the description has the code and docs.
❤❤
Thanks a lot for such wonderful content.
Could you please list down all the steps required to get the code running locally?
1. Clone the code using git clone
2. Replace the API key
3. Run the command for the backend
4. Run the command for the frontend
This will help me play around with this code.
How do I run both applications? I'm new to Python.
Thank you for this great tutorial. I have a question though: the app you developed runs on localhost, but how can we deploy it so that it's available online?
It's like deploying any API. Check these:
th-cam.com/video/7FVPn25mmEQ/w-d-xo.htmlsi=k9IiN8XS13hn_O48
th-cam.com/video/904cW9lJ7LQ/w-d-xo.htmlsi=aydaAzyj-nyJPDke
@FutureSmartAI thank you
Nice one! Please check: the blog post doesn't seem to be opening.
Finally
One thing I can't understand: what is the context? In the example at 52:00 you put all the documents as the context.
The context consists of relevant documents retrieved from a vector database. These documents are processed to extract only the page content, excluding metadata, and all content is combined into a single long string, with each document separated by two newline characters.
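If it helps to see that in code, here is a minimal sketch (the helper name format_docs and the sample documents are illustrative, not the exact code from the video):

```python
from langchain_core.documents import Document

def format_docs(docs: list[Document]) -> str:
    # Keep only the page content (metadata is dropped) and join the
    # documents into one long string, separated by a blank line.
    return "\n\n".join(doc.page_content for doc in docs)

# Toy documents, as a retriever might return them:
docs = [
    Document(page_content="The product supports PDF and DOCX uploads.",
             metadata={"source": "features.txt"}),
    Document(page_content="Uploaded files are chunked and embedded into a vector store.",
             metadata={"source": "architecture.txt"}),
]

print(format_docs(docs))  # two passages separated by "\n\n"; metadata excluded
```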
In the newer LangChain v0.3, these chains have been deprecated in favor of the more flexible and powerful LCEL and LangGraph frameworks. So why do you prefer to use these chains?
I've also demonstrated in the video how to achieve similar functionality without this specific chain, using LCEL instead. Additionally, I don’t believe these chains have been fully deprecated; even the 0.3 documentation still includes them. You can check it out here:
python.langchain.com/docs/tutorials/qa_chat_history/
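For anyone curious what the LCEL version can look like, here is a minimal sketch (it assumes you already have a retriever from a vector store and an OpenAI API key; the prompt and model name are illustrative, not the exact code from the video):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")

# `retriever` is assumed to exist from an earlier vector-store setup,
# e.g. retriever = vectorstore.as_retriever()
rag_chain = (
    {
        # Retrieve documents and join their page_content with blank lines
        "context": retriever | (lambda docs: "\n\n".join(d.page_content for d in docs)),
        # Pass the user's question through unchanged
        "question": RunnablePassthrough(),
    }
    | prompt
    | llm
    | StrOutputParser()
)

answer = rag_chain.invoke("What file formats are supported?")
print(answer)
```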
Hi Pradip. Is it still worth pursuing this space/market on Upwork? Is there still demand for it?
Yes
This is one amazing video, sir! I have a question: Weaviate seems to give tough competition to ChromaDB, so how do we choose between vector DBs?
Let me see if I can create a video on that.
How can we save tokens? Every time we hit the LLM or OpenAI, we consume some tokens, and tokens are pricey, right? How do we save them?
My LangSmith account is showing: "Failed to execute 'getReader' on 'ReadableStream': ReadableStreamDefaultReader constructor can only accept readable streams that are not yet locked to a reader". How do I resolve this?
How do we evaluate the RAG application we built?
Great video! Can you share the code used?
In the description
The OpenAI API key limit is exceeded; whenever I use it, it shows the same error.
😮💨
Please teach Hugging Face or Ollama open-source models instead of OpenAI LLMs.
Planning open-source models in the next video. I work with multiple clients, and they still prefer OpenAI models rather than spending money on hosting open-source models.
@@FutureSmartAI Thank you for the reply. Interesting to know. The company I work for wants to host open-source models due to privacy and security, and to avoid vendor lock-in. Can you please guide us, or do you have any plans to use LiteLLM to make the code model-agnostic?
Can I use Gemini Pro instead of OpenAI?
Yes
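If it helps, swapping the model in LangChain is usually a small change. A minimal sketch, assuming the langchain-google-genai package is installed and GOOGLE_API_KEY is set (the model name is illustrative):

```python
from langchain_google_genai import ChatGoogleGenerativeAI

# Drop-in replacement for ChatOpenAI in the RAG chain:
llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro")

print(llm.invoke("Say hello in one sentence.").content)
```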
Wonderful... you are doing excellent work. Can we get this code file?
Yes, the code and explanation link is in the description.
@@FutureSmartAI The code isn't complete. Can you publish it on GitHub or add a Google Colab link?
@@Pratik345-b1y All the code is available in the Hashnode series; the link is in the description. The blog also has a link to the original notebook.
@@FutureSmartAI Got it. Thank you so much; your work is really awesome!! Please create more content; it's more valuable than any paid course.
@@FutureSmartAI I am not able to find it.
You should share the GitHub repo as well.
Yes, check the link in the description.
27:36
1:15:05
I think it would be better to revise.
Thanks for the suggestions.
It feels like I've hit the jackpot!
Hi @pradip
19:47