It's always nice when someone takes the time to explain it. This helps a lot more than just reading the documentation. Thanks!
Outstanding, Sam. As a physician this really helps me understand how you can unlock the web with just LangChain and LLMs. Bravo!
The more Python I learn, the more awesome your videos become :) Thank you for sharing!
Hey Sam! Just wanted to drop a quick note to express my sincere gratitude for your incredibly helpful videos. They’ve been a game changer for me, offering clarity and guidance when I needed it most. A thousand thanks for your hard work and dedication. Keep up the great work! 🌟 #ThankYouSam
Hi Sam, I love your videos! They are so helpful.
Great job as usual! Adding a source like Bing to the chain might give the response a little edge. Really useful content; keep it coming.
This is definitely going in my "Gem List". Thank you!
This is mind-blowing, Sam. Appreciate your work.
One observation: in any medical context, sources should always be mentioned.
Can you add that to the notebook, or show us how to do it, please?
I will look at showing metadata and sources at some point in the future. It is not that easy for webpages in this case.
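In the meantime, one lightweight workaround is to have the search tool itself return the page URL alongside the snippet, so the agent can cite it in the final answer. A minimal sketch (the function name, stubbed result, and URL are all illustrative, not from the notebook):

```python
def search_with_source(query: str) -> str:
    """Hypothetical search wrapper that keeps the page URL alongside the
    snippet so the agent's final answer can cite where it came from."""
    # In the notebook this step would call DuckDuckGoSearchRun().run(query);
    # the result text and URL below are stubbed for illustration.
    result = "Rest, ice, compression, and elevation are recommended."
    source_url = "https://www.webmd.com/example-page"
    # Returning the source inline lets the agent quote it verbatim.
    return f"{result}\nSource: {source_url}"
```

Because the URL travels inside the tool's observation, no change to the agent itself is needed; a line in the prompt asking it to repeat the "Source:" line in the final answer usually suffices.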
This video was SO helpful for troubleshooting!
Fantastic job Sam! Loving these LangChain vids; I just wish we did not use the OpenAI key and used an open-source Hugging Face example instead. I would encourage you to go in that direction.
Same here. Can we use something like Open Assistant or another open-source model?
I second that. Llama 2 or any open-source one.
@@NicolasEmbleton You can do that; LangChain supports llama.cpp through the llama-cpp-python bindings, or I can link you an API wrapper for KoboldCpp (which is what I use). It seems to work for everything so far, but I am just learning LangChain too. I can link the KoboldCpp wrapper I found and try to help out if you want.
Hey Sam, I'm trying to do my own implementation of this, but when the agent is determining the best answer it gets caught up in the details of WebMD. For example, it will first assess the sprained ankle, then realize it could be linked with a blood clot condition and focus on that, and the final answer ends up being an explanation of how to avoid blood clots and other conditions, when all I wanted to know was what to do once my ankle is sprained. Let me know if you could help. Thank you so much!
Hi Sam,
Great tutorial; I'm learning a lot from you.
Just one question on this: is it possible to add citations to the answer and list the links from which it was generated?
Please also consider doing at least a few of these videos with local LLMs via transformers, (Python-)llama-cpp, AutoGPTQ, etc. I do not have an OpenAI API key, nor am I really interested in paying for or getting one in general. However, I have successfully run several local LLMs on my hardware, and they are surprisingly good! I'm not talking "gibberish good", I'm talking "ChatGPT good". I think using local LLMs will give you even more freedom, especially in picking the model you actually want to use (different model, different purpose).
Can I ask what model you are using? I have done a number of no-OpenAI vids in the past and will do more in the future for sure. I would hope people see that the code here can be converted for any non-OpenAI model pretty easily.
@@samwitteveenai Sorry, YouTube won't let me answer you and deletes my messages as soon as I name any model. O_O
Your videos are some of the most advanced, yet you still make them comprehensible. Thanks a lot. Do you think these could be made better with GPT functions, or would ReAct perform similarly?
Yeah, almost everything can be made better with GPT functions lately. This was recorded before they existed. Most of the principles would still be the same.
Hi Sam, thanks for your contributions. Can you create a new custom agent tutorial, since this one is already a little bit outdated?
He did. Check out his tutorial on LangGraph. This is important because LangGraph seems like a better option for multi-agent setups. If you want more control over your decision flow, check the tutorial out. One notable part is the section on conditional edges.
Another great video. Thank you Sam
Thank you for the video! I have a use case where my agent will query a dataframe/CSV and also have the memory buffer. The architecture I went with was to make a custom agent (same as you have done in the video) and use the dataframe agent as a tool for querying the dataframe. What are your thoughts on this?
Excellent video. One quick question: if one wanted to make a "subject matter expert", could it be done using a similar approach without fine-tuning a model?
When would you use AgentTokenBufferMemory? How does it differ from conversation memory?
Fantastic video Sam. Thank you. I built a similar agent using custom tools and updated the prompt to write a full, detailed report incorporating information from all its observations; however, the final answer is always a short summary without the full details. Any idea why, and how to solve it?
How can I create custom tools within Flowise to use them with the agent?
I can't seem to find the source for this video on the GitHub repo in the description. Can you give a link to the source from this video?
Regarding the data source: can I use it for PDFs and internet search at the same time?
Hi, thanks for this detailed explanation. I have a doubt: how do we select an agent? There are different types of agents for LLMs; how do we choose which one to use? Is it based on the task we are working on, or can we use any of them? Could you please explain?
Thank you, this really helps. I used only the Google Serper API as a tool with the Mistral 7B model. For some questions I could see in the log that "{tool name} is not a valid tool", and it gave me a poor answer. But when I tried again later it used the tool and gave me a good answer for the same question. What could be the reason? Can you please help me?
It’s an amazing tutorial, but I'm very curious about how you organize the memories for a multi-section chat.
what do you mean by multi-section chat?
I mean, many people chatting with the model at the same time, with separate memory for each person who is chatting with it.
To do that you need to serialize the conversations, then load and save them each time, etc. I usually save to a DB like Firestore, so it is quick and easy to just keep a UUID for each conversation.
@@samwitteveenai thanks Sam, it worked this way.
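A minimal sketch of that load/append/save loop, with a plain dict standing in for a document store like Firestore (the function names are illustrative, not from the video):

```python
import json
import uuid

# Plain dict standing in for a document store such as Firestore.
store = {}

def save_conversation(conversation_id, messages):
    """Serialize the message list and store it under the conversation's UUID."""
    store[conversation_id] = json.dumps(messages)

def load_conversation(conversation_id):
    """Load and deserialize a conversation; new conversations start empty."""
    raw = store.get(conversation_id)
    return json.loads(raw) if raw else []

# Each chat turn: load by UUID, append the new messages, save back.
conv_id = str(uuid.uuid4())
history = load_conversation(conv_id)
history.append({"role": "human", "content": "What helps a sprained ankle?"})
history.append({"role": "ai", "content": "Rest, ice, compression, elevation."})
save_conversation(conv_id, history)
```

Swapping the dict for a real Firestore client keeps the same shape: the UUID becomes the document ID, and each request loads, appends, and writes back.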
Can you please share the exact link to the code on GitHub? There are a lot of folders at the link you gave in the description.
Hello Sam, nice videos. I went through the example, but when it came to the part with memory I noticed the custom agent still does not recall previous conversation context (even after adding the memory components to the custom agent). Do you know why that is and how to fix it?
Great video!!! I have a question: say the LLM needs more information from the user; do you know how to incorporate an "I need more info" turn into this agent? Thanks
You would achieve that by putting it in the prompt.
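One way to put that in the prompt is a rule telling the agent to ask rather than guess. A sketch (the suffix text and function name are illustrative, not the notebook's actual prompt):

```python
# Illustrative rule appended to the agent's system prompt: when the agent
# lacks information, it should emit a recognizable "I need more info" turn
# instead of guessing.
CLARIFY_RULE = (
    "If you do not have enough information to answer, do not guess. "
    "Reply exactly in the form: 'I need more info: <question for the user>'."
)

def build_system_prompt(base_prompt):
    """Append the clarification rule to whatever system prompt the agent uses."""
    return base_prompt + "\n" + CLARIFY_RULE
```

Your app can then check whether a reply starts with "I need more info:" and route that question back to the user before continuing the chain.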
I really like your tutorials. Is there one where you add a tool to a RetrievalQA agent? Like sending an email or saving the conversation after the human is satisfied?
I made some retrieval vids for PDFs and text; the principles in those can be used for emails as well.
@@samwitteveenai Sorry for my bad English. What I was trying to explain is adding a tool to send an email, just as an example; basically, after the human answers, performing an action that is done programmatically, not by ChatGPT.
Hey Sam, can you tell me how we can increase the length of the answers the LLM gives after analyzing websites? The problem I am facing right now is that no matter where and how many times I mention 300 words, it keeps giving the answer in only one or two lines.
The models often won't respect a length like that. You can either fine-tune the model for longer responses or try something like asking for multiple paragraphs. Don't forget that in this case the inputs are not huge either.
How can I keep the conversation context of multiple users separately?
Create a separate memory object for each user.
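A minimal sketch of that idea in plain Python. With LangChain you would keep one ConversationBufferMemory per key instead of a list; the names here are illustrative:

```python
from collections import defaultdict

# One independent chat history per user, keyed by user ID.
user_memories = defaultdict(list)

def record_turn(user_id, human_message, ai_reply):
    """Append a turn to this user's memory only; other users are untouched."""
    user_memories[user_id].append(("human", human_message))
    user_memories[user_id].append(("ai", ai_reply))

record_turn("alice", "Hi", "Hello Alice")
record_turn("bob", "Hi", "Hello Bob")
```

Each request then looks up the caller's ID, builds (or fetches) that user's memory, and hands it to the agent, so contexts never mix.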
Great, thanks again for another amazing video. Can we do this example with a custom language model or any Hugging Face language model?
I tried it with four models, including Falcon-7B etc., and they all kind of sucked at it. It really needs to be either a big open-source model or one fine-tuned for the task.
But how is this scalable and where will the actual user interact? I feel like LangChain is a waste of time.
The front-end options depend on the end implementation. Look for "how to implement chatbots in X web framework" (Django, Grails, whatever).
LangChain is a framework for building processes on top of back-end components like the LLM and vector DB.
@@robxmccarthy Thanks for the reply. I've yet to see any completed application built with this tool, to my knowledge; only a lot of people explaining in videos what it could do.
Guess it will just be an exciting time as more folks begin to build 🤘
Can you please do a video on building a chatbot over a company's large custom data (something like 40-50 PDFs, some audio files, some PPTs) with memory, using Pinecone? One that generates output from the given text, and if nothing is found, generates a generic possible response using OpenAI.
How would I include the url that the agent got its answer from?
You need to include it in the metadata and pass it back. I have an example of doing this in another video.
Nice video. I'm wondering what happens if the prompt reaches the model token limit. Do you know anything about that?
This won't normally happen in this use case with the 4K token limits, but you can use a transformation chain to trim the number of tokens, or add a summarization chain, etc.
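A rough sketch of the trimming idea: keep only the most recent messages that fit a token budget. The words-times-1.3 estimate is a crude stand-in for the model's real tokenizer, and the function name is illustrative:

```python
def trim_history(messages, max_tokens=4000):
    """Keep the most recent messages whose estimated token count fits the
    budget. len(words) * 1.3 is a crude estimate, not the real tokenizer."""
    kept, total = [], 0
    for msg in reversed(messages):        # walk newest -> oldest
        cost = int(len(msg.split()) * 1.3)
        if total + cost > max_tokens:
            break                         # oldest messages get dropped first
        kept.append(msg)
        total += cost
    return list(reversed(kept))           # restore chronological order
```

A summarization chain is the alternative: instead of dropping old messages outright, it compresses them into a short summary that stays in the prompt.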
Any way to print the source as well, for reference?
Yeah, this requires using metadata on the lookups.
Could you share how much each of these questions costs using ReAct with an OpenAI LLM? I think it would be quite a lot.
If you are using the Chat turbo model then the cost isn't that much.
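For a rough sense of scale, here is a back-of-the-envelope estimate. The step counts, token counts, and prices are assumptions (roughly the gpt-3.5-turbo pricing at the time), not measurements from the video:

```python
# Back-of-the-envelope cost for one ReAct question on gpt-3.5-turbo,
# assuming ~3 agent steps of ~1,500 prompt tokens and ~200 completion
# tokens each, at ~$0.0015 / 1K input and ~$0.002 / 1K output tokens.
steps = 3
prompt_tokens = 1_500 * steps
completion_tokens = 200 * steps
cost = prompt_tokens / 1000 * 0.0015 + completion_tokens / 1000 * 0.002
print(f"~${cost:.4f} per question")  # under one cent
```

Even with several tool calls per question, the turbo-class models keep a single ReAct run well below a cent; the cost only becomes significant with GPT-4-class pricing or very long retrieved contexts.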
Hi! Nice video. I'm having trouble implementing this: I use custom tools with StructuredTool.from_function(func...), and it apparently works well, but the input to some tools is not always a single string. For one tool I need it to take a JSON object and a string, which looks fine in the Action Input but actually arrives as one single string.
Hey, can you share your Discord ID or something? I was really looking for someone who knows how to make a custom tool, and I really need help with that.
Hello Sam, I am a med student from India. I am currently trying to write a paper on the use of LangChain and LLM models in medical education, and I have some technical difficulties. Would it be possible to contact you about this? I can't say much here. Your advice and help would mean a lot.
These videos would have more utility if you could transcribe (maybe with AI) the video and interleave the transcript with screen images.
The basic problem is that people will mostly remember only a couple points/names and that's it.
Pretty sure the WebMD ToS forbids scraping (as should every website with any sort of worthwhile content).
Actually, this is getting the info via DuckDuckGo and the sections it brings back. But I agree; the idea was to show how people could do something like this for their own site.
where is the notebook?
Unfortunately Google took it down. It is in the GitHub repo. I have no idea why they removed it.
I've got to ask: are you, or were you ever, a trained teacher, so help you God?
No, the question is not for you. The question is for Sam Witteveen.
Nope
Very insightful video! Thanks Sam. One question: in this case you're using the pre-built DuckDuckGoSearchRun function and modifying it a bit so it narrows the search to a specific website. When working with toolkits like GmailToolkit, they presumably have a pre-built function (which I can't find anywhere) with the steps to search for emails, so I'm not sure whether I can customize my own function to act somewhat differently, the same way you did in this video. It's clear to me that I would need to change the descriptions of the tools so the agent can do a better job, but I'm not sure whether it's advisable in this case to change the main function built to read through emails. Also, my plan is to customize and change agent.agent.llm_chain.prompt.template a bit, since I think it could improve search results. As you can see, I'm trying to build an email-reader chatbot that is useful for when I don't have time to read them all; instead I'd ask this agent something like "What has Sam sent me recently?" Thanks again mate
Hey Sam, loving your videos. I'm having a hell of a time trying to use the new OpenAI Functions Agent in a similar manner. For some reason the prompt organizes itself so that the system message is less adhered to, i.e. the prompt is delivered as: SystemMessage; ChatHistory; HumanInput.
You’re really amazing Sam! I’ve learned so much from watching your videos….keep them coming and you’re one of the best out there IMHO!🥳🦾😎👏🏼