I am so grateful for having the chance to follow you along. Really appreciate and love your content and the way you present it.
How can the ready-made projects on the platform be linked to Blogger blogs? I have spent long days searching to no avail.
With a specific URL, it can access the webpage and do OCR text recognition on that page, that's amazing
Appreciate the effort you put into making this video -- thank you! :)
HF also mention in their documentation that the default tools can be customised -- say you're not happy with the result you're getting from the diffusion model they use, well, you can replace it with another tool/model from within HF. However, I'm not sure how exactly to do that!
Is it possible to do a tutorial on this? -- Thank you!
Yes, definitely, I've already been working on it - their custom tools approach is great and I'll talk about it soon
@@jamesbriggs outstanding
@@jamesbriggs Looking forward!
Thank you! :)
Nice work. Great pace. Instantly useful
yes, but nobody is using it? agents which use llms as tools?
and llms which use rags as tools?
so the llm should be using the rag to save data and extract data .. ie manage its own memory!
James - can you please create an LLM course? I'll pay for it ❤
I’m doing it for free :)
th-cam.com/play/PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F.html
@@jamesbriggs thank you!
Gracias!
@@jamesbriggs common open source W
@@jamesbriggs you are the best❤
Hello James,
First of all, I want to say that your video was great, as always. However, I have a couple of questions regarding the setup you demonstrated. I noticed that you only added the OpenAI key, but I didn't see you adding a Hugging Face key. Could you please clarify how the system is working without the Hugging Face key?
Additionally, I'm curious about the source of the inference. Could you please explain where the inference is coming from and who is hosting these models? Is HuggingFace hosting it for free?
Thank you for your time and clarification.
hey, so no need to include a hugging face key because we're not using their inference service here, we're actually running MPT-7B locally - so when we run Hugging Face transformers locally no API keys are needed
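A minimal sketch of what running an open model locally with Hugging Face transformers looks like, just to illustrate why no API key is involved. The model ID and settings here are assumptions and not necessarily what the video uses.

```python
# Sketch: load and run an open LLM locally with transformers.
# Nothing here calls Hugging Face's hosted inference service, so no API key is needed.
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "mosaicml/mpt-7b-instruct"  # assumed model; the video may use a different MPT-7B variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # MPT models ship custom modeling code
    device_map="auto",       # requires `accelerate`; places weights on GPU if available
)

generate = pipeline("text-generation", model=model, tokenizer=tokenizer)

print(generate("Explain what a transformer model is in one sentence.", max_new_tokens=64)[0]["generated_text"])
```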
This LLM craze is on another level !
Great video, James! 👍🏻
Thanks!
How different is it from the "Pipeline" feature that was available before? There, you just gave it a task in a pipeline, it found a default model and gave you the output.
So how is this different from that?
How did he get that background for hugging face
My only concern (I think) is that if the LLM is generating Python code, it might be possible to get it to generate malicious code. The framework would need to make sure that any generated code is very limited in scope, maybe only allowed to download models and read/write files within a sandboxed directory. It's pretty cool though.
It already is sandboxed.
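For intuition only, the kind of scope limiting being discussed can be pictured like this: generated code is evaluated against a whitelisted namespace so it can only call the provided tools. This is a toy sketch, not the library's actual mechanism, and restricting exec like this is not a real security boundary.

```python
# Toy illustration of scope-limited execution: the generated code only sees
# the tool functions we explicitly expose. NOT a real security boundary and
# NOT the library's actual implementation.
def image_generator(prompt: str) -> str:
    return f"<image generated for: {prompt}>"

ALLOWED_TOOLS = {"image_generator": image_generator}

generated_code = 'result = image_generator("a boat on a river")'

namespace = {"__builtins__": {}}  # strip builtins so arbitrary imports/IO fail
namespace.update(ALLOWED_TOOLS)

exec(generated_code, namespace)
print(namespace["result"])
```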
Awesome content as always!
Great info James. Could you share how to fine-tune an LLM on our own data? Thanks!
does the agent decide which model to download and use or will it use the model you import?
I'm pretty sure this is a dumb question, but how does openai's GPT know which HF tools to use? GPT training stopped well before these tools were available, right? What am I missing? Thank you for making this video!
it's in the prompt, they insert the tool name and a short description of the tool and when to use it :)
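Roughly speaking, the tool names and descriptions are rendered into the prompt like this. The template below is purely illustrative, not the exact one Hugging Face ships.

```python
# Illustrative only: how tool names/descriptions might be inserted into the prompt.
tools = {
    "image_generator": "Generates an image from a text description.",
    "image_captioner": "Describes the contents of an image in text.",
    "text_to_speech": "Reads a piece of text out loud and returns audio.",
}

tool_section = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())

prompt = f"""You have access to the following tools:
{tool_section}

Write Python code that calls these tools to complete the task below.

Task: Generate an image of a boat, then describe it out loud.
"""

print(prompt)
```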
Very cool! Wondering why the OpenAI agent pulls from Hugging Face models, I thought that it would use their plugins. How does it work? It's really impressive how lean the API is
I love that the notebook name is Untitled17.ipynb. Btw awesome tutorials.
fantastic - great video!
Love the viz of those huge neural networks! Where can we get similar things?
Simple and impressive library! Maybe gife is some representation between elephant and giraffe. Or maybe it's just nonsense...
Thanks for your great work! 👏 Do you plan to make a video on using NVIDIA NeMo Guardrails with LangChain?
Really well done and executed, I am wondering whether LLMs and agents can interact with a private Knowledge Base (Graphs) to optimize output results?
Thank you for your support and dedication to the community, really appreciated
of course they can!
Glad about OSNAP already being a thing
So do you think it is better than LangChain? Also, I want to know how they managed to get an LLM to use tools, and how I can do that too. For example, if I want to make a LLaMA model learn how to use Photoshop, how can I do this? I really want to know.
In the prompt template you have something like: "You have the following tools at your disposal [insert table of tool names and descriptions], here are examples of how to use them [insert examples], when appropriate use tools to complete the user's request. Here is the request [insert user prompt here]". You can do better than this; I've not messed around with prompting much, but you probably want it to reason about which tools would be most appropriate before selecting and using a tool.
Replace the various parts of the template with info from your tools file and the user prompt when the user prompts the system.
Whenever the output contains the use of a tool, you stop the LLM output, send the command(s) to the tool, and include a description of the result back in the conversation, with a specific way for the LLM to refer back to this result (for follow-up requests). Have the LLM continue until the task is done or has been determined to be beyond the capabilities of the LLM plus tools.
Obviously there is more to it than this. You want to display results, deal with follow-up requests properly, create all the descriptions/examples, and write the code that turns the LLM's request to use a tool into actual commands the tool can process, deal with results and errors, etc., but this is the basic idea.
You can do this with LangChain, but this is already set up as an agent and is already set up to work with all the models on Hugging Face.
If you understand how to interact with Photoshop using text commands, then you can teach an LLM how to do it using a prompt. I'm sure you could modify the tools file to make it output commands for Photoshop, but you would probably have to implement a way for it to send those to Photoshop (and return results) yourself (or just wait until someone else does it). I don't know, though, because I've not even looked at this yet.
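A bare-bones sketch of the loop described above. Every name here (call_llm, search_web, the TOOL:/RESULT: convention) is made up for illustration; the real Transformers Agents and LangChain implementations handle far more.

```python
# Bare-bones tool-use loop, illustrative only. `call_llm` stands in for any
# chat/completions API; the tool-call format and parsing are hypothetical.
import re

def search_web(query: str) -> str:
    return f"<search results for '{query}'>"

TOOLS = {"search_web": search_web}

def call_llm(conversation: str) -> str:
    # Placeholder for a real LLM call (OpenAI, a local model, etc.).
    if "RESULT:" in conversation:
        return "The search results are in the conversation above."
    return 'TOOL: search_web("latest transformers release")'

def run_agent(user_request: str, max_steps: int = 5) -> str:
    conversation = f"Tools: {list(TOOLS)}\nUser: {user_request}\n"
    for _ in range(max_steps):
        output = call_llm(conversation)
        match = re.match(r'TOOL: (\w+)\("(.*)"\)', output)
        if not match:                      # no tool call -> treat as final answer
            return output
        name, arg = match.groups()
        result = TOOLS[name](arg)          # run the requested tool
        conversation += f"{output}\nRESULT: {result}\n"  # feed the result back in
    return "Stopped: step limit reached."

print(run_agent("What is the latest transformers release?"))
```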
@@kevinscales can we speak privately?
Hi James, another cool video. I wanted to buy your Udemy course but the 70% discount mentioned is not there anymore. I guess it was only there for a few hours.
The agent generates the code, then magic? It's not clear to me how to reuse the agent's code outside of a notebook.
AFAIK you wouldn't directly use the code it is generating; the agent appears to be using that on its own, it isn't really for you. What you would do is develop a UI for the agent, for example maybe hook it up to Slack, or a JS chat window, or just use it within a notebook. You would then ask it to do things, and it will automatically download the appropriate model(s) to complete that task and provide you with the output. The code that you see is really just there to help you as a developer understand what it is doing under the hood. For end-user use you wouldn't show that code, although you might log that info for troubleshooting issues reported by users.
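For context, usage at the time looked roughly like the snippet below. This mirrors the Transformers Agents API from the transformers 4.29 era, which has since been deprecated and changed, so treat it as a sketch and check the current docs.

```python
# Roughly the pattern from the original Transformers Agents release (since deprecated);
# exact imports and signatures may differ in current transformers versions.
from transformers import OpenAiAgent

agent = OpenAiAgent(model="text-davinci-003", api_key="sk-...")  # your OpenAI key

# The agent writes and executes the tool-calling code itself; you just consume
# the returned object (image, audio, text), e.g. in a Slack bot or web UI.
image = agent.run("Draw me a picture of rivers and lakes")
image.save("rivers_and_lakes.png")  # assumes the result is a PIL image
```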
Try sharing your notebook links too
uploaded now! github.com/aurelio-labs/cookbook/blob/main/gen-ai/agents/hf-agents/hf-agents-intro.ipynb
How do I get access to these example notebooks?
added here! github.com/aurelio-labs/cookbook/blob/main/gen-ai/agents/hf-agents/hf-agents-intro.ipynb
Really appreciate your tutorials/videos. Learning a ton and implementing them in one or two projects 🙂
Can you share that colab notebook? Thanks❤
just uploaded, here it is github.com/aurelio-labs/cookbook/blob/main/gen-ai/agents/hf-agents/hf-agents-intro.ipynb
So... When I tell a robot it has a hammer, won't everything look like a nail?
I haven't tested it enough to know how this implementation functions, but in LangChain we can modify the descriptions to specify when exactly to use the tool, and that works well
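For example, in LangChain (as of the versions around this video; the API has since moved on) the tool's description is what steers the agent, so you can spell out exactly when it should be used. The tool name and function below are made up for illustration.

```python
# LangChain-style tool with an explicit "when to use" description.
# API as of the langchain 0.0.x era; newer versions differ.
from langchain.agents import Tool

def search_docs(query: str) -> str:
    # Hypothetical retrieval function standing in for a real search backend.
    return f"<doc snippets for '{query}'>"

docs_tool = Tool(
    name="internal_docs_search",
    func=search_docs,
    description=(
        "Use this ONLY when the user asks about our internal documentation. "
        "Input should be a plain-text search query."
    ),
)
```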
@@jamesbriggs Yeah, that sounds OK... I'm still at the stage of running prefab chains with LangChain... But what seems needed is for an agent, even just a "chat" process, to request disambiguation or more guidance on the original prompt, i.e. ask follow-up questions. I'm sure there must be a "tool" for that. It's the major flaw with AutoGPT concepts as well. Hard to keep up with all the new stuff!
there's no slowing down in this space.... hopefully I will still have a job in 2 years... (I'm an ML engineer)
Every day there's something new, it's crazy
build agents to do the job for you, just don't tell your boss
I think the animal turned gold because you used "lazer" instead of "laser"?
Maybe that one step failed because you misspelled laser 😊
Oops 😅
pretty sad this was not capitalized on more:
the concept of agents running LLMs is better than LLMs running agents!
LLMs, transformers, diffusers, etc. as tools! ...
once the model is in your environment,
this is a good way to access them - even so, I feel everything is backwards!
should we be hitting the RAG, or should the RAG be a tool as well? if the RAG was given to the model, it would be better than all the agents querying the RAG and then passing it to the model!? crazy!!
Isn’t huggingface actually a company?
wow, cool, a cheap API? OpenAI is a little expensive.
From my experience the system is far from functional. The prompt templates used by HF are too limited, and mostly only the prompts that they display in their official notebook work well. Straying from these prompt templates leads to errors in the tools. Many things to be fixed here.
yeah I do agree, plenty of work to be done but I also wouldn't expect anything more from the first version - I think it's very promising
Losing your job to AI agents is unacceptable. AI job loss is here. So is AI used as a weapon. Can we please find a way to cease AI / GPT? Or begin pausing AI before it's too late?
They're not that good, actually; the more you use these things, the more you'll realize how limited they are - maybe better models that do cause job loss are coming, but I don't think today's models will
Perhaps it is our society that needs to change ? In this society, everything is a weapon..
9:28 For a traditional software engineer, this is a scary non-deterministic form of metaprogramming that is impossible to end-to-end test 😂 "I'm going to use the following tool: 'hack_my_computer'"