Mike speaks very fast, but he uses very simple English, so any non-native speaker can easily understand him. Thanks, Mike!
It is so easy to follow that I play it at 1.5x and still understand everything.
Agents are the future. I have built so many agents to automate email, calendars, and PRs using just LlamaIndex, Composio, and Helicone.
I'm a manager for the team responsible for all of my company's GenAI features. I'll definitely be asking everyone on my team to watch this video and any others related to this. Looking forward to seeing more vids like this one.
There is no link to the GitHub repo in the description.
Excellent plain-English explanations, and a real live demonstration instead of the usual pre-recorded, edited-down videos. Excellent!
We're glad you like it! 😀
This video was really good. I'm trying to build an agent for my internship project, and you explained everything very well.
I'm also looking forward to including more complex functions using Lambda, though I'm still unsure how to do that.
If I can also manage to use some kind of RAG or knowledge base, that would be cool.
Thank you very much for this video.
I just had a small doubt about the model access permissions, but Google solved that.
The code example and the tour of Bedrock were perfect!
Thank you!
Literally in love with your videos, Mike! I always learn something new, and in an easy-to-digest way.
That's fantastic to hear! 😀 ☁️ 🙌
You mentioned that you have the full weather example available in the description, but I still haven't been able to locate it. I went to the linked page in the description, but it's not there. Have I missed something?
I love that o1-mini can do everything he said LLMs can't do. I love this space in tech. You can barely get content out before it is outdated, lol.
I may be asking this question out of context here, but what I want to understand is how we are reading parameters directly in the code.
For example, we directly used event['agent'] or event.get('parameters'), so is there a standard for how the agent invokes the Lambda function? Is there a fixed template somewhere that defines how (event, context) is prepared before the agent calls the Lambda function?
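In case it helps others asking the same thing: yes, there is a fixed shape. When an action group is defined with function details, the agent invokes the Lambda with an event roughly like the one sketched below (field names are from my reading of the AWS docs, so verify them against the current documentation), which is why the code can index into event['agent'], event['function'], and event['parameters'] directly:

```python
# Roughly what the agent sends to the action-group Lambda (values are illustrative).
sample_event = {
    "messageVersion": "1.0",
    "agent": {"name": "my-agent", "id": "AGENT_ID", "alias": "ALIAS_ID", "version": "1"},
    "sessionId": "session-123",
    "inputText": "What's the weather in Seattle?",
    "actionGroup": "weather-actions",
    "function": "get_weather",
    "parameters": [
        {"name": "city", "type": "string", "value": "Seattle"}
    ],
    "sessionAttributes": {},
    "promptSessionAttributes": {},
}

# So inside lambda_handler(event, context) you can read the fields directly:
function_name = sample_event["function"]
params = {p["name"]: p["value"] for p in sample_event.get("parameters", [])}
print(function_name, params)  # get_weather {'city': 'Seattle'}
```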
I still have some questions about the bigger picture of the Bedrock architecture... I understand agents and their use cases, but I thought the idea when building an app was to "front-end" the agents with a broader-context FM that would be the actual chatbot interface? In other words, I am working on an application that will have a number (maybe 6 or so) of specialized agents that I thought would be invoked by the chat-interface FM on an as-needed basis. Also, can agents interact with each other in the background? If I can't front-end the agents, I would need a chat interface for every agent I build, which I very much doubt is how the architecture is designed. Do you have something that shows a complete end-to-end application encompassing all the components?
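Not the video author, but one way to read it: your own application can be the single chat front end, and it calls whichever specialized agent it needs through the bedrock-agent-runtime InvokeAgent API. A rough sketch of that pattern (the agent IDs are placeholders, and the response-stream handling should be double-checked against the current boto3 docs):

```python
import uuid
import boto3

# One chat front end can route to any number of agents by agent ID / alias ID.
client = boto3.client("bedrock-agent-runtime")

def ask_agent(agent_id: str, agent_alias_id: str, user_text: str, session_id: str) -> str:
    response = client.invoke_agent(
        agentId=agent_id,               # placeholder IDs - use your own
        agentAliasId=agent_alias_id,
        sessionId=session_id,           # reuse the same ID to keep conversation state
        inputText=user_text,
    )
    # The completion comes back as an event stream of chunks.
    parts = []
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)

# Example: the app decides which specialized agent to call for this turn.
session = str(uuid.uuid4())
print(ask_agent("AGENT_ID_WEATHER", "ALIAS_ID", "What's the weather in Seattle?", session))
```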
Great job. Loved this explanation of Agents. Thanks!
Glad you like it! 😁
So good. Keep up these excellent videos!!!
What about a demo using multiple agents, multiple LLMs, and LangChain and LangSmith for tracing?
Great job. Very easy to follow!
Thank you! 😊 🤝 ☁️
Hi, great tutorial!
I have one question: I'm testing an agent with a Lambda function I already had.
The work the Lambda does runs fine and the Lambda is not failing, but in the agent console I get this response: "The server encountered an error processing the Lambda response. Check the Lambda response and retry the request."
Right now the Lambda is returning JSON; I have also tested just returning a string, and both cases give me that message as the response.
What could be happening in this case?
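In case anyone else hits this: as far as I can tell, the agent rejects plain JSON or bare string return values; the Lambda has to wrap its output in the response envelope Bedrock Agents expect. A minimal sketch of the wrapping (my_existing_process is just a stand-in for whatever your Lambda already does, and the field names should be checked against the current AWS docs):

```python
import json

def my_existing_process(event):
    # Placeholder for whatever the existing Lambda already computes.
    return {"status": "ok", "detail": "result of the existing process"}

def lambda_handler(event, context):
    result = my_existing_process(event)

    # Returning raw JSON or a bare string triggers the "error processing the
    # Lambda response" message in the agent console; Bedrock Agents expect this
    # envelope, echoing back the actionGroup and function names from the event.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "function": event["function"],
            "functionResponse": {
                "responseBody": {"TEXT": {"body": json.dumps(result)}},
            },
        },
    }
```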
How about SerpApi? Can it be integrated? I am talking about access permissions, since SerpApi needs its own API key.
Thanks for the video. How can I call a Bedrock prompt flow from the code?
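Not sure if this is what you're after, but Bedrock flows can be invoked from code via the bedrock-agent-runtime client's InvokeFlow API. A rough sketch; the flow and alias IDs and the input node name are placeholders, and the exact request/response field names should be verified against the current boto3 documentation:

```python
import boto3

client = boto3.client("bedrock-agent-runtime")

def run_flow(flow_id: str, flow_alias_id: str, text: str) -> str:
    response = client.invoke_flow(
        flowIdentifier=flow_id,             # placeholder flow ID
        flowAliasIdentifier=flow_alias_id,  # placeholder alias ID
        inputs=[{
            "nodeName": "FlowInputNode",    # assumes the default input node name
            "nodeOutputName": "document",
            "content": {"document": text},
        }],
    )
    # The result arrives as an event stream; collect the flow output events.
    outputs = []
    for event in response["responseStream"]:
        if "flowOutputEvent" in event:
            outputs.append(str(event["flowOutputEvent"]["content"]["document"]))
    return "".join(outputs)

print(run_flow("FLOW_ID", "FLOW_ALIAS_ID", "Summarize today's weather report."))
```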
Does anyone have any comments about latency? I'm using agents with Claude 3.5 Sonnet v1 with an advanced prompt. The conversation is fine, and the API access works too, but the latency is very high; simple answers are sometimes close to 12 seconds. Any ideas?
Agents are cool. The Python integration part is the one that gives me pause.
Can the model take the function output and build a response based on it? E.g., if you were to ask what the time was 2 hours ago, can it get the current time from the function and apply its own logic to decide how to respond to the user? Or does it just return whatever comes back from the Lambda exactly as-is?
Interesting demonstration of the capabilities. However, I didn't understand why we created a group with two functions inside it, and why WE wrote the logic mapping the LLM-discovered intent to the actual function to be called. Wouldn't it be possible to have two Lambda functions that each perform ONE separate task, and let the LLM discover which Lambda function to call and what parameters to provide?
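As far as I understand, both setups work: you can attach a separate Lambda to each action group, or keep one Lambda for a group of related functions. Either way the agent (LLM) is the one deciding which function to call; it passes its choice in event["function"], so the Lambda-side "mapping" is just a lookup rather than intent logic. A rough sketch of that dispatch (function names are purely illustrative):

```python
import datetime

def get_time(params):
    # Illustrative task #1: current UTC time.
    return datetime.datetime.now(datetime.timezone.utc).strftime("%H:%M:%S")

def get_weather(params):
    # Illustrative task #2: stubbed weather lookup.
    return f"Sunny in {params.get('city', 'somewhere')}"

# The agent chooses the function and sends its name in the event;
# the Lambda only looks up the matching handler.
DISPATCH = {"get_time": get_time, "get_weather": get_weather}

def lambda_handler(event, context):
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    body = DISPATCH[event["function"]](params)
    # Wrap in the standard Bedrock Agents response envelope.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "function": event["function"],
            "functionResponse": {"responseBody": {"TEXT": {"body": body}}},
        },
    }
```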
This is cool. Can this be modified to use Alexa, so that the input comes in through voice as slots?
That's conceivable; have an experiment with it. The Alexa platform already extracts the intent from the input, so input coming through Alexa becomes deterministic... but that sounds like fun to play around with.
Great presentation. Can agents communicate with each other for assistance?
I am getting an error when asking for the time.
Error:
"""
Your request rate is too high. Reduce the frequency of requests.
"""
You're missing "import datetime" in the Lambda function code
What if your Python function calls another function?
Amazing ✌
😀 🙌
It's missing "import datetime".
Do you guys know about OpenAPI at AWS? This is just too complicated...
Honestly, what a useless tool; it's just a bunch of UI for drag and drop. Ideally, to build complex solutions you would require many agents. This can only solve RAG-type systems, i.e. a typical Q&A system.
Agents are mainly for RAG; what were you expecting?