Lovely! The best video on Bedrock for new people I encountered. Thank you!
Thanks for this wonderful explaination
Awesome video! That really helped me out and explained how Bedrock works.
Great overview as usual, Thank you! Please create a video on how to deploy these via SAM
Noted
Great explanation! Thank you!
Really great video, how we add postprocessing for example to send telemetry to Langfuse?
Amazing video! Can you please show more about knowledge bases and agents, specifically how to keep the knowledge base updated when the information in a database is constantly changing?
Noted 🔥
@@foobar_codes For knowledge bases: when information is relatively static, like company policy documents and other repositories, use a knowledge base. When you need dynamic or real-time information, use agents, which is exactly what this demo was about. Just as you saw the agent call an API for the weather, you can also use a Lambda to call an internal web service that reads data from a database, using the same mechanism shown here. When the policy PDFs change, you need to update the knowledge base. This can be driven by an S3 event trigger, i.e. on PUT or UPDATE the trigger can fire a compute step that calls the knowledge base update (Step Functions, Lambda, pipelines, etc.), whatever works best for you.
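To make the S3-trigger approach described above concrete, here is a minimal sketch of a Lambda handler that re-syncs a Bedrock knowledge base whenever a document is uploaded. The knowledge base and data source IDs are placeholders you would replace with your own.

```python
def object_keys(event):
    """Extract the uploaded object keys from an S3 event notification payload."""
    return [record["s3"]["object"]["key"] for record in event.get("Records", [])]

def lambda_handler(event, context):
    # boto3 is available by default in the AWS Lambda Python runtime.
    import boto3

    bedrock_agent = boto3.client("bedrock-agent")
    # Kick off an ingestion job so the vector store picks up the changed documents.
    job = bedrock_agent.start_ingestion_job(
        knowledgeBaseId="KB_ID_PLACEHOLDER",  # placeholder: your knowledge base ID
        dataSourceId="DS_ID_PLACEHOLDER",     # placeholder: your S3 data source ID
        description="Re-sync after upload of: " + ", ".join(object_keys(event)),
    )
    return {"ingestionJobId": job["ingestionJob"]["ingestionJobId"]}
```

Wire this Lambda to the bucket's `s3:ObjectCreated:*` notification; Step Functions or a pipeline works just as well if you need retries or batching.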
If I have a SaaS platform where each account can have a file (PDF, Excel, DOC, etc.) as a knowledge base, the question is: do we need a separate model for each user, or should we train a single model with the knowledge and use context to distinguish and split the information between accounts?
Hello Ma'am!!! I have an issue creating the knowledge base. When I create it, it shows "failed to create OpenSearch Serverless collection". Even though I gave the user full access to Bedrock and the OpenSearch service, and allowed the S3 bucket to be accessed by the OpenSearch service, the issue is not fixed. Can you help me resolve it? I'm struggling with this, please help!
Try adding all the aoss policies for the OpenSearch Serverless service to the IAM user. This worked for me. (Sorry for my English.)
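For reference, a broad identity policy like the sketch below covers the OpenSearch Serverless (`aoss`) actions; scope the resource down for production. Note that Bedrock also needs a data access policy inside the OpenSearch Serverless collection itself, which this identity policy does not replace.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOpenSearchServerless",
      "Effect": "Allow",
      "Action": "aoss:*",
      "Resource": "*"
    }
  ]
}
```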
@@leandros3712 Hey, I did it and it worked. Thanks 😄!!! BTW, I don't know why it showed a CORS error. Anyways, thank you!!!
Is there any chance for me to contact you? Since I'm a beginner, sharing knowledge would be vital for my progress!!! Pardon my English 😅
I have this same issue!
Hello there, I need help with how to write the Lambda function. I tried uploading the zip from your shared GitHub, but I get the error "Access denied when calling Bedrock. Check your request permissions and retry the request." when trying to access the agent.
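One common cause of that error is the calling identity lacking Bedrock permissions. A minimal sketch of an identity policy (the wildcard resource should be scoped down in practice):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBedrockInvoke",
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel", "bedrock:InvokeAgent"],
      "Resource": "*"
    }
  ]
}
```

If the failure happens when the agent calls your Lambda instead, the Lambda also needs a resource-based policy allowing the `bedrock.amazonaws.com` service principal to invoke it.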
I still have a question: is it possible to create, for example, 3 knowledge bases and point them at the same instance?
Very helpful, thanks!
However, I have a little question... In the last part of the video you say that it is possible to override the prompt with the decisions and actions the agent has taken. What is that for? Is it maybe possible to do reinforcement learning on the model that way, by modifying this trace?
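The trace is primarily for observability and debugging rather than training. As a sketch (assuming the streaming response shape returned by boto3's `bedrock-agent-runtime` `invoke_agent` call with `enableTrace=True`), you can separate the trace events from the answer chunks like this:

```python
def split_completion(completion_stream):
    """Split an invoke_agent event stream into the answer text and trace events."""
    answer_parts, traces = [], []
    for event in completion_stream:
        if "chunk" in event:
            # Answer text arrives as UTF-8 encoded byte chunks.
            answer_parts.append(event["chunk"]["bytes"].decode("utf-8"))
        elif "trace" in event:
            # Trace events expose the agent's reasoning steps, tool calls, and prompts.
            traces.append(event["trace"])
    return "".join(answer_parts), traces
```

You would call this as `split_completion(response["completion"])` and log or inspect the traces to see why the agent chose a given action.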
Hi! Thanks for this video, You rock. I’m having trouble setting up the role with proper policies to create a KB, how did you do this?
Great video! Thank you! 😁
Glad you liked it!
That was brilliant and helped me so much! A big thank you...
Glad it helped!
Is it possible to optimize the cost for the OpenSearch service by using AWS Bedrock Agents? Following this example, a charge of approximately USD $6 per day is incurred, which amounts to nearly USD $180 per month without using the agent 😢
For playing around you can use Pinecone, it has even a free tier.
Hello! What about the pricing of the knowledge base?
Can you help me with agents in Amazon Bedrock for DevOps tasks? How would I implement that?
Please, can you do a video on integrating the chatbot into a website? I want my customers to be able to see the chatbot when they visit my website, so that they can ask it questions about my product.
How can I deploy it in my website?
Thanks for the explanation 🙏
Glad it was helpful!
Great video. Can you make a follow-up on the client side using this, like you talked about at the end of your video?
Noted.
Your CSV data source rows have city names followed by latitude & longitude in different spreadsheet columns. Question: HOW does the model know it was a CSV file, and HOW did you teach (or WHAT taught) the model where the lat/lon values for a city were?
IOW: WHAT part of AWS understands that the imported S3 object is a CSV file, and further understands some relation between the spreadsheet's city column and the spherical point mapping columns? Is this accomplished by the first-row titles in this spreadsheet?
More debugging: THIS DEMO DOES NOT WORK on my Bedrock when testing the knowledge base (encoded with Titan Text v2, just like she did) consisting of the same exact worldcities.csv spreadsheet and choosing Anthropic for the model, just like she did. It is NOT finding a lot of the cities in the spreadsheet.
Furthermore, worldcities.csv has the lat/long for Montevideo as -34.8836 / -56.1819, not -34.7667 / -56.3806, which is actually Ciudad del Plata in Uruguay. Since the temperature and Top-P / Top-K can't be adjusted for testing, I'm not sure if it's a small hallucination or just WRONG.
@@LarryHale-x3v You can use some prompt engineering techniques: try adding an example, giving one row and showing how you expect it to be parsed, and give specific instructions like "say 'I cannot find the city' if it is not in the dataset". Also, the data needs to be in the CSV in the first place. Note that she used a public dataset that was free; there are also paid datasets with a lot more cities populated, and for production you would use a dataset that has all the information.
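As an illustration of the kind of instruction block described above (the wording and the column layout of the sample row are assumptions, not taken from the video):

```
You answer questions about cities using the attached worldcities.csv data.
Each row looks like: "Tokyo","Tokyo",35.69,139.69,"Japan",...
The third and fourth columns are the latitude and longitude of the city.
If a city is not present in the dataset, reply exactly: "I cannot find the city."
```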
I hate AWS Lex. It's horrible.