Very informative. Volume is slightly low for some reason.
Sorry about the volume, I will check it again before uploading other videos.
Hello! Thank you very much for taking the time to teach how to use this technology. Could you show how to read CSV files from an AWS S3 bucket with AWS Bedrock? For those of us who work with BBDD it would be very useful. Thank you very much!
Hello, I am glad you liked the video.
What do you mean by reading the CSV file from S3 using Bedrock? Are you referring to Bedrock Knowledge Base, where files located in S3 are read and their vector embeddings are stored in an Amazon OpenSearch index? Could you elaborate?
Also, what is BBDD?
Thanks
@@MyCloudTutorials Hey, I would be interested in the case you mentioned, working with the Knowledge Base. It would also be great to include a Lambda that auto-syncs the base, connected to an S3 upload trigger. Conversation memory would be an interesting topic too; I'm not sure if it is automatically included when working with Bedrock, but there are LangChain packages that make it easy. If you have time, I would also love to see the response from Bedrock being saved to S3 as well. Thanks in advance!
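On the conversation-memory question above: Bedrock's InvokeModel API is stateless, so memory is not automatic; you (or a LangChain memory class) have to resend the prior turns with each request. A minimal hand-rolled sketch of that idea, with no AWS calls, might look like this (class and method names are illustrative, not a real library API):

```python
# Minimal conversation-memory sketch. Bedrock's InvokeModel API is
# stateless, so "memory" just means replaying earlier turns in each
# new request; LangChain's memory classes automate the same pattern.
class ConversationMemory:
    def __init__(self, max_turns: int = 10):
        self.messages = []          # alternating user/assistant messages
        self.max_turns = max_turns  # cap history to respect context limits

    def add(self, role: str, content: str) -> None:
        """Record one turn, trimming to the most recent max_turns exchanges."""
        self.messages.append({"role": role, "content": content})
        self.messages = self.messages[-2 * self.max_turns:]

    def as_payload(self, new_user_message: str) -> list:
        """History plus the new user turn, in the messages format
        Anthropic models on Bedrock expect."""
        return self.messages + [{"role": "user", "content": new_user_message}]
```

After each model call you would `add("assistant", reply)` and pass `as_payload(...)` as the `messages` field of the next request body.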
@@MyCloudTutorials Hello! Thank you very much for taking the time to respond. Now that I've looked around your entire channel, I mean doing exactly the same thing you did in this video, th-cam.com/video/KFibP7KnDVM/w-d-xo.html (name: Chat with your PDF - Gen AI App - With Amazon Bedrock, RAG, S3, Langchain and Streamlit [Hands-On]), but for files in CSV format. Have a good day, greetings
The idea is that you can interpret the data coming from the CSV file in S3, but using Bedrock so as to prevent data leakage.
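A rough sketch of that flow: pull the CSV from S3 with boto3, fold the rows into a prompt, and send it to a Bedrock model. The bucket, key, and model ID are placeholders, and running the Bedrock call requires AWS credentials plus model access enabled in the account:

```python
# Sketch: read a CSV from S3 and ask a Bedrock model about it.
# Bucket/key/model ID are placeholders; real use needs AWS credentials
# and Bedrock model access enabled in your account.
import csv
import io
import json

def csv_to_prompt(csv_text: str, question: str, max_rows: int = 50) -> str:
    """Fold CSV rows and a user question into a single prompt string."""
    rows = list(csv.reader(io.StringIO(csv_text)))[:max_rows]
    table = "\n".join(", ".join(row) for row in rows)
    return f"Here is a CSV table:\n{table}\n\nQuestion: {question}"

def ask_bedrock_about_csv(bucket: str, key: str, question: str) -> str:
    import boto3  # imported here so csv_to_prompt stays usable without AWS deps
    s3 = boto3.client("s3")
    csv_text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    bedrock = boto3.client("bedrock-runtime")
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 500,
        "messages": [{"role": "user",
                      "content": csv_to_prompt(csv_text, question)}],
    })
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        body=body,
    )
    return json.loads(resp["body"].read())["content"][0]["text"]
```

For large CSVs you would summarize or chunk instead of inlining rows, which is where the Knowledge Base / RAG approach discussed below becomes the better fit.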
Knowledge Base is totally doable. From what I have seen, it takes an S3 bucket's content, creates vector embeddings, and stores them in Amazon OpenSearch (the Elasticsearch-compatible managed service).
I haven't seen an example of what happens when you add more files to S3.
In that case, definitely a Lambda trigger, which can perform similar steps.
That would be a good tutorial to make; I will add it to the TODO list and add a comment here when it's ready.
Thanks
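For the auto-sync idea, one hedged sketch: wire the bucket's ObjectCreated event to a Lambda that starts a Knowledge Base ingestion job via the `bedrock-agent` API, which re-crawls the data source and updates the vector index. The Knowledge Base and data source IDs below are placeholders read from environment variables:

```python
# Sketch of a Lambda handler that re-syncs a Bedrock Knowledge Base
# whenever a new object lands in the S3 data source bucket. KB and
# data source IDs are placeholders; the bucket must be configured to
# trigger this Lambda on s3:ObjectCreated:* events.
import os

KB_ID = os.environ.get("KNOWLEDGE_BASE_ID", "kb-placeholder")
DS_ID = os.environ.get("DATA_SOURCE_ID", "ds-placeholder")

def extract_new_keys(event) -> list:
    """Pull the uploaded object keys out of an S3 event notification."""
    return [rec["s3"]["object"]["key"] for rec in event.get("Records", [])]

def lambda_handler(event, context):
    import boto3  # imported here so extract_new_keys stays testable locally
    new_keys = extract_new_keys(event)
    client = boto3.client("bedrock-agent")
    # start_ingestion_job re-crawls the data source and updates the
    # vector index (e.g. Amazon OpenSearch Serverless) with new files.
    job = client.start_ingestion_job(
        knowledgeBaseId=KB_ID,
        dataSourceId=DS_ID,
        description=f"Auto-sync after upload of: {', '.join(new_keys)}",
    )
    return {"ingestionJobId": job["ingestionJob"]["ingestionJobId"]}
```

Ingestion jobs run asynchronously, so a production version would likely debounce bursts of uploads rather than starting one job per object.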
Your content is okay, but it would be great if you could increase the volume.
There was an issue with this video's volume; I will be careful with future videos. Thanks for the feedback!