My Cloud Tutorials
Python Multithreading Hands-on Tutorial - Part 1
In this video, we will learn about multithreading in Python. First, we create a simple Python program that performs 2 distinct tasks without threads, simulating I/O blocking with the sleep method.
We then rewrite the same program with threading, using the Thread class, and see how it effectively multitasks: while one thread is waiting (sleeping) in one function, the other thread performs work in the second function.
#python #multithreading #tutorial #tutorials
Views: 107
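A minimal sketch of the two versions described above, assuming two illustrative task functions (the function names and sleep durations here are placeholders, not taken from the video):

```python
import threading
import time

def download_file():
    # Simulate an I/O-bound task (e.g. a network wait) with sleep
    print("download: started")
    time.sleep(2)
    print("download: finished")

def process_data():
    # A second, independent task that can run while the first one waits
    print("process: started")
    time.sleep(2)
    print("process: finished")

# Sequential version: roughly 4 seconds total
start = time.time()
download_file()
process_data()
print(f"sequential took {time.time() - start:.1f}s")

# Threaded version: the two sleeps overlap, roughly 2 seconds total
start = time.time()
t1 = threading.Thread(target=download_file)
t2 = threading.Thread(target=process_data)
t1.start()
t2.start()
t1.join()
t2.join()
print(f"threaded took {time.time() - start:.1f}s")
```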

Videos

CKAD exam practice question - Create kubernetes (K8s) job with completion and parallelism
196 views · 4 months ago
In this video, we will solve a practice CKAD exam question related to Kubernetes Jobs. The question has the following 2 parts: Part 1) Your on-demand video processing team needs a Job template located at vod-job.yaml. This Job should run image busybox:1.35.0 and execute sleep 2 && echo "video processing done". It should be in namespace video-on-demand, run a total of 5 times and should execute 2 ru...
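The exam answer itself is a YAML manifest plus kubectl commands; as a rough sketch of the same settings using the official Kubernetes Python client (assuming `pip install kubernetes`, a working kubeconfig, and that the namespace already exists), the Job from the question might look like this:

```python
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="vod-job", namespace="video-on-demand"),
    spec=client.V1JobSpec(
        completions=5,    # run a total of 5 times
        parallelism=2,    # at most 2 pods run concurrently
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="vod",
                        image="busybox:1.35.0",
                        command=["sh", "-c", 'sleep 2 && echo "video processing done"'],
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="video-on-demand", body=job)
```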
Kubernetes CKAD Exam Question - Create Namespace, Pod and check status using Kubectl and Jsonpath
139 views · 4 months ago
In this video, we will solve a practice CKAD exam question. The question has the following 2 parts: Part 1) Create a single Pod of image httpd:2.4-alpine3.20 in Namespace application (the namespace may not exist). The Pod should be named webserver1 and the container should be named webserver1-container. Part 2) Create a shell script webserver1-status.sh that uses kubectl to output the status of t...
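For reference, the status check that Part 2's shell script performs with kubectl and jsonpath could be sketched with the Kubernetes Python client roughly as follows (object names come from the question; this is an illustration, not the exam answer):

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Rough equivalent of:
#   kubectl get pod webserver1 -n application -o jsonpath='{.status.phase}'
pod = v1.read_namespaced_pod(name="webserver1", namespace="application")
print(pod.status.phase)  # e.g. "Running"
```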
Prediction App - Machine Learning , Serverless - Amazon S3, API Gateway, Lambda [Hands-On Coding]
874 views · 5 months ago
In this hands-on video, we will build an end-to-end ML prediction application. Timestamps:
00:00:00 - Project introduction and architecture
00:04:31 - Exploratory data analysis, train and test the classifier model
00:34:48 - Build backend - Upload the model file to S3, create the Docker image and Lambda, and test
00:56:50 - Build frontend - Create the API Gateway and React application
01:16:16 - Recap
We will buil...
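A rough sketch of what such a backend Lambda handler might look like, assuming a scikit-learn model serialized with joblib; the bucket, key and payload names here are placeholders, not the ones used in the video:

```python
import json
import boto3
import joblib

s3 = boto3.client("s3")
MODEL_BUCKET = "my-model-bucket"          # placeholder bucket name
MODEL_KEY = "models/classifier.joblib"    # placeholder object key
_model = None                             # cached across warm invocations

def _load_model():
    global _model
    if _model is None:
        local_path = "/tmp/classifier.joblib"   # Lambda's writable directory
        s3.download_file(MODEL_BUCKET, MODEL_KEY, local_path)
        _model = joblib.load(local_path)
    return _model

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string
    body = json.loads(event.get("body") or "{}")
    features = body["features"]            # e.g. [[5.1, 3.5, 1.4, 0.2]]
    prediction = _load_model().predict(features).tolist()
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},  # for the React frontend
        "body": json.dumps({"prediction": prediction}),
    }
```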
Create AWS Lambda Function Using Docker Image - Hands on tutorial
3.5K views · 5 months ago
In this video, we will learn how to create an AWS Lambda function using a Docker image and run it as a Docker container. Why do we need this? We will start by discussing some challenges that we face while creating Lambda functions from a zipped archive. We will also discuss a blog post which shows the potential performance improvement in Lambda cold-start time for large packages. Timestamps: 00:00 - Introductio...
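As a hedged illustration of the final step, a Lambda function can be created from a container image with boto3 roughly like this (the image URI and role ARN are placeholders, and the image must already be pushed to ECR):

```python
import boto3

lambda_client = boto3.client("lambda")

# Placeholders: substitute your own account id, ECR repo, tag and role ARN.
image_uri = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-prediction-app:latest"
role_arn = "arn:aws:iam::123456789012:role/my-lambda-execution-role"

response = lambda_client.create_function(
    FunctionName="prediction-from-container",
    PackageType="Image",            # container image instead of a zip archive
    Code={"ImageUri": image_uri},   # no Runtime/Handler for image-based functions
    Role=role_arn,
    Timeout=60,
    MemorySize=1024,
)
print(response["FunctionArn"])
```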
How to connect Visual Studio Code to AWS EC2 over SSH
1.8K views · 5 months ago
In this how-to tutorial, I show how to connect Visual Studio Code to an Amazon EC2 instance, so that files you create, copy, or move locally are available on the EC2 instance. Why is this needed? It is always harder to write or change tab-sensitive files such as Python or YAML in the vi or nano editor on the EC2 instance; one incorrectly placed tab can break the program. Please don’t forget to like, share, c...
Text Summarization - Gen AI App , Amazon Bedrock and Lambda, Docker, Lambda, Streamlit, Python
1K views · 6 months ago
In this video, we will implement a text summarization application. This is a hands-on coding, beginner-friendly tutorial. Timestamps:
0:00 - Introduction
1:21 - AWS Console - Bedrock overview
3:07 - Write the backend Lambda function in Python and test it using Postman
17:22 - Write the frontend code with Python, Streamlit and Docker
31:26 - Conclusion
This application has 2 parts: BACKEND: - The backend...
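A rough sketch of such a backend Lambda function. It assumes the Amazon Titan Text model, whose request/response field names differ from other Bedrock models, and is not the exact code from the video:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    text = body["text"]

    # Request/response fields below follow the Amazon Titan Text format;
    # other Bedrock models (Claude, Llama, ...) use different field names.
    request = {
        "inputText": f"Summarize the following text:\n\n{text}",
        "textGenerationConfig": {"maxTokenCount": 512, "temperature": 0.2},
    }
    response = bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",
        body=json.dumps(request),
    )
    result = json.loads(response["body"].read())
    summary = result["results"][0]["outputText"]

    return {"statusCode": 200, "body": json.dumps({"summary": summary})}
```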
Gen AI Chatbot - Customer Service Agent - Hands-On Coding Tutorial - With Google Cloud Vertex AI
627 views · 6 months ago
This video shows how to build a simple customer service agent chatbot that helps customers with item returns. The model is provided with context, so it asks the customer whether the item was purchased within the last 15 days and whether it is unused. It makes the decision based on the customer's responses to those questions. The video also shows how to enhance the user experience by streaming responses from t...
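A minimal sketch of the idea with the Vertex AI Python SDK; the project, region, model name and return-policy wording are placeholders, and the exact code in the video may differ:

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholders: substitute your own GCP project and region.
vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-1.0-pro")
chat = model.start_chat()

# The "context" from the video is passed along with the first message here;
# the actual prompt used in the video may differ.
context = (
    "You are a customer service agent handling item returns. Before deciding, "
    "ask the customer whether the item was purchased within the last 15 days "
    "and whether it is unused, then approve or reject the return."
)

# stream=True yields response chunks as they are generated.
for chunk in chat.send_message(
    context + "\n\nCustomer: Hi, I'd like to return a jacket.", stream=True
):
    print(chunk.text, end="", flush=True)
```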
Chat with your PDF - Gen AI App - With Amazon Bedrock, RAG, S3, Langchain and Streamlit [Hands-On]
31K views · 6 months ago
In this video we will build a chatbot-like application with Amazon Bedrock, Docker, Python, Langchain, and Streamlit. We will use the Retrieval-Augmented Generation (RAG) approach to provide context to the large language model along with the user query, so that responses are generated from our knowledge base. Timestamps:
0:00 - Introduction
0:35 - Quick demo
3:44 - Application architecture and details
8:40 - Build Ad...
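The full source code is linked in the comments below; as a condensed sketch of the RAG flow (import paths and model IDs depend on your langchain/boto3 versions, and the PDF file name is just an example), it looks roughly like this:

```python
import boto3
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import BedrockEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1", client=bedrock)

# 1) Load the PDF and split it into overlapping chunks
docs = PyPDFLoader("what-is-arthritis.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)

# 2) Embed the chunks and build a local FAISS index (the index files get uploaded to S3)
index = FAISS.from_documents(chunks, embeddings)
index.save_local("faiss_index")

# 3) At query time: retrieve similar chunks and pass them to an LLM as context
question = "What are the common symptoms of arthritis?"
relevant = index.similarity_search(question, k=3)
context = "\n\n".join(d.page_content for d in relevant)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# prompt would then be sent to a Bedrock text model (e.g. via invoke_model)
```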
GCP Cloud Run and Artifact Registry Demo For Containerized Workloads
294 views · 6 months ago
We will use Google Cloud Artifact Registry and Cloud Run for this tutorial. We will create a Python application running in a Docker container. The Docker image is uploaded to the Artifact Registry on GCP and used to create the Cloud Run service. The demo shows the ease with which a containerized workload can be run on Cloud Run. In some other videos, we will also create event-based and batch pr...
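For illustration, a minimal Python container entrypoint compatible with Cloud Run (which injects the listening port via the PORT environment variable) could look like this; it is not the exact application from the video:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to any GET request with a plain-text greeting
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from Cloud Run\n")

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via PORT.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```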
Amazon Bedrock and Generative AI Introduction With Meta's Llama2 and Llama3 Models
741 views · 6 months ago
In this video, I discuss Amazon Bedrock. We will check the Bedrock console and pricing information. We will also create a basic Python program to invoke the Llama 2 and Llama 3 text generation models via Bedrock Runtime API calls. Amazon Bedrock offers model invocation via API calls, and it's serverless: the Amazon cloud provisions the infrastructure and we can focus on the application side. The ...
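A rough sketch of such an invocation with boto3; the model ID and the request/response field names follow the Meta Llama text-generation format on Bedrock as I understand it, and may vary by region and model version:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Model IDs differ by region and over time; check the Bedrock console
# for the models you have enabled.
model_id = "meta.llama3-8b-instruct-v1:0"

body = {
    "prompt": "Explain what Amazon Bedrock is in two sentences.",
    "max_gen_len": 256,
    "temperature": 0.5,
    "top_p": 0.9,
}

response = bedrock.invoke_model(modelId=model_id, body=json.dumps(body))
result = json.loads(response["body"].read())
print(result["generation"])
```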
Amazon Networking - AWS VPC Peering - Hands-On Demo
969 views · 2 years ago
In this video, I will demonstrate Amazon VPC Peering. We will build a hands-on example by creating the following:
1. VPCs in 2 separate AWS regions
2. Each VPC with a public and a private subnet
3. Create route tables and associate subnets
4. Launch EC2 instances in each of the subnets
5. Set up security groups
6. SSH into private instances via public instances
7. Establish VPC peering
8. Modify ...
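As a hedged boto3 sketch of the peering steps only (all IDs and CIDR blocks below are placeholders):

```python
import boto3

requester = boto3.client("ec2", region_name="us-east-1")
accepter = boto3.client("ec2", region_name="us-west-2")

# 1) Request the peering connection from the first region
peering = requester.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",
    PeerVpcId="vpc-bbbb2222",
    PeerRegion="us-west-2",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# 2) Accept it in the peer region
# (for cross-region peering you may need to wait a moment before accepting)
accepter.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 3) Add routes in both VPCs' route tables pointing at the peering connection
requester.create_route(
    RouteTableId="rtb-aaaa1111",
    DestinationCidrBlock="10.1.0.0/16",      # peer VPC CIDR
    VpcPeeringConnectionId=pcx_id,
)
accepter.create_route(
    RouteTableId="rtb-bbbb2222",
    DestinationCidrBlock="10.0.0.0/16",      # requester VPC CIDR
    VpcPeeringConnectionId=pcx_id,
)
```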
Amazon SNS Subscriptions - Filter Policy HandsOn Demo
1.7K views · 2 years ago
This tutorial is about Amazon Simple Notification Service (SNS) and how to control message delivery to subscribers using a filter policy. In this hands-on demo, we will create the following:
- An Amazon SNS topic
- Subscribe multiple emails to the topic
- Create a filter policy for each subscription, so they receive messages with particular message attributes
The scripts file used in this demo can b...
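A hedged boto3 sketch of those steps (topic name, email address and attribute values are placeholders; the email subscription must be confirmed before it receives anything):

```python
import json
import boto3

sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="demo-orders")["TopicArn"]

# Subscribe an email endpoint and attach a filter policy so it only receives
# messages whose "order_type" message attribute equals "priority".
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",
    Endpoint="someone@example.com",          # placeholder address
    Attributes={"FilterPolicy": json.dumps({"order_type": ["priority"]})},
    ReturnSubscriptionArn=True,
)

# Publish with a matching message attribute; only matching subscriptions get it.
sns.publish(
    TopicArn=topic_arn,
    Message="A priority order was placed.",
    MessageAttributes={
        "order_type": {"DataType": "String", "StringValue": "priority"}
    },
)
```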
Kubernetes GitOps using ArgoCD
723 views · 2 years ago
This is an introductory video demonstrating how ArgoCD is used for Kubernetes GitOps. In this demo, we will create a new Kubernetes cluster, set up ArgoCD, and create an application to monitor the infrastructure GitHub repo. We will also demo ArgoCD's auto-healing and pruning capability. I have created the Docker image with 2 different tags for this demo. You can check them at hub.docker.com/repositor...
Amazon VPC And Networking - Hands-On Demo
3.1K views · 2 years ago
In this video, I will demonstrate Amazon VPC and networking. I have divided this video into 4 parts; in each part we perform certain steps to understand various VPC components. Timestamps:
00:00 Introduction
03:48 Part 1: Create VPC, Internet Gateway, Public Subnet, Route Table, IAM Role, Security Group, EC2 Instance
12:59 Part 2: Create Private Subnet, Route Table, EC2 instance, Security Group ...
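A rough boto3 sketch of the Part 1 networking resources (region and CIDR blocks are placeholders, and error handling is omitted):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# CIDR ranges here are placeholders.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# A route table with a default route to the internet gateway makes the subnet "public".
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)
```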
Kubernetes CKAD exam - Persistent Volume, Claims and Pods
1K views · 2 years ago
Install Jenkins on Amazon EC2 - AWS Devops
234 views · 2 years ago
CKA CKAD Practice Question - Kubernetes Secrets
1.5K views · 2 years ago
Kubernetes CKAD Exam - Setup Egress Network Policy
2.3K views · 2 years ago
Cleanup
129 views · 2 years ago
Demo Latency Based Failover
532 views · 2 years ago
Create and expose deployment using NodePort Service in Kubernetes - CKA/CKAD Practice Question
3.5K views · 3 years ago
Kubernetes CKA Practice Question - Worker node kubelet troubleshooting
5K views · 3 years ago
Session Vs JWT For Authorization
144 views · 3 years ago
Custom Domains And Health Checks for Resilient Multi-region Serverless Application
638 views · 3 years ago
Resilient Multi region Serverless Application - Create SSL Certificates
532 views · 3 years ago
Create Serverless Application and Deploy to Two AWS Regions
774 views · 3 years ago
Resilient Multi-region Serverless Application - Setup HostedZone + NameServers
722 views · 3 years ago
AWS Serverless Application - Multi-region with failover - List of Tasks
997 views · 3 years ago
Multi-region AWS Serverless Application - With Regional Fail-over and Routing Demo
3.8K views · 3 years ago

Comments

  • @paulohss2 · 2 months ago

    Great content! It would be nice to learn how you mounted the credentials in Docker (I tried to follow what you answered here in the comments to another person, but it seems like some steps are missing to do the mounting with the AWS CLI)

    • @MyCloudTutorials · a month ago

      On the machine where you are running this, the AWS CLI must be configured. You can test it by simply running aws s3 ls. If that works, there is a file at <HOME>/.aws/credentials which needs to be mounted into Docker by passing the -v flag. Thanks

  • @michaelhanna6783 · 2 months ago

    Really good content

    • @MyCloudTutorials · 2 months ago

      I am glad you like it, Thanks!

  • @alfonsolopez4348 · 2 months ago

    Thank you, very clear explanation!

    • @MyCloudTutorials · 2 months ago

      Glad it was helpful! Thanks

  • @maahinpanchal5828 · 2 months ago

    Sir, is the embedding model you are using paid, or is a free trial available?

    • @MyCloudTutorials · 2 months ago

      It's a paid model, but it charges based on how many tokens you use. Thanks

  • @jdgame3604 · 2 months ago

    How much does this particular project cost on AWS Bedrock?

    • @MyCloudTutorials · 2 months ago

      It depends on how many calls you make to Bedrock for embedding, querying, etc. S3 storage is cheap. Overall it cost me less than $0.50, but your costs may vary. Thanks

  • @jdgame3604 · 2 months ago

    Where can I get your medical PDF to practice along?

    • @MyCloudTutorials · 2 months ago

      You can find it online; search for an arthritis PDF. I downloaded it from www.versusarthritis.org/media/22726/what-is-arthritis-information-booklet.pdf

  • @mukundhachar303 · 3 months ago

    Thank you Girish. It's a very, very good tutorial!

    • @MyCloudTutorials · 3 months ago

      You are welcome. I am glad you liked it.

  • @anandsahu6767 · 3 months ago

    Is it possible to use a Google LLM model and embeddings for the project? And if I want to deploy this project to the cloud, can I use AWS Fargate?

    • @MyCloudTutorials · 3 months ago

      Yes, any embedding model should work. You have to use the same embedding model for creating and querying. Once you containerize the application, you can run it on AWS ECS (Fargate or EC2) or EKS. Thanks

    • @anandsahu6767 · 3 months ago

      ​@@MyCloudTutorials thanks 👍

  • @jayanthr7752 · 3 months ago

    Hi, thanks for sharing this tutorial, much appreciated. I am trying to upload a custom PDF file, but it fails at the Create Vector Store step in the Admin module. Error: TypeError: expected string or bytes-like object, got 'NoneType'

    • @MyCloudTutorials · 2 months ago

      This error normally occurs when the PDF reader is not able to parse the PDF file. Could you try a few different PDFs and see if it works? Thanks

  • @saviours9967 · 4 months ago

    Do you have a code download for this? Very good tutorial.

    • @MyCloudTutorials · 4 months ago

      Thank you. The source code is available on GitHub: github.com/mycloudtutorials/generative-ai-demos/tree/master/bedrock-chat-with-pdf

    • @saviours9967 · 4 months ago

      @@MyCloudTutorials Thank you so much. I was able to download it, but I am actually getting strange errors on Windows. Is there any way I can contact you? Not a developer per se...

  • @TechHeda7 · 4 months ago

    Hi, my request for the Anthropic Claude model was rejected. How can I use that model then?

    • @MyCloudTutorials · 4 months ago

      Why was it rejected? Could you re-apply and mention that you are evaluating the model? If not, then you may need to use the Llama model, but you will have to change the request/response handling accordingly in user.py

    • @TechHeda7 · 4 months ago

      @@MyCloudTutorials It seems like once we provide the details regarding why we require the model, we can't edit those details again. Initially I entered random information in the company details, purpose, etc., so I created another account and requested the model with the appropriate purpose; then it was accepted

  • @ayencoscolfield3312 · 4 months ago

    Thanks man, just discovered your channel and it's been so helpful. Thanks a bunch man

    • @MyCloudTutorials · 4 months ago

      Glad to hear it! Thanks so much!

  • @test-AWS · 4 months ago

    Hi Girish, thank you so much for the tutorial. I am a bit confused about the pricing for Titan; it says a really small cost per 1,000 tokens. Will end users query against the already precomputed embeddings which we store? Are the costs incurred only for the initial computation, the storage, and the query processing? Please continue the gen-AI series, I'm learning so much.

    • @MyCloudTutorials · 4 months ago

      Cost applies every time you use any model from Bedrock (or OpenAI, or any hosting platform). So you pay an initial cost for embedding your content and storing it in a vector store. Every time a user queries, you convert that query into a vector embedding (using the same model that you initially used), so a little cost is involved; then you run the search, find the similar documents, and send all of this together to an LLM. There is an inference cost for the input and output tokens from the LLM at that point. I hope this clarifies it. Thanks

  • @jaytan56 · 4 months ago

    Hi there! Thank you so much for this tutorial! It's been very helpful! However, I was wondering how I would do it differently if I wanted to merge the admin and user side together such that the User themselves can upload their own pdf and then query the chatbot on their own document. If so, how many clients would I need and can I still RAG? I suppose 1 S3 for storing the PDF, and 1 Bedrock for the vector embeddings. Do I need anymore clients for the LLM? I currently have an LLM I can chat with using ConversationBufferMemory and ConversationChain using predict. Thank you once again for your help!

    • @MyCloudTutorials · 4 months ago

      I am glad you liked the tutorial. About the case you mentioned: you can merge both the admin and user applications, taking a slightly different approach. I am assuming you have the user logged in (so you can get their id). You can ask the user to upload the PDF and show them a "processing" spinner while you generate the embeddings and create the vector index. If you need to store their vector index, you can make a subfolder in S3 with their user id and assign the processing a job id, for example s3://<userid>/<jobid - unique for the pdf>/<index files>. You may want to consider saving this meta information to a database like DynamoDB or MySQL so you know where to load the index from for this particular user. You could also ask them to upload to S3 and trigger an event that creates the index. There are several possible ways to solve this; it depends on the use case and business requirements. Please let me know your thoughts! Thanks

  • @Fonsecaj89 · 4 months ago

    Let's say I have more than one PDF, but they are all related to the same topic. Do you recommend using the same FAISS DB file for all of the vectors, or one FAISS file per PDF?

    • @MyCloudTutorials · 4 months ago

      I have used Pinecone and OpenSearch (Elasticsearch on AWS) for indexing multiple documents. Basically, for a similar topic you want to create one index and append the embeddings and content to that index. I haven't used FAISS for such a case, but if it supports it, I would use one DB file; otherwise I am not sure how you would load everything into one searchable index. I might try something and share it if I find a solution. Thanks

  • @davidlopezfelix3668 · 4 months ago

    Awesome video. I implemented it and it worked! I have a question: how do you decide which model to use? There are many models available on Bedrock and you are using the Anthropic one

    • @MyCloudTutorials · 4 months ago

      It depends on the use case, data, and computing resources at your disposal. For example, some embedding models output vectors of length 4096, while smaller ones output 1536 or even 768. I would assume 4096 dimensions capture more hyperplanes than 768, but they are more compute-intensive to index and retrieve. There can be other reasons as well, like loss of accuracy, etc.

  • @vamsiB13 · 4 months ago

    Hi, I am getting the error - ValueError: not enough values to unpack (expected 2, got 1) on line st.write(get_response(llm, faiss_index, question)). Could you please help?

    • @MyCloudTutorials · 4 months ago

      You should put some debug output (print) in the get_response method and see where the issue is. Maybe the LLM didn't respond with data. Thanks

    • @vamsiB13 · 4 months ago

      @@MyCloudTutorials Thanks for the reply. I was using the same model for the embeddings and the LLM <facepalm>. I have updated that and it's working now.

  • @BoraKaan52 · 4 months ago

    Thank you, it was a really clear and simple explanation.

    • @MyCloudTutorials · 4 months ago

      Glad it was helpful! Thanks.

  • @crossmindedninja5522 · 5 months ago

    I'm getting an error when connecting to the host in VS Code. Where you agreed to the fingerprint, I didn't get that option and it directly showed an error

    • @MyCloudTutorials · 5 months ago

      What happens when you try to run the connection command in a terminal? Try and see if you can make it work there. Thanks

  • @everythingwhatyouwanttokno2488 · 5 months ago

    In the real world Lambda doesn't work; it's ECS, EKS, or SageMaker endpoints

    • @MyCloudTutorials · 5 months ago

      Hello @everythingwhatyouwanttokno2488, thanks for the note. I tend to disagree and think it's case-by-case. Yes, I understand the SageMaker endpoint is very popular, but if ECS and EKS work, then there is no reason Lambda cannot work. In the case of ECS and EKS, you need to have the task running all the time (unless you are using Fargate), whereas if you are using the same Docker container image for Lambda or ECS, it shouldn't matter. You can always keep the Lambda function warm using a ping call from an EventBridge scheduler. Serverless is a paradigm shift. Please do share if you have any case where ECS/EKS work but Lambda with a container doesn't. The only limitations I can see are the 15-minute timeout for Lambda and the 29 seconds for the API Gateway to Lambda integration. But if any prediction use case makes a user wait more than 29 seconds in the browser (or 15 minutes for a Lambda function URL), then it should be an async integration anyway; it would take a similarly long time to respond from SageMaker/ECS/EKS endpoints as well. Thanks

  • @sebastianfuertesdev9768 · 5 months ago

    Thank you so much! it was an amazing tutorial

    • @MyCloudTutorials · 5 months ago

      I am glad you liked it. Thanks

    • @TechHeda7 · 4 months ago

      @@MyCloudTutorials Hi, my request for the Anthropic Claude model was rejected. How can I use that model then? I provided random details in the use case (company, website, etc...)

  • @SameerAli-nm8xn · 5 months ago

    Your content is okay, but try to increase the volume; that would be great

    • @MyCloudTutorials · 5 months ago

      There was an issue with this video's volume; I will be careful with the others. Thanks for the feedback

  • @rajv739 · 5 months ago

    Very informative. Volume is slightly low for some reason.

    • @MyCloudTutorials · 5 months ago

      Sorry about the volume; I will try to check it again before uploading other videos

  • @norendermoody6509 · 5 months ago

    It would be helpful if you provided the AWS setup of Bedrock in a video

  • @bensdevjourney7081 · 6 months ago

    Would be nice to know that if we want to do something like this, we first need to request access to the Titan Embeddings G1 - Text model. This was a great tutorial, but it glossed over setting up the access key/secret on AWS as well as requesting the model on AWS. Subscribing to hopefully see more guidance on how to set things up on the AWS side. Well done

  • @suhrobyusupov3520 · 6 months ago

    Hello sir, great tutorial. Can we do the exact same thing with other types of data, not only PDFs; SQL tables, for example?

    • @MyCloudTutorials · 6 months ago

      Yes, I don't see why not. You have 2 options:
      1) If you are already using PostgreSQL, you can use the pgvector extension to make it a vector data store.
      2) If you don't want to use that, or are not using PostgreSQL, and still want to vectorize your data and store it in a vector index, you can certainly do that. The steps will be something like:
      1. Let's assume you are using Elasticsearch as the vector store.
      2. Read from the table(s) with a SQL query and get the data.
      3. Call an embedding model with the column(s) of data you need to generate embeddings for.
      4. Create a JSON object with the plain text and the embedding data.
      5. Make API calls to Elasticsearch to index the data.
      On the querying side:
      1. Convert the query into an embedding using the same model that you used while indexing.
      2. Run a similarity search on Elasticsearch.
      I am going to make a tutorial on something similar soon. Stay tuned; I will add a comment here when it's ready. Thanks
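A rough sketch of the indexing steps described in the reply above, assuming a hypothetical SQLite table, the Amazon Titan embedding model on Bedrock, and the official Elasticsearch Python client (table, index and endpoint names are placeholders):

```python
import json
import sqlite3
import boto3
from elasticsearch import Elasticsearch

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
es = Elasticsearch("http://localhost:9200")   # placeholder endpoint

def embed(text: str) -> list[float]:
    # Request/response fields follow the Amazon Titan embedding format.
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

# 1) Read rows from the SQL table (hypothetical schema: products(id, description))
conn = sqlite3.connect("catalog.db")
rows = conn.execute("SELECT id, description FROM products").fetchall()

# 2) Embed each row and index plain text + vector into Elasticsearch
for row_id, description in rows:
    es.index(
        index="products-vectors",
        id=str(row_id),
        document={"text": description, "embedding": embed(description)},
    )
```

For the similarity search itself, the index would also need a dense_vector mapping and a kNN query, which are omitted from this sketch.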

  • @AashiAgarwal-eb2dx · 6 months ago

    Hi, I need to provide memory for remembering context in the chatbot. Could you please suggest a way I can implement it in the above code?

    • @MyCloudTutorials · 6 months ago

      LangChain has contextual memory. Please check this: js.langchain.com/v0.1/docs/modules/memory/ I will make a video soon; once it's ready, I will put a link in this comment. Thanks

  • @tanviralamsyed · 6 months ago

    Hey there, is it possible for the user to upload their own PDF and for the AI model to become a chatbot?

    • @MyCloudTutorials · 6 months ago

      Well, if you do not want to use RAG (like I showed in this video), then you will have to take a foundation model and train it with your data. So yes, it's possible, but it would require more resources, time, etc. (depending on the base model you choose and your data). For continuously changing data, RAG works out better, as you don't have to constantly fine-tune the model as and when data becomes available. Another thing to think about is cost: once you train and create a custom model, you have to host it somewhere (whether in the cloud or in a data center), which will incur some cost. I hope this helps. Thanks

  • @AashiAgarwal-eb2dx · 6 months ago

    I am facing the error: TypeError: expected string or bytes-like object, got 'NoneType'
    Traceback:
    File "C:\Users\RM7028239\AppData\Local\anaconda3\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
    File "C:\Users\RM7028239\Documents\bot_bedrock\admin\admin.py", line 91, in <module>
    main()
    File "C:\Users\RM7028239\Documents\bot_bedrock\admin\admin.py", line 80, in main
    result = create_vector_store(request_id, splitted_docs)
    File "C:\Users\RM7028239\Documents\bot_bedrock\admin\admin.py", line 48, in create_vector_store
    s3_client.upload_file(Filename=folder_path + "/" + file_name + ".faiss", Bucket=BUCKET_NAME, Key="my_faiss.faiss")
    File "C:\Users\RM7028239\AppData\Local\anaconda3\Lib\site-packages\boto3\s3\inject.py", line 145, in upload_file
    return transfer.upload_file(
    File "C:\Users\RM7028239\AppData\Local\anaconda3\Lib\site-packages\boto3\s3\transfer.py", line 367, in upload_file
    future = self._manager.upload(
    File "C:\Users\RM7028239\AppData\Local\anaconda3\Lib\site-packages\s3transfer\manager.py", line 323, in upload
    self._validate_if_bucket_supported(bucket)
    File "C:\Users\RM7028239\AppData\Local\anaconda3\Lib\site-packages\s3transfer\manager.py", line 492, in _validate_if_bucket_supported
    match = pattern.match(bucket)

    • @MyCloudTutorials · 6 months ago

      Could you check if the folder and file exist and have data locally? Before the s3_client.upload_file(...) line, you can print the path, read the file locally, and check its size. Most likely there is a typo somewhere. Thanks

  • @AashiAgarwal-eb2dx · 6 months ago

    Also, I am facing an error when I try to connect to my AWS account to load the embeddings: ValueError: Error raised by inference endpoint: An error occurred (ResourceNotFoundException) when calling the InvokeModel operation: Could not resolve the foundation model from the provided model identifier.
    Traceback:
    File "C:\Users\RM7028239\AppData\Local\anaconda3\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
    File "C:\Users\RM7028239\Documents\bot_bedrock\admin\admin.py", line 95, in <module>
    main()
    File "C:\Users\RM7028239\Documents\bot_bedrock\admin\admin.py", line 85, in main
    result = create_vector_store(request_id, splitted_docs)
    File "C:\Users\RM7028239\Documents\bot_bedrock\admin\admin.py", line 49, in create_vector_store
    vectorstore_faiss=FAISS.from_documents(documents, bedrock_embeddings)
    File "C:\Users\RM7028239\AppData\Local\anaconda3\Lib\site-packages\langchain_core\vectorstores.py", line 550, in from_documents
    return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
    File "C:\Users\RM7028239\AppData\Local\anaconda3\Lib\site-packages\langchain_community\vectorstores\faiss.py", line 930, in from_texts
    embeddings = embedding.embed_documents(texts)
    File "C:\Users\RM7028239\AppData\Local\anaconda3\Lib\site-packages\langchain_community\embeddings\bedrock.py", line 169, in embed_documents
    response = self._embedding_func(text)
    File "C:\Users\RM7028239\AppData\Local\anaconda3\Lib\site-packages\langchain_community\embeddings\bedrock.py", line 150, in _embedding_func
    raise ValueError(f"Error raised by inference endpoint: {e}")

    • @MyCloudTutorials · 6 months ago

      Please check that you have been granted access to the foundation model you are using. You have to request model access (usually granted instantly) in the AWS console. Also check that you are using the right AWS region, as model access approval is region-specific. Also check the model_id and confirm there is no typo, extra space, or special character. Thanks

  • @AashiAgarwal-eb2dx · 6 months ago

    I am using the command docker build -f pdf-reader-admin . but I am facing this error:
    [+] Building 0.0s (0/0) docker:default
    [+] Building 0.0s (1/1) FINISHED docker:default
    => [internal] load build definition from pdf-reader-admin 0.0s
    => => transferring dockerfile: 2B 0.0s
    ERROR: failed to solve: failed to read dockerfile: open pdf-reader-admin: no such file or directory
    View build details: docker-desktop://dashboard/build/default/default/sp35c5kmas0gsmbwzummn34dv

    • @MyCloudTutorials · 6 months ago

      Could you confirm that you have the Dockerfile (I think the name is case-sensitive) in the directory? You should have it if you cloned from this repo: github.com/mycloudtutorials/generative-ai-demos/tree/master/bedrock-chat-with-pdf/Admin Please check. Thanks

    • @AashiAgarwal-eb2dx · 6 months ago

      Thanks, the issue is fixed.

  • @blackssails · 6 months ago

    Hello! Thank you very much for taking the time to teach how to use this technology... Could you teach how to read CSV files from an AWS S3 bucket with AWS Bedrock? For those of us who work with BBDD it would be very useful, thank you very much

    • @MyCloudTutorials · 6 months ago

      Hello, I am glad you liked the video. What do you mean by reading the CSV file from S3 using Bedrock? Are you referring to the Bedrock Knowledge Base, where files located in S3 are read and their vector embeddings are stored in an Amazon OpenSearch index? Could you elaborate? Also, what is BBDD? Thanks

    • @nicknick1766 · 6 months ago

      @@MyCloudTutorials Hey, I would be interested in the case you mentioned, working with the knowledge base. It would also be great to include a Lambda that auto-syncs the base, with an S3 upload trigger connected. Also, conversation memory would be an interesting topic; I'm not sure if it is automatically included when working with Bedrock, but there are LangChain packages that make it easy. If you get time, I would also love to see the response from Bedrock being saved to S3 as well. Thanks in advance!

    • @blackssails · 6 months ago

      @@MyCloudTutorials Hello! Thank you very much for taking the time to respond. Now that I've looked around your entire channel, I mean doing exactly the same thing you did in this video, th-cam.com/video/KFibP7KnDVM/w-d-xo.html (name: Chat with your PDF - Gen AI App - With Amazon Bedrock, RAG, S3, Langchain and Streamlit [Hands-On]), but for files in CSV format. Have a good day, greetings

    • @blackssails · 6 months ago

      The idea is that you can interpret the data coming out of the CSV file in S3, but use Bedrock to protect against data leakage

    • @MyCloudTutorials · 6 months ago

      The Knowledge Base is totally doable. But from what I have seen, it takes an S3 bucket's content, creates vector embeddings, and stores them in Elasticsearch (Amazon OpenSearch). I didn't see an example of what happens when you add more files to S3; in that case you would definitely use a Lambda trigger, which can perform similar steps. That would be a good tutorial to make. I will add it to the TODO list and add a comment here when it's ready. Thanks

  • @KeerthanaPriyaDevaraj · 6 months ago

    I'm getting an error when I try to run the code: ValueError: Error raised by inference endpoint: Unable to locate credentials
    Traceback:
    File "C:\Users\DELL\Desktop\bedrock-chat-with-pdf\venv_bedrock\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
    File "C:\Users\DELL\Desktop\bedrock-chat-with-pdf\Admin\admin.py", line 88, in <module>
    main()
    File "C:\Users\DELL\Desktop\bedrock-chat-with-pdf\Admin\admin.py", line 78, in main
    result = create_vector_store(request_id, splitted_docs)
    Can you help me with this?

    • @MyCloudTutorials · 6 months ago

      Hi @KeerthanaPriyaDevaraj, do you have the AWS CLI installed and set up with an IAM credential that has access to Bedrock? If not, please follow the AWS documentation for installing the AWS CLI here: docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html If you already have an IAM user created, create an access_key and secret_access_key in the IAM console and download the credentials. Use the `aws configure` command to set up the CLI. Also, make sure that you have model access enabled in the Bedrock console. I hope this helps. Thanks

    • @xoxies · 6 months ago

      Is this possible with AWS Lambda?

    • @MyCloudTutorials · 6 months ago

      Very much possible in Lambda. If you are using LangChain with Lambda, you need to include it as a dependency. I shall make a video on how to do it in Lambda; no ETA as of now, but I will surely tag you when I post a video on that. Thanks

    • @xoxies · 6 months ago

      @@MyCloudTutorials I'm trying, but I have an error: "Unable to import module" for "langchain community". I generated a layer for the Lambda function, but it doesn't work. I'm waiting for your video, thanks

  • @kunaspandana4906 · a year ago

    Thank you

  • @deepakchandrashekar3748 · a year ago

    Thank you Girish. I was very excited about your channel because of your crystal-clear explanations. I wasted more than 4 to 6 hours watching another TH-cam channel trying to understand NAT Gateway, Bastion Host configuration, and VPC Peering, but I was not clear on their explanation. Your explanation helped me understand all of the above concepts within 2 hours, and I was able to get hands-on with NAT Gateway, Bastion Host configuration, and VPC Peering. Thank YOU Sir, Salute :) 🎉 👍

    • @MyCloudTutorials · a year ago

      Thanks! I am glad that you liked it.

    • @deepakchandrashekar3748 · a year ago

      @@MyCloudTutorials I request you to make more videos on AWS and DevOps so they will be useful for learners like us. I am waiting for more videos

  • @javascript_developer · a year ago

    Nice explanation, thanks.

  • @_m.guler_ · 2 years ago

    I'm learning a lot from you, thank you.

  • @_m.guler_ · 2 years ago

    thank you

  • @maximiliandebian3172 · 2 years ago

    Thanks for 13%. I think I will pass the CKA exam on my second attempt now 😁

  • @Cybernetblog · 2 years ago

    Great 🎉

  • @Cybernetblog · 2 years ago

    I wish you had included VPC peering... but still very much perfect

    • @MyCloudTutorials · 2 years ago

      Hi @honmane humble I have a separate video on VPC peering. Please check it out at th-cam.com/video/q4EZbIBCOJE/w-d-xo.html Thanks

  • @Cybernetblog · 2 years ago

    Thanks for this beautiful tutorial.

  • @dhiyantoyify6989 · 2 years ago

    Hi ji, thank you for the videos, please continue your work.

  • @dinakarjanga6710 · 2 years ago

    Hi... I really like your way of explaining things, thank you. Could you please post a video on Node Affinity/Anti-Affinity and Pod Affinity/Anti-Affinity? Those topics are really confusing. Thanks in advance.

  • @bharatnbaxani7587 · 2 years ago

    Great video Girish. Please keep making videos on Kubernetes. Thanks a lot.

    • @MyCloudTutorials · 2 years ago

      Thanks Bharat. I will make more videos for sure.

  • @palanisamy-dl9qe · 2 years ago

    Thanks for taking the time to make this video; it is very useful

    • @MyCloudTutorials · 2 years ago

      I am super glad that you found the video helpful. Please share it with your friends and post links on your social media to help me spread the word. Thank you

  • @palanisamy-dl9qe · 2 years ago

    Thanks for the video; please make more videos on AWS

  • @palanisamy-dl9qe · 2 years ago

    Thanks for the video; please add more videos on AWS services. If you have any AWS playlists, please let me know.

  • @prakashkrishnan7132 · 2 years ago

    Excellent Demo. Thanks a ton

  • @YouTubers-rj9xv · 2 years ago

    Bro, excellent teaching, thank you. Please keep sharing your knowledge.

    • @MyCloudTutorials · 2 years ago

      Thank you, I am trying to make these videos as and when I get time