Did you enjoy this video? Try my premium courses! 😃🙌😊
● Hands-On Computer Vision in the Cloud: Building an AWS-based Real Time Number Plate Recognition System bit.ly/3RXrE1Y
● End-To-End Computer Vision: Build and Deploy a Video Summarization API bit.ly/3tyQX0M
● Computer Vision on Edge: Real Time Number Plate Recognition on an Edge Device bit.ly/4dYodA7
● Machine Learning Entrepreneur: How to start your entrepreneurial journey as a freelancer and content creator bit.ly/4bFLeaC
Learn to create AI-based prototypes in the Computer Vision School! www.computervision.school 😃🚀🎓
Thank you for going through the extra steps of making the lambda function, the api gateway, and showing how to pass and access data via the event!
One thing I'm still trying to work out is how to make sure any data you send to the endpoint for predictions is encoded and scaled the same way the training data was. As a user, I want to send raw data from a query, then encode any categorical features and have everything scaled and normalized within the Lambda. I know we get those encoders as outputs during training, but I'm not sure it's so easy to load them into the Lambda from something like a .pkl file.
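One approach that could work here (a sketch, not something shown in the video): pickle the fitted scaler/encoder during training, upload it to S3, and load it in the Lambda at cold start. The bucket, key, and endpoint names below are hypothetical, and scikit-learn would need to be packaged with the Lambda (e.g. as a layer) for the unpickling to succeed.

```python
# Minimal sketch: reuse training-time preprocessing inside the Lambda.
# Assumes the scaler was pickled during training and uploaded to S3, and that the
# Lambda's execution role can read that bucket and invoke the SageMaker endpoint.
import json
import pickle

import boto3

s3 = boto3.client("s3")
runtime = boto3.client("sagemaker-runtime")

# Download once per container (outside the handler) so warm invocations reuse it.
s3.download_file("my-model-artifacts-bucket", "preprocessing/scaler.pkl", "/tmp/scaler.pkl")
with open("/tmp/scaler.pkl", "rb") as f:
    scaler = pickle.load(f)


def lambda_handler(event, context):
    # Raw numeric features sent by the client, e.g. {"features": [5.1, 3.5, 1.4, 0.2]}
    features = event["features"]

    # Apply the same scaling that was fitted on the training data.
    scaled = scaler.transform([features])[0]

    payload = ",".join(str(x) for x in scaled)  # the built-in XGBoost endpoint accepts CSV
    response = runtime.invoke_endpoint(
        EndpointName="my-xgboost-endpoint",  # hypothetical endpoint name
        ContentType="text/csv",
        Body=payload,
    )
    prediction = response["Body"].read().decode("utf-8")
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
```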
Your very, very intentional inflection is so helpful for me to remember ❤ Thank you, Felipe!
Would love to see deployment of a custom CNN-based model on AWS, because there are very few tutorials regarding it.
Ok, noted. I will try to make some content about it. 🙌
Thank you, Felipe, for taking the effort to make this video and share it. Much appreciated! This is exactly what I needed for a predictive modeling project that I'm about to work on. I was new to AWS, but not anymore after watching your tutorial on YouTube. You took a simple example and walked through the steps in a way that was not only easy to follow but also gave me lots of confidence. Once again, a ton of thanks!
Had to come back to your video to subscribe 😊
Awesome video!!
How did you copy the data URL into SageMaker? I didn't get that bit.
The most awaited video for me, and the best video for understanding computer vision model deployment. Thanks!
😃 Glad you enjoyed it! 🙂🙌
Fantastic video, thanks a ton, bravo... It's just too much manual stuff, though. Maybe Amazon, in the upcoming months or years, will make this simpler, just a single click away. The real pain and the boring part is the IAM role creation.
Great video as always... very informative! I have a related question:
Have you been able to use MediaPipe inside SageMaker? I'm using some of your ASL prediction code and attempting to run it in AWS, but MediaPipe is presenting issues.
Thanks a lot, man, for this wonderful stuff. This is my first time working on AWS SageMaker and this whole cycle, and I already feel confident that I have learned a lot.
Next, I'm looking to learn from your YOLOv8 video for object detection and then deploy that model using this same cycle. If you have a tutorial on this or are planning to make one, that would be great:
a YOLO model on AWS SageMaker.
But thank you so much, man!
Great!! How did you solve the CORS issues?
Hats off to you boss
Fantastic!
You're a gem! ♥️
Thank you! 😃
thank you bro
Hello! How are you? Which is better for accurately identifying color, OpenCV or YOLOv8?
Thank you for this channel, you are helping me a lot with my project.
Hi, for identifying color I would use OpenCV. 🙌
Ok, thanks!! @ComputerVisionEngineer
thanks for this one!
You are welcome! 🙌
get_image_url doesn't appear to work correctly..
Can this model be used on an OAK-D to detect images?
Make a video on deepfake detection using the FaceForensics++ dataset (image and video) if possible, and show how to deploy that model and integrate it into a website or mobile app.
I will try to. 🙌
What's an estimate of the total AWS cost for this tutorial?
🎉🎉🎉🎉❤❤❤❤
😃🙌🎉🎉
Can I do this on the free tier?
🎯 Key Takeaways for quick navigation:
00:00 🚀 *Introduction and AWS SageMaker Overview*
- Introduction to AWS SageMaker tutorial.
- Important note about potential costs associated with AWS services.
- Overview of the steps involved in the tutorial, including creating a notebook instance, data preparation, and deploying a model.
02:09 📊 *Data Preparation for Machine Learning*
- Explanation of the importance of data preparation in machine learning.
- Introduction to the Iris dataset, its features, and target classes.
- Steps involved in data preparation, including downloading and converting the dataset to numerical values.
05:18 📦 *Downloading and Unzipping the Iris Dataset*
- Demonstrating how to download the Iris dataset.
- Showing the contents of the downloaded dataset.
- Unzipping the downloaded dataset and displaying its files.
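Not the exact download-and-unzip flow shown in the video; a minimal way to obtain the same Iris data with numeric labels is scikit-learn's bundled copy:

```python
# Minimal sketch: load the Iris data with numeric class labels (0, 1, 2).
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True)
df = iris.frame  # four feature columns plus a numeric "target" column
print(df.head())
```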
08:52 🔄 *Shuffling and Reordering the Dataset*
- Explaining the importance of shuffling the dataset.
- Demonstrating how to shuffle the dataset using pandas.
- Showing the shuffled dataset.
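A minimal sketch of the shuffle step, continuing from the DataFrame above:

```python
# Shuffle all rows and reset the index so the ordering is random before splitting.
df = df.sample(frac=1, random_state=42).reset_index(drop=True)
```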
12:12 🔄 *Changing the Label Column Index*
- Explaining the necessity of changing the label column index.
- Showing how to move the label column to the first position in the dataset.
- Displaying the dataset with the modified column order.
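Continuing the sketch: SageMaker's built-in XGBoost expects the label in the first column, so move "target" to the front:

```python
# Reorder the columns so the label column comes first.
df = df[["target"] + [c for c in df.columns if c != "target"]]
```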
15:23 🧩 *Splitting the Data into Training and Validation Sets*
- Explaining the process of splitting the dataset into training and validation sets.
- Demonstrating how to create the training and validation sets based on a specified split percentage.
- Emphasizing the importance of this step for model training.
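Continuing the sketch: a simple 80/20 split, saved as header-less CSV files that the built-in algorithm can read. The exact split percentage in the video may differ.

```python
# Split into training and validation sets (80/20) and write CSV without header/index.
split = int(0.8 * len(df))
train_df, val_df = df.iloc[:split], df.iloc[split:]

train_df.to_csv("train.csv", header=False, index=False)
val_df.to_csv("validation.csv", header=False, index=False)
```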
17:08 ☁️ *Moving Data into an S3 Bucket*
- Creating an S3 bucket and specifying naming requirements.
- Uploading the training and validation data to the S3 bucket.
- Verifying the data's presence in the S3 bucket.
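A minimal sketch of the upload with boto3; the bucket name is a placeholder (bucket names must be globally unique), and the key prefixes are illustrative:

```python
# Upload the prepared CSV files to S3 and keep the resulting S3 URIs for training.
import boto3

bucket = "my-sagemaker-iris-bucket"  # hypothetical bucket name
s3 = boto3.client("s3")
s3.upload_file("train.csv", bucket, "data/train/train.csv")
s3.upload_file("validation.csv", bucket, "data/validation/validation.csv")

train_s3_uri = f"s3://{bucket}/data/train/"
validation_s3_uri = f"s3://{bucket}/data/validation/"
```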
23:25 🤖 *Creating the Machine Learning Model Object*
- Introduction to creating a machine learning model object.
- Specifying the XGBoost algorithm for the model.
- Initializing the SageMaker Estimator for model training.
23:54 🛠️ *Creating the SageMaker Model Container*
- To build a machine learning model with SageMaker, you need to reference a container image that provides the algorithm. You can choose from various built-in algorithms; in this section, the container for the XGBoost algorithm is retrieved.
25:09 🛡️ *Configuring Execution Role and Instance*
- Specifying the execution role is crucial for SageMaker to perform training. You also configure the number and type of training instances, as well as the storage size and the S3 location for saving the trained model.
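A minimal sketch of the container lookup and Estimator setup with the SageMaker Python SDK (v2-style API), continuing from the S3 sketch above; the role ARN, instance type, and XGBoost version are illustrative assumptions:

```python
# Retrieve the built-in XGBoost container for this region and configure the Estimator.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
container = image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")

estimator = Estimator(
    image_uri=container,
    role="arn:aws:iam::123456789012:role/MySageMakerExecutionRole",  # hypothetical role ARN
    instance_count=1,
    instance_type="ml.m5.large",
    volume_size=5,  # GB of storage attached to the training instance
    output_path=f"s3://{bucket}/model-output/",  # where model.tar.gz is saved
    sagemaker_session=session,
)
```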
29:05 ⚙️ *Setting Hyperparameters for Training*
- Hyperparameters are essential for model training. In this section, you set hyperparameters for the XGBoost model, including the number of classes, the number of rounds (boosting iterations, analogous to epochs), and other algorithm-specific parameters.
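Continuing the sketch: illustrative hyperparameters for the 3-class Iris problem, not necessarily the exact values used in the video:

```python
# Configure multiclass XGBoost: predict the class index directly over 3 classes.
estimator.set_hyperparameters(
    objective="multi:softmax",
    num_class=3,
    num_round=20,  # number of boosting rounds
    max_depth=5,
    eta=0.2,
)
```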
35:35 🚀 *Deploying the Trained Model*
- After training the model, you deploy it using SageMaker. This step creates an endpoint for making predictions with the trained model.
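Continuing the sketch: launch the training job on the S3 data and deploy the result behind a real-time endpoint (the endpoint name is a placeholder):

```python
# Train on the uploaded CSV channels, then deploy a real-time inference endpoint.
from sagemaker.inputs import TrainingInput

estimator.fit({
    "train": TrainingInput(train_s3_uri, content_type="text/csv"),
    "validation": TrainingInput(validation_s3_uri, content_type="text/csv"),
})

predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="my-xgboost-endpoint",  # hypothetical endpoint name
)
```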
39:09 📡 *Calling the Deployed Model via Lambda and API Gateway*
- To interact with the deployed model, you set up a Lambda function and an API Gateway. This allows you to send input data to the model and receive predictions in return.
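A minimal sketch of a Lambda handler behind an API Gateway proxy integration; the endpoint name and payload shape are assumptions, not the exact code from the video:

```python
# Parse the JSON body from API Gateway, forward the features to the SageMaker
# endpoint as CSV, and return the prediction as JSON.
import json

import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = "my-xgboost-endpoint"  # hypothetical


def lambda_handler(event, context):
    body = json.loads(event["body"])  # e.g. {"features": [5.1, 3.5, 1.4, 0.2]}
    payload = ",".join(str(x) for x in body["features"])

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="text/csv",
        Body=payload,
    )
    prediction = response["Body"].read().decode("utf-8").strip()

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"prediction": prediction}),
    }
```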
46:13 🧪 *Testing the API Endpoint*
- The final section demonstrates how to test the API endpoint using tools like Postman. Input data is sent to the API, and predictions are received, confirming that the entire pipeline is functional.
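The video uses Postman; the same test can be sent from Python with the requests library. The invoke URL below is a placeholder for your own API Gateway URL:

```python
# Send a sample Iris measurement to the API Gateway endpoint and print the prediction.
import requests

url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/predict"  # placeholder URL
resp = requests.post(url, json={"features": [5.1, 3.5, 1.4, 0.2]})
print(resp.status_code, resp.json())
```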
Made with HARPA AI