Tai Do
Joined Feb 7, 2013
Lesson 3 - Docker and FastAPI | from scratch | live demos | MLOps
0:00 - Introduction
0:50 - FastAPI basics
4:35 - Add emotion prediction to FastAPI
9:20 - Build & test Docker image
18:40 - Update app to take URL input
26:00 - Share Docker image on Docker Hub
Before deploying our model to production, it's essential to build and test it locally. In this episode, we'll cover the following steps (a minimal FastAPI sketch follows the list):
1. Creating a FastAPI app
2. Building a Docker image
3. Running the Docker container locally
4. Sharing your Docker image
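As a rough illustration of step 1, here is a minimal sketch of such an app. Everything below is hypothetical: the route names and the `predict_emotion` stub are placeholders standing in for the repo's actual ONNX inference code.

```python
# app.py -- minimal FastAPI sketch (illustrative; `predict_emotion` and the
# route names are placeholders, not the repo's actual code)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ImageRequest(BaseModel):
    url: str  # URL of the image to classify

def predict_emotion(url: str) -> str:
    # stub: the real app downloads the image and runs the ONNX model (Lesson 2)
    return "happy"

@app.get("/")
def root():
    return {"message": "Emotion recognition API is running"}

@app.post("/predict")
def predict(req: ImageRequest):
    return {"emotion": predict_emotion(req.url)}
```

You can run a sketch like this locally with `uvicorn app:app --reload`; the Docker image built in the video packages an app of this shape.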
If you find this video helpful, don’t forget to hit the 👍 button! It really motivates me to keep creating awesome content. Got questions or thoughts? Drop them in the comments; I’d love to hear from you!
Blog post: medium.com/@doductai8590/lesson-3-docker-and-fastapi-ea2707da14ee
Source code: github.com/dtdo90/emotion_recognition_mlops/tree/main
---------------------------------------------------------------------------------------------------------------------------------------------
📞 Connect with Me and Hanh
On LinkedIn:
👉 LinkedIn: www.linkedin.com/in/tai-do-9463002b7/
👉 LinkedIn: www.linkedin.com/in/nguyenqh10/
On GitHub:
🤖 GitHub: github.com/dtdo90
🤖 GitHub: github.com/nguyenqh
Views: 90
Videos
Lesson 2 - Model Inference and ONNX packaging | from scratch | live demo | mlops
Views: 76 · 21 days ago
0:00 - Evaluate the trained model
5:00 - Inference on images, videos and live webcam feeds
18:42 - ONNX packaging
24:10 - Inference on ONNX model
In this video, we'll perform inference on the trained model on images, videos and live webcam feeds. Additionally, we convert the model to ONNX format, preparing it for integration into a Docker image, which will be covered in the next episode. If you find th...
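For readers following along, here is a hedged sketch of the ONNX packaging and inference steps. The tiny stand-in model below is invented for illustration; the video exports the trained VGG16 instead.

```python
# Sketch of ONNX export + inference (toy model; the real one is the VGG16
# trained in Lesson 1 on FER2013, whose images are 48x48 grayscale).
import torch
import onnxruntime as ort

model = torch.nn.Sequential(          # stand-in for the trained emotion model
    torch.nn.Conv2d(1, 8, 3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 7),            # 7 emotion classes in FER2013
)
model.eval()

dummy = torch.randn(1, 1, 48, 48)     # example input fixing the shapes
torch.onnx.export(
    model, dummy, "emotion.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
)

# Inference on the exported model with ONNX Runtime
sess = ort.InferenceSession("emotion.onnx")
logits = sess.run(None, {"input": dummy.numpy()})[0]
print(logits.shape)  # (1, 7)
```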
Lesson 1 - Model training and monitoring | from scratch | Pytorch Lightning and WandB | MLOps
Views: 230 · 28 days ago
0:00 - Build an end-to-end machine learning pipeline
2:00 - Create a custom data module
15:20 - Create a VGG16 classification model from scratch
33:55 - Manage model parameters in YAML files with Hydra
36:55 - Train and monitor
49:00 - Evaluate the trained model
In this video, we go through the process of data processing, model creation, and training using PyTorch Lightning! 🚀 PyTorch Lightning is a powerful...
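A minimal PyTorch Lightning skeleton showing the moving parts named in the chapters; the toy linear model and random tensors below are stand-ins for the custom VGG16 and FER2013 DataModule, and `logger=False` can be swapped for a `WandbLogger` to get the monitoring shown in the video.

```python
# Minimal Lightning training sketch (illustrative shapes only).
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class EmotionClassifier(pl.LightningModule):
    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()  # records lr in checkpoints/loggers
        self.net = torch.nn.Sequential(
            torch.nn.Flatten(), torch.nn.Linear(48 * 48, 7))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.net(x), y)
        self.log("train_loss", loss)  # picked up by whatever logger is active
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)

# Tiny random stand-in for the FER2013 DataModule
ds = TensorDataset(torch.randn(64, 1, 48, 48), torch.randint(0, 7, (64,)))
trainer = pl.Trainer(max_epochs=1, logger=False)  # swap in WandbLogger() to monitor
trainer.fit(EmotionClassifier(), DataLoader(ds, batch_size=16))
```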
05 - Graph Attention Network (GAT) explained | step-by-step
Views: 221 · 3 months ago
0:00 - Update equations for GCN, GraphSage and GAT
5:34 - GAT from scratch
19:00 - Train and test
24:45 - GAT from DGL
In this video, we go through the steps in creating a simple graph attention network with Graph Attention Convolution (GATConv) layers. We will create a GNN model in which GATConv is implemented from scratch, train it, and compare its performance with the exact same model which uses the ...
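For reference, a compact sketch of the built-in variant using DGL's `GATConv` (the toy graph and layer sizes below are made up; the from-scratch layer in the video would replace `GATConv`).

```python
# Two-layer GAT sketch with DGL's built-in GATConv.
import dgl
import torch
import torch.nn.functional as F
from dgl.nn import GATConv

class GAT(torch.nn.Module):
    def __init__(self, in_feats, hidden, n_classes, heads=4):
        super().__init__()
        self.conv1 = GATConv(in_feats, hidden, num_heads=heads)
        self.conv2 = GATConv(hidden * heads, n_classes, num_heads=1)

    def forward(self, g, x):
        h = self.conv1(g, x).flatten(1)  # concatenate the attention heads
        h = F.elu(h)
        return self.conv2(g, h).mean(1)  # drop the final singleton head dim

g = dgl.rand_graph(10, 30)       # toy graph: 10 nodes, 30 edges
g = dgl.add_self_loop(g)         # GATConv rejects 0-in-degree nodes
logits = GAT(5, 8, 3)(g, torch.randn(10, 5))  # shape: (10, 3)
```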
YOLOv8 Object Tracking: Tracking Bounding Boxes and Keypoints in Every Frame | step-by-step
Views: 139 · 4 months ago
0:00 - Main problems
1:53 - Plot keypoints and their connectivity
7:30 - Detect large objects
10:20 - Track keypoints using bounding box movement
15:15 - Plot tracked bounding boxes and keypoints on every frame
28:23 - Copy keypoints detected by YOLOv8
In this video, we’ll explore the process of tracking the main objects in a video, specifically boxers in a boxing match, by focusing on both their bounding...
YOLOv8 Object Tracking: Step-by-Step Guide to Tracking Main Objects in Every Frame
Views: 135 · 4 months ago
0:00 - Main problems
2:10 - Detect large objects
5:00 - Track in every frame with ByteTrack
In this video, we’ll explore the process of tracking large objects in a video. This approach is useful for focusing on the main subjects, like following the movements of boxers in a match while leaving the audience in the background. The subjects will be tracked and drawn in every single frame. We'll go throug...
04 - Graph Classification | step-by-step
Views: 100 · 5 months ago
0:00 - Data preparation
8:50 - GNN with GCN
22:00 - GNN with SageConv
In this video, we’ll be exploring the implementation of graph classification models using SAGEConv and GCN. We'll be working with the GIN dataset, which includes 1113 graphs spread across 2 classes, each containing between 10 and 500 nodes. Together, we'll walk through the detailed steps to create a model that consists of a sequence o...
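The overall shape of such a graph classifier can be sketched as per-node SAGEConv updates followed by a mean-over-nodes readout and a linear classifier; the two toy random graphs below stand in for the real dataset.

```python
# Graph-classification sketch: SAGEConv layers + mean-node readout.
import dgl
import torch
import torch.nn.functional as F
from dgl.nn import SAGEConv

class GraphClassifier(torch.nn.Module):
    def __init__(self, in_feats, hidden, n_classes):
        super().__init__()
        self.conv1 = SAGEConv(in_feats, hidden, aggregator_type="mean")
        self.conv2 = SAGEConv(hidden, hidden, aggregator_type="mean")
        self.out = torch.nn.Linear(hidden, n_classes)

    def forward(self, g, x):
        h = F.relu(self.conv1(g, x))
        h = F.relu(self.conv2(g, h))
        g.ndata["h"] = h
        return self.out(dgl.mean_nodes(g, "h"))  # one vector per graph

# A batch of two toy graphs stands in for the real dataset
batch = dgl.batch([dgl.rand_graph(12, 40), dgl.rand_graph(8, 20)])
logits = GraphClassifier(5, 16, 2)(batch, torch.randn(batch.num_nodes(), 5))
print(logits.shape)  # (2, 2): one prediction per graph in the batch
```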
Step-by-step Guide To Creating Yolov8 From Scratch!
Views: 2.2K · 5 months ago
0:00 - YOLOv8 architecture
3:07 - Backbone
44:12 - Neck
1:00:14 - Head
In this video, we'll go through the exciting process of building the YOLOv8 detection model from scratch. YOLOv8, like many advanced models, is composed of three key components: backbone, neck, and head. We'll explore the creation of each of these components step by step. To wrap up the video, we'll demonstrate how to overfit the mo...
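As a taste of the building blocks, here is the Conv-BatchNorm-SiLU pattern that recurs throughout all three components; this is a sketch of the pattern only, not the full model.

```python
# The basic convolution block repeated through the YOLOv8 backbone/neck/head:
# Conv2d -> BatchNorm -> SiLU activation.
import torch

class ConvBlock(torch.nn.Module):
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = torch.nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = torch.nn.BatchNorm2d(c_out)
        self.act = torch.nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

x = torch.randn(1, 3, 640, 640)        # YOLOv8's default input resolution
print(ConvBlock(3, 16, s=2)(x).shape)  # torch.Size([1, 16, 320, 320])
```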
03 - Link Prediction with GraphSage explained | step-by-step
Views: 254 · 5 months ago
0:00 - What is link prediction?
1:53 - Preparing the graph for link prediction
15:20 - GNN with SageConv
26:20 - Training and Testing
In this video, we explore the exciting process of link prediction using GraphSAGE Convolution (SAGEConv). Link prediction helps us determine the likelihood of a connection between two nodes in a graph. We'll be working with the Cora graph, where we: 1. Enhance the learni...
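The scoring step of link prediction can be sketched in a few lines: given node embeddings `h` produced by the GNN, each candidate edge is scored by the dot product of its endpoint embeddings (the toy graph and names below are illustrative, not the repo's code).

```python
# Link-prediction scoring sketch: an edge's score is <h_u, h_v>.
import dgl
import dgl.function as fn
import torch

class DotPredictor(torch.nn.Module):
    def forward(self, g, h):
        with g.local_scope():
            g.ndata["h"] = h
            # u_dot_v computes the dot product of h_u and h_v for every edge
            g.apply_edges(fn.u_dot_v("h", "h", "score"))
            return g.edata["score"].squeeze(-1)

g = dgl.rand_graph(10, 30)
scores = DotPredictor()(g, torch.randn(10, 16))  # one score per edge, shape (30,)
```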
02 - Graph Sage Convolution (SageConv) explained | step-by-step
Views: 185 · 5 months ago
0:00 - Introduction to DGL
4:40 - GraphSage from scratch
13:50 - Train and test
23:49 - GraphSage from built-in function
In this video, we go through the steps in creating a simple graph neural network (GNN) with GraphSage Convolution (SAGEConv) layers. We will create a GNN model in which SAGEConv is coded from scratch, train it, and compare its performance with the exact same model which uses the built...
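A from-scratch mean-aggregator SAGEConv can be sketched as "average the neighbors, concatenate with the node's own feature, apply a linear layer"; this is illustrative only, and the video's implementation may differ in details.

```python
# From-scratch mean-aggregator SAGEConv sketch.
import dgl
import dgl.function as fn
import torch

class MySAGEConv(torch.nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.linear = torch.nn.Linear(2 * in_feats, out_feats)

    def forward(self, g, h):
        with g.local_scope():
            g.ndata["h"] = h
            # copy_u sends each source node's feature along its out-edges;
            # mean averages the messages arriving at each destination node
            g.update_all(fn.copy_u("h", "m"), fn.mean("m", "h_neigh"))
            return self.linear(torch.cat([h, g.ndata["h_neigh"]], dim=1))

g = dgl.rand_graph(10, 30)
out = MySAGEConv(5, 8)(g, torch.randn(10, 5))  # shape: (10, 8)
```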
01 - Graph convolutional network (GCN) explained | step-by-step
Views: 195 · 5 months ago
0:00 - Introduction to DGL
6:20 - A simple case study
17:52 - GCN from scratch
38:46 - Train and test
55:37 - GCN from built-in function
In this video, we go through the steps in creating a simple graph neural network (GNN) with graph convolution (GCN) layers. The feature updating process in a graph dataset can be done via 3 functions in DGL: (1) message_func(edges) sends information along the edges, (2)...
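Here is a sketch of a GCN-style layer built from exactly this message/reduce machinery; note that for brevity it uses a simple 1/degree normalization rather than the symmetric normalization of the original GCN paper.

```python
# GCN-style layer sketch via DGL message passing: send features along edges,
# sum them at each node, normalize by degree, then apply a linear layer.
import dgl
import dgl.function as fn
import torch

class MyGCNLayer(torch.nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.linear = torch.nn.Linear(in_feats, out_feats)

    def forward(self, g, h):
        with g.local_scope():
            g.ndata["h"] = h
            # message: copy_u sends h along edges; reduce: sum aggregates
            g.update_all(fn.copy_u("h", "m"), fn.sum("m", "h_sum"))
            deg = g.in_degrees().clamp(min=1).unsqueeze(1).float()
            return self.linear(g.ndata["h_sum"] / deg)  # simple 1/deg norm

g = dgl.add_self_loop(dgl.rand_graph(10, 30))  # self-loops keep own features
out = MyGCNLayer(5, 8)(g, torch.randn(10, 5))  # shape: (10, 8)
```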
Install Deep Graph Library DGL for MacOs M1, M2, M3
Views: 312 · 5 months ago
In this video, I go through the steps to install DGL on macOS. It is important to install the correct dependencies:
1. pydantic, PyYAML, numpy 1.26.4 (version 2 doesn't work)
2. PyTorch versions: torch 2.1.2, torchvision 0.16.2, torchaudio 2.1.2
If you enjoyed this video, please press the 👍 button. That would mean a lot to me to make the next video asap. As always, feel free to drop a comment d...
Emotion Detection with Vgg16, Pytorch Lightning and SORT | live demo | from scratch - Part 3
Views: 144 · 6 months ago
0:00 - Changes on the model to get 72.5%
12:49 - Inference on videos
37:45 - Inference on live webcam
This is the last part in my series of tutorials on facial emotion detection. In this video, we combine our model and SORT (an object tracking algorithm) in the following way:
1. Use the model to detect facial regions and emotions
2. Track the movement of the facial regions in each frame using SORT
Fu...
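The detect-then-track loop can be sketched as below. This assumes the widely used standalone sort.py (github.com/abewley/sort), whose `Sort.update` takes rows of `[x1, y1, x2, y2, score]` and returns the boxes with a track id appended; the detector stub is hypothetical and stands in for the real face/emotion model.

```python
# Per-frame detect-then-track sketch combining a detector with SORT.
import numpy as np
from sort import Sort  # standalone sort.py from github.com/abewley/sort

tracker = Sort()

def process_frame(frame):
    # hypothetical detector stub: the real pipeline runs the trained model
    dets = np.array([[100, 80, 180, 160, 0.9]])  # one face box + confidence
    tracks = tracker.update(dets)                # rows of [x1, y1, x2, y2, id]
    for x1, y1, x2, y2, tid in tracks:
        # a real app would draw the box and emotion label with track id `tid`
        print(f"track {int(tid)}: box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
    return frame
```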
Emotion Detection with Vgg16, Pytorch Lightning and SORT | live demo | from scratch - Part 2
Views: 121 · 6 months ago
0:00 - Recap VGG16 on FER2013
3:15 - Inference on images
12:53 - Inference on videos
23:45 - Inference on live webcam
Colab link: colab.research.google.com/drive/1xAXJ9x4e4xQCXy0xUDP7LbLRC29Sg9s6
We implement facial emotion recognition from scratch. The workflow is divided into 2 parts:
1. Train VGG16 on FER2013 (with PyTorch Lightning).
2. Inference: on images, videos and live webcam.
This video 📝 focu...
Emotion Detection with Vgg16, Pytorch Lightning and SORT | live demo | from scratch - Part 1
Views: 167 · 7 months ago
0:00 - Workflow
1:55 - Data processing
28:00 - VGG16 with PyTorch Lightning
54:20 - Train results
Colab link: colab.research.google.com/drive/1xAXJ9x4e4xQCXy0xUDP7LbLRC29Sg9s6
We implement facial emotion recognition from scratch. The workflow is divided into 2 parts:
1. Train VGG16 on FER2013 (with PyTorch Lightning).
2. Inference: we will do inference on images, videos and live camera.
This video 📝 foc...
Graph Transformer with Edge Features explained from scratch
Views: 421 · 7 months ago
Graph Transformer explained with paper implementation from scratch
Views: 871 · 7 months ago
Hi, what would you recommend for newer programmers when getting into neural network algorithms? Love the content btw!
Hi. It depends on your background. However, I would say to do (or study) what you like, but try to do something slightly different from what people already did. You would learn a lot doing things that were not done before.
Can you do YOLO panoptic (YOLOPv2) from scratch, sir? It would be a huge help.
Great video, informative. Keep going!!!
Great work!!!! Thank you!!
@@irushabasukala871 Glad you like it!
Don't really understand at 31:51 why h is the raw input features without passing it through the model, e.g. h = model(g_main, g_main.ndata['feat'])
good job!
Great Sir.
How to apply C2f-DWR instead of C2f?
Outdated already, unfortunately.
Find anything new?
@@Jo-vu7gi My advice is to use conda; that worked out of the box for me, but it was a long time ago so I don't remember exactly what I did.
The loss value is about 15 million, which is quite high at epoch 0. Can you explain why this happens? Does this also happen when I use a library like Ultralytics?
YOLOv10 is known for consistent dual assignments for NMS-free training; can you describe the general idea of it? Thank you sir, hope you have a nice day.
The lecture is very helpful for me, thanks a lot sir!
Thank youuuuuuu My LORD 😭😭😭😭
Hi, great video! Can you please do the same thing for YOLO11, and add an extra step showing how to train the model too?
Hi. I would like to improve the accuracy of YOLO recognition. The task is to recognize the contours of empty circles superimposed on each other in an image; as I understand it, you need to change the bounding box to circles. How long would it take? Is it difficult? I want to get the radius of the circle instead of the width and height, plus the exact coordinates along the x and y axes.
Hi. That would be a nice project. However, I think it might not be a simple task. In terms of dataset, I'm not sure if there is any available dataset. For the model, we need to change the head to output radius + center, which might not be trivial.
@@taido4883 Hi! I already have a dataset of 2,000 labeled images in JSON format, including COCO, YOLO v1.1, YOLO v8, and others. All data was annotated using CVAT. Regarding the idea of recognizing circles, I consulted with other specialists, and we came to the conclusion that the center of the circle should match the center of the predicted bounding box. The radius can be calculated as the average of half the width and height of the bounding box, i.e., (height + width) / 4. This approach might simplify the model modification. What do you think?
Thank you, brother!
Thanks for your support!
Can you share the dataset?
Can you share the full code with a GitHub repo?
@@balasubramaniamv1109 Sorry, I didn't see your comment. The GitHub repo is in the description: github.com/dtdo90/Object_tracking_yolov8_ByteTrack
@@taido4883❤️❤️
Thanks so much Tai! I needed this input!
Really?! So happy if this is useful for you!
I hope you are well. I want to use the Dataset file on my own photos; I have the photos and annotations in different folders. What is the proper structure the dataset should have to work correctly? I'm trying to adapt it, but it takes only the photos, not the annotations. Thanks for your attention.
Hi. It is cool that you want to adapt it to your own data. Here is the data structure:
COCO
├── images
│   ├── train2017
│   │   ├── 1111.jpg
│   │   └── 2222.jpg
│   └── val2017
│       ├── 1111.jpg
│       └── 2222.jpg
└── labels
    ├── train2017
    │   ├── 1111.txt
    │   └── 2222.txt
    └── val2017
        ├── 1111.txt
        └── 2222.txt
@@taido4883 Can I see how the txt files look?
@@taido4883 Perfect! My files are .xml. What do your txt files look like? Do I need to modify my Dataset .py file?
@@taido4883 Thanks a lot! What is the structure of the txt files? Is it the same as XML?
Did you try loading the original YOLOv8 weights into the model?
I didn't try it. I expect errors if we load the weights directly because the naming conventions of the modules differ. We might need to map the names to the original model in order to load its weights.
Can you share the link to the dataset you used?
Sure. You can get it from Kaggle www.kaggle.com/datasets/awsaf49/coco-2017-dataset
@@taido4883 thanks
@@taido4883 Thanks for your resourceful explanation, but I face some problems using the COCO dataset. It has no txt file, but your code mentions f'{data_dir}/train2017.txt'. A bit confused. Please help.
nice tutorial man <3
glad you like it!
Very nice. Do more videos on GNNs.
Can you do DETR (Detection Transformer) also? It would be really good, sir.
Sounds good, definitely will dive into it in the future!
I was just about to do the same, even create a mini API for the full pipeline... Subscribed
That would be cool!
Hi Tai, that was really great and I'm persuaded to engage with your channel; that was amazing, too.
Hi (not sure what to call you!). I am very happy you like it 👍
I noticed that the inference is quite inaccurate at night time. Perhaps lighting condition is an important factor! Let me know how your model performs 👍
Awesome , makes me understand coding this from scratch easily. Many thanks ^^
Thanks man. So glad that you like it 👍
It's nice to see you here. As with Andrej Karpathy, visual content is a great way to reach a wider audience.
Thanks man!
very instructive <3
Thanks man!
To find attention coefficients, the original paper uses point-wise multiplication between Q and K, while I use matrix multiplication between Q and K.T. Are these two operations equivalent? Does anyone know the advantage of using the point-wise multiplication? Off the top of my head, I can only attribute it to: (1) efficiency (it's easier to implement the model in DGL with pointwise multiplication?), or (2) data (pointwise multiplication is better for specific data?).
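For what it's worth, the two forms agree entry by entry: the per-pair pointwise multiply-and-sum is exactly the corresponding dot product in Q @ K.T, so the difference is efficiency rather than math; the pointwise form lets a graph library score only the edges that exist instead of all N² pairs. A quick numpy check:

```python
# The per-pair "pointwise multiply then sum" score equals the matching
# entry of Q @ K.T -- both are the same dot product.
import numpy as np

Q = np.random.randn(4, 8)  # 4 queries, feature dim 8
K = np.random.randn(4, 8)

dense = Q @ K.T                  # all pairwise scores at once
pointwise = (Q[2] * K[3]).sum()  # score for the single pair (2, 3)
assert np.allclose(dense[2, 3], pointwise)
```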
Thanks guys for all the likes and comments! A correction in my explanation of the Laplacian PE: the positional vectors are the k eigenvectors corresponding to the k smallest eigenvalues of the matrix L = D - A, where D = degree matrix and A = adjacency matrix.
Wishing you success with NicholaiAI's course. Good luck bro, keep up the good work <3, hope you release more videos so we can see the progress.
👏🏽👏🏽👏🏽
Awesome video!