Tennis Shots Identification and Counting using YOLOv7 Pose Estimation and LSTM Model
- Published Oct 6, 2024
- #yolo #yolov7 #objectdetection #poseestimation #lstm #computervision #deeplearning #opencv #pytorch
In this video 📝, we will learn how to combine the YOLOv7 pose estimation model with an LSTM model to identify and count different tennis shots.
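As a rough illustration of the pipeline described above, the sketch below feeds a sequence of pose keypoints (such as the 17 COCO keypoints YOLOv7-pose outputs per person) into a small LSTM classifier. The sequence length, hidden size, and shot class names are assumptions for the example, not the exact values used in the video.

```python
# Hypothetical sketch: an LSTM that classifies a sequence of pose
# keypoints (e.g. from YOLOv7-pose) into tennis shot classes.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 17             # COCO pose format used by YOLOv7-pose
FEATURES = NUM_KEYPOINTS * 2   # (x, y) per keypoint
SEQ_LEN = 30                   # frames per shot window (assumed)
CLASSES = ["forehand", "backhand", "serve", "neutral"]  # assumed labels

class ShotClassifier(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(CLASSES))

    def forward(self, x):             # x: (batch, SEQ_LEN, FEATURES)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last time step

model = ShotClassifier()
dummy = torch.randn(1, SEQ_LEN, FEATURES)  # stand-in for real keypoints
logits = model(dummy)
print(logits.shape)  # torch.Size([1, 4])
```

Counting shots then reduces to tracking class transitions over time (e.g. incrementing a counter whenever the predicted class changes from "neutral" to a shot class).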
If you enjoyed this video, be sure to subscribe and press the 👍 button
Follow Me:
LinkedIn: / muhammad-moin-7776751a0
GitHub: github.com/Muh...
Chat with us on Discord:
► / discord
For Consultation Call:
www.upwork.com...
Tennis Shots Identification and Counting using YOLOv7 Pose Estimation and LSTM Model (Medium Blog Post):
/ tennis-shot-identifica...
Google Colab File Link:
colab.research...
🧑🏻💻 My AI and Computer Vision Courses⭐:
📙 AI 4 Everyone: Dive into Modern AI with Llama 3.1 and Gemini (13$):
www.udemy.com/course/ai-4-everyone-dive-into-modern-ai-with-llama-31-and-gemini/?couponCode=13DOLLARS
📙 YOLOv9 & YOLOv10: Learn Object Detection, Tracking & WebApps (13$):
www.udemy.com/course/yolov9-learn-object-detection-tracking-with-webapps/?couponCode=SEPTEMBER13DOLLARS
📕 Learn LangChain: Build #22 LLM Apps using OpenAI & Llama 2 (14$):
www.udemy.com/course/learn-langchain-build-12-llm-apps-using-openai-llama-2/?couponCode=SEPTEMBER13DOLLARS
📚 Computer Vision Web Development: YOLOv8 and TensorFlow.js (13$):
www.udemy.com/course/computer-vision-web-development/?couponCode=SEPTEMBER13DOLLARS
📕 Learn OpenCV: Build #30 Apps with OpenCV, YOLOv8 & YOLO-NAS (13$):
www.udemy.com/course/learn-opencv-build-30-apps-with-opencv-yolov8-yolo-nas/?couponCode=SEPTEMBER13DOLLARS
📗 YOLO-NAS, OpenAI, SAM with WebApps using Flask and Streamlit (13$): www.udemy.com/course/yolo-nas-object-detection-tracking-web-app-in-python-2023/?couponCode=SEPTEMBER13DOLLARS
📘 YOLO-NAS The Ultimate Course for Object Detection & Tracking (13$): www.udemy.com/course/yolo-nas-the-ultimate-course-for-object-detection-tracking/?couponCode=SEPTEMBER13DOLLARS
📙 YOLOv8: Object Detection, Tracking & Web Apps in Python 2023 (13$): www.udemy.com/course/yolov8-the-ultimate-course-for-object-detection-tracking/?couponCode=SEPTEMBER13DOLLARS
📚 YOLOv7 YOLOv8 YOLO-NAS: Object Detection, Tracking & Web Apps in Python 2023 (13$): www.udemy.com/course/yolov7-object-detection-tracking-with-web-app-development/?couponCode=SEPTEMBER13DOLLARS
Thank you very much ♥
You're welcome 😊
Superb work👏🏿👏🏿👏🏿
Thanks Joel
Great work. Very informative and well explained. Just one question: what if we train the model on single-person video poses, and at inference time we want to infer both player 1 and player 2 instead of just player 2? Can we pass the whole frame instead of player 2's points, or do we need to pass both players one by one to get the results?
Awesome stuff! I'd love to run the Google Colab notebook to follow along, but I get an error. I think the Google Drive link is down. Would you mind updating it when you have a chance? Thanks!
same ...
Hi Muhammad, good video. I have one question: you use YOLOv7 just for getting the keypoints from the videos, right? So the LSTM you used is the prediction model. I would appreciate your clarification. Thanks!
Hello sir, it was a very nice video with a good explanation. I have just one question: what is backhandgroundstroke_data1, and where do we get that package?
Hello, did you find the solution to your problem?
@@ezginuruyaroglu595 i did not find it
Can I use this video for educational purposes?
How did you normalize the keypoints data? Is it done based on the image size or the bbox size?
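The video does not spell out which normalization is used, but the question above contrasts the two common options. The sketch below shows both: scaling keypoints by the frame size versus expressing them relative to the person's bounding box (which makes the features invariant to where the player stands in the frame). All values are illustrative.

```python
# Hypothetical sketch: two common ways to normalize pose keypoints.
# Neither is necessarily the one used in the video.
def normalize_by_image(kpts, img_w, img_h):
    """Scale (x, y) keypoints to [0, 1] by the frame size."""
    return [(x / img_w, y / img_h) for x, y in kpts]

def normalize_by_bbox(kpts, box):
    """Express keypoints relative to the person's bounding box."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    return [((x - x1) / w, (y - y1) / h) for x, y in kpts]

kpts = [(320, 240), (400, 480)]
print(normalize_by_image(kpts, 640, 480))   # [(0.5, 0.5), (0.625, 1.0)]
print(normalize_by_bbox(kpts, (300, 200, 500, 500)))
```

Bounding-box normalization is often preferred for action recognition, since the classifier should react to the pose itself rather than the player's position on court.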
Hi sir, I came from your YOLO-NAS custom detection video.
May I know how to check mAP@0.50:0.95 and also the mAR of the YOLO-NAS model?
Hello, very nice video. I want to ask something: where do we download backhandgroundstroke_data, or did we create it? I did not see the explanation in the video. I would be very happy if you could help me.
I don't know; for some reason he omitted the part about how to create the data.
@@thedatascientistchannel7176 same?
@@thedatascientistchannel7176 Hello, did you find the solution to your problem?
How does this only detect one person in the frame?
He defines a region of interest
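The region-of-interest idea mentioned above can be sketched as a simple filter: keep only the detection whose bounding-box centre falls inside a hand-picked region (e.g. the near half of the court), so downstream code processes a single player. The coordinates below are made up for illustration.

```python
# Hypothetical sketch: select one player via a region of interest (ROI).
def in_roi(box, roi):
    """box and roi are (x1, y1, x2, y2); test whether the box centre is inside."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    return roi[0] <= cx <= roi[2] and roi[1] <= cy <= roi[3]

def select_player(detections, roi):
    """Return the first detection inside the ROI, or None."""
    for det in detections:
        if in_roi(det, roi):
            return det
    return None

roi = (100, 400, 1180, 720)        # assumed near-court region in pixels
dets = [(600, 100, 700, 250),      # far player (centre outside ROI)
        (500, 450, 650, 700)]      # near player (centre inside ROI)
print(select_player(dets, roi))    # (500, 450, 650, 700)
```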
25:10