Delighted to see the tutorial. Thanks for sharing Piotr!
Thank you very much :) Feel free to subscribe. New Kaggle competition is up: www.kaggle.com/competitions/nfl-player-contact-detection and I'll most likely drop some video about it soon :)
Your work is truly exceptional, and I greatly admire it. I'm optimistic that, in time, you will skillfully implement animal 3D pose estimation using YOLO. Keep up the remarkable work!
Awesome video and project, thank you for sharing!
Great job! Happy to help ;)
Thank you very much once again! Respect for those mad skills!
Great @Skalski, keep doing such amazing projects --- subscribed for more...
You made my day! I'll do my best :) Thank you!
Came here after I saw a LinkedIn post of you! Great work!
let's go!!!!! love this !!
It is so nice to hear that! 💥
Awesome, we want more videos :)
Thanks a lot. I'm doing my best, but I have a lot to do at work. If you want to watch my videos, you can do it on the Roboflow channel: www.youtube.com/@Roboflow. I usually produce one video a week there.
great explanation! cool 😎
Awesome!
Thank you for sharing your code :)
Can I ask which camera you used?
I want to try your project too, and I wonder whether both scenes match at the same timestamp when you combine the two videos to estimate correct 3D data.
@Skalski, that looks so nice! Would it work in real time using two live video streams?
Thank you! This is actually a very good question. With the current setup I got 3 FPS on a Tesla T4, so we would need to get a lot more efficient. But I think that a combination of a smaller model and a more powerful GPU could bring us to 20 FPS.
Very good job! Can we send in .bvh format the result of the animation?
Great work! I have been working on a very similar project, but running live for multiple users at around 15 FPS. I was wondering how you fuse the info from the 2 cameras in the 3D model? Do you know the position of each camera? Or the position of one camera relative to the other?
I open-sourced the code, but I'm working on a blog post right now. Any specific stuff that you'd like to read about?
@@SkalskiP it would be great to have some details on your 3D modeling for positioning the 17 joints in space.
I understand you have some relative solution, where you preset the size (height) of the model to 1000. I am interested in a 3D positioning in space of the joints with a given coordinate origin. I was wondering if you have some insights or thoughts on that.
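For context on the relative scaling discussed above, here is a minimal sketch of what "preset the height of the model to 1000" could look like in practice. This is an illustration only, not code from the project; `normalize_skeleton` and the joint array are hypothetical names, and it assumes 17 triangulated joints stored as a 17x3 array with Y as the vertical axis.

```python
import numpy as np

def normalize_skeleton(joints: np.ndarray, target_height: float = 1000.0) -> np.ndarray:
    """Uniformly rescale triangulated joints so the skeleton's vertical
    span (max Y minus min Y) equals target_height units."""
    height = joints[:, 1].max() - joints[:, 1].min()
    return joints * (target_height / height)

# 17 hypothetical COCO-style joints with coordinates in arbitrary units.
joints = np.random.rand(17, 3)
scaled = normalize_skeleton(joints)
span = scaled[:, 1].max() - scaled[:, 1].min()  # ~1000.0 after scaling
```

Because the scale is relative, the result gives consistent proportions across frames but not absolute metric positions; recovering those would require a calibrated origin, as the comment above asks about.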
@@asdfds6752 Yes, I try to calibrate both models to the same preset dimensions. I'll try to include that in the blog. I should be sharing that link this week on LI.
@@SkalskiP please post a link to that blog article. Thanks!
@@moses5407 hi! I'm still working on it... I have 500-600 words already, but I have so much work that I don't have time to finish. :/ Sorry about that.
can you make a tutorial on tracking objects in multiple cameras?
Just so I understand: do you mean object detection tracking or keypoint tracking?
@@SkalskiP Tracking objects across cameras. For example, if a person captured by one camera gets ID 1, they should be tracked by the other camera with the same ID.
@@madeshprasadc2551 Understood! Sounds like an interesting video to do. Let me think about it. For now, I have my hands full with the deepfake video.
@@SkalskiP And a tutorial on how to train a PaddleOCR model on custom data.
@Skalski, have you ever trained keypoint detection with another architecture (OpenPose, PifPaf, etc.) but on a custom dataset? Sorry if my question is a little off topic hehe...
Hello, nice video! When I try to replicate the project with the following imports:
from utils.general import check_img_size
from models.experimental import attempt_load
Python doesn't seem to find those modules and shows the following errors:
Import "utils.general" could not be resolved (reportMissingImports)
Import "models.experimental" could not be resolved (reportMissingImports)
How can this be solved? After executing that block of code, I can no longer produce a correct plot_image.
Can you help me, please?
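One likely cause: `utils.general` and `models.experimental` are modules inside the YOLOv7 repository itself, not packages on PyPI, so the imports only resolve when Python can see the cloned repo. A minimal sketch of one fix, assuming the repository is cloned into a `yolov7` directory next to your script (adjust the path to match your setup):

```python
import os
import sys

# Point Python's module search path at the cloned YOLOv7 repository.
# The "yolov7" directory name is an assumption; use your actual clone location.
YOLOV7_DIR = os.path.abspath("yolov7")
if YOLOV7_DIR not in sys.path:
    sys.path.insert(0, YOLOV7_DIR)

# With the repo on sys.path, these imports should resolve:
# from utils.general import check_img_size
# from models.experimental import attempt_load
```

Alternatively, running your notebook or script from inside the cloned repository directory has the same effect, since Python searches the working directory first.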
AMAZING
Hi, how can I access the site for your tutorial?
Do you think this can be done in Mediapipe?
Do you do contract work?
It depends on the contract and the work :)
@@SkalskiP Haha! Good answer :). I'm interested in the possibility of taking this to a real-time solution using 2 synced cameras and a fairly standard laptop (with GPU).
@@moses5407 hm… I’d love to see that happening :) I’d say it should be possible.
@@SkalskiP can you give me an estimate to do that work?
Great tutorial! I've been making a project about human pose estimation (just as a study), and I'm stuck on YOLOv7 right now. I really need some help; how can I contact you?
Hi! You can reach out to me here: github.com/SkalskiP/sport/discussions/categories/q-a