I just wanted to take a moment to say that this is hands-down the best Raspberry Pi content I’ve seen (not to take away from DroneBotWorkshop's channel; I love his content too). The presentation was top-notch, and every step was clearly and thoughtfully demonstrated. I almost never comment on YouTube videos, despite spending a large portion of my days watching them, but your work compelled me to break that habit. This is exactly the kind of quality content that stands out, and I truly appreciate the effort you’ve put into making it so accessible and inspiring.
Please keep doing what you’re doing! Your work is making a big impact, and if there’s any way I can support you further, I’d love to do so. Thank you again for this incredible video!
Thank you very much Mr Spaghetti, it means a lot that you took the time to comment this. We are just happy that people are making use of our videos, and as long as people do, we will keep on making them!
P.S. we also love DroneBotWorkshop 💜
Not my forte, but as a (retired, ha ha) Application Developer I found this video absolutely fascinating. Great presenter, right speed (for me). M. Townsville
Thanks!!! Great video
Some good use cases: use hand gestures or poses as “unlock” or “activate/execute” commands.
E.g. only a particular pose opens a door, a “Vulcan salute” executes a command, etc.
Great content guys as always. 👍
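For anyone wanting to try this idea, here's a minimal, hypothetical sketch of dispatching actions from pose keypoints. It assumes some pose-estimation step has already produced (x, y) pixel coordinates keyed by joint name; the joint names, the "hands up" rule, and the action registry are all illustrative, not from the video.

```python
def is_hands_up(kp):
    """True when both wrists are above the nose (smaller y = higher in the image)."""
    return kp["left_wrist"][1] < kp["nose"][1] and kp["right_wrist"][1] < kp["nose"][1]

def dispatch(kp, actions):
    """Run the action registered for the first matching pose test, if any.

    actions: list of (pose_test, action) pairs, checked in order.
    """
    for test, action in actions:
        if test(kp):
            return action()
    return None

# Example: register "hands up" as the unlock gesture.
# keypoints = run_pose_estimation(frame)   # whatever your pipeline produces
# dispatch(keypoints, [(is_hands_up, open_door)])
```

In practice you would also want to require the pose to hold for several consecutive frames before firing, so a single noisy detection can't open the door.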
Awesome tutorial vid!
nice video
How can this be combined with depth estimation to get the joint positions in 3d space?
That sounds like quite an involved project. Depth estimation can give some shaky results, and I think a better solution would be to have a 2nd camera running pose estimation and use a bit of math to combine them, like this post:
www.reddit.com/r/MachineLearning/comments/zo2nl1/p_football_player_3d_pose_estimation_using_yolov7/?rdt=48728
This very quickly becomes a very involved project though, and we have had some issues with these pipelines when trying to run 2 instances of pose estimation at once.
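The "bit of math" for combining two camera views is triangulation. Here's a minimal NumPy sketch of linear (DLT) triangulation, assuming both cameras are calibrated so you know their 3x4 projection matrices; in a real two-camera pose rig you would run this once per joint per frame.

```python
import numpy as np

def triangulate_point(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one joint seen by two calibrated cameras.

    P1, P2 : 3x4 projection matrices (intrinsics @ [R|t]) for each camera.
    pt1, pt2 : (x, y) pixel coordinates of the same joint in each view.
    Returns the joint position in 3D world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 via SVD; the last row of Vt is the least-squares solution.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise
```

OpenCV's `cv2.triangulatePoints` does the same job if you'd rather not hand-roll it; the hard part in practice is calibrating the two cameras and keeping their frames synchronised.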
@Core-Electronics yeah, it would give shaky results, but you could use the fact that people's limbs don't change length to help constrain the results in a force-directed layout. You could also filter out the high-frequency shakiness with an FFT. A second camera would be better of course, but if you only have 1 camera you could still get something useful.
1. Use the Tab button more to auto-complete! 2. Use Ubuntu, it's more stable! 3. It's "day-ta", not "dar-ta"! Watch Star Trek if you don't believe me.
We Australians get on the nerves of the entire English-speaking world with how we pronounce it "Dar-Tar"