Very well put together vid, Nicholas!
Thanks @Andrew!
Awesome video as always! I used these exact models on my latest project to make hand gesture control available on react websites. Ended up dropping fingerpose and building a neural network to recognize the gestures in the end. It’s now an npm package called HAND-IT. Might be fun to mess around with. Keep up the great work!
This is sick @Jacob, this package? www.npmjs.com/package/hand-it
Looks freaking amazing, I'll definitely check it out. Might make a video on it if you're okay with it!
@@NicholasRenotte That's it! Thanks man, you are more than welcome to use it. Let me know if you have any issues or if I could be clearer with the docs.
@@ChromeCover91 definitely, will do!!
Heyyy, I'm so happy to see this series of videos again 😍😍😍 Thank you so much sir ❤❤
You deserve more subscribers
Yess @Jaskeerat, had the code for a little while, just had to record! We're getting there!
@@NicholasRenotte There's a little request for you, sir. A friend of mine knows the basics of React but not hooks, useRef, and all that. So please, when you use a feature of some tech like hooks, give a little intro to it so that anyone can be comfortable. Everything else is great ❤
@@slowedReverbJunction definitely, so more of a clear explanation on how to use hooks?
@@NicholasRenotte yes sir, that much will do justice 💝
Excellent! Just what I was looking for.
This episode of the series THE NOSS (Thanos) was amazing!!!!
🤣 I had to bring it back @Atif!
Wonderful explanation as always. Kindly give a tutorial on hand gesture recognition using an LSTM.
Hmmm, any particular use case?
@@NicholasRenotte I was wondering if you could go for any 5 commonly used gestures.
@@AMRUTHAK-qu6sl sure, let me take a look into it.
I can't thank you enough for this video. I found you from Reddit. You make code incredibly friendly to understand!! Great video!
How would I include this in a Three.js project?
How do you navigate from one page to another with this gesture recognition?
Hi. Could you program custom handpose gesture detection in Python?
Can you explain the deep learning aspect of this application?
Good work bro 🔥
Thanks 🙏 so much @Ameya!!
Sir, can you please create a video on controlling the computer screen with hand gestures using an LSTM deep learning algorithm?
Wonderful video!
Thanks so much @JohannFlies 🙏🙏🙏!!
I would like to implement an LSTM with OpenPose to detect continuous signs using Python or PyTorch... can you give me some ideas on it?
Try taking a look at action detection models on Tensorflow Hub @Barithi!
Thanks, brother.
love it.
Awesome man!! Let me know how you go @Eranda!
@@NicholasRenotte Could you show in another video how to generate sound separately?
@@erandaupeshitha2126 generate sound or music using AI?
@@NicholasRenotte oh, sorry. Generate music using AI.
@@erandaupeshitha2126 yah, awesome! Got it planned!
It's a great video, with perfect real-time response. Sir, what do we do in the case of multiple signs, i.e. more than 100 signs? Would I have to define the gesture for every sign manually, as you did for I Love You in this tutorial, or is there another method?
Yep, unfortunately you would. Admittedly, setting up detection for each sign would take a while.
@@NicholasRenotte sir, can you make a tutorial on using this method with a neural network, i.e. we detect keypoints from the hand, label them with their corresponding sign, and then train a neural network?
@@mutaherkhan2161 yah, I'll add it to the list, might be a little while away though!
@@NicholasRenotte you are great..
@@NicholasRenotte sir, please reply to me with the link
Can you implement this project in React Native??
Working on it as we speak @Test Jawwad!
Is this possible for sign language?
Sure is, although it might be a little trickier to configure every single pose @Ace! I think a better approach, if you didn't want to use straight-up object detection, would be to pass the keypoints to a custom TF model with a multi-class classifier to determine the poses.
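To make the keypoint-classifier idea concrete, here's a minimal, dependency-free sketch (the helper name and normalization scheme are my own, not from the video): flatten handpose's 21 [x, y, z] landmarks into a normalized feature vector, which you would then feed into a small multi-class TF.js model with a softmax output.

```javascript
// Flatten handpose's 21 [x, y, z] landmarks into a normalized feature
// vector. Hypothetical helper, not from the tutorial; in practice you'd
// feed the result to a small multi-class TF.js model with a softmax head.
function landmarksToFeatures(landmarks) {
  // Use the wrist (landmark 0) as the origin so the features are
  // translation-invariant.
  const [wx, wy, wz] = landmarks[0];
  const centered = landmarks.map(([x, y, z]) => [x - wx, y - wy, z - wz]);

  // Scale by the largest distance from the wrist so hand size and
  // camera distance don't matter.
  const maxDist = Math.max(
    ...centered.map(([x, y, z]) => Math.hypot(x, y, z)),
    1e-6
  );
  return centered.flat().map((v) => v / maxDist);
}
```

Each frame then becomes a fixed-length 63-value vector, which is exactly the kind of input a small dense classifier can be trained on, one output class per sign.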
how can we work with this offline?
Look for some of the mediapipe tutorials with Python on the channel!
Hi, great post!! Could you make a video on controlling the mouse cursor with gestures?
Heya @Avinash, yup, it's planned for this year, got a bunch of control flow stuff coming!
@@NicholasRenotte wowww♥️. I would be keenly waiting for it
Thanks ☺️
@@avinash-tripathy perfect, will let you know as soon as it's out there!
@@NicholasRenotte Make it soon sir 🤩 waiting...
👍👍
Does it detect in low lights ?
It performs better in a well-lit room, but you can add some low-light photos to improve its accuracy in low-light conditions!
much love
Thanks soo much @Prasad!! Back at ya my bruv!
Can we do it with React native?
How do you show the result?
Other than what's shown here?
Thank you very much for these tutorials, If possible kindly make a tutorial on navigating React websites using gestures (hand or facial) in which we first train the model by taking some sample images from webcam in the tensorflow js like that Pac Man game on tensorflow js examples site
Heya @Ken, definitely, I've got it planned when we take a look at control flow!
@@NicholasRenotte Thankss, eagerly waiting for that😇 😇
Thanks! You are an awesome guy. Could you please show how to display an "i_love_you.png" picture when we make the "I love you" gesture???
Definitely @Mr Big, code below:
1. Add this to your imports once you have an I Love You photo:
// OLD IMAGE IMPORTS
import * as fp from "fingerpose";
import victory from "./victory.png";
import thumbs_up from "./thumbs_up.png";
// NEW IMAGE IMPORTS
import * as fp from "fingerpose";
import victory from "./victory.png";
import thumbs_up from "./thumbs_up.png";
import i_love_you from "./iloveyou.png";
2. Change your images hooks section
// OLD HOOK SETUP
const [emoji, setEmoji] = useState(null);
const images = { thumbs_up: thumbs_up, victory: victory };
// NEW HOOK SETUP
const [emoji, setEmoji] = useState(null);
const images = { thumbs_up: thumbs_up, victory: victory, I_love_you: i_love_you }; // note: this key must exactly match the name of the fingerpose gesture you defined
@@NicholasRenotte Hi Nicholas, I followed this code but nothing is showing up when I do the "i_love_you" pose, what could possibly be wrong??
Never mind, I worked it out! Thanks anyway, keep up the good work! Love your vids!!
@@hugosimoes6181 might need to tweak the pose for your hand and/or improve lighting in your room!
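On tweaking poses: as a rough illustration of what a fingerpose gesture description boils down to (this is a dependency-free toy with made-up names, not the real fingerpose API), you can estimate how curled each finger is from the angle at its middle joint and then score a pose description against all five fingers.

```javascript
// Toy version of curl-based gesture matching (names made up, not the
// real fingerpose API): a finger is "curled" when the angle at its
// middle joint is tight.
function jointAngle(a, b, c) {
  // Angle at b (in degrees) formed by the segments b->a and b->c.
  const v1 = [a[0] - b[0], a[1] - b[1]];
  const v2 = [c[0] - b[0], c[1] - b[1]];
  const dot = v1[0] * v2[0] + v1[1] * v2[1];
  const mag = Math.hypot(...v1) * Math.hypot(...v2);
  return (Math.acos(dot / mag) * 180) / Math.PI;
}

function isCurled(knuckle, joint, tip) {
  // 120 degrees is an arbitrary threshold -- this is the kind of value
  // you'd tweak for your own hand.
  return jointAngle(knuckle, joint, tip) < 120;
}

// Score fingers against a description like
// { thumb: false, index: false, middle: true, ring: true, pinky: false }
// (true = curled) -- for "I love you": thumb, index and pinky extended.
function scorePose(fingers, description) {
  let matches = 0;
  for (const name of Object.keys(description)) {
    const [k, j, t] = fingers[name];
    if (isCurled(k, j, t) === description[name]) matches++;
  }
  return matches / Object.keys(description).length;
}
```

Loosening that angle threshold, or accepting a score below 1.0, is the kind of "tweak the pose for your hand" adjustment mentioned above.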
Wish it wasn't React. I prefer it when it's plain JavaScript, easier to learn that way :) but thanks anyway!
Cheers @Andrew!
@@NicholasRenotte I managed to do it without React :) but now I will look at your videos on how to train models :) thank you
@@AndrewTSq nicee! Awesome work!