Real Time AI GESTURE RECOGNITION with Tensorflow.JS + React.JS + Fingerpose

  • Published on Jan 7, 2025

Comments • 183

  • @atifasadkhan
    @atifasadkhan 4 years ago +3

    Thanks Nicholas, I hope you have 1M subscribers within a year

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +3

      Hell yes 🙌 , I've got 185 more video ideas on my list (that's the real number) so there's no stopping soon!

    • @tharaniv6267
      @tharaniv6267 3 years ago

      @@NicholasRenotte that's great....

  • @graciemitchell1169
    @graciemitchell1169 4 years ago +2

    These videos are soooo helpful! Keep up the great work!

    • @NicholasRenotte
      @NicholasRenotte 4 years ago

      Thanks so much @Gracie Mitchell! Definitely, plenty more in the pipeline!

  • @srrahul6029
    @srrahul6029 3 years ago +2

    Hey Nicholas ! Your videos are awesome !

  • @andysantisteban
    @andysantisteban 3 years ago +2

    Great video! Thanks to you I have a thesis idea to finish my degree. I am very grateful

  • @doobinl8505
    @doobinl8505 1 year ago +5

    Hi Nick, thanks for the great video. I don't know why, but I cloned your repository and had to change one line of code for it to work. Wanted to leave it as a reference for other developers:
    const confidence = gesture.gestures.map(
      // had to change prediction.confidence to prediction.score
      (prediction) => prediction.score
    );

    • @gildedgold8627
      @gildedgold8627 1 year ago +1

      hi, thank you so much for this - i've been struggling with this bug for abt an hour and this rly helped! how did you figure this out?

    • @pandaplays971
      @pandaplays971 10 months ago

      thanks a lot dude

  • @vikashchand.
    @vikashchand. 4 years ago +5

    Yes! custom GESTURES video! 😂🙏

  • @blind_sauce
    @blind_sauce 3 years ago +1

    Tried today with my Lego Ev3, works like magic!

    • @NicholasRenotte
      @NicholasRenotte 3 years ago

      Ooooooh sick, I always wanted an Ev3. You can code with them using Js?

    • @blind_sauce
      @blind_sauce 3 years ago

      @@NicholasRenotte By using ev3dev, you can run Python or node on EV3. But I just run Fingerpose JS in my laptop then send realtime recognition result to EV3, so I can use my gesture as command to the Lego robot.

    • @NicholasRenotte
      @NicholasRenotte 3 years ago

      @@blind_sauce that is amazing, you've just given me an excuse to buy one! YESSSS!

  • @susantasharma
    @susantasharma 4 years ago +1

    Amazing. You got an instant subscriber.

    • @NicholasRenotte
      @NicholasRenotte 4 years ago

      Awesome @Susanta, thanks so much 🙏, welcome to the team!

  • @izaro6294
    @izaro6294 3 years ago +1

    Keep up the great work!

    • @NicholasRenotte
      @NicholasRenotte 3 years ago +1

      Thanks so much! Definitely, plenty more to come 🙏

  • @akshitdayal2689
    @akshitdayal2689 3 years ago

    You're amazing! Thanks for this!

  • @sardorkamoliddinov99
    @sardorkamoliddinov99 1 year ago +1

    Good morning! I have a problem with line 85 of the code. Google Chrome shows the error reading("name"). Can you help me?

    • @Sina-db1jn
      @Sina-db1jn 1 year ago

      I have the same error: "Cannot read properties of undefined (reading 'name')
      TypeError: Cannot read properties of undefined (reading 'name')
      at detect (localhost:3000/static/js/bundle.js:102:52)"

  • @_Hadiyal_Mohit
    @_Hadiyal_Mohit 1 year ago

    simply great video😁

  • @faustozecca4187
    @faustozecca4187 3 years ago +1

    Fantastic work sir! I found your work while doing a bit of research to see if any of the makers/YouTubers have been using hand gesture recognition to control their camera slider rigs. How cool would it be if we could control a 1 to 6 DOF CNC camera robot (linear, cartesian, gimbal, radial) by gesturing in the field of view with your hand(s)? I am thinking about all of my YT heroes that spend so much of their time making instructional videos to deliver the DIY/maker content we all love to gobble up.
    -Forefinger + middle finger +thumb triad for X,Y,Z,i,j,k targeting
    -'come here', and 'go away' gestures for zoom
    -focal distance control
    I love the honesty of your intro. I imagine myself as T. Stark designing the Mk2, but am painfully aware of my limitations. Thanks for letting me dream and flex my imagination!

    • @NicholasRenotte
      @NicholasRenotte 3 years ago

      Thank you so much @Fausto, that sounds like an amazing use case! Oh I've got aspirations to build my own Jarvis one day 🤣

  • @shivanirao8657
    @shivanirao8657 4 years ago +5

    Hey Nick, that was an amazing tutorial for gesture recognition, right in the time of need. :) Just wanted to know how to shift the same code to Node.js and pre-existing bootstrap templates instead of React? I am currently working on similar stuff that recognizes hand gestures and controls the website, but I'm unable to proceed due to minimal knowledge of TensorFlow. Could you please provide your insights on the same?

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +2

      Definitely, sounds like an awesome project @Shivani! You can embed the Tensorflow components on the client side and use them in a similar way to what I've done! These scripts allow you to bring TF and handpose into your existing bootstrap app (the tags aren't reproduced in this transcript; see the sketch after this thread):

    • @shivanirao8657
      @shivanirao8657 4 years ago +1

      @@NicholasRenotte thank you nick. Would definitely try this :)

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      @@shivanirao8657 awesome, let me know how you go!
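
    The script tags referenced above aren't reproduced in this transcript; a minimal sketch of the standard CDN includes for TensorFlow.js and the handpose model, assuming a plain (non-React) page with a hypothetical <video id="webcam"> element that is already streaming the webcam:
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/handpose"></script>
    <script>
      // `handpose` is exposed as a global by the script include above.
      async function run() {
        const net = await handpose.load();
        const hands = await net.estimateHands(document.getElementById("webcam"));
        console.log(hands); // array of hand predictions, 21 landmarks each
      }
      run();
    </script>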

  • @RoseAce87
    @RoseAce87 4 years ago +6

    Great video! Do you think I could train similar technology to recognise some basic sign language?

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +2

      Definitely @Bethany Wallace 🙌, you can train custom gestures (in this case for single handed poses) for basic sign language. I'm planning on creating a video on it in a few weeks time!

    • @RoseAce87
      @RoseAce87 4 years ago +1

      @@NicholasRenotte awesome thank you! I'm looking into this for a university project so I will be sure to come back and watch!

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      Awesome @Bethany Wallace!!

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      Heya @Bethany, still a little basic but hopefully can help you out: th-cam.com/video/pDXdlXlaCco/w-d-xo.html

  • @holduspokus9743
    @holduspokus9743 2 years ago +1

    Hi Nick!
    That is an awesome product! I was wondering if something like this exists for body gestures? I'd like to add an extra feature to my stream when I'm in VR, so it could trigger actions depending on my body movements...

  • @nikitachernevsky8521
    @nikitachernevsky8521 3 years ago +2

    Very cool video! I have only one problem - if I want to add more emoji for detection from hand - where can I see the code for other hand emoji? Thanks!

    • @NicholasRenotte
      @NicholasRenotte 3 years ago +1

      You can add other classes @Nikita, just add the images and update the logic (see the sketch after this thread) :)

    • @nikitachernevsky8521
      @nikitachernevsky8521 3 years ago

      @@NicholasRenotte Thanks!
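
    A sketch of what "add the images and update the logic" can look like, assuming the images lookup object and fingerpose import used in the video's App.js; the "ok_sign" asset and finger curls below are illustrative, not a tuned gesture:
    import * as fp from "fingerpose";
    import thumbs_up from "./thumbs_up.png";
    import victory from "./victory.png";
    import ok_sign from "./ok_sign.png"; // hypothetical new asset

    // 1. Key the lookup by gesture name so <img src={images[emoji]} /> keeps working.
    const images = { thumbs_up, victory, ok_sign };

    // 2. Describe the matching pose; fingerpose only ships thumbs_up and victory built in.
    const okSignGesture = new fp.GestureDescription("ok_sign");
    okSignGesture.addCurl(fp.Finger.Thumb, fp.FingerCurl.HalfCurl, 1.0);
    okSignGesture.addCurl(fp.Finger.Index, fp.FingerCurl.HalfCurl, 1.0);

    // 3. Register it alongside the built-in gestures when creating the estimator.
    const GE = new fp.GestureEstimator([
      fp.Gestures.VictoryGesture,
      fp.Gestures.ThumbsUpGesture,
      okSignGesture,
    ]);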

  • @NickScholten-q7u
    @NickScholten-q7u 1 year ago

    Hi Man. I'm wondering if you're able to use any camera you want or if you need a specific camera.

  • @QCmemories
    @QCmemories 3 years ago

    Thanks Nicholas!
    And what do you think about applying a filter to this work? E.g. a Kalman filter? I'm working on this but it's bugging me.

  • @lory4kids272
    @lory4kids272 3 years ago +1

    Hey Nick, do you think it would be better to use a Kinect for gesture recognition? Could it be more efficient?

    • @NicholasRenotte
      @NicholasRenotte 3 years ago

      Interesting suggestion. If Kinect has motion tracking already it might be faster particularly if it's embedded!

  • @dadcraft64
    @dadcraft64 2 years ago

    this is awesome, how could i approach a drag and drop reactjs component using gesture recognition?

  • @melaniecasabar2183
    @melaniecasabar2183 2 years ago +1

    The .name method doesn't work, any ideas why
    " Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'name') "

    • @offjoao5659
      @offjoao5659 1 year ago

      Did you fix it? I have tried various methods but I'm not sure what the error is here.

    • @melaniecasabar2183
      @melaniecasabar2183 1 year ago +1

      @@offjoao5659
      I did not include these lines:
      const confidence = gesture.gestures.map(
        (prediction) => prediction.confidence
      );
      const maxConfidence = confidence.indexOf(
        Math.max.apply(null, confidence)
      );
      Then I replaced maxConfidence with 0.
      Before:
      setEmoji(gesture.gestures[maxConfidence].name);
      After:
      setEmoji(gesture.gestures[0].name);
      It worked fine for me after that. Let me know if it works for you.

    • @offjoao5659
      @offjoao5659 1 year ago +1

      @@melaniecasabar2183 Thank you, your answer actually helped me solve something. I was looking at the documentation for the fingerpose npm module and found out that:
      prediction.confidence
      is actually:
      prediction.score
      So after a lot of debugging and lots of console logs later, this is the finished code (a guarded variant is sketched after this thread):
      const confidence = gesture.gestures.map(
        (prediction) => prediction.score
      );
      // console.log(confidence)
      // console.log(gesture)
      const maxConfidence = confidence.indexOf(
        Math.max.apply(null, confidence)
      );
      console.log(maxConfidence);
      setEmoji(gesture.gestures[maxConfidence].name);

    • @nhlakaniphomagwaza6170
      @nhlakaniphomagwaza6170 1 year ago

      @@melaniecasabar2183 Thank you for your help. I was having the same error and I applied your method and it worked.

    • @shototodoroki4719
      @shototodoroki4719 7 months ago

      @@melaniecasabar2183 thank you
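
    Pulling the two fixes in this thread together: newer fingerpose builds expose prediction.score (older ones used prediction.confidence, which is why Math.max returns NaN, indexOf gives -1, and gestures[-1].name throws). A defensive sketch, assuming the variable names used above:
    if (gesture.gestures !== undefined && gesture.gestures.length > 0) {
      // gestures is empty whenever no registered gesture clears the threshold for a frame
      const confidence = gesture.gestures.map((prediction) => prediction.score);
      const maxConfidence = confidence.indexOf(Math.max.apply(null, confidence));
      if (maxConfidence !== -1) {
        setEmoji(gesture.gestures[maxConfidence].name);
      }
    }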

  • @cal1092
    @cal1092 2 years ago +3

    For some reason the .name method doesn't work, any ideas why
    " Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'name') "

    • @cal1092
      @cal1092 2 years ago +1

      Can someone please help out, I don't know why it doesn't work

    • @lindenhamer4766
      @lindenhamer4766 2 years ago +1

      Hey, same problem here! Haven’t found a solution yet :(

    • @pauldoring3003
      @pauldoring3003 2 years ago

      did you find a solution?

    • @cal1092
      @cal1092 2 years ago

      @@pauldoring3003 yeah, can’t remember what I did tho lol

    • @pauldoring3003
      @pauldoring3003 2 years ago

      @@cal1092 Could you maybe look it up for me? :)

  • @cheshire0041
    @cheshire0041 3 years ago +1

    Is it possible to implement this on a mobile phone, using the phone camera to replace the webcam? What changes are needed?

    • @NicholasRenotte
      @NicholasRenotte 3 years ago +1

      Sure can, would need a little rework to pick up the camera. I believe you could take a look at Tensorflow for React Native!

  • @skrame01
    @skrame01 6 months ago

    Can it detect simple waving, like just waving your whole hand back and forth?

  • @lindenhamer4766
    @lindenhamer4766 2 years ago +1

    Hi Nicholas, thank you for the great tutorials so far! I managed to follow along with no errors during the first video and had most of this one working, but I've come across an error that stops it logging the correct gesture to the console (I can still see the detection of the correct pose if I check the console and the array). When I change my hand to the gesture I lose my markers. I have checked my code against your git repo and I haven't found any differences (minus the extra stuff you added). This is the error:
    App.js:78 Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'name')
    at detect (App.js:78:1)
    It's saying the name after setEmoji(gesture.gestures[maxConfidence].name); is undefined. When I check the console (as I mentioned, it does detect) I can see the "name" in the array for the correct gesture.
    The only thing I can think of is that the video was made two years ago and something may have changed?
    Any help would be super appreciated, as I'd love to finish the project.
    Thank you

    • @lindenhamer4766
      @lindenhamer4766 2 years ago

      I should add I'm really new to coding, having only been coding for 10 weeks, so it could be something simple (it most likely is) lol

    • @melaniecasabar2183
      @melaniecasabar2183 2 years ago

      have you figured it out? I'm having the same issue

  • @LightningXBolttt
    @LightningXBolttt 4 years ago +1

    Hey !! Please make a video on detecting custom gestures ? Like using it to detect sign language using the same concepts in the above video ?

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +2

      Yep! Still coming, I'm doing a video on custom gestures this week using fingerpose!

    • @LightningXBolttt
      @LightningXBolttt 4 years ago +1

      @@NicholasRenotte Awesome 😁

    • @LightningXBolttt
      @LightningXBolttt 4 years ago +1

      @@NicholasRenotte Hey .. when will your video come ? Still waiting 😀

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      Heya @@LightningXBolttt, my schedule got knocked around a bit last week! Should be out next week 😁!

    • @LightningXBolttt
      @LightningXBolttt 4 years ago +1

      @@NicholasRenotte just got the notification of the video...Would try it as soon as possible..Thanks alot 😄

  • @MejiMaru
    @MejiMaru 4 years ago

    Yesss... Good Job. Thank You

    • @MejiMaru
      @MejiMaru 4 years ago +1

      No, I’m not talking about just swiping left or right to transition between screens.
      How to do this :D...

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      Thanks so much @Meji Maru! Ooooh, that's a little more hardcore, let me have a think about that!

    • @MejiMaru
      @MejiMaru 4 years ago +1

      @@NicholasRenotte Thank you! It's my favorite YouTube channel. What a treat!

  • @bhavithareddybhavitha1241
    @bhavithareddybhavitha1241 2 years ago +1

    Hello Nick, I followed the whole process that you explained in the video, but I'm getting an error like: TypeError: cannot read properties of undefined (reading 'name') at detect. Can you help me out with this?

  • @jaihindyadav8190
    @jaihindyadav8190 3 years ago

    Hello, I want a solution for hand gestures. For example, if someone joins their hands in prayer, the same thing should be shown on the wall. I also need to know what device will help me get this done.

  • @jerrylee1657
    @jerrylee1657 4 years ago +1

    Thanks Nicholas! It is another wonderful tutorial!
    May I ask a question: when I use the React useState, the canvas's drawing is quite slow and cannot keep up with the hand movement. Do you have any solution for that? I would like to add a simple canvas game and integrate it with the TensorFlow hand detection, thank you!

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      Definitely @Jerry, I worked with some of the other subscribers on performance fixes. Check out the source code, it's now a lot smoother as we stopped the model loading so many times: github.com/nicknochnack/GestureRecognition

    • @jerrylee1657
      @jerrylee1657 4 years ago +1

      @@NicholasRenotte Hi Nicholas, thank you for the information! The React useEffect is quite good for performance fixes, and I changed the code to the following to use requestAnimationFrame; it runs smoothly as well ~
      However, I have a big problem when adding a simple canvas animation (just a circle crossing the canvas): it is very slow and sometimes undefined... Could you help me add code for a circle crossing the canvas to this tutorial? Thank you very much!
      // updated code for performance
      const requestRef = useRef();
      const runHandpose = async () => {
        const net = await handpose.load();
        console.log("Handpose model loaded.");
        async function frameLandmarks() {
          requestRef.current = requestAnimationFrame(frameLandmarks);
          detect(net);
        }
        requestRef.current = requestAnimationFrame(frameLandmarks);
      };
      useEffect(() => {
        runHandpose();
        return () => {
          cancelAnimationFrame(requestRef.current);
        };
      }, []);

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      @@jerrylee1657 definitely, what you'd need to do is define another function that runs as part of the setInterval. Then you can use the canvas to draw and redraw your circle across the page. Check this out: developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Basic_animations (a rough sketch follows this thread).

    • @jerrylee1657
      @jerrylee1657 4 years ago +1

      Nicholas Renotte yes, that's what I need! Thank you for the information!

    • @NicholasRenotte
      @NicholasRenotte 4 years ago

      Anytime @@jerrylee1657 , let me know how you go. Pumped to see your game! Would love to see you share it here once you're done 😁!
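
    A rough sketch of the "draw and redraw" idea from the reply above, assuming the canvasRef used in the video; circleX and the speed are illustrative, and drawing on a second overlay canvas (or after drawHand) avoids wiping the hand landmarks:
    let circleX = 0; // hypothetical animation state

    function drawCircle(ctx, width, height) {
      ctx.clearRect(0, 0, width, height); // wipe the previous frame
      ctx.beginPath();
      ctx.arc(circleX, height / 2, 20, 0, 2 * Math.PI);
      ctx.fillStyle = "tomato";
      ctx.fill();
      circleX = (circleX + 4) % width; // advance a few pixels per frame and wrap around
    }

    // Call it from the same loop that runs detect(net), e.g.:
    // drawCircle(canvasRef.current.getContext("2d"), videoWidth, videoHeight);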

  • @AH-pz5ex
    @AH-pz5ex 3 years ago

    Hi Nick! Great video. Is it possible for a live video of handwriting to show up similar to your emojis, but as letters of the alphabet? Say one were to write the letter A, could the AI pick it up as such?

  • @naratilife
    @naratilife 3 years ago +1

    Hi Nicholas, can we use this Hand pose and Finger pose in Python? If yes, any leads to the resources?

    • @NicholasRenotte
      @NicholasRenotte 3 years ago

      Sure can, check this out: th-cam.com/video/vQZ4IvB07ec/w-d-xo.html the gesture control isn't implemented but can be done manually.

    • @naratilife
      @naratilife 3 years ago

      @@NicholasRenotte Thank You.

  • @atifasadkhan
    @atifasadkhan 4 years ago +1

    Here we goooooooooooooo

  • @dosoncoder
    @dosoncoder 2 years ago

    Can you do a video about 3D hand pose estimation?
    Plssss !!!

  • @bibekmunikar3226
    @bibekmunikar3226 1 year ago

    Hey Nicholas, such a great video. Keep going!!
    I am trying to make an app that simply helps track the hands of people who are drowning in a pool or on any water surface.
    Can you please help me with a starting point and any further areas I need to consider?
    Thanks. I am trying to use REACT.JS.

  • @tingchengwang3575
    @tingchengwang3575 3 years ago +1

    Thank you for sharing

  • @ryuketsueki
    @ryuketsueki 3 years ago

    Awesome video Nicholas. One thing: is there a template of this code in Python?

  • @edwinlau1820
    @edwinlau1820 2 years ago +1

    help~ confidence on line 71 can't be defined, and maxConfidence is always -1

  • @niharzutshi9662
    @niharzutshi9662 2 years ago

    If we want to add more emojis, in which directory are we going to add them?

    • @niharzutshi9662
      @niharzutshi9662 2 years ago

      Could you help me with the logic?
      I tried but it's not working :(

  • @joeckjoeck4321
    @joeckjoeck4321 3 years ago +1

    Is there a way to detect gestures dynamically, so that when I move to the left the app also says left?

    • @NicholasRenotte
      @NicholasRenotte 3 years ago +1

      Heya @Joeck, not with this technique, you could use action detection to do it though!

    • @joeckjoeck4321
      @joeckjoeck4321 3 years ago +1

      @@NicholasRenotte I don't really know how to solve it. One of my ideas yesterday was to realise a tap gesture.
      Because the detection is not 100% reliable (it's only a camera; a stereo camera would be better), I look at the last 5 frames to see whether the index finger ☝️ was recognized there. If the current gesture is then a fist ✊, you have tapped.
      My new idea for detecting the movement direction is to analyse the position of one landmark. I check whether the old position value is larger or smaller. The change in position must be at least 5-10 pixels so that small fluctuations of the hand don't matter. The principle is similar to my tap gesture. Worth a try, right? 😊
      And a big thank you for your videos, they are really good, and this project will also be immortalized in my coursework at university 🙈

    • @NicholasRenotte
      @NicholasRenotte 3 years ago +1

      ​@@joeckjoeck4321 that sounds like an awesome approach. I'm actually working on something right now that might be able to help you. It's in Python but should be able to hit the spot. Basically it's a two step machine learning model. This is how it works:
      Step 1: Webcam captures Poses, Facial Landmarks and Hand Gestures using Media Pipe Holistic, call this Model 1
      Step 2: Images are saved and are manually classified for different body language and facial expressions
      Step 3: A secondary model, let's call this Model 2, is trained to match the landmarks for certain poses, facial landmarks and gestures to certain body language expressions
      Step 4: Bringing it all together, when detecting a new frame the landmarks from Model 1 are passed to Model 2 to detect overall body language (sad, happy, nervous, angry)
      You could take a similar approach, capture the last five (or more) frames and use it to classify a dynamic action. I started smashing out the code this morning and realised I could do so much more with it, as soon as it's done I'll shoot you the code!

    • @joeckjoeck4321
      @joeckjoeck4321 3 years ago +1

      @@NicholasRenotte That sounds good 😃
      At the beginning I tried to use MediaPipe, but I had problems with the installation, so I looked for other ways.
      My project is due to be handed in soon, so I can no longer introduce any major changes. For the future, however, I will definitely continue to pursue your implementation privately.
      I also think it's great that you answer, and rather fast too - a really nice service 😂🙈

    • @NicholasRenotte
      @NicholasRenotte 3 years ago +1

      @@joeckjoeck4321 oh, shucks. Got some stuff coming on it soon so hopefully it helps! Anytime, if you need a hand just hmu!

  • @sparky7043
    @sparky7043 1 year ago +1

    Hey man, great work, but I need some help from you.
    Code: setEmoji(gesture.gestures[maxConfidence].name);
    I am getting an error from this line of code, in the .name part. Please help me with that.
    Uncaught runtime errors:
    ERROR
    Cannot read properties of undefined (reading 'name')
    TypeError: Cannot read properties of undefined (reading 'name')
    at detect (localhost:3000/static/js/bundle.js:108:52)
    This error shows when I start the app.

  • @himanisharma271
    @himanisharma271 3 years ago +1

    Which algorithm did you use here?

    • @NicholasRenotte
      @NicholasRenotte 3 years ago

      This is done using the HandPose model from Tensorflow.Js!
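
    For reference, the pipeline described in the reply chains two models; a minimal sketch, assuming `video` is an HTMLVideoElement that is already streaming the webcam (the threshold value is illustrative):
    import "@tensorflow/tfjs"; // registers the TF.js backend
    import * as handpose from "@tensorflow-models/handpose";
    import * as fp from "fingerpose";

    async function detectOnce(video) {
      const net = await handpose.load();            // HandPose: 21 landmarks per detected hand
      const hand = await net.estimateHands(video);
      if (hand.length > 0) {
        const GE = new fp.GestureEstimator([
          fp.Gestures.VictoryGesture,
          fp.Gestures.ThumbsUpGesture,
        ]);
        const gesture = await GE.estimate(hand[0].landmarks, 8); // 8 = minimum match score
        console.log(gesture.gestures); // e.g. [{ name: "victory", score: 9.7 }] (older builds: confidence)
      }
    }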

  • @manjunathshenoy3774
    @manjunathshenoy3774 1 year ago

    Is it possible to run this on a stored video instead of the webcam? Can anyone please guide me?

  • @raneggg
    @raneggg 3 years ago +1

    Can this be done on datasets using video frames?

    • @NicholasRenotte
      @NicholasRenotte 3 years ago

      Probably easier to do it with Python but it should work! You would need to have the vid uploaded and process the detections frame by frame (see the sketch after this thread)!

    • @raneggg
      @raneggg 3 years ago

      @@NicholasRenotte we were planning to do it on video datasets for dynamic gesture recognition.

    • @NicholasRenotte
      @NicholasRenotte 3 years ago +1

      @@raneggg got it, would definitely check out doing it in Python using Media Pipe!
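
    A browser-side sketch of the "frame by frame" idea above: point the same handpose model at a <video> element whose src is the stored clip and step through it manually. The element id, step size, and logging are illustrative, and the video's metadata must be loaded so its duration is known:
    async function processStoredVideo(net) {
      const video = document.getElementById("stored-video"); // e.g. <video src="clip.mp4">
      for (let t = 0; t < video.duration; t += 1 / 30) {      // sample ~30 frames per second
        video.currentTime = t;
        await new Promise((resolve) => video.addEventListener("seeked", resolve, { once: true }));
        const hands = await net.estimateHands(video);         // same call as the webcam path
        console.log(t.toFixed(2), hands);
      }
    }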

  • @aadityaknv3990
    @aadityaknv3990 4 years ago

    Hey Nick.
    How do we stop the handpose model from loading again and again? This is leading to the browser taking up way too much memory and crashing.

    • @aadityaknv3990
      @aadityaknv3990 4 years ago

      If you're using Chrome, you can see the rise in memory taken up by the browser in the Chrome task manager.

    • @aadityaknv3990
      @aadityaknv3990 4 years ago +1

      It's happening only when we are displaying the gesture emoji. Is it because of useState?

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      Possibly @Aaditya Knv, I think it might be refreshing the whole application each time which is causing it to remount the model. I'm working on improving performance for the custom gesture video. Will shoot through an update as soon as it's there.

    • @ChromeCover91
      @ChromeCover91 4 years ago +1

      If I'm not mistaken, the runHandpose function is running every time the component re-renders when state changes. This is causing the setInterval to compound... If you put the runHandpose function inside a useEffect hook with no dependencies, it should only run once, when the component initially mounts.

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +4

      @@ChromeCover91 YOU ARE A GOD AMONGST MEN! That was it, I think it was not only the setInterval compounding but also the fact that the handpose model was continuously reloading. I owe you a beer or coffee!
      @Aditya Knv, here's what you need to change (I've also updated the Github repository):
      1. Import useEffect
      // OLD CODE
      import React, { useRef, useState } from "react";
      // NEW CODE
      import React, { useRef, useState, useEffect } from "react";
      2. Wrap runHandpose in a useEffect hook with no dependencies
      // OLD CODE
      runHandpose();
      // NEW CODE
      useEffect(() => { runHandpose(); }, []);
      3. Speed up the detection rate
      // OLD CODE - this step is optional but makes the detections more seamless
      const runHandpose = async () => {
        const net = await handpose.load();
        console.log("Handpose model loaded.");
        // Loop and detect hands
        setInterval(() => {
          detect(net);
        }, 100);
      };
      // NEW CODE
      const runHandpose = async () => {
        const net = await handpose.load();
        console.log("Handpose model loaded.");
        // Loop and detect hands
        setInterval(() => {
          detect(net);
        }, 10); // MAKE DETECTIONS FASTER
      };

  • @petermakaumutuku9150
    @petermakaumutuku9150 4 years ago +1

    How do I add five finger wave pose?

    • @NicholasRenotte
      @NicholasRenotte 4 years ago

      Heya @Peter, you could do it with custom gestures! Check this out th-cam.com/video/WajtPtLAg-o/w-d-xo.html

  • @shadim.tanani8368
    @shadim.tanani8368 4 years ago +1

    Thank you very much for your response
    I want your assistance (I am doing a graduate project about an arm robot; I bought a Kinect One and adapter, but I can't write the code needed for the Kinect to read hand signals only). Please help me.

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      Heya @Shady, you just need to make sure the Kinect is available as a webcam, I don't have one so I'm going based on feel here but check this out: social.msdn.microsoft.com/Forums/en-US/af2ca1de-3de2-40dd-8694-4daadccbc487/pc-not-seeing-xbox-one-kinect-as-webcam?forum=kinectv2sdk

    • @shadim.tanani8368
      @shadim.tanani8368 4 years ago +1

      @@NicholasRenotte Thank you very much for your response. I have a Kinect One for Xbox; running it on my laptop works normally, but it gives me a complete picture.

    • @NicholasRenotte
      @NicholasRenotte 4 years ago

      @@shadim.tanani8368 complete or incomplete picture?

  • @The_Vanillax
    @The_Vanillax 3 years ago +1

    is this usable in vr games?

    • @NicholasRenotte
      @NicholasRenotte 3 years ago

      Possibly, I'm not super up to speed with VR development but I'm going to be giving integration into Blender and Unity a go soonish!

  • @jeffdjkenya
    @jeffdjkenya 3 years ago

    Great job.. kindly please help me with the code.

  • @herdinhardianto6636
    @herdinhardianto6636 3 years ago +2

    Hi Sir, I'm your subscriber from Indonesia. Your content is very useful for my studies, but now I'm in my last semester in college and have to make a final product to finish my studies. I'm trying to build a sign language detector with fingerpose; the sign language I use is Bisindo, based on Indonesian sign language. When I tried to follow the steps from your video for my product I ran into some problems and I'm confused about how to solve them. Sorry to ask, sir, but would you help me by making another video on how to build a sign language detector with fingerpose, maybe for 3 or 5 letters of the Bisindo alphabet? Then I can learn how to make it from your video. Thanks, sir.
    I'm sorry if my English is bad. Thank you, sir. God bless you.

    • @NicholasRenotte
      @NicholasRenotte 3 years ago

      Heya @Herdin, got a new vid on sign language coming soon! Stay tuned.

    • @herdinhardianto6636
      @herdinhardianto6636 3 years ago

      @@NicholasRenotte Ok Sir, thanks you so much. I'll be waiting. God Bless U Sir 👏

  • @atifasadkhan
    @atifasadkhan 4 years ago +1

    Nick, can you please help me build an app where gestures are used as input to a quiz app? I don't know how to start. I want to assign the peace gesture to option 1 and the victory gesture to option 2, and based on this build an app that takes a quiz with gesture inputs. Please help, Nick, please!!!!!!!!!!!

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +2

      Heyaa @Atif khan, once you've got the quiz app set up, instead of using setEmoji like this:
      setEmoji(gesture.gestures[maxConfidence].name);
      You could make some of the changes below. Change:
      const [emoji, setEmoji] = useState(null)
      to:
      const [quizResult, setQuizResult] = useState(null)
      and replace:
      setEmoji(gesture.gestures[maxConfidence].name);
      with:
      if (gesture.gestures[maxConfidence].name == 'victory') {
        setQuizResult(1)
      } else if (gesture.gestures[maxConfidence].name == 'thumbs_up') {
        setQuizResult(2)
      }
      // then you can pass the quizResult variable to your quiz app!

    • @atifasadkhan
      @atifasadkhan 4 years ago

      Oh thanks Nicholas I'll try today

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      Hmmmm, given your use case, I'm wondering if real time gesture recognition is the right tool @Atif khan. You might be better off using an image classifier and taking a photo of the gesture the user wants to pass through. That way you can associate a single frame with a single answer.

    • @atifasadkhan
      @atifasadkhan 4 years ago +1

      @@NicholasRenotte thanks Nicholas will try

    • @atifasadkhan
      @atifasadkhan 4 years ago +1

      @@NicholasRenotte Can you make a tutorial on this ?

  • @igor_cojocaru
    @igor_cojocaru 4 years ago +1

    Yeah man!

    • @NicholasRenotte
      @NicholasRenotte 4 years ago

      Yeahyaa @Igor Cojocaru, anything else you'd like to see?!

    • @igor_cojocaru
      @igor_cojocaru 4 years ago +1

      @@NicholasRenotte I was trying to make some custom gestures, but I'm facing problems. I can't understand how to add a new gesture description. It would be cool if you could share how you do that. Thanks

    • @NicholasRenotte
      @NicholasRenotte 4 years ago

      Awesome, I've got one coming soon!

  • @spqri3
    @spqri3 3 years ago +1

    You rule.

  • @sumitkushwaha1804
    @sumitkushwaha1804 3 years ago +1

    Sir, tell me how you made yourself

    • @NicholasRenotte
      @NicholasRenotte 3 years ago

      Like this th-cam.com/video/xSElsMUqFqI/w-d-xo.html :)

  • @diegocaumont5677
    @diegocaumont5677 4 years ago +1

    Thx!

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      DIEGOOOO! Thanks so much as always @Diego Caumont 🙏!

  • @shadim.tanani8368
    @shadim.tanani8368 4 years ago +1

    Can you please send the source code, please?

    • @NicholasRenotte
      @NicholasRenotte 4 years ago +1

      Heya @Shady, here it is! github.com/nicknochnack/CustomGestureRecognition

  • @isovertime
    @isovertime 4 years ago +1

    cool

  • @Nostalgia-futuro
    @Nostalgia-futuro 2 years ago

    The main problem with your videos is that they teach how to install libraries (e.g. handpose, human pose). What if I want to create animal pose estimation based on my own poses? Can you create a video on how to create your own datasets and use them with a pose estimation model?

  • @MrAnnl25
    @MrAnnl25 4 years ago

    sexy