How to train a new model for gesture recognition - ML on Android with MediaPipe

  • Published Sep 7, 2024
  • Learn how to train a new model for gesture recognition based on new data using MediaPipe Model Maker. This tutorial covers a Python sample to train a model for the game Rock, Paper, Scissors in Google's Colab tool, which allows you to build Python and MediaPipe programs directly from your browser.
    Resources:
    Colab → goo.gle/3mRh8wo
    Intro to Machine Learning (ML Zero to Hero - Part 1) → goo.gle/3KvXzRZ
    Check out TensorFlow on TH-cam → goo.gle/40qfRKn
    Watch more episodes on ML on Android with MediaPipe → goo.gle/MLAMP
    Subscribe to Google Developers → goo.gle/develo...
    #Android #ML
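
For reference, a minimal training sketch along the lines of the linked Colab, using the MediaPipe Model Maker gesture recognizer API. The dataset path and folder layout (one sub-folder per label, including a "none" class) are assumptions, not the exact sample code.

```python
# Minimal sketch: train a Rock, Paper, Scissors gesture recognizer with
# MediaPipe Model Maker (pip install mediapipe-model-maker).
# Assumed dataset layout: rps_data/ with one sub-folder per label,
# e.g. rock/, paper/, scissors/ and none/, each holding hand photos.
from mediapipe_model_maker import gesture_recognizer

data = gesture_recognizer.Dataset.from_folder(
    dirname="rps_data",  # assumed path to the image folders
    hparams=gesture_recognizer.HandDataPreprocessingParams(),
)
train_data, rest = data.split(0.8)
validation_data, test_data = rest.split(0.5)

options = gesture_recognizer.GestureRecognizerOptions(
    hparams=gesture_recognizer.HParams(export_dir="exported_model")
)
model = gesture_recognizer.GestureRecognizer.create(
    train_data=train_data,
    validation_data=validation_data,
    options=options,
)

loss, accuracy = model.evaluate(test_data, batch_size=1)
print(f"test loss {loss:.4f}, test accuracy {accuracy:.4f}")

# Writes exported_model/gesture_recognizer.task for use in the demo apps.
model.export_model()
```

The call to export_model() writes a gesture_recognizer.task bundle, which is the file the Android sample and the web demo expect, so retraining on new labels and re-exporting is the whole workflow.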

Comments • 23

  • @GoogleDevelopers
    @GoogleDevelopers  1 year ago +2

    Colab → goo.gle/3mRh8wo
    Intro to Machine Learning (ML Zero to Hero - Part 1) → goo.gle/3KvXzRZ

  • @ahsanweb
    @ahsanweb 2 months ago

    To create the training data, do I have to take photos of people with different hand sizes and colors while they are making the gestures? Is there a way to generate the hand gesture images or use stick-figure drawings instead?

  • @mahimanzum
    @mahimanzum 1 year ago +6

    Are there any resources for actually implementing the Android code to deploy this model in an app?

    • @muhaphavoc9015
      @muhaphavoc9015 4 months ago

      Did you find anything helpful, please?

  • @smyThegmc
    @smyThegmc 8 months ago +3

    Thanks for the tutorial. I trained a new model to recognize the ASL alphabet with one hand, and the results are really good when I try it by uploading it to the web demo you provided. But I couldn't swap it in for the model inside the Android Studio project. I think it's because that model has 8 classes and mine has 25, but I couldn't figure out how to customize the Android Studio project so it uses my own model. Can you help me?

    • @smyThegmc
      @smyThegmc 6 months ago

      I replaced it in the folder and it worked. I don't know why it didn't before.

    • @mohammad_hajeer
      @mohammad_hajeer 6 months ago +1

      @@smyThegmc
      Hello sir, can you please give me a simple explanation of how to train the model to recognize ASL? I have a senior project at my college. Thanks, and sorry for taking your time.

    • @smyThegmc
      @smyThegmc 6 months ago

      @@mohammad_hajeer I will send you the website with the tutorials.

    • @alanfrancoromeroleanos8077
      @alanfrancoromeroleanos8077 5 months ago

      @smygmc6936 Could you please send them to me too? mediapipe_model_maker currently has compatibility problems in Google Colab and I can't run the example from the official documentation.

    • @catlord777x3
      @catlord777x3 2 months ago

      @@smyThegmc Hey, could you please share your GitHub repo for this? I would be very grateful.

  • @73gCOVERChannel
    @73gCOVERChannel 1 year ago +2

    Can the MediaPipe model be trained on videos in order to recognize dynamic gestures instead of static ones?

    • @paultr88
      @paultr88 11 months ago

      Unfortunately not right now, but the way you'd do this is to get a TFLite model (there was a Kaggle competition a few months ago with sign language models) and then pass in data from the hand landmarker task, recorded over a series of frames, to get back classifications (a rough sketch of this follows at the end of this thread).

    • @a_09_shreyabhalgat25
      @a_09_shreyabhalgat25 11 months ago

      Hey @@paultr88, can you guide me more on this? I am working on the same thing.
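
A rough sketch of the approach described above, not an official sample: run the MediaPipe Hand Landmarker from the Python Tasks API on each frame, stack the landmarks over a window of frames, and feed the sequence to a separately trained TFLite classifier. The classifier file name, the 30-frame-style window, and its input shape are placeholders.

```python
import numpy as np
import mediapipe as mp
import tensorflow as tf
from mediapipe.tasks import python as mp_tasks
from mediapipe.tasks.python import vision

# Hand Landmarker from the MediaPipe Tasks Python API.
landmarker = vision.HandLandmarker.create_from_options(
    vision.HandLandmarkerOptions(
        base_options=mp_tasks.BaseOptions(model_asset_path="hand_landmarker.task"),
        num_hands=1,
    )
)

def frame_landmarks(frame_rgb: np.ndarray) -> np.ndarray:
    """Return the 21 hand landmarks of one RGB frame as a flat (63,) array."""
    result = landmarker.detect(
        mp.Image(image_format=mp.ImageFormat.SRGB, data=frame_rgb)
    )
    if not result.hand_landmarks:
        return np.zeros(63, dtype=np.float32)  # no hand in this frame
    return np.array(
        [[lm.x, lm.y, lm.z] for lm in result.hand_landmarks[0]], dtype=np.float32
    ).reshape(-1)

def classify_clip(frames: list) -> int:
    """Classify a short clip (a list of RGB frames) with a hypothetical TFLite
    sequence classifier; its input shape (1, num_frames, 63) and the file name
    sign_classifier.tflite are assumptions, not an official model."""
    sequence = np.stack([frame_landmarks(f) for f in frames])[np.newaxis]
    interpreter = tf.lite.Interpreter(model_path="sign_classifier.tflite")
    interpreter.allocate_tensors()
    interpreter.set_tensor(
        interpreter.get_input_details()[0]["index"], sequence.astype(np.float32)
    )
    interpreter.invoke()
    scores = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
    return int(np.argmax(scores))
```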

  • @zamanbhatti5145
    @zamanbhatti5145 10 months ago +1

    Cool, now we can get rid of the object detection model for sign language detection and its errors.

  • @zaidahmed4069
    @zaidahmed4069 9 months ago

    Hello!
    I'm looking to fine-tune one of the MediaPipe solutions on my custom dataset. My end goal is to identify the landmarks of cows and extract the x, y, z coordinates of their keypoints. Please guide me on how to make a custom dataset for this: which tool should I use, and which dataset format does MediaPipe support, especially for keypoint extraction?
    Thank you so much for your time and consideration.

    • @ptruiz_google
      @ptruiz_google 9 months ago

      Hey. Unfortunately, landmarks aren't set up for non-human or custom datasets right now with MediaPipe. You'd likely have better luck with regular TensorFlow Lite, but that isn't something I've dug into enough to be able to explain.

    • @ramanandr7562
      @ramanandr7562 6 months ago

      @@ptruiz_google Can we train using video sequences converted into numpy arrays?

  • @ramanandr7562
    @ramanandr7562 6 months ago

    Instead of training with an image dataset, can we do it using a video dataset?

    • @ramanandr7562
      @ramanandr7562 5 months ago

      @@sandeepatn3718 I collected videos directly from our system's webcam and preprocessed them into numpy arrays corresponding to each frame of the video.
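
Roughly what that preprocessing can look like, as a minimal OpenCV sketch (not the commenter's actual code): grab frames from the default webcam and stack them into one numpy array.

```python
import cv2
import numpy as np

# Capture a short clip from the default webcam and keep each frame as an
# RGB numpy array; the 30-frame clip length is an arbitrary choice.
cap = cv2.VideoCapture(0)
frames = []
while len(frames) < 30:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
cap.release()

clip = np.stack(frames)  # shape: (num_frames, height, width, 3)
print(clip.shape)
```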

  • @yeungsophia9532
    @yeungsophia9532 1 year ago +1

    Dependency errors during installation... a numpy error in Colab, and a MediaPipe version error on a Mac M1. The latest MediaPipe 0.10.0 is installed, but there is still a dependency on the older 0.9 version.

  • @DSET_ChiragNRaj
    @DSET_ChiragNRaj 1 year ago +1

    Can I deploy this onto a Raspberry Pi?

    • @ptruiz_google
      @ptruiz_google 1 year ago +1

      Yes! Last week we posted a blog entry about this on the official Google Developers Blog. The training is still best done in Colab, but running the model for gesture recognition works on the Raspberry Pi (I just tested it yesterday, actually). You can find the example in our sample repo.
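
For the Raspberry Pi (or any desktop Python environment), a minimal sketch of running an exported model with the MediaPipe Tasks Python API; the model path and test image are placeholders, and the sample in the repo builds a live camera loop around the same API.

```python
import mediapipe as mp
from mediapipe.tasks import python as mp_tasks
from mediapipe.tasks.python import vision

# Load the .task bundle exported by Model Maker (placeholder path).
recognizer = vision.GestureRecognizer.create_from_options(
    vision.GestureRecognizerOptions(
        base_options=mp_tasks.BaseOptions(model_asset_path="gesture_recognizer.task")
    )
)

image = mp.Image.create_from_file("hand.jpg")  # placeholder test image
result = recognizer.recognize(image)

if result.gestures:
    top = result.gestures[0][0]  # best gesture for the first detected hand
    print(f"{top.category_name}: {top.score:.2f}")
else:
    print("no hand detected")
```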

  • @MarineEx
    @MarineEx 1 year ago

    Perfect