One of the most professionally made video tutorials on the whole of YouTube, keep making brilliant videos like these
I find this to be the best video in the MediaPipe series. Congratulations on the gift you have, and thanks for sharing it with us
Thanks sooo much @Hargobind, so glad you enjoyed it!!
I just discovered your channel and I‘m obsessed. Thank you so much for doing such great content🙏🏻🙏🏻🙏🏻
Thank you so much @Frieda, so stoked you're getting value from it!
Me tooo !!!!!
One of the best programming guides out there! I love how you explain in detail what you did and why you did it; it helps someone like me who is learning just from YouTube! I'm gonna try to create my own project based on this program for boxing, which can detect what type of move a person is doing and give pointers if that person is not using the proper technique. Wish me luck!
🙌🙌🙌 sending you all the luck, you'll smash it @Francis!
Has anyone actually implemented this code? Please let me know, very urgent!!
I wanted to detect my hand pose as an object. So I went to TensorFlow - 2 days of configuring. Made autolabelling with GroundingDINO - 3 days of configuring. And now I found this. 1 hour and the model's ready to detect in real time... DAMN, what was I doing those 5 days?!
And thanks A LOT for your effort, also for that particular bit where you showed how to add a bunch of new stuff to the existing model
Hi, could you send the contents of requirements.txt, please?
The most professional, easy-to-understand-and-implement tutorials on YouTube. You really are the best.
The 210th comment is dedicated to the best content on YouTube. Period! Though late, I am lucky to have discovered your channel. Subscribed and notifications turned on!
Anytime my guy, better late than never, WELCOME TO THE FAM!!!
Hey, I'm working on body and face emotion detection for my paper and I couldn't find sources that could help me, but then I saw your video, and the way you explain every bit of code is really appreciated. This actually sparked that interest in coding for me again. Thank you for being very caring, you're amazing
Hi, could you send the contents of requirements.txt, please?
Your tutorials are the best I have seen in my life, congratulations!
Thanks a tonnn @Eduardo!!
Man! Your tutorials are really really cool!! And DAMN RIGHT we want a bigger data science series!!!!!
Awesome work man!!! Cheers!!!
YESSS! So glad you enjoyed it!
@@NicholasRenotte Man! I thought you wouldn't respond!
I'm in my junior year of college and am currently doing a machine learning project. I have a few doubts and I believe you could help me with them. It would be great if you could tell me how to reach out to you so I can get them clarified!
@@OfficialBalThackeray nah man! I try to get to all comments. Hit me up on Discord, I check every night and, if things are crazy, every second night. bit.ly/3dQiZsV
Looking forward to watching this later!! Thank you for all the quality videos. Have a great one!
Anytime @Isaac! Thanks a bunch 🙏
Hi Nicholas, great stuff. I thoroughly enjoy your tutorials as they are mind-blowing. I'd love to have a tutorial on pose deviation, comparing the poses of two people. Awesome work man!!!
Awesome use case, I'll add it to the list. Never even thought about comparing poses!
You have the best Python videos on YouTube. Greetings from Russia)
Thanks so much @ONE! What's happening from Sydney!?
Your videos are very descriptive and useful. Your content is of high quality. Thank you.
Awesome content!! I know I mentioned this on a previous video, but I would still love to see a "virtual coach" type implementation. Something that goes beyond just static poses and actually tracks a movement's key points over time, and could detect form quality by comparing them to a "good form" and "bad form" example.
Exactly.
Yes!!
Yup! Got it planned but it'll be a longer tutorial, just taking a little recovery break from long vids this week. Should do it in the next couple of weeks though @Caleb!
No hurry, you are already putting out content at a crazy pace! Take that break!
@@CalebSchantzChristFollower oh thanks man, I honestly needed to hear that it's okay! Been feeling a little bad I haven't hit my two a week since I released the big TFOD tutorial.
Your channel has helped me so much when working on my dissertation. Thank you 🙏
YESSSS, go getem Marc! Hope you smash it out of the park!
Thanks buddy for the really helpful videos. Keep going, Wakanda Forever!
Good job brother. I will always appreciate the work that you do.
Thanks so much @Priyam!
I'd love to have a tutorial where you use the face geometry module from mediapipe. And maybe add some 3D models tracked to the face, like glasses! Great video as always :)
Yah, agreed! Working on a bunch of stuff in that space rn @Victor!
@@NicholasRenotte Amazing ! I’ll be there to see it as soon as it comes out !
@@victormustin2547 yesss! Thanks a bunch for checking out the vids so far as well!
Great tutorial, I wish your channel had more visibility
This is really awesome, loved the way you explained everything, great job. Really thankful for this. 💯
THANK YOU SO MUCH BRO ! KEEP HUSTLING ❤️
Hi, thanks for your video lesson. I have an issue: when I load the model and run it, I get a warning: /Users/mac/miniforge-pypy3/envs/mp/lib/python3.7/site-packages/sklearn/base.py:446: UserWarning: X does not have valid feature names, but StandardScaler was fitted with feature names
What mistake did I make? Thanks.
Did you find a fix for this?
Really amazing content. I'm currently working on my final CS project and this video (and some others) was a tremendous help for me.
amazing job man!
Has anyone actually implemented this code? Please let me know, very urgent!!
@@harshdasila6680 I did. What do you need?
@@shai8559 I am getting an error
@@harshdasila6680 You can type out the error and I'll see if I can help
Hey man...! You are really cool!!!! I love this project, I'm your new subscriber!!!!
YESSS! Welcome to the fam!
At [1:03:13] you said that we could use this in different use cases, but you also said action detection would be different. Could you please explain what the differences are and what we would have to do, with some 'follow this way' advice if you could? Thank you so much..
Check this out: th-cam.com/video/doDUihpj6ro/w-d-xo.html
@@NicholasRenotte First, thanks for all your attention. I already checked that out, but when I implement and test all the processes in that video I cannot obtain efficient and accurate results. Do you think I could implement and train for the push-up action using LSTM units? Cuz I couldn't get good results even on basic arm and hand actions. Whatcha think about it?
Hey, can we use this to do real-time sign language? If we can, please do a video on it. Thank you
Bruce! Heya, yup, got something in mind!
First, I want to thank you for this big effort.
Second, can you make a video about fall detection using MediaPipe, please?
That video would help me a lot
Hi, could you send the contents of requirements.txt, please?
So can I actually use this to detect more sign languages using those joints...... man, you actually helped a lot in giving people ideas and inspiration. Will always support your YouTube channel!
Thank you so much @Nurul! You could, particularly with the hand models! Could even pass them to an RNN for action detection!
@@NicholasRenotte I tried to look online for documentation on how you would pass to an RNN for action detection, but didn't find anything significant. Do you have documentation or a video you could share with us? :D
@@yuriemond7340 definitely, take a look at some of these action models: tfhub.dev/s?q=action
So awesome, is there any way to control a 3D model in Blender using this body estimation technique? Like motion capture?
Haven't seen it in Blender but I've seen it in Unity using the Barracuda framework!
@@NicholasRenotte may you pls tag me a tutorial?
@@KriGeta don't have anything yet but will shoot it through once it's up!
@@NicholasRenotte that's great 😍
Thanks for the video
I have a question:
I was trying to export the coords but it didn't export to Excel
What changes would be needed if I wanted to make a .h5 model?
Please reply
Thank you so much!!!!!
Please, can I do the same steps for sign language recognition instead of using MediaPipe + LSTM???
Is there a way to know which joint x1, y1 or x2, y2 are associated with? Am I mistaken in saying that x1 would be associated with the first value of the Pose Landmark Model, which is the nose?
Check this out @Yuri, it shows the index mapping: google.github.io/mediapipe/images/mobile/pose_tracking_full_body_landmarks.png
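You can also check it straight from the library — the columns are written out in landmark order, so x1/y1 belong to index 0. A quick check:

import mediapipe as mp

print(mp.solutions.holistic.PoseLandmark(0).name)   # NOSE -- so yes, x1/y1 are the nose
print(mp.solutions.holistic.PoseLandmark(11).name)  # LEFT_SHOULDER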
55:47 that mic glitch made me spill coffee on myself
Oh shit 🤣, didn't even know that was in there.
Excellent Presentation. Just loved it.
I need help with training the custom model. I get "no capture output live stream" when I try to read in the CSV. I am not using Jupyter Notebook, I am using Visual Studio Code for Python.
Just what I needed. Thank you
YESS! So glad to hear Daniel!
Amazing stuff! Getting to learn a lot:)
Thanks so much @Vishal 🙏!
This video helped me create my final year project, thanks a lot ❤
Let's try something in 3D. That would look great. The AI gym trainer in 3D would be great!!
Agreed, on the R&D list as we speak!
@Nicholas Renotte, please reply: instead of classifying body language, we could detect spoken words from lip movement, right?? Using the lip coordinates
Hey Nicholas, I was trying to get this to work but I kept getting an error that it could not convert either "Happy" or "Sad" from string to float, and I can't get around it. Do you know why?
I was wondering if it would be possible to install the program/code onto a camera or Raspberry Pi and have a real-time overlay. An idea for an airsoft/paintball headset/molecule.
Woah, that would be sick, you probably could. Haven't tested it yet though. The doco mentions running it on a Raspberry Pi and a Jetson about halfway down: google.github.io/mediapipe/getting_started/install.html
While rendering in the last part, i.e. the detection, I am able to detect but I also get a warning. The warning is "X does not have valid feature names, but StandardScaler was fitted with feature names".
Please can someone help me out with this...
That's so cool! I have a question: how can we fix the flickering when it's running?
It would need a more powerful GPU or machine. The lag is due to the FPS drag from the ML model.
@@NicholasRenotte I have fixed it!! Now running at 10 to 12 FPS, that's still good. Thank you so muchhhh
The angles are not showing, Nicholas. I tried with your code but it's not showing in the webcam. Please help me.
Hi Nick, loved this video! I was wondering if you could go more into how to do hyperparameter tuning and building more advanced pipelines? Do you have other videos going more in detail on those? Thanks!
Nothing around it yet but will probably do something soon @Jonathon!
I love your videos and your work
Thanks a bunch @John!
Hello, thank you for this tutorial. I was wondering if you can use a group of still images as a dataset instead of manually recording poses?
Excellent vid man, thanks
Hey Nicholas, thank you so much. Very useful, highly appreciated. But can you help me make the decoder in the video work with hands too? Specifically, how do I extract the hand landmarks? Thank you
Thank you so much. Great tutorial. There is only one problem: the hand landmarks don't record coordinates if I use one of my hands to turn off the camera; basically, both hands have to be in view of the camera from start to end to record coordinates. I can't figure out how to stop recording automatically after a certain time. Could you please help with this?
Try using a loop rather than a key press, could use something like for x in range(3000) to record 3000 frames!
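Roughly like this (an untested sketch, assuming the mp_holistic setup from the earlier cells of the video):

import cv2

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    for x in range(3000):        # ~100 seconds at 30 FPS, then stops on its own
        ret, frame = cap.read()
        if not ret:
            break
        # ... detections + CSV export exactly as in the tutorial ...
cap.release()
cv2.destroyAllWindows()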
@@NicholasRenotte Thank you!
Such cool content!
Thanks for sharing this
Hey. I'm preparing a project the same as this one, but when I add a timer to it, the OpenCV imshow freezes. Would you mind helping me?
Haven't really played with adding a timer unfortunately @Naziha, it might be slowing down the frame rate.
Great tutorial....Can you do a tutorial where you save body and facial motion to BVH or FBX files to use with a 3D character?
Great tutorial, very professionally and amazingly done. Thanks a lot!!! Just one issue: loading the model again after restarting the kernel. I have to train the model every time I open my Jupyter notebook (because when loading the model in that situation it doesn't make predictions, even after saving with pickle). Is there any way I can keep the model intact after I close the notebook, or do I have to train it every time I open the notebook? Please guide. Is there any way I can save it not using pickle but using the h5 format? Please guide!!
Pickle should work fine, are you sure the model is being saved? h5 is normally reserved for keras/tf models.
@@NicholasRenotte Firstly, a big thanks for the reply!!!! You are AWESOME❤️❤️. I am saving the model with pickle and it works fine when I make detections the first time (just after training), but as soon as I close the notebook, open it the next time, and load the same model, it doesn't make detections, and I have to train it every time. Hence I am unable to save/load it (i.e. pickle might not be working, or maybe I am doing something wrong in the last part). Can I do it using joblib as well? Please guide. Thank you.
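For reference, a minimal save/load sketch that should survive a kernel restart (the 'body_language.pkl' filename is assumed here, and joblib is shown as the alternative asked about):

import pickle

# right after training:
with open('body_language.pkl', 'wb') as f:
    pickle.dump(model, f)

# in a fresh session, after re-running the imports:
with open('body_language.pkl', 'rb') as f:
    model = pickle.load(f)

# joblib works the same way and is the usual choice for sklearn models:
import joblib
joblib.dump(model, 'body_language.pkl')
model = joblib.load('body_language.pkl')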
Hey Nick, need some help.
Can you please create a tutorial where I pass images to MediaPipe Holistic and it creates coords.csv on the basis of those images?
Hi Nicholas! Depending on the position of the wrist, can you tell me the coordinates of the joint at that time??
Hey, is there any way to train the model with images, without doing it ourselves?
Hi bro, are you still doing it? If you have done it, can you help me with it, as I am stuck training a CNN model on the MPII dataset
I trained my model and it's working fine, but I have one issue. My application has to do the estimation on a whole classroom. How can we apply this to a group of people? It just detects one face per pic. Would really love some suggestions.
There's a new pose estimation model available that supports multi person! Check this out tfhub.dev/google/movenet/multipose/lightning/1
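If you want to give it a spin, a rough loading sketch (untested; the input sizing and output layout are taken from the model page linked above, so treat them as assumptions):

import tensorflow as tf
import tensorflow_hub as hub

# load MoveNet MultiPose from TF Hub (handle from the link above)
model = hub.load("https://tfhub.dev/google/movenet/multipose/lightning/1")
movenet = model.signatures['serving_default']

# MoveNet expects an int32 batch with sides that are multiples of 32
frame = tf.zeros((1, 256, 256, 3), dtype=tf.int32)  # stand-in for a resized webcam frame
outputs = movenet(frame)
people = outputs['output_0']  # (1, 6, 56): up to 6 people, 17 keypoints x (y, x, score) + box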
underrated channel
This video is awesome. I want to ask because my code isn't working: I get an error where the data row doesn't go into the coords.csv file, and I think it's in the try: block; it can't be read but gets skipped, because when I try to print(row) or whatever, it just doesn't work.
Can you uncomment the try/except block @Zoro and let me know what error you get? We can debug further from there!
@@NicholasRenotte Hi Nick awesome video! I have the same issue. Here's the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[12], line 62
60 # Export to CSV
61 with open('coords.csv', mode = 'a', newline='') as f:
---> 62 csv_writer = csv.writer(f, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
63 csv_writer.writerow(row)
65 cv2.imshow('Raw Webcam Feed', image)
TypeError: '_csv.writer' object is not callable
FIXED! This works for me
I changed the Export CSV part to this:

file_name = 'coords.csv'
with open(file_name, 'a', newline='', encoding='UTF8') as my_file:
    my_file.write(str(row) + '\n')

FULL Cell below (imports added so the cell stands alone; class_name is the label set in an earlier cell of the tutorial):

import cv2
import numpy as np
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_holistic = mp.solutions.holistic
class_name = 'Happy'  # whatever class you are currently recording

cap = cv2.VideoCapture(0)

# Initiate holistic model
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ret, frame = cap.read()

        # Recolor Feed
        image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        image.flags.writeable = False

        # Make Detections
        results = holistic.process(image)
        # print(results.face_landmarks)
        # face_landmarks, pose_landmarks, left_hand_landmarks, right_hand_landmarks

        # Recolor image back to BGR for rendering
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

        # 1. Draw face landmarks
        mp_drawing.draw_landmarks(image, results.face_landmarks, mp_holistic.FACEMESH_CONTOURS,
                                  mp_drawing.DrawingSpec(color=(80,110,10), thickness=1, circle_radius=1),
                                  mp_drawing.DrawingSpec(color=(80,256,121), thickness=1, circle_radius=1))

        # 2. Right hand
        mp_drawing.draw_landmarks(image, results.right_hand_landmarks, mp_holistic.HAND_CONNECTIONS,
                                  mp_drawing.DrawingSpec(color=(80,22,10), thickness=2, circle_radius=4),
                                  mp_drawing.DrawingSpec(color=(80,44,121), thickness=2, circle_radius=2))

        # 3. Left Hand
        mp_drawing.draw_landmarks(image, results.left_hand_landmarks, mp_holistic.HAND_CONNECTIONS,
                                  mp_drawing.DrawingSpec(color=(121,22,76), thickness=2, circle_radius=4),
                                  mp_drawing.DrawingSpec(color=(121,44,250), thickness=2, circle_radius=2))

        # 4. Pose Detections
        mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_holistic.POSE_CONNECTIONS,
                                  mp_drawing.DrawingSpec(color=(245,117,66), thickness=2, circle_radius=4),
                                  mp_drawing.DrawingSpec(color=(245,66,230), thickness=2, circle_radius=2))

        # Export coordinates
        try:
            # Extract Pose landmarks
            pose = results.pose_landmarks.landmark
            pose_row = list(np.array([[landmark.x, landmark.y, landmark.z, landmark.visibility] for landmark in pose]).flatten())

            # Extract Face landmarks
            face = results.face_landmarks.landmark
            face_row = list(np.array([[landmark.x, landmark.y, landmark.z, landmark.visibility] for landmark in face]).flatten())

            # Concatenate rows
            row = pose_row + face_row

            # Append class name
            row.insert(0, class_name)

            # Export to CSV -- note str(row) writes the list repr (brackets included),
            # a quick workaround rather than strict CSV formatting
            file_name = 'coords.csv'
            with open(file_name, 'a', newline='', encoding='UTF8') as my_file:
                my_file.write(str(row) + '\n')
            # print(my_file.closed)
        except:
            pass

        cv2.imshow('Raw Webcam Feed', image)

        if cv2.waitKey(10) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
Hi Nicholas, I didn't understand how the CSV file process works in this video (the capturing landmarks using OpenCV and CSV part). Can you provide the CSV file?
Mine isn't writing to the CSV file for some reason
Hello, why do I get 'str' object has no attribute 'decode' during machine learning?
Sir, I get an error about landmark; it says landmark is not defined. Can you please help me out as soon as possible please🙏🙏
Awesome content! But I have an error when running; how should I deal with it?
UserWarning: X does not have valid feature names, but StandardScaler was fitted with feature names
Facing the same error, did you find any solution?
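For anyone hitting this warning: it typically means the pipeline was fitted on a pandas DataFrame (so it remembers column names) but predict() is being given a plain numpy array. A minimal sketch of one fix, assuming the X DataFrame and fitted model from the training notebook:

import pandas as pd

# row is the flattened landmark list from the detection loop
X_input = pd.DataFrame([row], columns=X.columns)  # reuse the training column names
body_language_class = model.predict(X_input)[0]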
2 years on, but the video is still lit
My CSV file is around 7GB and the pipeline execution (i.e. rf, rc, lf, gb) is taking so much time; fitting takes so long. What should I do??
Hi @Sazid, you could try training on a smaller portion of data. Try setting your test set to a larger percentage.
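Something like this (a sketch, assuming the X and y variables from the notebook): push most of the data into the test split so the pipelines only fit on a manageable slice.

from sklearn.model_selection import train_test_split

# keep only 10% of the 7GB dataset for training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.9, random_state=1234)
# fit the pipelines on X_train / y_train as in the video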
Hi Nicholas, I love the content you make. Much thanks for sharing. Can we deploy this??
Sure can! Been messing around with Kivy for CV (could probs do that).
Can anyone tell me which versions of Python, mediapipe, numpy and scikit-learn to install to make this?
Hi Nicholas! Thank you for sharing this nice work. May I know how to increase the FPS for MediaPipe? I get a very low frame rate, only 1 to 2 FPS. Kindly advise, thank you
Heya @MsSonoFelice, what type of machine are you running this on? Possibly try a different machine, something with a GPU perhaps?
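A couple of knobs worth trying first (a sketch, assuming the Holistic setup from the video — model_complexity=0 picks the lightest pose model, and smaller frames also help):

import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

with mp_holistic.Holistic(model_complexity=0,
                          min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        frame = cv2.resize(frame, (640, 480))  # process a smaller image
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        # ... rendering as in the video ...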
NameError: name 'audio_classifier' is not defined — I am getting this error when running the import command for mediapipe.
This is so cool! Can I set this up to only detect poses I make with my hands?
Yup, there's mediapipe models that only detect hands as well. I've got a vid on the channel!
@@NicholasRenotte Got it! Il check it out!!
Hey guys, in need of some help! I'm stuck on the exporting-a-CSV-file part. Can't figure out why a CSV file is not being created. I've copied everything exactly up to that point but it still doesn't work.
Weird, any errors? Are you getting detections?
@@NicholasRenotte Don't worry, I got it working!
@@francis3725 awesome!!
Hi, I faced the same issue too, how did you solve it?
Hi there. Great video, thanks. Can I use a ply file instead of a live webcam? I need to evaluate some children and want to use 3D scans (Kinect) and do the analysis afterwards.
How do we know Jupyter Notebook is installing the correct, compatible version of each library when using this code?
!pip install mediapipe opencv-python pandas scikit-learn
Generally it installs the latest version. You can check it in your workspace folder / Lib / site-packages
What @Gabbosaur mentioned is spot on @Raj!
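To see exactly what got installed, you can print the versions from the notebook itself (the pinned install on the last line is just an illustration with example version numbers):

import mediapipe, cv2, pandas, sklearn
print('mediapipe', mediapipe.__version__)
print('opencv   ', cv2.__version__)
print('pandas   ', pandas.__version__)
print('sklearn  ', sklearn.__version__)
# to reproduce an environment later, pin the versions, e.g.:
# !pip install mediapipe==0.8.9 opencv-python==4.5.5.64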
I want to combine pose recognition and face recognition to enable me to recognize the person that is posing. How do I go about it? Please can you shed more light on this? Thanks.
Love the content! Just curious: would it be possible to use other videos and images to generate the landmarks for different poses or facial expressions, and then use our webcam to see if the model can detect us doing those poses or facial expressions?
Sure can! I actually did it in the past with a video of the Royals being interviewed.
@@NicholasRenotte Hi Nicholas, wanted to commend you on the awesome content and format of presentation. It makes it easy to understand. Great job, and a huge thank you.
Could you please point me to the video where you use other videos/pictures to train the model?
@@satishchandrasekaran3045 still a work in progress at this stage!
Hello Sir!
Can you please help me out...
My kernel is constantly dying!
Python version 3.9, Jupyter Notebook
I installed Jupyter using pip in the command prompt...
Hi @Nicholas Renotte, thank you for this nice tutorial. I was wondering how you could give the face detection data a bit more weight, so that the classification relies more on whether I, for instance, smile or not? Or is that mostly a training thing? I still want the body language to count, but I'd give more weight to the face in the model. Hope that makes sense? Cheers
Awesome content. Is it also possible to train on the CSV file with TensorFlow instead of sklearn? Looking forward to watching your other videos!!
Sure can!
My model is very confident on non-trained labels. Suppose I do nothing and just stand in the frame; the model still predicts one of the actions with a high confidence value. How can we solve such problems? Do you have any ideas regarding such issues?
How do you solve this error... AttributeError: module 'mediapipe.python.solutions.holistic' has no attribute 'FACE_CONNECTIONS'
Switch FACE_CONNECTIONS to FACEMESH_CONTOURS. It was renamed in a recent MediaPipe update.
Awesome video btw, I'm currently trying to recreate the model on my local machine. However, it seems I'm failing to append the different classes of poses/facial expressions. How can I change this?
Amazing work! Thank you so much!
YOU ARE THE SH**!!! Props my man and thank you for making such great content!!
Hey brother, I like your content and your presentation. And a request: can you build a project on a sentence generator, where only some words are given?
Yup! Check this out: th-cam.com/video/cHymMt1SQn8/w-d-xo.html
Thanks for the great video as always!! May I ask a question: if I want to count how long a specific position is held, what should I do? Do you have any suggestions? Thanks in advance.
Could look at counting the number of frames that position had the top score!
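Something like this inside the detection loop (a sketch; body_language_class as the predicted-label variable, the 'Happy' target and the 30 FPS fallback are assumptions):

fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # some webcams report 0, so fall back

held_frames = 0
# inside the while loop, after predicting body_language_class:
if body_language_class == 'Happy':       # the pose you want to time
    held_frames += 1
else:
    held_frames = 0
seconds_held = held_frames / fps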
@@NicholasRenotte Thanks !!
Your videos are amazing; still waiting for the one using Unity to import those keypoints and rig them to a hand model. In the meantime, any clue on how to do that? How to rig it, and which hand model to use?
Heya @Imane, was actually talking about it on Discord yesterday. I actually found someone that had an example of it the other day. Let me know if you want me to try to dig up the link!
Heyy @@NicholasRenotte, yes, if you can do that it would be great! Thanks a lot for your help.
Great video!! I have a question: how can I enable the code to detect more than one person in the frame (scene)?
Heya @Hazem, not supported in MediaPipe unfortunately, check out OpenPose for multiple person tracking!
@@NicholasRenotte What library would you recommend if I want to make a system for counting people in a scene?
@@hazemhossam2645 take a look at OpenPose, you could then count the number of detections!
@@NicholasRenotte Tysm. Do you recommend any video/link about OpenPose to start with?
Good morning sir, hope you have a great day. Thanks for this one
Anytime @Sagar Singh, what did you think of it?!
@Nicholas Renotte that is super awesome...😁
If I have lots of 'Happy' images, like big data, how can I do deep learning with a faster RNN so that I can predict 'Happy' from the webcam in real time?
Stay tuned, coming soon!
@@NicholasRenotte And also, I really need your help bro, I have lots of questions bro.
1. How can I reduce the body points to just 7 points (eyes 2, nose 1, ears 2, shoulders 2)?
2. How can I detect when nobody is in the webcam? The webcam doesn't know that there is no one.
3. Your tutorial's webcam is detecting just one person, but I want to know other people's behavior. How can I do that?
4. When somebody does nothing in the webcam, how can I know that? Your tutorial detects happy, sad or victory every time.
I am really waiting for your reply as soon as possible
Thanks so much
@@andthensome9277 1. You can probably extract the points by manually selecting them. MediaPipe gives you back an array of objects; each object represents a point.
2. When MediaPipe doesn't find any person, it gives you back None for the landmarks (in Python). So a pseudocode check would be: if landmarks is None: print("No person detected"), as in the sketch below.
3. Using Holistic for multiple people would be too heavy from a computational point of view; I don't know if it's possible. For Face Mesh there is a configuration called "max_num_faces" where you can choose how many faces to detect. The same goes for hands.
4. You could add a state IDLE to the training phase where you do nothing.
Hope this helps.
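A minimal Python sketch of points 1 and 2 (my own illustration, assuming the results object from the Holistic loop):

import mediapipe as mp
mp_holistic = mp.solutions.holistic

# the 7 keypoints from the question: 2 eyes, nose, 2 ears, 2 shoulders
KEEP = [mp_holistic.PoseLandmark.NOSE,
        mp_holistic.PoseLandmark.LEFT_EYE, mp_holistic.PoseLandmark.RIGHT_EYE,
        mp_holistic.PoseLandmark.LEFT_EAR, mp_holistic.PoseLandmark.RIGHT_EAR,
        mp_holistic.PoseLandmark.LEFT_SHOULDER, mp_holistic.PoseLandmark.RIGHT_SHOULDER]

def reduced_row(results):
    if results.pose_landmarks is None:      # point 2: no person in frame
        return None
    lm = results.pose_landmarks.landmark
    return [v for i in KEEP                 # point 1: keep only the 7 selected joints
            for v in (lm[i].x, lm[i].y, lm[i].z, lm[i].visibility)]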
@@Gabbosauro Thanks so much, I will take your advice and try again. Will report back, see yaa
Hey Nicholas, thank you so much. Very useful, highly appreciated; you have awesome teaching skills. One question: what would it take to extend this to action detection? Can you please make a tutorial on custom action detection? Thanks
Check this out! th-cam.com/video/doDUihpj6ro/w-d-xo.html
Can you please provide a code version for doing this on a recorded video instead of the live webcam?
I do the same thing but only with hands, and it doesn't save the coords. It just saves x1,y1,z1,v1,x2,y2,...
Why?
Heya @Mikolaj, you would need to update the code to work with the hands model. So instead of the face and pose rows you would need to do it for the left hand and right hand models!
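Something along these lines (a sketch, swapping the face/pose extraction for the two hand models; the zero-padding for a missing hand is my own choice, not from the video):

import numpy as np

def hand_row(hand_landmarks):
    if hand_landmarks is None:
        return [0.0] * (21 * 4)  # 21 landmarks per hand; pad when the hand isn't visible
    return list(np.array([[lm.x, lm.y, lm.z, lm.visibility]
                          for lm in hand_landmarks.landmark]).flatten())

row = hand_row(results.left_hand_landmarks) + hand_row(results.right_hand_landmarks)
row.insert(0, class_name)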
Hi Nick, can you make a more detailed video explaining the libraries you imported and how the code works in detail?
Excellent video!!!
Thanks so much @01bit!
Great tutorial. I am trying to run it but I get this error when trying to train the ML classification model: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 'Wakanda Forever'
I hope you can help me with that
Hi Nicholas, thank you for all this stuff. In my studies I am now working on an AI application for fitness training. I would like to get help from you; I need to detect all the body joints.
Can anybody please provide a resource on how to integrate the MediaPipe body language detection model with a MediaPipe sign language model and channel the outputs at the same time?
Can we use an LSTM for this?
Yup, could apply an RNN layer using the landmarks @Rony!
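For the curious, a rough Keras sketch of the idea (my own layout, not from the video): stack the per-frame landmark rows into fixed-length sequences and classify the sequence.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LEN = 30        # frames per example
N_FEATURES = 2004   # 33 pose + 468 face landmarks, 4 values each
N_CLASSES = 3

model = Sequential([
    LSTM(64, input_shape=(SEQ_LEN, N_FEATURES)),
    Dense(32, activation='relu'),
    Dense(N_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# X: (n_samples, 30, 2004) stacked rows, y: one-hot labels
# model.fit(X, y, epochs=50)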
A tutorial using LSTM would be amazing; the documentation is really poor
@@denisdj6180 yah, working on an action detection tutorial atm!
@@NicholasRenotte Thank you!
@@denisdj6180 exactly :(