I kinda just jumped in here in the middle of things because I have a use at work for some ML and this seems to be a great tutorial. Since you're helping me out, I'm gonna throw you a bone that will make your time in the terminal less painful.
Tab complete!
When you're trying to type "tar jxvf dlib-19.17.tar.bz2", you only need to type 'tar jxvf dli[TAB]' and it will fill in the rest; it's faster and prevents typos. If you hit TAB and nothing happens, hit TAB again and it will show you the choices. You only need to type enough to make it a unique choice (a quick sketch of what that looks like follows below).
Thanks again and happy computing.
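To make the tip concrete, here is roughly how it plays out in the terminal (the archive name is just the one used in this lesson; yours may differ):

tar jxvf dli<TAB>
tar jxvf dlib-19.17.tar.bz2    # bash fills in the rest of the file name for you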
I'll talk about the problems I encountered while installing the face recognition library.
I'm using the Jetpack 4.5 operating system.
Traceback (most recent call last):
File "setup.py", line 42, in
from setuptools import setup, Extension
ModuleNotFoundError: No module named 'setuptools'
For Ubuntu users, this error may arise because setuptools is not installed system-wide. Simply install setuptools using the command:
sudo apt-get install -y python-setuptools
My second problem was with sudo python3 setup.py install. The installation did not complete because python3-dev was not installed.
I fixed it with:
sudo apt-get install python3-dev
After that, "sudo python3 setup.py install" worked fine.
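For anyone hitting the same pair of errors, here is a minimal sketch of the dependencies to have in place before building dlib (assuming Python 3 and apt, as in this thread; on a Python 3 setup the python3- packages are the ones that matter, as another commenter notes further down):

sudo apt-get update
sudo apt-get install -y python3-setuptools python3-dev    # setup.py needs setuptools; the compile needs Python.h
sudo python3 setup.py install                             # then re-run the dlib install from its source directory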
Paul - a few things:
1. You don't run sudo apt-get upgrade after running sudo apt-get update. I thought the update created a list of packages to be upgraded/updated with the upgrade command. What am I missing here?
2. When doing installs or other processes that can take a while, I often start the system monitor app, on the theory that if there is disk or CPU activity, things are not dead. Not 100% reliable, but it does tend to ease things while watching a black box sit there...
3. Good video. I appreciate your giving credit to others and other sites where you may have gotten assistance.
@Jerald Gooch I've been wondering that same thing for months. I have always previously followed 'update' with an 'upgrade', and have used 'autoremove' to remove all the unneeded programs.
My biggest problem in programming is spelling and punctuation. Appreciate all of your hard work on this. Looking forward to getting my ugly face caught.
I caught up eventually - started 10 March on Lesson 01 - Excellent Paul, LUV your classes
nearly same
Thanks Paul, Looking forward to how the 2g works on these things.
Thank you so much Paul for this very detailed explanation for facial recognition :)
Hi Paul,
Looking forward to the next lesson. Already thinking about a 4WD robot that will follow a certain person or avoid another.
Install went ahead, no problems. Looking forward to next week
Great to hear people are playing along. Fixing to get real fun. Hope you tune in next week
Amazing Paul. Keep going, do not stop, one day soon you will be the world leader in AI
That's the plan!
That's great. If there is any help I can offer for you to expand in India, please do feel free to let me know.
Paul is already leading the training and education in AI. A fantastic teacher. A labour of love too. Priceless.
It took me over an hour. The first attempt crashed because it ran out of memory (some other apps were open); the second try, on a fresh reboot, got through. This was on a 4GB Jetson.
Just awaiting delivery of my Xavier. I want to run the rest of the Nano series and then the Xavier series. I can install the rest of the stuff on the Xavier, but wondered: do I need to follow all of this in Lesson 38, given that the Xavier has more memory?
There will be overlap between these two series of lessons, but on the Xavier series I will spend less time on the early stuff . . . so will assume on that series that you know some python and can work your way around the linux terminal
@@paulmcwhorter Paul, I am using the Xavier NX for lesson #38. Would you know if I need to install the helper packages (like cmake, libopenblas, etc.), and do I need to increase the swap space for the installations on the Xavier? Thanks
seems like I'm ready to go...on to the next one :)
Very interesting....Yeah...if one of those downloads or compiles doesn't work, nothing will. And yes, I am social distancing...I went into the Hallmark yesterday and they are maintaining safe practices: masks, hand sanitizer when you walk in, aisles one-directional.
I think places like grocery stores are putting good protocols in place, and am comfortable gradually having people come out of total isolation, but I feel we are going to loosen up too much too quickly and find the whole country like New York. Nothing has changed; we flattened the curve by staying away from each other. So what happens when we stop doing what slowed the spread? I am going to wait till the fall, when we have more data, to decide what to do. For now, I am maintaining isolation, wearing a mask when I go out, and only going to things where good practices are followed.
@@paulmcwhorter Yes I agree caution is in order and opening things slowly with safe practices is a good idea. Remember though, NYC is unusual (I'm from Long Island originally and lived in the city briefly.) Once one leaves one's apartment you're surrounded by people....the elevators are crowded during rush hours, the streets are crowded and of course public transportation. The densely populated, public transport dependent, urban areas are going to be the toughest to open up. I also agree that there needs to be a lot more research on these viruses to fully understand why some people don't even know they had it and some people get acutely ill.
Hi Paul
Great guide.
Would these installation instructions work on a Raspberry Pi 4 as well?
Great guide, once again! I've been thinking, when will you do your next "Cool Beans" live show?
Isn't the Swap changed in the end?
We need an ice coffee making guide!!!
I take two strong espresso shots and pour them over a cup of ice. Or 8 strong shots over 4 cups of ice for my big mug. Then the ice melts and dilutes the strong espresso down to a really mellow and wonderful iced beverage. If you pour normal coffee over ice, it becomes too dilute.
Another great video, thanks Paul. One question I had: I am running the latest version of JetPack, and it looks like it already has a built-in swap, because when I tried to install the swap file it came up with the following error: fallocate: fallocate failed: Text file busy. I ran through the rest of the tutorial and was able to download the facial recognition okay. Is it okay for me to proceed, or should I really create a new swap before continuing?
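If anyone else hits that fallocate error, a quick way to check whether a swap file is already active (which is typically what "Text file busy" means here) is something like this, assuming a standard Ubuntu/JetPack image:

swapon --show                     # lists any swap files or zram devices already in use
free -h                           # shows how much swap is available in total
sudo swapoff /path/to/swapfile    # only needed to resize an existing file; use your actual path

If the swap that is already there is large enough for the dlib build, there should be no need to create a second one.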
Thank you!
I had to install setuptools and also python3-dev, otherwise I was getting a fatal error that Python.h could not be found.
For installing dlib we had increased the swap. But after the installation of dlib, don't you think we have to reset it?
Will the increased swap memory cause any problems in the future?
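A sketch of how you might undo a temporary swap file once the dlib build is done, assuming it was created at /swapfile (substitute whatever path you actually used, and only touch /etc/fstab if you added a line there to make it permanent):

sudo swapoff /swapfile      # stop using the file as swap
sudo rm /swapfile           # reclaim the disk space
sudo nano /etc/fstab        # remove the swap file line if you added one

Leaving the larger swap in place mostly just costs SD-card space, but swapping to the card is slow, so trimming it back after the big compile is a reasonable housekeeping step.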
I try:
import face_recognition
import cv2
I get the error below; did I miss something during the installation?
Segmentation fault (core dumped)
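One way to narrow down a crash like that is to run the imports one at a time from the terminal, so you can see exactly which module triggers the segfault (these are plain one-liners, nothing specific to this lesson):

python3 -c "import dlib; print(dlib.__version__)"
python3 -c "import cv2; print(cv2.__version__)"
python3 -c "import face_recognition; print('face_recognition ok')"

If dlib alone crashes, the build itself is suspect; if the crash only shows up when cv2 and face_recognition are loaded together, it is more likely a library-loading conflict than a broken install.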
Running some of the programs from JP4.2.1, I'm getting an error: failed to load module "canberra-gtk-module". Did I see that somewhere in the comments to another video? Any idea what the issue is?
Don't know how to get rid of the error on JetPack 4.3. It does not matter as far as I can tell.
@@paulmcwhorter sudo systemctl restart nvargus-daemon fixes the problem
Hi, I used the following, which seemed to fix it. Always nervous installing things that Paul hasn't defined, in case it screws things up!
sudo apt-get install libcanberra-gtk-module
Since we are going to be limited on the number of faces we can train the system to recognize in the near future, is there a way to share the data set for our own face with you Paul, so that if we ever came for a visit your Jetson Nano would recognize our faces? What about the faces of our pets? Just my wife and I, plus 1 adult cat and 2 kittens here at our place during the pandemic!
I don't think it will find cats. It starts by finding faces, like our earlier OpenCV work with Haar cascades. Then it looks at the found face and compares it to known faces. The way I did it was to train on political figures, then point the camera at various YouTube videos of politicians to test it. I have a training set people can download in an upcoming video.
@@paulmcwhorter Thanks for the quick feedback! I didn't really think "cat" recognition was in the picture, no pun intended, but good to know we can show images. I can play with teaching it to recognize other family members. Wouldn't this be fun to take to a family reunion and have all the family members get their faces remembered by my Jetson Nano? Too bad we've had to cancel our reunion for next month!
Hi Paul, Do you make custom code?
Hey, I have tried everything, but I am still getting "No module named face_recognition". Can you please help me?
Very good Paul, keep on doing these videos for your Patreon members.
HC Treintje Belgium Herman C.
Thank you! Just subscribed. I am following your code and everything runs fine, except I am not getting anywhere close to the camera update response you are getting. There is a 2-3 second delay before the camera updates. I am in 10W power mode. The only thing I can tell that is different is that I am using the Jetson Nano B01 version. This consistently happens on all my facial recognition examples.
Make sure you are not running the camera at too high of a resolution, and make sure you are not updating software versions. The as-downloaded JetPack should run this program quickly. It sounds like you are running on the CPU and not down in the GPU. Try installing jtop and see what the GPU usage is.
Thanks for taking the time to reply, Paul. Jtop is showing some GPU jumps, but the CPUs are jumping all over the place. I tried using a Logitech webcam and it's considerably better, so I am thinking there is something wrong with initializing my Pi camera(s). I set the stream to 640 x 480 and the delay is exactly the same.
@@paulmcwhorter I figured it out. I had to change the gstreamer video parameters to 10 fps (10/1). I think the video buffers were not keeping in sync, thus causing the lag. With it set at 10 fps, 640x480 on a CSI camera seems to be the sweet spot.
Something just does not sound right. It should be able to go way faster than that with higher resolution. The problem is I am not a gstreamer expert, and hence cannot give suggestions. I have been trying to learn more about gstreamer, as I would like to know how to squeeze as much performance as possible from the video stream. So you have the new Nano board, and are probably on JetPack 4.3. I have not played much with the Xavier NX, but it has the two camera ports, and JetPack 4.4.
You might also download the dual camera example program from the JetsonHacks GitHub. He is running both cameras on the new board at the same time, and they are running well. Might look at that code and see if anything pops out at you.
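For anyone who wants to experiment with the frame rate the way the 10 fps change mentioned a few comments up did, one way to test the CSI camera pipeline outside of Python is gst-launch-1.0. This is only a sketch; the exact elements and caps that work can vary between JetPack releases:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=640, height=480, framerate=10/1' ! nvvidconv ! nvoverlaysink

If that runs smoothly at 10/1 but stutters at higher frame rates, the bottleneck is in the capture pipeline rather than in the face recognition code.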
ImportError: /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block. How do I solve this?
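A workaround that is often reported for this particular error on the Jetson (aarch64) is to preload the library named in the message before Python starts, for example:

LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 your_script.py

This is a commonly suggested fix rather than something from the lesson itself; the path is the one shown in the error, and your_script.py stands in for whatever program you are running.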
We know you are not going to crash and burn. Just like Jeff Green (Nascar), you keep your code from hitting the wall in the middle of a disaster.
We come here to watch our hero save the princess (Jetson Nano) from the cyclone that is Ubuntu.
Well, unfortunately, this is where I have to tap out. I spent over 30 hours over the weekend trying to get dlib (and face_recognition) installed, but even though I tried everything (and then a few more), it just wouldn't work. I actually managed to compile dlib (several times) with mingw32 but the resulting module had no functions for some reason. 😕 I also managed to use a few different binary wheels by resorting to an older version of Python, but it would crash once it gets to the _face_locations_ function. At 6-10GB, I can't/won't install Visual Studio, so I'll have to drop out of the course for the time being. 🤷 I'm still pretty chuffed with the generic face-tracking, and whenever I'm able to get some servos to build a gimbal, I'll do that. Maybe some day, I'll find a way to get dlib to work on Windows, then I'll pick up where I left off.
I am trying to understand what you are saying . . . are you saying you are taking this class, got to lesson 38, and don't have a Jetson Nano?
No, I'm doing it on my Windows laptop with a webcam. (And I built the gimbal with an Arduino connected to my laptop through USB which I control with CircuitPython.)
Impressive you made it this far. Can you access a Raspberry Pi 4? You could probably get the face recognizer to work on that. It will run slowly compared to the Nano, but would probably be easier than a Windows machine.
I cannot download the videos; is there an issue?
Superb! Please consider how to recognize pet faces, such as cats or dogs. Thanks!
We will be able to distinguish between cats and dogs, but I don't think it would be easy to distinguish between individual cats.
Does anyone know if I need to install these libraries a different way on a Raspberry Pi 4?
Nice video, I love it
After increasing the swap, my Jetson doesn't boot ;(
sudo python3 setup.py install
Traceback (most recent call last):
File "setup.py", line 42, in
from setuptools import setup, Extension
ModuleNotFoundError: No module named 'setuptools'
sudo apt-get install python3-setuptools got me past it.
@@kerron_ Thank you!
@@kerron_ it did not work for me, what is your operating system?
Thanks a lot!
Is there an easy way to verify what JetPack version I'm running? Does "jtop" work on the Nano?
Run jtop and it will show you over on the last tab. On one of these lessons I showed how to install jtop. Or, just google jtop, it is one line to install it.
@@paulmcwhorter Thanks Paul, and I found the jetsonUtilities on the jetsonhacks GitHub site works too!
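For reference, the usual one-liner for jtop, plus a couple of ways to check the release you are on (this assumes pip3 is available; the package is called jetson-stats):

sudo -H pip3 install -U jetson-stats
sudo reboot                       # a reboot (or restarting the jtop service) is the simplest way to get it running
jtop                              # the last (INFO) tab shows the JetPack / L4T version
cat /etc/nv_tegra_release         # prints the L4T release directly, without jtop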
double chest BUMP !!!
Just finished the upgrade but getting a Gstreamer error -
GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
Programs seem to run fine though and I'm using the raspiCam.
Anybody else getting (or got) this? Any solutions?
I have not figured out how to get rid of the error yet. Seems things work fine though
@@paulmcwhorter Thanks for the reply Paul, glad to know it's not just me! I've tried to sort it too, but no such luck. I actually upgraded the JetPack itself a week or so ago and have been suffering this ever since.
all installed
So smart
Just liked
Paul, love you, but I really don't need to see your face through the whole video, especially when it is blocking the bottom corner of the terminal screen. I know you try very hard to not let your face get in the way, but you could make it much smaller with no loss of content.
YouTube is full of instructors that don't put their face on the screen . . . perhaps you should find one of them to follow.
@@paulmcwhorter WOW, that is not the reply I expected from a professional like you. Most YouTubers are happy to get feedback even if they choose not to implement it.
Well, I find it amazing that people take a free product, and then complain about it.
I disagree. Having Paul's face there on screen actually adds a personalized element to the videos, which makes it feel much more like a real lesson than just a simple online 'do-as-I-do' tutorial. Yeah, sure, there have been a couple of times his head has blocked a section of an active screen, but have we ever actually missed vital content because of this? I like to see Paul's live and human reactions to errors and bugs as they happen. It's a much more natural and less curated learning experience.
@@Bambicarus I don't mind having Paul's face on the screen if he would just make it a smaller square down in the corner so it doesn't block other things.
fyi I hate wireless..:)