Facial Recognition. Double-Take, Deepstack, Frigate, and CompreFace

  • Published 26 Jun 2024
  • Let's Discuss. Does Facial Recognition work? I talk through my experience using Double-Take, Deepstack, and CompreFace with the Frigate NVR. This one is more of a discussion rather than a how to. I spend a little bit of time in the Double-Take interface and show you what that looks like.
    Join this channel to get access to perks:
    / @mostlychris
    Discord: / discord
    If you would like to support me:
    Buy me a beverage: ko-fi.com/mostlychris
    Become a patron: / mostlychris
    Products I reference in my videos (Contains affiliate links)
    www.mostlychris.com/my-smart-...
    www.xsplit.com?ref=chriswest&discount=mostlychri&pp=stripe_affiliate
    DISCLAIMER: Some of the links above take you to affiliate sites that may or may not pay a small commission to me. It doesn't increase the cost to you, but it does help support me in making these videos.
    Snail Mail to Send Stuff:
    Mostlychris
    24165 IH-10 West
    STE 217 #164
    San Antonio, TX 78257
  • Howto & Style

Comments • 90

  • @vinces1921
    @vinces1921 1 year ago +7

    Chris .. great video. Your note that you find Compreface more accurate than Deepstack makes me want to try it. So I would enjoy insight into the “how to” part of your setup. Thanks again.

  • @oldenb
    @oldenb 2 years ago +1

    Thanks for the video! I was considering Deepstack for facial recognition, as I already have it installed for BI-Deepstack. I used this combo before Frigate, and since detected persons get magnified in the Frigate snapshot, I was hoping to use this for facial recognition. I don't have time to play around with it now, but will do so in the coming months. I will be reading the reactions, as I am curious as well.

    • @mostlychris
      @mostlychris  2 years ago

      Thanks for watching! I am still trying to make it work better and doing some fine tuning.

  • @jddominguez
    @jddominguez 1 year ago

    Hi Chris, my compliments for making these videos. I have 4 cameras and am testing with just 1 camera for the moment. My Frigate and Deepstack are running in Docker containers and communicating with my Raspberry Pi 4 with Home Assistant. Today I will receive a Coral to see if I can detect animals, dogs, birds, etc....
    A 2nd video about what confidence level and the right box size hit that "sweet spot" would be very interesting. Eventually you could make easy automations based on the accepted level of recognition. Maybe as an idea, you could clone an existing camera image and only "zoom in" on a certain square to get reliable facial recognition. Right now my son is also identified as me walking through the garden 🙂
    Thanks for making these videos, but especially my compliments on the calm way you talk and explain things in an understandable way. Br from Diego Dominguez from the Netherlands

    • @mostlychris
      @mostlychris  1 year ago +1

      Thanks Juan for watching and for the feedback on the videos! I think we're just scratching the surface on object detection and alerts with Frigate and Deepstack. Stay tuned for more on that in the future.

  • @t.m.9182
    @t.m.9182 1 year ago

    Nice video, I'm interested in home automation for convenience and security. The possibilities with this software are endless. Fortunately I haven't placed my cameras yet, so this video is very useful.

  • @ilducedimas
    @ilducedimas 2 years ago

    I love the approach you used in this video. Very interesting stuff. I also have problems with Deepstack if I ask it to use the GPU... that's a shame, but apparently NVIDIA and Linux don't get along too well.

    • @mostlychris
      @mostlychris  2 years ago

      Thanks! Yes, it is a shame. Why can't we all get along ? 😂

  • @marine1718
    @marine1718 2 years ago

    I've been using Frigate for two months. I never heard about the other two. I think I will take a look into them!

    • @mostlychris
      @mostlychris  2 years ago

      Let us know how it goes.

  • @Lucifer11392
    @Lucifer11392 1 year ago

    I also have the same issues. I've read up on the problem, and one issue I see is that the snapshot sent by Frigate is from the low-res stream. If it were from the high-res recording stream, the face, even if far away, would be sharper and have more pixels.

  • @mauriceatkinson9520
    @mauriceatkinson9520 2 years ago +2

    Great video and very interesting. I would love to see how you installed and got your HA alerts working. I currently use HA and Frigate but want to use face recognition to alert when certain people enter a room. However I only want to alert once and also do not want to alert if a certain person is already in the room.

    • @mostlychris
      @mostlychris  2 years ago +2

      Thanks. I'll put your suggestions into my video idea list (and yes, I really do have one) 😀

    • @mr.gentleman1758
      @mr.gentleman1758 2 years ago

      @mostlychris I would love to see a setup video as well. I am currently configuring a new home server just for that.
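(For illustration, a Home Assistant automation along the lines Maurice describes could look roughly like the sketch below. It assumes Double Take publishing matches over MQTT on `double-take/matches/<name>`, per its documentation; the person name, notify service, and 10-minute cooldown are made-up placeholders.)

```yaml
# Hypothetical sketch: notify once when "john" is recognized,
# and stay quiet if the automation already fired in the last 10 minutes.
automation:
  - alias: "Announce John once"
    trigger:
      - platform: mqtt
        topic: double-take/matches/john    # assumed Double Take match topic
    condition:
      # skip if this automation triggered recently (i.e. he's already "in the room")
      - condition: template
        value_template: >
          {{ this.attributes.last_triggered is none or
             now() - this.attributes.last_triggered > timedelta(minutes=10) }}
    action:
      - service: notify.mobile_app_phone   # placeholder notify target
        data:
          message: "John was recognized at the door"
```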

  • @richardreina8330
    @richardreina8330 2 years ago +1

    Would be very useful for determining who's at the door or in a gangway (small areas) where a face is a significant portion of the frame. Hope we'll see more.

    • @mostlychris
      @mostlychris  2 years ago

      Agreed. This is useful when the camera can get a good view of the face at the correct angle.

  • @adambarghouti323
    @adambarghouti323 2 years ago

    Thanks for the great video, very interesting. I would love to see the best way to install without Docker and how to get HA/phone alerts, plus the best and recommended option from your experience (Deepstack, CompreFace, Facebox). How long does it take to recognize a face, and is it possible to use it for room presence detection? And can I use the ESP32-CAM?

    • @mostlychris
      @mostlychris  2 years ago

      That's a lot of criteria 😀 I'll put this on my request list.

  • @AndrewGlasgow2020
    @AndrewGlasgow2020 1 year ago +1

    Chris. First, great video because you are opening up possibilities rather than limiting yourself to a single solution.
    You asked for a discussion so here is my two pennyworth ...
    I set up a home automation system for my elderly mother who lives alone and has poor vision and mobility. I started a couple of years ago with voice operating the TV, lights etc. using Alexa and Smart Life. I am now adding smart heating and front door access control. I run Home Assistant OS on a Dell 3050 MFF with a SONOFF ZigBee stick.
    I am now planning a 'mark 2' system that will incorporate face recognition mainly to recognise known visitors -- carers, home help, friends, neighbours and family. It could unlock the door and perhaps announce the visitor. It's hard enough for anyone to remember who they all are and Mum now has early stage dementia, so it will get harder for her. This seems to me a great use of face recognition.
    I would love to hear from anyone doing anything similar. Mark 2 is still in the planning stage so any tips will be welcome. I have in mind a dedicated system running Linux and Docker on a refurbished HP Elitedesk 800 G4 SFF computer with a dual M.2 Coral card and a low profile NVIDIA GTX 1650 GPU. I will probably try Frigate, Compreface / Deepstack, and Double Take -- with MQTT to Home Assistant, which will in turn handle phone notifications, TTS (Alexa), and automations such as unlocking the door.

    • @mostlychris
      @mostlychris  1 year ago

      This is an amazing use of "smart home" technology. It is something often overlooked by those in the industry. These are the things that make this truly valuable.

  • @RonnyRusten
    @RonnyRusten 2 years ago +1

    I am using Frigate, and do person detection. That is good enough for me. If I was going to do face recognition I would need to get a proper camera, as the one I use is too low quality. I like the idea of alerting if someone unknown is detected, so I might consider it in the future. I have tried Deep Stack, but didn't really get the result I wanted, maybe I should give Compreface a try? It's not very high on my priority list, though. I have the possibility to place a camera in a lower position than yours, I think that is one of the problems with yours, they seem to be placed high up, pointing down. I think that will make face recognition hard(er)? I also need to get a Coral-stick to offload my system. But I think they still are hard to find in stock anywhere.

    • @mostlychris
      @mostlychris  2 years ago

      Agree on all points. Once I release this video to the public, I'll be curious to see what others are doing with it. While CompreFace is better at detection, it seems a bit more resource intensive and runs Java. But having run them all together, I need to revisit the resource usage of the detectors. You are correct about the camera angle. I just don't think it can see faces well enough. I'm using the Coral TPU for Frigate but not for the Deepstack or CompreFace detectors, since those are on a different machine in a VM. That is my do-everything computer, including flight sim, and with those running in the VM, it doesn't seem to affect performance on that PC.

  • @rdfogal
    @rdfogal 3 months ago

    I enjoy watching your videos. As a result of watching this one I decided to try Compreface. The problem is I can't find any documentation on installing it in Home Assistant. I'm running Home Assistant OS on an old Dell Inspiron. So far, so good. Any help would be appreciated.

  • @neildotwilliams
    @neildotwilliams 2 years ago

    Hi Chris. Thanks for the video. I have been considering looking at Deepstack (or something) to run alongside Frigate, so this was a very nice and timely overview video without actually getting into YAML. I wonder, does the face detection use GPU rather than CPU? I have a Google Coral plugged in to my HA, but I also have a Jetson Nano sitting idle. As you seem to use VMs and Docker, I wonder if I can just redirect to that. My CPU on HA is maxed out 😬

    • @JonWilliams84
      @JonWilliams84 2 years ago +2

      You would have to use the nvidia-arm (JetPack) image in order to use CUDA. From my experience, it's pretty unreliable. You are better with an nvidia gpu in a full PC, but even still Deepstack is nowhere near as accurate as Compreface.

    • @mostlychris
      @mostlychris  2 years ago +2

      Jon's got your answer--thanks Jon!

    • @alexsinbb
      @alexsinbb 2 years ago +3

      I’m running the compreface mobilenet gpu version on a server with a 1050Ti and it’s lightning fast on recognition.

  • @S.C.A.M.B.E.R.
    @S.C.A.M.B.E.R. 3 months ago

    Hi, do you think a 2MP (turret or bullet) Dahua or Hikvision camera would be enough for AI recognition of faces, pets, and objects, not including ALPR?

  • @xxxxgm
    @xxxxgm 1 year ago

    Hello, I have it running on a Fujitsu Futro S920 via add-on. It works well and my CPU is at 80% testing with 1 camera... cheers!

  • @pastuh
    @pastuh 5 months ago

    Which app is best to group photos by face?
    I mean.. move them to separate folder

  • @joelwalther5665
    @joelwalther5665 2 years ago +1

    Hi Chris,
    thank you for this video which is very timely. I am just trying to make a POC.
    But I have some questions.
    Is it possible to install the Frigate and Deepstack Docker containers on the same machine? (I thought they use the same port, 5000. I am not a Docker specialist.)
    Here is my use case, maybe it will give you some ideas.
    The principle is based on the doorbell, where I will place a camera (Annke C800).
    When the person rings, Frigate detects a person, then I launch facial recognition. Then, I would use IBM Watson Assistant to start a conversation. If the courier is detected and no one is home, then I could suggest leaving the package in the garden shed (via the camera's speaker or a DIY speaker). If it's my son who comes home, then I announce in the speakers of the house (Alexa, Sonos, etc.) that my son is home. On the other hand, if my nephew rings the bell, I tell him a joke (because he likes jokes a lot). The risk is that he will often ring the bell :-) ... and if it's the mother-in-law ???? (I let you think about the scenario ;-))
    To do this scenario, we need to use: Home Assistant, Frigate, Deepstack (or another that you propose), MQTT, Node-RED, IBM Watson Assistant, IBM Speech to Text, and IBM Text to Speech. (Those IBM Cloud services are free and usable in Node-RED.)
    I haven't ordered the hardware yet, and I wonder if an old PC i5 6500 can do the job with a Google Coral M.2 card?
    What do you think about it?
    Thanks, Joel (a Swiss guy)

    • @mostlychris
      @mostlychris  2 years ago +1

      Wow. That's a lot to unpack. You should be able to run Deepstack and Frigate Docker containers on the same machine. You will need to change the EXTERNAL port. Since each Docker container is an isolated network, the internal port doesn't matter as long as you map it correctly to an external port.
      It sounds like you have a good plan on what you want to do. I'd be interested to know if that works out for you. There are a lot of parts in that setup.
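(As a concrete sketch of the port remapping Chris describes: both containers can listen on port 5000 internally, as long as the host-side ports differ. Image names match the projects' published Docker images; volume paths and tags are placeholders.)

```yaml
# docker-compose sketch: Frigate and Deepstack on one machine.
# Each container has its own network namespace, so both can use
# internal port 5000 -- only the host (left-hand) ports must differ.
version: "3.9"
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    ports:
      - "5000:5000"      # host 5000 -> Frigate web UI
    volumes:
      - ./frigate/config:/config
  deepstack:
    image: deepquestai/deepstack:latest
    ports:
      - "5001:5000"      # host 5001 -> Deepstack's internal 5000
    environment:
      - VISION-FACE=True # enable the face recognition endpoint
```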

  • @waelzayed
    @waelzayed 2 years ago

    I'd love to see a video describing the configuration details of Double Take. I'll be using a doorbell RTSP camera to unlock the door when one of the family members approaches the door.

    • @mostlychris
      @mostlychris  2 years ago

      Thanks for the response. That's a vote for making the installation video.

  • @hillebrandstreet1882
    @hillebrandstreet1882 1 year ago

    Best videos please keep them coming !!!

  • @Kliptrech26Rg001
    @Kliptrech26Rg001 1 year ago

    Thanks from Brazil, your video helped me a lot. I'm using it via Docker on the UNRAID system. In Unraid I needed to download the Double Take docker and then download the detector (CompreFace) separately in Docker too. I accessed CompreFace through the IP address:port, registered, acquired the key, inserted it in the config file, and everything worked perfectly. Thanks for the tips. Despite it being a different system from mine, as you explain in the video, it helped me clear up doubts about several details. It would be nice to see how you are currently using this system, whether you still have CompreFace or opted for another.

    • @mostlychris
      @mostlychris  1 year ago

      Glad you were able to make it work. I had compreface in a separate container also and just accessed it via IP:port like you did. I'm still working with it, although it takes a lot of tuning.

  • @soubhikkhan6239
    @soubhikkhan6239 2 years ago

    I would love to see your config of the docker CompreFace

    • @mostlychris
      @mostlychris  2 years ago +1

      Throw me a note on Discord in the frigate channel.

  • @DT006
    @DT006 2 years ago

    I've thought about implementing Double Take for a while. After seeing this I'll hold off. Most of my cameras outside would not pick up a large enough facial image for good recognition. Let us know if you figure out a way to do it though!

    • @mostlychris
      @mostlychris  2 years ago

      Thanks for the feedback and for watching! I'm hoping to get it just right.

  • @JonWilliams84
    @JonWilliams84 2 years ago

    @mostlychris - What hardware are you running your stack on? Training and detection are incredibly slow in your video. Training on my EliteDesk 800 G4/5 cluster takes less than 300ms for each image. Running Deepstack on my 1060 6GB takes around 120ms for inference. It may be that your camera resolution is too high; 1280x960 is as high as I go. Also, with regards to "box size/area", you can change this in your Double Take config panel.

    • @mostlychris
      @mostlychris  2 years ago

      I'm running an Ubuntu VM for the containers. However, when filming, I'm also running OBS and other stuff so what you see in the videos isn't necessarily true to life. Training has been much faster since I filmed and just varies based on my multi-purpose PC that runs a lot of stuff at once. For the resolution, I was using the 640x480 substreams. I haven't changed the box size/area because of the issue with false positives in small boxes--but I am still making adjustments and might just be able to live with the size change.

  • @adifoto6362
    @adifoto6362 2 years ago

    I don't have any face recognition on my system, and I am interested in knowing more about it. Anything more about it will be much appreciated. Thanks.

    • @mostlychris
      @mostlychris  2 years ago

      Thanks for the comment. I see a lot of folks using it for presence detection scenarios. I'm still tweaking it and trying to improve the experience.

  • @dave24-73
    @dave24-73 1 month ago

    Might be worth trying a Coral USB Accelerator by Google; it provides an edge TPU coprocessor.

  • @Kiloptero
    @Kiloptero 3 months ago

    Thanks!! I can't find a way to configure Deepstack better. Especially something like this:
      confidence:
        match: 95
        unknown: 30
      save:
        matches: true
        unknown: true
      purge:
        matches: 48
        unknown: 12
    i.e. increase the match level to 95, save match and unknown images, and especially a way to purge images... or get fewer images from Frigate! Thanks a lot!!
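(For reference, Double Take's own config file groups these thresholds under `detect`. The sketch below follows the layout in the project's README, with the hour values from the comment above plugged in; exact keys can vary by version.)

```yaml
# Double Take config sketch: raise the match bar, keep both image
# types, and purge them on a schedule (purge values are in hours).
detect:
  match:
    confidence: 95   # only accept matches at 95% or higher
    save: true       # keep matched images
    purge: 48        # delete matched images after 48 hours
  unknown:
    confidence: 30   # minimum confidence to log an unknown face
    save: true       # keep unknown images
    purge: 12        # delete unknown images after 12 hours
```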

  • @patcastech
    @patcastech 1 year ago +1

    Where do you get the API key if you installed this as a Home Assistant add-on?

    • @mostlychris
      @mostlychris  1 year ago

      It's available in the application.

  • @timg2973
    @timg2973 2 years ago

    I use Blue Iris with Deepstack "GPU" and it works great. I also have it talk to HA with facial recognition to unlock doors when I walk up.

    • @mostlychris
      @mostlychris  2 years ago

      Nice. How is the face recognition reliability?

  • @tedc6694
    @tedc6694 2 years ago

    I too wonder what hardware you're using, because I'm starting a system. I was previously thinking of an i7 6700 but am now thinking i7 8700 for "more horsepower". As far as GPU, I don't know if the integrated UHD 630 in it will be enough or if a GTX 1650 Super might be worth it. I know I'll have two 8MP cams and four 5MP cams MIN, just to start (all AI cams, 15fps). Possibly four 8MP and two 5MP just to start with, and either way I'm sure I'll want to add things later.
    But if you have bumped up against cpu bottleneck I'd be curious what you're using.

    • @mostlychris
      @mostlychris  2 years ago

      A dedicated server or one with the horsepower might be fine. I am using a VM inside Virtualbox with a Windows host. I don't dedicate that much CPU or memory to it. However, it was working fine. The issue was with the Home Assistant Blue (Odroid N2) that I was trying this on. I have a bunch of HA related stuff running on it, including the Frigate NVR. The bottleneck was that device.

  • @davidk8062
    @davidk8062 2 years ago

    Amazing review, same as all the others - thanks Chris! The below might not apply to face-recognition-tuned AIs but hopefully will help. In the world of OCR AI (character recognition), the AI needs to be trained on the same (usually low) quality images as it is supposed to recognize. This is due to how the neural network and all its layers are built and how the model is built.
    To improve results, it would be worth trying to use same quality images of yourself or people you want to train the model on.
    The default Frigate stream used for sensing is low quality and might be way too low to pick up a face for recognition out of it.
    For PoC, I'd suggest push Frigate to use high res stream, see if algorithms will pick any faces out from there, if yes, we're on a good path. If these would be good enough for you to recognize person, just from face perspective, ignoring all the context you know, how is dressed, etc. ignore all the other meta data, then this will be good feed to train models and expect some results.
    The overall workflow would need to be more or less (not sure if achievable):
    1. Frigate to monitor on low res.
    2. Should a movement/person be picked up, switch to high-res feed (as normally recording/events are done) and run this high res through AI recognition algorithms (and only then, when person is recognized).
    Hopefully this logic can be achieved, as otherwise monitoring all the time high res feed might be not a best idea.
    On the AI model training side - model training is usually n-times more complex than using the model, therefore while Double-Take might run on the Arm64 architecture, training might require some significant resources, and there are some much more powerful Arm64 systems, server grade. With that said, it might still miss some key low-level instructions, which would cause a huge slowdown.
    The best take, though more complex, would be to run all UI and use of models on the Arm64 arch (N2-based Docker), and should a training task need to run, run it on a remote, powerful system. With a bit of scripting, even the container could be spun up automatically, as containers were made for exactly that - quick spin up and down to execute tasks.
    I didn't dive deep into Double-Take to check if such distributed architecture is possible. That would save resources on running daily ops on N2 and use server/desktop for training only.
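(The low-res-monitor / high-res-record split in steps 1-2 above is close to how Frigate is normally configured already: detection runs on a substream role while recordings come from the main stream. A sketch, with placeholder RTSP URLs and a hypothetical camera name:)

```yaml
# Frigate camera sketch: detect on the 640x480 substream,
# record from the high-res main stream.
cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://camera-ip:554/main   # high-res stream
          roles:
            - record                        # recordings/clips come from here
        - path: rtsp://camera-ip:554/sub    # low-res stream
          roles:
            - detect                        # object detection runs here
    detect:
      width: 640
      height: 480
```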

    • @mostlychris
      @mostlychris  2 years ago

      Great thoughts and explanation on all of this. Detect on the high-res streams is super CPU intensive, so that wouldn't work out too well for long runs. We'd have to ask Blake if there is a way to switch to high res for final images on detection. Otherwise, the flow you mention is good, except we have to keep in mind the audience: most home users won't have servers like that to run training on. It's a balance to get good face rec and resource usage/price point.

    • @davidk8062
      @davidk8062 2 years ago

      @@mostlychris the detect on the high-res is just for PoC to validate if face recognition could be done that way.
      Indeed, if it would work, a slight change would be needed on Frigate side, to engage facial recognition AI once a person is spotted based on low-res AI for objects.
      Required logic seems to be built-in already, as recording can start only when a required object is spotted, so that's great. There would be needed just another hook (optional) to have facial recognition, i.e. prior recording and record only if face has been found.
      This would be amazing.
      Looking from resources perspective, the face recognition AI, could be getting boxed cut where person has been identified, effectively cutting down resolution to be analyzed to minimum (yet, depending on camera placement it could be high res). Either way, if one would want facial recognition, then either would have to go Coral way or get maybe more CPU, like miniPC format systems. Not the best, but having such an option would be great.
      On the AI training side - anyone installing and configuring HA is most probably familiar with Docker and/or VMs, hence it potentially wouldn't be an issue to stand up a Docker container for training purposes. That container could be spun up automatically on demand or overnight. All of that depends on proper face recognition getting a sourced box of the person from a high-res picture. Hence the need for the PoC, which could effectively start with the high-res full picture; even if slow, if it works as a proof of concept, then a more buttoned-up deployment could be worked out.
      Probably we should take it towards a git or other communication as if anyone would want to engage in such project wouldn't look for it on YT :)

    • @mostlychris
      @mostlychris  2 years ago

      Interesting PoC discussion. Agreed about YT as YT comments are very hard to navigate, especially from a creator perspective. Interactions here are good for the algorithm though, or so I've been told.

  • @TobiasGeek
    @TobiasGeek 2 years ago

    Hey! :D At 5:30 you're saying that you can change the minimum required box area for Deepstack. How do you do that? Thanks!

    • @TobiasGeek
      @TobiasGeek 2 years ago

      Well i found it :P

    • @mostlychris
      @mostlychris  2 years ago

      Glad you found it. What is the setting (for others who will read this)?

    • @TobiasGeek
      @TobiasGeek 2 years ago

      Double Take config file:
        detect:
          # minimum confidence needed to consider a result a match
          confidence: 70
          # minimum area in pixels to consider a result a match
          min_area: 2500

  • @refaelsommer
    @refaelsommer 1 year ago

    Hi and thanks!!! Can you please help me with adding CompreFace to Double Take on Home Assistant? I have 2 issues: AVX is not detected, and also where do I create an API key? I am using Home Assistant on a VirtualBox Linux VM that is running on a Windows 10 NUC with an i7 processor that supports AVX.

    • @mostlychris
      @mostlychris  1 year ago

      This is probably a better question for my Discord. It'll probably take some screenshots and back and forth.

  • @vierraventures
    @vierraventures 1 year ago

    Have you done any more with this?

  • @tiloalo
    @tiloalo 4 months ago

    Any new software to replace Double Take? It seems to be outdated and not maintained anymore.

  • @jmr
    @jmr 2 years ago +1

    Thanks for doing the research. I'm not ready for facial recognition yet. I'm sure I'll add it eventually. Maybe around the time we hit the Pi6. 🤣 2027 ish would be my guess.

    • @mostlychris
      @mostlychris  2 years ago +1

      Lol. It is fun to play with but in my environment not ready for production use...yet.

  • @PablloArruda
    @PablloArruda 1 year ago

    In my house I have an old laptop produced in 2012 running HA for MQTT, notifications, and automation routines. On my desktop I have Linux running Frigate and motionEye in Docker to monitor my house. The problem is that I have an NVIDIA 1070 Ti and I can't use it with Frigate, so I'm using only CPU processing (six cameras, 80% CPU usage).

    • @mostlychris
      @mostlychris  1 year ago

      The Coral TPU helps with this...if you can find one.

  • @czi2011
    @czi2011 2 years ago

    Regarding your Odroid experience: I'm not sure if you tried to install the CompreFace add-on along with Double-Take. My understanding is that CompreFace is built on TensorFlow, and AVX and/or AVX2 are requirements for TensorFlow, so they are requirements for CompreFace, too. In a nutshell: CompreFace only supports the x86 architecture, which provides the AVX instruction set. This excludes all Raspberry Pis and Odroids because they are based on the ARM architecture. This information is somewhat lost on the GitHub page of Double-Take. Maybe this caused the slowdown of your Odroid.
    Regarding the bounding box: I intend to run some experiments, without a time plan. My first thought: the Frigate snapshot is taken from the substream, which we configured to be 640x480. So the relation between the FOV of our camera, the resolution of the frame, and our face relative to the overall image might become important, since CompreFace expects a bounding box of greater than 10000. My understanding is that the bounding box used for face recognition is calculated and provided by TensorFlow, and it has nothing to do with the bounding box we define in frigate.yml (which, as I understand it, just provides a crop function like a digital zoom, but does not improve the resolution of the image).

    • @mostlychris
      @mostlychris  2 years ago

      All good points. In the Home Assistant store, there is an add-on called Exadel CompreFace that shows as not compatible with the Odroid (not available). There is a point at which running heavy duty stuff like that along with ALL the other stuff I am running is not a good idea.
      Good luck with your experiments. Hope you get them to work as you need.

  • @oakfig
    @oakfig 2 years ago

    What cameras are best for in-home face detection?

    • @mostlychris
      @mostlychris  2 years ago

      That's not an easy answer. If you are talking about JUST the camera and not using something like Deepstack/CompreFace/etc., then there are a lot of articles out there. If using software, most modern cameras that can stream live video work. The tuning comes on the software side and not necessarily the camera. Even my older cameras work fine for face detection. Camera placement and environment also make a difference. Lots of experimentation and image training are required to get a good result.

  • @jeffro.
    @jeffro. 1 year ago

    So far, I'm only just 7 minutes in, but it's clear that you need a PTZ camera with optical zoom to do facial recognition. I have one that has 15x optical zoom that will track faces in the camera. (I don't need THAT much zoom, due to the distances involved, but I got a really good deal on Amazon, and the camera works really well.)
    So, I'm thinking that when one of the software detectors sees a face, I can have the software send a signal to the camera to turn on the 'face-tracking' so that it will get enough clear images to perform the facial recognition. I'm just learning all these software packages, figuring things out as I go.
    It may be easier than I think. But optical zoom is essential, I think.

    • @mostlychris
      @mostlychris  1 year ago

      That is a big issue with face detection. You need to be at human height level and many cameras that don't zoom are up above pointed down so they don't see the faces as well.

  • @YKSGuy
    @YKSGuy 1 year ago

    My conclusion so far is the ONLY camera capturing enough face detail for training is my doorbell; all other training data I do from cell phone photos. If I train off of 1080p camera images, there are not enough pixels in the face boxes and I get overly broad matches on all future detections.
    The default min area of 1000 pixels is one I would NOT go lower than; in fact, I get a lot fewer false positives at around 1600.
    Overall I think I am abandoning it till I get higher res cameras.

  • @Zbyszekakaduk
    @Zbyszekakaduk 1 year ago

    I know it's a year late. I decided to leave it in the form as if it was written a year ago :))

  • @neilos2085
    @neilos2085 1 year ago

    A year on, I wonder where this tech is?

  • @Pyth0nym
    @Pyth0nym 2 years ago

    Please make a video on how to track a person by face without getting alerted, as you talked about.

    • @mostlychris
      @mostlychris  2 years ago

      I'll put it on my list.

  • @TheRealAnthony_real
    @TheRealAnthony_real 2 years ago

    Using Double-Take, CompreFace, and Frigate... they all work quite well... the key is how to reduce false positives.

    • @mostlychris
      @mostlychris  2 years ago

      Lots of training in the actual situation that your cameras are running in.

  • @alexsinbb
    @alexsinbb 2 years ago

    Agreed on the bounding box issue. For general home surveillance you’d need all 8MP cameras to reliably get face detection. Otherwise, yea it’s only really good for in room person detection. I think jakowenko designed this to track a person in a house based on the other features.

    • @mostlychris
      @mostlychris  2 years ago

      That makes sense. I don't have 8MP cameras so I might have to be satisfied with "person" rather than a specific face. Lately though I have been training the camera images with the smaller faces and it has been about 50/50 in accuracy.

  • @TSalem52
    @TSalem52 1 year ago

    Are you still using it?

    • @mostlychris
      @mostlychris  1 year ago

      Not currently as I've moved and need to set things back up. I do use Frigate now with object detection. Face detection is harder if the cameras aren't aligned exactly in line with faces.

  • @alainite
    @alainite 10 months ago +1

    You have an extra 0 in your Deepstack port URL.

  • @frosty1433
    @frosty1433 9 months ago

    Well humans use a whole body to figure out who a person is.

  • @TechySpeaking
    @TechySpeaking 2 months ago

    first