Unleash your Home Cameras with FRIGATE Self-Hosted AI Video Recorder! Install on Proxmox LXC

  • Published on Jul 11, 2024
  • Do you have security cameras at your house? Would you like to locally host all of your recording and analytics, to make sure nobody else has access to your video feeds and recordings? Would you also like to integrate with Home Assistant, the greatest open automation platform in the world? Then Frigate NVR is for you! In this video, I'm going to go in depth to set up Frigate in an LXC container, for maximum efficiency. Using Podman Quadlet, I'm going to manage the Frigate container in a sane way with normal systemd and journalctl tools. And I'm going all-in on hardware passthrough, with my Coral TPU for advanced AI detections and person/cat/car counting, along with a basic Intel Quick Sync GPU to decode the video streams in hardware and reduce CPU load. So join me on this adventure!
    Find the commands to copy+paste on my blog post:
    www.apalrd.net/posts/2023/ult...
    Support me on Ko-Fi if you enjoy my content and find it useful:
    ko-fi.com/apalrd
    Feel free to chat about my upcoming projects on Discord!
    / discord
    Timestamps:
    00:00 - Introduction
    01:25 - Debian Trixie Container
    04:10 - Install Frigate
    06:21 - Caddy Reverse Proxy
    08:44 - Coral TPU Passthrough (PCIe)
    13:18 - GPU Passthrough (Intel or AMD)
    17:54 - Conclusions
  • Science & Technology

Comments • 100

  • @colinstu
    @colinstu 6 months ago +3

    Wow, this vid is EXACTLY what I needed. Just ran across Frigate a month ago and then started considering how to get that going on my Proxmox setup and how adding a TPU would work... and you did all that! ty

  • @LordApophis100
    @LordApophis100 8 months ago +14

    I've been running Frigate in Podman on a Proxmox LXC for quite some time, but your video got me to try Quadlet too. Some notes:
    You can use apt pinning to get a newer Podman from the testing repos without upgrading to Trixie.
    You can also run Podman under a non-root user and still use GPU passthrough. The frigate user must be in the video and render groups, and you add PodmanArgs=--group-add=keep-groups and PodmanArgs=--privileged.
    A privileged Podman container retains the same user/group as the non-root user running it.
    Happy that mine now runs with Quadlet too.
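    As a sketch of those rootless bits (image tag and device path are placeholders, not an exact unit file) - for a user unit the .container file lives under ~/.config/containers/systemd/:
    [Container]
    Image=ghcr.io/blakeblackshear/frigate:stable
    PodmanArgs=--group-add=keep-groups
    PodmanArgs=--privileged
    AddDevice=/dev/dri/renderD128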

  • @goodcitizen4587
    @goodcitizen4587 8 months ago +4

    I like the inset video of your screen area while you're talking. Worked out well.

  • @MinisterOfSound
    @MinisterOfSound 2 months ago

    That's a great tutorial, thank you! I appreciate your knowledge-per-content ratio: so little BS and so much to take away. Please keep that calm and efficient delivery! 👍😁

  • @MrShiffles
    @MrShiffles 6 months ago +1

    Great video as always, and I've learned a few things since I started messing with Frigate a few months ago... the funny thing is I've always pronounced it like: free-GAH-tay 😂

  • @Darkk6969
    @Darkk6969 8 months ago +1

    Nice project! I have the NVIDIA Tesla P4 (got them real cheap on eBay) in use on my third Proxmox node, so I may give this one a try.

  • @ronaldvargo4113
    @ronaldvargo4113 8 months ago +1

    Thanks for the video on the camera NVR. I would like to run this more standalone, with minimal automation integration. I use Hubitat, so the only thing I would push there is the ability to kick off a stream to the dashboard when front-door occupancy is detected.

  • @hsmptg
    @hsmptg 8 months ago +1

    Great video!
    Thanks

  • @goodcitizen4587
    @goodcitizen4587 8 months ago +7

    cat detection is actually very important

    • @drmosfet
      @drmosfet 8 months ago +1

      Fox 🦊 detection, not sure if AI can distinguish the difference?

    • @l0gic23
      @l0gic23 5 months ago

      Cat to ignore!

  • @norriemckinley2850
    @norriemckinley2850 8 months ago +1

    Excellent!

  • @MrTmorton77
    @MrTmorton77 8 months ago +4

    Maybe submit a PR to the frigate repo to get the listen address changed from 0.0.0.0 to "*"? Great content btw.

    • @apalrdsadventures
      @apalrdsadventures  8 months ago +5

      There's actually already an open PR to fix the nginx.conf in Frigate - github.com/blakeblackshear/frigate/pull/3497
      It's been stale for over a year at this point. It's fully ready to go, the admins just haven't reviewed it, and now it needs to be rebased since it took so long.
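      For reference, the kind of nginx change involved is roughly this (a sketch, not necessarily the PR's exact diff):
      # IPv4 only, what a plain port number binds to (0.0.0.0):
      listen 5000;
      # one dual-stack socket instead:
      listen [::]:5000 ipv6only=off;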

  • @skynetpostmaster134
    @skynetpostmaster134 1 month ago

    Greatly decreased the CPU usage. Thanks!

  • @GregZuro
    @GregZuro 8 months ago +3

    Thanks for the great video.
    Are you sure that you need to use a priv container? I have an unpriv one running docker and accessing the nvidia GPU just fine.

  • @berzerker2
    @berzerker2 7 months ago +1

    Love the video. I'd love to get into Proxmox and Caddy to get a reverse proxy set up. Do you have a tutorial for setting this up / what hardware you are running or recommend? I'm a noob and don't just wanna follow the recommended hardware from Frigate's documentation, as I want to learn more.

  • @johannesdormann6627
    @johannesdormann6627 8 months ago +3

    Great content and just at the right time. Thank you very much.
    Question: what is the setup with the mechanical keyboard and screen?

  • @eugserj9095
    @eugserj9095 8 months ago +1

    Hi, exactly what I was looking for - thank you for the video! Only one question: the host in the mqtt section of the Frigate config - that's your Home Assistant server, right?

    • @apalrdsadventures
      @apalrdsadventures  8 months ago

      It's whatever MQTT server you use with Home Assistant, so it could be HA OS itself if you are running an addon or something else if you are running HA Core.
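      As a sketch, the relevant section of Frigate's config.yml (host and credentials here are placeholders):
      mqtt:
        host: 192.168.1.10   # your MQTT broker - e.g. the HA OS box running the Mosquitto addon
        user: frigate
        password: changeme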

  • @samstringo4724
    @samstringo4724 8 months ago +1

    Really nice! I have been looking for a way to run this without Docker in exactly the same setup as you, with TPU and Proxmox. Thank you for that!
    I wonder if the process of passing through the iGPU (Xeon 1245 v6) is the same? Is it possible to use vGPU or do I need to PCI passthrough the hardware GPU to the container?

    • @apalrdsadventures
      @apalrdsadventures  8 months ago +1

      Containers don't do PCIe passthrough, since the drivers are all loaded by the host kernel. We instead need to find the driver nodes in /dev and pass those through. Since the driver is loaded by the host kernel, we can pass those driver nodes to as many containers as we want without vGPU (but not to VMs).
      But yes, an iGPU will work just fine, as long as it's enabled in the BIOS - some server boards disable the iGPU if they have onboard VGA for IPMI.
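      As a sketch, passing the nodes through in /etc/pve/lxc/<id>.conf with the newer Proxmox dev syntax (paths and the group ID are assumptions - check your host with ls -l /dev/dri and getent group render):
      dev0: /dev/dri/renderD128,gid=104
      dev1: /dev/apex_0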

  • @vb7913
    @vb7913 8 months ago +3

    Can you do a review on Firescrew? It looks very similar to Frigate.

  • @mikekane9734
    @mikekane9734 7 months ago +1

    Thank you again! What is your keyboard device?

  • @AndrewFrink
    @AndrewFrink 8 months ago +1

    I'd like to see more on Quadlet. I have a "project" I'm considering tackling using it: basically housing some Minecraft Bedrock servers via Podman. There doesn't seem to be a lot of info around how the .volume and .network files interact with the .container files.

    • @apalrdsadventures
      @apalrdsadventures  8 months ago +1

      I really like Quadlet so far, but I've only used .container files, not the rest. But they are all structured like systemd units and have the same inheritance / ordering / ... automagic that the rest of systemd has.
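      As a minimal sketch of a .container file (image tag and volume paths are placeholders, not my exact config):
      # /etc/containers/systemd/frigate.container
      [Container]
      Image=ghcr.io/blakeblackshear/frigate:stable
      Volume=/opt/frigate/config:/config
      Volume=/opt/frigate/media:/media/frigate
      AddDevice=/dev/dri/renderD128
      PodmanArgs=--shm-size=256m
      [Service]
      Restart=always
      [Install]
      WantedBy=multi-user.target
      Then systemctl daemon-reload generates a frigate.service you can start, stop, and journalctl like any other unit.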

    • @AndrewFrink
      @AndrewFrink 8 months ago

      @@apalrdsadventures Ahh. My use case is basically trying to template container creation so it's easy to add a new similar-but-different container.

    • @apalrdsadventures
      @apalrdsadventures  8 months ago

      Usually you can template it the same way you'd template a new machine, using a script or Ansible or the like.

  • @TheUkeloser
    @TheUkeloser 8 months ago +1

    This is awesome, I'm currently collecting parts for a new Proxmox 3-node cluster and plan to install Frigate on it. One question I had though - I also plan to run either Jellyfin or Plex on the cluster and want to pass through my render node (all nodes will be using Quick Sync). Is it possible to pass the renderD128 device through to multiple containers at the same time? Alternatively, can I use Proxmox HA configurations to somehow ensure that it tries to keep the two containers on separate nodes?

    • @apalrdsadventures
      @apalrdsadventures  8 months ago +1

      In Linux DRI/DRM, the 'cardX' node cannot be shared but the 'renderDXXX' node can be. The card node is the only one that can do display output, but as long as the app only needs off-screen rendering, you can share it between multiple apps. They will still need to share VRAM and GPU resources of course.
      You also need all of the nodes to have the same path to the render node on the host (since HA doesn't allow a different config for each node), so if they have multiple GPUs that could be a complication.

    • @TheUkeloser
      @TheUkeloser 8 months ago

      Thanks, that's what I guessed. None of my nodes will have separate graphics cards and will just be using the Quick Sync in the Intel CPUs, so I'm assuming they'll all have the same path.

    • @apalrdsadventures
      @apalrdsadventures  8 months ago +1

      Yeah, renderD128 is always the first card (card0), renderD129 the second (card1), etc.
      There are some quirks if a card exists and supports kernel modesetting (KMS) but not rendering - then you can end up with card1 = renderD128 if card0 doesn't support render.
      But in any case, with Intel CPUs with QSV they should all be renderD128 and all use the same opts (hopefully).
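      An easy sanity check on each node (paths per the standard DRI layout):
      ls -l /dev/dri/
      # expect: card0  renderD128
      ls -l /dev/dri/by-path/
      # ties each node to a PCI address, handy if a node ever has more than one GPU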

  • @happy9955
    @happy9955 6 months ago

    great

  • @junialter
    @junialter 8 months ago +1

    BTW what cams are you using that support IPv6? I've been searching forever for this.

    • @apalrdsadventures
      @apalrdsadventures  8 months ago +1

      I have a mix of cameras. The no-brand Aliexpress PoE cameras that I use for my 3D printers have a very web 1.0 look and are IPv4 only, but my name brand Dahua cameras that I use outdoors all do IPv6 (at least with static addresses), and are also 802.1X capable which is nice.

  • @rkbest9783
    @rkbest9783 7 months ago +1

    This was a good walkthrough that worked on my last setup. I recently migrated my LXC Frigate and TPU to PVE 8.1, and after following all the steps I am stuck at modprobe apex returning the error 'could not insert 'apex': Key was rejected by service'. What could be wrong? Kernel is 6.5.11-7 and headers are installed. ls -l /dev does not show apex on the host terminal.

    • @apalrdsadventures
      @apalrdsadventures  7 months ago

      You'll have to rebuild the apex module for kernel 6.5. For some reason, Google fixed the module (the kernel changed in 6.4) but never rebuilt their release, so you have to go to the gasket repo, clone it, and debuild. Blame them for not maintaining their releases.
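      A sketch of that rebuild, assuming the google/gasket-driver repo and Debian packaging tools (not exact commands from the video):
      apt install git devscripts dkms pve-headers
      git clone https://github.com/google/gasket-driver.git
      cd gasket-driver
      debuild -us -uc -tc -b
      dpkg -i ../gasket-dkms_*_all.deb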

    • @rkbest9783
      @rkbest9783 7 months ago

      @@apalrdsadventures Finally able to rebuild, after some weird issues with BIOS Secure Boot preventing the apex device from being discovered. Now, how do I find the frigate.container file, as I don't have it under /etc/containers...

  • @Zie1u
    @Zie1u 7 months ago +1

    Did you check how power consumption increased when Frigate is working? I heard it prevents idle mode due to constantly processing video from the camera via CPU, but that was tested without a Coral TPU, I guess. I have a home lab server that is idle most of the time, down to less than 20W. File server, HA, or other lab VM stuff are not always needed in my case. I am considering setting up a separate machine just to do Frigate + Coral if power consumption increases much. Did you investigate this topic more?

    • @apalrdsadventures
      @apalrdsadventures  7 months ago

      My system is very low power already, so it's not something I'm particularly concerned with given the TDP is only like 10W.

    • @Zie1u
      @Zie1u 7 months ago

      @@apalrdsadventures 10W on a machine that runs Frigate + Coral? That's good info for me. Thanks :)

    • @apalrdsadventures
      @apalrdsadventures  7 months ago

      10W for the CPU, about 30W at the wall with drives

  • @Trainz2950
    @Trainz2950 5 months ago

    This is awesome, I've been looking into setting up cameras at my house with a self-hosted solution ideally. What cameras are you using for this setup? I've been trying to research cameras and it sounds like there are some compatibility concerns with some, and others that like to "phone home"...
    Also, your website is very clean, do you use some specific app to create your pages? Something like Jekyll perhaps? Or maybe just raw html/css/js?
    Another thing that you might find clever. If you use `systemctl edit $SERVICE` instead of just nano'ing the relevant config file, you don't need to manually remember to `systemctl daemon-reload`.
    As an avid systemd fan, I'm surprised I haven't come across `.container` config files yet, do you have any reference docs anywhere or are they podman specific? I did do some googling but didn't find much. Very clever integration!
    Thanks for another cool video!!

    • @apalrdsadventures
      @apalrdsadventures  5 months ago

      Hello!
      I use cameras from Dahua, they are a few years old now. They are a mix of the DH-IPC-HDW2431TP and DH-IPC-HDW5831RP-ZE models. They don't phone home, support IPv6 and 802.1X port auth, and have worked very well for me. I also have a few cheaper no-name cameras that I use for my 3D printers and those ones do try to phone home all the time.
      My website is generated in Hugo, which is similar to Jekyll.
      .container files are specific to Podman Quadlet, under the hood it will dynamically generate a .service with the equivalent Podman Run command when it daemon-reloads. docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html is the docs for it, you can configure nearly everything in Podman via systemd files.
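      One debugging trick, since Quadlet is a systemd generator: you can ask it to print the .service it would generate (generator path may vary by distro):
      /usr/lib/systemd/system-generators/podman-system-generator --dryrun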

  • @Pimp4King
    @Pimp4King 7 months ago +1

    What is that screen/keyboard combo?

  • @thestreamreader
    @thestreamreader 8 months ago +1

    Can I do GPU passthrough for Frigate and then run a home desktop VM with the same GPU passed through? So one will be displayed to monitors (the home desktop), and the other will just be doing processing for Frigate.

    • @apalrdsadventures
      @apalrdsadventures  8 months ago

      You can share a GPU between multiple LXC containers but not with more than one VM.

  • @reallyjohnblack
    @reallyjohnblack 3 months ago

    If you set up CompreFace with Podman Quadlet in the same LXC as Frigate, can you use the same Coral device for Frigate and CompreFace at the same time?

    • @apalrdsadventures
      @apalrdsadventures  3 months ago

      You can share a coral device across multiple LXCs, although I'm not sure if the coral device itself supports being shared by multiple apps at the same time

  • @oskarma1801
    @oskarma1801 4 months ago

    Great tutorial! I managed to get Frigate up and running, but you kind of lost me with the whole Caddy thing. I have little experience with reverse proxies. Do I need a reverse proxy, and if so, what's its purpose there? Is it for remote access to Frigate? I have my own domain name at Cloudflare laying around. Can I use that for this proxy, and if so, how? And I'm planning on using a Coral USB - do you have any config for that instead?

    • @apalrdsadventures
      @apalrdsadventures  4 months ago

      Frigate doesn't natively have any security on the front end, it's just plain HTTP and on a nonstandard port. Caddy adds TLS to that, and optionally you can configure authentication if you want (I didn't in this video).
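      As a sketch, the whole Caddyfile can be as small as this (hostname is a placeholder; assumes Frigate is bound to localhost on its web port, 5000 here):
      frigate.example.com {
          reverse_proxy localhost:5000
      }
      Caddy then fetches and renews the certificate automatically.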

  • @TheWalrus_45
    @TheWalrus_45 7 months ago +1

    What hardware do I need to do this?

  • @autohmae
    @autohmae 8 months ago +1

    Have you considered doing apt pinning instead of a full upgrade ?

    • @apalrdsadventures
      @apalrdsadventures  8 months ago

      Debian testing is essentially a rolling distro, it's not their unstable branch (that's Sid).

    • @autohmae
      @autohmae 8 months ago

      @@apalrdsadventures I meant you don't have to upgrade, you can just pull in one package and its dependencies.

    • @autohmae
      @autohmae 8 months ago

      Something else to add about the Docker image: it's easily possible to make a Dockerfile which uses this image as a base and makes some changes to it. The only thing I've not seen yet is a simple update util which checks if a new base exists and does a rebuild (I would guess it exists, but I've not looked for it extensively).

  • @karloa7194
    @karloa7194 4 months ago

    Is Caddy meant to be installed on every host server for HTTPS?
    Can it be like a central reverse proxy for all the hosts, like NGINX or HAProxy?

    • @apalrdsadventures
      @apalrdsadventures  4 months ago

      I install Caddy on each service (if the service doesn't do HTTPS natively). I want all of my services to use standard ports and HTTPS, and if they don't do it natively, they need a reverse proxy to do it. Gitea for example does not have a Caddy proxy since it does HTTPS and Let's Encrypt natively.
      It's possible to do a single reverse proxy, and I do use HAProxy (in TCP mode, so it's not dealing with certificates) to break out incoming IPv4 connections, but the downside to that approach is the traffic from the reverse proxy to the backend service is unencrypted, meaning it needs more firewalling to keep traffic out of that path. By running the proxy on the same service host/container I can just bind Frigate to localhost and there is no more complication.

  • @jenesuispasbavard
    @jenesuispasbavard 24 days ago

    So can you pass through the only GPU in your system to a container? In my case I just have the Intel iGPU in my 13500.

    • @apalrdsadventures
      @apalrdsadventures  24 days ago

      Containers share the same kernel as the host. A GPU has two 'nodes' in /dev (render and card); multiple programs (even in different containers or directly on the host) can bind to the same render node to do rendering on the card. Only one can use the display outputs.

  • @williambravin1254
    @williambravin1254 6 months ago +1

    Sorry for my ignorance - why not install Frigate in a VM?

    • @apalrdsadventures
      @apalrdsadventures  6 months ago +2

      Much less overhead of the container vs VM, and being able to share hardware across multiple containers vs passthrough to a single VM.

    • @williambravin1254
      @williambravin1254 5 months ago

      @@apalrdsadventures Thanks a bunch for this reply. Individuals like you and your videos make it possible for individuals like myself to participate in and enhance these technologies.

  • @BogdanSerban
    @BogdanSerban 5 months ago

    Man that keyboard sounds so sexy

  • @lukasz_kostka
    @lukasz_kostka 8 months ago +1

    Which cameras do you use?

    • @apalrdsadventures
      @apalrdsadventures  8 months ago

      They are all Dahua PoE cameras, a few years old.

  • @195frog
    @195frog 5 months ago

    Hi guys, I did the installation of Frigate step by step with this awesome tutorial. Everything runs nicely on my Intel N100 board, also with iGPU and CPU acceleration; response time is ~10-13 ms, CPU load around 6% for four 1080p cameras. Now I am trying to update Frigate to version 13.0, but I don't know where to start. Has anyone succeeded with the update?

  • @mikedien3609
    @mikedien3609 2 months ago

    Serious question: when I have Reolink cams with PTZ and built-in person/animal/movement detection, auto tracking and auto recording,
    do I really need that Coral USB stick?
    My Proxmox system is a dual Xeon with 28 cores/56 threads and 64 GB RAM... I think that system does not need a USB co-processor.
    Am I wrong? Or does Frigate not run without that stick?

    • @apalrdsadventures
      @apalrdsadventures  2 months ago +1

      Frigate doesn't use any detection built-in to the camera. It can do detection on CPU, nvidia and intel GPUs (not currently AMD), or the Coral TPU. It can also do video decoding on CPU or GPU.
      The CPU detector does use a ton of CPU resources so any of the other options (TPU, CUDA, Intel OpenVINO, AMD ROCm is coming) are preferred. Decoding the video stream is also somewhat intensive, and nvidia / intel / AMD can all offload that fairly easily.
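      The choice lives in the detectors section of the Frigate config - a sketch, with the device value as an assumption for a PCIe Coral:
      detectors:
        coral:
          type: edgetpu
          device: pci      # or 'usb' for the USB Coral
      A CPU fallback would be type: cpu, and Intel GPUs use the openvino detector type.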

    • @mikedien3609
      @mikedien3609 2 months ago

      @@apalrdsadventures
      Perhaps I expressed myself incorrectly.
      I already have a Shinobi NVR VM running, with 4 Reolink outdoor E1 Pro cams. The "slightly more expensive" Reolink cams have
      auto tracking and person recognition built in. Shinobi only takes the RTSP stream and displays or records it. And I can control the cams live via ONVIF.
      I just want to compare Shinobi and Frigate and decide which one I want to continue using.
      If the cams can already track/recognize out of the box, why does Frigate have to do it again, with additional hardware?
      According to the Frigate documentation, my cams are fully supported.
      So do I still need the Coral TPU? Is it rather necessary for less performant servers like Raspi 5 or N100 CPUs, or always?
      And while I'm at it, what my Reolink cams can't do (only the very expensive models can) is the patrol run... i.e. monitoring a large area by panning (left/right) the camera 24/7.
      Can Frigate do that? I.e. let the cam scan a wide area?
      Kind regards

    • @apalrdsadventures
      @apalrdsadventures  2 months ago +1

      From Frigate's perspective they want a consistent experience and dataset for any camera without dealing with the API quirks of each manufacturer.

    • @mikedien3609
      @mikedien3609 2 months ago

      @@apalrdsadventures OK, I understand. That does not really answer my questions, but since Shinobi runs well on Proxmox without those docker/portainer quirks, and without the extra cost of a Coral TPU, I will stick with Shinobi 😉

  • @tollertup
    @tollertup 8 months ago +2

    I don't get it. Are you using a laptop with full-size Cherry switches? But you said you recently rebuilt your workstation =/= laptop? Why does your keyboard(?) have a screen? Explain!

    • @apalrdsadventures
      @apalrdsadventures  8 months ago +2

      It's a Kwumsy K3, so it's part of the workstation setup now

  • @thesuyashrai
    @thesuyashrai 7 months ago +1

    Hey man, great tutorial! However I'm kind of stuck with the PCIe driver. I'm using a Coral PCIe M.2 TPU and it's detected by the Proxmox host:
    lspci -nn | grep 089a
    09:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]
    However, trying to pass it through into an LXC, the device node never shows up on the host:
    user@server:~$ ls /dev/apex_0
    ls: cannot access '/dev/apex_0': No such file or directory
    Secure Boot is disabled.
    PVE headers, gasket and apex are installed as well.
    Kernel is 5.15.102-1-pve
    Proxmox 7.4-3
    Can you please give me some directions?
    Thanks!

    • @apalrdsadventures
      @apalrdsadventures  7 months ago

      Did the dkms build succeed? (`dkms autoinstall` will try to rebuild and tell you if there are errors.) pve-headers will by default install the latest, and it's likely you don't have the headers for 7.4-3. I believe the latest 7.x release was 7.4-15 and its kernel version is 5.15.108-1-pve afaik, so pve-headers will install the headers for that version. So update to the latest 7.x, reboot (so the new kernel is running), and that should make dkms happy.

    • @thesuyashrai
      @thesuyashrai 7 months ago

      @@apalrdsadventures Hey man, thanks for your reply! Actually as per your blog post, I upgraded everything to avoid any issues.
      apt update
      apt install pve-headers -y
      apt full-upgrade -y
      Then I rebooted and fired: dkms autoinstall (which didn't throw any errors - the build was successful)
      As per that:
      ---------------------------------------------------------------------
      Proxmox: 7.4-17
      Kernel: 5.15.131-1-pve
      PVE headers: 7.4-1
      After which I fired:
      modprobe apex
      modprobe gasket
      Which didn't throw any output.
      Here are some of my confs and outputs:
      GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset vga=794 disable_idle_d3=1 amd_iommu=on iommu=pt vfio_iommu_type1.allow_unsafe_interrupts=1 vfio-pci.ids=1ac1:089a kvm.ignore_msrs=1 pcie_aspm=off pcie_port_pm=off pcie_acs_overrride=downstream"
      ---------------------------------------------------------------------
      root@pve:~# lsmod | grep apex
      apex 28672 0
      gasket 122880 1 apex
      ---------------------------------------------------------------------
      root@pve:~# lspci -nn | grep 089a
      09:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]
      ---------------------------------------------------------------------
      root@pve:~# ls -al /dev/apex*
      ls: cannot access '/dev/apex*': No such file or directory
      ---------------------------------------------------------------------
      Apologies for making it a mess here. Consider me kinda desperate. I'm trying to have AI face detection for 5 of my home security cameras and make automations based on whose face I detect. I'd highly appreciate your support on this!
      In any case, thanks for your help! Love your work :)

    • @apalrdsadventures
      @apalrdsadventures  7 months ago

      It looks like the driver won't load because it's already bound to vfio-pci? Try removing that argument from the cmdline and reboot

    • @thesuyashrai
      @thesuyashrai 7 months ago

      ​@@apalrdsadventures Hey man, sorry. Tried that as well - removed it, ran update-grub, and rebooted. Still no apex_0. No idea what's going wrong. How come lspci -nn | grep 089a
      detects that the device is present, and still no /dev/apex_0?

    • @apalrdsadventures
      @apalrdsadventures  7 months ago

      lspci -k will tell you what kernel module is currently using it. Apex is loaded in the kernel, so some other driver is taking priority.
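      A sketch of that check, reusing the PCI address from your earlier lspci output:
      lspci -k -s 09:00.0
      # the 'Kernel driver in use:' line shows what grabbed the device; if it says
      # vfio-pci, apex can't bind until that's unbound or removed from the cmdline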

  • @anthonypondepeyre5497
    @anthonypondepeyre5497 7 months ago

    Hi, I have a problem with the directory /etc/containers/systemd/ - when I copy this info into frigate.container, it doesn't write the file.
    The error is [ Error writing /etc/containers/systemd/frigate.container: No such file or directory ]
    Thank you

    • @apalrdsadventures
      @apalrdsadventures  7 months ago

      The directory should be created automatically, but you can `mkdir -p /etc/containers/systemd/` to create it if it didn't get created by the podman install.

    • @anthonypondepeyre5497
      @anthonypondepeyre5497 7 months ago

      @@apalrdsadventures
      thank you, it's OK now, but I can't get the container running.
      The log:
      systemctl start frigate
      Job for frigate.service failed because the control process exited with error code.
      See "systemctl status frigate.service" and "journalctl -xeu frigate.service" for details

    • @jandzban1042
      @jandzban1042 7 months ago

      Same here. I'm stuck with the same error; journalctl shows that /tmp/cache does not exist, but that is not true. Any ideas what's wrong?

    • @d0rk4ge
      @d0rk4ge 6 months ago

      @@anthonypondepeyre5497 I'm also having this error. I tried several new LXCs to make sure I didn't miss anything, but it happens every time.

  • @ewookiis
    @ewookiis 3 days ago

    The mess of documentation... Frigate might work, but getting it to work - without buying into exactly every single piece of HW listed - is just a big bad hassle. The docs are just a big bunch of contradictions all over, and the source is not building correctly... amazed that it has such a big following.

  • @PeterRichardsandYoureNot
    @PeterRichardsandYoureNot 6 months ago +1

    Unleash your cameras with Frigate... as long as you have these 4 or 5 other platforms running, have integrated these 20 profiles and systems, and just happen to have the right cameras.

  • @MarkConstable
    @MarkConstable 8 months ago +6

    I was all good up until you mentioned... Docker. Not sure why you wouldn't install it directly within the LXC container. If the answer is that it is ONLY available locked up inside a Docker container, then it's something I'll never look at deploying. My loss.

    • @apalrdsadventures
      @apalrdsadventures  8 months ago +8

      It's unfortunate when software isn't deployable using normal methods (i.e. compiling from source, or deb packages, or really anything else).

  • @larsla
    @larsla 8 months ago +1

    If you have a USB coral device it's pretty easy to just pass through the USB device to the LXC container and then into the Frigate docker container.
    I don't remember which setting does what, but my /etc/pve/lxc/123.conf has:
    # bind-mount the USB bus the Coral sits on into the container
    lxc.mount.entry: /dev/bus/usb/003/ dev/bus/usb/003/ none bind,optional,create=dir 0,0
    # char major 189 = USB bus devices
    lxc.cgroup2.devices.allow: c 189:* rwm
    lxc.apparmor.profile: unconfined
    lxc.cgroup2.devices.allow: a
    lxc.cap.drop:
    lxc.mount.auto: cgroup:rw
    # char major 195 = NVIDIA, 226 = DRM (/dev/dri); 243 falls in the dynamically assigned range
    lxc.cgroup2.devices.allow: c 195:* rwm
    lxc.cgroup2.devices.allow: c 243:* rwm
    lxc.cgroup2.devices.allow: c 226:* rwm
    # GPU render/card nodes and framebuffer for hardware decode
    lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
    lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file

  • @carlogiga
    @carlogiga 8 months ago +1

    Wow. What kind of laptop is that?

    • @apalrdsadventures
      @apalrdsadventures  8 months ago +1

      It's a Kwumsy K3, it's a display and keyboard for the workstation

    • @carlogiga
      @carlogiga 8 months ago

      @@apalrdsadventures thanks