Hi Beardedtinker, how do I perform an upgrade of the Docker HA Core version, and will it blow away my configuration files or current integrations? Do you have a video on how to do this by chance? Is the CLI or the GUI a better option to perform the upgrade?
Excellent question! I have a video on installing Watchtower, and it will do automatic updates of all of your Docker containers - but sometimes this is a bad idea, because it will do it whether you want it or not :)
To upgrade, I use Watchtower too, but with a special flag - run once. This way it will not constantly watch and upgrade; instead it will run just once, upgrade and then shut down. It is run in the CLI, but very simple to use:
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once homeassistant
If you want to upgrade any other container you may have, just replace homeassistant at the end, or add to it - it can upgrade multiple containers in one go.
Hi Bearded. Thanks for all your videos. I was wondering whether, in all your research on this, you considered the Linuxserver version of Home Assistant, because you can nominate a specific user rather than root? What do you think about having a dedicated homeassistant user? Not sure if this will impact the connection of a Zigbee dongle or anything else in DSM 7? Any advice?
Going one step further, what about a dedicated user for each container, or for a group of containers?
Not sure if you are thinking about an internal Docker user or the user needed to run Docker.
But in both cases, no. This would require too much tinkering with the Docker image. Also, if I'm not mistaken, Docker doesn't need the root account. It can run with another admin account, but it does require elevated privileges.
You can check the Docker documentation for the full explanation why.
Thanks for your video :) I have a Synology DS220j and I didn't find Docker in the Package Center to download... do you have any suggestion? Thanks a lot
Officially, it's not supported on that platform due to its architecture. But you can check this link if you feel up to some heavy lifting:
cynarski.eu/docker-on-synlogy-32bit/
Hi BT, a question... how do you update HA Core 117 to the new 118.2, if running HA inside Docker on a Synology? Inside HA I can see that there is a new update, but when I click "update" nothing happens.
There are a couple of ways to do it. I use Watchtower to keep containers up to date. But your best bet is to use Watchtower with the run-once flag (--run-once):
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once home-assistant
That should be it, I think - it will download the Watchtower image, run it once to update the Docker container named home-assistant and exit after the update. You could create a script for that, or even create a Task on the Synology to run it once every day, week, or whenever you need it.
The other way would be to stop the container, remove it, download the new image and run the command you used the first time to download Home Assistant.
Hi, I love your channel! Can you please help me with getting remote access? What are the steps to getting remote access to Home Assistant on Synology? I tried to do this with a reverse proxy and forwarded the port to 8123, but without success. Can you please show me the steps? Thank you very much!
I'm using the reverse proxy on my Synology. The external port is 443 (with a certificate from Synology) and it is forwarded to internal port 8123. BUT, you need to add a line with trusted_proxies to the configuration.yaml file:
www.home-assistant.io/integrations/http/#trusted_proxies
I have set it to my internal network range, not just one IP address.
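For example, assuming your LAN is 192.168.1.0/24 (adjust that to your own range), the http: section could look something like this:
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 192.168.1.0/24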
@@BeardedTinker Wow, that worked, many thanks! But I run into a next issue when I use an iframe (e.g. Configurator): I receive the message "Unable to load iframes that load websites over http if Home Assistant is served over https".
Do you have a solution for that?
When logging into my Synology using PuTTY I cannot see any of the folders I have in File Station - all I see is the drive folder when using the ls -1 command. My account is also in the Admin group on the Synology. Any idea why I am not seeing them?
You can try typing "cd /" and then "ls -l" to list everything. When you log into the machine, it positions you in your "home" folder.
Thanks for the quick response, I also tried this and this is what I got:
Justin.Stockert@JustinNAS:~$ cd /
Justin.Stockert@JustinNAS:/$ ls -1
bin  boot  config  dev  etc  etc.defaults  initrd  lib  lib32  lib64  lost+found  mnt  opt  proc  root  run  sbin  sys  tmp  tmpRoot  usr  var  var.defaults  volume1  volumeUSB1
I still can't see all of my folders in File Station?
@@justinstockert8079 all the folders that you see in File Station are inside the volume1 folder - cd /volume1 and they will be there.
Hi, great videos, keep up the good work... What is the best way to manually update home assistant in Synology Docker container without losing any configuration files? Can you do it through Synology UI or does it have to be command line? Can you post commands/instructions please, Thanks.
Thank you very much for your comment!!! I use Watchtower for updates. They can also be done via the Docker app, but I find that less intuitive than the following command:
sudo docker run --name="watchtower" -itd --restart=always -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower home-assistant
Just type/copy this in a terminal when connected to the Synology and it will run Watchtower and update the home-assistant Docker image. Please just check whether you named your instance homeassistant or home-assistant - the last parameter in the line is the name of the Docker container you want to update.
@@BeardedTinker I used to use Watchtower but have since uninstalled it - will that command still work without having Watchtower installed?
It should - BUT wait a second :) I copied the wrong command. That one will install Watchtower and keep it running always, while updating only HA. Wait, there is another option - run once. Give me a second
EDIT: Here it is:
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once home-assistant
And after it finishes the update, Watchtower will close itself.
@@BeardedTinker I ran the command just now and it seems to have done something, but my HA is still showing the old version? Does it take a while to execute? Below is my PuTTY output:
run-once home-assistant
Password:
Unable to find image 'containrrr/watchtower:latest' locally
latest: Pulling from containrrr/watchtower
3ed6cec7d4d1: Pull complete
c0706b5a6b44: Pull complete
6eae408ad811: Pull complete
Digest: sha256:ff3ba4f421801d07754150aee4c03fdde26a0a9783d66021e81c7097d6842f25
Status: Downloaded newer image for containrrr/watchtower:latest
time="2021-01-22T17:49:22Z" level=info msg="Running a one time update."
Hello, I'm using the community hassio package on my XPenology. Currently the Supervisor reports an unhealthy install and blocks updates and any integration install. The reason is the Docker version: the official DSM package is 18.09.0-0513, while the minimum supported Docker version for hassio is 19.03. There are several instructions on how to update the Docker package to a newer version, but it's pretty complicated. Would it be possible for you to make a video instruction on how to update Docker on Synology?
Hi Dmitry! Unfortunately, I'll not be making a video on that. The update that's available on GitHub is too risky to do except as an experiment. A lot of people who did it experienced problems with containers, and some couldn't run hassio with it at all. The biggest problem is that you could brick your Synology/Xpenology. Check my video from last week to enable updates in hassio.
Hi, first of all thank you very much for all of your videos, they have been very helpful for a Home Assistant beginner like myself. There is one question I couldn't figure out yet. With the version shown in the video, as far as I can see, there is no built-in way of creating a backup? This version doesn't have a snapshot function like the one on a Raspberry Pi. Until now I was mostly playing around with my instance and trying to learn; now I want to build a productive environment. Ideally I would like to run it on my Synology for now, but as I'm not sure how this will expand, I might want to switch to a dedicated device for my productive instance (Raspberry Pi or NUC) later. What's the best way of installing my productive instance on a Synology so that I can easily create backups/snapshots and move all of my config to a new device (e.g. NUC) if I decide to do that later on?
Is it also possible to move my current config to that new productive environment?
Thanks in advance for your help!
Regards
rvst1
Ok, so here is what you can do. First, you can skip the HA Core version (where you install everything by hand) and go for hass.io - it has all add-ons available and also includes the snapshot functionality. That is of course the easiest way to install and use it. I just moved my production environment to this and so far it's working great!
But you can of course stick to the Core version and make a script that copies/zips the contents of the /volume1/docker/home-assistant folder.
That's all you need, as all configuration (both YAML and other) is stored there.
When I was moving from Core to hass.io I just copied the contents of the folder and everything was up and running with my last configuration.
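For example, a minimal backup script could be a single line like this (the backup path is just an example - adjust it to your setup):
tar -czf /volume1/backup/home-assistant-$(date +%Y%m%d).tar.gz -C /volume1/docker home-assistant
You can put that into a Task Scheduler task so it runs on a schedule; it also picks up the hidden folders like .storage since it archives the whole directory.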
@@BeardedTinker Thanks a lot for your answer. So I would need to install hass.io as you've shown it in episode #22 to have the snapshot function available and therefore more flexibility in the future. I saw your pinned comment under episode #22 (Deprecating Home Assistant Supervised on generic Linux). Do you think that's something to worry about, even though it's on hold for now? Thanks in advance!
At this point, the decision has been postponed. And if it does go through, you still have the option to go back to HA Core, or to an RPi/NUC for the Supervised version. The worst thing that can happen is that you'll need some more time to convert it to something else.
Question on HASS and Watchtower. Your instructions don't call out creating special paths for HA data (such as /config). So for my /config volume it looks like it's using the default value of /volume1/docker/home-assistant. If I'm using Watchtower and it updates HA, will I lose all of its data? I recently had this happen with another container that was using defaults (i.e. the configuration/data was stored IN the docker image I guess?) and I want to avoid that happening here. So two questions:
1. Is /volume1/docker/home-assistant (or a similar path for other containers) a persistent location that would be unaffected by Watchtower deleting and recreating/updating the container image?
2. Where IS a container image stored normally, if not in docker/containername? I'd just like to understand how I can tell whether a container's data/config is being stored in a safe place, to protect against image destruction/corruption/deletion.
I am just learning Linux so I have lots of questions :) Thanks!
Hi Santiago! By mapping a Synology folder to an internal Docker folder - such as /volume1/docker/home-assistant - you are creating a persistent folder where files are retained even if you completely remove the Docker container and the Docker image. The Docker image is stored in a different path; the mapping exposes an "external" Synology path/folder to the container, and also exposes the internal container path to the host Synology system. It's very similar to mapping a network path on your local PC, where you see remote files as "local" files.
Watchtower does a very simple task - it remembers the parameters of the Docker container, downloads the new image, removes the old one and recreates the container with the same parameters as before. Your files, if mapped to a local folder, should all remain intact.
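For reference, the mapping itself is just the -v parameter on the original docker run command - something along these lines (image tag and paths are only an example):
sudo docker run -d --name=home-assistant --restart=always --net=host -v /volume1/docker/home-assistant:/config homeassistant/home-assistant:latest
Everything the container writes to /config then ends up in /volume1/docker/home-assistant on the Synology.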
@@BeardedTinker Thanks much for the reply. I think my confusion was whether the /docker/home-assistant path might actually be referring to the image itself rather than a persistent location. Also I assume that for this to work you need to know what path the app/container is going to use for storage of its data, and it seems that this varies across apps sometimes. In any case it looks like I'm safe. Looking to avoid having Watchtower kill my config like it did for another container over the weekend :). Thanks!
I installed it just now on a test setup, but that was on a virtual DSM 7 BETA (with Docker 18.x.x, not the new 20.x.x) and it's working. One thing to note is issues with USB devices, just in case you want to use a USB controller for a Zigbee or Z-Wave network. For now it doesn't work out of the box and requires a bit of hacking to get it working and supported.
@@BeardedTinker thanks. I updated manually to the new Docker version on DSM 6.x and I'm using a ZWave stick. I hope there will be a solution to mount USB-devices on a container.
Yes, that's been out for some time, but it is still not a permanent fix - the script has to be run on every boot of the system. Hopefully there will be a more permanent solution in the future.
Hi Karl! The easiest way for me is to run the following command in a terminal:
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once home-assistant
This will install Watchtower and run it only once to update the container named home-assistant.
The other option is to do it via Docker in Synology DSM. Go to Docker Registry, locate the image you want to update, click download and choose the tag "latest". The download will replace the former image with the new one. When the image download is complete, stop the container, click "Clear" and then "Start". This should be it.
The docker image name "homeassistant/home-assistant" (without any tags) actually refers to the "latest" tag of the image which I would not suggest for production usage. You should use "homeassistant/home-assistant:stable" to use the latest stable image.
I don't think you are correct in this case. While there are differences between latest, stable, edge, dev and beta tags for Docker containers, in this (HA) case latest and stable are the same (even the current beta release is the same as latest/stable at this point). You can check all available tags here and just compare the SHA: hub.docker.com/r/homeassistant/home-assistant/tags
Also, I've been running the latest tag for the last 2+ years and so far haven't had any issues unless there is a serious bug in a release. If you are still not convinced, you can check by installing latest and noting the release version in the info page, then installing stable and comparing the results.
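If you want to verify it yourself from the command line, you can compare the image digests - something like:
sudo docker pull homeassistant/home-assistant:latest
sudo docker pull homeassistant/home-assistant:stable
sudo docker images --digests homeassistant/home-assistant
If both tags point to the same digest, they are literally the same image.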
@@BeardedTinker Okay, yeah, for HA it seems that you're right, they do also tag every "stable" build as "latest", so as a result you fetch the very same images. But there are a LOT of other vendors/teams which tag even every nightly build as "latest", whereas "stable" releases come far less frequently. So let's call it "a good habit" to always use stable images (if you're setting up a production server) instead of the latest ones 🙂
Dear BeardedTinker, first and foremost I want to thank you for your cool and interesting channel on YT - certainly a source of help and inspiration for me, which has often helped with setting up my Home Assistant. At the beginning of this year I saw your video on installing Docker on Synology and installing Home Assistant in it. At the moment I have also applied this setup, but in HA I get a message from the Supervisor that I am running an unsupported installation. I suspect the Docker version 18.09.8 (which is the latest with Synology) is too low for HA, which requires minimum version 19.03.0. Question: do you also have this problem, and do you know whether it is possible to upgrade Docker via e.g. PuTTY, and with which commands? Thanks in advance for an answer. Benny R.
Hi Benny and thank you very much for the kind words!!! In regard to your question - yes, I also have this error in the log file, and that's a known issue. The problem is that Synology is using their own adaptation of Docker, so we can't do anything to update it except nag Synology to finally release a new version. There is this: community.synology.com/enu/forum/1/post/131600
I haven't tried it because there are (I think still not fixed) some bugs that would create more headaches than the upgrade would be worth. So we are left with two options - wait for the Docker 19.03 version, or wait for DSM 7, which should have it but will break Home Assistant (the Supervised version).
@@BeardedTinker Thanks for your reply, then we'll wait for DSM 7 and see what happens. I am thinking about running Docker and HA on a Raspberry Pi 3B but I am not sure yet... another device that we need to maintain ;-)
If you decide to go the RPi way you don't need Docker - there is a native version with HassOS. But be careful with the SD card, they can sometimes fail due to too many writes.
Great tutorial! I have a problem getting to the root directory and instead I am in the folder which is one level down. I thought I had admin privileges. Do you have a solution for this?
@@BeardedTinker Thanks for the reply. I have since managed to change to the root directory using the command "sudo -i", but I still have a problem with installing it. It seems that I had a problem with --restart on-failure. Sorry, I don't know Unix commands at all hahah
Just curious - why don't you use the Synology Docker app? I know Docker and I know the CLI, but on my Syno I always use the GUI to speed up the whole configuration process.
Great question and thanks for asking! Since I have a bunch of Docker containers running, I just got used to the CLI because of the limitations Docker on Synology has. For example, you can't map folders that are not visible in File Station, and some of my containers use folders in the root of the Synology. But you are correct, you can get Home Assistant up in the Docker GUI. The problem could come later with Zigbee2MQTT, which requires altering /dev/ permissions, and doing that in the GUI can be challenging if not impossible. And also, since I keep all of my commands as a backup, it's just copy/paste to recreate a container if needed.😀
If you are not using ZigBee or Z-Wave you can do it from UI. But some configuration parameters are not available from UI (for ex. mapping devices, mapping folders that are not under /volume path,...)
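For example, passing a Zigbee or Z-Wave stick through to the container needs an extra parameter on the docker run command, something like this (the device path is only an example - check yours with ls /dev/tty*):
--device /dev/ttyACM0:/dev/ttyACM0
and that kind of device mapping is one of the things you can't set from the Synology Docker UI.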
@@philou1516 you can't make snapshots like that in the Docker version, but it's enough to just archive (zip) the contents of the configuration folder and all subfolders (including the hidden ones starting with "." - for ex. .storage). Add-ons also can't be added through a Supervisor (there is none in the Docker version), but the same services can be installed by hand as separate containers and work OK. I've run it like that for a long time - now I have it in a Virtual Machine on the Synology.
The negative side is that it uses a bit more resources, but the positive side is that it's a completely independent installation, with its own IP address, and everything is supported - add-ons, Supervisor,...
In HA Core (the Docker version) there is no snapshot functionality. But if you save your home-assistant folder with all hidden files and folders, you can use that to restore. This is also more or less the same as how a snapshot works - it saves individual folders into a zip file.
All ok for me except accessing the Add-on store via the Supervisor - I get an endless spinning wheel. I tried the recommendations in the comments, i.e. restarting the package, uninstall/reinstall, clearing the supervisor container. Is there a way to add the Supervisor to the standard homeassistant Docker image to access the store? Note I am using an old DS412+ with DSM 6.2.3, and I already run Homebridge in Docker. These are the errors in the log:
"20-10-15 12:27:55 WARNING (MainThread) [supervisor.updater] Can't fetch versions from version.home-assistant.io/stable.json: Cannot connect to host version.home-assistant.io:443 ssl:default [Try again]"
Hi! No, you can't add the Supervisor to the Home Assistant Docker image. Looking at the error, do you maybe have the firewall on the Synology turned on? If yes, can you try disabling it? It looks like Docker is unable to resolve the DNS request to the home-assistant site - most likely a network issue on your side.
@@BeardedTinker Thanks for that. A FW rule was added during installation and disabling the FW has no effect. But you made me think of the Docker container configuration. I notice the hassio package installs containers in bridge mode and not host mode like you recommended for the standard Home Assistant package (--net=host). Does your hassio Docker installation also use a bridge?
@E C Ok, here is how it should look:
bridge: hassio_supervisor
hassio: hassio_supervisor plus all add-on (hassio_*) containers
host: hassio_multicast & homeassistant
So this is how the containers should be divided between the Docker networks. You can try the fix for the Docker network, maybe that can help - here is a link to the post that contains it: community.home-assistant.io/t/hass-io-on-synology-dsm-native-package/125559/2?u=beardedconti
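To compare that against your own setup, you can list the Docker networks and see which containers are attached where - for example:
sudo docker network ls
sudo docker network inspect hassio
sudo docker network inspect bridge
The "Containers" section of the inspect output shows what is attached to each network.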
Hi @@BeardedTinker, I realise I should have been commenting on video #22 maybe you can transfer the thread? I succeeded by complete uninstall, disable firewall and then a fresh install. I restarted the hassio package once just to make sure and everything is fine. The key was a comment to disable firewall before installation. Thanks for your great feedback!
Hello, Bearded! I see a critical error for HA in Synology Docker: Docker version 18.09.8 is not supported by Supervisor! As I understand it, it is necessary to downgrade from Docker version 18.09.8 to Docker version 18.06.1. Is that correct? Please, could you write the command for the downgrade while keeping HA and the other containers? Thank you
Privet Andrey! Just to check what you have - is it Home Assistant Core (running in Docker independently), or are you using Home Assistant Supervised (the community package)? I can't verify the Docker version right now, I don't have access from where I am. Hassio/Supervised is working with 18.09.0-0513, as this is the latest that is available to me.
I cannot understand how to enable https for Home Assistant. Using the console I can check that the current Synology certificate files are copied to the HA container. Then in configuration.yaml I add:
http:
  ssl_certificate: /certificate/fullchain.pem
  ssl_key: /certificate/privkey.pem
But if I try to access HA using https I get PR_CONNECT_RESET_ERROR. So it is not clear how to enable SSL for HA, and then how to make an automatic redirect from http to https?
Hi Ilija! To be sure, I would recommend that you first copy the certificates to the folder where you are keeping your HA files, and in configuration.yaml add the following lines:
http:
  ssl_certificate: /config/fullchain.pem
  ssl_key: /config/privkey.pem
I think (but am not 100% sure) that the problem is the path to the certificate. You can also check this video here: th-cam.com/video/yHN8XN14xHg/w-d-xo.html
Hi @@BeardedTinker. Thanks for your help. The problem was that I didn't map the certificate folder. Your support helped me to set up https, but now I can only open my HA using mydns.com. If I try to log in using the local IP address, then after entering credentials on the /auth page I'm redirected to the /lovelace page with a big HA logo in the center and a RETRY button. But I don't want to be obliged to always use the external DNS name when I am on my LAN.
@@ИльяПопов-т4н glad that you got the first part working. Did you configure the internal/external URL in the web UI? For external, leave mydns.com, and internally use IP_ADDRESS:8123. Not 100% sure, but this should work.
If I try to log in on IP_ADDRESS:8123, then after entering credentials on the /auth page I'm redirected to the /lovelace page with a big HA logo in the center and a RETRY button. And if I try IP_ADDRESS:8123 the webpage doesn't load at all - instead I just see ERR_EMPTY_RESPONSE.
For secure login from a remote location, did you try an OpenVPN server with TLS handshake? Then you use only one port, and if someone tries to connect to that port they must provide the TLS handshake and the OpenVPN cert, else the server will refuse the connection. Update: or use a reverse connection on the Synology NAS.
Great question Neno! No, but I'll play a bit with it in the next few days. If I use something for VPN I normally go with L2TP/IPsec, but in my setup I don't even use that. I would love to be able to protect my whole setup with YubiKey 2FA but, Synology... still no support. Then there would be the issue of Home Assistant, which (my personal opinion) is still lacking any kind of security (fully integrated user tiering, groups). I did hear something about OAuth at yesterday's State of the Union for HA, but I was doing 5 things, wasn't concentrated and missed a lot. OpenVPN should be the best of the 3 options available on Synology. Too bad they removed the advanced behaviour analytics they had in Synology.
@BeardedTinker Can you please help with configuration.yaml and the http section in it? I have a DuckDNS domain plus a valid Let's Encrypt cert, the port is forwarded on my router, and Chrome accepts my certificate for my other forwarded connections (e.g. to the https Synology service - so I know it's working), but I'm not sure how to adjust the HA yaml file to accept https and use my certificates. I installed HA and the rest in Synology Docker as per your videos. Only the http connection (my-domain.duckdns.org:port) shows me the HA login page; https ends with an error page. Synology is set to use the Let's Encrypt cert as default. I tried to uncomment the HA yaml http section, without success:
http:
  base_url: my-domain.duckdns.org:port
(my-domain & port scrubbed) Any ideas please?
One way you can do it (maybe not the cleanest, but it works) is to copy the certificate into the Home Assistant folder (let's presume you used the same folder structure as in the video):
cp /usr/syno/etc/certificate/system/default/* /volume1/docker/home-assistant/
You can create a Task in Task Scheduler in Synology Control Panel to copy it every day at, for example, 2 AM. That will copy the cert, chain and privkey.pem files and make them visible to HA. Adding the following to http: should be sufficient:
  ssl_profile: intermediate
  ssl_certificate: /config/fullchain.pem
  ssl_key: /config/privkey.pem
And as you wrote, base_url: should be the domain name without https in front - just the name with the port. Let me know if this works for you.
@@BeardedTinker Great, thank you, that actually worked like a charm :) Now I can start to play with HA. Does it mean that I lost local access via LAN? Do I have to go via the DuckDNS URL / internet? Or is there a way to set up http for LAN access and https remotely?
That's great!!! There should also be another way - mapping the certificates into the Docker container. It would be cleaner and wouldn't need any extra command or scheduled task, but you would need to recreate the Docker container from scratch (all your settings should still remain).
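If you try that route, the idea would be an extra read-only volume mapping when you recreate the container, something along these lines (the target folder inside the container is just an example - I haven't tested this exact mapping):
-v /usr/syno/etc/certificate/system/default:/config/certs:ro
and then point ssl_certificate/ssl_key in configuration.yaml at the files in that folder.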
Hi, thanks for this very nice video. I have tried to implement it; it went smoothly but does not really work as expected at the end. First of all, after I have done docker run, docker ps shows the image and the container. However, in the Docker app on the Syno I only see the image and no container :-( Looks like I'd have to recreate a container from the image, which is probably not the thing to do, as docker ps does show the container...
Second issue: I have an RFXCOM device plugged into the Synology (on /dev/ttyUSB0) as well as a CC2531 sniffer (on /dev/ttyACM0). Home Assistant did not see these devices, and when I tried to install the RFXCOM integration it told me /dev/ttyUSB0 was not correct. So I searched the net and stopped the container, but could not redo docker run until I did docker rm. => Was that the right thing to do, or is there a simpler/nicer way to stop and restart with a new set of options? I have actually added --device /dev/ttyUSB0 --device /dev/ttyACM0 to the docker run command, and then could install RFXCOM as a local serial device on /dev/ttyUSB0. => Something you could add to your great tutorial.
Still, in HA I cannot get the RFXCOM to see any device, and I now have to see how to make the Zigbee stick visible. I have installed MQTT successfully, but no new device is discovered on the Zigbee sniffer :-( Sorry for the length of this comment and for all the questions, but I'd appreciate some help if possible :-). Thanks.
Hi Michel! Let me try and answer what I know :) In regard to no container - after docker run, it should be running. If it's not, you maybe missed an error when the Docker container was created; that's what usually happens - maybe a typo in the folder mapping or something like that.
For RFXCOM I can't help, I don't have it, but a CC2531 I do have. I never added it to HA, as it is not needed inside HA, but I do see it in the Supervisor (I'm using the hass.io SynoCommunity package - unsupported - to run HA Supervised). Docker works in such a way that you "can't" edit the configuration; you have to remove the container and recreate it with the configuration that you want, so that part is ok.
Are you planning to use the internal Zigbee support in HA - ZHA? I don't think it supports this stick. MQTT is just a broker, it doesn't know what to do except accept and send messages. For Zigbee to work, you need Zigbee2MQTT - it connects on one side to the stick (/dev/ttyACM0) and on the other side to MQTT. MQTT is then a "database" (this is a very inaccurate description) that accepts messages and information from the Zigbee network and makes it available to Home Assistant. Hope I managed to answer you, or at least help you get in the right direction 😃
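If you go the Zigbee2MQTT route, it runs as its own container; a starting point could look something like this (the paths and device are just an example, and I haven't verified this exact line on DSM):
sudo docker run -d --name=zigbee2mqtt --restart=always --device=/dev/ttyACM0 -v /volume1/docker/zigbee2mqtt:/app/data koenkk/zigbee2mqtt
The configuration.yaml inside that data folder then points to your MQTT broker and the serial port.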
@@BeardedTinker Thank you so much for answering that fast :-). I have tried docker stop and start interactively in PuTTY, and it seems to be running OK, but clicking on Container in the Docker GUI on the Syno still shows "no container is created, create one from the image" or a similar message (I am translating from French). See the output in PuTTY when I start the container - it seems OK to me:
mraskin@DiskStation:/volume3/docker/home-assistant$ sudo docker start -i "home-assistant"
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] udev.sh: executing...
starting version 3.2.9
[12:06:51] INFO: Update udev information
[cont-init.d] udev.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
2020-12-20 12:07:02 WARNING (MainThread) [homeassistant.components.rfxtrx] The 'debug' option is deprecated, please remove it from your configuration
2020-12-20 12:07:04 WARNING (MainThread) [homeassistant.components.netatmo] Webhook not registered - https and port 443 is required to register the webhook
Note I am running on volume3, which is the one holding the homes here - this is my main volume after I got a crash on volume1 long ago. So not sure why the Docker GUI does not see this started process :-( Looks like there is no link between what I am doing in PuTTY and the Docker app on the Syno :-(
WRT the RFXCOM and CC2531 stick: at the beginning I wanted to use them in Domoticz, also installed on the Syno. RFXCOM works fine, the CC2531 stick does not. I could add the hardware in Domoticz, but I cannot find ways to add devices. This is why I gave HA a trial!
Sadly, Docker on Synology is not that... good. For controlling Docker containers I mostly use Portainer. This is a web-based management UI for Docker - it allows you to start/stop/delete containers, see their status, enter them using a CLI/terminal etc.
RFXCOM is a native integration for Home Assistant, but as I said, I have not used it as I don't have any devices that use it. For the CC2531 you need to use the Zigbee2MQTT Docker container. It connects to the CC2531 stick and allows control and pairing of Zigbee devices. It then posts all the information to the MQTT server (that acts as a database). HA can very easily connect to MQTT - you just need to specify the IP address and username/password if you created one. All devices that you add in Zigbee2MQTT should then be available as entities in HA. That's how I control my Zigbee devices...
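In case you want to try Portainer, it is also just one container - something like this should get the web UI up on port 9000 (the data path is only an example):
sudo docker run -d --name=portainer --restart=always -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v /volume1/docker/portainer:/data portainer/portainer-ce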
@@BeardedTinker Good evening. Thanks for your answer. I have actually made some progress. I eventually saw the container running in the Docker GUI on the Synology after it rebooted this morning, after a system update. Not sure why it was not working before, although "docker ps" was showing the running application. I have (re)installed zigbee2mqtt in Docker and managed to link a Xiaomi door switch :-) I can see it in both HA and Domoticz now! Actually, before linking the Xiaomi device I tried to link the device I tried earlier (an IKEA switch), but failed - when I try to pair it, nothing happens :-( I need to dig further into this. I do not have the IKEA router because I think it should not be necessary. If you have any idea for pairing IKEA devices without the IKEA router, you're welcome to chime in ;-) (I think there is an app in HA that supports the IKEA router.) Thanks, Michel.
Thanks for posting this Michel! The procedure for the IKEA switch is the same as for the Xiaomi sensor. Sometimes those devices go to sleep, so after you set it in pairing mode, just click it a few times until it gets paired. I have 5 of them.
Hey, If I use SSH to connect to my Synology, my home folder appears and not the folder from this tutorial. It's not clear for me how to get to /volume1 as it's not in the list of folders.
@@BeardedTinker I should have admin rights. This is my first linux experience. Is it possible to change the default folder or choose the default folder where you login to?
@@bertendewael3809 It is possible, but the Docker folders should be a system shared folder. What you can try to do is type "sudo -i" when connected to the Synology via terminal. It should prompt you for the password. Then type cd / to change to the root folder, and try the ls -l command once again.
Thanks for asking. Not very soon. I find Home assistant a bit "easier" to comprehend, extend and control. I will install it and play a bit, just have to get a bit more time - and that's something I can't find enough lately! 😉
Thanks for asking. You can not get it installed that way - Home Assistant is part of hass.io. Next Thursday there will be a video on how to do it - getting hass.io on Synology. It includes the Supervisor, DNS and Home Assistant + all add-ons.
Great video! I have been doing a lot of research on what to migrate to after my rPi 4, and this video helped me a lot. I have one worry/question about committing to Synology. On the HA website it says this about installs on Synology: "Synology only provide Python 3.5.1, which is not compatible with Home Assistant 0.65.0 or later. Until Synology offer an updated version of Python, Home Assistant 0.64 is the most recent version that will be able to be installed." Do you know if that is just an old disclaimer that they never updated? I thought I saw your version to be .9 something. Maybe I am having a misconception about the variety of HA you installed; the all-in-one simple image is all I have messed with on the Pi so far. Anyway, thanks again and I can't wait to watch the rest of your videos!
Thank you for your comment David 😉 I would have to read the docs about this Python version disclaimer, but no, there is no issue with HA on higher versions. You can run both the Docker version (my main setup) and also the Hass.io version (my second, recording setup). I think they are talking about a 3rd option - installing it as you would on a Linux machine, by pulling it from GitHub.
It all depends - if you want to use, for example, a USB Zigbee stick, you'll have to use the command line. Not all commands/settings are available through the Docker UI.
@@BeardedTinker I mean, what's the difference between the Core version & the GUI version? I mean, if I try to use Home Assistant in a VM on a PC, do I still need these commands? If I'm using the Core version, can I install add-ons?
It's hard to explain in comments. In HA Core you don't have add-ons. For that you need VMM - Virtual Machine Manager. The command line can be used both in HA Core to create the Docker container and in VMM to control parts of the system. But you don't have to.
You don't need hassio to get DuckDNS working - you can do it from Synology. You can get the hassio spk/pkg to install hassio as a standard Synology package. It can be found on the Home Assistant community web site, but it has one big flaw in my opinion - although it works ok.
@@samvanst In order to use DuckDNS, you can use this page as a guide - click on Synology to get the information: www.duckdns.org/install.jsp
That will make your DuckDNS domain name point to your router/dynamic IP address. The next step is to create a Let's Encrypt certificate to secure your Synology/connection - use this guide for it: www.synology.com/en-global/knowledgebase/DSM/help/DSM/AdminCenter/connection_certificate
Last step: if you are aware of the potential risks and have your system secured with a strong password (a long combination of random letters, numbers and symbols), you can make HA available by adding port forwarding in your router settings. Just forward external port 8123 to your Synology IP address, also on port 8123. You can of course use a different external port than 8123 to further mask your HA installation - for example, use port 10110 externally and forward it to internal port 8123 on your router.
Hi Glenn!! Sorry, first episode and I didn't think that far when recording. The command to create a folder is mkdir, and in order to create the folder for home-assistant, you can use the following:
mkdir /volume1/docker/home-assistant
That will create the folder where it's needed. If you have any questions, just ask!
Hi BeardedTinker, great video and tutorial, thank you so much - I've been looking to do this for a long time. Are your commands and the procedure listed somewhere, maybe on your website? It's got to be easier to follow than on a YouTube vid?? If yes, what address please? Please keep up the great work.
Thank you for your comment. Unfortunately not yet. The web site is up but I never got enough free time to get all the steps there. This is on my to-do list. I try to get all commands and steps into the video description as well.
Hmm - on Synology there is a community package that runs hassio. Not sure if this could be done without a virtualization platform, or by pulling it from GitHub by hand and installing it on a supported Linux distribution.
That's normal Jim. Synology is not officially supported, but you may still use it and it works great (although there are some people who have issues). If you have a problem where you can't update the system or add-ons, check this video for a workaround: th-cam.com/video/a01qvvDS7QQ/w-d-xo.html
But it will still report an unsupported system because of Docker (and not just that).
Hey, I opened PuTTY and set up a connection to my XPenology server. But when I type ls -l it doesn't show my folders - not even volume1. How can I work around this?
And you can normally access XPenology in a web browser? Did you enable SSH/Telnet access in the control panel in DSM? I presume yes. Can you try typing sudo ls / in the terminal to list the root folder?
@@BeardedTinker wow, quick reply. I can press cd volume 1 and I guess it doesn't really affect the other outcome. Will check if I can just follow the whole tutorial.
I don't know how good you are with Linux. XPenology should be the same as Synology, although there are some differences - my test setup runs on XPenology. Can you try logging in via terminal and typing:
sudo ls /
and also:
sudo cd /volume1
Just to be sure that there are no issues with privileges. And just to be sure - you have to type volume1 without any space; not sure if the above was a typo or if that is the problem.
@@BeardedTinker My understanding of Linux is at newbie level. Your tutorials are great. I managed to install Home Assistant and will complete your whole tutorial series. Thanks for making these.
With Docker on Unraid I can install Home Assistant in network bridge mode such that the IP address of Home Assistant is different from the IP address of the NAS. How do I achieve this on Synology? Great video by the way.
Hi Joseph! Sure, it's possible, but that would open up another Pandora's box in managing the network for other Docker containers that need to communicate with Home Assistant (for ex. MQTT, InfluxDB,...). You would need to create two networks - one macvlan to assign Home Assistant a different IP address, and another one to bridge Synology and Home Assistant. More or less, that would be the same Docker network setup as the one used for PiHole and AdGuard - there are videos on that here on the channel.
@@BeardedTinker Thanks. I have followed your tutorial to install PiHole in Docker via a macvlan and it is working fine. When I try to create another macvlan for Home Assistant so that I can force Home Assistant to use a specific IP address (in my case I want to assign Home Assistant 192.168.1.12), the command "sudo docker network create -d macvlan --subnet=192.168.1.0/24 --ip-range=192.168.1.12/32 --gateway=192.168.1.1 -o parent=ovs_eth0 home-assistant" fails with the error message "failed to allocate gateway (192.168.1.1): Address already in use". Any idea how to mitigate this issue? Thanks again
@@josephlo6005 Well, yes, you can create only one macvlan network. But that shouldn't be the problem. What you could do (never tried it) is increase the IP range from --ip-range=192.168.1.12/32 to, for example, /30 - that would increase the number of available addresses to 2. BUT, I've never tested this. (check here: github.com/docker/libnetwork/issues/2384)
Also, I've found a workaround (again, didn't test it) - create the macvlan with 2 subnets.
# Macvlan (-o macvlan_mode= Defaults to Bridge mode if not specified)
docker network create -d macvlan \
  --subnet=172.16.86.0/24 \
  --gateway=172.16.86.1 \
  -o parent=eth0 pub_net
# Run a container on the new network specifying the --ip address.
docker run --net=pub_net --ip=172.16.86.10 -itd alpine /bin/sh
# Start a second container and ping the first
docker run --net=pub_net -it --rm alpine /bin/sh
ping -c 4 172.16.86.10
(docs.docker.com/v17.09/engine/userguide/networking/get-started-macvlan/#macvlan-bridge-mode-example-usage)
One more thing which I cannot understand is why I get this error on a fresh HA install:
Logger: homeassistant.components.stream.worker
Source: components/stream/worker.py:176
Integration: Stream (documentation, issues)
First occurred: 20:20:47 (20 occurrences)
Last logged: 20:33:27
Error demuxing stream while finding first packet:
I just installed HA from Docker on Synology, and after creating the user and password it automatically added my Synology to the device list, because HA found it automatically.
No. Look at the screenshot from my main page: yadi.sk/i/kHVxn3pyS2Xsxg. I don't have any module on it showing video from a camera. On the second screen I just have 2 modules with the state of the Synology disks and CPU.
@@ИльяПопов-т4н do you have any camera in Synology? The only reason for this should be a problem with a stream, and streams are usually used for media (video). You can stop receiving it if you add the following to the configuration.yaml file:
logger:
  logs:
    libav.NULL: critical
This will stop it from logging those errors.
Are you using Synology? I have remote access done via the Namecheap DNS service and an SSL certificate from Let's Encrypt. It's not that hard, but you must always be aware of what you're doing since it can compromise your home network security.
@@BeardedTinker I can access my HASS using Synology QuickConnect on port 8123. The problem is that when I tried to set up the SmartThings component it's not allowing me to do so - it says I need to set up remote access.
I don't have SmartThings, but I briefly went through the documentation. You need to set up your domain name. DuckDNS is not listed as a DDNS provider, but you can create a new entry. Check the install info on the DuckDNS page: www.duckdns.org/install.jsp
You need to add a new provider in Control Panel - External Access - DDNS:
www.duckdns.org/update?domains=__HOSTNAME__&token=__PASSWORD__&ip=__MYIP__
Also, base_url: in configuration.yaml has to be set, and it has to be your DuckDNS domain name. After you do that, don't forget to also create an SSL certificate (also through Control Panel) with Let's Encrypt and provide the same DuckDNS name.
Just dropping a note that I loved your way of presenting. Keep it up - you explained everything nicely. I did see in another video that you created a separate network for PiHole. My question is: are your smart devices also set up on a separate network? And if so, is it the same way you set up PiHole?
Hi Jurgen and thank you for your comments - they mean a lot! 😉 Actually no, my devices are all on the same network - for now. This is something on my "to explore a bit more" list. I know everyone is now hyped about separating devices onto different networks, but I'll have to test something first and see if this is ok for a home setup and how it influences the whole smart home. Sorry for the too-long response, but here is what makes me question this setup: if you put all your smart switches, smart speakers, sensors etc. on a separate network, you still have to enable access from one network to the other for your Home Assistant setup. And when you do allow traffic between them, what's the purpose of the separate network?
In the PiHole and AdGuard setup (which I personally use), I've created a separate bridged network so that the Synology can also use them for DNS resolution. This is due to network limitations of the setup. If you don't plan to have your Synology use PiHole/AdGuard as a DNS resolver and malware filter, you don't need it.
@@BeardedTinker first off, stop apologizing hahha. I love long answers if they are on topic. What I read online is that with outbound and inbound settings on VLANs and FW rules you can separate what has access to a) WWW b) LAN. So for example your sensor sits on one VLAN that does not need internet, while hassio is connected to a different VLAN that does. You could probably ping hassio, but never the sensor, so security on the sensor is not really a worry. The reason I am asking now is that once you have 30 plus devices in your setup, creating different VLANs and changing things becomes a daunting, time-consuming task. Love your vids, keep them coming!!! Oh yeah, don't mind people that say it's too long - once they run into an issue they will appreciate the in-depth explanation. And finally ++++ for using best practices; I know enough devs that could learn something from you.
PS: using mDNS will enable casting across the different VLANs, in theory. I am all theoretical for now, my packages are coming in this week. Last question hahha (now my turn to say sorry because it's kinda off topic): Watchtower updates to the latest version. I notice you don't have a rollback version set up in Docker, or do you?
@@jurgenwoen4986 My lab/recording setup is using my home network. And my home network is a bit "special" - something that I started to set up according to best practices and then... 2 years later 😀 I'm using Mikrotik for my routing and Mikrotik doesn't support mDNS, so a lot of fun stuff is unavailable to me - that's why my ESPHome always shows all my devices as offline, since it can't find them using mDNS. On the other hand, it allows me to whitelist/blacklist which devices can or can't go to the internet; setting up VLANs is a bit easier, setting up routing is a lot harder, but it's fun to play with a "serious" router rather than the $2 ones that ISPs love to give to end users. Just never had time to go into more depth.
@@jurgenwoen4986 no, you are correct. So any rollback I do, I do by hand. Had to do it once so far I think. Also I have no container blacklisted/excluded - and that is something that I was considering for some time too. This is something I should start tinkering with soon.
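If you do start tinkering with exclusions, Watchtower can skip a container based on a label - for example, starting that container with something like:
--label com.centurylinklabs.watchtower.enable=false
or, the other way around, running Watchtower with --label-enable so only explicitly labelled containers get updated. Worth double-checking against the Watchtower docs for the version you run.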
Hello people. Can anybody give me the full Variable and Value list for the environment? I have an older version of Docker and have to input them manually for it to work. I did the same for the UniFi controller in Docker and it worked.
@@BeardedTinker I wanted to share a screenshot but YouTube doesn't allow it. It's from the advanced settings in Docker when creating the container >> Environment. There are some values: TZ, PATH, LANG etc...
Most of them you don't need. As you have probably seen from the video, there are only a few that are needed. Apart from those, nothing else is needed (except maybe device mapping for Zigbee/Z-Wave).
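For example, the one environment variable usually worth setting by hand is just the timezone, something like (the value is only an example):
TZ=Europe/Zagreb
The rest (PATH, LANG,...) already come from the image itself.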
The one from this video th-cam.com/video/QdBYUbj0B5Q/w-d-xo.html ? It is running great, but on a test setup. It has updated every time without a glitch - 0 errors so far. My main setup is still in plain Docker.
Hello, I'm sorry, I got this kind of warning when I followed the guide on how to install Home Assistant on a Synology device: "warning: published ports are discarded when using host network mode". But everything works and opens - is there a way to get feedback from the server that everything is ok? Sorry for that, I'm new to all this stuff and am slowly getting into the whole kitchen :) But anyway, thank you very much for your videos, they are great :)
No worries about that, it's normal. When starting Docker, if you use --net=host, the -p (port definition) will be discarded. You could also remove -p (and the port numbers after it); I just like to leave both options in. And welcome to the Home Assistant fun - hope you will enjoy it as much as I do!
@@BeardedTinker thanks a lot for the major job that you do :) oh yeah, I like it very much :D hmm, maybe you should have your own Telegram channel, I think you would have a major community there :D and it's also easy to monitor and communicate, because I don't have Twitter and Facebook lol :D The only option to ask you questions is here in the comments, hope that's ok for you. Because my next question is about Grafana :D I did everything how it is supposed to be, but it is not showing me the 1/0/1... on sunrise like you showed :)
I'm currently in the process of setting up a Discord server. I probably will not use Telegram for this as I don't find it easy to scroll through conversations. It's a great tool and I have been using it for about 8 years, but Discord should be easier to work with.
@@htdocs-eu I've created a Discord server a few days ago discord.gg/HkxDRN6 - it's much easier to manage than Telegram groups. As for sunrise/Grafana - I will have to check a bit later. There was an issue with the sunrise sensor in Home Assistant 0.105. Not sure if those two are related or not, but a lot of my automations stopped working for at least 1-2 days, and restarting HA sometimes fixed it, but not always. Unfortunately, since I had a VERY busy week at work, I never managed to look more into this issue.
Hi Beard Master. Do you by any chance know how to install Docker on a non-supported Syno with ARM architecture? I couldn't find the right install commands anywhere
Thanks for the comment Jurgen! This all depends on what you want to install. Not all images support ARM, and it also depends on which ARM version (v6, v7,...) you want to install to. For example, Home Assistant can run on armv7 (docker pull homeassistant/armv7-homeassistant) and on armv6 by using the armhf image.
Hi Vlad! That is not part of the Synology. It is a standard terminal emulator - I'm using PuTTY. There is a link to it in the video description: www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
You have a very calm and pleasant voice. Keep this quality up! Thanks!
Thank you very much!
Thank you so much! My Hassio on my Raspberry Pi won't boot up, so in desperation I searched to see if it could be done with my NAS. Super happy now!
+James Cahours I've been running it for a long time now and it works perfectly! Good luck with your setup!
Hi Beardedtinker, How do I perform an upgrade to the version of docker HA core and will it blow away my configuration files or current integrations? Do you have a video on how to do this by chance? Is cli or the GUI a better option to perform the upgrade?
Excellent question! I have video for installing Watchtower, and it will do automatic updates of all of your Docker containers - but sometimes this is bad idea. Because it will automatically do it whether you want it or not :)
TO upgrade, I use watchtower also, but with special flag - run once. This way, it will not constantly watch and upgrade, but instead will run just once, upgrade and then shutdown.
It is run in CLI, bat very simple to use:
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once homeassistant
If you want to upgrade any other container you may have, just replace homeassistant at the end, or add to it - it can upgrade multiple in one go.
Hi Bearded. Thanks for all your videos. I was wondering if in all your research on this whether you considered the Linuxserver version of homeassistant because you can nominate a specific user rather than root? What do you think about having a dedicated homeassistant user? Not sure if this will impact on the connection of a zigbee dongle or anything else in DSM7? Any advice.
Going one step further what about dedicated a dedicated user for each or a grouping of containers?
Not sure of you are thinking about internal Docker user or user needed to run Docker.
But in both cases, no. This would require too much tinkering with Docker image. Also, if I'm not mistaken, Docker doesn't need root account. It can run with other admin account but does require elevated privilege.
You can check Docker documentation on that and full explanation why.
Thanks for your video :) I have Synology DS220j and I didn't find Docker in packages center to download... do you have any suggestion ? Thanks a lot
Officially, it's not supported on that platform due to its architecture.
But you can check this link if you feel up to some heavy lifting
cynarski.eu/docker-on-synlogy-32bit/
Hi... I found :) my DS220j is not compatible to Docker :(.
Thanks !!!
Hi BT, a question.. how do you update the HA Core 117 to the new 118.2, if running HA inside Docker on a Synology?
Inside HA i can see that there is a new update, but when i click "update" nothing happens.
There are couple of ways to do it. I use watchtower to keep containers up to date. But best bet is to use watchtower with Run once flag (--run-once).
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once home-assistant
That should it I think - it would download watchtower image, run it once to update docker container named home-assistant and exit after update.
You could create script for that or even create Task on Synology to run it once every day, week, or when you need it.
The other way would be to stop container, remove it, download new image and run command you used first time to download Home Assistant.
Hi, I love your channel! Can you please help me with getting remote acces? What are the steps to getting remote access tot home assistant on synology. I tried to do this with reverse proxy and forward the port to 8123 but without succes. Can you please show me the steps? Thank you very much!
I'm using reverse proxy on my Synology. External port is 443 (with certificate from Synology) and forwarded to internal port 8123.
BUT, you need to add in configuration.yaml file line with trusted_proxies.
www.home-assistant.io/integrations/http/#trusted_proxies
I have set it to my internal network range not just one IP address.
@@BeardedTinker Wow, that worked, many thanks! But I run into a next issue, is that when I run Iframe (e.g. Configurator) I receive a message: Unable to load s that load websites over if Home Assistant is served over .
Do you have a solution for that?
When logging into my synology using putty I cannot see any folders I have in file station all I see is drive folder when using ls -1 command. My account is also in the Admin group on synology. Any idea why I am not seeing them?
You can try typing "cd /" and then "ls -l" to list everything.
When you login into machine, it is positioning you in your "home" folder.
Thanks for the quick response, I also tried this and this is what I got: Justin.Stockert@JustinNAS:~$ cd /
Justin.Stockert@JustinNAS:/$ ls -1
bin
boot
config
dev
etc
etc.defaults
initrd
lib
lib32
lib64
lost+found
mnt
opt
proc
root
run
sbin
sys
tmp
tmpRoot
usr
var
var.defaults
volume1
volumeUSB1
I still can't see all of my folders in file station?
@@justinstockert8079 all folders that you see are inside volume1 folder. cd /volume1 and they will be there.
Hi, great videos, keep up the good work... What is the best way to manually update home assistant in Synology Docker container without losing any configuration files? Can you do it through Synology UI or does it have to be command line? Can you post commands/instructions please, Thanks.
Thank you very much for your comment!!!
I use watchtower for updates. They can also be done via Docker app, but I find that less intuitive then following command:
sudo docker run --name="watchtower" -itd --restart=always -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower home-assistant
Just type/copy this in terminal when connected to Synology and it will run WatchTower (only once) and update home-assistant docker image. Please, just check if you named your instance homeassistant or home-assistant. Last parameter in the line is name of docker container you want to update.
@@BeardedTinker I used to use watchtower but have since uninstalled it, will that command still work without having watchtower installed? Thanks
It should - BUT wait a second :) I copied wrong command. This one will install and run always, but update only HA. Wait, there is other option - run-once. Give me a second
EDIT: Here it is:
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once home-assistant
And after it finishes installing, it will close watchtower
@@BeardedTinker Thanks I'll give it a try and report back later...
@@BeardedTinker I ran the command just now and it seems to have done something but my HA is still showing the old version? Does it take a while to execute? Below is my Putty output:-
run-once home-assistant
Password:
Unable to find image 'containrrr/watchtower:latest' locally
latest: Pulling from containrrr/watchtower
3ed6cec7d4d1: Pull complete
c0706b5a6b44: Pull complete
6eae408ad811: Pull complete
Digest: sha256:ff3ba4f421801d07754150aee4c03fdde26a0a9783d66021e81c7097d6842f25
Status: Downloaded newer image for containrrr/watchtower:latest
time="2021-01-22T17:49:22Z" level=info msg="Running a one time update."
Hello, I'm using the community hassio package on my XPenology. Currently the Supervisor reports an unhealthy install and blocks updates and any integration install. The reason is the Docker version: the official DSM package is 18.09.0-0513, while the minimal supported Docker version for hassio is 19.03. There are several instructions on how to update the Docker package to a newer version, but it's pretty complicated. Would it be possible for you to make video instructions on how to update Docker on Synology?
Hi Dmitry! Unfortunately, I'll not be making a video on that. The update that's available on GitHub is too risky for anything except experimenting. A lot of people who tried it experienced problems with containers, and some couldn't run hassio with it at all.
The biggest risk is that you brick your Synology/XPenology.
Check my video from last week on how to enable updates in hassio.
Thanks a lot. I'm currently watching the video you mentioned.
Hi,
first of all, thank you very much for all of your videos - they have been very helpful for a Home Assistant beginner like myself. There is one question I couldn't figure out yet. With the version shown in the video, as far as I can see, there is no built-in way of creating a backup? This version doesn't have a snapshot function like the one on a Raspberry Pi. Until now I was mostly playing around with my instance and trying to learn. Now I want to build a productive environment, and ideally I would like to run it on my Synology for now. As I'm not sure how this will expand, I might later want to switch to a dedicated device running my productive instance (Raspberry Pi or NUC). What's the best way of installing my productive instance on a Synology where I can easily create backups/snapshots and move all of my config to a new device (e.g. a NUC) if I decide to do that later on?
Is it also possible to move my current config to that new productive environment?
Thanks in advance for your help!
Regards
rvst1
Great question! I'll answer you a bit later - currently I'm in the countryside enjoying a bit of a break!
BeardedTinker No worries. Enjoy your break! Looking forward to hearing from you.
OK, so here is what you can do. First, you can skip the HA Core version (where you install everything by hand) and go for hass.io - it has all add-ons available and also includes the snapshot functionality. That is of course the easiest way to install and use it. I just moved my production environment to it and so far it's working great!
But you can of course stick with the Core version and make a script that copies/zips the contents of the /volume1/docker/home-assistant folder.
That's all you need, as all configuration (both YAML and other files) is stored there.
When I was moving from Core to hass.io I just copied the contents of the folder and everything was up and running with my last configuration.
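A minimal sketch of such a backup script (assuming the config lives in /volume1/docker/home-assistant and you want dated archives in /volume1/backup - adjust the paths to your setup; you can schedule it as root via Task Scheduler):
#!/bin/sh
# Archive the whole Home Assistant config folder, including hidden files such as .storage
BACKUP_DIR=/volume1/backup
DATE=$(date +%Y-%m-%d)
mkdir -p "$BACKUP_DIR"
tar -czf "$BACKUP_DIR/home-assistant-$DATE.tar.gz" -C /volume1/docker home-assistant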
@@BeardedTinker Thanks a lot for your answer. So I would need to install hass.io as you've shown in episode #22 to have the snapshot function available, and therefore more flexibility in the future. I saw your pinned comment under episode #22 (Deprecating Home Assistant Supervised on generic Linux). Do you think that's something to worry about, even though it's on hold for now? Thanks in advance!
At this point, the decision has been postponed. And if it does go through, you still have the option to go back to HA Core, or to an RPi/NUC for the Supervised version.
The worst thing that can happen is that you'll need some more time to convert it to something else.
Question on HASS and Watchtower. Your instructions don't call out creating special paths for HA data (such as /config). So for my /config volume it looks like it's using the default value of /volume1/docker/home-assistant. If I'm using Watchtower and it updates HA, will I lose all of its data? I recently had this happen with another container that was using defaults (i.e. the configuration/data was stored IN the Docker image, I guess?) and I want to avoid that happening here.
So, two questions:
1. Is /volume1/docker/home-assistant (or a similar path for other containers) a persistent location that would be unaffected by Watchtower deleting and recreating/updating the container image?
2. Where IS a container image stored normally, if not in docker/containername? I'd just like to understand how I can tell whether a container's data/config is being stored in a safe place, to protect against image destruction/corruption/deletion.
I am just learning Linux so I have lots of questions :)
Thanks!
Hi Santiago!
By mapping a Synology folder to an internal Docker folder - such as /volume1/docker/home-assistant to /config - you are creating a persistent folder where files will be retained even if you completely remove the Docker container and image.
The Docker image itself is stored in a different path; the mapping exposes an "external" Synology path/folder to the container, and at the same time exposes the internal container path to the host Synology system.
That is very similar to mapping a network path on your local PC, where you see remote files as if they were "local" files.
Watchtower does a very simple task - it remembers the parameters of the Docker container, downloads the new image, removes the old one and recreates the container with the same parameters as before. Your files, if mapped to a local folder, should all remain intact.
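If you want to double-check which paths are actually persisted, one quick way (a sketch, assuming your container is named home-assistant) is to list the container's mounts:
# Show the host <-> container volume mappings for the container
sudo docker inspect --format '{{ json .Mounts }}' home-assistant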
@@BeardedTinker Thanks so much for the reply. I think my confusion was whether the /docker/home-assistant path might actually refer to the image itself rather than a persistent location. Also, I assume that for this to work you need to know what path the app/container uses to store its data, and it seems that varies between apps. In any case it looks like I'm safe. Looking to avoid having Watchtower kill my config like it did for another container over the weekend :). Thanks!
Very helpful videos! Did you get a chance to try HA on docker with DSM 7 beta?
I installed it just now on a test setup, but that was the virtual DSM 7 BETA (with Docker 18.x.x, not the new 20.x.x), and it's working.
One thing to note is issues with USB devices, in case you want to use a USB controller for a Zigbee or Z-Wave network. For now it doesn't work out of the box and requires a bit of hacking to get it working and supported.
@@BeardedTinker Thanks. I updated manually to the new Docker version on DSM 6.x and I'm using a Z-Wave stick. I hope there will be a solution to mount USB devices in a container.
When there is a permanent fix for this I'll definitely play a bit more and do a new video.
I didn't try this yet, but there should be a solution: community.home-assistant.io/t/zwave-on-synology-dsm-7/308583/10
Yes, that has been out for some time, but it's still not a permanent fix - the script has to be run on every boot of the system.
Hopefully there will be a more permanent solution in the future.
How do you update Home Assistant via Docker on the Synology? What's the best method?
Hi Karl! The easiest way for me is to run the following command in the terminal:
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once home-assistant
This will install Watchtower and run it only once to update the container named home-assistant.
The other option is to do it via the Docker app in Synology DSM:
Go to the Docker Registry, locate the image you want to update, click download and choose the "latest" tag. The download will replace the former image with the new one.
When the image download is complete, stop the container, click "clear" and then "start".
This should be it.
The docker image name "homeassistant/home-assistant" (without any tags) actually refers to the "latest" tag of the image which I would not suggest for production usage.
You should use "homeassistant/home-assistant:stable" to use the latest stable image.
I don't think you are correct in this case. While there are differences between the latest, stable, edge, dev and beta tags for Docker containers, in this (HA) case latest and stable are the same (even the current beta release is the same as latest/stable at this point). You can check all the available tags here and just compare the SHA: hub.docker.com/r/homeassistant/home-assistant/tags
Also, I've been running the latest tag for the last 2+ years and so far haven't had any issues unless there was a serious bug in a release.
If you are still not convinced, you can check by installing latest, checking the release version under info, then installing stable and comparing the results.
@@BeardedTinker Okay, yeah, for HA it seems that you're right: they also tag every "stable" build as "latest", so as a result you fetch the very same images. But there are a LOT of other vendors/teams that tag even every nightly build as "latest", whereas "stable" builds are released much less frequently.
So let's call it "a good habit" to always use stable images (if you're setting up a production server) instead of the latest ones 🙂
Dear BeardedTinker,
First and foremost, I want to thank you for your cool and interesting channel on YT. Certainly a source of help and inspiration for me, which has often helped with setting up my Home Assistant.
At the beginning of this year I saw your video on installing Docker on Synology and installing Home Assistant in it. I have now applied this setup, but in HA I get a message from the Supervisor that I am running an unsupported installation. I suspect Docker version 18.09.8 (which is the latest on Synology) is too low for HA, which requires minimum version 19.03.0.
Question: do you also have this problem, and do you know whether it is possible to upgrade Docker via e.g. PuTTY, and with which commands?
Thanks in advance for an answer.
Benny R.
Hi Benny and thank you very much for kind words!!!
In regard to your question - yes, I also have this error in the log file, and that's a known issue.
The problem is that Synology is using their own adaptation of Docker, so we can't do anything to update it except nag Synology to finally release a new version.
There is this: community.synology.com/enu/forum/1/post/131600
I haven't tried it because there are some bugs (I think still not fixed) that would create more headaches than the upgrade would be worth.
So we are left with two options - wait for the Docker 19.03 version, or wait for DSM 7, which should have it but will break Home Assistant (the Supervised version).
@@BeardedTinker Thanks for your reply, then we'll wait for DSM V7 and see what happens.
I am thinking about running Docker and HA on a Raspberry Pi 3B, but I am not sure yet... another device that we need to maintain ;-)
If you decide to go the RPi way you don't need Docker - there is a native version with HassOS. But be careful with the SD card; they can sometimes fail due to too many writes.
@@BeardedTinker Thanks for your advice, I had the "SanDisk Ultra Micro SDXC 64GB - UHS1 & A1" in mind.
That should be OK. A1 is the application performance rating, if I'm not mistaken.
Great tutorial! I have a problem getting to the root directory - instead I am in a folder one level down. I thought I had admin privileges. Do you have a solution for this?
What folder are you landing in?
Can you sudo on the machine and then try cd /?
@@BeardedTinker Thanks for the reply. I have since managed to change to the root directory using the command "sudo -i", but I still have a problem with installing it. It seems that I had a problem with --restart on-failure. Sorry, I don't know Unix commands at all hahah
Just curious - why don't you use the Synology Docker app? I know Docker and I know the CLI, but on my Syno I always use the GUI to speed up the whole configuration process.
Great question and thanks for asking!
Since I have a bunch of containers running, I just got used to the CLI because of the limitations the Docker app on Synology has. For example, you can't map folders that are not visible in File Station, and some of my containers use folders in the root of the Synology.
But you are correct, you can get Home Assistant up in the Docker GUI.
A problem could come up later with Zigbee2MQTT, which requires altering /dev/ permissions - doing that in the GUI can be challenging, if not impossible.
And also, since I keep all of my commands as a backup, it's just copy/paste to recreate a container if needed. 😀
Hi, great video as always. I just wonder, why not install home-assistant from the Docker UI instead of all those command lines?
If you are not using Zigbee or Z-Wave you can do it from the UI. But some configuration parameters are not available from the UI (for example mapping devices, or mapping folders that are not under the /volume path).
@@BeardedTinker Ok, and what about doing snapshots if there is no supervisor anymore, please? And also no more system management?
@@philou1516 You can't make snapshots like that in the Docker version, but it's enough to just archive (zip) the contents of the configuration folder and all subfolders (including the hidden ones starting with a dot - for example .storage).
Add-ons also can't be added via a Supervisor, but they can be installed by hand as separate containers and work OK.
I've run it like that for a long time - now I have it in a Virtual Machine on the Synology.
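As an illustration of installing one of those services by hand (a sketch only - it uses Node-RED as an example, assumes the nodered/node-red image with its /data folder mapped to /volume1/docker/nodered, and you would adjust the name, port and paths for whatever add-on you are replacing):
# Run Node-RED as a standalone container instead of as a Supervisor add-on
# (the mapped folder must be writable by the container user)
sudo docker run -d --name="nodered" --restart on-failure -p 1880:1880 \
  -v /volume1/docker/nodered:/data nodered/node-red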
@@BeardedTinker Are those the reasons you chose HA in a VM instead?
The negative side is that it uses a bit more resources, but the positive side is that it's a completely independent installation, with its own IP address, and everything is supported: add-ons, Supervisor,...
Is there a way to back up and restore the Home Assistant configuration in Docker on a Synology NAS? I can't find snapshots.
In HA Core (the Docker version) there is no snapshot functionality. But if you save your home-assistant folder with all its hidden files and folders, you can use that to restore.
This is also more or less how a snapshot works - it saves the individual folders in a zip file.
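If it helps, restoring from such an archive could look roughly like this (a sketch only - it assumes a backup made with tar as in the earlier example, a container named home-assistant and the config in /volume1/docker/home-assistant; the filename is just a placeholder):
# Stop HA, put the saved configuration back in place, then start it again
sudo docker stop home-assistant
sudo tar -xzf /volume1/backup/home-assistant-2021-01-01.tar.gz -C /volume1/docker
sudo docker start home-assistant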
@@BeardedTinker Thank you!
All OK for me except accessing the Add-on store via the Supervisor - I get an endless spinning wheel. I tried the recommendations in the comments, i.e. restarting the package, uninstalling/reinstalling, clearing the supervisor container. Is there a way to add the Supervisor to the standard homeassistant Docker image to access the store? Note I am using an old DS412+ with DSM 6.2.3. I already run Homebridge in Docker. These are the errors in the log: "20-10-15 12:27:55 WARNING (MainThread) [supervisor.updater] Can't fetch versions from version.home-assistant.io/stable.json: Cannot connect to host version.home-assistant.io:443 ssl:default [Try again]"
Hi! No, you can't add the Supervisor to the Home Assistant Docker image.
Looking at the error, do you maybe have the firewall on the Synology turned on? If yes, can you try disabling it? It looks like Docker is unable to resolve the DNS request to the home-assistant.io site. Most likely it's some network issue on your side.
@@BeardedTinker Thanks for that. A firewall rule was added during installation and disabling the firewall has no effect. But you made me think of the Docker container configuration. I notice the hassio package installs containers in 'bridge mode' and not 'host mode' like you recommended for the standard Home Assistant package (--net=host). Does your hassio Docker installation also use a bridge?
@E C OK, here is how it should look:
Bridge: hassio_supervisor
hassio: hassio_supervisor, plus all add-on hassio_* containers
host: hassio_multicast & homeassistant
So this is how the containers should be divided between the Docker networks.
You can try the fix for the Docker network, maybe that can help - here is a link to the post that contains it: community.home-assistant.io/t/hass-io-on-synology-dsm-native-package/125559/2?u=beardedconti
Hi @@BeardedTinker, I realise I should have been commenting on video #22 - maybe you can transfer the thread? I succeeded with a complete uninstall, disabling the firewall and then a fresh install. I restarted the hassio package once just to make sure, and everything is fine. The key was a comment about disabling the firewall before installation. Thanks for your great feedback!
@E C Thanks for the follow-up and glad you got it working!!! Unfortunately I can't move the comment!
Hello, Bearded! I see a critical error for HA in Synology Docker:
Docker version 18.09.8 is not supported by Supervisor!
From what I saw, it is necessary to downgrade from Docker version 18.09.8 to Docker version 18.06.1. Is that correct? Please, could you write the command for the downgrade while keeping HA and the other containers? Thank you.
Privet Andrey! Just to check what you have - is it Home Assistant Core (running in Docker independently), or are you using Home Assistant Supervised (the community package)?
I can't verify the Docker version right now - I don't have access from where I am. Hassio/Supervised is working with 18.09.0-0513, as this is the latest that is available to me.
@@BeardedTinker I use community version: hass.io from Fredrike+
OK, I will try and see if I can get version 18.09.8 myself and test it. Version 18.09.0 is working normally.
BTW, what version of DSM are you running? I still haven't updated to 6.2.3-25426.
@@BeardedTinker I use DSM 6.1.7-15284 Update 1
Great tutorial!!
Love your work :)
Thank you
Thank you Ben, much appreciated! 😊
Great video, thanks for your help in installing HA. However, I must say the timezone should have been updated while creating the Docker container.
Thank you for your comment! Yes, you can change any of those values to customize your setup according to your needs - time zone, folders, ports,...
TZ in Docker is usually one of those things that is never included in the predefined package - frustrating when the damn time never works right.
I cannot understand how to enable HTTPS for Home Assistant. Using the console I can check that the current Synology certificate files are copied to the HA container. Then in configuration.yaml I add:
http:
ssl_certificate: /certificate/fullchain.pem
ssl_key: /certificate/privkey.pem
But if I try to access HA using https I get PR_CONNECT_RESET_ERROR.
So it is not clear how to enable SSL for HA, and then how to make an automatic redirect from HTTP to HTTPS?
Hi Ilija! To be sure, I would recommend that you first copy the certificates to the folder where you are keeping your config files, and in configuration.yaml add the following paths:
ssl_certificate: /config/fullchain.pem
ssl_key: /config/privkey.pem
I think (but am not 100% sure) that the problem is with the path to the certificate.
You can also check this video here:
th-cam.com/video/yHN8XN14xHg/w-d-xo.html
Hi @@BeardedTinker. Thanks for your help. The problem was that I didn't map the certificate folder. Your support helped me set up HTTPS, but now I can only open my HA using mydns.com. If I try to log in using the local IP address, then after entering credentials on the /auth page I get redirected to the /lovelace page with a big HA logo in the center and a RETRY button. But I don't want to be obliged to always use the external DNS when I am on the LAN.
@@ИльяПопов-т4н Glad that you got the first part working. Did you configure the internal/external URL in the web UI? For external, leave mydns.com and internally use IP_ADDRESS:8123. Not 100% sure, but this should work.
If I try to log in on IP_ADDRESS:8123, then after entering credentials on the /auth page I get redirected to the /lovelace page with a big HA logo in the center and a RETRY button. And if I try IP_ADDRESS:8123 the webpage doesn't load at all - instead I just see ERR_EMPTY_RESPONSE.
Hi, could you list all the commands that you typed in the terminal? It's too small to see. Thanks!
I've added one that was missing (creating the folder) and also described the steps in a bit more detail in the video description. If you are having an issue, just tell me.
For secure login from a remote location, did you try an OpenVPN server with TLS handshake? Then you use only one port, and anyone trying to connect to that port must provide the TLS handshake and the OpenVPN cert, or else the server will refuse the connection.
Update: Or use a reverse connection on the Synology NAS.
Great question, Neno! No, but I'll play a bit with it in the next few days. If I use something for VPN I normally go with L2TP/IPsec. But in my setup I don't even use that.
I would love it if I could protect the whole of my setup with YubiKey 2FA, but Synology... still no support. Then there would be the issue of Home Assistant - in my personal opinion it's still lacking proper security features (fully integrated user tiering, groups). I did hear something about OAuth at yesterday's State of the Union for HA, but I was doing 5 things at once, wasn't concentrating and missed a lot.
OpenVPN should be the best of the 3 options available on Synology. Too bad they removed the advanced behaviour analytics they had in Synology.
@@BeardedTinker I definitely agree that all users should use 2FA by default. It adds a very big chunk of security to all accounts which support 2FA.
@BeardedTinker Can you please help with configuration.yaml and the http section in it? I have a DuckDNS domain plus a valid Let's Encrypt cert, the port is forwarded on my router, and Chrome accepts my certificate for my other forwarded connections (e.g. to the HTTPS Synology service - so I know it's working), but I'm not sure how to adjust the HA YAML file to accept HTTPS and use my certificates. I installed HA and the rest in Synology Docker as per your videos. Only the connection over http (my-domain.duckdns.org:port) shows me the HA login page; https ends with an error page. Synology is set to use the Let's Encrypt cert as default.
I tried to uncomment the HA YAML http section, without success:
http:
base_url: my-domain.duckdns.org:port (my-domain & port scrubbed)
Any ideas please?
One way you can do it (maybe not the cleanest, but it works) is to copy the certificate into the Home Assistant folder (let's presume you used the same folder structure as in the video):
cp /usr/syno/etc/certificate/system/default/* /volume1/docker/home-assistant/
You can create a Task in the Task Scheduler in Synology Control Panel to copy it every day, for example at 2 AM.
That will copy the cert, chain and privkey .pem files and make them visible to HA.
Adding the following to http: should be sufficient:
ssl_profile: intermediate
ssl_certificate: /config/fullchain.pem
ssl_key: /config/privkey.pem
And as you wrote, base_url: should be the domain name without https in front - just the name with the port.
Let me know if this works for you.
@@BeardedTinker Great, thank you, that actually worked like a charm :) Now I can start to play with HA. Does it mean that I lost local access via LAN? Do I have to go via the DuckDNS URL / internet? Or is there a way to set up HTTP for LAN access and HTTPS remotely?
That's great!!! There should also be another way - mapping the certificate folder into the Docker container - which would be cleaner and wouldn't need any extra command or scheduled task. But you would need to recreate the Docker container from scratch (all your settings should still remain).
Hi, thanks for this very nice video. I have tried to implement it; it went smoothly but does not really work as expected in the end. First of all, after I have done docker run, docker ps shows the image and the container. However, in the Docker app on the Syno, I only see the image and no container :-(. Looks like I'd have to recreate a container from the image, which is probably not the thing to do, as docker ps does show the container...
Second issue: I have an RFXCOM device plugged into the Synology (on /dev/ttyUSB0), as well as a CC2531 sniffer (on /dev/ttyACM0).
Home Assistant did not see these devices, and when I tried to install RFXCOM it told me /dev/ttyUSB0 was not correct.
So I searched on the net and stopped the container, but could not redo docker run until I removed the old container with docker rm.
=> Was it the right thing to do, or is there a simpler/nicer way to stop and restart with a new set of options?
I have actually added --device /dev/ttyUSB0 --device /dev/ttyACM0 to the docker run command, and could then install the RFXCOM as a local serial device on /dev/ttyUSB0.
=> Something you could add to your great tutorial.
Still, in HA I cannot get the RFXCOM to see any device, and I now have to figure out how to get the Zigbee stick seen. I have installed MQTT successfully, but no new device is discovered on the Zigbee sniffer :-(
Sorry for the length of this comment and for all the questions, but I'd appreciate some help if possible :-).
Thanks.
Hi Michel! Let me try and answer what I know :)
In regard to no container - after docker run, it should be running. If it's not, you maybe missed an error when the container was created; that's what usually happens. Maybe a typo in the folder mapping or something like that.
For RFXCOM I can't help - I don't have it - but the CC2531 I do have. I never added it to HA directly, as it is not needed inside HA, but I do see it in the Supervisor (I'm using the hass.io SynoCommunity package - unsupported - to run HA Supervised).
Docker works in such a way that you "can't" edit a container's configuration - you have to remove it and recreate a new one with the configuration you want, so what you did is OK.
Are you planning to use the internal Zigbee support in HA (ZHA)? I don't think it supports this stick.
MQTT is just a broker - it doesn't know what to do except accept and send messages. For Zigbee to work, you need Zigbee2MQTT - it connects on one side to the stick (/dev/ttyACM0) and on the other side to MQTT.
MQTT then acts as a "database" (a very inaccurate description) that accepts messages and information from the Zigbee network and makes them available to Home Assistant.
Hope I managed to answer you, or at least help point you in the right direction 😃
@@BeardedTinker Thank you so much for answering that fast :-). I have tried docker stop and start in interactive mode in PuTTY, and it seems to be running OK, but clicking on Container in the Synology Docker GUI still shows "no container is created, create one from the image" or a similar message (I am translating from French).
See the output in PuTTY when I start the container. It seems OK to me.
mraskin@DiskStation:/volume3/docker/home-assistant$ sudo docker start -i "home-assistant"
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] udev.sh: executing...
starting version 3.2.9
[12:06:51] INFO: Update udev information
[cont-init.d] udev.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
2020-12-20 12:07:02 WARNING (MainThread) [homeassistant.components.rfxtrx] The 'debug' option is deprecated, please remove it from your configuration
2020-12-20 12:07:04 WARNING (MainThread) [homeassistant.components.netatmo] Webhook not registered - https and port 443 is required to register the webhook
Note I am running on volume3, which is the one holding the homes here - this has been my main volume since volume1 crashed a long time ago.
So I'm not sure why the Docker GUI does not see this started process :-(.
Looks like there is no link between what I am doing in PuTTY and the Docker app on the Syno. :-(
Regarding the RFXCOM and CC2531 stick: at the beginning I wanted to use them in Domoticz, also installed on the Syno. RFXCOM works fine, the CC2531 stick does not. I could add the hardware in Domoticz, but I cannot find a way to add devices. This is why I gave HA a try!
Sadly, the Docker app in Synology is not that... good. For controlling Docker containers I mostly use Portainer. This is a web-based management tool for Docker - it allows you to start/stop/delete containers, see their status, enter them using a CLI/terminal, etc.
RFXCOM is a native integration for Home Assistant, but as I said, I have not used it as I don't have any devices that use it. For the CC2531, you need to use the Zigbee2MQTT Docker container. This container connects to the CC2531 stick and allows control and pairing of Zigbee devices.
It then posts all the information to the MQTT server (which acts as a database). HA can very easily connect to MQTT - you just need to specify the IP address and username/password if you created one.
All devices that you add in Zigbee2MQTT should then be available as entities in HA. That's how I control my Zigbee devices...
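For illustration, starting Zigbee2MQTT as its own container could look roughly like this (a sketch only - it assumes the koenkk/zigbee2mqtt image, the stick on /dev/ttyACM0 and the data folder in /volume1/docker/zigbee2mqtt, where its configuration.yaml points to your MQTT broker; adjust to your setup):
# Pass the Zigbee stick through to the container and keep its data on the NAS
sudo docker run -d --name="zigbee2mqtt" --restart on-failure \
  --device /dev/ttyACM0 -v /volume1/docker/zigbee2mqtt:/app/data \
  koenkk/zigbee2mqtt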
@@BeardedTinker Good evening. Thanks for your answer. I have actually made some progress. I eventually saw the container running in the Docker GUI on Synology after it rebooted this morning, following a system update. Not sure why it was not working before, although "docker ps" was showing the running application. I have (re)installed Zigbee2MQTT in Docker and managed to link a Xiaomi door switch :-). I can see it in both HA and Domoticz now!
Actually, before linking the Xiaomi device I tried to link the device I tried earlier (an IKEA switch), but failed. When I try to pair it, nothing happens :-(. I need to dig further into this. I do not have the IKEA router because I think it should not be needed.
If you have any idea for pairing IKEA devices without the IKEA router, you're welcome to chime in ;-) (I think there is an integration in HA that supports the IKEA router)
Thanks,
Michel.
Thanks for posting this, Michel! The procedure for the IKEA switch is the same as for the Xiaomi sensor.
Sometimes those devices go to sleep, so after you set it in pairing mode, just click it a few times until it gets paired. I have 5 of them.
Hey, if I use SSH to connect to my Synology, my home folder appears and not the folder from this tutorial. It's not clear to me how to get to /volume1, as it's not in the list of folders.
What do you see when you use the 'ls -l' command?
Does your account have admin privileges?
@@BeardedTinker I see all the folders from my Synology "Home" folder.
You have to make sure you are a member of the administrators group. By the looks of it, you are a normal user and you can't access those folders.
@@BeardedTinker I should have admin rights. This is my first Linux experience. Is it possible to change the default folder, or choose the default folder you log into?
@@bertendewael3809 It is possible, but the docker folder should be a system shared folder. What you can try to do is type "sudo -i" when connected to the Synology via the terminal.
It should prompt you for the password. Then try typing cd / to change to the root folder, and try the ls -l command once again.
You are amazing at teaching things, thanks!
How do I open a terminal on the desktop?
You need an app for that - I use PuTTY as a terminal client.
Thanks for the video... was wondering if you will be doing a video on Domoticz?
Thanks for asking. Not very soon. I find Home Assistant a bit "easier" to comprehend, extend and control. I will install it and play a bit - I just have to find a bit more time, and that's something I can't find enough of lately! 😉
Hello, how do I get hassio working inside Home Assistant? I'm upgrading from a Pi to Synology Docker but I have no Supervisor - please help.
Thanks for asking. You cannot get it installed that way - Home Assistant is part of hassio, not the other way around. Next Thursday there will be a video on how to do it - getting hass.io on Synology. It includes the Supervisor, DNS and Home Assistant + all add-ons.
BeardedTinker Yay, thanks! Any clues? I have to get rid of my Raspberry Pi.
@@EstebanBurneo There is a beta package in the SynoCommunity repository. It's almost a 2-click install.
Thanks !!
Great tutorial! Got it to work. Thanks.
That's great!
Great video! I have been doing a lot of research on what to migrate to after my RPi 4, and this video helped me a lot. I have one worry/question about committing to Synology. On the HA website it says this about installs on Synology:
"Synology only provide Python 3.5.1, which is not compatible with Home Assistant 0.65.0 or later. Until Synology offer an updated version of Python, Home Assistant 0.64 is the most recent version that will be able to be installed."
Do you know if that is just an old disclaimer that they never updated? I thought I saw your version was .9-something. Maybe I have a misconception about the variety of HA you installed; the all-in-one simple image is all I have messed with on the Pi so far.
Anyway thanks again and I cant wait to watch the rest of your videos!
Thank you for your comment David 😉
I would have to read the docs about this Python version disclaimer, but no, there is no issue with HA on higher versions. You can run both the Docker version (my main setup) and also the Hass.io version (my second, recording setup).
I think they are talking about a third option - installing it as you would on a Linux machine, by pulling it from GitHub.
@@BeardedTinker Awesome, thank you. Is your main setup also running on an RS814?
@@Vevexus No, I run it on a DS415+, but I have upgraded the RAM from stock.
Do we have to use the command line for this?
It all depends - if you want to use, for example, a USB Zigbee stick, you'll have to use the command line. Not all commands/settings are available through the Docker UI.
@@BeardedTinker I mean, what's the difference between the Core version and the GUI version? If I try to use Home Assistant in a VM, do I still need these commands? And if I'm using the Core version, can I install add-ons?
It's hard to explain in the comments. In HA Core you don't have add-ons. For those you need VMM - Virtual Machine Manager.
The command line can be used both in HA Core (to create the Docker container) and in VMM (to control parts of the system), but you don't have to use it.
Hey, thanks for the vid! What about hassio on Synology, to get the DuckDNS add-on for Home Assistant working remotely? Or is that not possible?
You don't need hassio to get DuckDNS working - you can do it from Synology.
You can get the hassio spk/pkg to install hassio as a standard Synology package. It can be found on the Home Assistant community website, but it has one big flaw in my opinion, although it works OK.
@@BeardedTinker So you shouldn't use that hassio package?
How can you connect to Home Assistant remotely via your Synology?
@@samvanst In order to use DuckDNS, you can use this page as a guide - click on Synology to get information:
www.duckdns.org/install.jsp
That will make your DuckDNS domain name point to your router/dynamic IP address.
The next step is to create a Let's Encrypt certificate to secure your Synology/connection - use this guide for it:
www.synology.com/en-global/knowledgebase/DSM/help/DSM/AdminCenter/connection_certificate
Last step: if you are aware of the potential risks and have your system secured with a strong password (a long combination of random letters, numbers and symbols), you can make HA available by adding port forwarding in your router settings.
Just forward external port 8123 to your Synology IP address, also on port 8123. You can of course use a different external port than 8123 to further mask your HA installation - for example, use port 10110 externally and forward it to internal port 8123 on your router.
@@BeardedTinker Which flaw do you think hass.io has?
Hi, great video, but I can't read the commands you type into PuTTY for making the "home-assistant" folder.
Hi Glenn!! Sorry, first episode and I didn't think that far when recording.
The command to create a folder is mkdir, and in order to create the folder for home-assistant, you can use the following:
mkdir /volume1/docker/home-assistant
That will create the folder where it's needed. If you have any questions, just ask!
The command to run the Docker container is the following: sudo docker run -id --name="home-assistant" --restart on-failure -p 8123:8123 -e "TZ=Europe/Zagreb" --net=host -v /volume1/docker/home-assistant:/config -v /usr/syno/etc/certificate/system/default:/certificate homeassistant/home-assistant
Hi BeardedTinker, great video and tutorial, thank you so much - I've been looking to do this for a long time. Are your commands and the procedure listed maybe on your website? It's got to be easier to follow than on a TH-cam vid. If yes, what address please? Please keep up the great work.
Thank you for your comment. Unfortunately not yet. The website is up, but I never got enough free time to get all the steps there.
This is on my to-do list.
I try to get all the commands and steps into the video description as well.
Great manual! Thanks a lot, you helped me run HA on a WD My Cloud EX2 Ultra. Is it possible to run hassio in the same way? :)
Hmm - on Synology there is a community package that runs hassio. Not sure if this could be done without a virtualization platform, or by pulling it from GitHub by hand and installing it on a supported Linux distribution.
@@BeardedTinker I see, thanks for the help
Super useful, thanks!
Thank you for your comment Briant!!! Much appreciated.
I have a problem now with HA complaining about the Synology version of Docker, which is v18, while HA requires v19.
That's normal, Jim. Synology is not officially supported, but you may still use it and it works great (although some people do have issues).
If you have a problem where you can't update the system or add-ons, check this video for a workaround:
th-cam.com/video/a01qvvDS7QQ/w-d-xo.html
But it will still report an unsupported system because of Docker (and not just because of that).
Just to clarify, are you running HA in Docker or as the hassio package? The standalone Docker version shouldn't complain, but the hassio package will.
Thank you for this very useful video!!!
Thank you for your comment 👍
Hey, I opened PuTTY and set up a connection to my XPenology server. But when I type ls -l it doesn't show my folders - not even volume1. How can I work around this?
And you can normally access XPenology in a web browser? Did you enable SSH/Telnet access in the Control Panel in DSM? I presume yes. Can you try typing sudo ls / in the terminal to list the root folder?
@@BeardedTinker Wow, quick reply. I can type cd volume 1 and I guess it doesn't really affect the other outcome. Will check if I can just follow the whole tutorial.
I don't know how good you are with Linux. XPenology should be the same as Synology, although there are some differences. My test setup runs on XPenology.
Can you try logging into the terminal and typing:
sudo ls /
and also you can try:
sudo cd /volume1
Just to be sure that there are no issues with privileges. And just to check - you have to type volume1 without any space; not sure if the above was a typo or if that is the problem.
@@BeardedTinker My understanding of Linux is at newbie level. Your tutorials are great. I managed to install Home Assistant. I will complete your whole tutorial series. Thanks for making these.
On my Docker in Unraid I can install Home Assistant in network bridge mode such that the IP address of Home Assistant is different from the IP address of the NAS. How do I achieve this in Synology? Great video by the way.
Hi Joseph! Sure, it's possible, but that would open up another Pandora's box in managing the network for other Docker containers that need to communicate with Home Assistant (for example MQTT, InfluxDB,...).
But you would need to create two networks - one macvlan to assign Home Assistant a different IP address, and another one to bridge the Synology and Home Assistant. More or less, that would be the same Docker network setup as the one used for PiHole and AdGuard - videos on that are here on the channel.
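Roughly, that two-network idea could look like this (a sketch only - the subnet, gateway, parent interface ovs_eth0, IP addresses and names are placeholders for whatever your network actually uses):
# macvlan network so Home Assistant gets its own LAN IP address
sudo docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  --ip-range=192.168.1.12/32 -o parent=ovs_eth0 ha_macvlan
# bridge network so the Synology host (and other containers) can still reach it
sudo docker network create -d bridge ha_bridge
# run HA on the macvlan with a fixed IP, then attach the bridge as a second network
sudo docker run -d --name="home-assistant" --net=ha_macvlan --ip=192.168.1.12 \
  -v /volume1/docker/home-assistant:/config homeassistant/home-assistant
sudo docker network connect ha_bridge home-assistant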
@@BeardedTinker Thanks. I have followed your tutorial to install PiHole in Docker via a macvlan and it is working fine. When I try to create another macvlan for Home Assistant so that I can force it to use a specific IP address (in my case I want to assign Home Assistant 192.168.1.12), the command "sudo docker network create -d macvlan --subnet=192.168.1.0/24 --ip-range=192.168.1.12/32 --gateway=192.168.1.1 -o parent=ovs_eth0 home-assistant" fails with the error message "failed to allocate gateway 192.168.1.1): Address already in use". Any idea how to mitigate this issue? Thanks again
@@josephlo6005 Well, yes, you can create only one macvlan network. But that shouldn't be the problem. What you could do (I've never tried it) is increase the IP range from --ip-range=192.168.1.12/32 to, for example, /30 - that would increase the number of available addresses to 2. BUT, I've never tested this. (Check here: github.com/docker/libnetwork/issues/2384)
Also, I've found a workaround (again, didn't test it) - create a macvlan with 2 subnets:
# Macvlan (-o macvlan_mode= Defaults to Bridge mode if not specified)
docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 pub_net
# Run a container on the new network specifying the --ip address.
docker run --net=pub_net --ip=172.16.86.10 -itd alpine /bin/sh
# Start a second container and ping the first
docker run --net=pub_net -it --rm alpine /bin/sh
ping -c 4 172.16.86.10
(docs.docker.com/v17.09/engine/userguide/networking/get-started-macvlan/#macvlan-bridge-mode-example-usage)
@@BeardedTinker Thanks. I will try that.
@@josephlo6005 Did you manage to do it?
One more thing which I cannot understand is why I get this error on a fresh HA install:
Logger: homeassistant.components.stream.worker
Source: components/stream/worker.py:176
Integration: Stream (documentation, issues)
First occurred: 20:20:47 (20 occurrences)
Last logged: 20:33:27
Error demuxing stream while finding first packet:
I just installed HA from Docker on Synology, and after creating a user and password it automatically added my Synology to the device list, because HA found it automatically.
Hard to say. Are you trying to add cameras? Or play media?
Looks like a problem with the stream and the video/audio format.
No. Look at the screenshot of my main page: yadi.sk/i/kHVxn3pyS2Xsxg. I don't have any module showing video from a camera on it. On the second screen I just have 2 modules with the state of the Synology disks and CPU.
@@ИльяПопов-т4н Do you have any camera in Synology? The only reason for this should be a problem with a stream, and streams are usually used for media (video).
You can stop receiving it if you add the following to your configuration.yaml file:
logger:
  logs:
    libav.NULL: critical
This will suppress error messages from that library - only critical ones will still be logged.
Yes. I have one camera in Synology.
I don't think it is a good idea to disable the logging of any errors. I don't want to be blind.
Check in the entities whether your camera was automatically added to HA.
This would suppress errors only for that library.
I am wondering if there is a way to set up remote access to Home Assistant using DuckDNS?
Are you using Synology? I have remote access set up via the Namecheap DNS service and an SSL certificate from Let's Encrypt.
It's not that hard, but you must always be aware of what you're doing, since it can compromise your home network security.
@@BeardedTinker I can access my HASS using Synology QuickConnect on port 8123. The problem is that when I tried to set up the SmartThings component it's not allowing me to do so - it says I need to set up remote access.
I don't have SmartThings, but I briefly went through the documentation.
You need to set up your domain name.
DuckDNS is not listed as a DDNS provider, but you can create a new entry.
Check the install info on the DuckDNS page: www.duckdns.org/install.jsp
You need to add a new provider in Control Panel - External Access - DDNS:
www.duckdns.org/update?domains=__HOSTNAME__&token=__PASSWORD__&ip=__MYIP__
Also, base_url: in configuration.yaml has to be set up, and it has to be your DuckDNS domain name.
After you do that, don't forget to also create an SSL certificate (also through Control Panel) with Let's Encrypt and provide the same DuckDNS name.
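As a sketch, the relevant bit of configuration.yaml could then look something like this (my-domain.duckdns.org and the port are placeholders for your own DuckDNS name and HA port):
http:
  base_url: my-domain.duckdns.org:8123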
+rachid amine Did you manage to solve this? Have you checked the web link for DuckDNS in Synology?
Great video!
Bravo! Thank you!
Thank you Nicola! And Happy New Year!
Just dropping a note that I loved your way of presenting. Keep it up. You explained everything nicely. I saw in another video that you created a separate network for PiHole. My question is: are your smart devices also set up on a separate network? And if so, is it the same way you set up PiHole?
Hi Jurgen and thank you for your comments - they mean a lot! 😉
Actually no, my devices are all on the same network - for now. This is something on my "to explore a bit more" list. I know everyone is hyped now about separating devices onto different networks, but I'll have to test something first and see if this is OK for a home setup and how it influences the whole smart home. Sorry for the long response, but here is what makes me question this setup: if you put all your smart switches, smart speakers, sensors etc. on a separate network, you still have to enable access from one network to the other for your Home Assistant setup. And when you do allow traffic between them, what's the purpose of the separate network?
In the PiHole and AdGuard (which I personally use) setup, I've created a separate bridged network so that the Synology can also use them for DNS resolution. This is due to network limitations of the setup. If you don't plan to have your Synology use PiHole/AdGuard as a DNS resolver and malware filter, you don't need it.
@@BeardedTinker First off, stop apologizing hahha. I love long answers if they are on topic. What I read online is that with outbound and inbound settings on VLANs and firewall rules you can separate what has access to a) WWW and b) LAN. So for example a sensor on one VLAN that does not need internet, while connected to a different VLAN for hassio that does. You could probably ping hassio but never the sensor, so security on the sensor is not really something to worry about. The reason I am asking now is that once you have 30-plus devices in your setup, creating different VLANs and changing things would become a daunting, time-consuming task. Love your vids, keep them coming!!! Oh yeah, don't mind people that say it's too long - once they run into an issue they will appreciate the in-depth explanation. And finally, ++++ for using best practices; I know enough devs that could learn something from you.
PS: using mDNS will enable casting across the different VLANs, in theory. I am all theoretical for now - my packages are coming this week. Last question hahha (now it's my turn to say sorry because it's kinda off topic): Watchtower updates to the latest version. I notice you don't have a rollback version set up in Docker, or do you?
@@jurgenwoen4986 My lab/recording setup is using my home network. And my home network is a bit "special" - something that I started to set up according to best practices and then... 2 years later 😀
I'm using MikroTik for my routing and MikroTik doesn't support mDNS, so a lot of fun stuff is unavailable to me - that's why ESPHome always shows all my devices as offline, since it can't find them using mDNS.
On the other hand, it allows me to whitelist/blacklist which devices can or can't go to the internet. Setting up VLANs is a bit easier, setting up routing is a lot harder, but it's fun to play with a "serious" router rather than the $2 ones that ISPs love to give to end users. I just never had time to go into more depth.
@@jurgenwoen4986 No, you are correct. Any rollback I do, I do by hand - I've had to do it once so far, I think. Also, I have no containers blacklisted/excluded - and that is something I've been considering for some time too. This is something I should start tinkering with soon.
Thank you for the video
Thank you for the comment!!!
Great tutorial, thank you
Thanks for the comment Ara!!!
Hello people. Can anybody give me the full Variable and Value entries for the Environment settings? I have an older version of Docker and have to input them manually for it to work. I did the same for the UniFi Controller in Docker and it worked.
Which variable and value? And what version of Docker do you have? I think this was version 18.03 or something like that.
@@BeardedTinker Thank you for the reply. I am searching for the wheel_link value and the full path from the Docker settings in Environment.
Not sure I get you. What is wheel_link? And what environment value are you looking for?
In terms of running HA, nothing should have changed.
@@BeardedTinker I wanted to share a screenshot but YouTube doesn't allow it. It's from the advanced settings in Docker when creating a container >> Environment. There are some values: TZ, PATH, LANG etc...
Most of them you don't need. As you have probably seen from the video, there are only a few that are needed. Apart from those, nothing else is needed (except maybe the device mapping for Zigbee/Z-Wave).
How is the Hass.io native Synology package running?
The one from this video th-cam.com/video/QdBYUbj0B5Q/w-d-xo.html? It is running great, but on a test setup. It has updated every time without a glitch and has had 0 errors so far. My main setup is still in normal Docker.
Like so many others installing HA... fine, but what about showing us an installation with HA running hass.io so we can install apps/plugins?
Your wish is my command 😉
th-cam.com/video/QdBYUbj0B5Q/w-d-xo.html
Would it be possible for you to do Hass.io on Synology inside Docker? It would be greatly appreciated.
Best wishes.
Thanks for the question Foggy! I've answered you on Discord and posted a couple of links/screenshots.
@@BeardedTinker Hi BT, indeed great tutorial! Would you mind sharing the Hass.io information?
@@JoDeVulder Thank you for your comment! I'll release a video next week on the hass.io installation. It's really easy and simple.
@@BeardedTinker Looking forward to that video. A big thanks in advance!
Hello,
I'm sorry, I got this kind of error when I followed the guide on how to install Home Assistant on a Synology device:
warning : published ports are discarded when using host network mode
But everything works and opens - is there a way to get feedback from the server that everything is OK?
Sorry about that, I'm new to all this stuff and am slowly getting into the whole kitchen :) But anyway, thank you very much for your videos, they are great :)
No worries about that, it's normal.
When starting the Docker container, if you use --net=host the -p (port definition) will be discarded. You could also remove -p (and the port numbers after it). I just like to leave both options in.
And welcome to Home Assistant fun - hope that you will enjoy it as much as I do!
@@BeardedTinker Thanks a lot for the major job that you do :) Oh yeah, I like it very much :D Hmm, maybe you should have your own Telegram channel - I think you would have a major community there :D It would also be easy to monitor and communicate, because I don't have Twitter or Facebook lol :D The only option to ask you questions is here in the comments - hope that's OK for you. Because my next question is about Grafana :D I did everything how it's supposed to be, but it is not showing me the 1/0/1... on sunrise like you talked about :)
I'm currently in the process of setting up a Discord server.
I probably won't use Telegram for this, as I don't find it easy to scroll through conversations. It's a great tool and I have been using it for about 8 years, but Discord should be easier to work with.
@@htdocs-eu I created a Discord server a few days ago: discord.gg/HkxDRN6 - it's much easier to manage than Telegram groups.
As for sunrise/Grafana - I will have to check a bit later. There was an issue with the sunrise sensor in Home Assistant 0.105. Not sure if the two are related or not, but a lot of my automations stopped working for at least 1-2 days, and restarting HA sometimes fixed it but not always. Unfortunately, since I had a VERY busy week at work, I never managed to look more into this issue.
Hi Beard Master. Do you by any chance know how to install Docker on a non-supported Syno with ARM architecture? I couldn't find the right install commands anywhere.
Thanks for the comment Jurgen! It all depends on what you want to install. Not all images support ARM, and it also depends on which ARM version (v6, v7,...) you want to install to. For example, Home Assistant can run on armv7 - docker pull homeassistant/armv7-homeassistant - and on armv6 by using the armhf image.
@@BeardedTinker Freak, I am all out of luck - I'm running on an armv5. Argh, thank you so much for the fast reply. Back to the drawing board for me.
How do I open Category at 3:21???? Without this, the video makes no sense!!
Hi Vlad! That is not part of the Synology - it is a standard terminal emulator. I'm using PuTTY; there is a link to it in the video description: www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
@@BeardedTinker Many thanks from Russia!