I've been promising this video for a few weeks at least, so I'm glad to be posting it finally. I hope you all get some useful information out of it. Thank you again to all of you for watching, subscribing, and to my patrons at Patreon.
Thank you for this. I've been trying to get NPM up and running for about a week, tinkering a few hours a day, reading various help articles, and watching TH-cam videos, and I would still be completely lost as to how it's supposed to actually work if I had not found _this_ video. I spent at least one extra day figuring out how to give the Portainer container its own self-signed SSL certificate--and pulled it off--and not until I saw this did I realize that's not even necessary. You've saved me hours of struggle. Luckily, I'd only deployed Portainer so far. :) To make sure my understanding is correct: it's safe for container IPs behind NPM to be unsecured HTTP-only, because NPM itself is providing the security (SSL) when interfacing with the outside world? It would also never have occurred to me to use the internal docker 172.* IPs--no other video I've watched so far does that.
Trying this today, doesn't work. Apparently this no longer works properly and has something to do with the security header? They posted in the FAQ on the NPM website stating why it doesn't work. It's unfortunate because I want to be able to manage NPM remotely, but have an access list and SSL/TLS in place. Thank you so much for the videos!
Thanks for the explanation. Just two hints: when securing your NPM with authentication, the SSL certs won't be able to update automatically, so after 90 days you have to disable the access list, renew the certs, and enable it again. Maybe not the most convenient way. And for the proxy hosts I suggest using the container name instead of the IP, because then you won't need to update anything in case the IP changes (e.g. the container cycled and got a new IP address).
I've been trying to set up a reverse proxy for the longest time, and you are the only YouTuber willing to publicly post their IP address when explaining how A records work.
My public IP changes over time, only once every few months, but that makes it a bit less dangerous for me to share it, whereas others may have a static address and therefore not want to share theirs. I hope being able to share openly helped you.
Had just set this up earlier today but didn't think about securing the actual manager with itself, thanks for that! You could go one step further and stop docker from exposing port 81 in the docker-compose file to stop any http access from the home network and force all traffic through https.
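The idea in the comment above can be sketched as a docker-compose fragment. This is a minimal illustration assuming the common NPM quick-start layout; the image tag and service name may differ in your setup:

```yaml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'    # public HTTP
      - '443:443'  # public HTTPS
      # - '81:81'  # admin UI left unpublished on the host; reach it
                   # via an NPM proxy host pointing at this container's
                   # internal port 81 instead
```

With 81 unpublished, the admin UI is only reachable through the HTTPS proxy host you create for it (or from another container on the same docker network).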
Glad it helped. And yes, not exposing the port is an option, but then connecting to the admin UI becomes a bit more of a pain. You need to set up a docker network specifically for it, but it's also not a horrible thing to do.
@AwesomeOpenSource Just got a domain with Cloudflare, as they now do everything under one service. Can you help me with what I do next? New to the whole domain setup, please?
I have no idea why, but I tried it so many times over and over and it never worked, until I watched this video in which you did exactly the same as me and it worked. IT stuff is sometimes really confusing. But thanks for that video, it's really great to learn from.
Glad it helped. Just note, once you set the access list rule up, you'll have to remove it every 90 days to renew the cert, then put it back on. A simple click, but some manual work there. I suppose you could create a rule for the LetsEncrypt IP that may get around that.
I really appreciate your videos and the level of detail covered. Thanks! Just to add: my IP is proxied/masked via Cloudflare, meaning my authorised IP in the access list gets blocked. I could get around this by creating a separate A record for the subdomain and not proxying it in Cloudflare.
Nice video, clean explanation, and good learning material as always. Keep up the good work! Any chance you would be interested in doing an updated video with Authelia and NGinX Proxy Manager? That would give you SSO with sites and 2FA.
Thank you for the compliment, I'm glad you're getting something out of my content. And yes, I actually just did one on Authelia about 2 weeks ago. Check it out! th-cam.com/video/5KtbmrUwYNQ/w-d-xo.html
Hi there. If I understand correctly what you're doing here - I think you probably should mention that you are port forwarding from your router to the internal IP address of the box that's running docker and hosting nginx proxy manager. Perhaps that was in the previous video?
I have a question: if I have 2 VMs, and each one has a web server with a panel inside (e.g. CWP), should I change the ports to non-default ones? e.g. 8080, 4443
I don't use it for the things you said, because HAProxy does that in my case. I use it for the websocket support in front of a UniFi controller, because that controller has a problem with websockets behind a reverse proxy, and NGinX Proxy Manager solves that better than an Apache vhost.
Hi, you are doing a great job with your tutorials; I am learning a lot. I am facing a small problem with Proxy Manager. I have an old ATA device that gives a certificate error; if I open its web page, I need to click "continue to the site" to reach the configuration page. My problem is that when this device is configured, Proxy Manager always gives me a 502 Bad Gateway, and I see in the logs "sslv3 alert handshake failure: SSL alert number 40" while SSL handshaking to upstream. Can you help me resolve the error? Thanks.
@AwesomeOpenSource Possible to make an updated video? For the life of me I simply can't get my domain and Proxy Manager to communicate with each other...
Thanks! I have NGINX with MariaDB with a database "db" with npm username and password. If I want to now setup Matomo in the same docker compose, would you recommend using the same MariaDB database and login credentials as NGINX, or separate? Thanks 😊
You can absolutely use the same MariaDB and credentials if it's just you accessing the data. Just use a different database inside of MariaDB. The MariaDB image uses an environment variable called MYSQL_DATABASE for the initial database; for a second database, you'd have to create it in the docker shell in the MariaDB container using the MariaDB CLI most likely, instead of the environment variables, but it should still work.
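A rough sketch of creating that second database by hand. The container name, database name, and user here are guesses based on the typical quick-start compose file; adjust them to match yours:

```shell
# Open a SQL shell inside the running MariaDB container
# ("nginx_db_1" is a placeholder container name):
docker exec -it nginx_db_1 mysql -u root -p

# Then, at the MariaDB prompt, create the extra database and
# grant the existing npm user rights on it:
#   CREATE DATABASE matomo;
#   GRANT ALL PRIVILEGES ON matomo.* TO 'npm'@'%';
#   FLUSH PRIVILEGES;
```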
Thanks for the hint! It worked! 😊 But it brought me to the next question: how and where do I put the Matomo tracking code in the HTML in my docker environment? Sorry for asking questions irrelevant to this video 🙈
So you create a "site" in Matomo for each website you want to track visits for, then it will generate a snippet of code. Take that code, put it in the HTML header of the site, and save. Then Matomo will start collecting information. I have a video on Matomo, but it's pretty old. May still work, though.
Hi!! Great video. In my case, the information about DNS and name servers helped me a lot with these things. I had tried before, but the URL really wasn't set up correctly. This video cleared my mind, thanks!
Thanks. I know it's an old video, but for me it's working only locally; my router always seems to be closing port 80. Is that a common problem? Do I need to host Proxy Manager in the cloud?
Some ISPs block ports 80 and 443 intentionally. One option is to run a VPN like WireGuard and use that as a tunnel to get past the port blocking of the ISP.
I have a dynamic IP from the ISP, but it doesn’t change very often, so easy enough for me to just update it manually if needed. If you have one that changes regularly, then you may want to look into using cloudflare and their dynamic IP service to keep things up to date, or check out my video on using DuckDNS for getting a url and keeping the dynamic IP up to date.
Great video, although it's 2 years old. I set up the home_ip method to secure NPM and seemed to save the "any" option, but now I can't log in. The NPM page loads, takes the user and password, thinks for a second, and goes back to the login page. Does this method still work? When I switch back to publicly available, I can see the dashboard.
It should still work. The re-routing of it to itself may be problematic depending on your router hardware. You may want to turn on hair-pinning (aka NAT redirect, or NAT reflection) if it's not already on. Beyond that, you may just check the docker logs and see if you can get any useful info on what might be happening from there.
Man, thank you. Very clean presentation of the extras Proxy Manager has to offer. What I am wondering: is it safer to have the Proxy Manager page with no SSL on your safe home network, or to let it run via NGinX with the IP block, since all the data leaves your home network?
I'm having problems with the sign-in part. I got NPM running and set up manage.mydomain; when I click it, NPM opens. When I add an access list with a user and password, with a Let's Encrypt cert, all good. Now when I go to the proxy host and click on manage, I get the sign-in box, but my username and password do not work. Where am I going wrong?
Try restarting the NPM database. Sometimes it just loses connection from the app... If that doesn't work, then check out his new quick start guide. The more recent NPM instructions don't include the MariaDB portion anymore, so there may be an issue. If you followed this video directly, you might check his new quick-start docker-compose and try it. You'll have to re-set up everything, though, if you do.
To resolve the port 80 fight: I kept apache2 on 80, and in the docker-compose.yml of nginx I changed the port to 8080 instead of 80. Now both run with no conflict. Do I run into a bump if nginx needs to serve apache2? Will I need to proxy pass from 8080 (nginx) to 80 (Apache)? Update: that is not the way it was meant to be for NGinX; it should be in the front and Apache in the back, served by nginx. I think I need to do the opposite: make apache2 listen on 8080 and have nginx proxy pass to serve it... 🤔
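The arrangement in that update could look roughly like this in compose: NPM stays in front on 80/443, Apache publishes a non-default host port, and an NPM proxy host forwards traffic to it. Service names and the 8080 port are illustrative:

```yaml
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '443:443'
  apache:
    image: 'httpd:2.4'
    ports:
      - '8080:80'  # host 8080 -> Apache's internal 80; an NPM proxy
                   # host then forwards the domain to host-IP:8080
```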
Also, if you reboot and for some reason you cannot get any docker containers running (like me, "docker ps" shows nothing), then start the services with "sudo docker-compose up -d" (the -d runs them detached); to stop them: "sudo docker-compose down".
Hello. When I change the username and password in the npm and db containers through a compose file, MariaDB refuses access to the npm container. I checked and rechecked several times; everything fits, but I get an error in MariaDB like "Access denied for user 'name'@'ip-address of the container' (using password: YES)". Even when I only change the password in the compose file and keep the npm default name, I get the error. I'm using the "new" basic compose file found on the site, which doesn't require a json file (at least that's how I understand it). I really don't know why the database refuses access to the npm container. Help?

Ah, and when I tried to add an access list through npm for the npm management site, I got the error "Existing token contained invalid user data", even though I just copied what you did... So right now I'm using the default file (I know I shouldn't, but I only use it to host Home Assistant, and even then I only have one connected device ^^; I'm a real beginner). I think there's no real risk leaving it like this for the moment, but I want to do the installation properly (thus securely) once I grow my home automation system.
I hit this recently, but when I looked back at my chosen password for the mysql root user, it had some characters that mysql didn't like, and that resulted in that error message. Beyond that, you might ask over on the JC21 GitHub page and see if there are any other answers or thoughts on why you might be getting this error. github.com/jc21/nginx-proxy-manager/issues
So, guessing, to make this more secure you could set up a WireGuard server (Gluetun/Netbird, PiVPN, etc.) and, if connecting from outside your home network, use the WireGuard-created network in the authorised connections in NGinX?
@AwesomeOpenSource Not sure what I'm doing wrong trying to get the ACL to work. Created a user, the prompt appears, but it won't accept the account... I followed the video.
@iamrage4753 Make sure you have the ACL set right. Double check the user info in the ACL as well. I know it sounds simple, but spelling mistakes happen.
I'm trying to deploy this in Portainer, but something is already running as a service and port 80 is busy. I have RaspiOS Lite with OpenMediaVault + Docker/Portainer. I have deployed several services with no issue, but I'm too much of a newbie to know how to fix this. I've tried several commands that should disable it, but it keeps restarting. I'd like to use NginxPM to be able to access my homeserver from outside (like Airsonic from my phone). Any hint?
If port 80 is in use, you can run the host port as a different port, then on your firewall forward port 80 requests from the internet to the port you set for NGinX Proxy Manager. It can then proxy the traffic to the various applications you are running. So it would look like this in your docker-compose:

ports:
  - 8082:80

Then on your firewall, you'd forward port 80 traffic to 8082.
@AwesomeOpenSource Yes! That did the trick! Thank you so much! Now I can study how to configure it; I really hope I can open my server to the outside!
Hi, I am facing an issue: when I apply the access list for NGinX Proxy Manager, it keeps asking with the pop-up login and I cannot get into the Proxy Manager. Is there any solution for it? Version 2.11.2.
Why does NGinX Proxy Manager work with apps in docker (on the same docker host as NPM), while for other servers I still need to put ports with the domain name? (The servers are other VMs with different IP addresses.) Can you explain that to me? What am I doing wrong?
This is a good question, but certainly not a simple one. You can use NGinX Proxy Manager with other docker apps on the same host in a couple of ways:

1. You can use it with the host IP and the port you assign / expose in the docker container (left side of the colon).
2. You can use it with the container name and the internal (non-exposed) port (right side of the colon), or you could even leave the port out of the "docker run" or "docker-compose" commands in this case. The only caveat is that the NPM container and the app container must be running on the same docker network, and it cannot be the default network; you have to create the docker network you want to use for this.

Yes, with NPM running on one server and the apps running on a separate server or in a separate VM, you have to enter the IP / domain of the server and the external (exposed) port number (left side of the colon) of the container. That's the only way NPM can proxy the traffic to a separate host machine. I hope that helps.
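Option 2 above can be sketched as a compose fragment. This is a minimal illustration; the network name "proxy-net" and the app service are made up, and the app image is just a stand-in:

```yaml
# Network created beforehand with: docker network create proxy-net
networks:
  proxy-net:
    external: true

services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '443:443'
    networks:
      - proxy-net
  myapp:
    image: 'nginx:alpine'  # stand-in for any web app
    networks:
      - proxy-net
    # no "ports:" needed -- NPM reaches it internally at myapp:80
```

In NPM's proxy host, you'd then set the forward hostname to "myapp" and the forward port to 80.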
@AwesomeOpenSource OMG… thanks… I struggled 10 hours because I used the external port in the internal forwarding of the nginx-proxy-manager entry. I was just about to install anew, which would have resulted in the same problem.
Indeed, but Authelia is a whole other level of work IMO... so for beginners, it's easier to take small steps, understand, then take the next step up, and so on. At least, that's how I learn, so that's how I share.
Great work! Just getting into running Docker and all of this, and really enjoying it. I am coming across a weird error on my instance, though, in which I cannot add SSL; it just results in an error on Nginx saying "Internal Error". I vaguely recall you doing something with the custom locations tab when you switch ports from 80 to the destination... Is that what I should be doing before generating the SSL?
I actually just had a thought, I am using Dyn-DNS to point to my IP (as we don't get static IP's here, without paying through the nose)... Could that be the issue?
It depends on the site you're running. If you're going to set a specific port for the SSL connection that isn't 443, then you want to add an entry for that port on the 2nd tab as well. These days, I usually just enter the port I mapped for port 80 (for instance, with 8081:80 I would put in 8081), and then enable SSL to do its thing at the container level. I have a newer follow-up video on using NPM with Docker and Docker-Compose at this link, where I go into a bit more detail and provide some better practices for using NGinX Proxy Manager (NPM) with Docker. th-cam.com/video/cjJVmAI1Do4/w-d-xo.html
So, quick question: wouldn't the best security be to not set up the entry for the nginx admin port 81 altogether? If you don't set up the entry, it would not be publicly accessible at all; you would only be able to access it on localhost or the server IP:81 when you are on the internal network. That is essentially what you did in the end with the ACL. And what would be the use case for changing the settings for nginx from outside your network? I assume it's not something you need to mess around with every day, especially not from outside your network. Just want to check my thought process here: am I thinking about this right, or am I missing something entirely? P.S. haven't tried this yet, but quite intrigued, and thanks again for the tutorial.
If you have this running inside a LAN, you should absolutely not expose it outside the LAN. But if you run this on a VPS like Digital Ocean, or SSD Nodes, etc., where there is really no "LAN" so to speak, you could simply not enable the exposed port (and when you need it, SSH into the server and re-run the compose file exposing it long enough to make changes, then re-run the compose command with the line commented out again). That's the safest way. But if you want to secure it, then on a VPS definitely run a firewall in front of this (and by that I mean a separate firewall application, as Docker likes to make firewall rules in iptables and expose the ports for you, and they are placed above all other rules). So, there's a lot to this, but if you want to expose it to manage a remote server, this is just one way of doing it. Hope that helps.
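The "expose it only while you need it" workflow described above, as a sketch. It assumes the admin port line in docker-compose.yml is commented out by default; the file layout and port are illustrative:

```shell
# 1. SSH to the VPS and temporarily uncomment the admin port mapping
#    ("- '81:81'") in docker-compose.yml, then re-create the container:
docker-compose up -d

# 2. Make your changes in the web UI at http://server-ip:81 ...

# 3. Re-comment the port line and apply again so 81 is closed:
docker-compose up -d
```

Re-running "up -d" after a compose-file change re-creates only the affected container, so this is a quick toggle rather than a full redeploy.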
@AwesomeOpenSource I think you should be able to run a tunnel over SSH, so you could connect to the port when the tunnel is open but without exposing it to the internet (I'm just learning about that capability of SSH!).
Something just went down as soon as I added the access control rule; I can't get past the nginx login screen. The login button doesn't work. Does anyone have an idea what might be wrong? Would appreciate any help on this. Thanks.
Not sure if you have to use MariaDB, but between restarts of your docker container, you need to make sure it doesn't lose your data. Also, MariaDB is more performant than SQLite after you get enough entries.
Can someone tell me the difference between using SQLite or MariaDB for NGinX Proxy Manager? I do not see any difference in performance, even with 28 websites.
Hi there, thanks for your efforts. I have an issue passing HTTPS to a Magento 2.4 that has its own Let's Encrypt cert. I couldn't make it work at all. Do you have an idea how to do it?
I haven't tried to run Magento, but you may just replace the certs in NPM with the ones you got for Magento and assign them to that proxy host. If you look in NGinX Proxy Manager (NPM), you'll see a Certificates tab where you can add your own certificates if you prefer.
@AwesomeOpenSource Thanks for replying! I kinda figured out what's causing this issue. I have Cloudflare set up with NPM, and by proxying the DNS records on Cloudflare AND using access lists on NPM, it doesn't work. I verified this by trying 2 things:
- I unproxied the DNS record and kept the access list, which worked;
- I tried adding the anycast IPs of Cloudflare to the access list, which also worked.
Of course, the latter doesn't make sense to keep, so now I'm debating whether to:
- unproxy the DNS records on Cloudflare and use access lists to restrict access to internal services from outside, OR
- figure out a way to use local DNS records (with Pi-hole) so that I don't have to add DNS records on Cloudflare and don't have to bother with access lists.
Unfortunately, I'm having trouble with the latter choice; I can't seem to make it work the way I'd like. What I'm trying to achieve here is to be able to proxy the internal services (like Pi-hole, Portainer, even NPM like you did in the video) but only access them from inside. I could solve this by doing what you did in this video, but it kinda scares me to think that I'd be exposing my public IP address (by not proxying the DNS records through Cloudflare). Are you still using this setup like in the video? If so, aren't you scared of possibly exposing your public IP address? Also, what do you think about these 2 options? Would you recommend one of them, or even another option that I don't know about?
Great tutorial, but I have one concern: I can still access my NGinX manager at port 81 directly using the IP address of the VPS server. How do I disable access to the server by IP address, so that it can only be opened through the domain?
@V1nc3nt00 Yes - as you saw, ufw doesn't work by default; it turns out Docker sets its own firewall rules that bypass it. Therefore, what I did was change NPM to only bind ports 80 and 443 (in Portainer it was set to 80-81 and 443, so I just changed it to 80 and 443). You still direct your NPM virtual host to localhost:81, as the NPM container still exposes that port internally; it just means it isn't available externally anymore. I hope that makes sense; just reply if not!
Great tutorial - but I have a question. I have set up different proxy hosts and it is working great. I also have an RPi running Pi-hole, and I created, let's say, "pihole.example.com" in NPM. Safari (my default web browser) on macOS redirects all HTTP addresses to HTTPS, and that creates an issue with my Pi-hole, which is an HTTP address. It works fine in Chrome. That means I would need to make an SSL certificate for the Pi-hole to get it to work in Safari. I am not interested in connecting to the Pi-hole from outside my network. Of course, I could use the local IP address if I wanted to connect to the Pi-hole, but it is harder to remember. I have set up the Custom Locations so it forwards to "/admin", and that works fine in Chrome. Any ideas on how to do the SSL setup for the Pi-hole? Because I only get error 502 Bad Gateway.
So, for Pi-hole, because it's for DNS, I just use it with the IP. I didn't mess with trying to set up an FQDN since I won't be accessing it off my network anyway. I would imagine the SSL part could work, but you'd want to set up a DNS challenge for your domain with a wildcard, so you can apply the certs without having to have external access.
@AwesomeOpenSource Well, I wouldn't mind the SSL if Safari would work with HTTP. I use NPM mostly locally, but you mean the problem when it comes to SSL is the DNS itself? It feels more complex to fix this than to learn the IP of the DNS :-)
Yep, that's the only thing you have to be aware of. Set up a longer-lived custom cert, like the 15-year cert from Cloudflare, or be prepared to turn off the ACL long enough to renew the certs, then put the ACL back on. Alternatively, you could set up DNS-only challenges for Let's Encrypt.
No, as Let's Encrypt will first try to reach the site via port 80 before giving out the certificates for SSL. I mean, you could only allow traffic on 443, but then Let's Encrypt wouldn't be able to issue the certs for SSL.
@AwesomeOpenSource I did manage to issue the certificate for the nginx DuckDNS subdomain. Can I shut off port 80 after the cert is issued, and then open it again when renewal is due?
Thank you for this video. But what about if I have multiple domains running on virtual machines in my home network? Is there a way I can route nginxproxy to other VMs or sandboxes within my internal network? Some of my VMs are not always on, because they are on my laptop. Currently, my home router will only allow me to NAT one address externally to the internet. My goal is to utilize multiple domains with one secure IP. Thanks.
For each domain, you need to create an A record and point it to your home public IP. Then handle each domain / sub-domain request with NGinX Proxy Manager and point those to the proper VM or machine inside your network. I have several domains pointing to my one public IP; NPM is running on one server inside my network, and it handles all of my traffic routing for those requests.
@AwesomeOpenSource Is that one server on a dedicated static IP, or is it utilizing your internal router IP? I think the issue is I should allocate all routing externally and firewall with a docker network to bridge the internal networking. I can see nginxproxy hitting one of my Apache servers, but I'm still getting a "504 Gateway Timeout". Perhaps the virtual-machine networking is also affecting the route. Thanks again!!
@tom98vr4 Yeah, my server is inside my network, so it goes like this: Internet -> my public IP -> home router -> ports 443 and 80 to my server (statically set IP now) -> NGinX Proxy Manager on that server -> any other server I want to serve back out.
@AwesomeOpenSource I am not sure what I am doing wrong, but I am also getting "504 Gateway Timeout" when using a proxy host to forward port 81. My structure is like this: Internet -> VPS -> NGinX Proxy Manager + Docker. I am accessing it through the internet only. Any pointers?
There is a program installed along with NPM that runs a cron job essentially and will attempt to update the certs automatically every 75 or 80 days I believe, as the LetsEncrypt certs expire after 90 days.
@AwesomeOpenSource Do the Let's Encrypt certificates renew properly if the access list is set to home only? Or every 90 days do we have to make it publicly accessible so Let's Encrypt can renew the cert? Sorry, I'm still new to all of this. Thanks for all your hard work on these videos!!!
Thanks for the video; it was very comprehensive. For whatever reason, though, I cannot get the access list based on IP to work. It seems to work for a moment, but I think that is caching. I am using my public IP. I have a UniFi UDM - not sure if that is causing an issue. Any ideas?
@AwesomeOpenSource 403 error. I work in IT, so you would think I would be better at giving details. I wish they included an option for 2FA; I would prefer that.
Just a side note: do not include a $ character in the user's password (same for the mysql root password, no $ as a character), otherwise you will get: Invalid interpolation format for "environment" option in service "db": "MYSQL_PASSWORD=..."
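If the password really must contain a $, docker-compose lets you escape it by doubling it, so it isn't treated as variable interpolation. A sketch (the password is a placeholder):

```yaml
services:
  db:
    environment:
      # "$$" is compose's escape for a literal "$", so this is
      # stored as the literal password "pa$word":
      - MYSQL_PASSWORD=pa$$word
```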
Do we have to port forward NGinX Proxy Manager to make it work? I've been struggling trying to redirect a webserver that's on another machine with it, and I saw a couple of places saying that it needs to be port forwarded.
So, you forward the ports from outside your network to the server running NGinX Proxy Manager, and that should only be port 80 and 443. Inside your network, you shouldn't have to port forward unless you have the firewall up on each individual machine. In that case you need to open the port(s) on the individual machine that allow the application to work.
@novaleary4488 Probably a good idea, but if you don't forward port 81, then the port the interface is running on isn't open to the internet and should only be accessible from inside your network. If you plan to access it from outside the network, then yes, you should definitely secure it.
@AwesomeOpenSource Ah ok. So, basically, in order for nginx to redirect a request to the server from outside the network, I have to forward ports 80 and 443 to the nginx hosting machine, right?
Could you do a video on apache2 with a docker container & NGinX Proxy Manager? How about an email server? I've got no problem just using good old nginx itself, but would rather use the manager.
Let me see what I can do, but Apache and NGinX kind of do the same things, so I'm not sure it's useful to do that. Email is a different beast, as NPM is really meant for forwarding traffic to web sites vs. mail traffic, as far as I know. But let me see what I can find.
No, you could use something like Dynamic IP updating through Cloudflare, or DuckDNS, or if your registrar has dynamic IP updates, look around for a docker container you can use for that. I have a video on Duck DNS out there already. Maybe that would get you started.
I've installed via docker and everything is working, except I cannot access the admin portal externally; it works just fine internally. I have opened ports 80 and 443 and pointed them to the docker server. I have created an A record pointing to my external IP, and I have tested my A record with telnet and another port pointing to another service, and that works. Any ideas?
@AwesomeOpenSource If I go to the public IP internally, I arrive at the NGINX page, and when I specify port 81 I get to the admin side. Externally I get "site cannot be reached", but when I try to access another service externally, pointing to another machine, no issues.
@AwesomeOpenSource Yeah, 100% positive both 80 and 443 (both using TCP) are pointed towards my host, and I swear I've had port 80 open in the past to run a simple Apache web server, so the ports shouldn't be blocked at an ISP level. But I'm going to give them a call... lol, I shouldn't be struggling so much to do something this simple.
@James-li8cm In the case where you are accessing it locally or via a VPN, you don't need to, but if you want to access the control panel from outside the network via the internet, then creating a proxy entry for the admin panel and not having port 81 open is the safer way to go.
@AwesomeOpenSource But doesn't blocking port 81 at the firewall level do that already? Because the proxy doesn't work if it can't access port 81 due to the firewall rules? I say this because I am running into this exact problem... I am having difficulty getting it to block, between Cloudflare's funkiness and the Docker/ufw issues... it's been a nightmare. I thought about securing it through the proxy server, but that's moot if you can access it from the IP/port. Then again, blocking the port would work... but that would eliminate the need to lock it down at the proxy server level...
@James-li8cm It just depends on where you are proxying from and to, but what you are accomplishing is giving access through 443 to the admin interface, which is proxied to the internal 81. But you've closed off outside access to 81 directly.
Do you think I can use this software in kubernetes? Like deploy it in a pod, point a domain to the server external ip and setup forwarding through NPM like i would with my regular docker containers?
I'm not familiar enough with Kubernetes to know if that would work. But if it's just managing docker under the covers, then should work in theory. Sorry...maybe someone else has done it.
@AwesomeOpenSource Hey, thanks for your quick reply! I'm pretty much a noob with k8s too, but I'll see if I can get it going. Anyway, thanks for the work you put into your videos and for the community!
Same as your manage.routemehome address, I created a DNS entry for NPM itself with LE. All great. The NPM login page looks good, but when I try to log in, it only reloads the login page. It works when I use the IP address instead of the newly created DNS name. Any ideas what could have happened? I've tried different web browsers - same result. It only happens on the proxy itself.
Is there any message at all on the login screen? If no, then I'd say check the logs for the container and see what it says when you try to login that way.
@AwesomeOpenSource No message at all; the page just reloads. Checked the Portainer logs of nginx_app_1 and found this: [3/20/2021] [3:03:47 PM] [Express] › ⚠ warning Existing token contained invalid user data. Tested changing the password, but no difference. I also edited the proxy host from https to http - same result and same warning in the logs.
You can still access it locally, I believe. But you can use something like DuckDNS (lots of tutorials on it) to make sure your non-static IP doesn't have that effect on you.
@AwesomeOpenSource When you set up NGINX with SSL manually, you would generate DH parameters. So this is not possible for NPM, then? I would hope NPM would add an option for this.
Nice video, thanks! Everything works fine, and I can reach nginx with my URL and SSL. BUT if I start playing with access lists, it no longer works! I checked my public IP and inserted it under Access as you showed in your video; I always get 403 Forbidden. I also tried with a user; there I have to insert the registered user's email and password? The additional login screen appeared, and I already thought it was working... but after login, the normal login appeared, and then I tried to log in, but nothing happens. It always shows the nginx login, and after trying to log in, the login and password fields are empty and nothing happens. Any idea?
No, but the ACL is really a pain. A great option, but seriously painful. If you're trying to access NginX Proxy manager login from outside though, I'd say keep that an internal network access only thing, just so you don't have to mess with it, or only allow access through a VPN from the outside. Keep port 81 closed for sure.
Dynamic DNS should work, as the IP change would be handled before anything reaches the proxy manager, so you would have your DDNS name pointing to whatever your current IP is.
Hi, I'm using the Bitwarden password manager with nginx proxy manager. When I use a username and password I have an issue. I try to connect from outside and enter my username and password; after that I can see the Bitwarden screen and enter my Bitwarden user information, but after that nginx asks for the username and password again. I enter them again, but they're not accepted. What could the issue be?
@@okanerdem Sometimes, that ACL can get triggered over and over if the software is redirecting somehow...it may just need a special rule added under the advanced tab, but you'll have to check their pages for that info.
Can't get this to work for my Raspberry Pi. After installing, it shows a bad connection at the port 81 login: "Unchecked runtime.lastError: The message port closed before a response was received."
What base OS are you using on the RPi? Is the firewall on by any chance? Sometimes people have had to remove the container and images and just try again, and then it all works; no idea why. Haven't tried it on a Pi myself.
@@maxlimgj Can you ensure nothing else is using port 80 and port 443, and that you did set up the NPM ports as host 80 and 443 mapped to container 80 and 443?
@@AwesomeOpenSource Thank you. Because there is an option to forward another port for a specific path. But I could not figure out a way to just forward another port for the same IP.
So, if you're asking how to do it from the outside to the inside of your network, you want to set the wildcard "*" for your domain "example.com" as an A Record, and you want to point that to your public IP address. Next, you need to port-forward ports 80 and 443 in your router / firewall to your machine's private IP of 192.168.1.56. Now, when you visit "anything.example.com" it should route to your home public IP, and pass through your router firewall to your NGinX Proxy Manager server. NPM will then check its records to see if "anything.example.com" is a known server, and if so, will forward the traffic to the appropriate server inside your network. Hope that helps.
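That chain can be sketched out like this (the domain, IPs, and private address here are placeholders, not real values from the video):

```shell
# 1) DNS: one wildcard A record at your DNS host
#      *.example.com.   IN  A   203.0.113.10    # your public IP
# 2) Router/firewall: port-forward to the NPM machine
#      WAN 80  -> 192.168.1.56:80
#      WAN 443 -> 192.168.1.56:443
# 3) After propagation, any subdomain resolves to the public IP,
#    and NPM matches the hostname against its proxy-host list.
#    A quick check once the record exists:
#      dig +short anything.example.com
```

Any subdomain you haven't defined as a proxy host will still hit NPM; it just won't be forwarded anywhere.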
@@AwesomeOpenSource thanks for the response. I checked and don't see an option to put a wildcard server name in NPM, because I want to proxy all traffic for a wildcard domain to another destination, so the people working at the destination can configure whatever they like on it - another level of proxy, I think. Btw, if you find something interesting please share it. Many thanks again!!!
@@andrewnhien9714 The option for * routing in NPM may not work. I suppose you could try just entering the url as *.example.com in the Details tab, and then forwarding that to the IP at the destination, then test it. I just don't know that it will work.
@@AwesomeOpenSource I tried putting it in, but it seems NPM doesn't understand it. Thanks. Btw, I see NPM can't group the proxied domains; I have over 100 domains, and if I put them in NPM it's really hard to manage. Do you have any option for a proxy-management tool (I mean another tool)? And if I have to switch, is there any way to convert the config from the original nginx files to NPM?
Haven't planned on any federated videos yet, Mastodon, Pixelfed, etc. I have been reading up on them, but want to understand the update process better, as that's where I understand running federated systems to need more patience and knowledge. It's in the future, but not sure when yet. And, yeah, I get busy sometimes and can't answer right away, or I see a question where I need to think about the answer first, or go do some research, so I don't answer right away. And of course time differences and schedules can have a delaying effect. I'm Central Time US, and work a regular job during the week, and my family has all kinds of little projects on the weekend. Time, time, time... if only I could master time.
@@AwesomeOpenSource sir, please help with Rocket.Chat: after changing the file upload system to the "file system" method, when uploading an mp3 it plays well, but the MP3 player slider and the duration (elapsed time) are missing. I don't know how to fix this, sir. I gave the uploads folder permission 775, and also tried 777, but it still isn't showing the player slider and time duration.
@@solidsnake5239 Not sure why it would do that, but I haven't messed with filesystem storage in years. Are you planning to have a company of users and their attachments? If not, then I would stick with GridFS personally.
@@AwesomeOpenSource The reason I chose file system over GridFS is I figured it would load the uploads fast, because it's uploading and downloading from the same server where Rocket.Chat is deployed. I do see some speed gain in that case, but it's true the mp3 slider is no longer visible, and the time duration (for example, how long the mp3 has been playing) isn't visible either. It all works fine on GridFS except for speed; the deployment is getting slower and stuck on GridFS, I guess. I'm running on a Linode 4GB, 2 CPU, 80GB.
@@AwesomeOpenSource When I took the local IP address for NPM, added the FQDN, pointed it to the IP address for the nginx proxy, and saved it, I lost all access to the NPM web page. I like your videos; they are the best out there, keep up the 👍. I wish you could connect to my system and help me out that way, so I can learn more from you. Thanks
"kind of minimal instructions" But the quick setup that the narrator likes so much shows the same information at GitHub as it does under the setup he first showed. I don't understand what he's thinking. In his previous video, he says one shouldn't be setting up docker as root, but then that's what he's doing the entire time. How confusing! Why aren't his actions consistent with his words? A teacher? He'll have the most confused students. Goodness.
Not knowing which video you watched last, it's hard to know if you've watched them in order. I have learned over time not to run things as root in docker from my viewers comments as I've grown in my open source journey. I try to avoid it whenever possible.
I've been promising this video for a few weeks at least, so I'm glad to be posting it finally. I hope you all get some useful information out of it. Thank you again to all of you for watching, subscribing, and to my patrons at Patreon.
Thank you for this. I've been trying to get NPM up and running for about a week, tinkering a few hours a day and reading various help articles and watching TH-cam videos, and I would still be completely lost as to how it's supposed to actually work if I had not found _this_ video.
I spent at least one extra day figuring out how to give the Portainer container its own self-signed SSL certificate--and pulled it off--and not until I saw this did I realize that's not even necessary. You've saved me hours of struggle. Luckily, I'd only deployed portainer so far. :)
To make sure my understanding is correct, it's safe for container IPs behind the NPM to be unsecured HTTP-only, because NPM itself is providing the security (SSL) when interfacing with the outside world.
It would also never have occurred to me to use the internal docker 172.* IPs--no other video I've watched so far does that.
Trying this today, doesn't work. Apparently this no longer works properly and has something to do with the security header? They posted in the FAQ on the NPM website stating why it doesn't work. It's unfortunate because I want to be able to manage NPM remotely, but have an access list and SSL/TLS in place. Thank you so much for the videos!
Thanks for the explanation. Just two hints: when securing your NPM with authentication, the SSL certs won't be able to renew automatically. So after 90 days you have to disable the access list, renew the certs, and enable it again. Maybe not the most convenient way.
And for the proxy hosts I suggest using the container name instead of the IP, because then you won't need to update anything in case the IP changes (e.g. the container cycled and got a new IP address).
Oh yes. I have learned so much since I made this video way back, but great tips indeed.
Really great tips. Thank you
@@AwesomeOpenSource going to bring an update on this? Maybe including fail2ban? 😁
I've been trying to set up a reverse proxy for the longest time, and you are the only YouTuber willing to publicly post their IP address when explaining how A records work.
My public IP changes over time, only once every few months, but that makes it a bit less dangerous for me, whereas others may have a static address, and therefore don't want to share it. I hope my ability to share openly helped you.
@AwesomeOpenSource Others publish their domain names. Anyone who can handle nslookup is able to determine their IP anyway, even if it changes.
Learned something today and it works. I managed to block port 81 too in firewall thanks to the comment section. Thank you.
Glad you got it blocked, and that you are now more secure!
Just started my home lab and found your channel through a few question searches. Fantastic content.
5/5
+1 sub good sir
Happy to hear it, and welcome aboard!
Had just set this up earlier today but didn't think about securing the actual manager with itself, thanks for that! You could go one step further and stop docker from exposing port 81 in the docker-compose file to stop any http access from the home network and force all traffic through https.
Glad it helped. And yes, not exposing the port is an option, but then connecting through becomes a bit more of a pain. You need to set up a docker network specifically, but it's also not a horrible thing to do.
@@AwesomeOpenSource can you look at covering the same type of config with Caddy? Maybe also with CrowdSec; might be a good combo, please
This proxy is for sure one of the best things for securely homelabbing and even on production! Thanks so much once again!
My pleasure.
@@AwesomeOpenSource just got a domain with Cloudflare, as they now do everything under one service. Can you help me with what I do next? New to the whole domain setup, please?
A great tutorial. Like that you make your tutorials very detailed. You don't miss a bit. Thanks
Glad you like them!
Thank you, that is the best tutorial you've streamed. It answers lots of questions I had. Well done, love your videos.
Thanks so much!
I have no idea why, but I tried it so many times over and over and it never worked, until I watched this video, in which you did exactly the same as me, and it worked. IT stuff is sometimes really confusing. But thanks for that video, it's really great for learning.
I'm glad it finally worked for you. That's the important part.
the best video for nginx for beginners thank you!
My pleasure.
Thanks for the access list part. That was the missing info I needed. I wanted to use SSL for internal-only stuff, and this info should solve my inquiry.
Glad it helped. Just note, once you set the access list rule up, you'll have to remove it every 90 days to renew the cert, then put it back on. A simple click, but some manual work there. I suppose you could create a rule for the LetsEncrypt IP that may get around that.
@@AwesomeOpenSource or just use Let's Encrypt DNS certificates; they are much easier to use
I really appreciate your videos and the level of detail covered. Thanks!
Just to add: my IP is proxied/masked via Cloudflare, meaning my authorised IP in the access list gets blocked. I could get around this by creating a separate A record for the subdomain and not proxying it in Cloudflare.
Correct.
Super great video 😊👌
Thank you
Glad you liked it.
Very well explained and useful tutorial. Thank you!
My pleasure
As always good stuff. You can get the PUID & PGID by just typing id plus the username, for example: id brain
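For anyone following along, the command meant here is `id`; a quick sketch (usernames are just examples):

```shell
# Print the numeric user and group IDs commonly used as
# PUID/PGID values in docker-compose environment sections.
id -u            # numeric user ID  -> PUID
id -g            # numeric group ID -> PGID
id "$(whoami)"   # full breakdown, e.g. uid=1000(name) gid=1000(name) groups=...
```

Copying those two numbers into PUID/PGID makes files the container creates owned by your normal user instead of root.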
Very good to know! Thanks for that. Glad you enjoyed it.
Nice video, clean explanation and good learning material as always. Keep up the good work!
Any chance you would be interested in doing an updated video with Authelia and Ngnix Proxy Manager? This will give you SSO with sites and 2FA.
Thank you for the compliment, I'm glad you're getting something out of my content. And yes, I actually just did one on Authelia about 2 weeks ago. Check it out! th-cam.com/video/5KtbmrUwYNQ/w-d-xo.html
Hi there. If I understand correctly what you're doing here - I think you probably should mention that you are port forwarding from your router to the internal IP address of the box that's running docker and hosting nginx proxy manager. Perhaps that was in the previous video?
I'm sure it was. I have a few videos where I break this down. But thank you for the feedback.
awesome, thanks to explain clearly,
I have a question: if I have 2 VMs, and each one has a webserver with a panel inside (e.g. CWP), should I change the ports to non-default ones?
e.g. 8080, 4443
I don't use it for the things you said, because haproxy does that in my case. I use it for the websocket support in front of a UniFi controller, because that controller has a problem with websockets behind a reverse proxy, and nginx proxy manager solves that better than an apache vhost.
Very nice.
Super nice video, Thank you
Thank you too
well detailed explanation
Glad you liked it
Great video, can you do another one explaining how to use npm with ssh and rdp?
Let me see what I can find, but I feel like SSH and RDP should really be used with a VPN more than a reverse proxy.
Hi, you are doing a great job with your tutorials; I am learning a lot. I am facing a small problem with proxy manager. I have an old ATA device that gives a certificate error when I open its web page, and I need to click "continue to the site" to open the configuration page. My problem is that when this device is configured, proxy manager always gives me 502 Bad Gateway, and I see in the logs "(sslv3 alert handshake failure: SSL alert number 40) while SSL handshaking to upstream". Can you help me resolve the error? Thanks
Thanks for doing these videos Mick. Can you do one for Piwigo?
Looks very interesting. Let me look into it a bit more.
Even easier than using the Docker IP you can even just use the container name 🙂
Oh yeah, i've been going that direction more recently.
@@AwesomeOpenSource possible to make an updated video? For the life of me I simply can't get my domain and proxy manager to communicate with each other...
Awesome explanations. What is that analytics conatiner? Can you please share the name? Thanks!
I use Matomo to get some analytics on visits to my site. It's open source, and privacy protecting, which is Awesome!
Thanks! I have NGINX with MariaDB with a database "db" with npm username and password. If I want to now setup Matomo in the same docker compose, would you recommend using the same MariaDB database and login credentials as NGINX, or separate? Thanks 😊
You can absolutely use the same MariaDB and credentials if it's just you accessing the data. Just use a different database inside of MariaDB. There's an environment variable MariaDB uses called DB_NAME; that's the one you want to use with a different name. You'd most likely have to create it in the docker shell in the MariaDB container using the MariaDB CLI, instead of via the environment variables, but it should still work.
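As a sketch of that CLI step: the container name `db`, database name `matomo`, and user `npm` below are assumptions; adjust them to match your compose file (older MariaDB images ship the client as `mysql` instead of `mariadb`):

```shell
# Open the MariaDB CLI inside the running database container and
# create a second database for Matomo alongside NPM's database.
docker exec -i db mariadb -u root -p"$MYSQL_ROOT_PASSWORD" <<'SQL'
CREATE DATABASE IF NOT EXISTS matomo;
GRANT ALL PRIVILEGES ON matomo.* TO 'npm'@'%';
FLUSH PRIVILEGES;
SQL
```

Then point Matomo's setup wizard at the same database host with the shared credentials and the new database name.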
Thanks for the hint! It worked! 😊 But brought me to the next question, how and where to put this tracker code in html from Matomo in my docker environment? Sorry for asking irrelevant questions to this video 🙈
So you create a "site" in Matomo for each website you want to track visits for, then it will generate a snippet of code. Take that code, put it in the HTML header of the site, and save. Then Matomo will start collecting information. I have a video on Matomo, but it's pretty old. May still work though.
Hi!! Great video. In my case the information about DNS and name servers helped me a lot. I had tried before, but really hadn't set the URL up correctly; this video cleared my mind. Thanks!
Glad it helped.
Thanks. I know it's an old video, but for me it's working only locally; my router seems to always close port 80. Is that a common problem? Do I need to host the proxy manager in the cloud?
Some ISPs block ports 80 and 443 intentionally. One option is to run a VPN like wireguard and use that as a tunnel to get past the port blocking of the ISP.
When you set the A records and plugged in your IP address, was that a static one? And if it is dynamic, how do you do the A and CNAME records?
I have a dynamic IP from the ISP, but it doesn’t change very often, so easy enough for me to just update it manually if needed. If you have one that changes regularly, then you may want to look into using cloudflare and their dynamic IP service to keep things up to date, or check out my video on using DuckDNS for getting a url and keeping the dynamic IP up to date.
Great video, although it's 2 years old. I set up the home IP method to secure NPM, and seemed to save it with the "any" option, but now I can't log in. The NPM page loads, takes the user and password, thinks for a second, and goes back to the login page. Does this method still work? When I switch back to publicly available, I can see the dashboard.
It should still work. The re-routing of it to itself may be problematic depending on your router hardware. You may want to turn on hair-pinning (aka NAT redirect, or NAT reflection) if it's not already on. Beyond that, you may just check the docker logs and see if you can get any useful info on what might be happening from there.
Man, thank you. Very clean presentation of the extras proxy manager has to offer. What I am wondering: is it safer to have the proxy manager page with no SSL on your safe home network, or to let it run via Nginx with the IP block... because then all the data leaves your home network.
I personally keep my home network NPM port closed for management. That doesn't mean it's safe, but the firewall blocks that port.
Thanks for doing this videos, could you also do one with streams and how this work? Thanks in advance.
Happy to try. Can you tell me what kind of stream? Like peer to peer video chat, or like Jellyfin?
I'm having problems with the sign-in part. I got npm running and set up "manage" at my domain; when I click it, npm opens. When I add an access list with a user and password, with a Let's Encrypt cert, all good. Now when I go to the proxy host and click manage, I get the sign-in box, but my username and password do not work. Where am I going wrong?
Try to restart the NPM database. Sometimes it just loses connection from the app...if that doesn't work, then check out his new quick start guide. The more recent NPM instructions don't include the maria db portion anymore, so there may be an issue. If you followed this video directly, you might check his new quick start docker-compose and try it. You'll have to re-setup everything though if you do.
@24:41 - is it possible to put in your IP access list "let's encrypt" subnets ???
I think you can do that. You'd add the lets encrypt server IP(s) to your ACL.
To resolve the port 80 fight:
I kept apache2 on 80.
In the docker-compose.yml of nginx I changed the port to 8080 instead of 80.
Now both run with no conflict.
Do I run into a bump if nginx needs to serve apache2? Will I need to proxy pass from 8080 (nginx) to 80 (Apache)?
Update:
That is not the way Nginx was meant to be used; it should be in the front, with apache in the back, served by nginx. I think I need to do the opposite: make apache2 listen on 8080 and have nginx proxy pass to it... 🤔
I agree, I think you should put NGinX in front of Apache.
Also, if you reboot and for some reason you see no containers running (docker ps comes back empty), then start the services again from the compose directory:
sudo docker-compose up -d
(the -d flag runs it detached, in the background)
To stop it:
sudo docker-compose down
Indeed.
Thanks for this!
My pleasure.
Also, using IP addresses is not a good idea. Use the DNS system docker has for container names, so when the IP addresses change it will still work.
Indeed, I've learned a ton since I made this video.
Great, is it possible to make a video to secure NPM with crowdsec and it's open Source? Thanks
Let me check out crowdsec. Not sure what it is. But if so, I'll see if I can make one.
@@AwesomeOpenSource aaaawesome thanks
Hello. When I change the username and password for the npm and db containers through a compose file, MariaDB refuses access to the npm container. I checked and rechecked several times; everything matches, but I get an error in MariaDB like "Access denied for user 'name'@'ip address of the container' (using password: YES)". Even when I only change the password in the compose file and keep the npm default name, I get the error.
I'm using the "new" basic compose file found on the site, which doesn't require a json file (at least that's how I understand it). I really don't know why the database refuses access to the npm container. Help?
Ah, and when I tried to add an access list through npm for the npm management site, I got the error "Existing token contained invalid user data", even though I just copied what you did...
So right now I'm using the default file (I know I shouldn't, but I only use it to host Home Assistant, and even then I only have one connected device ^^; I'm a real beginner), so I think there's no real risk in leaving it like this for the moment, but I want to do the installation properly (thus securely) once I grow my home automation system.
I hit this recently, and when I looked back at my chosen password for the mysql root user, it had some characters mysql didn't like, which resulted in that error message. Beyond that, you might ask over on the JC21 GitHub page and see if there are any other answers or thoughts on why you might be getting this error. github.com/jc21/nginx-proxy-manager/issues
So, guessing: to make this more secure, you can set up a wireguard server (gluetun/netbird, pivpn, etc.) and use the wireguard-created network in the authorised connections in nginx when connecting from outside your home network?
I have a Netmaker video where I will show just how to do what you're saying, and I have one on a wireguard option called Selfhosted gateway.
@@AwesomeOpenSource question, what happens if you have a dynamic public IP that changes on connection renew for those access lists?
@@AwesomeOpenSource not sure what I'm doing wrong trying to get the ACL to work. I created a user, the prompt appears, but it won't accept the account... I followed the video.
@@iamrage4753 make sure you have the ACL set right. Double check the user info in the ACL as well. I know it sounds simple, but spelling mistakes happen.
@@AwesomeOpenSource already checked. Are there any password length/type requirements? Does the account have to be different from the nginxpm account?
I'm trying to deploy this in Portainer, but it's already running as service and port 80 is busy. I have a RaspiOS Lite with OpenMediaVault + Docker/Portainer. I have deployed several services with no issue, but I'm too newbie to know how to fix this. I've tried several commands that should disable it, but it keeps restarting. I'd like to use NginxPM to be able to access my homeserver from outside (like Airsonic from my phone). Any hint?
If port 80 is in use, you can run the host port as a different port; then on your firewall, forward port 80 requests from the internet to the port you set for NGinX Proxy Manager. It can then proxy the traffic to the various applications you are running. So it would look like this in your docker-compose:
ports:
- 8082:80
Then on your firewall, you'd forward port 80 traffic to 8082.
@@AwesomeOpenSource Yes! That did the trick! Thank you so much! Now I can study how to configure it, I really hope I can open my server to the outside!
Hi, I am facing an issue: when I apply the access list for the nginx proxy manager, it keeps asking via the pop-up login, and I cannot get into the proxy manager. Is there any solution for it? Version 2.11.2
Why does NginX Proxy Manager work with apps in docker (in the same docker where NPM is), while for other servers I still need to put ports with the domain name? (The servers are other VMs with different IP addresses.) Can you explain that to me? What am I doing wrong?
This is a good question, but certainly not a simple one. You can use NGinX Proxy Manager in the same host and other docker apps in several ways.
1. You can use it with the host IP and the port you assign / expose in the docker container (left side of the colon).
2. You can use it with the container name and the internal (non-exposed) port (right side of the colon, or you could even leave the port out of the "docker run" or "docker-compose" commands in this case; the only caveat is that the NPM container, and the app container must be running on the same docker network, and it cannot be the default network. You have to create the docker network you want to use for this.
Yes, with NPM running on one server, and the apps running on a separate server, or in a separate VM you have to enter the IP / domain of the server and the external (exposed) port number (left side of the colon) of the container. That's the only way NPM can proxy the traffic to a separate host machine.
I hope that helps.
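The two in-docker options above can be illustrated with a hypothetical app published as 8080:80 (all names here are invented for the example, not from the video):

```yaml
# docker-compose sketch: one app, two ways NPM can reach it
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports: ["80:80", "443:443", "81:81"]
    networks: [proxynet]
  wiki:                      # hypothetical app container
    image: nginx:alpine
    ports:
      - "8080:80"            # host 8080 -> container 80
    networks: [proxynet]
networks:
  proxynet: {}               # user-defined network, not the default bridge
```

Option 1: in the NPM proxy host, forward to the docker host's IP on port 8080 (the left side of the colon). Option 2: forward to the hostname `wiki` on port 80 (the right side); this works only because both services share the user-defined `proxynet` network. For an app on a different VM, only the option-1 style (that machine's IP plus its exposed port) applies.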
@@AwesomeOpenSource OMG … thanks… I struggled 10 hours because I used the external port in the internal forwarding of the nginx-proxy-manager entry. I was just about to install anew, which would have resulted in the same problem.
It would be more secure (and easier to use in production) to integrate/build authelia instead of activating access lists.
Indeed, but Authelia is a whole other level of work IMO... so for beginners, it's easier to take small steps, understand, then take the next step up, and so on. At least, that's how I learn, so that's how I share.
Great work! Just getting into running Docker and all of this, and really enjoying it. I am coming across a weird error on my instance though, in which I cannot add SSL, as it just results in an error in Nginx saying "Internal Error". I vaguely recall you doing something with the custom locations tab when you switched ports from 80 to the destination... Is that what I should be doing before generating the SSL?
I actually just had a thought, I am using Dyn-DNS to point to my IP (as we don't get static IP's here, without paying through the nose)... Could that be the issue?
It depends on the site you're running. If you're going to set a specific port for the SSL connection that isn't 443, then you want to add an entry for that port on the 2nd tab as well. These days I usually just enter the port I mapped for port 80 (for instance, for 8081:80 I would put in 8081), and then enable SSL to do its thing at the container level.
I have a newer follow-up video on using NPM with Docker and Docker-Compose at this link, where I go into a bit more detail and provide some better practices for using NGinX Proxy Manager (NPM) with Docker. th-cam.com/video/cjJVmAI1Do4/w-d-xo.html
So, quick question: wouldn't the best security be to not set up the entry for the nginx admin port 81 altogether? If you don't set up the entry, it would not be publicly accessible at all; you would only be able to access it on localhost or the server IP:81 when you are on the internal network. That is essentially what you did in the end with the ACL.
And what would be the use case for changing the settings for nginx from outside your network? I assume it's not something you need to mess around with every day, especially not from outside your network.
Just want to check my thought process here. Am I thinking about this right, or am I missing something entirely?
P.S. I haven't tried this yet, but I'm quite intrigued, and thanks again for the tutorial.
If you have this running inside a LAN, you should absolutely not expose it outside the LAN. But if you run this on a VPS like Digital Ocean, or SSD Nodes, etc., where there is really no "LAN" so to speak, you could simply not enable the exposed port (and when you need it, SSH into the server and re-run the compose file exposing it long enough to make changes, then re-run the compose command with the line commented out again). That's the safest way. But if you do want to expose it, then on a VPS definitely run a firewall in front of this (and by that I mean a separate firewall application, as Docker likes to make firewall rules in iptables and expose the ports for you, and they are placed above all other rules). So there's a lot to this, but if you want to expose it to manage a remote server, this is just one way of doing it. Hope that helps.
@@AwesomeOpenSource I think you should be able to run a tunnel over ssh so you could connect to the port when the tunnel is open but without exposing it to the Internet (I'm just learning about that capability of ssh!)
Something just went down as soon as I added the access control rule; I can't get past the nginx login screen. The login button doesn't work. Anyone have an idea what might be wrong? Would appreciate any help on this. Thanks
Did you set an IP as a rule as well? If so, make sure you are connecting from that IP and that NPM recognizes the IP properly.
@@AwesomeOpenSource It's fixed now, thanks. 🤝
I checked; NPM works without mariadb also. Then what is the point of using it?
Not sure if you have to use MariaDB, but between restarts of your docker container, you need to make sure it doesn't lose your data. Also, MariaDB is more performant than SQLite after you get enough entries.
What erp solution you have setup there ... I see a sub domain setup
If you'll give me a time stamp, I'll see if i can tell you.
Can someone tell me the difference between using SQLite or MariaDB for nginx proxy manager? I do not see any difference in performance, even at 28 websites.
No, probably not. I would think you would need several hundred sites before you saw a performance hit.
Hi there, thanks for your efforts. I have an issue passing HTTPS to a Magento 2.4 that has its own Let's Encrypt cert. I couldn't make it work at all. Do you have an idea how to do it?
Haven't tried to run Magento, but you might just replace the certs in NPM with the ones you got for Magento, and assign them to that proxy host. If you look in NGinX Proxy Manager (NPM) you'll see a Certificates tab where you can add your own certificates if you prefer.
I unfortunately don't know why, but access lists don't seem to work how they should. If I use my public IP(it's static) like you did, it doesn't work.
Did you check the logs for NGinX Proxy Manager? You can do 'docker logs nginx-proxy-manager' or whatever your container name is.
@@AwesomeOpenSource Thanks for replying!
I kinda figured out what's causing this issue. I have Cloudflare set up with NPM and, by proxying the dns records on Cloudflare AND using access lists on NPM, it doesn't work.
I verified this by trying 2 things:
- I unproxied the DNS record and kept the access list, which worked;
- I tried adding in the access list the anycast IPs of Cloudflare, this also worked.
Of course, the latter doesn't make sense to have, so now I'm debating whether to:
- unproxy the dns records on Cloudflare and use access lists to restrict access to internal services from outside OR
- figure out a way to use local DNS records(with Pi-hole) so that I don't have to add DNS records on Cloudflare and don't have to bother with access lists.
Unfortunately I'm having trouble with the latter choice, I can't seem to make it work the way I'd like to. What I'm trying to achieve here is to be able to proxy the internal services(like pihole, portainer, even npm like you did in the video) but only access them from inside.
However, I could solve this issue by doing what you also did in this video, but it kinda scares me to think that I'm exposing my public IP address(by not proxying the DNS records through Cloudflare).
Are you still using this setup like in the video? if so, aren't you scared of possibly exposing your public IP address?
Also, what do you think about these 2 options? would you recommend me one of these 2 or even another option that I don't know?
@24:41 - how would it auto-renew cert if you enable access control?
It can’t, unless you set up a DNS challenge instead.
Great tutorial.
But I have one concern: I can still access my server's NGinX Proxy Manager at port 81 directly, using the IP address of the VPS.
How can I disable access to the server by IP address, so it can only be opened through the domain?
Depends on your VPS provider, but with Digital Ocean I have a firewall option I can apply to my VPSes, so I block port 81.
@@AwesomeOpenSource presumably using the likes of ufw you could just block it there?
Did you found a way to do this? If I block port 81 with ufw it doesn't change anything :(
@@V1nc3nt00 yes - as you saw, ufw doesn't work by default, as it turns out Docker sets its own firewall rules that bypass it. Therefore what I did was change NPM to only bind ports 80 and 443 (in Portainer it was set to 80-81 and 443, so I just changed it to 80 and 443). You still direct your NPM virtual host to localhost:81, as the NPM container still exposes that port internally; it just means it isn't available externally anymore. I hope that makes sense, just reply if not!
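A minimal sketch of the port-binding change described above, assuming a docker-compose style deployment (the image tag and service name are just common defaults, not necessarily what your stack uses):

```yaml
# Sketch only: publish just 80 and 443 to the host. The admin UI still
# listens on 81 inside the container, so an NPM proxy host can forward
# to it internally, but 81 is no longer reachable from outside the host.
version: "3"
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"     # HTTP
      - "443:443"   # HTTPS
      # intentionally no "81:81" mapping here
```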
@@alanjrobertson Thank you, I got it now running :)
Love it
Thank you!
Doesn't blocking public access to a site kill your Lets Encrypt auto renewal?
It sure will. You'll have to be mindful to take down that wall and manually request new certs, then put it back up.
Great tutorial - but I have a question. I have set up different proxy hosts and it is working great. I also have a RPi running Pi-hole, and I created, let's say, "pihole.example.com" in NPM. Safari (my default web browser) on Mac OS X redirects all HTTP addresses to HTTPS, and that creates an issue with my Pi-hole, which is an HTTP-only address. It works fine in Chrome. That means I would need to make an SSL certificate for the Pi-hole to get it to work in Safari. I am not interested in connecting to the Pi-hole from outside my network. Of course I could use the local IP address if I wanted to connect to the Pi-hole, but it is harder to remember. I have set up Custom Locations so it forwards to "/admin", and that works fine in Chrome. Any ideas on how to do the SSL setup for the Pi-hole? Because I only get error 502 Bad Gateway.
So, for Pi-hole, because it’s for DNS I just use it with the IP. I didn’t mess with trying to set up a FQDN since I won’t be accessing it off my network anyway. I would imagine the SSL part could work, but you’d want to set up a DNS challenge for your domain with a wildcard, so you can apply the certs without having to have external access.
@@AwesomeOpenSource Well I wouldn't mind the SSL if Safari would work with http. I use NPM most locally, but you mean the problem is when it comes to SSL is the DNS itself? It feels like more complex fixing this than learning the IP of the DNS :-)
Thank you
You're welcome
I can not seem to find the NPM nginx.config file in Unraid.
It may not exist anymore. Since this video was made, the NPM project has removed the need for the config file and for the mysql / mariadb.
@@AwesomeOpenSource I was able to get it sorted. Thank you for this awesome video.
I'm guessing your certificate won't renew this way right? (due to the access list restriction?)
Yep, that's the only thing you have to be aware of. Set up a longer-lived custom cert like the 15-year cert from CloudFlare, or be prepared to turn off the ACL long enough to renew the certs, then put the ACL back on. Alternatively, you could set up DNS-only challenges for LetsEncrypt.
is there a way to set it where only port 443 is open without port 80?
No, as LetsEncrypt will first try to get to the site via Port 80 before giving out the certificates for SSL. I mean you could only allow traffic on 443, but LetsEncrypt wouldn't be able to issue CA Certs for SSL.
@@AwesomeOpenSource I did manage to issue the certificate for nginx duckdns subdomain, can I shut off the port 80 after the cert is initiated and then open it when renewal is due?
@@photozen8398 yep. Just don’t forget. LetsEncrypt will tell you when it’s expiring though so that’s nice.
Interesting, but will cert renewal work after applying these changes?
That's the same question that came to my mind. I'm not sure whether the renewal flow does the same checks as the creation flow.
Thank you for this video. But what about if I have multiple domains running on virtual machines in my home network? Is there a way I can route NGinX Proxy to other VMs or sandboxes within my internal network? Some of my VMs are not always on, because they are on my laptop. Currently my home router will only allow me to NAT one address externally to the internet. My goal is to utilize multiple domains with one secure IP. Thanks.
For each domain, you need to create an A record and point it to your home public IP. Then handle each domain / sub-domain request with NGinX Proxy Manager and point those to the proper VM or machine inside your network. I have several domains pointing to my one public IP, then NPM is running on one server inside my network, and it handles all of my traffic routing for those requests.
@@AwesomeOpenSource Is that one server a dedicated static ip or is it utilizing your internal router IP? I think the issue is I should allocate all routing to external and firewall with docker network to bridge internal networking. I can see the nginxproxy hitting one of my apache servers but still getting a “504 Gateway timeout”. Perhaps the virtual-machine networking is also affecting the route. thanks again!!
@@tom98vr4 Yeah, my server is inside my network, so it goes like this.
Internet -> my public IP -> Home Router -> port 443 and 80 to my Server (statically set IP now) -> NGinX Proxy Manager on that server -> any other server I want to serve back out.
@@AwesomeOpenSource I am not sure what I am doing wrong, but I am also getting "504 Gateway Timeout" when using a proxy host to forward port 81.
My structure is like this:
Internet -> VPS -> NGinx Proxy manager + Docker
I am accessing it through internet only.
Any pointers?
Hi thanks for the video. I was wondering how the SSL certificate can be renewed automatically in this case of let's encrypt?
There is a program installed along with NPM that runs a cron job essentially and will attempt to update the certs automatically every 75 or 80 days I believe, as the LetsEncrypt certs expire after 90 days.
@@AwesomeOpenSource Thanks for your fast reply. By any chance could you do a video tutorial showing the whole process of it?
@@ninja2807 Maybe. I'd have to think on how to show that it does get new certificates. Not sure how I would time that exactly.
@@AwesomeOpenSource I understand, but perhaps just the process of the implementation and the script for it. Thanks.
@@AwesomeOpenSource Do the Let's Encrypt certificates renew properly if the Access List is set to home only? Or every 90 days do we have to make it publicly accessible so Let's Encrypt can renew the cert? Sorry I'm still new to all of this. Thanks for all your hard work on these videos!!!
Thanks for the video. It was very comprehensive. For whatever reason though I cannot get Access List based on IP to work. it seems to work for a moment but I think that is caching. I am using my public IP. I have a Unifi UDM--- not sure if that is causing an issue. Any ideas?
Hmmmm.... I don't know if it would be caching anything, but what specifically is happening?
@@AwesomeOpenSource 403 error. I work in IT so you would think that I would be better at giving details. I wish they include an option for 2FA. I would prefer that.
@@johnmroz315 403 when you visit from the allowed IP? Is it on the same network, or from an outside network?
@@AwesomeOpenSource yes, allowed IP. I have tried both the local subnet and the public IP since I have seen conflicting info on this. Thanks.
@@AwesomeOpenSource 403 from the allowed network. I have tried adding the local subnet and the public IP. Saving the configurations. Rebooting etc.
Just a side note: do not include a $ character in the MySQL user password (and likewise no $ character in the MySQL root password), otherwise you will get:
Invalid interpolation format for "environment" option in service "db": "MYSQL_PASSWORD=..."
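For anyone hitting this, Compose treats `$` as the start of a variable reference, so a literal `$` in a value has to be doubled. A sketch (the service and variable names follow the usual NPM compose example, and the passwords are placeholders):

```yaml
services:
  db:
    image: mariadb:latest
    environment:
      # "$$" yields a literal "$" after Compose interpolation; a bare
      # "$" here triggers the "Invalid interpolation format" error instead
      - MYSQL_ROOT_PASSWORD=r00t$$pass   # stored as r00t$pass
      - MYSQL_PASSWORD=npm$$pass         # stored as npm$pass
```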
Great tip!
Would be great if you could check out Nebula - an open source overlay network that is like a vpn but it's not.
Yah, I've been looking at it. It's interesting. I'll see what I can put together on it.
Do we have to port forward NGinX Proxy Manager to make it work? I've been struggling trying to redirect a web server that's on another machine with it, and I saw a couple of places saying that it needs to be port forwarded.
So, you forward the ports from outside your network to the server running NGinX Proxy Manager, and that should only be port 80 and 443. Inside your network, you shouldn't have to port forward unless you have the firewall up on each individual machine. In that case you need to open the port(s) on the individual machine that allow the application to work.
@@AwesomeOpenSource Ah ok! So I should follow your video for securing the interface before port forwarding the ports directly, right?
@@novaleary4488 Probably a good idea, but if you don't forward port 81, then the port the interface is running on isn't open to the internet and should only be able to be accessed from inside your network. If you plan to access it from outside the network, then yes, you should definitely secure it.
@@AwesomeOpenSource Ah ok. So, basically in order for nginx to redirect a request to the server from outside the network, i have to forward ports 80 and 443 to the nginx hosting machine, right?
@@novaleary4488 correct.
Hi, is there a way to add "noindex, nofollow, nosnippet, noarchive ..." so that search engines don't index the content of subdomains?
Great questions, but I really don't know. I'm sure there are, but you'd have to ask over at the NGinX Proxy Manager github.
could you do a video on apache2 with docker container & nginx manager? How about an email server? Ive got no problem just using good old nginx itself but would rather use the manager
Let me see what I can do, but Apache and NginX kind of do the same things, so not sure it's useful to do that. Email, is a different beast, as NPM is really meant for forwarding traffic to web sites, vs. mail traffic as far as I know. but let me see what I can find.
Do wonder if you know how to setup the access list to work with a cloudflare setup...
can't seem to get it to work for me
Sorry, no, I haven't messed with Cloudflare yet.
Stole this from another video's comments: Edit your proxy host. In the advanced tab add "real_ip_header CF-Connecting-IP;"
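For reference, that directive goes in the proxy host's Advanced tab. For NGinX to actually trust the header, you generally also need `set_real_ip_from` lines covering Cloudflare's IP ranges; the two below are only examples, so check Cloudflare's published list for the full set:

```nginx
# Sketch for the NPM proxy host "Advanced" tab: restore the visitor's
# real IP from Cloudflare so access lists match the client, not Cloudflare.
set_real_ip_from 173.245.48.0/20;   # example Cloudflare range
set_real_ip_from 103.21.244.0/22;   # example Cloudflare range
real_ip_header CF-Connecting-IP;
```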
Hi, public ip adress must be static?
No, you could use something like Dynamic IP updating through Cloudflare, or DuckDNS, or if your registrar has dynamic IP updates, look around for a docker container you can use for that. I have a video on Duck DNS out there already. Maybe that would get you started.
I've installed via docker and everything is working, except I cannot access the admin portal externally; it works just fine internally.
I have opened ports 80 and 443 and pointed them to the docker server.
I have created an A record pointing to my external IP,
and have tested my A record with telnet, and another port pointing to another service, and that works.
Any ideas?
If you just go to your public IP what do you get?
@@AwesomeOpenSource if I go to the public IP internally I arrive at the NGINX page, and when I specify port 81 I get to the admin side; externally I get "site cannot be reached".
But when I try to access another service externally, pointing to another machine, no issues.
@@tye595 and you are sure you forwarded 80 and 443 to your internal host? And your ISP doesn’t block those ports?
@@AwesomeOpenSource yeah, 100% positive. Both 80 and 443, using TCP, are pointed towards my host, and I swear I've had port 80 open in the past to run a simple Apache web server, so the ports shouldn't be blocked at an ISP level. But I'm going to give them a call..... lol, I shouldn't be struggling so much to do something this simple
ehhhh turns out ISP had blocked the ports....
but can you still access the site via the ip and port?
seems like the access list only works for the dns proxy...
Sure, if you don’t close the port on the firewall.
@@AwesomeOpenSource but then whats the point?
if you can control this via firewall... why configure it at the nginx level?
@@James-li8cm In the case where you are accessing it locally or via a VPN, then you don't need to, but if you are wanting to access the control panel from outside the network via the internet, then creating a proxy entry for the admin panel and not having port 81 open is the safer way to go.
@@AwesomeOpenSource but doesn't blocking port 81 at the firewall level do that already?
because the proxy doesn't work if it can't access port 81 because of the firewall rules?
I say this, because I am running into this exact problem...
I am having difficulty getting it to block between cloudflare's funkyness and docker ufw issues... its been a nightmare...
I thought about securing it through the proxy server, but that's moot if you can access it from the IP/port
but then blocking the port would work... but eliminate the need to lock it down at the proxy server level...
@@James-li8cm it just depends on where you are proxying from and to, but what you are accomplishing is giving access through 443 to the admin interface, which is proxied to the internal 81. But you’ve closed off outside access to 81 directly.
Do you think I can use this software in kubernetes? Like deploy it in a pod, point a domain to the server external ip and setup forwarding through NPM like i would with my regular docker containers?
I'm not familiar enough with Kubernetes to know if that would work. But if it's just managing docker under the covers, then should work in theory. Sorry...maybe someone else has done it.
@@AwesomeOpenSource Hey, thanks for your quick reply! I'm pretty much a noob with k8s too, but I'll see if I can get it going. Anyway thanks for the work you put in your videos and for the community!
@@unixbashscript9586 my pleasure. Let me know how it goes.
Same as your manage.routemehome address I created a dns for the NPM itself with LE. All great. The NPM-login page looks good but when I try to login it only reloads the login page. It works when I use the ip address instead of the newly created DNS. Any ideas what could have happened? I've tried different web browsers - same result. Only happens on the proxy itself.
Is there any message at all on the login screen? If no, then I'd say check the logs for the container and see what it says when you try to login that way.
@@AwesomeOpenSource No message at all. The page just reloads. Checked the Portainer logs of nginx_app_1 and found this.
[3/20/2021] [3:03:47 PM] [Express ] › ⚠ warning Existing token contained invalid user data
Tested to change the password but no difference. I also edited the proxy host from https to http - same result and same warning in the logs.
@@fredrictirheden9282 That's definitely a new one. You might check with the devs at the NGinX_Proxy_Manager github page.
@@fredrictirheden9282 I am having exactly the same issue. Did you find a solution or is there another thread that I can follow?
@@jacobm007 I don't really know how or remember what I did to make it work. But it is working now. I'm afraid I can't help you with this issue.
Hi. Does NPM support DNS challenge mode for SSL certs? Much easier than default mode
Not that I’m aware of, but it may. NPM itself, I don’t believe boasts that feature, but NGinX may support it outside of the NPM UI.
If I do it just like you, and my public IP is not static, when it changes will I be locked out? Or can I still access it locally?
You can still access it locally, I believe. But you can use something like DuckDNS (lots of tutorials on it) to make sure your non-static IP doesn't have that effect on you.
Where do you set the DH bit value for the SSL?
I don't. The SSL in this case is from LetsEncrypt, so it is just whatever they issue as their certificates.
@@AwesomeOpenSource When you setup NGINX with SSL, you would generate a DH bit. So this is not possible for NPM then. I would hope NPM would put an option for this.
Nice video, thanks! Everything works fine, and I can reach NGinX with my URL and SSL. BUT if I start playing with access lists it no longer works! I checked my public IP and inserted it under Access as you showed in your video, but I always get 403 Forbidden. I also tried with a user; there I have to insert the registered user email and password? The additional login screen appeared and I already thought it was working, but after login the normal login appeared. Then I tried to log in, but nothing happened; it always shows the NGinX login, and after trying to log in the login and password fields are empty and nothing happens. Any idea?
No, but the ACL is really a pain. A great option, but seriously painful. If you're trying to access NginX Proxy manager login from outside though, I'd say keep that an internal network access only thing, just so you don't have to mess with it, or only allow access through a VPN from the outside. Keep port 81 closed for sure.
With your setup, you can't do ddns, correct?
Dynamic DNS should work, as the IP change would happen before anything reaches the proxy manager, so you would have your DDNS name pointing to whatever your current IP is.
Hi, I'm using Bitwarden with NGinX Proxy Manager. When I use a username and password I have an issue. I try to connect from outside and enter my username and password; after that I can see the Bitwarden screen and enter my Bitwarden user information, but after that NGinX asks for the username and password again. I enter them again, but they're not accepted. What could the issue be?
Are you using the Access Control Lists in NGinX Proxy Manager?
@@AwesomeOpenSource yes
@@okanerdem Sometimes, that ACL can get triggered over and over if the software is redirecting somehow...it may just need a special rule added under the advanced tab, but you'll have to check their pages for that info.
@@AwesomeOpenSource Actually if you could share a video about that it can be great :)
can't get this to work for my Raspberry Pi.. after installing it shows bad connection at port 81
login:1 Unchecked runtime.lastError: The message port closed before a response was received.
what base OS are you using on rpi? Is the firewall on by any chance?
Sometimes people have had to remove the container and images, and just try again, and then it all works, no idea why. Haven't tried it on a pi myself.
@@AwesomeOpenSource I'm using "Ubuntu 20.04.1 LTS, I've tried reinstalling the containers, stack and images, turning off my firewall already I'm getting this error [error] 241#241: *30 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: nginxproxymanager, request: "GET /api/ HTTP/1.1", upstream: "127.0.0.1:3000/", host: "127.0.0.1:81"
@@maxlimgj Can you ensure nothing else is using port 80 and port 443, and that you did set up the NPM ports as host 80 and 443 mapped to container 80 and 443?
@@AwesomeOpenSource Yes, it's working now, thank you. Turns out I needed to do port forwarding ^.^
@@maxlimgj very happy you got it figured out.
Is there a way to have multiple ports open for the same domain?
Yes, I think you can add more ports in the second tab.
@@AwesomeOpenSource Thanks for your reply. Can you please share a guide as to how to do it?
@@advaitghaisas2832 I’ll see what I can come up with.
@@AwesomeOpenSource Thank you. Because there is an option to forward another port for a specific path. But I could not figure out a way to just forward another port for the same IP.
@@advaitghaisas2832 Yah, it shows a path there, but it can also take a URL, and you then just type the same ip with a different port number.
Excellent tut - please get yourself a decent mic.
This is a very old tutorial, but I appreciate the feedback. I do have a better mic, as well as some post audio steps I do now.
Hi, is it possible to use a free domain? And if yes, which?
You could use something like DuckDNS and setup a domain through them to point to your IP.
Thanks for sharing. I need to add a wildcard domain to proxy all traffic, e.g. *.example.com to 192.168.1.56. How can I configure this in NGinX Proxy Manager?
So, if you're asking how to do it from the outside to the inside of your network, you want to set the wildcard "*" for your domain "example.com" as an A record, and you want to point that to your public IP address.
Next, you need to port-forward ports 80 and 443 in your router / firewall to your machine's private IP of 192.168.1.56
Now, when you visit "anything.example.com" it should route to your home public IP, and pass through your router firewall to your NGinX Proxy Manager server. NPM will then check its records to see if "anything.example.com" is a known server, and if so, will forward the traffic to the appropriate server inside your network.
Hope that helps,
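As a sketch, the DNS side of the above is a single wildcard A record; in zone-file notation (the domain and IP below are placeholders):

```
; wildcard record pointing every subdomain at the home public IP
*.example.com.   300   IN   A   203.0.113.10
```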
@@AwesomeOpenSource thanks for the response. I checked, and I don't see an option to put a wildcard server name in NPM, because I want to proxy all traffic for the wildcard domain to another destination, and the people working at the destination can configure whatever they like on it - another level of proxy, I think. Btw, thanks; if you find something interesting please share it.
Many thanks again!!!
@@andrewnhien9714 The option for * routing in NPM may not work. I suppose you could try just entering the url as *.example.com in the Details tab, and then forwarding that to the IP at the destination, then test it. I just don't know that it will work.
@@AwesomeOpenSource I tried putting it in, but it seems like NPM doesn't understand it. Thanks.
Btw, I see NPM can't group the proxied domains. I have over 100 domains, and if I put them in NPM it's really hard to manage. Do you have any option for a proxy management tool (I mean another tool)? And if I have to do that, is there any way to convert the options from the original nginx config files to NPM?
19:43 well I must be doing something wrong because I get 502 Bad Gateway in my case! :(
Ok, do you have NGinX Proxy Manager running in general? Is it 502 to NPM any way you access it, or just through the proxy you are creating?
You answer, but you're not always available for answering. When will you do a Pixelfed deployment?
Haven't planned on any federated videos yet, Mastodon, Pixelfed, etc. I have been reading up on them, but want to understand the update process better as that's where I understand running federated systems to need more patience and knowledge. It's in the future, but not sure when yet.
And, yeah, I get busy sometimes and can't answer right away, or I see a question where I need to think about the answer first, or go do some research, so I don't answer right away. And of course time differences and schedules can have a delaying effect. I'm Central Time US, and work a regular job during the week, and my family has all kinds of little projects on the weekend. Time, time, time....if only I could master time.
@@AwesomeOpenSource sir please help with Rocket.Chat. After changing the file upload system to the “file system” method, when uploading an MP3 it plays well, but the MP3 player's slider and the duration (lapse) are missing, and I don't know how to fix this, sir. I gave the uploads folder permission 775, and also tried 777, but it still isn't showing the player's slider and time duration.
@@solidsnake5239 Not sure why it would do that, but I haven't messed with filesystem storage in years. Are you planning to have a company of users and their attachments? If not, then I would stick with GridFS personally.
@@AwesomeOpenSource The reason I chose file system over GridFS is I figured it would load the uploads fast, because it's uploading and downloading from the same server where Rocket.Chat is deployed. I do see some speed gain in that case, but true, the MP3 slider is no longer visible, and the time duration (for example, how long the MP3 has been playing) isn't visible either. It all works fine on GridFS except for speed; the deployment is getting slower and stuck on GridFS, I guess. I'm running on a Linode with 4GB RAM, 2 CPUs, 80GB.
I only see this: "Apache is functioning normally"
I guess I'm not quite understanding the context of your comment.
I tried to secure my NPM, but I f..... it up
In What way?
@@AwesomeOpenSource
I took the local IP address of NPM and added the FQDN, then pointed it to the IP address of the NGinX proxy, and when I saved that I lost access to the NPM web page.
I like your videos; they are the best out there, keep it up 👍. I wish you could connect to my system and help me out that way so I can learn more from you.
Thanks
And one more question: what are the hardware requirements for NPM to work fine?
@@sicanu1981 if you can find me on Telegram @MickInTx then maybe I can help you out.
@@sicanu1981 hard to nail down. On its own, not much, but as you add more to it, the more resources it uses. Sorry I can’t provide a concrete answer.
You should never use CNAME in production.
I'm interested...why not?
Why’s that?
"kind of minimal instructions"
But the quick setup that the narrator likes so much shows the same information at GitHub as it does under setup that he first showed. I don't understand what he's thinking.
In his previous video, he says one shouldn't be setting up docker as root, but then that's what he's doing the entire time.
How confusing!
Why aren't his actions consistent with his words?
A teacher?
He'll have the most confused students. Goodness.
Not knowing which video you watched last, it's hard to know if you've watched them in order. I have learned over time not to run things as root in docker from my viewers comments as I've grown in my open source journey. I try to avoid it whenever possible.
*laughs in not port forwarding port 81
Yep, since I made this I've learned a ton about which ones to forward and how to do things properly.
@@AwesomeOpenSource I see, my comment was a joke tho so it's no big deal
He's so funny. It says satisfy any, but he says accept any. Why?
He's sloppy.
Bad vision. As long as you know what I meant, I'm happy.