I spent ages looking at tutorials for this, following them 1:1 but somehow always failing, yet this tutorial nailed it for me. For some reason, none of the other videos I watched besides this one mentioned that you can forward your Portainer's IP in nginx, and that's what got me
A little late... Like more than 2 years but hey... Some things have changed but the most important thing is that I managed to do what you did, thanks to your video! A big thank-you ! 💯💯💯
@@AwesomeOpenSource WOW! Thank you for your reply. Your work motivates me to the highest degree! I just realized that I can't have a fixed IP address on my private network. Actually I can, but it costs me 26€ per month :( I just started a new challenge: seeing how to implement IPv6 on Docker :)
After going through several videos here on youtube I found yours, which cleared up one of my main doubts. Congratulations on the didactics presented in the video
Thank you! You are a life saver, everything on point, clear and concise. From my point of view the only snag was that I couldn't use an LXC container in Proxmox, so I had to do it in a VM
Thank you! I'm glad you enjoy them, and please give them a try and let me know how it goes. Also consider joining my community at discuss.opensourceisawesome.com
This was really helpful, managed to get it working using Digital Ocean but not on my own proxmox / ubuntu / docker container instance. Forwarded the ports correctly and updated the iptables of proxmox to forward port 80 to the IP address of the VM and the port of Nginx, but no dice.
Bingo - I found just what I was looking for here - thanks so much for an excellent video - I am working through it and hope to get my desired result. Your explanation is very clear - sets you apart from the rest!!!
Great video! A very good description of how to set up Nginx with Docker - probably the best one I've seen. One question... I followed along and set this up with my home lab, BUT I found that when I reboot Docker - or I lose power - my credentials in Nginx are lost along with all the reverse proxy settings I created. It works fine if I just shut down the Nginx container and restart that or other containers - it's only a problem when the entire Docker host is rebooted. Any idea why the reverse proxy settings and Nginx credentials are getting reset? All other things are persistent - like WordPress sites and databases, etc.
Hmmmm. Sounds like there is a missing volume mapping in NGinX Proxy manager. Make sure you've set them all. Since I made this video the quick start for NPM has changed a bit, so definitely look at his example on the main NPM page.
This is a fairly old video, and at the time the folders method of setting up sites wasn't well supported. These days, you can set that more easily using the Custom Locations tab in the UI. Also, you no longer need the separate config file, just the docker-compose on his quick-install page.
You are absolutely correct Peter. That's where something like DuckDNS comes in. I have a video on how to use it to help with dynamically assigned IP addresses from your ISP. th-cam.com/video/Dm5MyuUdq2s/w-d-xo.html I hope it helps if you're having trouble with dynamic IPs.
@@AwesomeOpenSource An update on that: I created another bridge and added the VMs to it (I kept the default one for the moment too). After the nginx container restarts it can find the other containers by name only. Then I can simply remove the published ports from the containers behind nginx.
Thank you for your videos, well made and very easy to follow, and stuff works. I followed this video and have it all working, but since I already had Cloudflare set up pointing to my IP, once I take down the server I currently have running (which is only hosting a website) and replace it with this stack, the connection breaks and I can no longer access my domain. I am very new at this but would like to figure it out so that I can self-host several things through the proxy manager. The question: are there any resources you can point me to that may help with undoing what I have and going with a new setup, with proxy manager being the entry point? Thank you
Thanks, this tutorial helped me get things up and running. One question though, I'm also running portainer here. Is it possible to enable ssl for the portainer install using the cert we created in nginx?
After learning all of this by sweat and tears, I then happen upon this video haha. I have spent 3 months becoming fluent with docker and docker compose. Last night I went down a rabbit hole getting my reverse proxy for some of my internal services to face outward. I found the same container you did, it's great. I am having some hang ups with SSL cert generation with Let's Encrypt through my google domain. It is shown in my certs in google, and applied in the proxy manager. Yet browsing to my test site led to an insecure connection. Should the SSL cert be a wildcard and then just have an A record for every subdomain I have? I loved the video. It's holistic and it really sounds like you know the whole stack.
Thank you, and just like you, I've learned, and continue to learn through determination, and perspiration. The SSL cert should get generated by NGinX Proxy Manager (requested) for your Host entry, and stored in the NPM container. It sounds like you've generated certs in the past, and are trying to use those from Google. Maybe I'm misunderstanding, but the DNS entry should just point to your IP, and NPM should proxy the traffic to your container. As long as LetsEncrypt can reach your site on 80, it should issue a valid cert. If I'm misunderstanding let me know, and I'll try again.
Hi, I have a new problem now. I am able to connect to my Nextcloud from WAN but not on LAN with the same account. What could be the problem? Should I use split DNS? Any help appreciated
Yes, you may need split DNS. Some routers can do what I call "hairpinning" which means essentially it can call a URL, go out of the network to the internet, and hairpin turn around back into the network. Other routers don't have this, or it's not on by default. So, you may just need to turn it on in the router, or setup internal DNS to point to the server as well.
I’ve read a few times that people have set it up to use a wildcard cert at the domain level, but I don’t know how to do that. Since LetsEncrypt is free I just let it issue a cert for each site.
You used a VPS and started installing docker etc. right away. Is it required to secure this public server with a firewall or anything else? What is your recommendation, or do you already have a video for this? Thanks Lars
I have a few videos on Securing your Self Hosted services. VPS isn't required. I run almost all of my services at home. Firewall, whether at home, or on VPS is highly recommended as one part of the security framework you want to build up over time.
@AwesomeOpenSource Thanks for your fast answer. I just saw a lot of your videos in the last hours and I'm excited. Thanks for all your efforts. How can I find the securing videos? And what exactly is not required for a VPS, when a firewall is recommended for both? I have a Synology NAS at home and don't really like having an additional server at home, and would prefer having all or most of it on a VPS - or wouldn't you recommend this? I'm working in ICT consulting and support, so I will use it mostly for work with my clients. Thanks 🙏 Lars
You can 100% use a VPS for your services. My provider, Digital Ocean, has a nice separate little firewall that you can set on each of your VPSs. This makes it really easy to restrict traffic to just the ports you need. Generally 22, 80, 443. Maybe a few others depending on the applications, but at least you don't have every port on the planet open to your services. th-cam.com/video/UfCkwlPIozw/w-d-xo.htmlsi=8xOXDooSQNBVLr5A is one of my videos on securing things a bit, and th-cam.com/video/bpWytcz4uMw/w-d-xo.htmlsi=Tx5kfavmjI3i_FpE is one on docker and firewalls.
I love your tutorials! Thank you for taking the time to spread the knowledge. How do I get your script working in Debian? I'm running Proxmox with Debian-10 turnkey core 16.0.1 in an LXC, and I notice your script has Ubuntu repositories.
@@AwesomeOpenSource wow, I just lucked into this and just set up Debian 10 on an old HP server. Where is this script? The server has 4 600 GB drives in RAID giving me 1.2 TB, and 24 GB RAM. Hoping that will work to set this whole thing up with about 12 docker images. Replacing 8 even older HP servers running ancient Ubuntu 12.04. I really hope this works. Thoughts?
@@jamorahcito github.com/bmcgonag/docker_installs On that page look for the one with Debian in the title. You can click on it, and copy / paste the script, or you can pull the repo down and just use the one you need.
Followed your video, thank you!! I have it set up and working, but the version I have, 2.9, doesn't show access lists or users like in your video. Do I need to do something else to the docker image to add them?
These videos are awesome (new subscriber). Just started messing with docker in the last year or two and now I have all kinds of self hosted stuff running behind Nginx Proxy Manager... It really simplifies setting up certs and managing sites that need external access without opening random ports. One question I have is about the initial screen ("Congratulations, you reached NGINX"). I edited that page just to have my logo - is there any way to make that landing page https? (Not the management :81 page, I don't need that public facing.) Wasn't sure and couldn't find a simple way. Thanks and keep up the great content!
I got a little confused at 34:54. Could you please explain that a different way? It sounds like I can use one server and have it receive requests that will be forwarded to another server? Sorry, that sounds great but I did not get it 100%
You can use more than 1 server. One of the servers needs to have your proxy manager running on it. That server will receive requests for your sites. Your sites can run on other servers (both physical hardware or virtual machines), and the Proxy Manager server can send you to the right server based on the URL in the request. Hope that helps.
Will the instructions in this video tutorial work with an LXD container with an Ubuntu 22.04 image as the base? Or will I need to make some slight adjustments?
Hi, I'm a beginner so I started slowly setting up a homelab. I purchased a domain from Namecheap, added DNS records as shown in the video with my IP addresses, and installed nginx proxy manager in Windows WSL docker. In the router I added port forwarding, and also added inbound rules in Windows Defender firewall. Then I started accessing npm from another device connected to the same wifi and I'm able to access it, and I also made an entry in npm for a proxy host, which is also accessible from another device on the same wifi. But when I try to connect from the internet, not from the same network, I'm unable to connect. Any idea how to resolve it..? By the way, my ISP has not blocked ports 80 and 443.
First, just ping your domain and make sure it shows your public IP address. Next, use the traceroute tool to see if you can tell where the connection fails. It should show each hop as you try to reach the network. Depending on what you're using to connect from off the wifi, the DNS may have just not been updated and propagated to the DNS server external to your network yet.
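In shell terms, those first checks might look something like this sketch (example.com is just a placeholder for your own domain):

```shell
# Placeholder domain -- substitute your own.
DOMAIN="example.com"

echo "Checking $DOMAIN"

# 1. Does the domain respond at all? (-c 1 = one packet, -W 2 = 2s timeout)
PING_RESULT=$(ping -c 1 -W 2 "$DOMAIN" >/dev/null 2>&1 && echo "reachable" || echo "unreachable")
echo "ping: $PING_RESULT"

# 2. If ping fails, trace the route hop by hop to see where it drops off.
if command -v traceroute >/dev/null 2>&1; then
    echo "now run: traceroute $DOMAIN"
else
    echo "traceroute not installed (try: sudo apt install traceroute)"
fi
```

If ping already shows the wrong IP, it is usually a DNS propagation issue rather than a firewall one.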
@@AwesomeOpenSource Thanks Brian for your reply. Two tests I did: 1. Connected to the wifi, pinging the machine where npm is installed, I get a "Destination host not reachable" error. When I turn off the Windows firewall for private networks and run the ping command again I get a response back, but for security reasons I cannot disable it permanently. Can you let me know how to overcome this? 2. When I ping remotely, i.e. not from the same wifi, the cursor just blinks there with no response from ping, either reachable or not reachable. Since I didn't succeed at the first step, I haven't tried the traceroute option yet. Can you help me figure out how to overcome this..? If there is any video on the channel, it'll be helpful. Thanks in advance.
Awesome video! I have a doubt. I was thinking of running a custom app on port 443, to later be accessible by another server behind a firewall that can only access port 443 over https, with nginx proxy manager. But my application does not have a docker image by default. Is it possible to run any app behind NPM, or only the ones that have a docker image available, and how do I do it? Thanks.
You can point NPM to any service on your network and pass the TCP / UDP traffic through it. So you shouldn't let the docker aspect slow you down. I run several services that are not dockerized, and use NPM to pass the traffic with no issues at all.
@@AwesomeOpenSource Oh thanks for your response, but do you believe that if I run my app by setting up a new docker image, let's say an Ubuntu one, and let it run there, it will be easier for npm to listen to the service running in docker, or not? Thanks.
@@AwesomeOpenSource Thanks. I am trying to run rocketchat in Docker and it also seems like I have to open up port 9201 (per your setup). Is that normal? Also, if I want to get to the Portainer admin, same thing - 81 needs to be open. Is that correct?
You have to tell me one thing, though. How did you manage to deploy multiple WordPress containers in Portainer? I managed to set up one via the App Templates, after which I fail to set up any more WordPress containers via the App Templates. I then get the message that WordPress failed to establish a connection to the database, and in the log of the 2nd DB container I see: InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files. What is going on and how do I get around this?
I think my video is misleading. I set those up using the docker compose file for each site. Literally created 4 folders, copy pasted in the docker-compose.yml, made adjustments to the yml file as needed, and ran it. These were migrations, so I also had to migrate in exported MySQL data for each.
Awesome tutorial!!! When trying to log into the admin site for the first time I am getting a "BAD GATEWAY" error from my browser. I have tried three different browsers and got the same error. Do you know how I can fix this?
Usually this indicates that you have a mismatch between the database username or password in the two places you fill it in. Double check and make sure you don’t have extra characters or misspellings. Others have just deleted the containers and images and re-run it and then it works.
I was okay up until setting up nginx proxy manager - it was showing multiple errors after docker-compose up -d. Of course, there is a newer version out now and the setup on their page is different too... Edit: it seems you can't put a space in front of the environment block in the config file. All I did was backspace and line it up with the rest of the formatting, and it ran. Strange.
Yeah, you definitely want to go with their latest documentation when possible, and definitely with .yml code it's important to keep the spacing exactly right. Glad you got it up and running!
This worked for a lot of my docker containers except for one, Wireguard. Have you had any experience setting up a VPN service, whether wireguard or other, and getting it to work while using Nginx Proxy Manager?
I think the issue here is that wireguard is not a web-site / app, it's a VPN. You can undoubtedly forward traffic through NGinX, but I'm not sure NGinX Proxy Manager was designed to handle that kind of traffic.
Can you be a little more specific? I created a static IP at one point, more as a 'how to do it', and not so much as necessary for the context of the rest of the video. That's my bad if it confused you.
So I know this is an old video, but I'm kind of curious, as nginx proxy manager has something called streams. I'm curious if I can use that with two different domains for Minecraft servers - one domain on one port that is forwarded to a different port.
Thanks for the great video, but when the script came to the part that installs mariadb, I got a 'not found: manifest unknown' error. Could this be because that version of Mariadb is no longer valid and was pulled from the repo? Is there a different way to get Mariadb for ubuntu to run with docker and portainer? Is it possible to modify the script to reflect a newer version of Mariadb?
@@funkygetaways I finally got it to work. I went back to the Nginx Proxy Manager setup website and saw that there is a new docker-compose where mariadb is set to download the latest. After I got that in place and ran the docker-compose, I was able to log in and get it set up.
I was finally successful in making this work but it does not auto-start when I reboot the server. Should it be auto-starting already or are there additional steps to achieve that?
For me it always restarts when I reboot, but you may need to check the value in the docker-compose file for "restart" and make sure it's set to "always".
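For reference, a minimal sketch of where that setting lives (image tag and volume paths follow the NPM quick start at the time, so check the current docs):

```shell
# Sketch of the relevant part of a docker-compose.yml for NPM.
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always          # restart the container automatically after a reboot
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
EOF

# Confirm the restart policy is present
grep 'restart:' docker-compose.yml
```

After editing, run `docker-compose up -d` again so the new policy takes effect.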
Hi, I've followed all the steps. The only thing that does not work for me: if I type a subdomain that doesn't exist, it gives me a generic 404 but not the congratulations page. What have I done wrong?
You need to make sure to setup an A Record in your DNS as a wildcard. Basically create an A Record with a "*" in it instead of the subdomain. Then all subdomains without a specific A record will go to the place the * points at. Hope that helps.
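One way to sanity-check a wildcard record from a terminal (example.com stands in for your domain, and the subdomain name is deliberately made up):

```shell
DOMAIN="example.com"
SUB="some-random-subdomain-12345"

# With a wildcard A record, even a subdomain you never created should
# resolve to the same IP the "*" record points at.
if command -v dig >/dev/null 2>&1; then
    dig +short "$SUB.$DOMAIN" || true
else
    echo "dig not installed; try: nslookup $SUB.$DOMAIN"
fi
echo "checked $SUB.$DOMAIN"
```

If the made-up subdomain resolves to your public IP, the wildcard is working and any unmatched name will hit NPM's default page.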
Hi again. Everything was working fine (running docker on Windows 10) until I restarted. The container's IP changed. I made changes to the IPs in the nginx proxies but it does not work. I can only reach it through the host's IP on port 81. I deleted everything and reinstalled everything, but still did not get it to work again as it was.
Ahhh, yes. I have some newer videos where I talk about docker networking and why it's useful. If you set it up like me in this video, you'll see that I just use the default network, and the issue is that the IP can change after a reboot (though it doesn't always happen). If, however, you create a specific docker network and put everything on that network, then the IP will always be the same thereafter. You also get the benefit of using the container name in place of the IP if you want, when the containers that need to talk are on the same docker network (other than the default network).
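A rough sketch of what that could look like in compose form ("proxy_net" and "myapp" are made-up names for illustration, not anything from the video):

```shell
# Everything on one named network; containers can then reach each other
# by service name (e.g. forward to "myapp" in NPM instead of an IP).
cat > docker-compose.yml <<'EOF'
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    networks:
      - proxy_net
  myapp:
    image: 'nginx:alpine'   # stand-in for any app you want proxied
    networks:
      - proxy_net

networks:
  proxy_net:
    driver: bridge
EOF

grep -n 'proxy_net' docker-compose.yml
```

The same idea works without compose via `docker network create proxy_net` and `--network proxy_net` on each `docker run`.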
@@AwesomeOpenSource Thank you. I got it working again. There was nothing wrong with docker or nginx. When I restarted the pc, vmware loaded a service and was using port 443, so the certificates were not working for that reason.
I keep trying this step by step from the beginning and continue to get the Bad Gateway error others have mentioned. I tried changing the db but still no luck - I keep getting the same error when trying to log in the first time. I am running ubuntu 18.04.5 on a raspberry pi 3 B+. Any help will be greatly appreciated
I had the same issue and the cause was the db image. I changed the line "image: 'jc21/mariadb-aria:10.4'" to "image: 'yobasystems/alpine-mariadb'" and it all went back to normal. It took a while and a lot of reading to figure out, but it worked for me.
@@AwesomeOpenSource When someone connects to a service deployed in docker, and you want to know what the IP of that person was, you cannot, because services in docker only see the IP of the docker gateway, not the actual connected client. This is a problem, especially when using services like Matomo, because it cannot show you what countries/cities people are connecting to your websites from.
@@impact0r Ok, I think I follow what you are asking. I think there are ways to do it, but the easiest would be using the host network to do this: --network=host
If you'll check out the jc21 nginx proxy manager github page, he has removed the need for the config folder. that may help you. Now you just enter the environment variables right in the compose file.
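As a rough sketch of that newer quick-start shape (the DB_MYSQL_* variable names come from the NPM docs; every credential value here is a placeholder you should change):

```shell
# DB credentials now live directly in the compose file -- no config.json.
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"      # change this
      DB_MYSQL_NAME: "npm"
  db:
    image: 'jc21/mariadb-aria:latest'
    environment:
      MYSQL_ROOT_PASSWORD: "npm"    # change this
      MYSQL_DATABASE: "npm"
      MYSQL_USER: "npm"
      MYSQL_PASSWORD: "npm"         # must match DB_MYSQL_PASSWORD above
EOF
```

The "must match" comment is the important part - a mismatch between the two password values is the usual cause of the "Bad Gateway" login error mentioned elsewhere in this thread.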
@@AwesomeOpenSource Thanks for the information. Actually, a video about exporting and importing containers would be good. I have a raspberry pi and I'm using docker on it, but I want to move everything to another host. I want to move all containers (including data, config, volumes etc.) to another host, but I'm not too good at linux. Could you do a video about that?
When restricting access, creds only get asked for when accessing apps via the domain name. However, I can access the app directly with the IP address. Is this a bug?
If you're going straight through the IP and port, then you are essentially bypassing NGinX Proxy Manager, so block that port on your firewall from the outside. I think there are ways to make apps only respond if it's called with the URL as well, but not sure how to do that.
Thanks for the video. Always good stuff. I am unable to access portainer from nginx securely. The TLS certificate is generated and it's online, and I can access it from the subdomain if I add :9000 at the end of the subdomain I have pointed it to. Please help. Following the video I was successful in installing docker, docker compose, and nginx proxy manager.
In order to troubleshoot, I generally start by just creating the http in NPM first. So I only fill out the details tab, then make sure it forwards the way I expect. If it doesn’t, I’ll check logs to see if I can tell what went wrong. Once I can reach http, I then edit the entry to enable https. Also, check to ensure the browser is adding https to the URL. When it doesn’t work, do you get an error? Or just nothing loading?
Hi Brian. I followed this mostly to the T, but I'm having an issue. when giving the routing of each sub-domain (nextcloud.example.com), I used the server actual IP and then the port I assigned to the docker container. Now when I activate UFW (allowing ports 22, 80 and 443) the only site I can get to is the top domain (example.com), the others won't load now. Would this be solved by using the docker IPs? I have all this running on a RAMNODE VPS.
@@AwesomeOpenSource I have the opposite problem. I can't use the docker container gateway IP. It's completely unroutable on my LAN. Pinging the gateway IP or any container behind the gateway IP completely fails. Forwarding traffic from my nginx to the container IP addresses or the docker gateway IP fails with the same message you get when the docker container isn't running at all. I have to use the physical host IP address to forward traffic to and that works perfectly.
@@hiddenfromyourview Ok, so the ability to use the Docker gateway IP is based on running your container application on the same host and Docker install as NGinX Proxy Manager. If you are running them on different machines, then yes, you have to use the host IP instead.
@@AwesomeOpenSource Yea, they're running on the same physical host on my LAN. Though, after making my comment, I read there may be differences between docker.io and docker-ce. I believe you're using CE and I'm probably running io. I wonder if that's part of my issue. Thanks for following up sir! I appreciate it. I'll have to explore this one a bit more.
THANK YOU! This is an excellent tutorial. I have watched a couple of others but this has been the most clear so far. To make sure I have this right. I own nx9.us. Point all subdomains of nx9.us to the DO droplet. I set up the digital ocean droplet with NGinx proxy manager. I set up proxy manager to send movies.nx9.us to home ip xxx.xxx.xxx.xxx:8100 and forward port 8100 on my router to send it to internal ip 10.10.2.50:32400 (my plex server). If I wanted to do the same thing for books set nginx proxy books.nx9.us to point to xxx.xxx.xxx.xxx:8101 and port forward 8101 to internal 10.10.2.51:8080 (calibre server). And so on for different machines/servers. And if I had a docker server I would send to the ports set up for each service on the server. Is this about right or have I totally misunderstood?
So, yes, you can absolutely run it that way. But, going from your VPS to your home like that, instead of opening all of those ports on your home router, I would run NGinX Proxy Manager on one of the machines at home. Then only forward ports 80 and 443 to that machine for NGinX Proxy Manager. Then let it direct the incoming requests to the machines it needs to go to.
@@AwesomeOpenSource Thank you. I really appreciate it. (1) So just run NPM on the main machine at the house, let it forward to each service/machine, and let it mediate everything. OR (2) did you mean forward everything from NPM on the droplet to 80 & 443 at my house, and let the main machine there run NPM and send to the specific service/machine? Total newbie at networking. I just don't want to do something boneheaded and leave myself hung out to dry. I just wasn't wanting my home IP public, plus it is dynamic and the D.O. one is static.
@@RedNekNix I believe #1 is the better option, but you could run both. How often your home IP changes would drive which method you use. I have a dynamic home IP, but it rarely changes (I've been making these videos for over a year and it hasn't changed in that time that I recall). So, not too terrible if it does.
@@RedNekNix Please I need help. When I used the docker compose (version: '3', services: app: image: 'jc21/nginx-proxy-manager:latest', ports: '80:80' HTTP traffic, '81:81' dashboard port, '443:443' HTTPS traffic, etc.), the error message said ports 80 and 443 are already in use, and the nginx container didn't work.
Almost got through the install, but it failed with the error "manifest for jc21/mariadb-aria:10.4 not found: manifest unknown: manifest unknown". After a google search, I replaced image: 'jc21/mariadb-aria:10.4' with image: 'jc21/mariadb-aria:latest' and got through the install.
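If you hit the same error, a one-line sed against your compose file swaps the pinned tag (the snippet below demonstrates it on a throwaway file rather than a real docker-compose.yml):

```shell
# Demo on a throwaway snippet; point the sed at your real docker-compose.yml.
# Note: on macOS, use `sed -i ''` instead of `sed -i`.
printf "    image: 'jc21/mariadb-aria:10.4'\n" > snippet.yml

# Replace the removed 10.4 tag with :latest
sed -i "s|jc21/mariadb-aria:10.4|jc21/mariadb-aria:latest|" snippet.yml

cat snippet.yml
```

Then `docker-compose pull && docker-compose up -d` to fetch the working image.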
Can you make a tutorial on installing this in portainer? I saw that you run NPM in your portainer. I tried as well, but I got bug after bug. Maybe you have the solution?...
@@AwesomeOpenSource Yes, I have docker and docker-compose installed. But if I install NPM via the CLI and not with portainer, it says limited or something, like not enough permissions to stop the container inside portainer. So that is why I tried to deploy NPM inside portainer, but it fails every time because of the config.json file... I tried to make the config.json a bind volume, but then NPM says "internal error" and the NPM logs in portainer spit out errors like "access denied". And yes, I have set up the db image credentials and config.json credentials the same.
@@obnoxious2336 I believe the author of NPM has removed the need for the config.json file. You should definitely check out the updated instructions for running NPM. This might solve your issue.
Alright. I've worked through the tutorial to the point where you enter ip_address:81. Page not found. If I enter ip_address:80, I get the congrats nginx is installed page...
@@AwesomeOpenSource I think the problem is the docker0 ip address. I was wondering why you used that address. Typing the docker0 ip address and port directly into a browser doesn't work, but it works when I use the ip address of the OS that Portainer is running on.
Maybe. I’m not familiar with OMV enough to say. You can do a lot with Portainer as well, though its stack and compose support seems limited by the older libraries they use. But I’m sure someone can make it work if they want.
I've followed these directions and others more than once but no matter what I get "Bad gateway" when trying to log in for the first time. What am I doing wrong?
Usually this means you have a mismatch in the MySQL user or password. But a few have had good luck just wiping and re-pulling down everything. Note, the instructions on the NGinX Proxy Manager site have changed slightly, so no config file is used, but everything is put into the Docker Compose file.
@@AwesomeOpenSource I just removed the containers and the nproxy directory and started over using just the instructions on the NGinX Proxy Manager site. I even just left all of the default "npm" values in the docker-compose.yml file but still getting "bad gateway". Is there anything else I need to remove when starting over from scratch?
@@AwesomeOpenSource Docker-CE. It just suddenly started working without me doing anything else to it. I'll continue with using it for a bit and see what happens. It hasn't inspired much confidence so far. Thanks for the suggestions.
I got all the way to the Let's Encrypt part. I keep getting an error that won't let me use SSL. Not sure why, but I followed the directions exactly (I thought). I did notice that in the video the ipv4 docker address used was 172.17.0.1... however my docker address didn't look like this. Mine was 172.23... and the last 2 octets were both 3 digits. Is this where I messed up? I am using Windows 10 (workstation) with docker desktop using ubuntu as WSL. Also, I do have a synology behind my router as well... Not sure where to start... any thoughts?
Docker addresses on Windows 10 may just be assigned differently. Not sure on that, but it should work. Make sure your windows firewall isn't blocking ports 80 and 443 for LetsEncrypt as well.
@@AwesomeOpenSource Just a quick follow up. When you created the droplet, that was your server... I have a WSL ubuntu that has its own IP and a host that has its own IP. They can ping each other. When creating my A record with Google Domains I put Docks... but for the IP, would I use the ubuntu IP or the host IP (that ubuntu is on)? This I believe is my problem and I can't figure it out, and when testing it's causing overlap, so I have to restart (changing my ubuntu IP address), plus wait for the DNS records to set. So, really my main question was the initial one: for the domain DNS A record - Ubuntu IP or host (Windows) IP? Reminder: they are the SAME MACHINE... I really appreciate all these videos!!
@@tbones3141 If the Ubuntu IP is on your LAN network (its IP is on the same subnet as the rest of the machines), then you could use that IP. Otherwise, you first need to get through the Windows machine, and then forward that traffic through to Ubuntu somehow. I'm not a Windows user, so no idea how to do that if that's the case.
@@AwesomeOpenSource Thanks for your reply brother. Now I'm faced with one more problem that occurred when I tried to do it via a VPS: after success, the admin start up page shows "Bad gateway". What should I do? Thanks.
@@sovathnahim819 So, there are a few reasons I've compiled as to why this happens. 1. Make absolutely certain that your username, db name and password match for the DB in the config and the .yml file. If you have any difference, the application can't access the db, and you'll get bad gateway. 2. If you are using docker.io instead of Docker-ce, then this can also happen. Make sure to install and use Docker-ce. 3. Sometimes, just blowing it all away and re-pulling it all down seems to solve the issue. Hope one of those helps.
I just managed to make a bonehead move. I had everything working just fine but then clicked under proxy hosts, on http only and set it to https. Looks like it broke the admin page and everything (getting a 400 error now.) Been searching for a few hours and can't figure out how to undo that setting. Any help would be greatly appreciated.
So, did you set a URL to your NGinX Proxy Manager install? And then set it to https? Also, what browser are you using? Finally, have you tried IP:port instead of URL:port. You might see if you can get to :81
@@AwesomeOpenSource I was up until 4am messing with the config files and it looks like changing the scheme setting back to http fixed it. I got the subdomain ssl working again for the install. It was returning a ssl handshake error in chrome.
Your video was helpful for setting up Nginx Proxy Manager, but I am having a problem with setting up proxy hosts for other containers. Do the other containers need to be on the same network as NPM? I could not get NPM to work with the Docker0 IP as shown in your video. I was able to get it to work with 172.19.0.2:81 and install SSL with my subdomain. Portainer was showing NPM as having an IP of 172.19.0.2. I installed Snapdrop, and Portainer shows this container to be on 172.19.0.4. I tried using 172.19.0.4:8082 in the proxy host, but the subdomain can not reach Snapdrop. Snapdrop is accessible by the public ip_address:8082. Not sure what I am doing wrong.
Without knowing for sure what commands you're running for these containers it's hard to tell. You may want to try the Docker gateway IP for each container to see if you can get to it using the gateway IP and port number you set.
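If it helps, here is one quick way to look up that gateway IP from a terminal — a sketch that assumes the default "bridge" network and falls back to the common default address when Docker isn't available:

```shell
# Print the gateway IP of Docker's default "bridge" network.
GATEWAY=$(docker network inspect bridge \
  --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}' 2>/dev/null || true)
GATEWAY="${GATEWAY:-172.17.0.1}"   # typical default bridge gateway
echo "Docker bridge gateway: $GATEWAY"
```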
@@AwesomeOpenSource So I found out that if you use the DigitalOcean Marketplace Docker image, you will have issues with setting up Nginx Proxy Manager and multiple instances of Docker containers on the same server. Once I manually set up Docker on an Ubuntu server, everything worked perfectly. I appreciate all the videos and tutorials.
I have the same problem using Digital Ocean, except I installed docker manually following the instructions and installed NPM using the normal docker compose yml file without changing anything (other than passwords). I can't get it to work using the Docker0 IP and port 81 without a 504 error. It only works with the NPM container IP (from docker inspect container), but that's no good if you want to proxy other containers running on the same server.
@@AwesomeOpenSource Great tutorial, but I'm having the same problem when trying to forward to other containers on the same server: I always get a 504 gateway timeout. Any solution for this? My use case is: a Raspberry Pi running Raspbian with Home Assistant (supervised), and a Synology NAS (no VM on this model). I can get access to the Synology but not Home Assistant. I probably could use the NGINX Home Assistant addon, but I like NPM better.
There is a debate about which Docker version to use, and some say the preferable way is docker.io and not -ce (although version 2 of the container is 2.0-ce). Is there a reason you've chosen the -ce version, which derives from the Docker project itself, and not the Ubuntu one? One of the answers: >>>
Sorry, just read the rest of your comment. Interesting, but yeah, I suggest CE over IO then, as I consistently find the repo version to be several releases behind.
@@AwesomeOpenSource If you didn't need to remove any binaries, then it's OK I guess. But in 20.04 LTS you just go with apt install docker.io and that's it, without the need to add the keys and the repos and update the cache, etc. Of course, having said that, .io being several versions back justifies your decision.
Try as I may, I just can't get past a "Bad Gateway" when trying to log in to Nginx Proxy Manager. I am using a Pi 4 with 64-bit Ubuntu 20.04. Seems like it's a common error, but I can't figure out how to fix it. Any ideas?
The three reasons I've found are: 1. The user, password, or db name don't match between the config and compose files. 2. Someone changed the right-side value of a volume mapping, which is the container side. 3. Sometimes, just scrap it and start over. Usually this Bad Gateway indicates an inability to communicate with MySQL, or an inability of the application to log in to MySQL successfully. If you want to reach out on Telegram, I'm happy to try and help. I'm @MickInTX
@@AwesomeOpenSource Finally got the Nginx Proxy database to work. Had to change the db from 'jc21/mariadb-aria:10.4' to 'webhippie/mariadb:latest'. I am pretty sure it's a db/Pi 4 issue, since I am using the latest Pi 4 with 64-bit Ubuntu. The only issue I seem to have now is that I can't change the db password under the admin user. Not sure why, but so far that's not a huge issue... I just cancelled out when I first opened the nginx-proxy admin.
@@heliwrschannel2047 You should be able to set the db password to anything you want in the config.json file, as long as it matches what you put for it in the compose file. Or am I misunderstanding? Still, glad you got it going.
Worked for me, and I have no experience with Ubuntu. I managed to get it working, thank you very much for your video. Best one so far out there.
Glad it worked well. That makes me super happy!
I spent ages looking at tutorials for this, following them 1:1 but somehow always failing, yet this tutorial nailed it for me.
For some reason, none of the other videos I watched besides this one mentioned that you can forward your Portainer's IP in nginx, and that's what got me.
So very glad it was helpful!
Same here. For a home server, this listed all the steps required with clear instructions and examples on how to do it!
A little late... like more than 2 years, but hey... Some things have changed, but the most important thing is that I managed to do what you did, thanks to your video! A big thank-you! 💯💯💯
Super glad it was still helpful.
@@AwesomeOpenSource WOW! Thank you for your reply. Your work motivates me to the highest degree! I just realized that I can't have a fixed IP address in my private network. In fact I can, but it costs me 26€ per month :(
I just started a new challenge: seeing how to implement IPv6 on Docker :)
@@il51diablo You could also use dynamic DNS, so it'll update your DNS records automatically once your WAN IP changes.
After going through several videos here on YouTube, I found yours, which cleared up one of my main doubts. Congratulations on the didactics presented in the video.
Thank you.
Thank you! You are a lifesaver; everything on point, clear, and concise. From my point of view, the only snag was that I couldn't use an LXC container in Proxmox, so I had to do it in a VM.
It's all good. I use several VMs.
Excellent explanation. This really brought together nginx proxy manager and how to route containers to DNS. Really appreciate it!!!
Glad you enjoyed it
Great Tutorial. Thanks for explaining everything clearly and with the right speed.
You're welcome!
Underrated Channel!! I have been struggling with docker and docker-compose and this helped a lot.
I'm so glad I was able to help, and thank you!
Another fantastic tutorial you've got there, mate.. Love the way you present it to us. Crystal clear.. 👍
Thank you kindly
this video is EXACTLY what i needed. the explaination is perfect. thank you very much
Glad it helped!
That was an extremely useful tutorial! Thanks! And kudos to Scotti-BYTE Enterprise Consulting for mentioning you on his site, too!
I appreciate it, and yes, @Scotti-BYTE is awesome!
Great video. :) Finally clear and concise explanation.
Thanks, glad it was helpful.
Great content! I've been watching your videos for a couple of days and find them just fantastic. I'm gonna have to try some. Greetings from Guatemala.
Thank you! I'm glad you enjoy them, and please give them a try and let me know how it goes. Also consider joining my community at discuss.opensourceisawesome.com
Wow! I immediately subscribed after this amazing tutorial. This was really helpful to me. Thanks so much! 🙂
Glad it was helpful, and thanks for subscribing.
This was an absolute rock star video. Great job.
Wow, thank you!
Really great tutorial. Helped me find a mistake I made. Clear and easy to understand. Thanks!
Very glad it helped.
This was really helpful. I managed to get it working using Digital Ocean, but not on my own Proxmox / Ubuntu / Docker container instance. I forwarded the ports correctly and updated the iptables of Proxmox to forward port 80 to the IP address of the VM and the port of Nginx, but no dice.
Hmmm, on my proxmox I didn't have to forward a port to the VM as it was assigned an ip from my normal LAN network range.
@@AwesomeOpenSource Thanks for the reply, turned out it was just that I can't access the domain from inside my own network!
Bingo - I found just what I was looking for here. Thanks so much for an excellent video. I am working through it and hope to get my desired result. Your explanation is very clear - sets you apart from the rest!!!
Very happy it was helpful to you.
THE BEST TUTORIAL EVER THXX BROO.. BIGG RESPECT
My pleasure my friend. Docker away!
This saved me sooooo much time. Thank you.
Glad I could help.
Wow thanks a lot for this! Very clear! Greets from Argentina!
You are welcome, glad you enjoyed it.
You explain things so well! Liked and Subbed!
Thank you so much, and glad you enjoyed it.
Excellent job. Now let's see securing your NPM setup.
Coming soon.
Awesome. I'm going to be setting this up this week
I hope it all goes smoothly.
@@AwesomeOpenSource Just letting you know that it went smoothly. I used Linode, just because I'm used to their platform.
@@BigRedAdventures That is Awesome! Glad it went well, and thank you for the shout out on Twitter.
Finally!:D Everything works, no more 50X errors. Thank you so much
You bet!
Great tutorial Brian. Thanks a lot for this
Glad it was helpful!
Thanks man, such a good tutorial!
Glad you liked it!
Great video! A very good description of how to set up Nginx with Docker - probably the best one I've seen. One question... I followed along and set this up in my home lab, BUT I found that when I reboot Docker - or I lose power - my credentials in Nginx are lost, along with all the reverse proxy settings I created. It works fine if I just shut down the Nginx container and restart that or other containers - it's only a problem when the entire Docker station is rebooted. Any idea why the reverse proxy settings and Nginx credentials are getting reset? All other things are persistent - like WordPress sites and databases, etc.
Hmmmm. Sounds like there is a missing volume mapping in NGinX Proxy manager. Make sure you've set them all. Since I made this video the quick start for NPM has changed a bit, so definitely look at his example on the main NPM page.
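For anyone hitting the same thing: the fix is host-side volume mappings, so the data survives container recreation. A sketch of the relevant part of the older two-container setup (the host paths on the left are your choice):

```yaml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    volumes:
      - ./data:/data                    # proxy hosts, users, settings
      - ./letsencrypt:/etc/letsencrypt  # issued certificates
  db:
    image: 'jc21/mariadb-aria:latest'
    volumes:
      - ./mysql:/var/lib/mysql          # database files
```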
Man this was REALLY useful to me! Thanks a lot!
Glad it was helpful.
Great video, thanks for sharing.
Thanks for watching!
Great video! However, the SSL stuff didn't work for me. Also, you didn't mention how to specify locations (folders) for each domain/subdomain.
This is a fairly old video, and at the time the folders method of setting up sites wasn't well supported. These days, you can set that more easily using the Custom Locations tab in the UI. Also, you no longer need the separate config file, just the docker-compose on his quick-install page.
Very helpful, thank you
You're welcome!
Thank you. Great tutorial!
Glad you like it
With most ISPs you don't get a static address, which makes this more complex to set up.
You are absolutely correct Peter. That's where something like DuckDNS comes in. I have a video on how to use it to help with dynamically assigned IP addresses from your ISP. th-cam.com/video/Dm5MyuUdq2s/w-d-xo.html I hope it helps if you're having trouble with dynamic IPs.
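For the curious, the DuckDNS side is just a periodic HTTP call. A minimal sketch (YOURDOMAIN and YOURTOKEN are placeholders for your own values):

```shell
# Create a small updater script; DuckDNS sets the domain to whatever
# public IP the request arrives from when "ip=" is left empty.
cat > duck.sh <<'EOF'
#!/bin/sh
curl -fsS "https://www.duckdns.org/update?domains=YOURDOMAIN&token=YOURTOKEN&ip=" -o /tmp/duck.log
EOF
chmod +x duck.sh
# Then run it on a schedule, e.g. a crontab entry every five minutes:
#   */5 * * * * /path/to/duck.sh
```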
Dude, if you are ever in Philly, look me up. You've got a beer on me!
Thank you sir. I'll definitely keep that in mind.
npm can also be configured to connect to localhost port 81 🙂
Yes, you are right. It can connect to localhost as well.
@@AwesomeOpenSource An update on that: I created another bridge and added the VMs to it (I kept the default one for the moment too). After the nginx container restarts, it can find the other containers by name alone. Then I can simply remove the published ports from the containers behind nginx.
Thank you for your videos, well made and very easy to follow, and stuff works. I followed this video and have it all working, but since I already had Cloudflare set up to point to my IP, once I take down the server I have running (which is only hosting a website) and replace it with this stack, the connection breaks and I can no longer access my domain. I am very new at this, but I would like to figure it out so that I can self-host several things through the proxy manager. The question: are there any resources you can point me to that may help with undoing what I have and going to a new setup, with the proxy manager being the entry point? Thank you.
you are an awesome tutor man! you deserve my like and sub
I appreciate it!
Thanks, this tutorial helped me get things up and running. One question though, I'm also running portainer here. Is it possible to enable ssl for the portainer install using the cert we created in nginx?
You can create a new host entry for Portainer in NGinX proxy manager, then have it pull the ssl cert automatically for that host entry.
I never got this jc21 version to work on my QNAP NAS. I had more luck with the jlesage version. Don't know what the differences are, though.
After learning all of this by sweat and tears, I then happen upon this video, haha. I have spent 3 months becoming fluent with Docker and docker compose. Last night I went down a rabbit hole getting my reverse proxy for some of my internal services to face outward. I found the same container you did; it's great. I am having some hang-ups with SSL cert generation with Let's Encrypt through my Google domain. It is shown in my certs in Google, and applied in the proxy manager, yet browsing to my test site led to an insecure connection. Should the SSL cert be a wildcard, and then I just have an A record for every subdomain?
I loved the video. It's holistic, and it really sounds like you know the whole stack.
Thank you, and just like you, I've learned, and continue to learn through determination, and perspiration. The SSL cert should get generated by NGinX Proxy Manager (requested) for your Host entry, and stored in the NPM container. It sounds like you've generated certs in the past, and are trying to use those from Google. Maybe I'm misunderstanding, but the DNS entry should just point to your IP, and NPM should proxy the traffic to your container. As long as LetsEncrypt can reach your site on 80, it should issue a valid cert.
If I'm misunderstanding let me know, and I'll try again.
really thankyou bro!!!
You are very welcome
Thank you so much ! This was super helpful!
Glad it helped!
Excellent tutorial.
Thanks :)
Glad it helped!
Great tutorial! Thank you! 🙏👏
My pleasure
Awesome is Awesome
Thank you.
You could also add a host name, add SSL to the main domain of your nginx proxy, and add the port 81.
Yes indeed. I show that on a separate video.
Hi
I have a new problem now: I am able to connect to my Nextcloud from the WAN, but not on the LAN with the same account. What could be the problem? Should I use split DNS? Any help appreciated.
Yes, you may need split DNS. Some routers can do what I call "hairpinning" which means essentially it can call a URL, go out of the network to the internet, and hairpin turn around back into the network. Other routers don't have this, or it's not on by default. So, you may just need to turn it on in the router, or setup internal DNS to point to the server as well.
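If the router can't hairpin, internal DNS is the usual fix. For example, if you run dnsmasq on the LAN (Pi-hole uses it under the hood), a single line maps the public name straight to the internal server — the host name and IP below are placeholders:

```
# /etc/dnsmasq.d/split-dns.conf (hypothetical entry)
address=/nextcloud.example.com/192.168.1.50
```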
Great video! Tnx for making this content!
Glad you enjoy it!
23:45 wow:)
Awesome tutorial! Do you need to request a new / different cert for all the hosts you add or can they all share the cert that was issued to NPM?
I've read a few times that people have set it up to use a wildcard cert at the domain level, but I don't know how to do that. Since Let's Encrypt is free, I just let it issue a cert for each site.
@@AwesomeOpenSource Here is a cool video I learned how to do it # th-cam.com/video/TBGOJA27m_0/w-d-xo.html
Super!
You used a VPS and started installing Docker etc. right away. Is it required to secure this public server with a firewall or anything else?
What is your recommendation, or do you already have a video for this?
Thanks, Lars
I have a few videos on Securing your Self Hosted services. VPS isn't required. I run almost all of my services at home. Firewall, whether at home, or on VPS is highly recommended as one part of the security framework you want to build up over time.
@AwesomeOpenSource Thanks for your fast answer. I've watched a lot of your videos over the last few hours and I'm excited. Thanks for all your efforts.
How can I find the securing videos?
What exactly is not required for a VPS, when a firewall is recommended for both?
I have a Synology NAS at home and don't really want an additional server at home; I would prefer having all or most of it on a VPS. Or wouldn't you recommend this?
I'm working in ICT consulting and support, so I will use it mostly for work with my clients.
Thanks 🙏
Lars
You can 100% use a VPS for your services. My provider, Digital Ocean, has a nice separate little firewall that you can set on each of your VPSs. This makes it really easy to restrict traffic to just the ports you need: generally 22, 80, and 443, and maybe a few others depending on the applications. But at least you don't have every port on the planet open to your services.
th-cam.com/video/UfCkwlPIozw/w-d-xo.htmlsi=8xOXDooSQNBVLr5A is one of my videos on securing things a bit. and th-cam.com/video/bpWytcz4uMw/w-d-xo.htmlsi=Tx5kfavmjI3i_FpE is one on docker and firewalls.
I love your tutorials! Thank you for taking the time to spread the knowledge. How do I get your script working in Debian? I'm running Proxmox with Debian 10 TurnKey Core 16.0.1 in an LXC, and I notice your script has Ubuntu repositories.
Just pushed script modified for Debian 10, let me know how it goes.
@@AwesomeOpenSource Wow, I just lucked into this and just set up Debian 10 on an old HP server. Where is this script? The server has 4 600GB drives in RAID, giving me 1.2 TB, and 24 GB RAM. Hoping that will work to set this whole thing up with about 12 Docker images. Replacing 8 even older HP servers running ancient Ubuntu 12.04. I really hope this works. Thoughts?
@@jamorahcito github.com/bmcgonag/docker_installs On that page look for the one with Debian in the title. You can click on it, and copy / paste the script, or you can pull the repo down and just use the one you need.
Thank you Very Much!!!! A++ : )
You're welcome!
Quick update: the MariaDB image should be changed to "jc21/mariadb-aria:10.4.15" or you get an error. :D
I believe you can run NGinX Proxy Manager without any db section at all. It's an all-in-one image.
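That matches the current quick setup in the NPM docs, which looks roughly like the following (host-side paths are up to you):

```yaml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'    # public HTTP
      - '443:443'  # public HTTPS
      - '81:81'    # admin UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```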
May Allah bless you with Islaam.
Thank you so much.
@@AwesomeOpenSource ❤❤❤❤❤❤❤❤❤❤❤❤
Followed your video, thank you!! I have it set up and working, but the version I have, 2.9, doesn't show access lists or users like in your video. Do I need to do something else to the Docker image to add them?
I'm on 2.9.2 and have the Access Lists and such. I'm not sure why it wouldn't be showing for you.
@@AwesomeOpenSource Thanks for the reply! My exact version is 2.9.3. Any plans to upgrade? I would be interested in whether you still have it or not after.
@@92885amason Oh, I definitely still want that stuff.
These videos are awesome (new subscriber). I just started messing with Docker in the last year or two, and now I have all kinds of self-hosted stuff running behind Nginx Proxy Manager... It really simplifies setting up certs and managing sites that need external access without opening random ports. One question I have is about the initial screen ("Congratulations, you reached NGINX"). I edited that page just to have my logo. Is there any way to make that landing page https? (Not the management :81 page; I don't need that public-facing.) Wasn't sure and couldn't find a simple way. Thanks, and keep up the great content!
Interesting question. I’ll have to see what I come up with.
@@AwesomeOpenSource Cool, look forward to seeing if you find a way. Thanks for the reply!
Great video, you have a new subscriber :-)
Welcome aboard!
I got a little confused at 34:54. Could you please explain that a different way? It sounds like I can use one server and receive into it requests that will be forwarded to another server? Sorry, that sounds great, but I did not get it 100%.
You can use more than 1 server. One of the servers needs to have your proxy manager running on it. That server will receive requests for your sites. Your sites can run on other servers (both physical hardware or virtual machines), and the Proxy Manager server can send you to the right server based on the URL in the request. Hope that helps.
Will the instructions in this video tutorial work with an LXD container with an Ubuntu 22.04 image as the base? Or will I need to make some slight adjustments?
I haven’t tried it, but I do run several LXC containers with docker apps in them with no issues, so it’s worth a try.
@@AwesomeOpenSource Will NPM work with Cloudflare Zero Trust for exposing self-hosted apps to the evil ole internet? Lol
Yes, and you can find a good video on how to set it up over on the @IBRACORP channel.
Hi, I'm a beginner, so I started slowly in setting up a homelab. I purchased a domain from Namecheap, added DNS records as shown in the video with my IP addresses, and installed Nginx Proxy Manager in Docker on Windows (WSL). In the router I added port forwarding, and I also added inbound rules in Windows Defender Firewall. Then I tried accessing NPM from another device connected to the same WiFi, and I'm able to access it; I also made a proxy host entry in NPM that is accessible from another device on the same WiFi. But when I try to connect from the internet, not from the same network, I'm unable to connect. Any idea how to resolve it? By the way, my ISP has not blocked ports 80 and 443.
First, just ping your domain and make sure it shows your public IP address. Next, use the traceroute tool to see where the connection fails; it should show each hop as you try to reach the network. Depending on what you're using to connect from off the WiFi, the DNS may just not have been updated and propagated to the DNS servers external to your network yet.
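A sketch of those first checks from a terminal (assumes dig and traceroute are installed; example.com is a placeholder):

```shell
DOMAIN="example.com"   # replace with your real domain
# Step 1: does the name resolve to your public IP?
RESOLVED=$(dig +short "$DOMAIN" 2>/dev/null | head -n 1 || true)
echo "Resolved: ${RESOLVED:-no answer (or dig not installed)}"
# Step 2: if it resolves correctly, watch where along the path it dies:
#   traceroute -n "$DOMAIN"
```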
@@AwesomeOpenSource Thanks, Brian, for your reply. I did two tests. 1. Connected to the WiFi, pinging the machine where NPM is installed gets the error message "Destination host not reachable". When I turn off the Windows firewall for private networks and run the ping command, I get a response back, but for security reasons I cannot disable it permanently. Can you let me know how to overcome this? 2. When I ping remotely, i.e. not from the same WiFi, the cursor just blinks there, with no response from ping either way. Since the first step didn't succeed, I haven't tried the traceroute option. Can you help me overcome this? If there is any video on the channel, it'll be helpful. Thanks in advance.
Awesome video! I have a doubt: I was thinking of running a custom app on port 443, to later be accessible by another server behind a firewall that can only access port 443 over https, with Nginx Proxy Manager. But my application does not have a Docker image by default. Is it possible to run any app behind NPM, or only the ones that have a Docker image available? And how would I do it? Thanks.
You can point NPM to any service on your network and pass the TCP / UDP traffic through it. So you shouldn't let the docker aspect slow you down. I run several services that are not dockerized, and use NPM to pass the traffic with no issues at all.
@@AwesomeOpenSource Oh, thanks for your response. But do you believe that if I run my app in a new Docker container, let's say an Ubuntu image, and let it run there, it will be easier for NPM to reach the service running in Docker, or not? Thanks.
Thanks for the tutorial. Do you run a UFW firewall on the Digital Ocean VPS? What ports need to be open?
I didn't for the video, but you'll want 22, 443, 80, plus whatever you choose when setting up your various VPNs.
@@AwesomeOpenSource Thanks. I am trying to run Rocket.Chat in Docker, and it also seems like I have to open up port 9201 (per your setup). Is that normal? Also, if I want to get to the Portainer admin, same thing: 81 needs to be open. Is that correct?
@@mooretyler Depends on where you are trying to run it. Are you running it on a machine in your home LAN, or on a VPS with a Public IP address?
@@AwesomeOpenSource It is on an Ubuntu VPS. Seems like there is an iptables rule or UFW forwarding rule that is causing the issue.
You have to tell me one thing, though: how did you manage to deploy multiple WordPress containers in Portainer? I managed to set up one via the App Templates, after which I fail to set up any more WordPress containers via the App Templates. I then get the message that WordPress failed to establish a connection to the database, and in the log of the 2nd DB container I see: "InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files." What is going on, and how do I get around this?
I think my video is misleading. I set those up using a Docker compose file for each site. I literally created 4 folders, copy-pasted in the docker-compose.yml, made adjustments to the yml file as needed, and ran it. These were migrations, so I also had to migrate in exported MySQL data for each.
Awesome tutorial!!! When trying to log into the admin site for the first time, I am getting a "Bad Gateway" error from my browser. I have tried three different browsers and got the same error. Do you know how I can fix this?
Usually this indicates that you have a mismatch between the database username or password in the two places you fill it in. Double check and make sure you don’t have extra characters or misspellings. Others have just deleted the containers and images and re-run it and then it works.
I was okay up until setting up Nginx Proxy Manager; it was showing multiple errors after docker-compose up -d. Of course, there is a newer version out now, and the setup on their page is different too... Edit: it seems you can't put a space in front of the environment block in the config file. All I did was backspace and line it up with the rest of the formatting, and it ran. Strange.
Yeah, you definitely want to go with their latest documentation when possible, and definitely with .yml code it's important to keep the spacing exactly right. Glad you got it up and running!
This worked for a lot of my Docker containers except for one: Wireguard. Have you had any experience setting up a VPN service, whether Wireguard or other, and getting it to work while using Nginx Proxy Manager?
I think the issue here is that wireguard is not a web-site / app, it's a VPN. You can undoubtedly forward traffic through NGinX, but I'm not sure NGinX Proxy Manager was designed to handle that kind of traffic.
@@AwesomeOpenSource Noted, so based on your experience it's either use Wireguard or Proxy Manager?
@@yoshikidneo869 correct. But I’m not the end all be all. Definitely ask over on the GitHub page, as they may have better answers.
Where did you get the static IP from, please?
Can you be a little more specific? I created a static IP at one point, more as a 'how to do it', and not so much as necessary for the context of the rest of the video. That's my bad if it confused you.
So I know this is an old video, but I'm kind of curious: Nginx Proxy Manager has something called streams, and I'm wondering if I can use that with two different domains for Minecraft servers, one domain on one port that is forwarded to a different port.
I don't know if it has a Stream option. I'm not familiar with that term.
CapRover does exactly that!
And it's awesome.
Though your setup is better for self-hosting.
Thanks for the great video, but when the script came to the part to install MariaDB, I got a "not found: manifest unknown" error. Could this be because that version of MariaDB is no longer valid and was pulled from the repo? Is there a different way to get MariaDB for Ubuntu to run with Docker and Portainer? Is it possible to modify the script to reflect a newer version of MariaDB?
@@funkygetaways I finally got it to work. I went back to the Nginx Proxy Manager setup website and saw that there is a new docker-compose where MariaDB is set to download the latest. After I got that in place and ran the docker-compose, I was able to log in and get it set up.
Yep, this one is a bit older, but I do have the newer method on some of my newer videos.
I was finally successful in making this work but it does not auto-start when I reboot the server. Should it be auto-starting already or are there additional steps to achieve that?
For me it always restarts when I reboot, but you may need to check the value in the docker-compose file for "restart" and make sure it's set to "always".
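The relevant line, as a sketch:

```yaml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always   # "unless-stopped" also survives a host reboot,
                      # unless you stopped the container yourself
```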
Hi, I've followed all the steps. The only thing that does not work for me: if I type a subdomain that doesn't exist, it gives me a generic 404 but not the congratulations page. What have I done wrong?
You need to make sure to setup an A Record in your DNS as a wildcard. Basically create an A Record with a "*" in it instead of the subdomain. Then all subdomains without a specific A record will go to the place the * points at. Hope that helps.
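In zone-file terms, that looks something like this (203.0.113.10 is a placeholder IP):

```
example.com.       IN  A  203.0.113.10   ; apex
app1.example.com.  IN  A  203.0.113.10   ; explicit subdomain
*.example.com.     IN  A  203.0.113.10   ; wildcard: all other subdomains
```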
@@AwesomeOpenSource I am using a free domain from freenom and does not allow the use of wildcards. Thank you for replying.
Can i configure nginx manager + cloudflare proxy?
You can indeed. It takes a bit more configuration at times, but it definitely works. I use it like that for opensourceisawesome.com now.
Hi again. Everything was working fine (running Docker on Windows 10) until I restarted. The container's IP changed. I made changes to the IPs in the nginx proxies, but it does not work. I can only reach it through the host's IP on port 81. I deleted everything and reinstalled everything, but still did not get it to work again as it was.
Ahhh, yes. I have some newer videos where I talk about docker networking, and why it's useful. If you set it up like me in the video here, you'll see that I just use the default network, and the issue is that the IP can change after a reboot (though it doesn't always happen). If, however, you create a specific docker network and put everything on that network, then the IP will always be the same thereafter. You also get the benefit of using the container name in place of the IP if you want, when the containers that need to talk are on the same docker network (other than the default network).
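A sketch of that setup (the network name proxynet and the service names are placeholders; create the network once with docker network create proxynet):

```yaml
networks:
  proxynet:
    external: true   # created beforehand: docker network create proxynet

services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    networks: [proxynet]
  myapp:
    image: 'nginx:alpine'   # stand-in for any web app
    networks: [proxynet]
    # NPM can now forward to http://myapp:80 by container name,
    # no IP address needed
```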
@@AwesomeOpenSource Thank you. I got it working again. There was nothing wrong with Docker or nginx. When I restarted the PC, VMware loaded a service that was using port 443, so the certificates were not working for that reason.
I keep getting "Internal Error" when generating SSL certs. Any tips?
Make sure that the letsencrypt service didn't end up with more than one instance running, both on the host and in the container.
I keep trying this step by step from the beginning and continue to get the Bad Gateway error others have mentioned. I tried changing the db, but still no luck; I keep getting the same error when trying to log in the first time. I am running Ubuntu 18.04.5 on a Raspberry Pi 3B+. Any help will be greatly appreciated.
I had the same issue, and the cause was the db image. I changed the line "image: 'jc21/mariadb-aria:10.4'" to "image: 'yobasystems/alpine-mariadb'" and it all went back to normal. It took a while and a lot of reading to figure out, but it worked for me.
How / where do I add the "proxy_buffering off;" line? Any ideas?
22:57 - Key question: how do you set it up so that your analytics see the real client IP, and not the proxy's IP or the Docker host's?
I don't guess I understand the question.
@@AwesomeOpenSource When someone connects to a service deployed in Docker, and you want to know the IP of that person, you cannot, because services in Docker only see the IP of the Docker bridge, not the actual connected client. This is a problem especially when using services like Matomo, because it cannot show you from what countries/cities people are connecting to your websites.
@@impact0r OK, I think I follow what you are asking. I think there are ways to do it, but the easiest would be using the host network.
--network=host
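In compose terms that flag looks like this (Matomo is used here only because it came up in the thread; note that with host networking the container shares the host's interfaces directly, so you don't publish ports):

```yaml
services:
  matomo:
    image: 'matomo:latest'
    network_mode: host   # container sees real client IPs; no port mapping
```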
So, you created the config files in the root folder, you added the nproxy folder inside, and the config files are stored in it, right?
If you check out the jc21 nginx proxy manager GitHub page, he has removed the need for the config folder; that may help you. Now you just enter the environment variables right in the compose file.
Your docker working on Ubuntu? Raspberry? or ?
I've run docker on Raspberry PI 3B and 4. I run it on Ubuntu currently, which is running as a VM in Proxmox.
@@AwesomeOpenSource In this video, on the Proxmox, right?
@@okanerdem No, I did this in Digital Ocean on a $t / month server. It was just for the demo, but yes, on Ubuntu 20.04.
@@AwesomeOpenSource Thanks for the information. Actually, a video about exporting and importing containers would be good. I have a Raspberry Pi and I'm using Docker on it, but I want to move it to another host. I want to move all containers (including data, configs, volumes, etc.) to the other host, but I'm not too good at Linux. Do you think you could do a video about that?
@@okanerdem Let me see what I can come up with.
When restricting access, creds only get asked for when accessing apps via the domain name. However, I can access the app directly with the IP address. Is this a bug?
If you're going straight through the IP and port, then you are essentially bypassing NGinX Proxy Manager, so block that port on your firewall from the outside. I think there are ways to make apps only respond if they're called via the URL as well, but I'm not sure how to do that.
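One common pattern (a sketch with hypothetical names 'proxynet' and 'myapp'; the right fix depends on your setup) is to stop publishing the app's port at all and let NPM reach it over a shared Docker network:

```yaml
# The app publishes no host port, so direct IP:port access from outside
# is impossible; NPM, attached to the same network, forwards to it by
# container name instead.
version: '3'
services:
  myapp:
    image: 'my-app:latest'   # hypothetical image
    networks:
      - proxynet             # hypothetical network, created once with: docker network create proxynet
networks:
  proxynet:
    external: true
```

In the NPM proxy host entry you would then forward to hostname 'myapp' and the port the app listens on inside the container.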
Thanks for the video, always good stuff. I am unable to access Portainer from nginx securely. The TLS certificate is generated and it's online, but I can only access it from the subdomain if I add :9000 at the end of the subdomain I have pointed at it. Please help.
Followed the video I was successful in installing docker, docker compose, nginx proxy manager.
In order to troubleshoot, I generally start by just creating the http in NPM first. So I only fill out the details tab, then make sure it forwards the way I expect. If it doesn’t, I’ll check logs to see if I can tell what went wrong.
Once I can reach http, I then edit the entry to enable https. Also, check to ensure the browser is adding https to the URL.
When it doesn’t work, do you get an error? Or just nothing loading?
Hi Brian. I followed this mostly to the T, but I'm having an issue. when giving the routing of each sub-domain (nextcloud.example.com), I used the server actual IP and then the port I assigned to the docker container. Now when I activate UFW (allowing ports 22, 80 and 443) the only site I can get to is the top domain (example.com), the others won't load now. Would this be solved by using the docker IPs? I have all this running on a RAMNODE VPS.
I believe if you use the docker container gateway IP it should be resolved, yes.
@@AwesomeOpenSource I have the opposite problem. I can't use the docker container gateway IP. It's completely unroutable on my LAN. Pinging the gateway IP or any container behind the gateway IP completely fails. Forwarding traffic from my nginx to the container IP addresses or the docker gateway IP fails with the same message you get when the docker container isn't running at all. I have to use the physical host IP address to forward traffic to and that works perfectly.
@@hiddenfromyourview Ok, so the ability to use the Docker gateway IP is based on running your container application on the same host and Docker install as NGinX Proxy Manager. If you are running them on different machines, then yes, you have to use the host IP instead.
@@AwesomeOpenSource Yea, they're running on the same physical host on my LAN. Though, after making my comment, I read there may be differences between docker.io and docker-ce. I believe you're using CE and I'm probably running io. I wonder if that's part of my issue. Thanks for following up sir! I appreciate it. I'll have to explore this one a bit more.
@@hiddenfromyourview Yes, I do run Docker CE, and not docker.io for everything. Very well could be the difference.
THANK YOU! This is an excellent tutorial. I have watched a couple of others but this has been the most clear so far. To make sure I have this right. I own nx9.us. Point all subdomains of nx9.us to the DO droplet. I set up the digital ocean droplet with NGinx proxy manager. I set up proxy manager to send movies.nx9.us to home ip xxx.xxx.xxx.xxx:8100 and forward port 8100 on my router to send it to internal ip 10.10.2.50:32400 (my plex server). If I wanted to do the same thing for books set nginx proxy books.nx9.us to point to xxx.xxx.xxx.xxx:8101 and port forward 8101 to internal 10.10.2.51:8080 (calibre server). And so on for different machines/servers. And if I had a docker server I would send to the ports set up for each service on the server. Is this about right or have I totally misunderstood?
So, yes, you can absolutely run it that way. But, going from your VPS to your home like that, instead of opening all of those ports on your home router, I would run NGinX Proxy Manager on one of the machines at home. Then only forward ports 80 and 443 to that machine for NGinX Proxy Manager. Then let it direct the incoming requests to the machines it needs to go to.
@@AwesomeOpenSource Thank you. I really appreciate it. (1) So just run NPM on main machine at the house and let it forward to each service/machine and let it mitigate everything. OR (2) did you mean just forward everything from NPM on the droplet to 80&443 at my house and let main machine there run NPM and send to specific service/machine ? Total newbie at networking. I just don't want to do something boneheaded and leave myself hung out to dry. I was just not wanting my home IP public plus it is dynamic and the D.O. is static.
@@RedNekNix I believe #1 is the better option, but you could run both. How often your home IP changes would drive which method you use. I have a dynamic home IP, but it rarely changes (I've been making these videos for over a year and it hasn't changed in that time that I recall). So, not too terrible if it does.
@@AwesomeOpenSource Thank You will do. Looking forward to more videos.
Using someone else's content is cool. It's called open source for a reason. Also fair use.
Please, I need help. When I used the docker compose:

version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'   # HTTP Traffic
      - '81:81'   # Dashboard Port
      - '443:443' # HTTPS Traffic
Etc...

The error message said ports 80 and 443 are in use, and the nginx container didn't start.
So, something is using port 80 and/or 443 on your host machine, and you need to find out what that is. Then we can figure out how to address it.
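For anyone hitting this, a diagnostic sketch (assumes a systemd-based Linux host; a host-installed nginx or apache2 is the usual culprit):

```shell
# Show which process is bound to ports 80/443 on the host.
sudo ss -tulpn | grep -E ':(80|443)\b'
# If it's a host-installed nginx, free the ports before starting NPM:
sudo systemctl stop nginx
sudo systemctl disable nginx
```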
Almost got through the install, but failed with the error - "manifest for jc21/mariadb-aria:10.4 not found: manifest unknown: manifest unknown".
After a Google search, I replaced image: 'jc21/mariadb-aria:10.4' with image: 'jc21/mariadb-aria:latest' and got through the install.
I had to fix this the other day. Let me see what I did. I think there is an issue with the MariaDB version somewhere.
Change the version for mariadb in the docker-compose to "latest" instead of 10.4, and try again.
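As a sketch, only the db service's image line in docker-compose.yml changes (the rest of the service definition stays as it was):

```yaml
  db:
    image: 'jc21/mariadb-aria:latest'   # was: 'jc21/mariadb-aria:10.4'
```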
Can you make a tutorial to install this on portainer? I saw that you run NPM on your portainer. I tried as well but i got bug after bug, maybe you have the solution?...
Are you just wanting NPM on Portainer? You need Docker installed in order to install Portainer.
@@AwesomeOpenSource Yes, I have docker and docker-compose installed. But if I install NPM via CLI and not with Portainer, it shows "limited" or something, like not enough permissions to stop the container inside Portainer. So that is why I tried to deploy NPM inside Portainer, but it fails every time because of the config.json file... I tried to make config.json a bind volume, but then NPM says "internal error" and the NPM logs in Portainer spit out errors like "access denied". And yes, I have set the db image credentials and config.json credentials the same.
@@obnoxious2336 I believe the author of NPM has removed the need for the config.json file. You should definitely check out the updated instructions for running NPM. This might solve your issue.
@@AwesomeOpenSource thanks for the reply. I noticed it after i replied to you :D.
Have a good day man!
@@obnoxious2336 I'm glad you found it. Did it work after you changed it?
Alright. I've worked through the tutorial to the point where you enter ip_address:81. Page not found. If I enter ip_address:80 I get the congrats-nginx-is-installed page...
Make sure port 81 isn't being blocked. Not all ports are open by default on all hardware or machines.
@@AwesomeOpenSource Thanks. I fought it for hours with no success. Ended up using a xampp install and threw my htaccess file in and away I go :)
@@AwesomeOpenSource I think the problem is the docker0 ip address. I was wondering why you used that address. Typing the docker0 ip address and port directly into a browser doesn't work, but it works when I use the ip address of the OS that Portainer is running on.
Will this not work on OMV5 with Portainer? Or am I just stupid?
Maybe. I'm not familiar enough with OMV to say. You can do a lot with Portainer as well, though its stack and compose support seems limited by the older version they use. But I'm sure someone can make it work if they want.
I've followed these directions and others more than once but no matter what I get "Bad gateway" when trying to log in for the first time. What am I doing wrong?
Usually this means you have a mismatch in the MySQL user or password. But a few have had good luck just wiping and re-pulling down everything. Note, the instructions on the NGinX Proxy Manager site have changed slightly, so no config file is used, but everything is put into the Docker Compose file.
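As a sketch of the no-config-file layout (values shown are the 'npm' defaults from the NPM instructions; yours will differ, but each marked pair must match or you'll get bad gateway):

```yaml
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    environment:
      DB_MYSQL_HOST: 'db'
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: 'npm'        # must match MYSQL_USER below
      DB_MYSQL_PASSWORD: 'npm'    # must match MYSQL_PASSWORD below
      DB_MYSQL_NAME: 'npm'        # must match MYSQL_DATABASE below
  db:
    image: 'jc21/mariadb-aria:latest'
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
```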
@@AwesomeOpenSource I just removed the containers and the nproxy directory and started over using just the instructions on the NGinX Proxy Manager site. I even just left all of the default "npm" values in the docker-compose.yml file but still getting "bad gateway". Is there anything else I need to remove when starting over from scratch?
@@tyson0016 Are you using Docker-CE or Docker.io?
@@AwesomeOpenSource Docker-CE. It just suddenly started working without me doing anything else to it. I'll continue with using it for a bit and see what happens. It hasn't inspired much confidence so far. Thanks for the suggestions.
I got all the way to the Let's Encrypt part. I keep getting an error that won't let me use SSL; not sure why, but I followed the directions exactly (I thought). I did notice that in the video the IPv4 docker address used was 172.17.0.1; however, my docker address didn't look like this. Mine was 172.23... and the last two octets were both 3 digits. Is this where I messed up? I am using Windows 10 (workstation) with Docker Desktop using Ubuntu as WSL. Also, I do have a Synology behind my router as well. Not sure where to start... any thoughts?
Docker addresses on Windows 10 may just be assigned differently. Not sure on that, but it should work. Make sure your Windows firewall isn't blocking ports 80 and 443 for LetsEncrypt as well.
@@AwesomeOpenSource Just a quick follow up. When you created the droplet, that was your server. I have a WSL Ubuntu that has its own IP and a host that has its own IP; they can ping each other. When creating my A record with Google Domains, I put Docks... but for the IP, would I use the Ubuntu IP or the host IP (that Ubuntu is on)? This, I believe, is my problem and I can't figure it out, and when testing it's causing overlap and I have to restart (changing my Ubuntu IP address), plus the time for DNS records to set. So, really my main question was the initial one: for the domain DNS A record, the Ubuntu IP or the host (Windows) IP? Reminder: they are the SAME MACHINE. I really appreciate all these videos!!
@@tbones3141 If the Ubuntu IP is on your LAN network (its IP is on the same subnet as the rest of the machines), then you could use that IP; otherwise, you first need to get through the Windows machine and then forward that traffic through to Ubuntu somehow. I'm not a Windows user, so no idea how to do that if that's the case.
My server's containers don't come back up after a reboot unless I run "docker-compose up -d". Is there a solution that doesn't require typing this command every time the server reboots?
In the docker-compose.yml file there is a parameter called "restart". You can set that to a value of "always" and it should just start up on its own.
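As a sketch, in docker-compose.yml it looks like this (NPM image shown; "always" vs "unless-stopped" is a judgment call):

```yaml
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always   # container comes back up whenever the Docker daemon starts
```

Note this only helps if the Docker daemon itself is enabled at boot (e.g. sudo systemctl enable docker).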
@@AwesomeOpenSource Thanks for your reply, brother. Now I'm faced with one more problem when I try to do it via a VPS: after a successful install, the admin start-up page shows "Bad gateway". What should I do? Thanks.
@@sovathnahim819 So, there are a few reasons I've compiled as to why this happens.
1. Make absolutely certain that your username, db name, and password match for the DB in the config and the .yml file. If there is any difference, the application can't access the db and you'll get bad gateway.
2. If you are using Docker.io instead of Docker-ce, this can also happen. Make sure to install and use Docker-ce.
3. Sometimes, just blowing away and re-pulling it all down seems to solve the issue.
Hope one of those helps,
I just managed to make a bonehead move. I had everything working just fine but then clicked under proxy hosts, on http only and set it to https. Looks like it broke the admin page and everything (getting a 400 error now.) Been searching for a few hours and can't figure out how to undo that setting. Any help would be greatly appreciated.
So, did you set a URL to your NGinX Proxy Manager install? And then set it to https? Also, what browser are you using?
Finally, have you tried IP:port instead of URL:port. You might see if you can get to :81
@@AwesomeOpenSource I was up until 4am messing with the config files and it looks like changing the scheme setting back to http fixed it. I got the subdomain ssl working again for the install. It was returning a ssl handshake error in chrome.
@@cryptomnesiac glad you got it fixed.
Your video was helpful with setting up Nginx Proxy Manager, but I am having a problem with setting up proxy hosts for other containers. Do the other containers need to be on the same network as NPM? I could not get NPM to work with the docker0 IP as shown in your video. I was able to get it to work with 172.19.0.2:81 and install SSL with my subdomain; Portainer was showing NPM as having an IP of 172.19.0.2. I installed Snapdrop, and Portainer shows this container to be on 172.19.0.4. I tried using 172.19.0.4:8082 in the proxy host, but the subdomain cannot reach Snapdrop. Snapdrop is accessible via the public_ip:8082. Not sure what I am doing wrong.
Without knowing for sure what commands you're running for these containers it's hard to tell. You may want to try the Docker gateway IP for each container to see if you can get to it using the gateway IP and port number you set.
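A few commands that may help find the right IP to forward to (a sketch; 'myapp' is a hypothetical container name):

```shell
# The docker0 bridge gateway (the address used in the video):
ip addr show docker0
# Or ask Docker for the default bridge gateway directly:
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'
# A specific container's own IP:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' myapp
```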
@@AwesomeOpenSource So I found out that if you use the DigitalOcean Marketplace Docker image that you will have issues with setting up NGinx Proxy Manager and multiple instance of Docker containers on the same server. Once I manually setup Docker on a Ubuntu server, then everything worked perfectly. I appreciate all the videos and tutorials.
@@afp415 great info. Thank you.
I have the same problem using Digital Ocean, except I installed docker manually following the instructions and installed NPM using the normal docker compose yml file without changing anything (other than passwords). I can't get it to work using the Docker0 IP and port 81 without a 504 error. It only works with the NPM container IP (from docker inspect container), but that's no good if you want to proxy other containers running on the same server.
@@AwesomeOpenSource Great tutorial, but I'm having the same problem with trying to forward to other containers on the same server. Always get a 504 gateway timeout. Any solution for this? My use case is: RaspberryPi running Raspbian with Home Assistant (supervised) and a Synology NAS (no VM on this model). I can get access to the Synology but not Home Assistant. I probably could use the NGINX Home Assistant addon but I like NPM better.
There is a debate as to which Docker version to use, and some say the preferable way is docker.io and not -ce (although version 2 of the container runtime is 2.0-ce).
Is there a reason you've chosen the -ce version, which derives from the Docker project itself, and not the Ubuntu one?
One of the answers:
>>>
I use CE and haven't had an issue with it yet. Wasn't aware of a debate. Interesting to know. Why do they say one over the other?
Sorry, just read the rest of your comment. Interesting, but yeah, I suggest CE over IO then, as I consistently find the repo version to be several releases behind.
@@AwesomeOpenSource If you didn't need to remove any binaries, it's OK I guess. But in 20.04 LTS you just go with "apt install docker.io" and that's it, without the need to add the keys and the repos and update the cache, etc. Of course, having said that, .io being several versions back justifies your decision.
Try as I may, I just can't get past a "bad gateway" when trying to log in to nginx proxy manager. I am using a Pi 4 with 64-bit Ubuntu 20.04. It seems like a common error, but I can't figure out how to fix it. Any ideas?
The three reasons I've found are:
1. user, password, or db name don't match from config to compose file.
2. Someone changes the right-side value of the volume mapping, which is the container side.
3. Sometimes, just scrap it and start over.
Usually this bad gateway indicates an inability to communicate with mysql, or inability to login successfully to mysql by the application.
if you want to reach out on telegram, I'm happy to try and help. I'm @MickInTX
@@AwesomeOpenSource Finally got the Nginx Proxy database to work. I had to change the db from 'jc21/mariadb-aria:10.4' to 'webhippie/mariadb:latest'. I am pretty sure it's a db/Pi 4 issue since I am using the latest Pi 4 with the 64-bit Ubuntu OS. The only issue I seem to have now is that I can't change the db password under the admin user. Not sure why, but so far that's not a huge issue... I just cancelled out when I first opened the nginx-proxy admin.
Idiot!!! Just realized I had used the wrong current password... the first time it's 'changeme', not what I set in the yml file.
@@heliwrschannel2047 You should be able to set the db password to anything you want in the config.json file, as long as it matches what you put for it in the compose file. Or am I misunderstanding? Still, glad you got it going.
@@heliwrschannel2047 Oh, glad you saw that. Yeah, gets everyone I think. Don't sweat it.