I have tried it, but I'm missing the part about using my organisation's custom SSL cert paths. In nginx I use the conf file where I can mention the path (since the certs get renewed by the org at certain intervals for all sites on a mounted path), but here in Nginx Proxy Manager I have to update them manually by uploading the certs. Is there a way to achieve this? Btw, thanks for making such an easy tutorial and going through all the hard work for us 🫡 BIG FAN
@@watchbro3319 Yes, for sure! We added 3 volumes to our nginx (see blogpost): nginx-data, nginx-snippets and nginx-letsencrypt. The first time, it's easiest if you simply upload the certificate via the UI. THEN: Nginx Proxy Manager will create a new folder in your nginx-data volume named after the certificate number, e.g. /nginx-data/custom_ssl/npm-2, and there you'll find your two .pem files. So in the future you can just update those via the volume and then trigger a server refresh (you don't even have to restart the container, you can just trigger an nginx config reload inside of the container via script). Here are 2 links: - github.com/NginxProxyManager/nginx-proxy-manager/issues/1618#issuecomment-1203641077 - docs.nginx.com/nginx/admin-guide/basic-functionality/runtime-control/
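A rough sketch of that renewal flow as commands (the container name `nginx`, the certificate folder `npm-2`, and the source path `/mnt/org-certs` are assumptions; yours will differ):

```shell
# Copy the renewed org certs into the folder NPM created for this certificate
cp /mnt/org-certs/fullchain.pem ./nginx-data/custom_ssl/npm-2/fullchain.pem
cp /mnt/org-certs/privkey.pem   ./nginx-data/custom_ssl/npm-2/privkey.pem

# Reload the nginx config inside the running container, without a restart
docker exec nginx nginx -s reload
```

This could then be run from a cron job aligned with the org's renewal interval.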
@@activenode Cool, thanks for such a prompt response. Yeah, I had found the way to mention cert paths, but unfortunately my org only has a subdomain, so NPM uses that. I'll try doing the same setup with plain nginx though... thanks again 🙌 Do you guys have a Discord, though?
Omg, thanks 🙏 I became a subscriber. The 5-minute YouTube versions made just to get views are not worth my time, because I tend to lose too much time afterwards not understanding 70% of the full content. Thank you for giving full guides instead, keep up the great work and give us more of them. I just found your channel. ❤
Thank you very much for the explanation. I struggled for more than 4 hours with the MinIO setup; using the latest MinIO image reported a "network error" on console login. After 4+ hours of struggling and debugging, I realized that the problem was in the configuration of the MinIO environment variables. Currently MINIO_SERVER_URL is deprecated and MINIO_DOMAIN is used to replace it. This modification worked, so hopefully it will be helpful to anyone who sees this video later.
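A minimal sketch of what that change could look like in the MinIO service of a docker-compose.yml. The values are placeholders, and whether `MINIO_SERVER_URL` is actually deprecated in your image version should be checked against the MinIO docs; this only reflects what the comment above reports:

```yaml
minio:
  image: minio/minio
  environment:
    MINIO_ROOT_USER: your-access-key        # placeholder
    MINIO_ROOT_PASSWORD: your-secret-key    # placeholder
    # As reported above: newer images use MINIO_DOMAIN instead of MINIO_SERVER_URL
    MINIO_DOMAIN: s3.example.com            # placeholder domain
  command: server /data --console-address ":9001"
```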
After the third thorough installation, I found out the following: 1. Right after you set the access list in NGINX, you can only manipulate tables, but not storage or users. 2. After the FULL installation by the guide, with Authelia, everything works. 3. supabase-vector doesn't start. 4. supabase-storage & supabase-studio are both in an 'unhealthy' state. 5. Error on sending invitations. But the main functions work. Great job David! Thanks a lot!
OMG dude! I literally wanted to try hosting my own supabase on Hetzner this weekend and you upload this! Thanks for saving me from wasting a ton of time 😅
Some Hint for Studio + Storage: If you use the Studio Dashboard for File Storage and copy the URL of a Storage File, it might append a "wrong" Port. This is a bug within Supabase and I already opened a Pull Request in GitHub for it.
Thanks for the hard work! I just followed your tutorial verbatim over the past couple of days. I realized the hard way that I couldn't host minio/nginx/supabase on the same server, so I tore everything down and rebuilt, excluding MinIO and opting for remote Wasabi S3. A little bit of wasted time, but no worries. Also, I have stopped at the Basic Auth portion and have yet to configure Authelia. Current issues: 1. Auth: user creation is not working from the Dashboard, giving me a "nameserver error". Additionally, using only Basic Auth, an endless loop is created when trying to create a new user, with the Dashboard prompting me to enter Basic Auth credentials endlessly. 2. Nginx Basic Auth: this is mainly an issue with NPM itself, but enabling a "Custom Location" breaks that proxy (at least when dealing with the Supabase dashboard proxy). The temp fix is to add the "location /" block as custom code in the Advanced tab, and it will work as intended. 3. Logging: you seem to have missed this part of the setup in your tutorial. Considering port 4000 is exposed by default for the logging service, it might pose a security issue. Additionally, logs are not working at all within the Dashboard using your setup and the current Supabase build. 4. S3: following your setup, I had linked up to Wasabi S3 storage (it uses the same AWS S3 protocol). It only half works out of the box, although I am still isolating the issue to see if it's an IAM policy issue. My bucket is set to allow my S3 user to access all bucket resources and folders, except Supabase will only ADD files to Wasabi but is unable to delete them. Additionally, manually deleting files from within Wasabi will not reflect those changes inside Supabase, making it unusable for S3 storage in its current out-of-the-box state. Collectively, these issues mean the Supabase self-hosted solution using your tutorial is not development ready, let alone production ready, so I am continuing to troubleshoot these issues.
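The temp fix from point 2 could look roughly like this when pasted into NPM's Advanced tab. The upstream service name and port are assumptions based on this tutorial's Studio setup; adjust to your compose file:

```nginx
# Pasted as custom code in the "Advanced" tab instead of a "Custom Location"
location / {
    proxy_pass http://studio:3000;   # assumed Studio service name/port
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```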
It is unfortunate: Supabase self-hosted had the potential to be the best self-hosted BaaS on the market, but it falls short of actually being a useful product, and it would have been a key primer to get me to sign up for their cloud solution. Although, if a company is so willing to waste this much of my time in hopes I'll sign up for their cloud service, I'll be better satisfied searching their competitors' cloud solutions. I am new to backend development, and as such setting up Supabase has been a terrible experience so far (which activenode's tutorial has made me feel better about). Albeit this was a necessary process to help me understand the difference between companies providing good documentation and those providing bad documentation. I'm confident in saying Supabase seems to suffer from the latter. Being a newbie developer myself (their target demographic), I will seek to avoid Supabase self-hosted (and cloud) at all costs.
Considering point 3: Yeah, it seems like logging is missing indeed, thanks for the hint. I have that on my bucket list of to-dos now, but I'm struggling to keep up with the content right now due to work. Stay tuned. Considering point 4: I can't tell much about this, as it worked wonderfully in my case, as shown. Rest assured I'm going to work more on self-hosted stuff once my book is finalized, but patience is key here :)
@@activenode Appreciate the work! Once it is more stable I will give it a go again for database hosting. I am looking into Directus right now, as their self-hosting is a bit more straightforward since everything is contained within 1 service in the .yml file, and they offer the same Supabase features + a lot of additional CMS-tuned features. If you make videos on Directus + Supabase DB for a secure/production self-hosting tutorial, I will watch those as well! Although, considering Directus supports PostgreSQL, I think a Supabase/Directus combo might be overkill.
God knows how much it takes to compress those 45 hours of work and research into a 35-minute guide for your audience. Definitely earned this like, comment and subscriber =) Keep up the good work, we are very thankful.
Hehe, I was just reading their self-hosting guide last night and wanted to do the setup on my Contabo server. What timing, this came just right. Thank you for the tutorial, and for saving the time as well 💓💓
Hey activenode, thank you for sharing your hard work, it's the best tutorial I found. I strongly believe self-hosted Supabase is a good approach, mainly for EU-based projects and GDPR compliance. Even if Supabase seems to be working on this subject (with their DPA, TIA, Supabase Inc. entity…), not everything is very clear there. Supabase Docker is the first approach to self-hosting; a K8s infrastructure would be more suitable for scalability and maintainability, but the community projects for K8s seem outdated. The only way I have currently found to self-host Supabase using Docker in a few clicks is elestio's services, which are GDPR compliant too. Another big point is that elestio allows you to bring your own VM with their services. But the dark side in elestio, and in your video, is IMHO that the DB should be decoupled from the middleware (Studio, Kong and the sub-APIs) for several reasons: easier middleware updates, backup rule differentiation, security, etc. Same thing for the storage layer with S3-compatible storage, as you suggest with MinIO. And the same for Analytics. It would be great if we could use a DBaaS for the PostgreSQL, but unfortunately no EU hoster (Scaleway, AWS, OVH) allows the "not-certified" pg extensions that Supabase is built on. Only AWS RDS allows pg_tle for custom extensions like pgjwt, but it's not yet sufficient. Another point: for a production env, I don't like the idea that the PostgreSQL port is open (and I don't know if a fail2ban filter for Postgres is embedded in the Docker image). But for migration via the Supabase CLI (using --db-url), it would be interesting to use something like a bastion and SSH tunneling. Same idea for Studio (to be available only from another IP/domain). Sorry for the long post, but let me know if I can help you with something about these points, to think together and with the community ;) About your book, when is it due?
I will soon release basically this, with all infrastructure as code using Terraform and minimal instructions. Thanks so much for this tutorial. Really helpful. I am able to stand on the shoulder of giants!
Hey there! Sounds cool. You can also take a grain of salt from here, I think they did so too, so maybe it helps getting the very best out of your own solution: docs.digitalocean.com/developer-center/hosting-supabase-on-digitalocean/ . Tell me when it's done and post a link, would love to check it out! 🎉
Awesome explanation and tutorial thank you! I haven’t tried self-hosting yet but I suspect I’ll be moving there before too long so this will be a wonderful help!!
Hi! This guide is super useful and your style is really sophisticated! Since I am new to self-hosting and thus embody all the common characteristics of a fully fledged n00b, I would like to ask one question about overall security: the internet out there is basically a very scary and dangerous place, so would it be suitable to disable access to Dashboard/Proxy-Manager/MinIO entirely and only expose Auth/Storage/API? If one needed to access the Dashboard/Proxy-Manager, one could use key-based SSH access (set on some high 5xxx port)? Regarding the security of publicly available services, I would prefer attacking hummingbirds with cluster bombs rather than shooting sparrows with cannons.
@@activenode It's a little sad though how complex it is to setup. The auth for OpenSearch. Plus, it only supports RSA. 2024 and people still use RSA . . .
How do you upgrade the version of the backend services if supabase pushes new changes? Resolve merge conflicts whenever you pull from the upstream supabase repo?
Dropping another thank you as I have been continuing to come back to this video to help me out. Btw, have you used Authelia in the past for implementing more complicated auth systems - if so would you recommend it?
Hey, thanks! Could you specify what you mean by "more complicated auth systems"? Did you mean adding other Auth Methods within Authelia or using Authelia for something like user management? If you constrain the question I can answer it :)
@@activenode Hey I've tried to reply twice already but somehow my comment disappeared after trying to edit it. I was curious to hear your opinion about Authelia or potentially other auth libs for the following use-case: social login with scoped jwts (so basically a simplified version of auth0 but self hosted). I also wanted to ask if you have had any experience with Hetzner load balancers? Is my assumption correct that if I used a load balancer then I wouldn't need nginx on the server directly?
Hi, nice video. A video on how to integrate authentication, for example through Google, would be greatly appreciated, especially in the case of self-hosted solutions.
I was so happy to see this guide, but I'm still struggling with MinIO. I cannot log in to the dashboard. So I dug around for an hour until I gave up... for now ;-) But I can see how much effort it must have taken to create this tutorial; it is everything but easy.
@@activenode Thanks for your answer. I did the same as you did, but I struggled with the login to MinIO. It seems to be so complicated. I can see that the credentials are correct on the Docker container, but it is simply not working. I read a lot about it and found a hint involving certificates?! I stopped and set it up without MinIO, and it is working. The frontend you used instead of Basic Auth is really great. Thanks for saving me hours of trying with this guide.
@@DannyGerst I also have the same issue: an "Invalid Login" error from MinIO. Trying to solve it. I am using exactly the same credentials that I provided in the docker-compose.yml file. @activenode thanks for the tutorial! Grab a coffee on me ☕
Hello, great video. I have bought your book and am going to start reading it. Quick question: can the locally installed version of Supabase function like the online paid version? Is it possible to add the settings button and its functionality to a local Supabase instance?
This is an amazing tutorial. Thank you so much. Do you have any advice on data replication for MinIO and Supabase, in case the server gets corrupted for any reason?
Best video on this topic on YouTube! Any chance you'd do a video where you spin up a server with Typesense and make the proper connections to the Supabase server to use it as a search engine for content in the Supabase DB?
I think following the Supabase docs for self-hosting is much simpler. After watching this video, I checked the docs and there were just 4 commands. I thought there was no way these 4 commands would run self-hosted Supabase. I was so sure that it wouldn't work that I didn't even change the env file, and after running the docker compose up command, voilà, it is working.
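For reference, the quick-start commands being referred to are roughly the following (check the current official self-hosting docs, as repository paths and flags can change between releases):

```shell
# Clone only the latest revision of the Supabase repo
git clone --depth 1 https://github.com/supabase/supabase

# Enter the docker setup and copy the example environment file
cd supabase/docker
cp .env.example .env

# Start the whole stack in the background
docker compose up -d
```

As the reply below points out, this spins everything up with default credentials, no HTTPS and no domains, so it is a starting point rather than a production setup.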
Sure, if that's what you want, it will definitely spin up a Supabase with basic auth and no HTTPS and no domains. So in its default state, especially with an unchanged env, the Docker spin-up from the docs is not meant for production.
Omg, thank you man! This is what I needed :D You pulled it off. Subscribed. Is there a way to configure self-hosted Supabase with an external PostgreSQL database?
Thanks so much. Considering an external DB: theoretically, it's doable. It's on my long bucket list of things I want to evaluate. In one week there's also Supabase Launch Week, so I'm waiting to see if there are any major changes that would make external databases easier to implement.
Man, this is great, thanks a lot, just subscribed. I want to start with Supabase as a newbie programmer and wanted exactly this video. Are there any limitations to the self-hosting features compared with the paid cloud options? I couldn't find a feature comparison table anywhere.
I saw a lot of videos about Podman as a Docker replacement, but not one video about hosting Supabase with Podman. Why is that? It would be nice if somebody made a video on it.
Thank you so much for this tutorial, as it answers most questions, but it feels incomplete just by a little. At the end it feels like the video ended without us being able to access the other parts of Supabase. I got S3 and Authelia working and I'm able to access the Studio, but there's no way to get an API key from Supabase or access any of its other dashboards, it seems? The way you explained it sounded like there was a site AND a studio, but we were only able to set up the Studio? We were able to set up the API on port 8000, but you showed a SITE_URL in the .env file on port 3000 which is inaccessible; I can't figure out what was meant to be there. Is that where we make API keys or something?
Thank you for all the work you've put into this though; I was able to follow it right up until the end and get 99.9% of everything working perfectly, thank you!
I figured out my own answer. API keys are generated by you and supplied in the Docker configuration. That's the one and only API key. So you can't really have different API keys per app, as most databases would allow, but it works really well. I just feel like that specific part might be a security flaw rather than a feature... or maybe you're supposed to make a completely separate instance of Supabase per application and just have a bunch of Supabases running in tandem? It's really unclear what that design is for.
Yes, it's one per instance. Supabase.com itself is an infrastructure deploying ONE instance per project; that's why you get different URLs. So if you want to deploy more, you can, but you need to deploy all services again for each project.
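To illustrate how those keys relate to the instance: the anon and service_role API keys in the self-hosted .env are just HS256-signed JWTs whose signature uses JWT_SECRET, so they can be regenerated with nothing but the standard library. The secret and timestamps below are placeholders, not values from the tutorial:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT format requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_supabase_key(role: str, jwt_secret: str) -> str:
    """Build an HS256-signed JWT carrying the given Supabase role claim."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({
        "role": role,          # "anon" or "service_role"
        "iss": "supabase",
        "iat": 1700000000,     # placeholder issued-at timestamp
        "exp": 2000000000,     # placeholder expiry timestamp
    }).encode())
    signing_input = f"{header}.{payload}"
    signature = hmac.new(jwt_secret.encode(), signing_input.encode(),
                         hashlib.sha256).digest()
    return f"{signing_input}.{b64url(signature)}"

# Placeholder secret - must match JWT_SECRET in your .env for the keys to work
ANON_KEY = make_supabase_key("anon", "your-super-secret-jwt-token")
SERVICE_KEY = make_supabase_key("service_role", "your-super-secret-jwt-token")
```

This also makes the "one key per instance" point concrete: any key signed with that instance's JWT_SECRET is valid, so separate projects need separate instances (and separate secrets).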
After adding the domain and creating the SSL certificate for the Nginx Proxy Manager itself, you should go back into your docker-compose file and remove port 81 from being exposed. By default, you exposed ports 81, 80 and 443, and since you are now exposing the proxy manager via ports 80 and 443 through the domain, you don't need port 81 anymore and should remove it.
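A sketch of the resulting ports section for the proxy manager service (the service name and image tag are assumptions; match them to your own compose file):

```yaml
nginx:
  image: jc21/nginx-proxy-manager:latest
  ports:
    - "80:80"     # HTTP, redirected to HTTPS
    - "443:443"   # HTTPS
    # - "81:81"   # Admin UI - remove once it's proxied via your own domain
```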
I pointed 2 IP addresses (S3 storage & Supabase) from Cloudflare, without DNS proxying, to my domain, and I'm still getting an internal error when I try adding S3 to the nginx proxy list.
Well, you have to take care of it. So whilst the Supabase paid version allows you to request support for upgrading to the newest version, with self-hosted you are on your own. Also, you are obviously on your own in any case of problems. The self-hosted version doesn't do auto-backups, so, as stated in my video, use something like the Hetzner backups to achieve that. For most people the differences are negligible, but e.g. self-hosted Edge Functions will not be distributed across multiple servers (obviously, because you self-host them). The supabase.com Edge Functions use deno.com/deploy and hence can distribute across the world. The upside of self-hosting: you can choose a cheap but fast server close to your location and be independent from any issues, like the Cloudflare problems that happened recently. With GDPR regulations in place I'd recommend self-hosting, as you wouldn't be 100% safe with supabase.com. You're in full control. I love both, and I have both self-hosted projects as well as projects on supabase.com.
Decoupling the database / scaling is on my bucket list for videos, but don't expect it anytime soon, sorry. YouTube is still a hobby project and it needs a lot of time. But at some point I'll definitely try to make it.
Hey! Would you say the server backups on Hetzner are sufficient for the database or would you use a different system/location for backups? I don’t want data to get lost and I’m not sure how good Hetzners backup systems are.
Hey there! Personally, I would. I have high trust in and good experience with Hetzner for more than a decade. If you enable backups, you get trustworthy backups. However, from a paranoid consultancy perspective, time has shown that huge companies (e.g. GitLab) lose data too. So if your business were a super-high-risk business, with e.g. multiple clients depending on you and paying you, then I'd consider using multiple backup systems / locations. Not an easy question to answer generically.
@@activenode Thanks! I went with the paranoid consultancy perspective and added another backup system on top that dumps the database to the Google Cloud.
With Authelia, definitely. There's often a misunderstanding of what Supabase is, which leads to thinking Supabase has "less" features when self-hosted. That's not really true. It's just that some parts aren't part of the service Supabase but of the infrastructure, and that is what supabase.com does: they basically implemented a wrapper to manage multiple Supabases and, on top of that, added their own auth (which is not the same as using Supabase Auth). And in this tutorial I do the same with Authelia :) You can check their docs to see how to add more users. Cheers and have a great day!
Thanks, much appreciated, always looking for feedback! Lovely idea, multi-faceted answer: 1. I actually wanted to go with MinIO to keep it as generic as possible, so it works whenever/wherever (and then people can choose). MinIO has built-in replication, so you could use like 10 copies across multiple different hosters for safety over 9000 🚀😎 2. The Hetzner Object Storage doesn't state anything about being S3 compatible, which very much indicates that it isn't, which would be lovely though. But you probably meant: you could use the Storage Box as a volume and then just not use S3 (as shown in the video) but volume mounts in the docker-compose. Then the answer is basically answer 1. In most of my cases the instance would do just fine with "only" 40GB of space. But when you have a production page with massive file usage on your Supabase Storage, for whatever reason, then you will need to use Storage Boxes on Hetzner, yes. Even more so, in that case I would probably go as far as running an instance with MinIO where MinIO uses the mount from the Storage Box - this would give massive flexibility. Or I would go ahead and look for a different hoster for the storage part only. Good thoughts, hope the answer wasn't too long.
@@activenode Appreciate your answer 👌 I will give installing Supabase on Contabo instances a try, using their compatible S3 storage buckets. I tried to sign up for Hetzner but they didn't accept me, as I am Egyptian 😅 So I will try Contabo. Thanks for your helpful tutorials 🌹❤️
Hey man, I really appreciate you, because after 1 month I found one of the best videos about self-hosted Supabase. You are perfect, man. I appreciate you again and again, God bless you.
@@activenode In the supabase-storage container, the message is: (node:1) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023.
Hi! Thanks for this tutorial. Very well structured, and I could easily understand everything. Unfortunately, right at the end I cannot get the dashboard working correctly. In Nginx Proxy Manager I did not add any custom locations (as I think you meant to say in your blog) and instead only pasted the custom config. When entering the dashboard, basic authentication does not work anymore; I can access everything without any login. (When I do it the 'old' way with custom locations, this works.) Either way, in the dashboard I can neither create new users (Failed to create user: invalid JWT: unable to parse or verify signature, signature is invalid) nor access Storage (400 GET /storage/v1/bucket). Do you have an idea what's going wrong here? I generated my JWT secret the way you suggested in your blog.
Hey there, happy to help. First things first: without Authelia, you only want htaccess for Studio. I'm actually showing how to do that in the video. The original htaccess protection we removed by removing Studio from the Kong routing. Now we route with the proxy manager, and in the Dashboard part of the video I show how to do the basic protection without Authelia. Have another look; this should really solve the protection problem, as it's covered. Considering the second issue: that's rather unusual. When you changed the JWT secret before setting everything up, it should work as expected. The only case I can imagine where it does not work is when you change the JWT secret after having spun everything up, because then there would be a mismatch between the JWT the system was set up with (the standard one) and the one you changed to. Let me know if you need further assistance.
@@activenode Sorry for getting back to you so late. I tried to set it up again from the start and ran into the same issues. Basic Auth stops working as soon as I add the Advanced Configuration (4.2 in your blog). Storage responds with "unable to get owner", and I still get "Failed to create user: invalid JWT" when trying to create users, even though I definitely changed the secret before setting up my server. Any idea what I could do to proceed?
I respect the process that you had to go through to get this done, but there are quite a few errors in the tutorial by now, since Supabase has a lot of new features and such nowadays. It would be nice for you to do a new tutorial covering these features.
@@activenode Thank you for the quick response, I'm getting errors when using docker compose pull stating that "Additional property nginx is not allowed."
@@activenode So my issue with this is because I was trying to run nginx proxy manager on a server that already had nginx proxy manager setup for s3 storage.
@@undiscoveredshorts Alright, then why don't you reuse that proxy manager? You could also rename it, but then you'd have two proxy managers, which cannot listen on the same port, so I wouldn't do that. Or, if S3 is already running, you could just get rid of the existing proxy manager and add the S3 Docker image with the existing volume to the Supabase docker-compose.yml. Last but certainly not least, you can reuse the existing proxy manager by creating a Docker network like "sharednginx" (docker network create sharednginx), adding your existing nginx proxy manager to it, and adding the external networks definition to the Supabase docker-compose.
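The shared-network approach from that last option could be sketched like this in the Supabase docker-compose.yml. The network name `sharednginx` comes from the comment above; the service name `studio` is an example, and each service the proxy should reach would need the same treatment:

```yaml
# Assumes the network was created once beforehand:
#   docker network create sharednginx
# and that the existing proxy manager container was attached to it.
services:
  studio:
    networks:
      - default       # keep the stack's internal network
      - sharednginx   # also join the network the proxy manager is on

networks:
  sharednginx:
    external: true    # network exists outside this compose file
```

The proxy manager can then reach `studio` by service name over the shared network without the two stacks being merged.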
Hey David, one of the best tutorials I have seen on YouTube. To the point, unpretentious, practical. I have now gone through the video, back & forth, probably 50 times. I have set up everything and it seems to work, but with one glitch. I have set up MinIO on Hetzner and configured it. I can create a folder in Studio and see the folder. However, the folder/bucket is not visible in the MinIO dashboard inside the supabase bucket. I can upload a file in the MinIO dashboard and it works fine. When I try uploading the file from Studio, it just shows one file uploading and stays there. Any suggestions on how to resolve this?
Hm. Have you tried looking at the logs (docker compose logs)? What do they say at the exact moment when uploading fails? Do you have the bucket created in MinIO, namely supabase in my case? Please note: creating folders in Supabase is kind of different from uploading actual files. So it might seem that it "works", but in fact it does not. I'd assume there's no connection to MinIO at all in your case.
@@activenode Thanks for your quick response. I have checked the logs already but failed to find anything. Also, the bucket is named supabase, as in your case. I can send you a link to the docker-compose file, or the file itself, by email, since there is no way to send a direct message on YouTube. I tried looking for your email on your website but couldn't find one. Let me know the best way to reach you. Also, thank you so much for helping out.
I followed along with the tutorial but got stuck at the Nginx Proxy Manager part. I got the proxy working and secured the proxy manager itself, but when I got to the Supabase API, I completed it and it said 502 Bad Gateway, openresty. I tried a lot of different stuff but I couldn't get it to work. I tried to set up the Studio too, but it's the same situation with that. Docker says the containers are running, and I tried restarting them numerous times.
I had the same issue. The problem was that I had the Nginx Proxy Manager in a different Docker network than the Supabase Studio and API. Make sure you're running the proxy manager for Supabase in the same Docker network as the Supabase stuff.
@activenode Thanks for the great tutorial. Currently I get the following error if I try to upload a file to Storage: "Failed to upload example.pdf: headers must have required property 'authorization'". Any ideas?
Have you checked my linked article again to confirm everything is correctly set up? There are some updates in the video description on how to properly forward things.
Let me put it like that: usually, you don't want to self-host. Even if you later want to, you can migrate easily with my other video th-cam.com/video/nyX_EygplXQ/w-d-xo.html. You are right, it can be more cost-effective, especially because of the limitations of services such as Realtime, also just recently discussed here: www.reddit.com/r/Supabase/comments/15f6h0j/what_do_you_want_to_know_about_supabase_for_real/khehosj/?context=8&depth=9 . The reasons for self-hosting can be multiple: hitting Realtime limits (10k is a lot, but not a lot for e.g. WhatsApp with millions of users), needing it for legal reasons (EU data protection, as right now Supabase is not 100% GDPR compliant), etc. But I always recommend starting off with the supabase.com variant first, to avoid the infrastructure costs at the beginning, especially for lone coders. If you're in a big team and have infra people available, just discuss it within your team. Otherwise, maintaining it mostly will not be cost-effective (in the beginning), as the maintenance itself costs more time than just paying for SaaS. Cheers!
Thank you so much for your instructions!!! But is there anyone who followed them and can use the "realtime" database feature without issues? When I subscribe to a channel or DB change, it always shows a WebSocket connection failure in the browser console. I know maybe this is not a good place to ask questions, but I have tried everything I can to solve it and still have this issue. If anyone can use this function, please let me know, so I at least know it is solvable! Thank you all for the help!
Hey man, unusual. The realtime function is actually definitely one I checked, and it worked for me. Are you sure you activated WebSocket support in your proxy settings?
@@activenode Wow, thank you so much for replying to me! Yes, I do have the WebSocket support toggle on in the proxy for the API. But I am not sure if it is truly activated, since I found the access list setting also has no effect on the proxy (no password prompt when accessing the URL for the dashboard). But thank you for letting me know you tested the realtime function. I will start over again to see if I can solve it!
When I try to connect to the dashboard I keep getting secure connection failed. I think this has to do with something in the .env but I am not sure. Any help would be appreciated!
Very cool. How can I migrate my project from Supabase cloud to the self-hosted version? Also, what is the cost of all this, and (I'm new to this) do I need to change my DNS records for this to work?
Yeah. All of the domains used in this video have to point to the IP of the server where you install it. So yes, DNS changes are required. Migration is mainly done by exporting your whole database first with pg_dump. But if you would say you're rather a beginner in this field, I would probably recommend exporting every table one by one (if you don't have like 1000 tables). So recreate your structure in your self-hosted instance and then just copy over the data as a naive approach. But maybe this helps: ironeko.com/posts/creating-a-local-backup-of-a-supabase-database
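The pg_dump route could be sketched like this. The connection strings are placeholders (hostnames, passwords and the project ref are assumptions), and flags like `--no-owner` may or may not fit your setup:

```shell
# Dump schema + data from the hosted project (placeholder connection string)
pg_dump "postgresql://postgres:password@db.projectref.supabase.co:5432/postgres" \
  --no-owner --no-privileges -f backup.sql

# Restore into the self-hosted instance (placeholder connection string)
psql "postgresql://postgres:password@your-server:5432/postgres" -f backup.sql
```

Schema objects that Supabase manages itself (auth, storage schemas) may conflict on restore, which is one reason the table-by-table approach can be easier for beginners.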
@@activenode Thanks - I've got it running; however, the API isn't working at all, just lots of errors. I'll see what I can put together for you. Thanks!
@@nocodecreative It won't let you post a link? Let me try that: supabase.com . I mean if you remove the at least it should allow you to post the Text right?
I want to create applications and websites on self-hosted servers to reduce cost, but when using S3 or Cloudinary the cost is large, so what is the benefit of a private server if the rest of the technologies end up costing the same?? Do the hosts not provide good handling of files and videos?
That's hard to answer without knowing your target projects. Honestly, if you're only looking at costs, $25 on supabase.com with 100GB of storage included is a pretty fair price, allowing you to be careless about maintenance etc. So the question is rather: what do you need? How much storage does each project realistically need? After talking to the CEO of Supabase, he said you might even be faster without using S3 at all, as there is no additional server - but then of course you need more performance on the server where you run it. On Hetzner you get the CAX31, for example, for around 15 bucks with 160GB storage included - which is a little bit cheaper, but then again: if you're only looking at the price, I'd happily pay 10 bucks more and have the SaaS version on supabase.com. So it depends. Self-hosting also doesn't necessarily reduce costs, because obviously you have to run maintenance and updates, which costs time, and time is money.
I am trying to build applications like TikTok that depend on swiping through content, and I need good performance at a small cost. I was thinking about a Contabo VPS, but I do not understand why applications upload the project to one server and the media to another server, when VPS prices are very cheap and give you powerful capabilities. So do we need dedicated video servers? The reason for trying to stay away from Supabase is that 25 in our country is a large price due to inflation, so can I rely on a VPS and get a good video-playing experience?
Hey man. Reasonable performance at very low cost is possible without a separate storage/streaming server.

> but I do not understand why the applications upload the project to one server and the media to another server, as the prices of VPS are very cheap and make it possible for you to obtain the capabilities.

As I said in the video, you don't have to separate! Grabbing a file is probably faster as the server doesn't have to do another DNS lookup. The reasons why many people do this are multiple, but let me state the most important:

1. S3 has built-in replica support. So people can just scale another 5 servers of the same S3 and have 4 backup servers which they can switch to.
2. Storage servers need less overall power if all they do is storage and nothing else. Hence, you get more storage at a cheaper price (as you mentioned Contabo: Contabo has Storage VPS which have less power than normal VPS but a bigger hard drive).
3. Your power on a system is limited by all means (CPU, read-write on the hard drive, bandwidth, etc.). So whilst your VPS is busy grabbing and delivering files, you'll have less performance on that same server which also runs the application (but for most of my projects, which are not media-focused, this is totally fine).

> The reason for trying to stay away from supabase is that 25 in our country is a large price due to inflation, so can I rely on VPS and get a good video playing experience?

You have many, many ways of ensuring a good video-playing experience: compressing videos without noticeable quality loss, setting the cache headers right, or using free CDNs (www.cloudflare.com/application-services/products/cdn/#CDN-Page-Pricing-AS). So it's really not all about "server power". There's more about this on the internet. I also think there's this flaw in the way of thinking that many people have in programming: "I have to find the solution that will still be good if I grow". But in fact, you don't. Let me explain.
Considering this fact you have given, I am quite sure that your project is not making money at this point, because you have no income on the project that would cover 25 bucks. This implies that you have no paying users right now whatsoever. So I might be right in assuming that right now there's no such thing as "heavy load on the server"; you're only thinking about solutions in terms of "what if". I can totally understand that fear, but this fear is not helping the development of your project. So here's my proposal: start with the cheapest solution you can get, e.g. one VPS, develop your application, optimize where you can (as I said above), and once you feel like you're hitting a limit, it might be because your app has grown and hence generates money, and from there you can take that money and scale to more servers / bigger servers / supabase.com .
Hey @activenode great stuff ! was wondering how I could have a on-prem setup (without cloud like hetzner) but connect to local s3 minio - is this actually possible?
What exactly do you refer to with on-prem? supabase.com plus own S3 ? Or another provider setting it up for you and you want to configure S3? Does that reflect your question?
@@Ali-m2k6s That should work. I mean you can connect Supabase with the given configuration with any S3 compatible service. Hope I understood you correctly.
Does this guide still work for anyone? I've tried without the S3 part and there is no storage in the Supabase storage, and I've tried the S3 portion of the guide and it still fails to create a bucket in Supabase Studio. I noticed a difference in the .yml file: `# To use S3 backed storage: docker compose -f docker-compose.yml -f docker-compose.s3.yml up` — what is this?
Hey there! It all works for me, except one thing: auth. It looks like the authelia_session cookie isn't sent when I request /auth/v1/users (that's what happens when you click "Create new user" in the dashboard), therefore the request is unauthenticated and it gets redirected to the login page. I've followed the additions in the description, but that doesn't change anything. Any idea on why that happens? I feel like some additional configuration for the auth endpoint is missing.
Okay so what I found out is that in this specific request the Request Header "Cookie" isn't passed (which contains the authelia session) and hence authelia wants to redirect - breaking the request. So far. I'll keep you posted. This doesn't harm your api however, it's only annoying in the studio.
I found the issue and it's rooted in the Source Code of Supabase here: github.com/supabase/supabase/blame/072222523afe612d575c2ff7e73e514692f545af/studio/data/auth/user-create-mutation.ts#L50 I created a fix here: github.com/supabase/supabase/pull/18156
@@activenode Yup, that's exactly what I noticed as well. The authelia session cookie is missing for that request, and I'm not sure why. It works for all the other routes.
Hello sir, everything is working now after 3 to 4 days of work, but one issue still remains: at 23:09 of the video, where you set up the storage. In my case, without the custom location the dashboard is working; if I configure it like you did, my dashboard is not working. When the dashboard is working, storage is not working 😂😂 Please tell me the solution.
Hello, I have just installed from the Supabase docs, but when I upload a file to storage or create a folder it does not work. I can't create a folder, and uploads are stuck at 0%. Could you tell me why this happens?
Hey man, I'm seeing some people having issues like that, with others it does not happen. Have you checked the blog article linked in the comment as well? After the video I updated a few things there
@@activenode It's working, thank you! But I can't "Get URL" when I click on an image to get the public URL of a file that was uploaded to storage. Could you please help with this final problem?
Legit question: does the self-hosted version support creating users via Studio Authentication? I thought I was breaking some configuration, but even with a vanilla install (without any config changes) it always crashes with "cannot fetch".
Yes, sure it does! This should work. I haven't tried it in a while as I'm currently in the process of writing the Supabase book (supa.guide), but this does work (right now I'm doing this with k8s). Have you checked the linked blogpost?
Yes, by separating through Docker, which this video already mostly does. If you are not familiar with Docker, the easiest way to separate those container names is probably docker compose --project-name YOUR_OTHER_PROJECT. But then you also need to create a shared Docker network, remove the Proxy Manager from both docker compose files, have it started in its own docker compose file (alone), and add the shared network to it. So tl;dr: that isn't really a Supabase infrastructure question but more a Docker question, which goes beyond the scope of this video.
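A rough sketch of the approach outlined above. The project names, network name and file paths are illustrative placeholders:

```shell
# 1. Start each Supabase stack under its own project name so container names don't clash:
docker compose --project-name project-a -f /srv/project-a/docker-compose.yml up -d
docker compose --project-name project-b -f /srv/project-b/docker-compose.yml up -d

# 2. Create one shared external network that the standalone proxy can use
#    to reach containers from both stacks:
docker network create shared-proxy

# 3. In the proxy's own, separate docker-compose.yml, declare that network as external:
#      networks:
#        shared-proxy:
#          external: true
#    and attach it to the proxy service. Then start the proxy alone:
docker compose --project-name proxy -f /srv/proxy/docker-compose.yml up -d
```

Both app stacks would also need the `shared-proxy` network attached to the services the proxy should reach.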
Hey man, thanks for the request. Can you specify what you mean? Connecting OpenID to protect Supabase Instance or connecting OpenID to be used by the users in your Supabase Instance?
@@activenode I’ve been trying to use a tool like keycloak which allows me to connect to our ldap and then provides an openid connector for your application. I’ve been having issues with setting up the various redirect urls that the openid connector requires.
Hm, not really. Depends on "easy". Coolify has its own constraints. I use coolify as well but it's other things to adapt then. So I wouldn't necessarily say it is easier
They are active! Here's how: in every project, no matter if Cloud, self-hosted or local, the URL to an Edge Function is API_URL/functions/v1/function_name . Now on self-hosted, you need a way to get the functions there, as `supabase functions deploy` only works with the cloud. This is done with a Docker volume, which by default is found in the docker-compose.yml under functions.volumes (usually, on the server where you execute the docker-compose.yml, it's ./volumes/functions)
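A sketch of that volume-based deployment, assuming the default self-hosted layout; the function name "hello" and the domain are made up for illustration:

```shell
# 1. Place your function code into the mounted volume on the server
#    (default mount per the self-hosted docker-compose.yml: ./volumes/functions):
mkdir -p ./volumes/functions/hello
cp /path/to/local/hello/index.ts ./volumes/functions/hello/index.ts

# 2. Restart the functions service so it picks up the new code:
docker compose restart functions

# 3. Call it like any other Edge Function, via API_URL/functions/v1/function_name:
curl -X POST "https://api.example.com/functions/v1/hello" \
  -H "Authorization: Bearer YOUR_ANON_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "world"}'
```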
Hello sir, I have a problem with file types for upload to storage: all file types upload except media files (jpg, png, or video). I tried to create a policy but it was not useful. Could you please guide me?
@@activenode Hey sir, actually I cannot create a user in the dashboard, and I get this error: Failed to create user: An error has occurred: Load failed, error 422
@@sabahsaeedi9551 Thanks for getting back. Is it just the Dashboard that has that problem? Can you connect via the Supabase client and try creating a user there? That problem of the dashboard failing to create a user isn't new; I reported on it in the comments. Have you checked my blogpost as well?
To rephrase that: I fixed it now, but Supabase storage doesn't work "because it violates the following Content Security Policy directive: "default-src 'self'"". Did you by any chance try it with storage? Thanks for your awesome guide and time!
@@elmagnificent8550 haha my mistake, here's the US one but you can search whatever amazon of your country for "Supabase book" www.amazon.com/Building-Production-Grade-Applications-Supabase-comprehensive/dp/1837630682/
That could be a "religious" question. I'd say the main difference is the Community. At the end of the day you will probably get along with both, for most applications. What I like: the Supabase developers, including Paul (CEO) himself, contribute to projects such as pgvector, which they could keep proprietary to hold onto some "golden nuggets", but they don't. So they really are strongly committed to Open Source.
Thanks for pointing that out. As always, that's in the eye of the beholder / target group. As opposed to Traefik, nginx is a little bit more known and well-documented, and hence configuring NPM manually is as easy as googling for nginx solutions; Nginx Proxy Manager is a widely spread solution. That's why I chose it for the tutorial. From the perspective of someone loving infra-as-code, I can definitely see where you are coming from. What do you love most about Traefik? And before having used Traefik, what did you use in the older times? Cheers
I hope soon you can afford an AC... your contents are gem! thanks for the tutorial even though I didn't understand a lot but planning to look into the tutorial again in the future with some basic knowledge of docker and other things you've used.!! 🥲🥲
Have you tried self-hosting yet? What are you missing?
I have tried, and I'm missing the part about using my organisation's custom SSL cert paths. In nginx I use the conf file where I can mention the path (since the certs get renewed by the org at certain intervals for all sites on a mounted path), but here in Nginx Proxy Manager I have to manually update by uploading the certs in the UI. Is there a way to achieve this?
Btw, thanks for making such an easy tutorial and going through all the hard work for us 🫡 BIG FAN
@@watchbro3319 Yes, for sure! We added 3 volumes (see blogpost) to our nginx one: nginx-data, nginx-snippets and nginx-letsencrypt.
Now: the first time, it's easiest if you simply upload it via the UI. THEN: Nginx Proxy Manager will create a new folder in your nginx-data volume named after the certificate number, e.g. /nginx-data/custom_ssl/npm-2, and here you'll find your two .pem files. So in the future you can just update those via the volume and then trigger a server refresh (you don't even have to restart the container; you can just trigger an nginx config reload inside of the container via script).
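That in-container reload could look roughly like this; the container name is a placeholder, check yours with `docker ps`:

```shell
# After replacing the .pem files in the nginx-data volume, validate and reload
# the nginx config without restarting the container:
docker exec nginx-proxy-manager nginx -t         # test the configuration first
docker exec nginx-proxy-manager nginx -s reload  # graceful reload, no downtime
```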
Here are 2 links:
- github.com/NginxProxyManager/nginx-proxy-manager/issues/1618#issuecomment-1203641077
- docs.nginx.com/nginx/admin-guide/basic-functionality/runtime-control/
@@activenode Cool, thanks for such a prompt response. Yeah, I had found the way to mention cert paths, but unfortunately my org has only a subdomain, so NPM uses that. I'll try doing the same setup with plain nginx though. Thanks again 🙌 Do you guys have a Discord, though?
I'm missing Ansible playbook script to deploy this in one go :D
No, but that's why I'm here, and I'm confused about the MinIO and Docker parts as I don't have any experience with Docker, or even know what MinIO is 🥲🥲
Omg thanks 🙏 I became a subscriber. Anyone giving just 5-minute YouTube versions to get views is not worth my time, because I tend to lose too much time afterward not understanding 70% of the full content. Thank you, keep up the great work, and give us more full guides. I just found your channel. ❤
That's what I do it for. Thanks mate.
Thank you very much for the explanation. I struggled for more than 4 hours with the MinIO setup; using the latest MinIO image reported a "network error" on console login. After 4+ hours of struggling and debugging, I realized that the problem was in the configuration of the MinIO environment variables. Currently MINIO_SERVER_URL is deprecated and MINIO_DOMAIN is used to replace it. This modification worked, so hopefully it will be helpful to anyone who sees this video later.
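For orientation, the change the commenter describes might look like this in the compose file. All names and values here are illustrative, and the deprecation claim is the commenter's finding, not verified against the video's setup:

```yaml
minio:
  image: minio/minio
  environment:
    MINIO_ROOT_USER: your-access-key
    MINIO_ROOT_PASSWORD: your-secret-key
    MINIO_DOMAIN: minio.example.com   # per the comment above, replaces the deprecated MINIO_SERVER_URL
```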
thanks for sharing this! I was struggling with the exact same issue.
After the third thorough installation, I found out the following:
1. Right after you set the access list in NGINX, you can only manipulate tables, but not storage nor users.
2. After the FULL installation per the guide, with Authelia, everything works.
3. supabase-vector doesn't start.
4. supabase-storage & supabase-studio are both in an 'unhealthy' state.
5. Error on sending invitation.
But the main functions work. Great job David! Thanks a lot!
Watching this for 100th time :) , pls don't delete this video. Setting up my 3rd supabase site.
I won’t! No worries
OMG dude! I literally wanted to try hosting my own supabase on Hetzner this weekend and you upload this! Thanks for saving me from wasting a ton of time 😅
🎉
Some Hint for Studio + Storage:
If you use the Studio Dashboard for File Storage and copy the URL of a Storage File, it might append a "wrong" Port. This is a bug within Supabase and I already opened a Pull Request in GitHub for it.
github.com/supabase/supabase/pull/17587
Thanks for the hard work! I just followed your tutorial verbatim over the past couple of days. I realized the hard way that I couldn't host minio/nginx/supabase on the same server, so I tore everything down and rebuilt, excluding minio and opting for remote Wasabi S3. Little bit of wasted time, but no worries
Also, I have stopped at the Basic Auth portion and have yet to configure Authelia.
Current Issues:
1. Auth: user creation is not working from the Dashboard, giving me a "nameserver error". Additionally, using only Basic Auth, an endless loop is created when trying to create a new user, with the Dashboard prompting me to enter Basic Auth credentials endlessly.
2. Nginx Basic Auth: this is mainly an issue with NPM itself, but enabling a "Custom Location" breaks that proxy (at least when dealing with the Supabase dashboard proxy). The temp fix is to add the "location /" as custom code in the Advanced tab and it will work as intended.
3. Logging: you seem to have missed this part of the setup in your tutorial. Considering port 4000 is exposed by default for the logging service, it might pose a security issue. Additionally, logs are not working at all within the Dashboard using your setup and the current Supabase build.
4. S3: following your setup, I had linked up Wasabi S3 storage (it uses the same AWS S3 protocol). It only works halfway out of the box, although I am still isolating the issue to see if it's an IAM policy issue. My bucket is set to allow my S3 user to access all bucket resources and folders, yet Supabase will only ADD files to Wasabi and is unable to delete them. Additionally, manually deleting files from within Wasabi will not reflect those changes inside Supabase, making it unusable for S3 storage in its current out-of-the-box state.
Collectively, these issues mean the Supabase self-hosted solution using your tutorial is not development-ready, let alone production-ready, so I am continuing to troubleshoot them.
It is unfortunate that Supabase self-hosted had the potential to be the best self-hosted BaaS on the market but falls short of actually being a useful product; it could have been a key primer to get me to sign up for their cloud solution. Although, if a company is so willing to waste this much of my time in hopes I'll sign up for their cloud service, I'll be better satisfied searching their competitors' cloud solutions.
I am new to backend development, and as such, setting up Supabase has been a terrible experience so far (which activenode's tutorial has made me feel better about). Albeit this was a necessary process to help me understand the difference between companies providing good documentation and those providing bad documentation. I'm confident in saying Supabase seems to suffer from the latter. Being a newbie developer myself (their target demographic), I will seek to avoid Supabase self-hosted (and cloud) at all costs.
Considering point 3: Yeah, it seems like logging is missing indeed, thanks for the hint, I have that on the bucket list of ToDos now but I'm struggling to keep up with the content right now due to work. Stay tuned.
Considering Point 4: I can't tell much about this as it had worked in my case wonderfully as shown.
Rest assured I'm going to work more on self-hosted stuff once my book is finalized but patience is key here :)
@@activenode Appreciate the work! Once it is more stable I will give it a go again for database hosting.
I am looking into Directus right now, as their self-hosting is a bit more straightforward since everything is contained within one service in the .yml file, and they offer the same Supabase features plus a lot of additional CMS-tuned features. If you make videos on Directus + Supabase DB for a secure/production self-hosting tutorial, I will watch those as well! Although, considering Directus supports PostgreSQL, I think a Supabase/Directus combo might be overkill.
God knows how much it takes to compress those 45h of work and research into a 35-minute guide for your audience. You definitely earned this like, comment and subscription =) Keep up the good work, we are very thankful.
Thank you so much:)
The last part from 25:00 made this really exciting. This channel is so under-subscribed; it's a hidden gem.
3 months later, I still refer this video.
Hear that Sigh, wonderful. You are a hero.
On the venture to host it on OCI lets see how it goes.
Thanks a bunch. Wish you the best with the OCI!
Hehe, I was just reading their self-hosting guide last night and wanted to do the setup on my Contabo server. What timing — this came just right. Thank you for the tutorial and for saving the time as well 💓💓
Enjoy!
Did you pull it off
@@joshuaprecious Hey, I did that time. But, then I went with AppWrite, it was much easier to manage, and setup.
WOW! You read my mind. I've been looking for a way to self-host Supabase with storage, and you pulled it off. Subscribed.
Hey activenode, thank you for sharing your hard work; it's the best tutorial I've found.
I strongly believe self-hosted Supabase is a good approach, mainly for EU-based projects and GDPR compliance. Even if Supabase seems to be working on this subject (with their DPA, TIA, Supabase Inc. entity…), not all is very clear there.
Supabase Docker is the first approach for self-hosting; a K8s infra would be more suitable for scalability and maintainability, but the community project for k8s seems outdated.
The only way I have currently found to self-host Supabase using Docker in a few clicks is Elestio's services, which are GDPR-compliant too. Another big point: Elestio allows you to bring your own VM with their services.
But the dark side in Elestio and in your video is, IMHO, that the DB should be decoupled from the middleware (Studio, Kong and the sub-APIs) for several reasons: easier middleware updates, backup-rule differentiation, security, etc. The same goes for the storage layer with S3-compatible storage, as you suggest with MinIO, and likewise for Analytics.
It would be great if we could use a DBaaS for the PostgreSQL; unfortunately, no EU host (Scaleway, AWS, OVH) allows the "not-certified" pg extensions that Supabase is built on. Only AWS RDS allows pg_tle for custom extensions like pgjwt, but that's not yet sufficient.
Another point: for a production env, I don't like the idea that the PostgreSQL port is open (and I don't know if a fail2ban filter for postgres is embedded in the Docker image). But for migration via the Supabase CLI (using --db-url), it would be interesting to use something like a bastion and SSH tunneling. Same idea for Studio (to only be available from another IP/domain).
Sorry for long post, but let me know if I can help you on something about these points to think together and with the community ;)
About your book, when is it due?
I will soon release basically this, with all infrastructure as code using Terraform and minimal instructions. Thanks so much for this tutorial. Really helpful. I am able to stand on the shoulder of giants!
Hey there! Sounds cool. You can also take a grain of salt from here, I think they did so too, so maybe it helps getting the very best out of your own solution: docs.digitalocean.com/developer-center/hosting-supabase-on-digitalocean/ .
Tell me when it's done and post a link, would love to check it out! 🎉
Thanks. Got that one on my radar, I like your MinIO and authelia steps.
you are a legend, absolutely brilliant thank you for the indepth step by step tutorial
I just subscribed cause of the hard work. I could feel it in your voice
Awesome explanation and tutorial thank you! I haven’t tried self-hosting yet but I suspect I’ll be moving there before too long so this will be a wonderful help!!
Glad you liked it 🙌🔥
Hi! This guide is super useful and your style is really sophisticated!
Since I am new to self-hosting and thus embody all the common characteristics of a fully-fledged n00b, I would like to ask one question about overall security:
The internet out there is basically a very scary and dangerous place, so would it be suitable to disable public access to the Dashboard/Proxy Manager/MinIO entirely and only expose Auth/Storage/API?
If one needed to access the Dashboard/Proxy Manager, one could use key-based SSH access (set on some high 5xxx port)?
Regarding the security of publicly available services i would prefer attacking hummingbirds with cluster bombs rather than shooting sparrows with cannons
The Authelia part came really handy for OpenSearch. Thanks for the tutorial :)
Glad to hear that!
@@activenode It's a little sad though how complex it is to setup. The auth for OpenSearch. Plus, it only supports RSA.
2024 and people still use RSA . . .
How do you upgrade the version of the backend services if supabase pushes new changes? Resolve merge conflicts whenever you pull from the upstream supabase repo?
Dropping another thank you as I have been continuing to come back to this video to help me out. Btw, have you used Authelia in the past for implementing more complicated auth systems - if so would you recommend it?
Hey, thanks! Could you specify what you mean by "more complicated auth systems"? Did you mean adding other Auth Methods within Authelia or using Authelia for something like user management? If you constrain the question I can answer it :)
@@activenode Hey I've tried to reply twice already but somehow my comment disappeared after trying to edit it. I was curious to hear your opinion about Authelia or potentially other auth libs for the following use-case: social login with scoped jwts (so basically a simplified version of auth0 but self hosted). I also wanted to ask if you have had any experience with Hetzner load balancers? Is my assumption correct that if I used a load balancer then I wouldn't need nginx on the server directly?
After trying out multiple libraries, I have settled on Ory Kratos
Thanks for the hard work. The video was super helpful. I really appreciate you for your efforts.
You're very welcome!
Wow a fantastic tutorial. Thank you very much!
After the latest cloudflare problems we need to install our Supabase on our own server.
Happy you like it! Feedback appreciated.
hi, nice video, A video on how to integrate authentication, for example, through Google would be greatly appreciated, especially in the case of self-hosted solutions
Written to the idea board, I hope I can get back to it :)
I was so happy to see this guide, but I'm still struggling with MinIO. I cannot log in to the dashboard. So I dug around for an hour until I gave up... for now ;-) But I can see how much effort it must have taken to create this tutorial; it is anything but easy.
I'm so sorry to see that for some people it works and for some it doesn't. What was your setup like? Where did you try it? Keep up the good work!
@@activenode Thanks for your answer. I did the same as you did, but I struggled with the login to MinIO. It seems to be so complicated. I can see that the credentials are correct on the Docker container, but it is simply not working. I read a lot about it and found a hint about certificates?! I stopped and set it up without MinIO, and it is working. The frontend you used instead of Basic Auth is really great. Thanks for saving me hours of trial and error with this guide.
@@DannyGerst I also have the same issue: an "Invalid Login" error from MinIO. Trying to solve it. I am using exactly the same credentials that I provided in the docker-compose.yml file.
@activenode Thanks for the tutorial! Grab a coffee on me.
I solved the problem: previously I used AWS Linux OS. When I tried to deploy again on Ubuntu, it worked. I guess it's a matter of firewall settings.
Hello, great video! I have bought your book and am going to start reading it. Quick question: can the locally installed version of Supabase function like the online paid version? Is it possible to add the settings button and its functionality to a local Supabase instance?
This is an amazing tutorial, thank you so much. Do you have any advice for data replication for MinIO and Supabase in case the server gets corrupted for any reason?
Best video on this topic on youtube!
Any chance you'd do a video where you spin up a server with Typesense and make the proper connections to the Supabase server to use it as a search engine for content in the Supabase DB?
I've written it down on the big ToDo List so I might come back to it :)
Hey man, thank you for sharing your knowledge. But why can't I see the settings button? Can you help me with that?
I think following the Supabase docs for self-hosting is much simpler. After watching this video, I checked the docs and there were just 4 commands. I thought there was no way these 4 commands would run self-hosted Supabase. I was so sure it wouldn't work that I didn't even change the env file, and after running the docker compose up command, voilà, it is working.
Sure, if that's what you want, it will definitely spin up a Supabase with basic auth, no HTTPS and no domains. So in its default state, especially with an unchanged env, the Docker spin-up from the docs is not meant for production.
Omg, thank you man! This is what I needed :D. Subscribed.
Is there a way to configure self-hosted Supabase with an external Postgres database?
Thanks so much. Considering an external DB: theoretically, it's doable. It's on my long bucket list of things I want to evaluate. In one week there's also the Supabase Launch Week, so I'm waiting to see if there are any major changes that would make external databases easier to implement.
Man this is great, thanks a lot, just subscribed
I want to start with Supabase as a newbie programmer and wanted exactly this video. Are there any limitations of the self-hosted features compared with the paid cloud options?
I couldn't find a feature comparison table anywhere.
🤯😍🤠👊👏 Well that answered the question I had about whether this could be done. Amazing work. Thank you for your effort! Godspeed with your book!
Thank you so much 💫
that's very well explained, i will try it myself
If this works, you are my favorite YouTuber.
I better hope it does!
I saw a lot of videos about Podman as a Docker replacement, but not one video about hosting Supabase with Podman. Why is that?
It would be nice if somebody made a video about it.
Thank you so much for this tutorial, as it answers most questions, but it feels incomplete by just a little. At the end, it feels like the video ended without us being able to access the other parts of Supabase. I got S3 and Authelia working and I'm able to access the Studio, but there seems to be no way to get an API key from Supabase or access any of its other dashboards. The way you explained it sounded like there was a site AND a studio, but we only set up the Studio? We set up the API on port 8000, but you showed a SITE_URL in the .env file on port 3000, which is inaccessible; I can't figure out what was meant to be there. Is that where we make API keys or something?
Thank you for all the work you've put into this, though; I was able to follow it right up until the end and get 99.9% of everything working perfectly. Thank you!
Awesome!!!
I figured out my own answer: API keys are generated by you and supplied in the Docker configuration. That's the one and only API key, so you can't really have different API keys per app as most databases would, but it works really well. I just feel like that specific part might be a security flaw rather than a feature... or maybe you're supposed to make a completely separate instance of Supabase per application and just have a bunch of Supabases running in tandem? It's really unclear what that design is for.
Thanks for the guide! I was wondering is this only for 1 supabase project? Do I have to selfhost another instance to get more projects?
Yes, it's for one. supabase.com itself is an infrastructure deploying ONE instance per project; that's why you'll have different URLs. So if you want to deploy more, you can, but you need to deploy all services for each project again.
Very detailed tutorial, thanks. But I need an explanation of how to configure custom domains and how to point them to the dockerized nginx.
Isn't this what I show in the video? What are you missing? Cheers
thanks for the great tutorial! really helpful with authelia.
Man You did a fantastic Job , thanks :)
Thanks a lot :)
After adding the domain and creating the SSL certificate for the Nginx Proxy Manager itself, you should go back into your docker compose file and remove port 81 from being exposed. By default, you exposed ports 81, 80 and 443, and since you are now exposing the proxy manager via ports 80 and 443 through the domain, you don't need port 81 anymore and should remove it.
It's quite annoying that the proxy manager doesn't use TLS by default, or via a config flag. But it's great that you go over this later.
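The compose change described above might look roughly like this (service name and port mappings are the typical NPM defaults, shown here for illustration):

```yaml
proxy:
  image: jc21/nginx-proxy-manager:latest
  ports:
    - "80:80"     # HTTP
    - "443:443"   # HTTPS
    # - "81:81"   # admin UI — remove this mapping once the UI is reachable
    #             # via your own domain through ports 80/443
```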
Thank you, you’ve earned my respect and subscription
The Vector setting is actually missing in the downloaded code; it doesn't match the one you have in the video.
I pointed 2 IP addresses (S3 storage & Supabase) from Cloudflare, without DNS proxy, to my domain, and I still get an internal error when I try to add S3 to the nginx proxy list.
Hey, which exact error are you facing? Do you have separate servers?
Great video bro thank you so much.. one question what are the tradeoffs of this solution compared to the one hosted by supabase.
Well, you have to take care of it. So whilst Supabase paid version will allow you to request support for upgrading to the newest version, with self-hosted you are on your own. Also, you are obviously on your own in any case of problems. The self-hosted version doesn't do auto-backups so, as stated in my video, use something like the Hetzner backups to achieve that.
For most people the differences are negligible, but e.g. self-hosted Edge Functions will not be distributed across multiple servers (obviously, because you self-host them). The supabase.com Edge Functions use deno.com/deploy and hence can be distributed across the world.
The upside of self-hosting: You can choose a cheap but fast server close to your location and be independent from any issues, like cloudflare problems that recently happened.
With GDPR regulations in place I'd recommend self-hosting as you wouldn't be 100% safe with supabase.com . You're in full control.
I love both and I have both, self-hosted as well as projects on supabase.com
The reply is so detailed, thank you. Do you have any idea regarding Supabase GraphQL? I can't even find one tutorial on TH-cam.
Great work. Could you do a tutorial on how to enable logging in supabase?
Good one. I put that on the bucket list!
thanks for the video. it really helps. but do you know how to decouple the database? :)
Decoupling the database / scaling is on my bucket list for videos, but don't expect it anytime soon, sorry. TH-cam is still a hobby project and it needs a lot of time. But at some point I'll definitely try to make it.
Hey! Would you say the server backups on Hetzner are sufficient for the database or would you use a different system/location for backups? I don’t want data to get lost and I’m not sure how good Hetzners backup systems are.
Hey there! Personally, I would. I have high trust and good experience with Hetzner for more than a decade. If you enable Backups, you get trustworthy backups.
However: from a paranoid consultancy perspective, time has shown that huge companies (e.g. GitLab) lose data too. So if your business were a super high-risk business with e.g. multiple clients depending on you and paying you, then I'd consider using multiple backup systems/locations.
Not an easy question to answer generically
@@activenode Thanks! I went with the paranoid consultancy perspective and added another backup system on top that dumps the database to the Google Cloud.
So that instance is globally accessible but looking at that UI, does that support collaboration? Like adding team members?
With Authelia, definitely. There's often a misunderstanding of what Supabase is, which leads to thinking Supabase has "fewer" features when self-hosted. That's not really true. It's just that some parts aren't part of the service Supabase but of the infrastructure, and that is what supabase.com does: they basically implemented a wrapper to manage multiple Supabases and on top of that added their own auth (which is not the same as using Supabase Auth).
And in this tutorial I do the same with authelia :) You can check their docs to see how to add more users. Cheers and have a great day!
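If you use Authelia's file-based user backend, adding a team member is roughly this (a sketch; field names per Authelia's docs, and the password hash is a placeholder you'd generate with Authelia's hash-password tooling):

```yaml
# users_database.yml (Authelia file backend), hypothetical extra user
users:
  teammate:
    displayname: "Team Mate"
    email: teammate@example.com
    password: "$argon2id$v=19$m=65536,t=3,p=4$PLACEHOLDER"  # placeholder hash
    groups:
      - admins
```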
Your way of explanation helped me a lot ❤
I have a question: why didn't you use Hetzner's Object Storage? It's cheaper than using an instance.
Thanks, much appreciated, always looking for feedback!
Lovely Idea, multi-faceted answer:
1. I actually wanted to go with MinIO to keep it as generic as possible so it works whenever/wherever (and then people can choose). MinIO has built-in replication, so you could use like 10 copies across multiple different hosters for safety over 9000 🚀😎
2. The Hetzner Object Storage doesn't state anything about being S3 compatible, which very much indicates that it isn't; that would be lovely though.
But probably you meant: You could use the Storage Box as a Volume and then just not use S3 (as shown in the Video) but Volume mounts in the docker compose.
Then the answer is basically answer 1).
In most of my cases the instance would do just fine with "only" 40GB of Space.
But when you'll have a production Page which has massive file usage on your Supabase Storage, for whatever reason, then you will need to use Storage Boxes on Hetzner, yes. Even more so in that case I would probably go as far as saying I have an instance running MinIO but MinIO is using the mount from the Storage Box - this would be massive flexibility.
Or I would go ahead and search a different hoster for the Storage part only.
Good thoughts, hope the answer wasn't too long.
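To illustrate the "MinIO backed by a Storage Box mount" idea from above, here's a compose sketch. Host, share name, credentials, and even whether MinIO behaves well on a CIFS-mounted data dir are assumptions you'd need to verify:

```yaml
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    volumes:
      - storagebox:/data

volumes:
  storagebox:
    driver: local
    driver_opts:
      type: cifs
      device: "//u123456.your-storagebox.de/backup"   # placeholder Storage Box share
      o: "username=u123456,password=${STORAGEBOX_PASSWORD},vers=3.0,uid=1000,gid=1000"
```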
@@activenode appreciate your answer 👌
I will give installing Supabase on Contabo instances a try, using their compatible S3 storage buckets.
I tried to sign up for Hetzner but they don't accept me as I am Egyptian 😅
So I will try on Contabo
Thanks for your helpful tutorials 🌹❤️
@@alsherifkhalaf7385 And I thought they had something against us Indians only...😅
Hey,
first of all such a great tutorial.
Is it possible to do this with kubernetes?
Yes, sure! Just a different setup. Kubernetes + Traefik is a good one
Thanks for the reply, have you thought about making a tutorial for it? I think noone else has done one before :) @@activenode
Would love to see a tutorial about self hosting supabase securely within kubernetes too!@@activenode
Why is the configuration tab greyed out in Storage and Authentication? I have the same issue in my self-hosted Supabase.
Hey man, I really appreciate you, because after 1 month I finally found one of the best videos about self-hosted Supabase. You are perfect, man. I appreciate you again and again, God bless you.
Thanks so much, very happy that it helps. If you face any problems with this setup please report back 💙
Love this one
Awesome you found it useful!
Excellent coverage , thank you
I have run it on an ARM server, and the S3 storage does not work. There are problems with the SDK version.
So minio not working on ARM? Or supabase?
@@activenode supabase-storage container, the message error (node:1) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023.
@@activenode there is an open issue in supabase storage-api repository
@@bootboot3636 Nice you found it. Care to share the link?
@@activenode github.com/supabase/storage-api/issues/351
Hi! Thanks for this tutorial. Very well structured and I could easily understand everything.
Unfortunately, right at the end I cannot get the dashboard working correctly.
So in Nginx Proxy Manager I did not add any custom locations (as I think you meant in your blog) and instead only pasted the custom config. When entering the dashboard, basic authentication does not work anymore; I can access everything without any login. (When I do it the "old" way with custom locations, it works.)
Either way, in the dashboard I can neither create new users (Failed to create user: invalid JWT: unable to parse or verify signature, signature is invalid) nor access Storage (400 GET /storage/v1/bucket). Do you have an idea what's going wrong here? I generated my JWT secret the way you suggested in your blog.
Hey there, happy to help.
So first things first, you only want htaccess for Studio, without Authelia. I'm actually showing how to do that in the video.
The original htaccess protection we removed by removing Studio from the Kong routing. Now we route with the proxy manager, and in the Dashboard part of the video I show how to do the basic protection without Authelia. Have another look; this should really solve the protection problem as it's covered.
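For context, the "Advanced" tab in NPM accepts raw nginx config, so the Basic Auth part boils down to something like this sketch (upstream name, port, and the htpasswd path are assumptions; the blogpost has the exact snippet):

```nginx
# hypothetical NPM Advanced-config snippet: Basic Auth in front of Studio
location / {
    auth_basic           "Restricted Studio";
    auth_basic_user_file /data/htpasswd;      # file created inside the NPM data volume (assumed path)
    proxy_pass           http://studio:3000;  # Studio container name/port assumed
}
```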
Considering the second issue: that's rather unusual. If you changed the JWT before setting everything up, it should work as expected. The only scenario I can imagine where it doesn't work is if you changed the JWT after having spun everything up, because then there would be a mismatch between the JWT the system was set up with (the standard one) and the one you changed.
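By the way, if you want to rule out such a mismatch yourself: here's a minimal stdlib-only Python sketch for checking whether an API key (e.g. your ANON_KEY) was actually signed with your JWT_SECRET. The secret and claims below are placeholders; paste in your own values:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def verify_hs256(token: str, secret: str) -> bool:
    """Return True if the token's HS256 signature matches the secret."""
    header_payload, _, signature = token.rpartition(".")
    expected = b64url(hmac.new(secret.encode(), header_payload.encode(), hashlib.sha256).digest())
    return hmac.compare_digest(signature, expected)

# demo: sign a token ourselves, then verify it against the right and a wrong secret
secret = "your-super-secret-jwt-token-with-at-least-32-characters-long"  # placeholder
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"role": "anon"}).encode())
token = f"{header}.{payload}." + b64url(
    hmac.new(secret.encode(), f"{header}.{payload}".encode(), hashlib.sha256).digest()
)
print(verify_hs256(token, secret))          # True
print(verify_hs256(token, "wrong-secret"))  # False
```

If your real key fails this check against your .env secret, you've found the mismatch.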
Let me know if you need further assistance.
@@activenode Sorry for getting back to you so late. I tried to set it up again from the start and ran into the same issues.
Basic Auth stops working as soon as I add the Advanced Configuration (4.2 in your blog). Storage responds with "unable to get owner" and I still get "Failed to create user: invalid JWT" when trying to create users, even though I definitely changed it before setting up my server. Any idea what I could do to proceed?
Will have another look once I am fit again
@@activenode Thank you so much for taking the time!
Still did not get it to run with Basic Authentication, but Authelia runs just fine - so I'm sticking with that. Thank you!
I respect the process you had to go through to get this done, but there are quite a few errors in the tutorial, since Supabase has a lot of new features nowadays. It would be nice if you did a new tutorial covering them.
Thanks for the Feedback, can you pinpoint a few of the points so I can check them out specifically?
@@activenode Thank you for the quick response, I'm getting errors when using docker compose pull stating that "Additional property nginx is not allowed."
@@activenode So my issue with this is that I was trying to run Nginx Proxy Manager on a server that already had Nginx Proxy Manager set up for S3 storage.
@@undiscoveredshorts Alright, then why don't you reuse that proxy manager? You could also rename it, but then you'd have two proxy managers, which cannot listen on the same port, so I wouldn't do that.
Or, if S3 is already running, you could just get rid of the existing proxy manager and add the s3 docker image with the existing volume to the Supabase docker-compose.yml .
Last but certainly not least, you can reuse the existing proxy manager by creating a docker network like "sharednginx" (docker network create sharednginx), adding your existing Nginx Proxy Manager to it, and adding the external networks definition to the Supabase docker-compose.
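Sketched out (network and service names are placeholders): the network is created once outside of compose, and each compose file references it as external:

```yaml
# in each docker-compose.yml that should be reachable by the shared proxy manager
# (network created beforehand with: docker network create sharednginx)
services:
  kong:
    # ...existing service definition...
    networks:
      - default
      - sharednginx

networks:
  sharednginx:
    external: true
```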
Hey David, one of the best tutorials I have seen on TH-cam. To the point, unpretentious, practical.
I have now gone through the video, back & forth, probably 50 times. I have set up everything and it seems to work, but with one glitch. I have set up MinIO on Hetzner and configured it. I can create a folder in Studio and see the folder. However, the folder/bucket is not visible in the MinIO dashboard inside the supabase bucket. I can upload a file in the MinIO dashboard and it works fine. When I try uploading a file from Studio, it just shows it's uploading one file and stays there. Any suggestions on how to resolve this?
Hm. Have you tried looking at the logs? docker compose logs.
What do they say in the exact moment when uploading fails?
Do you have the bucket created in minio? Namely supabase in my case.
Please note: Creating Folders in Supabase is kinda different to uploading actual files. So it might seem that it "works" but in fact it does not. So I'd assume there's no connection at all to MinIO in your case.
Also: Feel free to send a link to your current docker compose file and I can have a look
@@activenode Thanks for your quick response. I have checked the logs already but failed to find anything. Also, the bucket is named supabase as in your case. I can send you a link to the docker compose file, or the file itself by email, since there's no way to send direct messages on TH-cam. I tried looking for your email on your website but couldn't find one. Let me know the best way to reach you. Also, thank you so much for helping out.
@@tradingindian Try sending to info@activenode.de
@@activenode I have sent you the file by email
I followed along with the tutorial, but I got stuck at the Nginx Proxy Manager part. I got the proxy all working and secured the proxy manager itself, but when I completed the Supabase API, it said 502 Bad Gateway, openresty. I tried a lot of different stuff but couldn't get it to work. I tried to set up the Studio too, but got the same situation with that. Docker says the containers are running and I tried restarting them numerous times.
I had the same issue. The problem was that I had the Nginx Proxy Manager in a different docker setup than the Supabase Studio and API. Be sure you're running the proxy manager for Supabase in the same docker network as the Supabase stuff.
@@Melf11 Do you mean the same docker-compose file? Because I did have the Nginx Proxy Manager in a different docker-compose file.
@activenode
Thanks for the great Tutorial.
Currently I get the following error when I try to upload a file to the Storage:
"Failed to upload example.pdf: headers must have required property 'authorization'"
Any ideas?
Have you checked my linked article again to confirm everything is correctly set up? There are some updates in the video description on how to properly forward stuff.
When does that error occur? When you upload via the Dashboard?
Thanks for the guidance. Noob question: how do we evaluate the benefit of self-hosting? Is it going to be more cost-effective? Thanks!
Let me put it like that: Usually, you don't want to self-host. Even if you later want to, you can migrate easily with my other video th-cam.com/video/nyX_EygplXQ/w-d-xo.html.
You are right, it can be more cost-effective, especially because of the limitations of features such as Realtime, also just recently discussed here www.reddit.com/r/Supabase/comments/15f6h0j/what_do_you_want_to_know_about_supabase_for_real/khehosj/?context=8&depth=9 .
The reasons for self-hosting can be multiple: Hitting realtime limits (10k is a lot but not a lot for e.g. WhatsApp with millions of users), needing it for legal reasons (EU data protection as right now Supabase is not 100% GDPR compliant), etc.
But I always recommend starting off with the supabase.com variant first, to not have the infrastructure costs at the beginning, especially for lone coders. If you're in a big team and have infra people available, just discuss it within your team. Otherwise, maintaining it will mostly not be cost-effective (in the beginning), as the maintenance itself costs more time than just paying for SaaS.
Cheers!
@@activenode Thanks for the detailed explanation and resources! Valuable advice for a starter like me to think about the trade-offs! Appreciated!
Thank you so much for your instructions!!! But has anyone who followed them gotten the "realtime" database feature working without issues? When I subscribe to a channel or DB change, the browser console always shows that connecting to the websocket fails. I know maybe this is not a good place to ask questions, but I have tried everything I can to solve it and still have this issue. If anyone can use this function, please let me know so I know it is at least solvable! Thank you all for the help!
Hey man. Unusual. The realtime function is actually definitely one I checked, and it worked for me. Are you sure you activated WebSocket support in your proxy settings?
@@activenode Wow, thank you so much for replying! Yes, I did set up the proxy for the API with the websocket support toggle on. But I am not sure if it is truly activated, since I found the access list setting also has no effect on the proxy (no password prompt when accessing the URL for the dashboard). But thank you for letting me know you tested the realtime function. I will start over again to see if I can solve it!
@@aFancyFatFish Alright! Make sure to also read the additional info given in the description
@@activenode Yes! I am following you video as well as you blog posted on food for developers! ❤️
Did you ever solve this issue? I'm running into this issue right now
Sir, this is a great video, but can you tell us how you set up the DNS server?
Can you specify the question? Not sure if I got it right
When I try to connect to the dashboard I keep getting secure connection failed. I think this has to do with something in the .env but I am not sure. Any help would be appreciated!
Did you follow the exact same setup as in the video? (please also check my additional notes considering the extra stuff I added in the blog article)
Very cool. How can I migrate my project from Supabase Cloud to the self-hosted version? Also, what is the cost of all this, and (I'm new to this) do I need to change my DNS records for this to work?
Yeah. All of the domains used in this video have to point to the IP of the server where you install it. So yes, DNS changes are required.
Migration is mainly done by exporting your whole database first with pg_dump. But if you'd call yourself rather a beginner in this field, I would probably recommend exporting every table one by one (if you don't have like 1000 tables). So recreate your structure in your self-hosted instance and then just copy over the data as a naive approach.
But maybe this helps: ironeko.com/posts/creating-a-local-backup-of-a-supabase-database
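The pg_dump route from above, sketched out (connection strings are placeholders; dump from the cloud project, restore into the self-hosted Postgres):

```shell
# 1) dump schema + data from the hosted project (placeholder credentials/host)
pg_dump "postgresql://postgres:PASSWORD@db.PROJECT_REF.supabase.co:5432/postgres" \
  --clean --if-exists --file=backup.sql

# 2) restore into the self-hosted database (placeholder IP)
psql "postgresql://postgres:PASSWORD@YOUR_SERVER_IP:5432/postgres" --file=backup.sql
```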
@@nocodecreative can you share your a Link to the Compose File? E.g. via GitHub Gists ?
@@activenode thanks - I've got it running, however, the API isnt working at all, just lots of errors. I'll see what I can get together for you. Thanks!
I do but it wont let me post a link! The api docs have not self-generated @@activenode
@@nocodecreative It won't let you post a link? Let me try that: supabase.com . I mean, if you remove part of the link, at least it should allow you to post the text, right?
Such an amazing tutorial, I love it so much.
Hi,
How do I set up real-ip?
I want to create applications and websites on self-hosted servers to reduce costs, but when using S3 or Cloudinary the cost is large. So what is the benefit of a private server if the rest of the technologies come back at the same cost?? Do the hosts not provide good handling of files and videos?
That's hard to answer without knowing your target projects.
Honestly, if you're only looking at costs, $25 on supabase.com with 100 GB of storage included is a pretty fair price that allows you to be careless about maintenance etc.
So the question is rather: what do you need? How much storage does each project realistically need? After having talked to the CEO of Supabase, he said you might even be faster without using S3 at all, as there is no additional server. But then of course you need more performance on the server where you run it. On Hetzner you get the CAX31, for example, for around 15 bucks with 160 GB of storage included, which is a little bit cheaper. But then again: if you're only looking at the price, I'd happily pay 10 bucks more and have the SaaS version on supabase.com.
So it depends. Self-hosting also doesn't automatically reduce costs, because obviously you have to run maintenance and updates, which costs time, and time is money.
I am trying to build an application like TikTok that depends on displaying products in a swipe feed, and I need good performance at a small cost. I was thinking about a Contabo VPS, but I do not understand why applications upload the project to one server and the media to another server, as VPS prices are very cheap and make it possible to obtain powerful capabilities. So do we need dedicated video servers? The reason for trying to stay away from Supabase is that $25 is a large price in our country due to inflation. So can I rely on a VPS and get a good video-playing experience?
Hey man. Reasonable performance at very low cost is possible without dedicated storage/streaming servers.
> but I do not understand why the applications upload the project to one server and the media to another server, as the prices of VPS are very cheap and make it possible for you to obtain the capabilities.
As I said in the video, you don't have to separate! Grabbing a file is probably even faster, as the server doesn't have to do another DNS lookup. The reasons why many people do this are multiple, but let me state the most important ones: 1. S3 has built-in replica support, so people can just scale another 5 servers of the same S3 and have 4 backup servers they can switch to. 2. Storage servers need less overall power if all they do is storage and nothing else; hence, you get more storage at a cheaper price (as you mentioned Contabo: Contabo has Storage VPS, which have less power than normal VPS but bigger hard drives). 3. Your power on a system is limited by all means (CPU, read/write on the hard drive, bandwidth, etc.). So whilst your VPS is busy grabbing and delivering files, you'll have less performance on that same server, which also runs the application (but for most of my projects, which are not media-focused, this is totally fine).
> The reason for trying to stay away from supabase is that 25 in our country is a large price due to inflation, so can I rely on VPS and get a good video playing experience?
You have many, many ways of ensuring a good video-playing experience: compressing videos without noticeable quality loss, setting the cache headers right, or using free CDNs (www.cloudflare.com/application-services/products/cdn/#CDN-Page-Pricing-AS). So it's really not all about "server power". There's more about this on the internet.
I also think there's this flaw in the way of thinking that many people have in programming: "I have to find the solution that will still be good if I grow". But in fact, you don't. Let me explain.
Considering this fact you have given, I am quite sure that your project is not making money at this point, because you have no income on the project that would cover $25. This implies that you have no paying users right now. So I might be right in assuming that there's no such thing as "heavy load on the server" yet; you're only thinking about solutions in terms of "what if". I can totally understand that fear, but this fear is not helping the development of your project.
So here's my proposal:
Start with the cheapest solution you can get, e.g. one VPS; develop your application; optimize where you can (as I said above). Once you feel like you're hitting a limit, it might be because your app has grown and hence generates money, and from there you can take that money and scale to more servers / bigger servers / supabase.com.
Hey @activenode, great stuff! I was wondering how I could have an on-prem setup (without a cloud like Hetzner) but connect to a local S3 MinIO. Is this actually possible?
What exactly do you refer to with on-prem? supabase.com plus own S3 ? Or another provider setting it up for you and you want to configure S3? Does that reflect your question?
@@activenode I was thinking of Supabase run on Docker locally, and Flyte (they have MinIO S3) spun up locally too.
@@Ali-m2k6s That should work. I mean you can connect Supabase with the given configuration with any S3 compatible service. Hope I understood you correctly.
@@activenode Ok! So as I understand it, I can skip removing Kong and adding nginx and Authelia, and only change the storage in the docker compose?
I cannot get into my S3 dashboard: incorrect password and username.
Does this guide still work for anyone? I've tried without the S3 and there is no storage in the Supabase Storage, and I've tried the S3 portion of the guide and it still fails to create a bucket in Supabase Studio. I noticed a difference in the .yml file: "# To use S3 backed storage: docker compose -f docker-compose.yml -f docker-compose.s3.yml up". What is this?
Hey there! It all works for me, except one thing: auth. It looks like the authelia_session cookie isn't sent when I request /auth/v1/users (that's what happens when you click "Create new user" in the dashboard), therefore the request is unauthenticated and it gets redirected to the login page. I've followed the additions in the description, but that doesn't change anything. Any idea on why that happens? I feel like some additional configuration for the auth endpoint is missing.
Hey man. Yup, I can confirm this. Will follow up. Probably a small fix.
Awesome! Thanks for sharing this tutorial - it probably saved me a lot of headache. 😉
Okay, so what I found out is that in this specific request the request header "Cookie" (which contains the Authelia session) isn't passed, and hence Authelia wants to redirect, breaking the request. So far. I'll keep you posted. This doesn't harm your API, however; it's only annoying in the Studio.
I found the issue and it's rooted in the Source Code of Supabase here: github.com/supabase/supabase/blame/072222523afe612d575c2ff7e73e514692f545af/studio/data/auth/user-create-mutation.ts#L50
I created a fix here: github.com/supabase/supabase/pull/18156
@@activenode Yup, that's exactly what I noticed as well. The authelia session cookie is missing for that request, and I'm not sure why. It works for all the other routes.
Hello sir, everything is working now after 3 to 4 days of work, but 1 issue still remains: 23:09 in the video, where you set up the storage. In my case, without the custom location the dashboard is working; if I configure it as you did, my dashboard is not working. And when the dashboard is working, storage is not working 😂😂 Tell me the solution.
Hey there! Have you also checked the blog article where I added Updates in section "Make it run" ?
Hello, I have just installed from the Supabase docs, but when I upload a file to Storage or create a folder, it does not work. I can't create a folder, and uploads stay at 0%. Could you tell me why this happens?
Hey man, I'm seeing some people having issues like that, with others it does not happen. Have you checked the blog article linked in the comment as well? After the video I updated a few things there
@@activenode It's working, thank you! But I can't "Get URL" when I click on an image that was uploaded to Storage to get its public URL. Could you please help with this final problem?
Legit question: does the self-hosted version support users (creating users via Studio Authentication)? I thought I was breaking some configuration, but even with a vanilla install (without any config changes) it always crashes with "cannot fetch".
Yes, sure it does! This should work. I haven't tried it in a while as I'm currently in the process of writing the Supabase book (supa.guide), but this does work (right now I'm doing this with k8s). Have you checked the linked blogpost?
Great job! It would be perfect if we could use Supabase for authentication instead of Authelia.
Well, Supabase uses the GitHub Login. GitHub Login can be enabled with Authelia, as far as I know :)
Is there a way to have more than one supabase project on one server?
Yes, by separating through Docker, which this video already mostly does. If you are not familiar with Docker, the easiest way to separate the container names is probably docker compose --project-name YOUR_OTHER_PROJECT. But then you also need to create a shared Docker network, remove the Proxy Manager from both docker compose files, start it in its own docker compose file (alone), and add the shared network to it.
So tl;dr: that isn't so much a Supabase infrastructure question but more a Docker question, which goes beyond the scope of this video.
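To sketch the idea anyway (project name and network are placeholders): the second project's compose file gets its own project name and joins the shared proxy network:

```yaml
# project B's docker-compose.yml
name: supabase-project-b   # equivalent to: docker compose --project-name supabase-project-b
services:
  kong:
    # ...existing service definition...
    networks:
      - default
      - sharednginx

networks:
  sharednginx:
    external: true   # created once via: docker network create sharednginx
```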
Hi! I have a problem which says "unauthorized, sign ups are disabled". How do I solve it?
Sorry to hear. Did you follow the exact same setup as in the video? Including the steps from the blog article that I linked in the description?
Can you give a demo on connecting an OpenId Provider to supabase auth?
Hey man, thanks for the request. Can you specify what you mean?
Connecting OpenID to protect your Supabase instance, or connecting OpenID to be used by the users of your Supabase instance?
@@activenode I've been trying to use a tool like Keycloak, which allows me to connect to our LDAP and then provides an OpenID connector for your application. I've been having issues setting up the various redirect URLs that the OpenID connector requires.
Hey man, I get a 403 "bad_jwt": "invalid JWT: unable to parse or verify signature, signature is invalid". Any ideas?
Shouldn't happen. Have you tried generating one at supabase.com/docs/guides/self-hosting/docker#generate-api-keys ?
Yes, but these were invalid. I used one generated by a Python script and it worked. I have no idea how or why, but it works.
@@Sansalvador67 Indeed weird but wonderful you got it working. keep rockin!
Would this be easier using coolify self hosting?
Hm, not really. Depends on "easy". Coolify has its own constraints. I use Coolify as well, but then there are other things to adapt. So I wouldn't necessarily say it is easier.
Is there a way to enable edge functions in this tutorial?
They are active!
Here's how: in every project, no matter if cloud, self-hosted or local, the URL to an Edge Function is API_URL/functions/v1/function_name .
Now, on self-hosted, you need a way to get the functions there, as `supabase functions deploy` only works with the cloud. This is done with a docker volume, which by default is found in the docker-compose.yml under functions.volumes (usually, on the server where you execute the docker-compose.yml, it's ./volumes/functions).
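So "deploying" on self-hosted is essentially dropping files into that mounted folder; a sketch (the exact mount path inside the container may differ in your compose version, verify against your copy):

```yaml
# excerpt of the functions service in docker-compose.yml
  functions:
    volumes:
      - ./volumes/functions:/home/deno/functions:Z
# => a file ./volumes/functions/hello-world/index.ts on the server becomes callable at
#    https://YOUR_API_DOMAIN/functions/v1/hello-world (after restarting the functions container)
```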
@@activenode thank you! great (unique) tutorial
@@handler_k Thanks so much
Hello sir, I have a problem with file types for uploads to Storage: all file types upload except media files (jpg, png, or video). I tried to create a policy, but it was not useful. Could you please guide me?
Hey there, what's the specific problem? So you can, right now, upload everything but media? Do you have policies in place right now?
@@activenode Hey sir, actually I cannot create a user in the dashboard and I get this error: "Failed to create user: An error has occurred: Load failed" (error 422).
@@sabahsaeedi9551 Thanks for getting back. Is it just the Dashboard that has that problem? Can you connect via the Supabase client and try creating a user there? The problem of failing to create a user via the dashboard isn't new; I reported about it in the comments. Have you checked my blogpost as well?
Thank you very much sir. Absolutely, I tested with FlutterFlow and I got a Kong error.
Also, in Storage I can upload any type of file except images and videos.
It's not an exact match to the cloud; there are no multiple databases, multiple users, etc.
Hi, when I try to route nginx to Kong I always get a bad gateway. Any idea what's going on?
Rephrase that: I fixed it now, but Supabase Storage doesn't work "because it violates the following Content Security Policy directive: default-src 'self'". By any chance, did you try it with Storage? Thanks for your awesome guide and time!
And never mind again, I forgot to use https in the env file :) Thanks again for the guide and the rubber duck support :D
The Details take time. Imagine making this 😁
Can we do it with Caprover?
Sure, but it needs a one-click recipe as per default it cannot simply run docker-compose.
I get an internal server error after setting up SSL and clicking save. Anyone else as well?
I fixed it. Guys, don't forget to add a wildcard in front of your domain for subdomains... this cost me a lot of time right now hahah
Now everything works....you are the fucking man haha thanks for the vid
@@ralphpichler6635 I don't get it, what do you have to do exactly? Do you have to do something different from the video?
@@MaximePreveris Nope I configured the dns wrongly
Hi is there a way to get in touch with u ?
Yes. First and foremost via Mail (info[at]activenode.de)
Is the book out?
It is :)
@@activenode link!!!
@@elmagnificent8550 Haha, my mistake. Here's the US one, but you can search whatever Amazon store of your country for "Supabase book": www.amazon.com/Building-Production-Grade-Applications-Supabase-comprehensive/dp/1837630682/
@@activenode May I ask: is this for SELF-HOSTING Supabase? And does it cover all Supabase features (like RPC functions)?
@@elmagnificent8550 it's best if you join the book discord and ask the questions there, easy to answer then -> discord.com/invite/qhAAYwKx3f
how does supabase compare to appwrite?
That could be a "religious" question. I'd say the main difference is the community. At the end of the day you will probably get along with both, for most applications.
What I like: the Supabase developers, including Paul (the CEO) himself, contribute to projects such as pgvector, which they could keep proprietary to hold on to some "golden nuggets", but they don't. So they really, really are strongly committed to open source.
Thank you!
Welcome!
That proxy manager looks like a pain in comparison to Traefik.
Thanks for pointing that out. As always, that's in the eye of the beholder / target group. As opposed to Traefik, nginx is a little more well-known and well-documented, and hence configuring NPM manually is as easy as googling for nginx solutions; Nginx Proxy Manager is a widely spread solution. That's why I chose it for the tutorial.
From the perspective of someone loving infra-as-code I can definitely see where you are coming from.
What do you love most about Traefik? And before having used Traefik, what did you use in the older times? Cheers
nice article, thank you
How do I add an SSL certificate to the self-hosted version?
The Video here shows that. Everything SSL protected. Do you need something else?
I hope soon you can afford an AC... your content is a gem! Thanks for the tutorial. Even though I didn't understand a lot, I'm planning to go through it again in the future with some basic knowledge of Docker and the other things you've used!! 🥲🥲
❤ good job 👏