The number of options available to do things on the internet can be overwhelming. This video is concise and very helpful. You have one more subscriber...
Thanks appreciate that :)
Thanks a lot. This video may have saved my life :) Choosing a way to deploy is a very difficult and stressful decision.
Glad it helped!
Thank you for the explanation after 4 years :)
Better late than never?
Even though you're still a "small" channel, you have potential, I bought your advanced django course and it's one of the best courses I've ever bought
That's so kind, thank you! :)
Could you attach a link to the course. Sounds interesting.
With option 3, how does the tech stack look? Adding another video with diagrams/flows would be much appreciated.
Like, do we have our backend [Django] > still build our own PostgreSQL DB with our own schema > containerize it with Docker > connect it with (pick 1) [AWS ECS Fargate] or [Kubernetes] > pick a cloud provider e.g. Google Cloud (can I choose Google Cloud if I used AWS ECS in the first step?) > pick the number of servers you want to run... Is this the correct flow, or do we skip the PostgreSQL step and maybe the final step if the number of servers to use is chosen automatically?
I only trust you with this question, Mark, you're the DevOps god!
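For what it's worth, with option 3 the PostgreSQL step usually isn't skipped, it just moves to a managed service (e.g. AWS RDS or Google Cloud SQL), and the containerized Django app picks up its connection details from environment variables that ECS Fargate or Kubernetes injects. (ECS Fargate is AWS-only, so you'd pair it with AWS; Kubernetes is the portable choice if you want to run on Google Cloud.) A minimal sketch of that settings pattern, with variable names that are just assumptions, not something from the video:

```python
# settings.py (sketch): read the database connection from environment variables,
# so the same Docker image works locally and against a managed Postgres instance.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "app"),
        "USER": os.environ.get("DB_USER", "app"),
        "PASSWORD": os.environ.get("DB_PASS", ""),
        "HOST": os.environ.get("DB_HOST", "127.0.0.1"),  # RDS/Cloud SQL endpoint in production
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
```

The number of containers/servers is then set (or autoscaled) in the orchestrator rather than in the app itself.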
Always enjoy Mark's videos... will definitely be purchasing the deployment course in a few days.
Thanks so much really appreciate that.
Now I understand what scaling up and scaling down mean. Thank you ❤️
Excellent! You're very welcome my friend.
At 5:55 you said the serverless option is suitable for a small app without many users. So how many users? Is 1,000 users a day good to go with this option? Thank you for your great presentation of this topic.
Thanks for the comment and apologies for the late response! I wouldn't say that serverless is only suitable for a small app with not many users. What I'm saying is that it's usually free (or very, very low cost) for a small app with few users... Serverless is a great option for very large applications (I think Google runs a lot of their projects on the Google App Engine serverless tech), however it can become costly depending on which services you're using. For example, on GAE you could run a static website that could easily handle 1,000 users per day for < $1 (or free); however, if those 1,000 users each transferred 500GB of data through Google Cloud Datastore and 2TB through Google Cloud Storage, that would be much more expensive.
Hi, please make a video on how to use Docker and Kubernetes on Ubuntu running in a hypervisor, in order to deploy a Django project onto the server.
I have a small Django-based database application that needs to be deployed directly on a server, as this is the client's requirement. Can you guide me or share a video link on how I can do that? The app is ready; only the deployment remains.
I believe the third option is perhaps the best, because with Kubernetes you can choose between GCP and AWS and it's scalable. It's not very easy to implement, though, because it requires knowing and applying a lot of different skills.
I use Flask, but your tutorial is still amazing.
Excellent
Thanks, this is super helpful! Do you have any resources for Option 3? I would like to learn how to set up a managed Docker orchestration service!
Thank you, glad it was helpful. We have an in-depth course that teaches AWS ECS here: londonapp.dev/devops-aws-terraform. We are also planning on creating more free YouTube content on the subject in the near future!
Really useful!!! Do you know if using Elastic Beanstalk is a different scenario? Or is it just the same as the second option?
Thank you! I believe Elastic Beanstalk is most similar to the serverless technology (comparable to Google App Engine).
so clear, thanks
Thank you!
Thank you, this was very helpful.
Great to hear, thanks for watching!
For the serverless option, how much do you really need to tailor your application to the particular cloud vendor? I would not think much.
I thought that too, but it was more than I expected. For example, I built an app that uses Google App Engine, and I needed to implement background tasks, which had to be done in quite a bespoke way.
Awesome content
Thank you Laureano.
Great job babe.
Thank you.
I need to deploy a Django web app that contains LFS (audio, video files, etc).
What do you recommend I do? ( is not an option; I want these files to be stored on my own website)
You forgot to mention one important con of Google's servers, and that's freedom of speech.
helped a lot, thanks
Thank you very much for your helpful video.
Any good books on systems design out there as well?
Thanks a lot for this video, looks like a good overview. But... if I'm a beginner developer who wants the freedom to play with a few live web projects, as well as work for small clients, isn't a VPS my best bet? I'm deploying my first real-world project and the client already has a VPS, and I'd like to avoid repeating this agony... So if getting my own VPS is a good idea, how would I automate the deployment process to be able to cookie-cut projects? No need for a full answer; I'd appreciate keywords/links to look up.
Hey, good question. If you deploy the VPS to EC2 you could use Terraform with a user data script (basically a shell script). Otherwise, you could look at using Ansible.
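For anyone wondering what the user data idea looks like in practice, here's a rough sketch using boto3 rather than Terraform (same concept: boot an EC2 instance and have a shell script set the app up on first boot). The AMI ID, region and image name are placeholders, not anything from the video:

```python
# Sketch: launch an EC2 instance that installs Docker and starts the app on first boot.
# Assumes AWS credentials are already configured; all IDs/names below are placeholders.
import boto3

user_data = """#!/bin/bash
apt-get update -y
apt-get install -y docker.io
docker run -d -p 80:8000 myregistry/myapp:latest
"""

ec2 = boto3.client("ec2", region_name="eu-west-2")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Ubuntu AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,  # runs once when the instance first boots
)
print(response["Instances"][0]["InstanceId"])
```

Terraform's aws_instance resource accepts the same kind of user_data script; Ansible would instead connect over SSH after the server is up and configure it from a playbook.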
@LondonAppDeveloper Why not just simple orchestration like Dokku? Pros and cons? Might be a good topic for another video as well.
Yes, please make a DRF deployment video on AWS.
Very good video thanks! :)
Please make a video on serverless Django.
That's going to be our next video! Should be out in a couple of weeks.
@LondonAppDeveloper I am eagerly waiting for that, especially the Google serverless deployment.
@tluanga-ruatpuii-pa We have this one available already: londonappdeveloper.com/2021/05/03/deploying-django-to-google-app-engine-using-docker/
Any tutorial covering option 2?
Not yet but I'll keep it in mind for future content!
I have a problem: I have to deploy a Django app on a server. I know, I'll use Docker. Oh, now I have two problems.
😁
How do I change API endpoints when deploying publicly, when localhost is in the endpoints (e.g. 120.x.x.x:8000/to-dos)? How do I change this when deploying? Won't it affect the endpoint for GET and POST? Please help, anyone.
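In case it helps anyone landing here: the usual fix is not to hardcode the host in the endpoints at all, but to read the base URL from the environment, so only the host changes between local and production while the GET/POST paths stay the same. A tiny sketch (the variable name API_BASE_URL is just an assumption):

```python
# Sketch: keep the host configurable, keep the paths identical everywhere.
import os

API_BASE_URL = os.environ.get("API_BASE_URL", "http://127.0.0.1:8000")

# Wherever a full URL is needed (e.g. in a client script or test):
todos_endpoint = f"{API_BASE_URL}/to-dos"  # same path locally and once deployed
```

Set API_BASE_URL to the public address (or, better, a domain name) at deploy time and nothing else needs to change.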
I do not understand English; I know Hindi. But I put my mind to it and I am talking to you through Google Translate.
Can you make a video on session tokens, refresh tokens, JWT, OAuth, etc., related to security concepts in Django?
Thanks a lot for the feedback, I'll plan to make a video on that in the future!
@LondonAppDeveloper Okay, I am waiting for the security series. It would be better to start a Telegram/WhatsApp group.
Great info. How about PythonAnywhere?
Thank you!
You're welcome!
Is Heroku included in point number 3?
Perhaps... I've not used it much myself, but it seems like a mix of option 3 (docker orchestration) and 4 (serverless). Really, it's probably closer to 4.
How can I deploy ASGI and also run HTTPS with Nginx and SSL from Certbot?
Not sure off the top of my head.
What about using ansible?
Yeah, good point. Ansible would be useful for option 1. It would be better than option 1 without Ansible, but there are still various limitations with it, such as a single point of failure etc... Of course, it is possible to use Ansible to set up multiple servers with redundancy etc...
@LondonAppDeveloper Option 1 is the least limiting option. You can absolutely replicate most of the features you get with Kubernetes, and those generally considered standard today, on a "bare" Linux server, but you have to really know what you're doing and it may be tedious. We weren't in the Stone Age of deployment a few years ago.
Quite weird that you didn't note any pros for it.
Django CI/CD with GitLab, please.
What category would PythonAnywhere fall under?
Good point. I haven't used it much but I would probably place that in serverless... Happy to be corrected if someone has used it a lot!
I loved it... insightful...
Thank you
Which of these 4 options does deployment on Heroku come under?
That's serverless.
Bro looks like an expert in metaprogramming XDDD
Good video, but this is not what I was looking for :D
Thank you. What were you looking for?
@LondonAppDeveloper I was looking for how to install Django on the web server.
Hello my brother, my name is Maya and I am from India. I want to get an application made, a mobile Android app!
Cool!
Option #1 with imaging is the superior option. Once your image is created you can scale up and down infinitely. You have a very limited scope of knowledge. Like the width of a toothpick. Both option #3 & #4 are trash options. Option #2 could work well if you have AI managed services. You need to open your mind about 359º, kid.
Thanks a lot for watching our video and taking the time to give feedback. I've been programming for 20+ years and working in the industry for about 10, and I haven't seen that approach used recently.
If I understand correctly, the approach you're talking about involves creating an image of an entire operating system with all the dependencies/code installed, and using it to create as many virtual machines as you need to support your load?
In my opinion, the issues with this approach are:
1) Very large images which need to be stored and transferred as needed (you need to image an entire OS...)
2) I expect it takes a lot longer to spin up an image compared with running a Docker container
3) You need to re-build these images every time you need to upgrade packages or the OS, which takes a long time
Of course there may be cases where this type of process is needed, but I really can't see how it is the "superior" option compared to deploying lightweight Docker images using some form of orchestration system.
is this real
Yup =)
B*llshit, too much talk.
fishy
What's fishy?
Thanks!
Thank you!
You're welcome!