Links to code examples, docker basics and timestamps:
____________________________________________________________
Basics of Docker: th-cam.com/video/k29FmUcihSA/w-d-xo.html
💻Code Examples from this video: github.com/monisyousuf/youtube-tutorials/tree/main/CD_007_docker_2
#########
Timestamps:
#########
00:00 Intro
00:47 Multi Service Applications
03:22 What is docker compose?
06:16 Running Multiple Containers
09:13 Docker Networking
12:29 Handling Dependencies
14:06 What's Next?
This was incredibly clear and to the point. As an experienced dev trying to make use of docker, I got more insight from this one video than a bunch of others and reading some documentation combined!
This video covers more concepts, and covers them better, than some Pluralsight courses! Great job.
I am not sure if this is your explanation style for all your videos; however, this one had one of the best explanation methods, at least for me.
great explanation !!
Excellent explanation!!!
Excellent explanation that cleared many doubts, can't wait for the next one!
Very clear and focused presentation! Thank you!
Clear, and concise. The animations made the explanation ever clearer.
Thank you for the effort
Awesome! I hope a lot of people support you by sharing your videos to their networks 🤞🏻 best of luck, Monis
I can't wait to see your next video! These are masterpieces.
Good video! All the concepts are well explained. 👍
you are amazing. omg. unbelievable explanation.
what a great work, Well done Monis
i already knew all of that, but subscribed anyway cus the video is just so well done. Keep up the high quality!
great tutorial @Monis, thanks a lot.
Your content is really good. Much thanks.
Great work 🎉 awesome,
May I know what tool you are using for these animations?
I use Final Cut Pro
good presentation
ooh man need more of this also fast cause i have interview on this XD
I don't know whether this comment should make me happy or sad :D
@@MonisYousuf the one motivate you dude😎
Also how ya make those edits? Seems amazing
Thank you! I use Final Cut Pro
Great stuff bro, thank you very much
would love to know how this works for production vs local environments
In the YAML file you can create a service and configure a name for its container with the container_name key. When we create a container with a given database and a container with a backend, what name should we use in the connection string (service name or container name)?
Either one can be used. Both will work.
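A minimal sketch of why both work, assuming a Postgres database and made-up names (`db`, `my-database`, `backend`): Compose puts both services on the same network, where the service name and the explicit container name both resolve as hostnames.

```yaml
services:
  db:
    image: postgres:16          # hypothetical image/version
    container_name: my-database # optional explicit container name
    environment:
      POSTGRES_PASSWORD: example
  backend:
    build: ./backend            # hypothetical build context
    depends_on:
      - db
    environment:
      # Either hostname resolves on the default Compose network:
      #   service name:   postgres://postgres:example@db:5432/postgres
      #   container name: postgres://postgres:example@my-database:5432/postgres
      DATABASE_URL: postgres://postgres:example@db:5432/postgres
```

If you omit container_name, the service name is the stable hostname to use, which is why many compose files skip container_name entirely.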
@@MonisYousuf Great job. Thank you for your work.
I'm sure with such cool animations you will have many subscribers
Thank you so much!
Hey, Monis. I have a couple of questions. I hope you have some time to address those.
1. Your "apps" seem completely developed, so one can directly build and deploy them (which you will explain in the next video). But generally, app development is never finished and we need to continuously make changes. Now, if there is only one developer, it's alright, as we can simply make changes locally and the `--build` flag will recreate the docker image. However, for a team working on the same project and/or if there are multiple branches of the code (prod, dev, test, etc.), can we leverage docker-compose to keep things streamlined?
2. vscode has a feature named dev-containers. This is great because I can create a container and share it with anyone who wants to develop with the tools and versions that I am using. They won't have to set up anything. They can run the container and have an exact replica of my dev environment. I am having a hard time figuring out how to set up dev-containers for each segment of an app (frontend, backend, and DB, for instance) that can communicate with each other during the development phase.
3. At 7:44 you mentioned that we can provide environment variables in the docker-compose file. I don't think that is secure at all, as I can read the docker-compose file you have shared in the GitHub repo. How can I keep it safe? Is the only way to keep it secure not to share the actual docker-compose file, but rather a censored version of it?
I am fascinated by DevOps and SecDevOps but I have very limited technical knowledge. If my queries don't make sense, let me know and I will try to clarify what I mean by these questions.
Thanks for all the effort and the videos. I look forward to the next one. I have one potential question regarding the content of the next video as well. Are we going to see IaaS, where we deploy within, let's say, an AWS EC2 instance and potentially have more control over the system, or are we just planning to explore the managed version of this in AWS, aka AWS ECS? Perhaps you could show a contrast between these. (Just a suggestion.) Basically, are we trying to outsource the job of the DevOps team to AWS, or are we leveraging the features of cloud computing while retaining control over the system and having our own DevOps team?
Cheers, HYP3R00T
Sure :)
1. Docker Compose is more geared towards local development, where you can make changes (let's say to just the backend and database), run docker-compose up, and see the results during the development phase. However, in reality, on higher environments you won't usually use docker compose. If we take this example, your backend, frontend and database would most likely be hosted on different types of cloud infrastructure (e.g. you'd have an FE cluster, a BE cluster, a DB cluster and so on). For each of these clusters, you also need to manage versions. Usually your CI/CD pipeline builds an image in one step, tags it with a version and pushes it to a private docker registry. Thereafter, in another step, the latest image is pulled onto the server(s) and deployed. And you would have to maintain different docker image versions for the frontend, the backend (and probably other downstream microservices). Maybe your frontend underwent 20 changes since the time it went live (image version: v0.20) and your backend only underwent 10 (image version: v0.10). It could get counter-productive in higher environments to intertwine these versions and then run all of them on the same instance via docker-compose.
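To illustrate the versioning point, independently tagged images per service might look like this in a deployment config; the registry URL and tags below are made up:

```yaml
services:
  frontend:
    image: registry.example.com/myapp/frontend:v0.20  # 20 releases since launch
  backend:
    image: registry.example.com/myapp/backend:v0.10   # only 10 releases
```

Pinning explicit tags like this (rather than `latest`) is what lets each service be released and rolled back on its own schedule.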
2) To your question, "I am having a hard time figuring out how to set up dev-containers for each segment of an app (frontend, backend, and DB for instance) that can communicate with each other while in the development phase":
You just need to run docker compose up. If you need to make changes, change the code and re-run docker compose with --build. This will automatically create the containers and the network for these apps on your local environment. Your colleagues will be able to replicate this in the same way.
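The key point behind this answer is that Compose creates a shared default network automatically, so the services reach each other by service name with no extra wiring; a minimal two-service sketch (the names and ports are illustrative):

```yaml
# Run with:  docker compose up --build
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
  backend:
    build: ./backend
    # the frontend container can reach this service at http://backend:8080
    ports:
      - "8080:8080"
# No explicit `networks:` section is needed — Compose creates a default
# network for the project and attaches both services to it.
```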
3) True, setting environment variables in the docker-compose file is not best practice. It was done for the sake of simplicity and for local development. In reality, it depends on the sensitivity of the environment variables' values. Some environment variables are not sensitive (e.g. an environment variable for a public URL, added for standardisation and to make things less error-prone). Depending on where they are used (app or instance level), they can be added in docker's .env files, the build pipeline, or an orchestration engine like Kubernetes. However, environment variables that are secret (database passwords, keys and such) and live on higher environments should never be in cleartext, nor should they be in compose files or environment files. For those, as a best practice, we should use a credential store.
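For local development, one common middle ground is the .env-file approach mentioned above: keep secrets in an untracked file and reference it from the compose file. A sketch with made-up variable names (the .env file would go in .gitignore):

```yaml
# docker-compose.yml
services:
  backend:
    build: ./backend
    env_file:
      - .env   # kept out of version control; only a .env.example is committed
```

```
# .env (not committed)
DB_PASSWORD=changeme
API_KEY=not-a-real-key
```

This keeps the shared compose file safe to publish, though for higher environments a proper credential store is still the right tool.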
As for the next video - well, you'll have to wait for it ;)
@@MonisYousuf ❤Thanks for responding. Can't wait for the next video.
Super. I have one question here: how will the backend know the database has started? What condition do we have to specify in the docker compose file?
And a follow-up: what software are you using to develop this type of content with the animations? Thank you
At 13:34 we specify that the backend app depends on the database, on the condition that the service should be started (service_started).
This option is the default; it simply waits for the container to be "started" and will not check the internal status of the service once the container has started. For this reason, this option doesn't require anything "extra" to be defined in the backend_database service.
Since our database image starts very fast and we're not doing a lot of customisation there, it shouldn't be a problem.
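When service_started isn't enough (e.g. a slow-starting database that accepts connections only after initialisation), Compose also supports the service_healthy condition, which does require that "extra" piece: a healthcheck on the database service. A hedged sketch for Postgres, reusing the service names from the video:

```yaml
services:
  backend_database:
    image: postgres:16   # hypothetical image/version
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  backend:
    build: ./backend
    depends_on:
      backend_database:
        condition: service_healthy  # wait until the healthcheck passes
```

With this, the backend container is only started once `pg_isready` reports the database as accepting connections, not merely once the container process exists.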
I use Final Cut Pro
Got it. Thank you
Great work man!
Thank you!