Wow. Most videos are nigh-useless; I only watch them while I eat or do something else. But you have managed to create one of the most straightforward, well-explained, and information-dense videos I've seen. Thank you and congratulations!
Agustin, Thank you for the kind words.
I wish all tutorials were like this one. Concise, to the point and no-nonsense. Thank you very much, kind Sir!
Thank you for the kind words.
I loved this tutorial!! Thank you very much!!
Glad it was helpful! You're welcome.
More videos in the AWS/Container/Kubernetes in the pipeline @ devteds.com
Superb sir, we expect more videos on AWS-DevOps
Great video without wasting our time. Thank you so much, sir. I am a new subscriber.
This is such a great tutorial! Thank you for your great work!
I am glad it was useful. Thank you 🙂
Best best best video ever. Thank you!
Thank you Erica
Great video! Concise, and it explains each step very well. Thank you for sharing it!
One quick question: the app-cluster.yml would create a new cluster when you run it, so if I have an existing cluster created previously, how do I reference it in the template? The reason for this is I'll need to deploy different containers and different versions of each container in the cluster, and I don't want to create a new cluster every time.
Thank you for your time!
Thank you @lijunchen5000. I'm glad you found it helpful.
If you already have a cluster created, you can pass that into the create-stack command (something like "--parameters ParameterKey=ClusterName,ParameterValue=mycluster").
You can use the same stack YAML template for multiple services. For example, you can use api.yml to create multiple stacks for your variants of container services within the same cluster. In the Service definition in api.yml, instead of reading the cluster name from the Stack Output (using ImportValue), use the parameter ClusterName. Similarly, you can parameterize the values for scale, service names, etc. (there is a rough template sketch after the example commands below).
aws cloudformation create-stack --template-body file://$PWD/infra/api.yml --stack-name api1 --parameters ParameterKey=ClusterName,ParameterValue=mycluster ParameterKey=ContainerImage,ParameterValue=myapp:v1
aws cloudformation create-stack --template-body file://$PWD/infra/api.yml --stack-name api2 --parameters ParameterKey=ClusterName,ParameterValue=mycluster ParameterKey=ContainerImage,ParameterValue=myapp:v2
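For reference, this is roughly what the relevant part of api.yml could look like with the cluster passed in as a parameter. It is only a sketch; the parameter and resource names (ClusterName, ContainerImage, TaskDefinition) are assumptions and may differ from the actual template in the repo.

Parameters:
  ClusterName:
    Type: String
  ContainerImage:
    Type: String

Resources:
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref ClusterName          # instead of !ImportValue on the cluster stack's export
      LaunchType: FARGATE
      DesiredCount: 2
      TaskDefinition: !Ref TaskDefinition  # task definition defined elsewhere in the same template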
Very neatly explained, keep it up!
Glad you liked it. Thank you.
I guess what impresses me about this is that you're doing the networking, the orchestration and the containerization :-)
Thanks for the feedback, Jonas. I think that's the part that took me a good deal of time to understand and define before deploying a container application. I thought a quick walkthrough of the networking to start with would help give a full picture of what goes into deploying a container application on ECS. Thanks for noticing the details.
Outstanding HowTo! A great way to get started with ECS on AWS!
Thank you Jeff.
This gives me an insight on CloudFormation. Thank you !
Sumesh, I'm glad you found it helpful.
Man, you are great!
Very good tutorial. It helps me start building applications with docker on ECS. Thanks heaps
Thank you Selim. I am glad that it was helpful.
Really great video. So well explained. Thanks so much for your efforts.
You're very welcome!
Amazing tutorial from scratch. Just solved my problem of automating.
Thank you Ankit. I am glad it helped.
@@ChandraShettigar Can you make more such videos on DevOps using the AWS CLI?
Ankit - Thanks for the interest. Yes, I will. I haven't been able to publish one in a while, but I will soon publish a couple of videos on AWS and Kubernetes. And yes, it will be more CLI/API than the web console.
@@ChandraShettigar I tried your template for spinning up a SonarQube container through Fargate, but the strange thing is that I only see the SonarQube loading page when I hit the ELB DNS. I don't know why it isn't loading further, and I don't see any errors on the ECS console either.
Thanks a lot for your video! It has been very useful!
I'm glad you found it helpful.
Great video and valuable content. What changes would you make to get this production-ready? For example, I see the load balancer using HTTP; it will need to be changed to HTTPS with a certificate. Besides, the internet gateway will allow inbound and outbound connections, which makes it prone to attacks; a better approach would be using NAT. Also, how do I link a domain to the load balancer, and what other suggestions would you add to make it production-ready? Thanks a lot.
I am glad you found it valuable.
There are a few things I would do to make it prod ready,
1. You're right, it can't be HTTP for prod. Unless there is a need for HTTP traffic, I would add a ListenerRule to the HTTP Listener to redirect to HTTPS/443. That should make it HTTPS always. Then add a Listener for HTTPS and use that Listener to map all the TargetGroups and ListenerRules across all the services that you deploy to ECS (a sketch of the two listeners follows this list). Also, for the TLS certificates, I would use AWS Certificate Manager, which makes it easy to map the certs to the Listener and track the expiry.
2. For additional security, configure WAF at the ALB level so that the most common web attacks can be addressed before they hit the ALB or the application services.
3. Yes, the InternetGateway is good for the public subnet, meaning that it is good for web traffic. Put the private subnets behind NAT and restrict access to the resources in the private network. This involves a bit more networking detail.
4. To map the domain names, there are two parts. If you use Route53, then it gets easier: you just need to pick your domain and map it to the ALB. If you're using an external domain registrar, you can CNAME the sub-domains (eg: api.example.com, www.example.com, blog.example.com) to the ALB's domain name. Never use the IPs of the ALB as those are not static. One issue with using an external domain registrar is that you can't set the A record for the root/apex domain to the ALB unless the registrar allows an Alias for the root domain.
A few other things to make it prod ready,
- Use a multi-stage build approach for building docker images. Make sure no sensitive configs go into the docker image.
- If there are secret configs, I would have the container pull those from AWS Secrets Manager instead of setting them as env variables. This is one thing I see most people ignore or defer until later. I like this to be addressed first.
- If you need to further restrict access to the data tier, I would move that further behind into a third subnet (say a data subnet, or pii/pci, etc.).
- Not related to the stack, but I might go with Terraform instead of CloudFormation, or maybe AWS CDK (I am hands-on with CDK), which makes your infra code modular and easier to collaborate on.
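To illustrate item 1, here is a rough sketch of the two listeners (these would live under Resources in the load-balancer stack). The logical names LoadBalancer, CertificateArn and DefaultTargetGroup are assumptions, not names from the video's templates, and the sketch uses a default redirect action on the HTTP listener rather than a separate rule, which achieves the same result.

HttpListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref LoadBalancer
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - Type: redirect                  # send all plain-HTTP traffic to HTTPS
        RedirectConfig:
          Protocol: HTTPS
          Port: '443'
          StatusCode: HTTP_301

HttpsListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref LoadBalancer
    Port: 443
    Protocol: HTTPS
    Certificates:
      - CertificateArn: !Ref CertificateArn   # ACM certificate ARN passed in as a parameter
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref DefaultTargetGroup

# TargetGroups and ListenerRules for each ECS service then attach to HttpsListener.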
PS: This video was made a few years ago and the syntax or configs may have changed.
While I like Kubernetes for the most part, I think ECS can be a good start and it does have advantages. If you would like to consider Kubernetes or want to explore it, here is my course you might be interested in - www.devteds.com/k8s
I hope this helps.
Thank you very much, sir, for the straightforward explanation. Is it possible to pass user data to the instances while using the Fargate launch type?
Glad it helped, Deepa. It's been a couple of months since you asked the question. Sorry I didn't reply early enough. Let me know if you're still looking for an answer to that question.
Thanks a lot, bro... very helpful. I appreciate your efforts.
Thank you Venu.
Thanks for the great video!
But I didn't understand why we ended up with 2 target groups in the end and which instance each represents.
thank you!
Thanks for the feedback, Saja.
At the minimum, it is better to have the individual service instances grouped into separate target groups for traffic routing purposes. For example, requests to "/api/books" get routed to Target Group 1, and requests to "/api/price" get routed to a different target group. Two separate services or micro-services, possibly owned by separate teams. You get a clear segregation of ownership, security, traffic routing, etc. I hope that explains it.
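As a rough sketch of that routing (assuming the listener's ARN is exported by the cluster stack; the export and resource names below are made up for illustration):

BooksListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !ImportValue 'app-cluster-ListenerArn'
    Priority: 10
    Conditions:
      - Field: path-pattern
        Values: ['/api/books*']        # requests for books go to Target Group 1
    Actions:
      - Type: forward
        TargetGroupArn: !Ref BooksTargetGroup

PriceListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !ImportValue 'app-cluster-ListenerArn'
    Priority: 20
    Conditions:
      - Field: path-pattern
        Values: ['/api/price*']        # requests for price go to a separate target group
    Actions:
      - Type: forward
        TargetGroupArn: !Ref PriceTargetGroup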
This has been greatly helpful! Appreciate it 🙏
One question though: the LogGroup `CloudWatchLogsGroup` in the cluster stack isn't actually necessary, right? 🙌
Thanks for the kind words. That is correct, the CloudWatch log group isn't really part of the cluster. It's a resource that gets created once, typically one group for all the services you might deploy into that cluster.
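To make that concrete, a minimal sketch of a shared log group and how a service's container definition would point at it (the names my-app-cluster and api are assumptions):

# In the cluster stack - created once, shared by every service in the cluster:
CloudWatchLogsGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    LogGroupName: my-app-cluster
    RetentionInDays: 30

# In each service's container definition - write to that group with a per-service stream prefix:
LogConfiguration:
  LogDriver: awslogs
  Options:
    awslogs-group: my-app-cluster
    awslogs-region: !Ref AWS::Region
    awslogs-stream-prefix: api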
Thanks for the great video. I'm new to this topic, so I didn't quite understand your motivation for creating several stacks in CloudFormation rather than just one for everything. What are the reasons for doing it that way?
Thank you Rob.
That was one way to modularize the infrastructure code. For example, you don't want to apply a change to the entire infrastructure (VPC > App Cluster > API) when you only need to create/update one API stack's components. A VPC may contain multiple app clusters (across multiple teams), and individual app clusters may contain multiple API services, databases, or other types of workloads.
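The stacks are wired together with exports and imports, so each layer can change independently. A minimal sketch (the export names and logical IDs here are assumptions, not the exact ones from the video):

# vpc.yml - export what downstream stacks need:
Outputs:
  PublicSubnet1:
    Value: !Ref PublicSubnet1
    Export:
      Name: !Sub '${AWS::StackName}-PublicSubnet1'

# app-cluster.yml / api.yml - import it without touching the VPC stack:
Subnets:
  - !ImportValue 'vpc-PublicSubnet1'   # 'vpc' being the stack name used when creating vpc.yml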
Could you elaborate more on the update process for a particular ECS service? I was able to figure out the initial deployment of my ECS Fargate service but I am having a hard time figuring out how to effectively update the service with a new image. I think it can be done if I just use a different image tag for each deployment but I would like to make it work with 'latest'. Any input would be great!
Trevor, I couldn't really figure out a cleaner method for updating a service (with an updated docker image) through ECS. I think the only option with CloudFormation is to change the image tag, maybe using a parameter for the tag or version.
I just updated the code repo (github.com/devteds/e9-cloudformation-docker-ecs) with some notes on deployment and a script. I hope that helps. Please share if you find a better solution.
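For anyone reading later, a sketch of the parameterized-image approach mentioned above (the logical names, values, and the IAM export are illustrative assumptions):

Parameters:
  ContainerImage:
    Type: String                       # e.g. myapp:v1, myapp:v2 ...

Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: api
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: '256'
      Memory: '512'
      ExecutionRoleArn: !ImportValue 'iam-ExecutionRoleArn'  # assumed export; needed to pull images and write logs
      ContainerDefinitions:
        - Name: api
          Image: !Ref ContainerImage   # changing this parameter on update-stack creates a new
          PortMappings:                # task definition revision and ECS rolls the service
            - ContainerPort: 3000

Note that sticking with a mutable tag like latest won't trigger a rollout through CloudFormation, since the template and parameters don't change. Outside CloudFormation, aws ecs update-service with --force-new-deployment can force tasks to be replaced, and the new tasks re-pull the tag.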
Hi Chandra Shettigar, I actually followed all the steps that you performed in the video and also referred to the "Read.md" file. But in CloudFormation it gets stuck or takes a lot of time and doesn't move forward at the "Service" event. Also, the tasks under the service keep creating and deleting themselves within 6 minutes or less.
Can you please help me with this?
Hi, Apologies for the delayed response! I hope you've managed to resolve the issue by now. If it's still persisting, could you check the logs of the tasks, especially the ones in a stopped state? There might be an issue with pulling the container image or other potential issues. Additionally, considering the source code's age (>5 years), updates might be needed to ensure compatibility with the deployment process.
Dude, this video has tons of info, thanks a bunch! But why don't you just update the stacks instead of creating them from scratch in case of errors? (Just wondering here.)
Thank you, Anderson.
It was easier and cleaner to re-create the stack instead of updating it. Also, re-doing the create-stack made sense as the failed stack wasn't in a state that could be updated.
@@ChandraShettigar It makes sense, keep up the good work!
super
Thank you Anssi
Thanks
Glad it was helpful.
FYI...
Had an issue with the api stack timing out. The following change in the iam.yml file fixed it.
From:
- 'ecr:CreateLogGroup'
- 'ecr:CreateLogStream'
To:
- 'logs:CreateLogGroup'
- 'logs:CreateLogStream'
- 'logs:PutLogEvents'
- 'logs:DescribeLogStreams'
Hey, thanks for the great video.
I have two questions:
Is the ALB endpoint fixed, or will it change every time I update the stack?
If the ALB endpoint is dynamically changing, how can I make it fixed?
I want to set up an application domain (let's say on Cloudflare) pointing to that ALB endpoint; if it changes every time I deploy a new change, it's troublesome to re-configure.
Scenario:
+ I made a code change to the docker images and want to deploy that change.
+ I re-run the deploy script.
note: subscribed ;)
The ALB endpoint (domain name) is fixed and doesn't change. On applying stack changes, unless the stack code (CloudFormation YAML) deletes or re-creates the ALB resource, the ALB's domain name doesn't change.
But know that the IPs associated with the ALB's domain name aren't static. If you use Route53 for your domain, it is easy: you don't need an IP for the A record, because Route53 lets you create an A-record Alias to the ALB endpoint. If you are using an external DNS hosting provider, you will need an IP address for the root domain (eg: example.com). And for non-root domain names (eg: www.example.com or blog.example.com), you CNAME to the ALB's DNS name.
If you need to map your root domain (using an A record) in an external DNS provider, one of the options is to add an NLB in front of the ALB. Basically, the NLB gives you static IPs and the ALB's IPs become targets for the NLB. It gets a bit more complicated again as the ALB's IPs are not static, so you will need to write some more code to watch the ALB's IP changes and update the NLB's targets. The following link might help you understand this better: aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/
And if you are not using the root domain, then a CNAME to the ALB's DNS name is all you need.
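For the Route53 case, the alias is a one-resource sketch like this (example.com and the LoadBalancer logical name are placeholders):

ApexAliasRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.       # note the trailing dot
    Name: example.com.
    Type: A
    AliasTarget:
      DNSName: !GetAtt LoadBalancer.DNSName
      HostedZoneId: !GetAtt LoadBalancer.CanonicalHostedZoneID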
I hope this helps! Thanks for subscribing :)
Chandra Shettigar wow, so detailed and insightful. Thank you so much.
cool
I have the following issue: my service is written in gRPC and uses a TCP port, e.g. 50000. I am therefore using a Network Load Balancer instead of an Application Load Balancer (which supports only HTTP). However, a Network Load Balancer does not support a ListenerRule (see error below). I am getting the following errors when creating the last stack:
- Attached to ListenerRule:
Rules are unsupported for Network Load Balancer listeners (Service: AmazonElasticLoadBalancingV2; Status Code: 400; Error Code: InvalidConfigurationRequest;
- Attached to TargetGroup:
The target group (..) does not have an associated load balancer. (Service: AmazonECS; Status Code: 400; Error Code: InvalidParameterException
I thought that they were related, but now I think that they might be independent... any idea?
Thanks a lot
Laurent, I've never tried using NLB with ECS. With an NLB I don't think you can do path-based routing (it operates at layer 4), so the ListenerRule isn't useful there, which is also why you see the "Rules are unsupported" error. The two errors are likely related: because the ListenerRule failed, the target group never got associated with the load balancer, and ECS requires that association before it can create the service. I am assuming that if you were to use an NLB, there can only be one type of container service that you serve per listener on that NLB. For such cases, I think you can use the ARN of the default target group configured on the listener and assign it to LoadBalancers.TargetGroupArn of the ECS Service. I hope that works. Let me know if you've figured out a better solution.
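A sketch of what that wiring might look like, under the assumption of a single gRPC service behind the NLB (the logical names are made up):

GrpcListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref NetworkLoadBalancer
    Port: 50000
    Protocol: TCP
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref GrpcTargetGroup   # no ListenerRule; the default action does the forwarding

Service:
  Type: AWS::ECS::Service
  DependsOn: GrpcListener                      # ensure the target group is attached to the NLB first
  Properties:
    Cluster: !Ref ClusterName
    LaunchType: FARGATE
    DesiredCount: 2
    TaskDefinition: !Ref TaskDefinition
    LoadBalancers:
      - ContainerName: grpc-api
        ContainerPort: 50000
        TargetGroupArn: !Ref GrpcTargetGroup
    # NetworkConfiguration etc. as in the ALB-based template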
What kind of size/traffic would make sense for a setup like this one?
Neithan - I don't think there is a straight answer for that. ECS with EC2 or Fargate can be used at a large scale, and I am yet to load test it with a few applications while I am also looking at Kubernetes. Fargate is ideal if you want to start very small and scale (auto-scale) your containers horizontally. The same is true with the ECS EC2 option, but that involves a little more work building the infrastructure.
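If it helps, horizontal scaling of a Fargate service is typically wired up with Application Auto Scaling. A minimal target-tracking sketch (ClusterName and ServiceName are assumed parameters, and the capacities and target value are illustrative):

ScalableTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    ServiceNamespace: ecs
    ResourceId: !Sub 'service/${ClusterName}/${ServiceName}'
    ScalableDimension: ecs:service:DesiredCount
    MinCapacity: 2
    MaxCapacity: 10

CpuScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: cpu-target-tracking
    PolicyType: TargetTrackingScaling
    ScalingTargetId: !Ref ScalableTarget
    TargetTrackingScalingPolicyConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: ECSServiceAverageCPUUtilization
      TargetValue: 60                  # keep average CPU around 60%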
Hi Chandra, can I get those YAML files?
Sudheer, I hope you found the YAML files already. If not, here is the github repo - github.com/devteds/e9-cloudformation-docker-ecs.
One doubt: which is the best way to deploy the app changes?
Hi Venu - There are a few different approaches for deploying app changes, depending on which CI/CD tools you use or the type of deployment methods you need to handle. Here are a few links that I hope are useful to you - aws.amazon.com/blogs/compute/building-deploying-and-operating-containerized-applications-with-aws-fargate/, aws.amazon.com/blogs/compute/bluegreen-deployments-with-amazon-ecs/.
What was the need to create 2 health check paths in the Ruby service if we are using only the /stat path for the health check?
Rohit, there wasn't really a need for two health check paths. One was sort of a root or home path of the service and the other is the health check path. My bad, I should have left a comment about that.
@@ChandraShettigar No problems, could be confusing for learners. Thanks for the good demo.
You have just added one container. What if we have another container too, say MySQL or Mongo, and they need to communicate?
Tara,
Firstly, I don't think ECS is an option for stateful services such as MySQL or Mongo. But if there are multiple application services (stateless services) that require inter-service communication, then you need a service discovery solution with the ECS cluster. The AWS way, I think, is using AWS ECS Service Discovery with Route 53. Check this out: aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/
If you want to containerize stateful services, Kubernetes is what you need. I personally don't think ECS is a good fit for running stateful container services.
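A rough sketch of the ECS Service Discovery (Cloud Map) wiring mentioned above; the namespace name, VPC export, and service names are assumptions:

Namespace:
  Type: AWS::ServiceDiscovery::PrivateDnsNamespace
  Properties:
    Name: internal.local
    Vpc: !ImportValue 'vpc-VpcId'

ApiDiscoveryService:
  Type: AWS::ServiceDiscovery::Service
  Properties:
    Name: api
    DnsConfig:
      NamespaceId: !Ref Namespace
      DnsRecords:
        - Type: A
          TTL: 10

Service:
  Type: AWS::ECS::Service
  Properties:
    # ... other Service properties as in the video ...
    ServiceRegistries:
      - RegistryArn: !GetAtt ApiDiscoveryService.Arn

# Other containers in the cluster can then reach this service at api.internal.local.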
@@ChandraShettigar Thanks for the response. I thought AWS ECS could be an alternative to Kubernetes. Thanks.
We have used Fargate for our application, which is running on Tomcat.
Inside our application we have one HTTPS service call (test.com) to fetch some data that is used by our application.
When we run our application inside the docker container, this test.com is redirecting to test.com.
Could you please help me with this?
Where do we stop this redirection, or apply some certificates, etc.?
Good to know you use ECS Fargate. I hope it's going well and that your issue is resolved. Sorry that I couldn't reply when you had the issue.
Can you make Kubernetes videos, please?
Yes, Mohan. I definitely have that on my list. Maybe in the next couple of months.
Could you share your repo with your templates?
Timur, please refer to the video description for more links and references.
Here is the one for this video: github.com/devteds/e9-cloudformation-docker-ecs.
Source code for other videos is on GitHub: github.com/devteds/
If we want to add RDS to this setup, how can we pass the db_host value to a container environment variable in the ECS task? Can we do it without additional scripting, only with CloudFormation features?
One option is: define a stack that creates the RDS resource (plus an additional security group for the RDS resource), output the database endpoint (of the RDS resource) as an export, and then use the import-value function to read the database endpoint and assign it to the container environment variable DB_HOST. You might as well define the RDS resource in the same stack as the application (api.yml). There may be other options, but this should work. A rough sketch follows.
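A minimal sketch of that (logical names like Database and the 'rds' stack name are assumptions):

# rds.yml - export the endpoint of the database instance:
Outputs:
  DbEndpoint:
    Value: !GetAtt Database.Endpoint.Address
    Export:
      Name: !Sub '${AWS::StackName}-DbEndpoint'

# api.yml - read it straight into the container environment, no extra scripting:
ContainerDefinitions:
  - Name: api
    Image: !Ref ContainerImage
    Environment:
      - Name: DB_HOST
        Value: !ImportValue 'rds-DbEndpoint'

# If the RDS resource lives in the same stack, use Value: !GetAtt Database.Endpoint.Address directly.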
Chandra Shettigar in my case I can use only one CF stack.
I've been trying to learn something about CloudFormation templates in the past few days. I've noticed that all the tutorials simply throw the template at you and don't explain any of it in detail. They don't explain why any of the items are there and what they do. I come away learning next to nothing. This one is about the same.