Opening all of the videos of yours that I'm watching AFTER watching them in a browser which is logged into TH-cam just so I can like them. Thanks man!
I realize this is an old video but it still applies today, which is amazing since technology is moving so fast. I would love to see using multiple different ingress controllers in the same namespace, to get a better view of using namespaces by environment like dev, model, production, or even by business unit. It's possible you already have that in a later video, so I'll keep watching. Thank you for your videos. They help!
Hi @That DevOps Guy,
I really appreciate your videos.
I think this video is too short and I find a bit of practicality missing compared to your other videos. Installation of the ingress controller was awesome, but the challenges which I feel many of us are facing are:
1) "For the LoadBalancer how did you manage to get the external IP up as localhost?"
2) "The description of the ingress which you created and its explanation."
I really hope to see a video on how to set up an LB for local machines or a kind cluster, which also demonstrates ingress.
Once again, I really love your videos and they are really one of the best available.
most insightful ingress video
These videos have helped me grasp the things I couldn't in lectures! Thank you for all the great videos! It's worth joining if anyone is on the fence.
Thanks for your support 🙏🏽
Wonderful session Marcel. Traefik is an API gateway, but how do we integrate authentication and authorization in Traefik?
Thank you! Thinking of an Ingress Controller like an API gateway really helps me to get an understanding of how Ingress works. Pretty cool and powerful stuff!!
very true!
Please show us how to use letsencrypt with ingress controllers.
Excellent video Marcel. Thanks for putting it together. I can refer to this whenever I need to refresh my Kubernetes knowledge.
Really concise with practical details.
How do we attach an ALB to an ingress controller?
But to what IP would you point DNS if you have an external cluster, not running on localhost? Is the ingress bound to a single node, or can I use any worker node or controller node?
The service IP will be a public IP if you're running in the cloud. It will be bound to any worker node if the service is type=LoadBalancer.
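As a rough sketch, a service of that type looks like this (all names here are illustrative, not from the video):

```yaml
# Hypothetical example: a Service of type LoadBalancer fronting an
# ingress controller's pods. In the cloud the EXTERNAL-IP becomes a
# public IP; on local setups like Docker Desktop it shows as localhost.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx        # illustrative name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx       # must match the controller pods' labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```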
Really nice explanation of k8s.
Awesome explanation. Just straight to the point. 👍👍👍👍👍
Awesome video... so easy to understand. Thank you Marcel... great!
Hey, one request: can you make a video on how to attach a load balancer like an NLB in front of our Kubernetes cluster that can load balance between different nodes?
Have a really old one that might help you
th-cam.com/video/xhva6DeKqVU/w-d-xo.html
@@MarcelDempers Yes, this video is great, but I am a little bit confused about how to set up both layer 4 and layer 7 load balancers for our Kubernetes cluster, especially layer 4. Thanks if you can make a dedicated video on that.
Thanks for the video with a clear and crisp explanation. In my project, I am using an API gateway, Apigee. What would my deployment flow be when I want to use both Apigee and traefik? I want all external requests to come to my Apigee proxy first.
Thanks for the kind words 😁 Your entry point for traffic would be a service of type=LoadBalancer which would point to your Apigee proxy. You would then route traffic from the Apigee proxy to a traefik service of type=ClusterIP. That way only one is exposed to public traffic. Hope that helps
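A minimal sketch of that topology, with illustrative names and ports (an assumption of how the pieces would be wired, not a tested setup):

```yaml
# Hypothetical sketch: only the Apigee proxy is exposed publicly;
# traefik stays internal as a ClusterIP service that Apigee routes to.
apiVersion: v1
kind: Service
metadata:
  name: apigee-proxy          # illustrative name
spec:
  type: LoadBalancer          # public entry point
  selector:
    app: apigee-proxy
  ports:
    - port: 443
      targetPort: 8443
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: ClusterIP             # internal only; Apigee forwards here
  selector:
    app: traefik
  ports:
    - port: 80
      targetPort: 8000
```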
@@MarcelDempers Thanks Marcel for your quick reply. Can I get an elaborate explanation of this somewhere? Also, can I have two ingress points for my kubernetes services: one for the GLB load balancer which we use in AWS, and the other to point to my east/west cluster? Sorry, I am asking for much. At least, can I read somewhere about this possibility of having two ingress points for my k8s service? Thanks.
@@pgangula Doubt you'll find documentation around that since it's unusual to run an ingress controller behind another. However you can run multiple controllers with different endpoints: link.medium.com/SXBidlcYC4
@@MarcelDempers Ok. Will certainly check this. Thanks.
Thank you for sharing your knowledge.
Hello sir, when I create a minikube instance it uses docker as a driver and runs in it. I can't access the webpage through a browser, only through minikube ssh. How can I handle this?
Awesome. Could you tell us how to secure API endpoints? I mean, it is not recommended to expose the endpoints to the public. So is there a way to secure them through the ingress controller?
Can we expose/open ports other than 80/443, say 5671, using any ingress controller (nginx/haproxy/ambassador)?
What load balancer were you using, and how did you get the service exposed with localhost as the external IP?
If you run Kubernetes on Docker for Windows (most local K8s, like Minikube, do the same), any Kubernetes service object with type: LoadBalancer will expose "localhost" by default, since it is not running in a cloud provider :)
Hi @That Devops guy,
when we use ingress with a load balancer...
1) What is the backend set for the load balancer? The ingress pods?
2) Any external request first hits the load balancer, then the request comes to the ingress controller for routing based on rules, then it comes to the actual micro-svc pod where it gets served... so how useful is the load balancer here? I mean, the LB is just 'balancing' load between ingress controller pods, not the pods with the real app.
I know I am missing something here.
You are right. The service plays the role of providing a cloud-agnostic load balancer. The beauty of service type=LoadBalancer is you don't need to write scripts for each different type of cloud provider; Kubernetes gives us an LB easily. The LB's role is to provide a public IP and entry point for public traffic. Its backend is the ingress pods, yes. Although ingress also load balances as you mentioned, its role is different. Let's say you had 300 microservices. You would not want 300 load balancers or public IPs. It's ideal to have one entry point, and ingress plays the role of deciding where traffic goes in the cluster based on rules, therefore playing a similar role to an API gateway (SSL offload, HTTP rewrite, redirects, CORS enforcement, caching, basic authentication etc.). It plays the role of a proxy. Hope this helps you 🤓💪🏽
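That "one entry point" idea can be sketched with a single Ingress fanning out to several ClusterIP services (the domain, paths and service names below are made up for illustration):

```yaml
# Hypothetical example: one Ingress routing to two of many
# microservices by path, instead of one LoadBalancer per service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com              # illustrative domain
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service    # ClusterIP service
                port:
                  number: 80
          - path: /payments
            pathType: Prefix
            backend:
              service:
                name: payments-service  # ClusterIP service
                port:
                  number: 80
```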
@@MarcelDempers Thanks for the answer... so you agree that we do not use the LB's power to load balance between 'app pods'? It only serves as a stable external IP and not much more... in this setup?
Very nice video. I would like to know the possibilities for managing Kubernetes applications in a GitOps model. I've worked with helm charts, but now Kubernetes operators seem to be the new thing. Argo CD seems cool but I have never used it. Do you know it, or have experience accomplishing a fully self-managed application in Kubernetes?
I am yet to explore Argo more, but I like the product. In terms of CI/CD and GitOps I like to keep it simple. Check out my GitHub Actions video; you can do some cool stuff with it for GitOps: th-cam.com/video/rgxbeIvQj0Q/w-d-xo.html
Also for continuous deployment in Kubernetes for my site I use Keel.sh
I have a video on that too
th-cam.com/video/Sh63SN-ySCE/w-d-xo.html
Hope it helps 🙂
Amazing explanation 🙌
Another excellent video, keep up the great work
Thank you Marcel.
Is it possible to write an ingress controller to load balance the HTTP traffic between the pods of a deployment, imposing a policy like "prefer the pod with the lowest memory usage"?
I don't believe you can do that with an ingress alone. You might be able to do that with the help of a service mesh like istio. He has a good entry-level video on it if you wanna check it out.
Thanks for the great video Marcel :) A quick question from my side - not sure if you can help? If we have a microservice that is currently running pods and is configured with a ClusterIP-type service, and we need to make calls to an external 3rd party (and this external 3rd party requires static IP addresses for whitelisting purposes on their side), what solution would be best to implement in order to achieve this for our microservice?
This depends on the cloud provider you are using. Some clouds will have your node IP as the outbound IP; others will have the LB IP (incoming IP) as the outbound IP as well, when it uses NAT. For example in AKS, an LB can have an egress IP, so you have control over it. Best bet would be to check with your cloud provider. Otherwise you can route all traffic to another proxy outside Kubernetes which can give you a fixed IP, something like Squid. Cloud providers will each have different solutions for this 💪🏽
Thanks so much Marcel - this helps a lot :)
Great content man, thanks a lot
Very well explained..! Thank u
Why couldn't you just point a load balancer service to a deployment which then forwards requests on to other cluster IP services?
This is what an ingress controller is. It receives traffic via a load balancer service. NGINX runs in the deployment and is designed to do the heavy lifting. Users write ingress rules, which are consumed and hot-reloaded into NGINX so you don't have to write configs yourself, and it forwards requests to other ClusterIP services based on domain/path.
@@MarcelDempers Thanks so much for the reply boet. Love your videos and great to see someone from SA doing so well on the platform.
My confusion with using an ingress is whether you can utilise all the features of the reverse proxy.
For example, if I need fine-grained rules with authn and authz applied etc. From my research, it seems you are limited to the config structure of k8s ingress, which the ingress controller translates into a compatible config on its side.
Is there a way to pass through ingress-controller-specific rules, or in this case would you just route from your ingress to your internal gateway to apply this sort of logic/functionality?
@@lukejkw This depends on the ingress controller. From a Kubernetes perspective it's plain and simple: terminate SSL and route based on host and path.
NGINX supports global configuration and ingress-level configs to run custom Lua for custom logic. You can embed custom nginx conf per ingress or globally.
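For example, with the ingress-nginx controller a per-ingress snippet can be attached via annotation. The annotation name is ingress-nginx's; the header value and service name are illustrative, and newer controller versions may require snippet annotations to be enabled in the controller ConfigMap:

```yaml
# Hypothetical example: embedding custom NGINX config in one Ingress
# via the ingress-nginx configuration-snippet annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: custom-ingress
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Example: demo";   # illustrative custom conf
spec:
  rules:
    - host: example.com            # illustrative domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # illustrative ClusterIP service
                port:
                  number: 80
```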
@@MarcelDempers Yeah so this is kind of where the water gets muddy for me.
If I need to create a highly customised gateway anyway, why not just skip the ingress, which seems to just provide an easy-to-use API for simple scenarios?
Is it possible to map one ALB ingress to multiple services? If so, may you perhaps advise how I would define my configuration file?
I don't believe this is possible with Kubernetes Ingress. You could create a new service to cover multiple deployments and point an ALB ingress at it, but it would be round robin. Another hack would be to have a custom NGINX server behind the ingress that routes to two services, but it could get ugly I'm afraid.
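The first workaround could be sketched like this: a single Service whose selector matches a label shared by the pods of both Deployments, so traffic is spread (round robin) across both (names and labels are illustrative):

```yaml
# Hypothetical sketch: one Service selecting pods from two Deployments
# that both carry the label tier=web; the ALB ingress would then point
# at this single Service.
apiVersion: v1
kind: Service
metadata:
  name: combined-service      # illustrative name
spec:
  type: ClusterIP
  selector:
    tier: web                 # label shared by both Deployments' pods
  ports:
    - port: 80
      targetPort: 8080
```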
Thanks a lot..
What if that ingress server goes downnnnnn?????
Greetings, I hope you can help me. I can't get my ingress configuration to match the route /posts/?(.*)/comments;
it responds with a 404.
I subscribed
Hello from Russia! Thanks for videos! (Can you record video about spinnaker? )
Thank you Marcel, but unfortunately your source code is outdated. I appreciated the overview of the topic though.
Drop the background music dude, it's really distracting...!
handsome man