Amazing... I struggled to build a multi-node K8s cluster before, but after watching this content it's even easier to set up. I'm trying it this weekend with my VMware Workstation. Thank you, keep it up, and we keep learning from your content.
Hi Kumar, Thanks for watching.
Why would anyone need all those paid, confusing courses when there is such a wonderful Venkat? The best explanations I have found. Thank you!
I agree. Thank you, Venkat.
In 30 minutes you have explained very clearly every step I needed. Thanks a lot!!
Glad you liked it. Thanks for watching. Cheers.
@@justmeandopensource I'm going to automate all this with an Ansible project. It could be interesting to put the load balancing in Docker instead of installing it directly on the host, and put everything on EC2 or other VMs.
Super tutorial and the way you update your tutorials is just awesome, you really care about what you teach!!
Hi, Thanks for watching.
Fantastic video! I've been tasked with setting up a k8s cluster on RHEL using a VIP from a network appliance. I didn't find anything on the interwebz as good as this video to explain/simplify the process! Checking out some of your other videos as well! Much appreciated! Thanks!
Glad to hear that. Thanks for watching.
This is just beyond simplicity! Nice one.
Hi Samuel, Thanks for watching.
Straight to the point and thorough. Very nice
Hi Taylor, many thanks for watching. Cheers.
Best video for k8s setup I've ever seen
Thanks
Thanks for watching. Glad you liked it.
Clearly straight forward. Thanks a lot
You’re welcome and Thanks for watching.
Great help as always ! keep it up! thank you
Thanks for watching.
Thank you so much for the amazing content! Look forward to the next one!!!
Hi Chinglong, thanks for watching.
Thanks for this awesome tutorial🍻
Thanks for watching.
Great Work. Thanks
Thanks for watching
Thank you!
Hi Ajmal, Thanks for watching.
Hi, nice explanation! You said you will do another video explaining how to set this up using static pods on the master nodes. Maybe it would also be great to explain how to use kube-vip, which would be essentially the same thing but without having to configure keepalived and HAProxy separately, plus it has some features worth checking out.
Splendid
Hi Asad, thanks for watching.
Excellent!,
Hi Tim, Thanks for watching.
@@justmeandopensource I'll implement it tomorrow and Monday when successful @customer :D. Many thanks
@@justmeandopensource Which SSH client do you use? (multiple windows? looking for a Windows application) :D
@@Muiterz It's just the standard ssh client. What you are asking about is my terminal emulator. Again, you can use any terminal emulator. On top of it I use tmux, which allows the window to be split into multiple panes.
@@justmeandopensource thank you!
Hi, this video is easy to understand. What's the name of the terminal you're using in the video? (the one that shows hints from previous commands in the background)
Thanks...!
Hi, thanks for watching. You can find more about my terminal setup in the below video.
th-cam.com/video/PUWnCbr9cN8/w-d-xo.html
@@justmeandopensource Tks so much...
@@thaocrouch No worries.
Hi Venkat, can you please post a video on installing keepalived and HAProxy on the master nodes themselves for a multi-master cluster?
Great video Venkat! One question... that --apiserver-advertise-address you configure on all master nodes, that's only needed if you have multiple network interfaces, right? Or, if you have multiple masters, do you have to declare all of them to the API through the proxy? (I have an HA cluster, but it only has one proxy for the masters)
Hi Gonzalo, thanks for watching. That option is only required if you have multiple network interfaces and you want to use a specific one for your cluster. If you don't specify that option it will use the first available NIC by default. In my case, the first available NIC is eth0 which I don't want to use. If you just have one NIC on all nodes, you can ignore this option. Cheers.
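For clarity, this is roughly what the init command looks like with both options (the IPs are only placeholders from the 172.16.16.0/24 range used in this setup, yours will differ):

kubeadm init \
  --control-plane-endpoint="172.16.16.100:6443" \
  --apiserver-advertise-address=172.16.16.101 \
  --upload-certs

# --control-plane-endpoint is the shared VIP:port on the load balancers that everyone talks to
# --apiserver-advertise-address picks which local NIC/IP this particular master announces;
#   only needed when the node has more than one network interface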
Hey Mate,
I really enjoyed your video. Would you be willing to share the tool you used for connecting to servers and managing multiple sessions on a single page?
Thanks for your video, it's very helpful for me. Could you please make a video about KubeMQ inside Kubernetes?
Hi, thanks for watching. I can try.
Hi Venkat, thank you for your tutorials, always a great help!
I have a question. I'm using the exact same setup as in your aforementioned video "Set up multi master Kubernetes cluster using Kubeadm" in a production environment on bare metal. My question is, is it possible to expose the services on the pods so they are accessible from outside of the local network using that setup? Here we plan to access services like the Dashboard, MySQL, and Nginx from outside, and since I'm kinda new to Kubernetes I'm having some trouble figuring that out.
Thanks in advance, looking forward to the next one!
This was fantastic! What about RKE2? Is it the same? Also, do you have a Patreon where I can send you a thank-you $$?
Hi, thanks for watching. RKE is a separate topic and this video is about setting everything up manually on bare metal Kubernetes. I don't have Patreon, but if you would like to contribute, there is a PayPal link on the channel page and in the video description. Thanks again for your interest in this channel.
@@justmeandopensource Ok great! I was asking the same thing about RKE2, setting it up on bare metal... Any way to contact you directly?
Hi
Can we have a video on K8s worker node app deployment?
Thanks
Great, but where in the keepalived conf is its peer mentioned? How does keepalived know about its peer without setting its IP address?
Can you please add an example of accessing a pod with NodePort?
Regards
Hi, thanks for watching. NodePort is for a service, not for a pod. You create a service of type NodePort and access whatever application/service is running on the pods behind that service. If you want to access a pod directly, you can do something like kubectl port-forward.
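Just to illustrate the difference, something like this (the nginx deployment name is only an example):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort   # service of type NodePort
kubectl get svc nginx                                       # note the 3xxxx port, reachable on any node's IP

kubectl port-forward deploy/nginx 8080:80                   # or reach the pod directly via port-forward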
I have a very good question here. First of all, thank you for your efforts. I have used keepalived before, and I was going to build the same thing you did here. But regarding keepalived, let's say after 51 downtimes of any load balancer, the keepalived priority will have gone below 0, is that correct? Will keepalived reset the number to 100 on every switch? I am interested to know what the priority of the load balancers in the keepalived configuration will be. Blessings to you and your efforts.
Hi, thanks for watching. Priority is just used to determine which of the competing backup nodes can become a master. The node with the higher priority becomes the master in case of an election. I haven't noticed the priority going to 0 despite the health check script failing many times. And priority can be any number. I believe the priority will be reset to what it was in the configuration when the node restarts, but I could be wrong. Priority is only significant when there is more than one backup node waiting to become master. If you have just two nodes, no matter what the priority of the other backup node is, it will become master when the original master node crashes.
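For anyone curious, this is roughly the shape of the keepalived config I mean (the VIP, script path and numbers are illustrative, not copied verbatim from the video). As far as I understand, the weight on the check script is applied while the check is failing rather than being subtracted again on every failure, so the priority does not keep dropping forever:

vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2          # effective priority is lowered by 2 while the check is failing
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER       # BACKUP on the second load balancer
    interface eth1
    virtual_router_id 51
    priority 101       # higher number wins the election; e.g. 100 on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass somepass
    }
    virtual_ipaddress {
        172.16.16.100
    }
    track_script {
        check_apiserver
    }
}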
@@justmeandopensource Another question please: what is the difference between control-plane-endpoint and apiserver-advertise-address?
What is the usage of each one, please? Thank you.
Hi Venkat, can you post the link to the video where you have set up keepalived + HAProxy as a pod? Thanks in advance
did he ever post this?
Can we use hardware load balancer instead of HAproxy?
Hi, thanks for watching. Hardware load balancer is an overkill in this setup. Why would you want to go the hard way?
Thank you very much, great tutorial. How would you address SSL certificates and access from outside of the local network? Would there be port forwarding to the virtual IP? Thanks again.
Hi, thanks for watching. These clusters that I am running are just for demo purposes and I only access them from my host machine, so I'm really not worried about accessing them from outside. If I wanted access from outside the host machine, I would have set up a bridged network and got a LAN IP for each VM, so that the VMs can be reached from all other machines in my LAN and not just from the host machine running them.
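If anyone wants to try the bridged approach, the Vagrant side is just a one-line change per VM (the interface name here is only an example, pick whatever your host uses):

node.vm.network "public_network", bridge: "enp3s0"

With that, each VM picks up an address from your LAN (DHCP, or you can pin a static one with ip:), so other machines on the LAN can reach the VIP as well.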
found the solution :D
kubernetes.io/docs/setup/production-environment/container-runtimes/#container-runtimes
Hi, what about a high availability plan for etcd?
Hi, thanks for this tutorial.
How do you do this with keepalived/HAProxy installed internally on the masters, and how do you test the HA?
Hi, thanks for watching. It's best to have keepalived/HAProxy on a separate set of machines for a true HA scenario, but you can also have them on the master nodes. Same process. Testing HA is done just by shutting off master nodes one by one and seeing if you can still access the cluster.
Great video! after setting up my cluster, when I turn off one of the control plane machines to simulate a problem, HAProxy and Keepalived work as expected, but if you check the healthz endpoint you will see one error: [-]etcd failed: reason withheld. I guess I need to watch more videos to find out how to HA the etcd part. Another suggestion: instead of blindly turning off the firewall, advice would be appreciated about which ports to actually open.
How to set domain + ssl for keepalived virtual IP?
Can I configure HAProxy and Keepalived directly on the master node VMs to achieve a working high-availability setup without needing separate load balancer VMs (like loadbalancer1 and loadbalancer2)?
Thank you so much for the great tutorial. Can you share which terminal tool you use?
Hi thanks for watching.
th-cam.com/video/PUWnCbr9cN8/w-d-xo.html
Hey, thanks for the amazing tutorial. After setting up this HA, I tried to start only one load balancer and one master, and it failed when running kubectl. It works normally if I keep two masters running. Can you help me check? Thanks.
Hello, I have a problem with joining the worker node; all other configurations worked.
When I use the kubeadm join command on kworker1, it seems unable to connect properly.
Perfect tutorial! I have a question though. How could you migrate an existing cluster to an HA one?
Hi, thanks for all the tutorials. My question is, how can I configure multi-master on an already set up Kubernetes cluster? Thank you
Short answer is you can't. You may have noticed that I passed --control-plane-endpoint option to the kubeadm init command during cluster initialization. You can't add additional master nodes to an existing single node cluster.
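A rough sketch of the difference (the VIP, token, hash and key are placeholders; kubeadm prints the real values when you init):

# HA-ready init, pointing at the load-balanced VIP instead of a single master's IP
kubeadm init --control-plane-endpoint="172.16.16.100:6443" --upload-certs

# then, on each additional master, the join command printed by init, along the lines of:
kubeadm join 172.16.16.100:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>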
Hi Venkat, This video is really helpful. I have a query, Will there be any split-brain situation in this case if two nodes or more than 50% nodes are down? If so then how can I overcome the situation between hypervisors? Actually, I have only two Proxmox hypervisors.
Hi, Thanks for watching. Split brain case is possible. The approach would be to bring the third node up as soon as possible. Don’t leave the cluster with 2 nodes for longer.
Can you do the same with Cilium? I tried, but the cluster always crashes for some reason.
Hi Venkat, great presentation. I'm more into OpenShift and trying to understand how vanilla K8s works. In OCP, the 443 frontend is forwarded to the backend nodes where routers are configured; that way we can manage SSL termination at the LB for applications if needed. However, I don't see any 443/80 port config on the LB/HAProxy in k8s discussions. How is that managed here? Any insights on this would be helpful. Thanks.
Could you please share a video on Kubernetes volume snapshots?
Hi there
Great video. Do you have to change the ethernet interface to add the VIP?
If so, can you point me to a link for that using Ubuntu 22.04.1?
Hi, thanks for watching. The infrastructure for this video has been provisioned using Vagrant and VirtualBox. If you look in my Vagrantfile, I have added a private network in the range 172.16.16.0/24, and these addresses will be on the eth1 interface on all my VMs. And at 8:56 you can see me configuring keepalived in /etc/keepalived/keepalived.conf, where I specify which interface to use, and I have specified eth1 there. Hope it helps. Cheers.
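Roughly what that looks like in the Vagrantfile (the box, hostname and IP here are illustrative, not copied verbatim from the repo):

config.vm.define "loadbalancer1" do |lb|
  lb.vm.box = "generic/ubuntu2004"
  lb.vm.hostname = "loadbalancer1"
  lb.vm.network "private_network", ip: "172.16.16.51"
end

That private_network line is what shows up as eth1 inside the VM, and eth1 is the same interface name you then reference in /etc/keepalived/keepalived.conf.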
@@justmeandopensource I first added the vip/24 to each interface, then when I shut everything down and brought up the load balancers one at a time for testing, I saw that it was working with the vip/32, so I removed the vip/24 from the interface. Thanks for your help.
@@hprompt166 You are welcome.
How do you secure the cluster for production in case you have a public IP for the keepalived VIP, apart from firewall rules?
Are you using the xenial kube repo for Ubuntu 20.04?
Yeah. I know that is odd, but that's what's suggested in the official docs.
kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
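For context, this is the sort of thing the docs suggested at the time (the "xenial" component name was reused for all newer Ubuntu releases, which is why it looks odd on 20.04; note that apt.kubernetes.io has since been deprecated in favour of pkgs.k8s.io, so check the current docs before copying this):

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl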
Hi, I love your channel.
I followed your tutorial and ended up with the same internal IPs, and I keep getting the error "error upgrading connection: unable to upgrade connection: pod does not exist" while trying to access or port-forward nginx. Could you help me with that?
I followed the same setup, however when one HAProxy server is up with the virtual IP assigned, the haproxy service on the other server fails to start with an error that it cannot bind the IP address as it is already in use... Is that normal behaviour or have I missed anything?
Great tutorial. If I add a service to this cluster, how can I connect to this service from outside the cluster? Can I use the virtual IP, or do I need another load balancer to handle frontend requests?
Hi Venkat, when setting up the cluster with a static pod load balancer, will we have to use a different IP in the configuration file and add it to the manifest directory?
Hi Venkat, first of all, great video! It helped me a ton! I was just wondering about something: I tried to use a similar setup but with two master nodes instead of 3, and once I brought one of the master nodes down (shut down the machine) I was not able to access the K8s API from the second master node. Do you know why that is? With 3 master nodes everything works perfectly. Another question that I have is about using the load balancer machines as NFS servers. Would you recommend such a solution or not, and how would you implement NFS storage from a high availability perspective?
Hi Cezar, thanks for watching. With the setup explained in this video, I haven't tried it with 2 master nodes as that is not a proper cluster anyway (with only two masters, etcd loses quorum as soon as one of them goes down, so the API becomes unavailable). But I can surely test it.
And NFS doesn't work very well with Kubernetes persistence. It's not distributed and fault tolerant by default. You can look at cloud native storage solutions like OpenEBS, Ceph/Rook, Longhorn, GlusterFS. I have done videos on a few of them.
Longhorn cloud native distributed storage
th-cam.com/video/SDI9Tly5YDo/w-d-xo.html
Glusterfs fundamentals
th-cam.com/video/IGEtVYh0C2o/w-d-xo.html
Glusterfs in Kubernetes
th-cam.com/video/65XOlaERvjw/w-d-xo.html
Hi, in my case (RHEL 8), both of my LBs have the VIP. Is that a problem?
I wonder if you would like to make a video about installing Percona XtraDB Cluster 8.0?
Hi, how do I resolve a readiness and liveness probe issue in Kubernetes?
Could you please help me out with this?
Hi, please give me more context about your issue/question. With this one line, I don't know what issue you are having with readiness and liveness probes. I have done a video on this topic. See if this helps.
th-cam.com/video/3TJRkKWuVoM/w-d-xo.html
Cheers.
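For anyone who wants to experiment before watching that video, here is a minimal pod spec with both probes (the image and paths are just examples):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:        # traffic is only routed to the pod once this passes
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:         # the container is restarted if this keeps failing
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10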
@@justmeandopensource ok thanks
@@justmeandopensource
We have faced one issue in ArgoCD; the error is "comparison error. rpc error: code = DeadlineExceeded desc = context deadline exceeded".
Could you please help me out with this?
no worries
@@chandrasekharreddy8177 if you could ask the questions in the related video's comment section, it would be helpful for others as well.
Hello, I'm creating a Kubernetes cluster but HAProxy keeps losing connections with the control planes.
The HAProxy pods keep going into CrashLoopBackOff.
Could you help me fix this?
I'm using only one server for load balancing.
Hi Venkat,
Can you please start videos on OpenShift?
Hi, that's on my list but I am not sure when I'll get to it. Will try my best. Cheers.
Thanks for your tutorials. Would you please cover a scenario with a group of 2 clusters, and how to control each one from your host machine? I don't know how to customize the cluster name.
Hi Vuong, do you mean running two k8s clusters on my host machine and accessing them?
@@justmeandopensource Yes, please do that scenario.
That should be simple. You can point to different kubeconfig via KUBECONFIG env variable or by passing --kubeconfig to your kubectl commands. Or you can merge those kubeconfig into one and switch contexts when working with different clusters.
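Something along these lines (the file names are just examples):

# point kubectl at a specific cluster
KUBECONFIG=~/.kube/cluster1.config kubectl get nodes
kubectl --kubeconfig ~/.kube/cluster2.config get nodes

# or merge them into one file and switch with contexts
KUBECONFIG=~/.kube/cluster1.config:~/.kube/cluster2.config kubectl config view --flatten > ~/.kube/config
kubectl config get-contexts
kubectl config use-context <context-name>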
@@justmeandopensource Ok. But my problem is : 2 Cluster have same name, and I don not know how to change name of Cluster. Could you make a video that how to create 2 Cluster with different name ?
@@MrDungvh You need to read more on the config file used by kubectl.
The command kubectl cluster-info tells you about the current cluster being used in the config context.
Keywords you should use to research:
kubectl config, kubectl context
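To address the "same name" clash specifically, something like this usually does it (the context names are placeholders; the kubeadm default is usually kubernetes-admin@kubernetes):

kubectl config get-contexts
kubectl config rename-context kubernetes-admin@kubernetes cluster1

Note this only renames the context; if the cluster and user entries themselves also share names, you may need to edit those names in one of the kubeconfig files before merging the two.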
Hey, can you please make a video on how to install ArgoCD on EKS from scratch? I haven't found any video on TH-cam.
I can try.
@@justmeandopensource We are eagerly waiting.
Hi Venkat. Thanks a lot for your efforts. Is it possible to use Traefik instead of HAProxy in this configuration?
Hi Rachneet, thanks for watching. HAProxy is used externally to load balance traffic to the control planes, while Traefik is used within the cluster to load balance and route traffic between internal services.
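For a concrete picture, the HAProxy side is basically just a TCP frontend on 6443 pointing at the masters (the server names and IPs below are placeholders, not copied verbatim from the video):

frontend kubernetes-frontend
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server kmaster1 172.16.16.101:6443 check fall 3 rise 2
    server kmaster2 172.16.16.102:6443 check fall 3 rise 2
    server kmaster3 172.16.16.103:6443 check fall 3 rise 2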
@@justmeandopensource Oh I see. But can't we use a traefik ingress controller?
@@rachneetsachdeva4 There is nothing stopping you from using Traefik as ingress controller in your k8s cluster.
@@justmeandopensource Thanks for clarifying.
@@rachneetsachdeva4 no worries
How does the multi-master setup behave with etcd?
Did you get the answer for that? Please let us know.
Can you make a tutorial on making ISP proxies from Google Cloud, and creating a mass amount, not just 8?
Hi there. Very good video. I have one professional question. I would like to run HA Kubernetes for production; what do you recommend contracting? A dedicated server with these VMs installed inside it, or a number of virtual private servers in different parts of the world? I can only use bare metal; I can't use a managed Kubernetes cloud service.
Hi Venkat, thanks for the tutorial, it really helps a lot. But I am facing one issue. When I do kubeadm init with the VIP address it fails, although I can see the VIP on my HAProxy's base eth port and it is working perfectly fine. If I stop one of the HAProxy services, the VIP also switches to the other server. Can you suggest anything?
Hi Arnab, thanks for watching.
How is it failing? The kubeadm init command should have some meaningful errors.
I am using AWS EC2 instances. The VIP is unreachable from my Kubernetes server although they belong to the same subnet.
@@arnabdas5166 I haven't tried this in the cloud where you can't control the IP address assignment.
@Arnab Das have you finished this setup using EC2 instances? Did it work? Can I follow this approach for EC2 instances? Please reply.
Hi Venkat, I faced an issue while adding the 2nd master node; etcd and kube-apiserver are in CrashLoopBackOff. Can you please guide me?
Is your first master node where you ran kubeadm init all fine? You can always do kubeadm reset followed by kubeadm join on other nodes.
@@justmeandopensource Yes Venkat, the 1st master is fine, all pods on it are running fine.
@@justmeandopensource
Even now the cluster shows both nodes in the Ready state, but the etcd and api-server pods for the 2nd node are in CrashLoopBackOff state.
@@shivbratacharaya4199 in that case do a kubeadm reset on the failed master and restart or recreate that vm and try again. If you used the vagrant provisioning, then all master VMs should be identical.
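In case it helps anyone hitting the same thing, the reset/rejoin cycle is roughly this (the VIP, token, hash and key are placeholders that come from the healthy first master):

# on the healthy first master, if the broken node already half-joined, remove it first
kubectl delete node kmaster2

# on the broken master
sudo kubeadm reset -f

# regenerate the join details from the healthy first master
kubeadm token create --print-join-command
sudo kubeadm init phase upload-certs --upload-certs    # prints a fresh certificate key

# then join again as a control-plane node
sudo kubeadm join 172.16.16.100:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>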
@@justmeandopensource
It's identical, I deployed them using Vagrant only. I've done the deployment multiple times but am not getting any clue now.
Venkat, thank you for the great video. But for this video, and all the other HA ones, I think a better solution would be HAProxy only for admin access to the master nodes, with a local NGINX installed on the workers, because HAProxy for all traffic is a single point of failure anyway (even if we have 2 or 3 of them); if HAProxy is lost, the whole cluster stops working. On the other hand, with NGINX on the workers, no matter how many worker or master nodes we lose (and even if all the HAProxies are gone), the remaining workers still know how to communicate with the masters (balanced); HAProxy then handles only admin access to the masters. Maybe you can later update the Ansible lesson with this configuration (workers with NGINX, admin API via HAProxy).
Hi Venkat, where are you bro? We miss your videos)) Is everything OK?
Hi Farid, thanks for checking. I am guilty of not posting videos regularly these days. I will try and get back to routine weekly videos. Hopefully from the coming week. Been so busy last few weeks. Cheers.
tnx
Thank you so much for such great tutorials on k8s. I have one query: can we set up an HA cluster using only keepalived, without HAProxy or any other load balancer? Like, if I set up keepalived on all my master nodes and init the cluster using the virtual IP, will this work?
Hi Venkat, can I use the same setup in AWS using EC2 instances?
Please reply.
Hi Mukesh, thanks for watching. I haven't tried this setup in cloud environment. But I think you will have problems with the virtual ip as you don't have control over ip address assignment in the cloud.
@@justmeandopensource Hi Venkat, I need to implement this in a public cloud like AWS or Azure. Please let me know how I can manage the IP address assignment. Please help.
note: explain keepalived 09:00
Thank you for your amazing video. I have an issue that I spent my whole day on and still cannot debug; hope you can help me out. The load balancers work perfectly fine, but the masters don't.
The issue is: when I shut down any one of my master nodes, the cluster seems to go down too. I cannot use kubectl; if I try, it throws an error: "nodes is forbidden: User "kubernetes-admin" cannot list resource "nodes" in API group "" at the cluster scope".
Sometimes it throws another error that says: "etcdserver: request timed out".
Sorry for my terrible English. Hope you have a great day.
Your explanation is awesome... it's helped me a lot... but for me the virtual IP is not accepted during kubeadm init --control-plane-endpoint=:6443.
Can you please help me understand the exact virtual IP concept and how I can pick one?
I am actually using EC2 instances here.
Hi, thanks for watching. I haven't attempted this in the cloud where the handing out of ip addresses are not under your control. This was done on my local machine.
@Venkat Kishore have you completed this setup using EC2 instances? Did it work fine on EC2 instances? Can I follow this approach on EC2 instances? Please reply, it will save my time.
Where were you, brother?
It's still a single point of failure. You should have had at least one more worker node there.
Good point. Yes. We definitely need more worker nodes. I was focusing on control plane HA in this demo as I am limited on the number of VMs I can run on my laptop.
@@justmeandopensource Makes sense! I am wondering if you could make a video covering Persistent Volume availability across the nodes; that would be great!
Hi.
Very well explained, I love your videos. Anyway, I have a couple of questions:
- Do you have a basic Vagrant tutorial?
- How do you set up your local environment to run this? I want to practice.
- Is your PC's OS Windows or Linux?
Thanks
Hi Phalla, thanks for your interest in this video.
I don't have any basic vagrant tutorial. You can just learn from examples.
I explained in this video how I set up my environment. All I do is run vagrant up in the directory where I have the Vagrantfile. All you need is Vagrant and VirtualBox. It can be Windows, Mac or Linux; the same Vagrant setup works everywhere.
I use Linux on my Laptop.
Cheers.
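For anyone new to Vagrant, the whole workflow is only a handful of commands (the machine name below is just an example from this kind of setup):

cd <directory-with-Vagrantfile>
vagrant up              # creates and provisions all the VMs defined in the Vagrantfile
vagrant status          # lists the VMs and their state
vagrant ssh kmaster1    # shell into one of them
vagrant destroy -f      # tear everything down when done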
Thanks mate!
@@phalla6646 No worries
Even this is not highly available... Just imagine AWS goes down. How could we set up a cluster with multiple nodes from different networks, like some nodes on Azure, Oracle, and GCP?