I just started watching your videos and they are excellent! Thank you and keep making more. I am running k8s 1.17.3, virtualbox 5.2 and vagrant 2.2.7. So far so good.
Hi Roger, thanks for watching and for your interest in this channel. I have this master playlist where I upload all Kubernetes videos, one per week on Mondays. th-cam.com/play/PL34sAs7_26wNBRWM6BDhnonoA5FMERax0.html There is also a separate playlist where I pulled out just the provisioning part of Kubernetes. th-cam.com/play/PL34sAs7_26wODP4j6owN-36Vg-KbACgkT.html I personally use LXC containers as Kubernetes nodes.
Hi Jaime, thanks for watching. I was personally using KVM with this vagrant environment for a while, but hit a roadblock where I couldn't use MetalLB successfully with KVM networks.
Just did the setup on Windows 10 Home with VirtualBox. Worked perfectly. Very high-quality Vagrant setup scripts, which will be immensely helpful in my other Vagrant projects too :)
I love Vagrant. I much prefer using Vagrant+LXC boxes on Ubuntu over just about anything else. I've found Vagrant super easy to use, works 99% of the time, and flexible enough for my needs.
@@justmeandopensource Oh, one clarification: I saw that your GitHub repo has evolved since your video. I took it without understanding exactly why tigera and calico are there and what they do exactly. They seem to configure the network with security policies and so on... I will figure it out.
Hey, works great. Required a slight modification for the CentOS 7 install: e.g. the material is now out of "misc", and bootstrap.sh required adding kubernetes-cni-0.6.0 to the list of k8s yum packages.
Hello, in your GitHub repository, specifically in the master bootstrap script, you forgot the copy-config-file step:
sudo cp /etc/kubernetes/admin.conf /home/vagrant/.kube/config
sudo chown -R vagrant:vagrant /home/vagrant/.kube
That's why many of those who tried your tutorial got the following error message:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
And by the way, can you tell us why you changed from CentOS to Ubuntu and from Docker to containerd? Is it for security reasons? Thank you for the tutorial, it's really helpful.
Hi, thanks for watching. I originally had that step as part of bootstrap_kmaster.sh. But I wanted to keep the bootstrap steps to a minimum and decided to remove unwanted steps. Generally you will bootstrap a cluster, copy the kube config file to your local machine and then interact with the cluster from there. You shouldn't be using kubectl commands on the master or worker nodes, although it doesn't do any harm. But leave the cluster to do what it is supposed to do. So I intentionally removed that step. Always interact with your cluster from your local machine. Don't ssh to any of the cluster nodes unless you are an admin and know what you are doing. Regarding the switch from the Docker runtime to containerd, see my other video th-cam.com/video/AkfE8PBQnPs/w-d-xo.html And I switched from CentOS to Ubuntu because I felt it's a lot easier to manage Ubuntu with containerd. Just my preference. Cheers.
Thanks a lot Venkat! Vagrant is wonderful!!! Worked well here! Building and configuring a VM in VirtualBox on my small display is painful, haha. I have 2.2.6 installed, so it sounds like the command "vagrant snapshot save kubernetes-clean-base" to save all VMs no longer works, or needs some tricks, so I have saved them one by one, np!
# vagrant snapshot save kubernetes-clean-base
The machine with the name 'kubernetes-clean-base' was not found configured for this Vagrant environment.
On Windows 10, after running vagrant up, I got the below error. Can you please look into it?
kmaster: [TASK 3] Deploy Calico network
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong.
Hi Venkat, I tried your Vagrantfile to create a Kubernetes cluster on an AWS EC2 instance, but got an error message that VT-x is not available. Do you know if there is a way I can create a k8s cluster using Vagrant + VirtualBox on AWS EC2? Thanks
Hi Susheel, thanks for watching. AWS EC2 instances are virtual machines themselves. In order to run a virtual machine inside a virtual machine, you need to enable nested virtualization capabilities. I once tried that on an EC2 bare-metal instance.
Thanks so much for your awesome videos. I use your Vagrant configuration but I can't access the internet inside the Docker container. How should I solve this problem?
Hi Sajjad, thanks for watching. My Vagrant configuration has a second network interface added to all the VMs. You should have internet access on the VMs, and if so, you should have internet access in anything running on those VMs. If you could explain your steps in detail, I can test it on my machine.
Hey, thanks for the video. The Vagrant script was successful, but I see the below two errors during the run. Is this an issue?
kmaster: W0102 02:47:44.460190 8381 validation.go:28] Cannot validate kube-proxy config - no validator is available
kmaster: W0102 02:47:44.460323 8381 validation.go:28] Cannot validate kubelet config - no validator is available
Hi Vijay, thanks for watching. Yeah, I am aware of those errors and didn't have time to look into it. But despite those errors, the cluster is in working order. Cheers.
Hi Venkat, I am a big fan of your k8s videos. During cluster creation, I am facing the issue below:
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: W0202 17:04:40.512993 8601 validation.go:28] Cannot validate kube-proxy config - no validator is available
kmaster: W0202 17:04:40.513031 8601 validation.go:28] Cannot validate kubelet config - no validator is available
However, the creation script continues and the cluster subsequently deploys successfully. Thanks.
Hi Claudio, many thanks for following my videos. Yes those are warnings that you can safely ignore. I have never had any problems with cluster operations despite seeing those warnings.
Hi Venkat, great video as always. Any way you can add, maybe in your git repo, a Vagrant config for a multi-master k8s setup? Thanks and keep up the good work!
Many thanks for watching this video. I used Kubespray to deploy a multi-master multi-etcd HA Kubernetes cluster. I also used "Kubernetes The Hard Way" to deploy a similar HA cluster. But I am yet to make videos of them. Will record one soon. Thanks, Venkat
Is it possible to set up autoscaling, where Vagrant provisions more workers as the cluster needs them, and scales them down when they are not needed?
Thanks for the tutorial. I need help setting up and accessing the Kubernetes dashboard on the master and nodes. Can you please suggest steps to do the same?
Hi Avinash, thanks for watching this video. I have already done a video on deploying kubernetes dashboard and you can check it out in the below link. Hope that will help you. th-cam.com/video/brqAMyayjrI/w-d-xo.html Thanks.
This is really quality stuff, Karthik. Would it be possible to control the master from my Win10 machine? I have set up the cluster but couldn't run kubectl from my Win10 host.
Hi Manoj, thanks for watching. This vagrant environment should work across all platforms. I have tested it on Linux and Mac but not on Windows. But I have been told by a few viewers that it worked flawlessly in their Windows environments as well. When you say you couldn't run kubectl, what do you mean exactly? Can it not connect to the cluster?
@@justmeandopensource I had set up the cluster successfully and copied the cluster hostnames to /etc/hosts, but while doing scp of the .kube/config file I got an error, so I manually copied it from the master to the host under .kube/. (26:12) cluster-info throws an error: "You must be logged in to the server (the server has asked for the client to provide credentials)". I guess the master couldn't authenticate the host.
Hi Venkat, your videos are helping a lot in learning about k8s. May I know what changes are required in this Vagrantfile to provision a k8s cluster (1 master & 2 worker nodes) on Windows 10? I have tried to provision it on my end but had no luck. It would be great if you could help here.
Hi Aneel, thanks for watching this video. It should work without any modifications. I have heard from a few viewers that this Vagrantfile is working on their Windows laptops. I have tried it once as well. What's the problem you are facing? Or where exactly are you facing the problem?
Hi Akshay, thanks for watching. Depending on what port on your host machine you want forwarded to what port on the guest VMs, you can configure them in the Vagrantfile in the respective blocks. There is a kmaster block at the top and a kworker block at the bottom, which is a loop for the two workers. You can use something like below.
config.vm.network "forwarded_port", guest: 80, host: 8080
You can't have this in the global section outside of the blocks, as that would try to forward host port 8080 to guest port 80 on all the nodes, causing a conflict. So you have to use some logic. You can put this line inside the kworker block:
node.vm.network "forwarded_port", guest: 80, host: "808#{i}"
This would map host port 8081 to port 80 on kworker1 and host port 8082 to port 80 on kworker2. Hope this makes sense. Cheers.
Thanks Venkat for sharing your knowledge and the wonderful teaching. I have installed the cluster successfully on Mac and am even able to run kubectl commands from the host machine. But when I connect to my corporate VPN, I am unable to run kubectl commands from the host or on kmaster. Any clue? I need your help in solving this.
Sir, I am getting this error. Can you please help me?
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong.
Hi Yash, thanks for watching. You can remove the output redirection in the bootstrap scripts to see where and why it's failing. Wherever you see ">/dev/null 2>&1", just delete that part and you will see the actual output.
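If editing the scripts by hand is tedious, sed can strip the redirections in one go. This is a minimal sketch, demonstrated on a sample line so it is safe to run anywhere; it assumes the redirection appears with a leading space exactly as ">/dev/null 2>&1", which is how it looks in the bootstrap scripts.

```shell
# Strip the output redirection so "vagrant up" shows the real errors.
# Demonstrated on a sample line; against a repo clone you would run:
#   sed -i 's| >/dev/null 2>&1||g' bootstrap*.sh
line='kubeadm init --apiserver-advertise-address=172.42.42.100 >/dev/null 2>&1'
echo "$line" | sed 's| >/dev/null 2>&1||g'
# prints: kubeadm init --apiserver-advertise-address=172.42.42.100
```

Re-run vagrant up after this and the failing command's output will appear in the provisioning log.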
Hi, thanks for the wonderful videos. May I know why you are taking snapshots of the VMs? What if I don't take the snapshots created through Vagrant? Thanks!!
Hi Swaraj, thanks for watching. Snapshots are taken just in case I want to go back to a particular state. Nothing wrong with not taking snapshots.
Hi Venkat, I have installed 3 nodes via Vagrant on Windows 10. The cluster installation looks fine. But how can I set up kubectl locally so it can access all 3 nodes? Please suggest.
Hi Amrita, thanks for watching. I am not a Windows person, but the way to access the cluster using kubectl is the same in all OSes. You need to place the Kubernetes configuration file under the .kube directory in your home directory. On Windows you may also need to set the KUBECONFIG system/user environment variable to point to the location where you copied the kube config file.
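On Linux or Mac the same step can be sketched as below. The path used here is kubectl's conventional default location, so the explicit KUBECONFIG export is only strictly needed when the file lives somewhere else; on Windows you would set the same variable through the system/user environment variable settings instead.

```shell
# kubectl looks in ~/.kube/config by default; KUBECONFIG just makes
# the location explicit. Copy the cluster config file into this path.
mkdir -p "$HOME/.kube"
export KUBECONFIG="$HOME/.kube/config"
echo "$KUBECONFIG"
```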
Just discovered your channel and there is a lot to learn here I see, thanks for all your effort! Starting with this one; it's an old video but I see you recently updated the Vagrantfile in the kubernetes GitHub repo. Everything goes well, but the kubectl get componentstatus command shows:
NAME                 STATUS      MESSAGE                                                                       ERROR
scheduler            Unhealthy   Get "127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
Any idea what could be the cause of this error? Thanks in advance.
Hi Ivo, thanks for watching. That particular endpoint is being deprecated. You can safely ignore that error when you do a component status get request. It doesn't affect cluster functionality in any way. Cheers.
@@justmeandopensource Wow, what a quick response! Indeed I saw a deprecation warning, but I saw you using the command in another video and it also worked for me then; it depends on the versions you use in that video of course. Sorry for asking this, but thanks for your quick response. Also great that you keep your repos updated so we can still follow videos that were recorded more than a year ago. I'm going on to the next videos in your Kubernetes playlist.
Hi Darvin, thanks for watching. There are lots of overlay networks. I have only explored Flannel and Calico. There is Weave Net as well, and a few others. Flannel is simple but doesn't come with lots of advanced features; for example, you can't use pod network policies with a Flannel network. Calico and Weave Net are advanced in terms of the features they offer. So if you are going to use it in production, I would advise you to evaluate each of these for your needs. It won't be easy to switch the overlay network once you have deployed it without some downtime, so make the right choice in the first place. The following link might be helpful. rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/ Thanks.
It is working properly on my Ubuntu machine, thank you so much. But can someone please explain: I want 3 worker nodes and 1 master node. How could I do that?
Hi, thanks for watching. You can update the Vagrantfile and set "NodeCount = 3". Then in the bootstrap.sh script, add the below DNS entry in [TASK 1]:
172.42.42.103   kworker3.example.com   kworker3
Now you can do vagrant up. Remember to first destroy your existing vagrant environment (vagrant destroy -f) before following the above process. Cheers.
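The two edits above can also be scripted. A minimal sketch, run against stand-in copies in /tmp so it is safe to try anywhere; to do it for real, point the same sed and printf at the repo's Vagrantfile and bootstrap.sh (the NodeCount variable name and the [TASK 1] hosts format are as shown in the video).

```shell
# Stand-ins for the repo files so this demo is safe to run anywhere.
printf 'NodeCount = 2\n' > /tmp/Vagrantfile.demo
printf '172.42.42.102   kworker2.example.com   kworker2\n' > /tmp/bootstrap.demo

# 1) Bump the worker count in the Vagrantfile.
sed -i 's/^NodeCount = .*/NodeCount = 3/' /tmp/Vagrantfile.demo

# 2) Add the matching hosts entry for the new node in bootstrap.sh.
printf '172.42.42.103   kworker3.example.com   kworker3\n' >> /tmp/bootstrap.demo

cat /tmp/Vagrantfile.demo
# prints: NodeCount = 3
```

After that, vagrant destroy -f followed by vagrant up brings up the 4-node cluster.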
Hello Venkat, nice video! One quick question, what is this syntax about in Ruby? node.vm.network "private_network", ip: "172.16.16.100" There's no equal sign "=" in between
I just started watching your videos and they are excellent! Thank you and keep making more. When I do vagrant up I get the below error. (Also, my CentOS is itself on a VM; the setup looks like Windows 10 -> VM -> CentOS, and there I installed Vagrant. VirtualBox is 6.1 and Vagrant is 2.2.8.)
There was an error while executing `VBoxManage`, a CLI used by Vagrant for controlling VirtualBox. The command and stderr is shown below.
Command: ["startvm", "0ae04c7d-1c65-4c19-ad99-01e0a0af8a19", "--type", "headless"]
Stderr: VBoxManage: error: VT-x is not available (VERR_VMX_NO_VMX)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component ConsoleWrap, interface IConsole
Hello Venkat, I am probably very late. I tried this setup earlier and it was working fine, but I don't know why, when I create the cluster again now, it gets stuck on TASK 5 and never returns. Can you please suggest or help me out?
I also checked through the VirtualBox console when the script was running. There was one error which says "dependency failed for hyper-v kvp protocol daemon". Any idea on this one?
@@kunaljain5266 The Vagrantfile I have is designed for either VirtualBox or KVM/libvirt. It won't work with Hyper-V unless you modify the provider block. You can remove all /dev/null redirections in the bootstrap script and see where exactly it's failing. I don't have a Windows machine to test this unfortunately.
Sure @venkat, I will try this and let you know. Really not sure why, but the same script was working around 20 days ago. Will keep you updated on this. Thanks a lot again :)
@Just me and Opensource Hi Venkat, it's still confusing me: do we really need Vagrant in every Kubernetes environment? I have started practicing with GCP; do we have any other Vagrant file?
Hi Raarth, thanks for watching this video. Vagrant is a tool to provision virtual machines. I used Vagrant to create 3 virtual machines and provision them as a Kubernetes cluster. Vagrant is best suited for development environments where you want to spin up multiple virtual machines on your own machine. If you are using GCP, you can use other provisioning methods like Kubespray.
Hi Venkat, thanks for the video. I have a doubt regarding modifying the cluster info manually. I set up my cluster with kmaster, kworker1 and kworker2, and after some time I changed the kworker1 name to kubeworker1 with the hostnamectl command on the node machine. After a restart, the node status shows NotReady. Is it possible to modify the cluster info manually? Where is all the cluster info stored (in etcd or somewhere else), and how do I modify it there without using kubeadm to create a token? Is there any way like that? Thanks in advance.
Hi Siva, I just played with it and uploaded a video in case it helps others in a similar situation. Basically there is no easy way to rename a node; you can only delete the node and join it back to the cluster. If you have subscribed to my channel, you should have received a notification about this new video. Or you can check it out at the below link. Thanks. th-cam.com/video/TqoA9HwFLVU/w-d-xo.html
Hello Venkat, thanks for your video. I need your guidance. I have only one RedHat Linux 7 VM in my environment and I want to set up master & worker k8s nodes. Can you guide me on how to set this up? Can we use Vagrant on RedHat Linux 7 to set up a k8s cluster?
Hi, yes you can set up the cluster using Vagrant on RHEL7. Install VirtualBox and Vagrant and just do vagrant up. Or you can use KVM/libvirt instead of VirtualBox.
@@visva2005 I don't have a RHEL7 machine to test but it should be similar to the process you follow on CentOS 7 provided you have the redhat subscription to install packages. yum install vagrant
I am trying to do this on Windows 11 and getting the following errors. Please help.
node01: SSH auth method: private key
Timed out while waiting for the machine to boot. This means that Vagrant was unable to communicate with the guest machine within the configured ("config.vm.boot_timeout" value) time period. If you look above, you should be able to see the error(s) that Vagrant had when attempting to connect to the machine. These errors are usually good hints as to what may be wrong. If you're using a custom box, make sure that networking is properly working and you're able to connect to the machine. It is a common problem that networking isn't setup properly in these boxes. Verify that authentication configurations are also setup properly, as well. If the box appears to be booting properly, you may want to increase the timeout ("config.vm.boot_timeout") value.
Hi Venkat, I am trying to install a specific version of k8s by running:
yum install -y -q kubeadm-1.13.2 kubelet-1.13.2 kubectl-1.13.2 >/dev/null 2>&1
During installation it gives:
kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found
Can you please advise?
In the bootstrap.sh script, we install kubeadm, kubelet and kubectl in one go. Version 1.13.2 of kubeadm has certain dependency issues. Update the bootstrap.sh script: remove the yum install line and change it like below.
echo "[TASK 9] Install Kubernetes (kubeadm, kubelet and kubectl)"
yum install -y -q kubectl-1.13.2
yum install -y -q kubelet-1.13.2
yum install -y -q kubeadm-1.13.2
I just tested it and it's working fine.
I am facing a problem: my pods are not starting after this setup; they show an ImagePullBackOff error. I tried pulling the image before running the pod and then the pod started, so the problem is that the image is not getting pulled. What should I do?
Hmm, that's strange. Is that happening on any particular worker node, or on all nodes? That usually means either the container image you specified in the manifest (or during the kubectl command) is not a valid image, or there is some networking issue pulling containers.
@@justmeandopensource I don't think there is any problem with the image, because I tried to pull different images and none of them are getting pulled. The main problem might be that my nodes are not getting an internet connection, because when I start my nodes with vagrant up, around the 10th-11th line of every node's startup (master as well as workers) it shows "connection not building, retrying". This shows 4-5 times and then the rest of the process continues.
Thanks a lot for this series, it's really helping me. I am trying to use the vagrant-vsphere plugin to create VMs with Vagrant in my vSphere. I can get it partly working, but not fully, because the CentOS template I use is different. Is there any way I could get the Vagrant CentOS box used here, so that I could create a template of it, upload it to my vSphere and use it to create VMs there?
Hi Maria, thanks for following my video series. I used the official centos/7 VirtualBox box in this video. app.vagrantup.com/centos/boxes/7 It also has a provider for VMware; I haven't tested that, but maybe you could use it. Or you can follow my tutorial to get the CentOS VM created in VirtualBox using Vagrant, then export it as an OVA and import it into VMware. All you need is a base CentOS machine to work with. The rest of the items are all done by the provisioner script, so it shouldn't matter which provider you use. Thanks
@@justmeandopensource Thanks Venkat. The VM box doesn't seem to be the issue. The main issue I face with vSphere is that I am unable to define the IP for the machines. Here in the video you have defined 172.42.42.100 for the master and the same is used for --apiserver-advertise-address. The VMs created in vSphere only get their IP once they are powered on, so I am stuck there and would like to get past it to proceed with the k8s provisioning. Do you have any idea how I could solve this issue?
@@MariaJossy In the case of using Vagrant with VirtualBox, the default first network interface is a NAT adapter which gets an IP like 10.0.2.15, and every VM gets the same IP. That's the reason I introduced the second network adapter, so the machines have 172.42.42.100/101/102 for the three VMs. Since each VM has two network interfaces, I had to use --apiserver-advertise-address to specify the interface that should be used for cluster communication. Okay, when the VM comes up, can you check how many interfaces it has and what their respective IPs are? Did the VM get the 172.42.42.100 address as we defined in the Vagrantfile? Thanks
Hi, thanks for watching. You don't need to change the vagrantfile. You only need to make sure that nested virtualization is supported on your Ubuntu server in AWS and install the dependencies like vagrant, virtualbox. The cluster will be provisioned with CentOS 7 machines. Cheers.
@@mirrahat4105 Hi I found a reddit discussion where it was mentioned that you could go for i3.metal instance for virtualization capabilities but it could be expensive.
I'm new to k8s. Do I have to use the same IP address for kmaster and the pod network, or do I give my own IP address and my own CIDR block for the pod network to initialize the cluster?
@@kumarvedavyas5631 Thanks for watching this video. You don't have to do anything manually; it will all be taken care of by Vagrant. Vagrant will set up a second network interface using NAT, not bridge. Just git clone the repository, cd to vagrant-provisioning and then do "vagrant up". You will have your cluster ready. Unless you are installing the cluster by yourself, you don't have to worry about configuring any of those. Thanks.
Hi Venkat, thanks for the amazing tutorial. I am trying from Windows 7. I am able to run all the VMs, however it looks like kubeadm and kubectl are not getting installed. Is there any new Vagrantfile you created? While running the Vagrantfile, when creating the master node, the following errors come after [TASK 3] Deploy Calico network:
kmaster: -bash: kubectl: command not found
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found
Hi Rajesh, thanks for watching. My Vagrant files should work on Windows 7 as well; I tried it a while ago. I also got confirmation recently from another user that it was working perfectly fine on Windows 10 in PowerShell. However, I am not a Windows person to be honest. Cheers.
Hi Venkat. I am running a two-machine setup: one with the latest Rancher running as a Docker container, and one other host machine with Vagrant provisioning (VirtualBox), both on MX Linux. When I try to add the cluster to Rancher, the cattle-cluster-agent pod keeps restarting with the following error:
ERROR: {{IP of rancher host}}/ping is not accessible (Failed to connect to {{IP rancher host}} port 443: Connection timed out)
I can ping and curl the Rancher host IP from the k8s cluster host. I can also ping and curl the Rancher host IP from within the kworker node where the pod is failing. I came across this github.com/rancher/rancher/issues/18832 but nothing there helped. Any ideas? :)
Hi, connection timed out indicates that either there is a firewall issue or the kmaster VM isn't up. You can rule out a firewall issue, as I have disabled the firewall in the bootstrap script when provisioning the VMs. Make sure the VMs are running.
I was having trouble running kubectl commands from the worker nodes; it looks like admin.conf needs to be copied as config into the .kube directory on the workers too. I added the following to the bootstrap_worker.sh file:
mkdir /home/vagrant/.kube
sshpass -p "kubeadmin" scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no kmaster.example.com:/home/vagrant/.kube/config /home/vagrant/.kube/config 2>/dev/null
sudo chown -R vagrant:vagrant /home/vagrant/.kube
Cool. That's the way to go. The initial cluster config admin.conf is available only on master node and needs to be copied to any machine that needs access to the cluster including worker nodes.
Hi bro!!! Thanks for your classes. I have installed Kubernetes via Vagrant, but the worker nodes are not listed when I use the command "kubectl get nodes" on the master node. Please provide me the solution.
Hi Rangisetti, thanks for watching. Let's troubleshoot this. Could you please paste the output of the below commands?
1. kubectl version --short
2. kubectl cluster-info
3. kubectl get nodes
Thanks.
@@justmeandopensource Thanks!! Please find the output:
rangisetti ~ kubectl cluster-info
Kubernetes master is running at 172.42.42.100:6443
KubeDNS is running at 172.42.42.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
rangisetti ~ kubectl get nodes
NAME                    STATUS     ROLES    AGE    VERSION
kmaster.example.com     Ready      master   172m   v1.16.2
localhost.localdomain   NotReady   <none>   149m   v1.16.2
rangisetti ~ kubectl version --short
Client Version: v1.11.0+d4cacc0
Server Version: v1.16.2
Hi Rangisetti, thanks for the outputs. It seems the worker node is not getting provisioned properly; it should have the name kworker1.example.com and not localhost.localdomain. Are you running the vagrant up command in the vagrant-provisioning directory, or are you copying the Vagrantfile to some other directory? I see you have changed the number of worker nodes to 1 in the Vagrantfile; that shouldn't be a problem. I have been using this vagrant environment for a very long time and it's working absolutely fine.
@@justmeandopensource I did the installation the same way as you did. The problem during installation seems to be that the SSH connection is getting disconnected in my case. I will troubleshoot. Thanks for your support.
@@rangisettisatishkumar5491 Okay. You can remove the output redirection in the bootstrap scripts to see what's going on. In all the shell scripts, wherever you see 2>/dev/null, just remove it and you can see the output or errors while running vagrant up. Thanks.
Thanks for the video. I tried all the steps but am still getting an error. When I connect to the master node and run kubectl get cs, it states that controller-manager and scheduler are unhealthy (connection refused), and it also gives the warning "Warning: v1 ComponentStatus is deprecated in v1.19+". Kindly advise.
Hi Deepak, thanks for watching. May I ask you for a little more detail? What is your host operating system? Have you made any changes to the Vagrantfile, or did you follow every step as in this video? Also, if you could paste the command outputs on pastebin.com and share them with me, I can take a look and try it in my environment. Cheers.
@@justmeandopensource Thanks for the prompt reply. I am running Windows 10, installed Vagrant on top of that, and am running kubectl commands directly from the master node (SSH). I can create a Pod or a Deployment without any issue. Not sure if that error is due to a liveness/readiness probe. I have not made any changes to the Vagrantfile.
Command output:
[root@kmaster ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                       ERROR
scheduler            Unhealthy   Get "127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
Hope this helps.
@@Deepak9728 I remember seeing this error, and this particular command (kubectl get componentstatus) is about to be deprecated. So I wouldn't worry too much if your cluster is running fine apart from this command. A few other viewers also mentioned this despite their clusters being fully operational.
Hi, enjoying this series. Thanks for the effort you have put in. Do you know how to use this setup to create other versions of k8s? I have managed to create v1.11.x upwards, but nothing below. I am trying to create v1.9.0, but looking at systemctl status kubelet I see a lot more args in the cgroup param. The error I am getting is at the deploy Flannel task: unable to contact 172.42.42.100:6443, are you sure you have the correct port. Any ideas?
Hi John, I just tested and I was able to successfully deploy Kubernetes cluster version 1.9.0 without any issues. I have added vagrant-provisioning files for 1.9.0 in my github repository. github.com/justmeandopensource/kubernetes/tree/master/misc/vagrant-provisioning-by-version Please check if this helps you. I might do a video on this later. I have got videos scheduled for the next 2 months (one every Monday). So will add this to the list. Thanks for bringing this to my attention. Thanks, Venkat
@@justmeandopensource Great, thanks, will try again when I get home from work. Can I be really cheeky and request a video on the aggregation/extensions API, configuring the aggregation CA certs etc., to be added to your list if you have not already planned it? There are a lot of videos on k8s, but I have found yours amongst the most accessible for beginners. Thanks
Hi Ashu, thanks for watching. It will be more involved in doing that way. But if you used Kubespray, then why would you want to upgrade manually with kubeadm? Kubespray is meant for these tasks in an automated way. Cheers.
Also, I just tried to create a cluster using your repo, which gives me Kubernetes version 1.18.0, the latest. How can I specify a particular version?
@@ashurana31 If you are using my vagrant environment, you can modify the bootstrap.sh script and specify the version of kubeadm, kubectl and kubelet for installation.
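For example, pinning the packages could look like the below. This is only a sketch: the exact install line in bootstrap.sh may differ, and 1.15.3 is just an illustrative version, so check the script for the real line and substitute the version you want. Demonstrated here on a sample line:

```shell
# Append a version pin to each of the three packages in the yum install
# line (the CentOS kubernetes repo names packages kubeadm-<version> etc.).
line='yum install -y -q kubeadm kubelet kubectl'
echo "$line" | sed 's/\(kubeadm\|kubelet\|kubectl\)/\1-1.15.3/g'
# prints: yum install -y -q kubeadm-1.15.3 kubelet-1.15.3 kubectl-1.15.3
```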
May I know what version your VirtualBox is? I tried to run your script using Vagrant 2.2.7 and VirtualBox 6.1 on Windows 10 (PowerShell). I am having this issue:
==> kmaster: Running provisioner: shell...
kmaster: Running: C:/Users/DEFAUL~1.LAP/AppData/Local/Temp/vagrant-shell20200404-9520-1x03bq7.sh
kmaster: [TASK 1] Initialize Kubernetes Cluster
kmaster: [TASK 2] Copy kube admin config to Vagrant user .kube directory
kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory
kmaster: [TASK 3] Deploy Calico network
kmaster: -bash: kubectl: command not found
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found
The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong.
PS D:\Projects\kubernetes\vagrant-provisioning>
Then it stopped provisioning the workers.
Hi, thanks for watching. I don't use Windows. On my Linux machine, I am running VirtualBox version 6.1.4 and it has always worked, even with previous versions. But I once had a comment from another viewer that this vagrant environment doesn't work properly with VirtualBox 6 on Windows, and his advice was to continue using the latest 5.x version. But I haven't researched that error much.
Hi Narendra, thanks for watching. I don't get your question. This vagrant setup will provision the virtual machines in a VirtualBox environment on your local machine. If you want to provision actual EC2 instances in AWS, then it's a whole different concept using the vagrant-aws plugin.
Excellent tutorial, many thanks for this. I followed this on Windows 10 and was able to launch all 3 VMs in the vagrant environment, but when I logged in to the master node and ran kubectl get nodes, I could see that the worker nodes didn't join the cluster (status = NotReady)! How do I debug this? Thanks
Hi Surendra, thanks for watching this video. While doing this video, I also verified it on Windows 10 and it worked. But I haven't tested this recently; something might have changed. What version of Kubernetes was installed? Is it the latest, 1.16? You can check the output of "kubectl -n kube-system get pods" to see which pods are pending, and then run "kubectl -n kube-system describe pod <pod-name>". Some of the core component pods might be in the Pending state. Thanks.
@@justmeandopensource
$ vagrant.exe ssh kmaster
[vagrant@kmaster ~]$ kubectl get nodes
NAME                   STATUS     ROLES    AGE   VERSION
kmaster.example.com    NotReady   master   8h    v1.16.0
kworker1.example.com   NotReady   <none>   8h    v1.16.0
kworker2.example.com   NotReady   <none>   8h    v1.16.0
[vagrant@kmaster ~]$ kubectl -n kube-system get pods
NAME                                          READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-bzbw7                      0/1     Pending   0          8h
coredns-5644d7b6d9-nbpwc                      0/1     Pending   0          8h
etcd-kmaster.example.com                      1/1     Running   1          8h
kube-apiserver-kmaster.example.com            1/1     Running   1          8h
kube-controller-manager-kmaster.example.com   1/1     Running   1          8h
kube-proxy-87bpv                              1/1     Running   1          8h
kube-proxy-cjg7d                              1/1     Running   1          8h
kube-proxy-r2ckx                              1/1     Running   1          8h
kube-scheduler-kmaster.example.com            1/1     Running   1          8h
[vagrant@kmaster ~]$ kubectl version --short
Client Version: v1.16.0
Server Version: v1.16.0
[vagrant@kmaster ~]$ kubectl describe pod coredns-5644d7b6d9-bzbw7
Events:
Type     Reason            Age   From               Message
----     ------            ----  ----               -------
Warning  FailedScheduling        default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Warning  FailedScheduling        default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Warning  FailedScheduling        default-scheduler  0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
Warning  FailedScheduling        default-scheduler  0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
Warning  FailedScheduling        default-scheduler  0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
Is your local network 172.42.42.0/24? And is there any difference in using "public_network" with a local IP in Vagrant? BTW great stuff, keep it up!
Hi, thanks for watching this video. 172.42.42.0/24 is not my local network. That network gets defined automatically in the VirtualBox environment when I specify the IP address. My local network is 192.168.1.0/24. If you want your VirtualBox VMs to have a local network IP (same as your host machine), you can add the bridge option and specify the network interface on the host that is connected to the network. www.vagrantup.com/docs/networking/public_network.html Thanks
Hi Venkat, first of all thanks for the videos. Please can you help with the error below that I am receiving while running vagrant? How can I fix this? I am running CentOS 7 on an Azure cloud VM.
==> kmaster: Booting VM...
There was an error while executing `VBoxManage`, a CLI used by Vagrant for controlling VirtualBox. The command and stderr is shown below.
Command: ["startvm", "2334fd06-5480-4ea8-bd5d-645ff2d8336b", "--type", "headless"]
Stderr: VBoxManage: error: VT-x is not available (VERR_VMX_NO_VMX)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component ConsoleWrap, interface IConsole
Hi Pankaj, thanks for watching. VT-x is hardware virtualization. It seems that your CentOS 7 instance doesn't support virtualization; it is itself a virtual machine in the Azure cloud. In order to run nested virtualization, you may need to enable certain things which I am not sure about.
You might need to choose a different instance type that supports nested virtualization. stackoverflow.com/questions/48579206/how-to-install-centos-7-64bit-vbox-guest-in-windows-azure
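A quick check for this situation, before trying to run VirtualBox inside a cloud VM, is to see whether the CPU exposes hardware virtualization flags at all (Linux only; this is a diagnostic sketch, not part of the vagrant setup):

```shell
# Count CPU entries advertising hardware virtualization flags.
# 'vmx' = Intel VT-x, 'svm' = AMD-V. A count of 0 means VirtualBox cannot
# start hardware-accelerated VMs on this machine (no nested virtualization).
caps=$(grep -cE 'vmx|svm' /proc/cpuinfo 2>/dev/null || echo 0)
echo "virtualization-capable CPU entries: ${caps}"
```

On a stock Azure/AWS instance this typically prints 0 unless the instance type specifically supports nested virtualization.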
Hello Venkat, the videos are awesome, thanks a ton for sharing. However, while trying to replicate this particular setup using vagrant, I am facing an issue: my kmaster VM always gets 1 vCPU and only 512 MB of memory. I have cloned your repo and the Vagrantfile has cpus set to 2 and memory to 2048 MB, yet every time I execute "vagrant up" it creates a kmaster VM with just 1 core and 512 MB RAM, and because of that my later steps don't get installed and everything fails.
PS: I am using KVM and my base machine is Ubuntu 18.04. I have changed the private IP in the Vagrantfile, and my host machine has enough RAM and CPU to allocate to the VMs (12 GB). A few lines from my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
ENV['VAGRANT_NO_PARALLEL'] = 'yes'
Vagrant.configure(2) do |config|
  config.vm.provision "shell", path: "bootstrap.sh"
  # Kubernetes Master Server
  config.vm.define "kmaster" do |kmaster|
    kmaster.vm.box = "centos/7"
    kmaster.vm.hostname = "kmaster.example.com"
    kmaster.vm.network "private_network", ip: "192.168.122.100"
    kmaster.vm.provider "virtualbox" do |v|
      v.name = "kmaster"
      v.memory = 2048
      v.cpus = 2
      # Prevent VirtualBox from interfering with host audio stack
      v.customize ["modifyvm", :id, "--audio", "none"]
    end
    kmaster.vm.provision "shell", path: "bootstrap_kmaster.sh"
  end
Snippet of the kmaster VM getting created, where you can see it assigns just 1 vCPU and 512 MB:
==> kmaster: Successfully added box 'centos/7' (v1905.1) for 'libvirt'!
==> kmaster: Creating image (snapshot of base box volume).
==> kmaster: Creating domain with the following settings...
==> kmaster:  -- Name:              vagrant-provisioning_kmaster
==> kmaster:  -- Domain type:       kvm
==> kmaster:  -- Cpus:              1
==> kmaster:  -- Feature:           acpi
==> kmaster:  -- Feature:           apic
==> kmaster:  -- Feature:           pae
==> kmaster:  -- Memory:            512M
==> kmaster:  -- Management MAC:
==> kmaster:  -- Loader:
==> kmaster:  -- Nvram:
==> kmaster:  -- Base box:          centos/7
==> kmaster:  -- Storage pool:      default
==> kmaster:  -- Image:             /var/lib/libvirt/images/vagrant-provisioning_kmaster.img (41G)
==> kmaster:  -- Volume Cache:      default
==> kmaster:  -- Kernel:
==> kmaster:  -- Initrd:
==> kmaster:  -- Graphics Type:     vnc
==> kmaster:  -- Graphics Port:     -1
==> kmaster:  -- Graphics IP:       127.0.0.1
==> kmaster:  -- Graphics Password: Not defined
==> kmaster:  -- Video Type:        cirrus
==> kmaster:  -- Video VRAM:        9216
==> kmaster:  -- Sound Type:
==> kmaster:  -- Keymap:            en-us
==> kmaster:  -- TPM Path:
==> kmaster:  -- INPUT:             type=mouse, bus=ps2
==> kmaster: Creating shared folders metadata...
==> kmaster: Starting domain.
It seems vagrant works smoothly with VirtualBox, and that's the default hypervisor it uses. I reviewed many issues where people had trouble running vagrant with the KVM hypervisor, and it all came down to the "vagrant-libvirt" plugin. No worries, I have uninstalled the KVM hypervisor now and am using VirtualBox, where it works as expected. I have also posted the issue on GitHub with vagrant support; let's see if I get a response about the vagrant-libvirt plugin not working. Thanks
Hi Indrajeet, thanks for watching. I have used libvirt/kvm successfully with this Vagrantfile with minor modifications. In the Vagrantfile, make sure to change the vm.provider block from virtualbox to libvirt; otherwise the memory and cpu settings inside the virtualbox provider block won't apply and libvirt falls back to its defaults. And I believe I also changed the provisioning shell scripts slightly. It was working perfectly well for me, but I stick with VirtualBox. Cheers.
I have deployed an nginx pod on node2 and created a NodePort service with nodePort = 30080, targetPort = 80 and port = 80, and now I am trying to access nginx from the master / base laptop, but I am unable to. I had a similar issue on AWS, which was resolved when I permitted traffic on 30080. I am trying to open port 30080 on node2 but unable to find a way. I did use the exec command and checked that nginx was installed and running. Thanks for coming back so quickly. Pradeep
@@pradeepchawla6643 I have not tried accessing a NodePort service from the master machine. But you should be able to access it with the IP address of any of your worker nodes. So your URL will be <worker-ip>:30080, which will take you to the nginx service. The actual nginx pod can be running on any worker node; you don't need to target the worker node that is running the pod. Any worker node IP will redirect you to the node that is running the pod. Make sure you don't have a firewall enabled on the worker nodes, otherwise you might have to open those ports on all the worker nodes. Cheers.
Hey, I am not able to ping kmaster from my desktop. I had tried this once before and was able to set up everything. As kmaster is not pingable from my desktop, I am not able to scp .kube/config to my home directory. Can you please help with this? Sandeep
Hi Sandeep, did you mean it worked when you tried previously and on your second attempt it didn't? The Vagrantfile provisions all three VMs with a private network in addition to NAT. So if you look at the "ifconfig" or "ip addr show" output on any of these VMs, you will see eth0 (which is NAT) and eth1 (which is the host-only network adapter). The NAT interface will have an IP like 10.0.2.15 and eth1 will have 172.42.42.100. Make sure you edit the /etc/hosts file on your desktop machine and add IP entries for kmaster; otherwise you won't be able to ping kmaster by name. But you can ping it using the IP address. Thanks
Hi Sandeep, have you added "172.42.42.100 kmaster.example.com kmaster" to your desktop's /etc/hosts file? Can you ping 172.42.42.100 from your machine?
Hi, excellent tutorial. But I got the following issue when testing the vagrant provisioning on Win10; it seems kubectl and kubeadm are not installed. Any suggestions?
#######################
kmaster: [TASK 1] Initialize Kubernetes Cluster
kmaster: [TASK 2] Copy kube admin config to Vagrant user .kube directory
kmaster: cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
kmaster: [TASK 3] Deploy Calico network
kmaster: -bash: kubectl: command not found
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found
Hi Ziv, thanks for watching. I have tested this vagrant environment on a Windows 10 machine and it worked. A few other users also confirmed that it works. In your case, it seems the bootstrap script failed to install the Kubernetes component binaries like kubeadm and kubectl. In order to troubleshoot, you can modify the bootstrap scripts in the vagrant-provisioning directory and remove all occurrences of ">/dev/null 2>&1", so that when you run vagrant up next time you can actually see what's going on. You might be able to see the errors. Please give it a go and let me know. Cheers.
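One way to see the effect of removing those redirections (demonstrated on a sample file here; in the real repo you would run the sed line against the bootstrap*.sh scripts instead):

```shell
# Demonstration: strip the '>/dev/null 2>&1' redirections that hide
# provisioning output. With them gone, 'vagrant up' shows each step's output.
printf 'yum install -y kubeadm >/dev/null 2>&1\n' > /tmp/bootstrap-sample.sh
sed -i 's|>/dev/null 2>&1||g' /tmp/bootstrap-sample.sh
cat /tmp/bootstrap-sample.sh   # redirection removed; command output now visible
```

Against the repo itself the equivalent would be `sed -i 's|>/dev/null 2>&1||g' bootstrap*.sh` run inside the vagrant-provisioning directory.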
After running vagrant up, the following error appears: kmaster: error: unable to recognize "/vagrant/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1". I changed it to apps/v1 but the issue persists.
Hi Ahmed, thanks for watching this video. Things have changed slightly with respect to API versions for certain resources since k8s version 1.16.0. I have already updated my github repo for that. Please do a git pull or a fresh clone of my kubernetes repository and then try again. Please let me know if it worked. Thanks.
@@justmeandopensource Working now, but I'm following video 34 (ELK). When executing the following command: curl --insecure -sfL 172.42.42.100/v3/import/hxrf5dhlpjc84nqszcpgj2vt9ffrqjfpsp66xbcdqj58l5t9gdmfwp.yaml | kubectl apply -f - I get: unable to recognize "STDIN": no matches for kind "Deployment" in version "extensions/v1beta1" unable to recognize "STDIN": no matches for kind "DaemonSet" in version "extensions/v1beta1"
Hi, I downgraded the version from v1.16.1 to v1.13.1. I think the latest version has an issue; it works as it should with version v1.13.1. Many thanks for your videos.
Ahh, you are configuring a cluster in your Rancher, I see. Yeah, a few other users reported that problem as well. I tried it too and found the issue. Basically, in k8s v1.16.0 they changed the API versions for a few resources. For example, in your YAML manifest for a deployment you would be using "apiVersion: extensions/v1beta1"; this has now changed to "apiVersion: apps/v1". So you have to update all your manifests. For the URL you pasted, which Rancher gives you when you import an existing cluster, the manifests provided by Rancher were for k8s versions prior to 1.16.0. Hopefully Rancher will update the documentation. Thanks.
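For reference, a minimal Deployment manifest after the 1.16 change looks like this (a sketch with made-up names; note that apps/v1 also requires spec.selector to be set explicitly):

```yaml
# Before Kubernetes 1.16 this would have been:
#   apiVersion: extensions/v1beta1
# From 1.16 onwards, Deployments/DaemonSets/ReplicaSets live under apps/v1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:            # mandatory in apps/v1
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```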
Hi Shravan, thanks for watching this video. I have been running Kubernetes clusters on LXC containers for a long time. Please follow the videos below that I did a few months ago. The first video is an introduction to LXC containers and getting started, and the second one is provisioning a Kubernetes cluster on LXC containers. Please change things as per your needs. th-cam.com/video/CWmkSj_B-wo/w-d-xo.html th-cam.com/video/XQvQUE7tAsk/w-d-xo.html Hope this helps. Thanks, Venkat
Hi Yogesh, thanks for watching this video. Unfortunately I haven't played with VMware as it is not open source. www.vagrantup.com/vmware/index.html It's a paid service to use the vmware provider in vagrant. Thanks.
Hi Kumar, thanks for watching. To provision multi-master Kubernetes, I would prefer Kubespray as it makes it really simple to add or remove nodes in an existing cluster. You can watch Kubespray-related videos in the playlist below. th-cam.com/play/PL34sAs7_26wOAqYsrIhtDaIviGlSkmfv9.html Cheers.
I am getting this error:
rajendar@ubuntu-elitebook:~/Desktop/kubernetes/vagrant-provisioning$ vagrant status
Current machine states:
kmaster running (virtualbox)
kworker1 running (virtualbox)
kworker2 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed above with their current state. For more information about a specific VM, run `vagrant status NAME`.
rajendar@ubuntu-elitebook:~/Desktop/kubernetes/vagrant-provisioning$ vagrant ssh kmaster
vagrant@kmaster:~$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
vagrant@kmaster:~$
But if I log in with root@172.16.16.100 it works fine.
Hi Venkat, I have followed all the steps from your video. Everything works fine except that when I ran "vagrant halt" I got this output:
vagrant halt
==> kworker2: VM not created. Moving on...
==> kworker1: VM not created. Moving on...
==> kmaster: VM not created. Moving on...
even though my VMs were running fine. What can be the issue, @justmeandopensource?
Hi Abdul, thanks for watching this video. I never had that issue because I never did a vagrant halt on this environment. I just searched the internet and found that a lot of people had this exact issue, and there were a few bug reports. One of the possible reasons for vagrant to forget about the VMs it created is that the vagrant or VirtualBox software was upgraded while the VMs were running. Did you upgrade your system? You could run vagrant global-status to look at the VM IDs, or just do vagrant destroy and redo the environment. Cheers.
Hi Anupama, thanks for watching. I am using this exact vagrant setup on a daily basis. May I know what it is you are trying to access from the browser? Is your host machine Windows or Linux?
Just me and Opensource Host machine is Linux. I want to deploy a web application in the k8s cluster and expose it to the outside world / internet, so I need to hit an IP and port to see my application. But where should I hit the IP? Using vagrant I don't have Mozilla or Google Chrome. So please help me out!
Just me and Opensource You use GCP to expose to the outside world!! Is there any way I can do that with vagrant? Like, when we install a k8s cluster using kubeadm, we use Mozilla to hit the IP and check whether my application is running. So similarly, how can I do that using vagrant?
@@mommyandagastyaa Let's take this example. Your host machine is Linux and you have installed k8s using my vagrant environment. If so, then you would have one master and two worker nodes with the below IP addresses. kmaster: 172.42.42.100 kworker1: 172.42.42.101 kworker2: 172.42.42.102 Now say you have deployed a web application and exposed it as a NodePort service, and the node port is (for example) 32323. You can access this node port on any of the nodes in your k8s cluster. From any browser on your host machine, you can visit any of the below URLs. 172.42.42.100:32323 172.42.42.101:32323 172.42.42.102:32323 Hope this makes sense.
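A hypothetical end-to-end version of the steps described above (deployment name, port numbers and the assigned node port are all made up for illustration):

```shell
# Expose an existing nginx deployment as a NodePort service, then hit the
# assigned node port from the host machine via any node's IP.
kubectl expose deployment nginx --port 80 --type NodePort
kubectl get svc nginx                # look up the assigned node port, e.g. 32323
curl http://172.42.42.100:32323      # any cluster node IP works, not just the
                                     # node actually running the pod
```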
Your videos are excellent; I am new to k8s. I tried this setup on a Windows machine using a git clone, but I get "status NotReady" for all three nodes (k8s-master/slave-1/slave-2). Any help would be appreciated. I get the following from kubectl describe node k8s-master:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
k8s-master NotReady
k8s-slave-1 NotReady
k8s-slave-2 NotReady
When I check cat /etc/hosts, it has 127.0.0.1 k8s-master (and similarly for the slaves), but ipconfig shows 192.168.33.10/11/12 (master/slave1/slave2).
I am getting this error on vagrant up: "VirtualBox is complaining that the installation is incomplete. Please run `VBoxManage --version` to see the error message which should contain instructions on how to fix this error." Please help here.
@@justmeandopensource Hi, thanks for replying. Error msg:
/home/vvdn/kubernetes/vagrant-provisioning# vagrant up
VirtualBox is complaining that the installation is incomplete. Please run `VBoxManage --version` to see the error message which should contain instructions on how to fix this error.
I am using Ubuntu 18.04 and my VirtualBox version is: VirtualBox Graphical User Interface Version 5.2.34_Ubuntu r133883
@@justmeandopensource Error when running vboxmanage --version:
kubernetes/vagrant-provisioning# vboxmanage --version
WARNING: The character device /dev/vboxdrv does not exist. Please install the virtualbox-dkms package and the appropriate headers, most likely linux-headers-generic. You will not be able to start VMs until this problem is fixed.
5.2.34_Ubuntur133883
Hello Venkat. I have Windows 10 and I'm trying to follow the steps you took, but it is failing on kmaster. Here is the error:
kmaster: [TASK 1] Initialize Kubernetes Cluster
kmaster: [TASK 2] Copy kube admin config to Vagrant user .kube directory
kmaster: cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
kmaster: [TASK 3] Deploy Calico network
kmaster: The connection to the server localhost:8080 was refused - did you specify the right host or port?
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: failed to load admin kubeconfig: open /root/.kube/config: no such file or directory
kmaster: To see the stack trace of this error execute with --v=5 or higher
The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong.
Although the master is created, it doesn't have any config file under /etc/kubernetes/ or /home/vagrant/.kube/.
Hi, thanks for watching. Although many viewers have confirmed the vagrant setup works on Windows, I haven't tried it there myself recently. I haven't used Windows for more than a decade. I can try setting up a VM and see if it works.
@@yashhirulkar909 You can try and remove all the output redirection code in the bootstrap scripts. Take a look at bootstrap.sh and remove ">/dev/null 2>&1" from all the lines that have it. So now when you do vagrant up, you can see more detailed errors and hopefully that will give you some direction. Cheers.
Your videos are excellent. One of the best on k8s. Thanks!!!
Hi Phat, thanks for watching and taking time to comment/appreciate. You made my day
Can't agree more
Thanks
I just started watching your videos and they are excellent! Thank you and keep making more. I am running k8s 1.17.3, virtualbox 5.2 and vagrant 2.2.7. So far so good.
Hi Roger, thanks for watching and for your interest in this channel.
I have this master playlist where I upload all Kubernetes videos one per week on Mondays.
th-cam.com/play/PL34sAs7_26wNBRWM6BDhnonoA5FMERax0.html
Also there is a separate playlist where I pulled out just the provisioning part of Kubernetes.
th-cam.com/play/PL34sAs7_26wODP4j6owN-36Vg-KbACgkT.html
I personally use LXC containers as kubernetes nodes.
Superb explanation. Four random videos in, and I like how you explain stuff. Thanks, you got yourself a subscriber.
Thanks for sharing. Congratulation for your work. I'm using KVM and after a few tweaks, it just works.
Hi Jaime, thanks for watching. I was personally using KVM with this vagrant environment for a while, but I hit a roadblock where I couldn't use MetalLB successfully with KVM networks.
Just did the setup on Windows 10 home virtualbox. Worked perfectly. Very high quality of vagrant setup scripts which would be immensely helpful in my other vagrant projects too :)
Thanks.
I love Vagrant. I much prefer using Vagrant+LXC boxes on Ubuntu over just about anything else. I've found Vagrant super easy to use, works 99% of the time, and flexible enough for my needs.
Hi Guys,
I just deployed on Windows 10. When I do vagrant ssh kmaster, it asks for a password...
Can you help me out, bro? I am struggling to install this on a W7 box. Please ping me on WhatsApp 7978976637
A late response to this video.. Tried with Windows 10 PowerShell, worked like a charm. Thanks for this great video
How did it work?? Can you guide me too? I am trying from Windows 7 and it doesn't work.
Hi ylcnky, that's perfect.
@@rajeshbastia8502 I have tried this vagrant environment in my Windows laptop as well and it worked.
@Rajesh Bastila I didn't do anything different from what is shown. Only the location of the hosts file differs from Linux systems.
@@ylcnky9406 That's what I would expect.
Very good. I installed it on Windows 10 and it works. You saved me a lot of money; I don't need to buy 3 laptops. Thank you so much!
Hi Frank, thanks for watching.
Thank you so much. Your content is very helpful for setting up the k8s cluster.
Hi Umesh, thanks for watching. Cheers.
I have Windows 10, but I followed along with this video and it works perfectly fine. Thank you, great help.
Hi Paresh, thanks for watching. Cheers.
Thanks! It has many answers to the questions I have been searching for over the last couple of months. Thanks a ton!
Hi Venkat, thanks for watching this video.
#kudos Everything needed to start as a newcomer is here, and I appreciate your replies to questions. Thank you.
Hi Ram, many thanks for your interest in this channel.
well done sir. Best k8s video.
Hi, thanks for watching.
Super! Tried it on Windows 10 with Ubuntu image and it worked fine for me :)
Some Great stuff!!
Hi Anshul, many thanks for watching. Cheers.
It worked for me on Mac, thank you ! 🙏🏽🙏🏽
Cool. Thanks for watching.
great job man!!!! And great advices too , great explanations! You are perfect!
Hi, Thanks for watching.
@@justmeandopensource Oh, just one precision: I saw that your github repo has evolved since your video. I took it without understanding exactly why tigera and calico are there and what they do exactly. They seem to configure the network with security and so on... I'll figure it out.
Thanks a lot. I am all set with my three-node cluster. Really appreciated.
So I hope you got it all sorted. Well done Sandeep.
Hey, works great... required a slight modification for a CentOS 7 install... e.g. the material is now out of "misc", and bootstrap.sh required adding kubernetes-cni-0.6.0 to the list of k8s yums.
Thank you so much..it worked in Windows 10...it saved a lot of time
Hi Jayashree, thanks for watching.
Nice tutorials. Thanks
Hi Ashutosh, thanks for watching.
super easy! thank you. Your videos are excellent.
Thanks for watching.
video is excellent. keep it up
Hi Aditya, thanks for watching. Cheers.
Hello,
In your Github repository, specifically in the master bootstrap script, you forgot the COPY CONFIG FILE step:
sudo cp /etc/kubernetes/admin.conf /home/vagrant/.kube/config
chown -R vagrant:vagrant /home/vagrant/.kube
That's why many of those who tried your tutorial got the following ERROR message:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
And by the way, can you tell us why you changed from CentOS to Ubuntu and from Docker to containerd? Is it for security reasons?
Thank you for the tutorial it's really helpful.
Hi, thanks for watching.
I originally had that step as part of bootstrap_kmaster.sh. But I wanted to keep the bootstrap steps to a minimum and decided to remove unwanted steps.
Generally you will bootstrap a cluster and then copy the kube config file to your local machine and then interact with the cluster. You shouldn't be using kubectl commands on the master or worker nodes although it doesn't harm in any way. But leave the cluster for what it is supposed to do. So I intentionally removed that step. Always interact with your cluster from your local machine. Don't ssh to any of the cluster nodes unless you are an admin and know what you are doing.
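The workflow described above (copying the kube config to your local machine and interacting with the cluster from there) might look roughly like this; the master IP and root ssh access are assumptions based on this vagrant setup, so adjust to yours:

```shell
# Run on your local machine, not on a cluster node.
# Assumes the master is reachable at 172.16.16.100 and allows ssh as root,
# as in this vagrant environment.
mkdir -p ~/.kube
scp root@172.16.16.100:/etc/kubernetes/admin.conf ~/.kube/config
kubectl cluster-info   # should now report the vagrant cluster, not localhost:8080
```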
Regarding the switch to containerd from docker runtime, see my other video
th-cam.com/video/AkfE8PBQnPs/w-d-xo.html
And I switched from CentOS to Ubuntu because I felt it's a lot easier to manage Ubuntu with containerd. Just my preference.
Cheers.
Thanks a lot Venkat! Vagrant is wonderful!!! It worked well here! Building and configuring a VM in VirtualBox on my small display is painful haha.
I have installed 2.2.6, and it seems the command "vagrant snapshot save kubernetes-clean-base" to save all VMs no longer works, or needs some tricks, so I saved them one by one, np!
# vagrant snapshot save kubernetes-clean-base
The machine with the name 'kubernetes-clean-base' was not found configured for
this Vagrant environment.
Don't you have to specify the name of the VM in your vagrant snapshot save command if you have multiple VMs in your Vagrantfile?
Thanks for the great tutorials!
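With a multi-machine Vagrantfile, the VM name goes before the snapshot name, so saving each VM one by one looks like this (a sketch using the VM names from this environment):

```shell
# Syntax: vagrant snapshot save [vm-name] <snapshot-name>
# With multiple machines defined, name each VM explicitly.
vagrant snapshot save kmaster  kubernetes-clean-base
vagrant snapshot save kworker1 kubernetes-clean-base
vagrant snapshot save kworker2 kubernetes-clean-base
```

Restoring later is the mirror image: `vagrant snapshot restore kmaster kubernetes-clean-base`, and so on per VM.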
On Windows 10, after running vagrant up, I got the error below. Can you please look into it?
kmaster: [TASK 3] Deploy Calico network
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Hi Venkat, I tried your Vagrantfile to create a Kubernetes cluster on an AWS EC2 instance, but got an error message that VT-x is not available. Do you know if there is a way I can create a k8s cluster using Vagrant + VirtualBox on AWS EC2? Thanks
Hi Susheel, thanks for watching. AWS EC2 instances are virtual machines themselves. In order to run a virtual machine inside a virtual machine, you need nested virtualization capabilities. I once tried that on an EC2 instance of a bare-metal instance type.
Thanks so much for your awesome videos. I use your vagrant configuration but I can't access the internet inside the docker container. How should I solve this problem?
Hi Sajjad, thanks for watching. My vagrant configuration has a second network interface added to all the VMs. You should have internet access on the VMs, and if so, anything running on those VMs should have internet access too. If you could explain your steps in detail, I can test it on my machine.
Hey, thanks for the video. The vagrant script was successful, but I see the two errors below during the run. Is this an issue?
kmaster: W0102 02:47:44.460190 8381 validation.go:28] Cannot validate kube-proxy config - no validator is available
kmaster: W0102 02:47:44.460323 8381 validation.go:28] Cannot validate kubelet config - no validator is available
Hi Vijay, thanks for watching. Yeah, I am aware of those errors and didn't have time to look into it. But despite those errors, the cluster is in working order. Cheers.
Thanks so much for this tutorial.
Thanks for watching.
Hi Venkat, I am a big fan of your k8s videos.
During cluster creation, I am facing the issue below:
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: W0202 17:04:40.512993 8601 validation.go:28] Cannot validate kube-proxy config - no validator is available
kmaster: W0202 17:04:40.513031 8601 validation.go:28] Cannot validate kubelet config - no validator is available
However, the cluster creation script will continue and subsequently cluster will deploy successfully.
Thanks.
Hi Claudio, many thanks for following my videos. Yes those are warnings that you can safely ignore. I have never had any problems with cluster operations despite seeing those warnings.
Thanks, man! Very helpful!
You are welcome!
Hi Venkat, great video as always. Is there any way you could add, maybe in your git repo, a vagrant config for a multi-master k8s setup? Thanks and keep up the good work!
Many thanks for watching this video. I used Kubespray to deploy a multi-master multi-etcd HA Kubernetes cluster. I also used "Kubernetes The Hard Way" to deploy similar HA cluster. But yet to make videos of them. Will soon record a video on it.
Thanks,
Venkat
Awesome! can't wait to see the video.
Is it possible to set up autoscaling, whereby vagrant provisions more workers as the cluster needs them, and scales them down when not needed?
Thanks for the tutorial. I would need help to set up and access the Kubernetes dashboard on the master and nodes. Can you please suggest steps to do the same?
Hi Avinash, thanks for watching this video. I have already done a video on deploying kubernetes dashboard and you can check it out in the below link. Hope that will help you.
th-cam.com/video/brqAMyayjrI/w-d-xo.html
Thanks.
This is really quality stuff, Karthik.
Would it be possible to control the master from my Win10 machine? I have set up the cluster but couldn't run kubectl from my Win10 host.
Hi Manoj, thanks for watching. This vagrant environment should work across all platforms. I have tested it on Linux and Mac but not on Windows. But I have been told by a few viewers that it worked flawlessly in their Windows environments as well.
When you say you couldn't run kubectl, what do you mean exactly? Can it not connect to the cluster?
@@justmeandopensource I had set up the cluster successfully and had copied the cluster hostnames to /etc/hosts, but while doing scp of the .kube/config file I got an error, so I manually copied it from the master to the host under .kube/ (26:12). cluster-info throws an error: "You must be logged in to the server (the server has asked for the client to provide credentials)". I guess the master couldn't authenticate the host.
Hi, resolved the issue by setting the context on the host machine:
"kubectl config use-context <context-name>"
@@manojchander1382 perfect.
Hi Venkat,
Your videos are helpful lot to learn about K8s.
May I know what changes are required in this Vagrantfile to provision a k8s cluster (1 master and 2 worker nodes) on Windows 10? I have tried on my end but didn't have any luck.
It would be great if you could help here.
Hi Aneel, thanks for watching this video. It should work without any modifications. I have heard from a few viewers that this Vagrantfile is working on their Windows laptops. I have tried it once as well. What's the problem you are facing? Or where exactly are you facing the problem?
Hi bro, your explanation is really awesome. It seems you deleted the Docker and Flannel files from your git repo. I want to practise it.
Can you help?
Hi Manoj, thanks for your interest in this channel. You can access those from the 2020 branch. Cheers.
Hi Venkat, that's one of the best videos. It's really helpful. Thank you. Can you please tell me where I can change the port forwarding rules?
Hi Akshay, thanks for watching.
Depending on what port you want on your host machine to be forwarded to what port on the guest VMs, you can configure them in the Vagrantfile in the respective blocks. There is a top kmaster block and bottom kworker block which is a loop for two workers.
You can use something like below, but you can't have it in the global section outside of the blocks, as that would try to forward host port 8080 to guest port 80 on all the nodes and cause a conflict.
config.vm.network "forwarded_port", guest: 80, host: 8080
So you have to use some logic.
You can put this line inside the kworker block (note the guest/host order):
config.vm.network "forwarded_port", guest: 8080, host: "8#{i}".to_i
This would map host port 81 to port 8080 on kworker1 and host port 82 to port 8080 on kworker2.
Hope this makes sense.
Cheers.
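To put the above in context, here is a minimal sketch of how the worker loop could carry a per-node forward. The loop shape and variable names are assumptions based on the description in the video, not copied from the repo, and the host ports are examples (unprivileged ports avoid needing root on the host):

```ruby
# Each worker gets its own host port forwarded to guest port 8080:
# host 8081 -> kworker1:8080, host 8082 -> kworker2:8080
(1..2).each do |i|
  config.vm.define "kworker#{i}" do |node|
    node.vm.network "forwarded_port", guest: 8080, host: 8080 + i
  end
end
```

Putting the forward inside the per-node block is what avoids the host-port collision described above.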
Very good, really helpful.
Thanks for trying it out Sandeep. Glad it was helpful to you.
Hello Sir,
I have one doubt. Will the IPs for the nodes be static? Or do we need to manually make them static in /etc/network/interfaces?
Will this setup work on OSX? And how do you figure out the IPs for the master and worker nodes? Thanks
Why did I not come here earlier!!
There is a right time for everything :P
In this vagrant setup what is the virtualization provider? docker Or libvirt ?
Hi, I used VirtualBox.
Thanks Venkat for sharing your knowledge and wonderful teaching. I have installed the cluster successfully on Mac and am even able to run kubectl commands from the host machine. But when I connect to my corporate VPN I am unable to run kubectl commands from the host as well as on kmaster. Any clue? I need your help solving this.
Hi Venkat, any update?
Hi, thanks for watching. So the same setup in your Macbook that worked stops working when connected to VPN?
Sir, I am getting this error. Can you please help me?
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Hi Yash, thanks for watching. You can remove the output redirection in the bootstrap scripts to see where and why it's failing. Wherever you see ">/dev/null 2>&1", just delete that part and you will see the actual output.
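For example, the redirections can be stripped in one go with sed. This is a throwaway sketch (demonstrated on a temp copy; in practice you would run the sed line against the bootstrap scripts in the vagrant-provisioning directory):

```shell
# Strip every ">/dev/null 2>&1" from a bootstrap script so command
# output becomes visible during "vagrant up" (demo on a temp copy).
printf 'yum install -y -q kubeadm >/dev/null 2>&1\n' > /tmp/bootstrap-demo.sh
sed -i 's| *>/dev/null 2>&1||g' /tmp/bootstrap-demo.sh
cat /tmp/bootstrap-demo.sh
```

After this, re-running vagrant up shows the real output of each provisioning task.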
Hi, thanks for the wonderful videos. May I know why you are taking snapshots of the VMs? What if I don't take the snapshots created through vagrant? Thanks!!
Hi Swaraj, thanks for watching. Snapshots are taken just in case I want to go back to a particular state. Nothing goes wrong if you don't take snapshots.
Hi Venkat,
I have installed 3 nodes via Vagrant on Windows 10. The cluster installation looks fine. But how can I set up kubectl locally so that it can access all 3 nodes? Please suggest.
Hi Amrita, thanks for watching. I am not a Windows person. The way to access the cluster using kubectl is the same on all OSes. You need to place the Kubernetes configuration file under the .kube directory in your home directory. On Windows you may also need to set the KUBECONFIG system/user environment variable to point to the location where you copied the kube config file.
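On a Unix-like host those steps look roughly like the sketch below. The paths are the usual kubectl defaults; the scp line is commented out because the kmaster hostname only exists inside this vagrant setup:

```shell
# Place the cluster config where kubectl looks for it by default
mkdir -p "$HOME/.kube"
# First copy it off the master, e.g.:
#   scp vagrant@kmaster.example.com:/home/vagrant/.kube/config "$HOME/.kube/config"
# Then, if kubectl doesn't pick it up automatically (e.g. on Windows),
# point it at the file explicitly:
export KUBECONFIG="$HOME/.kube/config"
```

With KUBECONFIG set, every kubectl invocation in that session talks to the vagrant cluster.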
Just discovered your channel, and I see there is a lot to learn here. Thanks for all your effort! Starting with this one: it's an old video, but I see you recently updated the Vagrantfile in the kubernetes GitHub repo. Everything goes well, but the kubectl get componentstatus command shows:
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
Any idea what could be the cause of this error? Thanks in advance.
Hi Ivo, thanks for watching. That particular endpoint is being deprecated. You can safely ignore that error when you do a component status get request. It doesn't affect cluster functionality in any way. Cheers.
@@justmeandopensource Wow, what a quick response! Indeed I saw a deprecation warning, but I saw you using the command in another video and it works for me too; it depends on the versions you use in that video, of course. Sorry for asking this, but thanks for your quick response. Also great that you keep your repos updated so we can still follow your videos recorded more than a year ago. I'm moving on to the next videos in your Kubernetes playlist.
I know this is an old video but how can vagrant be run on bare metal instead of using virtualbox?
Not sure what you exactly mean. I ran vagrant on my Laptop that brings up Virtual machines in VirtualBox.
Hi, what is the difference between calico.yaml and kube-flannel.yml? I am setting up an environment for production; what do you recommend? Thanks!
Hi Darvin, thanks for watching. There are lots of overlay networks. I have only explored Flannel and Calico; there is Weave Net as well, and a few others. Flannel is simple but doesn't come with lots of advanced features. For example, you can't use pod network policies with a Flannel network. Calico and Weave Net are more advanced in terms of the features they offer. So if you are going to use it in production, I would advise you to evaluate each of these for your needs. It won't be easy to switch the overlay network once you have deployed it without some downtime, so make the right choice in the first place.
The following link might be helpful.
rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/
Thanks.
@@justmeandopensource Thanks!
@@VinuezaDario You are welcome. Cheers.
It is working properly on my Ubuntu machine, thank you so much.
But can someone please explain: I want to make 3 worker nodes and 1 master node. How could I do that?
Hi, thanks for watching. You can update the Vagrantfile and set "NodeCount = 3". Then in the bootstrap.sh script add the below DNS entry in [TASK 1].
172.42.42.103 kworker3.example.com kworker3
Now you can do vagrant up.
Remember to first destroy your existing vagrant environment (vagrant destroy -f) before following above process.
Cheers.
@@justmeandopensource tysm bro you're soooooooo awesome love youuu....
@@abdulhaseeb1224 No worries. Thanks for your interest in this channel. Cheers.
Hello Venkat, nice video!
One quick question, what is this syntax about in Ruby?
node.vm.network "private_network", ip: "172.16.16.100"
There's no equal sign "=" in between
Hi, thanks for watching. That's Ruby syntax.
@@justmeandopensource Yeah, I know it's Ruby syntax. Is there any keyword about this syntax that I can search on google?
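To answer the search-term question here: the line is an ordinary Ruby method call with the parentheses omitted, and `ip: "172.16.16.100"` is a trailing hash argument whose curly braces Ruby lets you drop. Useful search terms are "Ruby implicit hash argument" and "Ruby keyword arguments". An illustrative snippet (not Vagrant's actual internals):

```ruby
# A method that, like Vagrant's DSL, takes a type plus an options hash
def network(type, opts = {})
  "#{type} #{opts[:ip]}"
end

# Parentheses and braces omitted, Vagrantfile style:
a = network "private_network", ip: "172.16.16.100"
# The same call, fully explicit:
b = network("private_network", { :ip => "172.16.16.100" })
puts a == b  # prints "true" -- both forms are the same call
```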
INCREDIBLE!!
Thanks for watching this video.
I just started watching your videos and they are excellent! Thank you and keep making more. When I am doing vagrant up I am getting the below error. (Also, my CentOS is itself on a VM; it looks like Windows 10 → VM → CentOS, where I installed Vagrant, plus one more VM. My VirtualBox is 6.1 and Vagrant 2.2.8.)
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["startvm", "0ae04c7d-1c65-4c19-ad99-01e0a0af8a19", "--type", "headless"]
Stderr: VBoxManage: error: VT-x is not available (VERR_VMX_NO_VMX)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component ConsoleWrap, interface IConsole
Hello Venkat, I am probably very late. I tried this setup earlier and it was working fine, but I don't know why, when I create a cluster again, it gets stuck on TASK 5 and never returns. Can you please suggest or help me out?
I also checked through the VirtualBox console when the script was running , there was one error which says "dependency failed for hyper-v kvp protocol daemon". Any idea on this one ?
@@kunaljain5266 The Vagrantfile I have is designed for either VirtualBox or KVM/libvirt. It won't work with Hyper-V unless you modify the provider block. You can remove all the /dev/null redirections in the bootstrap script and see where exactly it's failing. I don't have a Windows machine to test this, unfortunately.
Sure @venkat, I will try this and let you know. Really not sure why, but the same script was working around 20 days ago. Will keep you updated on this. Thanks a lot again :)
@@kunaljain5266 no worries
Thank you very much. It's working :-)
Cool. Keep learning Kubernetes. Very interesting stuff.
@Just me and Opensource Hi Venkat,
It's still confusing me: do we really need Vagrant in every Kubernetes environment? I have started practicing with GCP; do we have any other Vagrantfile?
Hi Raarth, thanks for watching this video.
Vagrant is a tool to provision virtual machines. I used vagrant to create 3 virtual machines and provision them as Kubernetes clusters. Vagrant is best suited for development environment where you want to spin up multiple virtual machines on your machine. If you are using GCP, you can use other provisioning methods like Kubespray.
@@justmeandopensource Thanks for the quick response bro, appreciate the videos posted for us..it means alot..
@@raarth_gameplays No worries. You are welcome. Cheers.
Hi Venkat, thanks for the video. I have a doubt about modifying the cluster info manually. I set up my cluster with kmaster, kworker1 and kworker2. After some time I changed the kworker1 name to kubeworker1 with the hostnamectl command on the node machine, and after a restart the node status shows NotReady. Is it possible to modify the cluster info manually? Where is all the cluster info stored (in etcd or elsewhere), and how can I modify it there without using kubeadm to create a token? Is there any way like that? Thanks in advance.
I haven't tried that scenario. Let me try it out and find what is possible. I will get back to you. Thanks
Hi Siva, I just played with it and uploaded a video in case it helps others in a similar situation. Basically, there is no easy way to rename a node. You can only delete the node and join it back to the cluster.
If you have subscribed to my channel, you should have received a notification about this new video.
Or you can check it out in the below link. Thanks.
th-cam.com/video/TqoA9HwFLVU/w-d-xo.html
Hello Venkat, thanks for your video. I need your guidance. I have only one RedHat Linux 7 VM in my environment and I want to set up master & worker K8s nodes. Can you guide me on how to set this up? Can we use Vagrant on RedHat Linux 7 to set up a K8s cluster?
Hi, yes you can set up the cluster using vagrant on RHEL7. Install VirtualBox and Vagrant and just do vagrant up. Or you can use kvm/libvirt instead of VirtualBox.
@@justmeandopensource Could you please share the link and command to install Vagrant on RHEL 7?
@@visva2005 I don't have a RHEL7 machine to test but it should be similar to the process you follow on CentOS 7 provided you have the redhat subscription to install packages.
yum install vagrant
@@justmeandopensource thanks. Just got the rpm from releases.hashicorp.com/vagrant/2.2.15/vagrant_2.2.15_x86_64.rpm and able to install.
@@visva2005 cool
only 228 likes? it deserved way more than that
Hi Rajesh, thanks for watching. Sadly the video didn't reach that many people.
I am trying to do this on Windows 11 and am getting the following errors.
Please help.
node01: SSH auth method: private key
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
Hi Venkat, I am trying to install a specific version of k8s by running yum install -y -q kubeadm-1.13.2 kubelet-1.13.2 kubectl-1.13.2 >/dev/null 2>&1. During installation it gives the error kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found. Can you please advise?
Hi, thanks for watching. Let me try this in my environment and update you.
In the bootstrap.sh script, we are installing kubeadm, kubelet and kubectl in one go. Version 1.13.2 of kubeadm has certain dependency issues.
Update the bootstrap.sh script, remove the yum install line and change it like below.
echo "[TASK 9] Install Kubernetes (kubeadm, kubelet and kubectl)"
yum install -y -q kubectl-1.13.2
yum install -y -q kubelet-1.13.2
yum install -y -q kubeadm-1.13.2
I just tested and it's working fine.
@@justmeandopensource thank you
neat, thanks a lot for this! Any particular reason for using flannel as opposed to calico?
Hi Rutwick, thanks for watching this video. Flannel seemed very simple and straightforward. Other than that no specific reason.
Thanks,
Venkat
@@justmeandopensource got it, thanks for the reply.
I am facing a problem: after this setup my pods are not starting; they show an ImagePullBackOff error.
I tried pulling the image before running the pod, and then the pod started, so the problem is that the image is not getting pulled.
What should I do?
Hmm, that's strange. Is that happening on any particular worker node, or on all nodes? That usually means either the container image you specified in the manifest (or during the kubectl command) is not a valid image, or there is some networking issue pulling images.
@@justmeandopensource I don't think there is any problem with the image, because I tried to pull different images and none of them are getting pulled.
The main problem might be that my nodes are not getting an internet connection, because when I start my nodes with vagrant up, around the 10th-11th line of every node's startup it shows (connection not building, retrying) on the master node as well as on the worker nodes.
This shows 4-5 times, then the rest of the process continues.
@@abdulhaseeb1224 Okay so then there is a bigger problem to solve. Just try recreating the cluster with vagrant if it helps.
@@justmeandopensource So I should first delete the current cluster, then build a new one again with the same process?
Am I getting it right?
@@abdulhaseeb1224 Yup.
The below command, run in the directory where you have the Vagrantfile, will destroy all the VMs.
$ vagrant destroy -f
Then
$ vagrant up
Thanks a lot for this series, it's really helping me. I am trying to use the vagrant-vsphere plugin to create VMs with Vagrant in my vSphere. I can get it partly working, but not fully, because the CentOS template I use is different. Is there any way I could get the Vagrant CentOS box used here, so that I could create a template of it, upload it to my vSphere and use it to create VMs there?
Hi Maria, thanks for following my video series. I used the official centos/7 virtualbox provider box in this video.
app.vagrantup.com/centos/boxes/7
It also has a provider for vmware. I haven't tested that. May be you could use that.
Or you can follow my tutorial to get the centos vm created in Virtualbox using vagrant and then export it as ova and then import into vmware.
All you need is a base centos machine to work with. Rest of the items are all done by the provisioner script. So it shouldn't matter which provider you use.
Thanks
@@justmeandopensource : Thanks Venkat. The VM box doesn't seem to be the issue. The main issue I face with vSphere is that I am unable to define the IP for the machines. Here in the video you defined 172.42.42.100 for the master, and the same is used for --apiserver-advertise-address. The VMs created in vSphere get their IP once they are powered on, so I am stuck there and would like to get past it to proceed with the k8s provisioning. Do you have any idea how I could solve this issue?
@@MariaJossy
In the case of using Vagrant with VirtualBox, the default first network interface is a NAT adapter which gets an IP like 10.0.2.15, and every VM gets the same IP. That's the reason I introduced the second network adapter, so the machines have 172.42.42.100/101/102 for the three VMs. Since each VM has two network interfaces, I had to use --apiserver-advertise-address to specify the interface that should be used for cluster communication.
Okay. When the VM comes up, can you check how many interfaces it has and what their respective IPs are? Did the VM get 172.42.42.100 address as we defined in the vagrantfile?
Thanks
@@justmeandopensource : No it did not get the address as defined in the vagrant file. It got the DNS name as defined, but not the IP.
@@MariaJossy Do you mean kmaster.example.com, kworker1.example.com and so on?
I'm using an AWS Ubuntu server, so should I edit the Vagrantfile?
Hi, thanks for watching. You don't need to change the vagrantfile. You only need to make sure that nested virtualization is supported on your Ubuntu server in AWS and install the dependencies like vagrant, virtualbox. The cluster will be provisioned with CentOS 7 machines. Cheers.
@@justmeandopensource I'm using an Ubuntu server but it doesn't support nested virtualization. Now what should I do?
@@mirrahat4105 Hi, I found a Reddit discussion which mentioned that you could go for an i3.metal instance for virtualization capabilities, but it could be expensive.
I'm new to k8s. Do I have to use the same IP address for kmaster and the pod network, or do I have to give my own IP address and CIDR block for the pod network to initialize the cluster?
Are you using a NAT or bridged adapter to provide internet to your VMs? For the master, do we need 2 network interfaces?
@@kumarvedavyas5631 Thanks for watching this video. You don't have to do anything manually; it will all be taken care of by Vagrant. Vagrant will set up a second network interface on a private (host-only) network, not a bridged one. Just git clone the repository, cd to vagrant-provisioning and then do "vagrant up". You will have your cluster ready. Unless you are installing the cluster by yourself, you don't have to worry about configuring any of those. Thanks.
@@justmeandopensource thank you, i will try it
Cool
@@justmeandopensource thanks a lot for u... Its working for me...
Hi Venkat, thanks for the amazing tutorial. I am trying from Windows 7. I am able to run all the VMs; however, it looks like kubeadm and kubectl are not getting installed.
Is there any new Vagrantfile you created? Because while running vagrant, while creating the master node, the following errors come up after [TASK 3] Deploy Calico network:
kmaster: -bash: kubectl: command not found
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found
Hi Rajesh, thanks for watching. My Vagrant files should work on Windows 7 as well; I tried it a while ago. I also got confirmation recently from another user that it was working perfectly fine on Windows 10 in PowerShell. However, I am not a Windows person, to be honest. Cheers.
Hi Venkat. I am running a two-machine setup: one with the latest Rancher running as a Docker container, and one other host machine with Vagrant provisioning (VirtualBox), both on MX Linux. When I try to add the cluster to Rancher, the cattle-cluster-agent pod keeps restarting with the following error:
ERROR: {{IP of rancher host}}/ping is not accessible (Failed to connect to {{IP rancher host}} port 443: Connection timed out
I can ping and curl the rancher host IP from the K8S cluster host. I can also ping and curl the rancher host IP from within the kworker node where the pod is failing. I came across this github.com/rancher/rancher/issues/18832 but nothing there helped. Any ideas? :)
I am unable to scp into master node
ssh: connect to host 172.16.16.100 port 22: Connection timed out
Hi, connection timed out indicates that either there is a firewall issue or the kmaster VM isn't up. You can rule out a firewall issue, as I have disabled the firewall in the bootstrap script when provisioning the VMs. Make sure the VMs are running.
I was having trouble running kubectl commands from the worker nodes; it looks like the admin.conf needs to be copied as config into the .kube directory on the workers too. I added the following to the bootstrap_worker.sh file:
mkdir /home/vagrant/.kube
sshpass -p "kubeadmin" scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no kmaster.example.com:/home/vagrant/.kube/config /home/vagrant/.kube/config 2>/dev/null
sudo chown -R vagrant:vagrant /home/vagrant/.kube
Cool. That's the way to go. The initial cluster config admin.conf is available only on master node and needs to be copied to any machine that needs access to the cluster including worker nodes.
Hi Bro!!! Thanks for your classes.
I have installed Kubernetes with Vagrant, but the worker nodes are not listed when I use the command "kubectl get nodes" on the master node. Please provide me with a solution.
Hi Rangisetti, thanks for watching.
Let's troubleshoot this.
Could you please paste the output of below commands?
1. kubectl version --short
2. kubectl cluster-info
3. kubectl get nodes
Thanks.
@@justmeandopensource Thanks!! Please find the Output
rangisetti ~ kubectl cluster-info
Kubernetes master is running at 172.42.42.100:6443
KubeDNS is running at 172.42.42.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
rangisetti ~ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster.example.com Ready master 172m v1.16.2
localhost.localdomain NotReady 149m v1.16.2
rangisetti ~ kubectl version --short
Client Version: v1.11.0+d4cacc0
Server Version: v1.16.2
rangisetti ~
Hi Rangisetti, thanks for the outputs. It seems the worker node is not getting provisioned properly; it should have the name kworker1.example.com and not localhost.localdomain.
Are you running the vagrant up command in the vagrant-provisioning directory, or are you copying the Vagrantfile to some other directory? I see you have changed the number of worker nodes to 1 in the Vagrantfile; that shouldn't be a problem.
I have been using this vagrant environment for a very long time and its working absolutely fine.
@@justmeandopensource I did the installation the same way you did. The problem during installation seems to be that the SSH connection is getting disconnected in my case.
I will troubleshoot.
Thanks for your support.
@@rangisettisatishkumar5491 Okay. You can remove the output redirection in the bootstrap scripts to see what's going on. In all the shell scripts, wherever you see 2>/dev/null, just remove it and you will see the output or errors while running vagrant up. Thanks.
Thanks for the video. I tried all the steps but am still getting an error: when I connect to the master node and run kubectl get cs, it states that controller-manager and scheduler are unhealthy (connection refused). It also gives the warning "Warning: v1 ComponentStatus is deprecated in v1.19+". Kindly advise.
Hi Deepak, thanks for watching. May I ask you for a few more details?
What is your host operating system? Have you made any changes to the vagrant file or you followed every step as it is in this video? Also if you could paste the command outputs in pastebin.com and share it with me, I can take a look and try it in my environment. Cheers.
@@justmeandopensource Thanks for the prompt reply. I am running Windows 10 with Vagrant installed on top of it, running kubectl commands directly from the master node (SSH). I can create a Pod or Deployment without any issue. Not sure if that error is due to a liveness/readiness probe. I have not made any changes to the Vagrantfile.
*****************************************
Command Output:
[root@kmaster ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
Hope this helps
@@Deepak9728 I remember seeing this error; that particular command, kubectl get componentstatus, is about to be deprecated. So I wouldn't worry too much if your cluster is running fine apart from this command. A few other viewers also mentioned this despite their clusters being fully operational.
@@justmeandopensource Wow, good to know that. Again, thanks for your quick response. Still watching your videos; they are amazing.
@@Deepak9728 Thanks for your interest. Cheers.
Hi, enjoying this series. Thanks for the effort you have put in. Do you know how to use this setup to create other versions of k8s? I have managed to create v1.11.x upwards, but nothing below. I am trying to create v1.9.0, but looking at systemctl status kubelet I see a lot more args in the cgroup param. The error I am getting is at the deploy Flannel task -> unable to contact 172.42.42.100:6443, are you sure you have the correct port. Any ideas?
Hi John,
I just tested and I was able to successfully deploy Kubernetes cluster version 1.9.0 without any issues.
I have added vagrant-provisioning files for 1.9.0 in my github repository.
github.com/justmeandopensource/kubernetes/tree/master/misc/vagrant-provisioning-by-version
Please check if this helps you. I might do a video on this later. I have got videos scheduled for the next 2 months (one every Monday). So will add this to the list. Thanks for bringing this to my attention.
Thanks,
Venkat
@@justmeandopensource Great, thanks, I will try again when I get home from work. Can I be really cheeky and request a video on the aggregation/extensions API (configuring the aggregation CA certs etc.) to be added to your list, if you have not already planned it? There are a lot of videos on k8s, but I have found yours amongst the most accessible for beginners. Thanks
Yeah, sure. I will have to play with it and understand it completely, and then will definitely make a video. No worries. Thanks
Hi, Can we upgrade the Kubernetes cluster by kubeadm when originally cluster was build using kubespray?
Hi Ashu, thanks for watching. It will be more involved doing it that way. But if you used Kubespray, why would you want to upgrade manually with kubeadm? Kubespray is meant for these tasks in an automated way. Cheers.
@@justmeandopensource I am trying to do a live upgrade so that I can manage the upgrade with zero downtime.
Also, I just tried to create a cluster using your repo, which gives me Kubernetes version 1.18.0, the latest. How can I specify a particular version?
@@ashurana31 If you are using my vagrant environment, then you can modify the bootstrap.sh script and specify the version of kubeadm, kubectl, kubelet for installation.
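A sketch of what that change to bootstrap.sh could look like. The version number is just an example, and the final line is echoed rather than executed here since yum only runs inside the VM:

```shell
# Pin a specific Kubernetes version instead of installing the latest
K8S_VERSION=1.17.3   # example version
PKGS="kubeadm-${K8S_VERSION} kubelet-${K8S_VERSION} kubectl-${K8S_VERSION}"
# Inside the VM's bootstrap.sh, the real line would be:
#   yum install -y -q $PKGS
echo "yum install -y -q $PKGS"
```

As noted earlier in the thread, with some versions the three packages may need to be installed one at a time to avoid dependency resolution issues.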
May I know what version your VirtualBox is? I tried to run your script using Vagrant 2.2.7 and VirtualBox 6.1 on Windows 10 (PowerShell). I am having this issue:
==> kmaster: Running provisioner: shell...
kmaster: Running: C:/Users/DEFAUL~1.LAP/AppData/Local/Temp/vagrant-shell20200404-9520-1x03bq7.sh
kmaster: [TASK 1] Initialize Kubernetes Cluster
kmaster: [TASK 2] Copy kube admin config to Vagrant user .kube directory
kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory
kmaster: [TASK 3] Deploy Calico network
kmaster: -bash: kubectl: command not found
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
PS D:\Projects\kubernetes\vagrant-provisioning>
Then it stopped provisioning the workers.
Hi, thanks for watching. I don't use Windows. On my Linux machine I am running VirtualBox version 6.1.4, and it has always worked, even with previous versions. But I once had a comment from another viewer that this vagrant environment doesn't work properly with VirtualBox 6 on Windows, and his advice was to continue using the latest 5.x version. I haven't researched that error much, though.
How can we use the config file in case we are working on AWS and a local vagrant setup?
Hi Narendra, thanks for watching. I don't get your question. This vagrant setup provisions the virtual machines in a VirtualBox environment on your local machine. If you want to provision actual EC2 instances in AWS, that's a whole different concept using the vagrant-aws plugin.
Excellent tutorial. Many thanks for this.
I followed this on Windows 10 and was able to launch all 3 VMs in the vagrant environment, but when I logged in to the master node and ran kubectl get nodes, I could see that the worker nodes didn't join the cluster (status = NotReady)! How do I debug this? Thanks
Hi Surendra, thanks for watching this video. While doing this video, I also verified in Windows 10 and it worked. But haven't tested this recently. Something might have changed.
What version of Kubernetes was installed? Is it the latest, 1.16? You can check the output of "kubectl -n kube-system get pods" to see which pods are pending, and then do kubectl -n kube-system describe pod <pod-name>. Some of the core component pods might be in a pending state.
Thanks.
@@justmeandopensource $ vagrant.exe ssh kmaster
[vagrant@kmaster ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster.example.com NotReady master 8h v1.16.0
kworker1.example.com NotReady 8h v1.16.0
kworker2.example.com NotReady 8h v1.16.0
[vagrant@kmaster ~]$ kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
coredns-5644d7b6d9-bzbw7 0/1 Pending 0 8h
coredns-5644d7b6d9-nbpwc 0/1 Pending 0 8h
etcd-kmaster.example.com 1/1 Running 1 8h
kube-apiserver-kmaster.example.com 1/1 Running 1 8h
kube-controller-manager-kmaster.example.com 1/1 Running 1 8h
kube-proxy-87bpv 1/1 Running 1 8h
kube-proxy-cjg7d 1/1 Running 1 8h
kube-proxy-r2ckx 1/1 Running 1 8h
kube-scheduler-kmaster.example.com 1/1 Running 1 8h
[vagrant@kmaster ~]$ kubectl version --short
Client Version: v1.16.0
Server Version: v1.16.0
[vagrant@kmaster ~]$ kubectl describe pod coredns-5644d7b6d9-bzbw7
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Warning FailedScheduling default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Warning FailedScheduling default-scheduler 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
It seems the flannel network wasn't created. I need to check the scripts and try again. Thanks for your help.
@@surensingh123 Hmm.. I will also test that when I get some time. Cheers.
Have you tried it on a Linux machine? Thanks.
Great videos! Thanks! The audio could be better though, maybe by using a better microphone.
Hi Oscar, thanks for watching this video and for the feedback. I have changed microphones a few times and I believe the recent videos sound better.
Is your local network 172.42.42.0/24? Is there any difference in using "public_network" with a local IP in Vagrant? BTW, great stuff, keep it up!
Hi, thanks for watching this video. 172.42.42.0/24 is not my local network. That network gets defined automatically in the VirtualBox environment when I specify the IP address.
My local network is 192.168.1.0/24. If you want your VirtualBox VMs to have IPs on the local network (the same as your host machine), you can add the bridge option and specify the network interface on the host that is connected to that network.
www.vagrantup.com/docs/networking/public_network.html
Thanks
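As a sketch, a bridged (public) network in the Vagrantfile would look like this; the interface name "eth0" is only an example and depends on the host machine:

```ruby
# Bridged networking sketch: the VM gets an IP on the host's LAN
# instead of a VirtualBox host-only network.
Vagrant.configure(2) do |config|
  config.vm.define "kmaster" do |kmaster|
    kmaster.vm.box = "centos/7"
    # Either let DHCP assign an address on the local network...
    kmaster.vm.network "public_network", bridge: "eth0"
    # ...or pin a static IP from your LAN range, e.g.:
    # kmaster.vm.network "public_network", ip: "192.168.1.150", bridge: "eth0"
  end
end
```

If the bridge option is omitted, Vagrant will prompt interactively for which host interface to bridge.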
Hey Venkat, have you tried to set up a kubeadm cluster with Ubuntu vagrant boxes? I was facing issues, but with CentOS it worked well.
Hi Akilan, thanks for watching this video. I haven't tried it but when I get some time I will try and let you know. Thanks.
Thanks venkat for your time and your work...Keep up the good work...
@@akilansubramanian3339 Sure. Thanks.
Works fine on Windows.
Hi, Thanks for trying it on Windows.
Hi Venkat,
First of all thanks for videos,
Can you please help with the error below, which I receive while running vagrant? How can I fix this? I am running CentOS 7 on an Azure cloud VM.
==> kmaster: Booting VM...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["startvm", "2334fd06-5480-4ea8-bd5d-645ff2d8336b", "--type", "headless"]
Stderr: VBoxManage: error: VT-x is not available (VERR_VMX_NO_VMX)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component ConsoleWrap, interface IConsole
Hi Pankaj, thanks for watching. VT-x is hardware virtualization support, and it seems your CentOS 7 instance doesn't have it. Your CentOS 7 instance is itself a virtual machine in the Azure cloud, and in order to run nested virtualization you may need to enable certain things which I am not sure about.
You might need to choose a different instance type that supports nested virtualization.
stackoverflow.com/questions/48579206/how-to-install-centos-7-64bit-vbox-guest-in-windows-azure
Thank you very much Venkat. I changed the VM instance type and it works for me. Now I can install the cluster.
@@pankajmahto2370 perfect.
Thank you!
You are welcome and thanks for watching. Cheers.
Hello Venkat
the videos are awesome, thanks a ton for sharing.
However, while trying to replicate this particular setup using Vagrant, I am facing an issue:
My kmaster VM always gets just 1 vCPU and 512 MB of memory.
I have cloned your repo, and the Vagrantfile has vcpu set to 2 and vmem to 2048 MB. Still, every time I execute "vagrant up", it creates a kmaster VM with just 1 core and 512 MB of RAM, and because of that the later steps don't get installed and everything fails.
PS: I am using KVM and my base machine is Ubuntu 18.04.
Snippet of the Vagrantfile, just a few lines:
PS: I have changed the private IP in the Vagrantfile.
Also note that my host machine has enough RAM and CPU to allocate to VMs; it has 12 GB.
# -*- mode: ruby -*-
# vi: set ft=ruby :
ENV['VAGRANT_NO_PARALLEL'] = 'yes'
Vagrant.configure(2) do |config|
config.vm.provision "shell", path: "bootstrap.sh"
# Kubernetes Master Server
config.vm.define "kmaster" do |kmaster|
kmaster.vm.box = "centos/7"
kmaster.vm.hostname = "kmaster.example.com"
kmaster.vm.network "private_network", ip: "192.168.122.100"
kmaster.vm.provider "virtualbox" do |v|
v.name = "kmaster"
v.memory = 2048
v.cpus = 2
# Prevent VirtualBox from interfering with host audio stack
v.customize ["modifyvm", :id, "--audio", "none"]
end
kmaster.vm.provision "shell", path: "bootstrap_kmaster.sh"
end
Snippet of the kmaster VM being created, where we can see that it assigns just 1 vCPU and 512 MB:
==> kmaster: Successfully added box 'centos/7' (v1905.1) for 'libvirt'!
==> kmaster: Creating image (snapshot of base box volume).
==> kmaster: Creating domain with the following settings...
==> kmaster: -- Name: vagrant-provisioning_kmaster
==> kmaster: -- Domain type: kvm
==> kmaster: -- Cpus: 1
==> kmaster: -- Feature: acpi
==> kmaster: -- Feature: apic
==> kmaster: -- Feature: pae
==> kmaster: -- Memory: 512M
==> kmaster: -- Management MAC:
==> kmaster: -- Loader:
==> kmaster: -- Nvram:
==> kmaster: -- Base box: centos/7
==> kmaster: -- Storage pool: default
==> kmaster: -- Image: /var/lib/libvirt/images/vagrant-provisioning_kmaster.img (41G)
==> kmaster: -- Volume Cache: default
==> kmaster: -- Kernel:
==> kmaster: -- Initrd:
==> kmaster: -- Graphics Type: vnc
==> kmaster: -- Graphics Port: -1
==> kmaster: -- Graphics IP: 127.0.0.1
==> kmaster: -- Graphics Password: Not defined
==> kmaster: -- Video Type: cirrus
==> kmaster: -- Video VRAM: 9216
==> kmaster: -- Sound Type:
==> kmaster: -- Keymap: en-us
==> kmaster: -- TPM Path:
==> kmaster: -- INPUT: type=mouse, bus=ps2
==> kmaster: Creating shared folders metadata...
==> kmaster: Starting domain.
It seems Vagrant works smoothly with VirtualBox, and that's the default hypervisor it uses.
I reviewed many issues where people had trouble running Vagrant with the KVM hypervisor, all due to the "vagrant-libvirt" plugin.
No issues now: I have uninstalled the KVM hypervisor and am using VirtualBox, where it works as expected.
I also posted the issue on GitHub with Vagrant support; let's see if I get a response about the vagrant-libvirt plugin not working.
Thanks
Hi Indrajeet, thanks for watching. I have used libvirt/KVM successfully with this Vagrantfile with minor modifications. In the Vagrantfile, make sure to change vm.provider from virtualbox to libvirt. I believe I also changed the provisioning shell scripts slightly. It was working perfectly well for me, but I stick with VirtualBox. Cheers.
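A likely explanation for the 1 vCPU / 512 MB symptom: the resources in the snippet above are declared inside a "virtualbox" provider block, which vagrant-libvirt ignores, so the plugin falls back to its defaults of 1 vCPU and 512 MB. A sketch of the libvirt equivalent (assuming the vagrant-libvirt plugin is installed):

```ruby
# libvirt provider sketch -- settings in a "virtualbox" provider block
# are ignored under KVM, so declare them for libvirt as well.
config.vm.define "kmaster" do |kmaster|
  kmaster.vm.box = "centos/7"
  kmaster.vm.hostname = "kmaster.example.com"
  kmaster.vm.network "private_network", ip: "192.168.122.100"
  kmaster.vm.provider "libvirt" do |lv|
    lv.memory = 2048   # without this, vagrant-libvirt defaults to 512 MB
    lv.cpus = 2        # without this, it defaults to 1 vCPU
  end
end
```

You can keep both provider blocks in the same Vagrantfile; Vagrant applies only the one matching the active provider.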
Hi, how can I open a port, say 30010, on the node machine? Thanks in advance.
Hi Pradeep, thanks for watching. What is it you want to do exactly?
I have deployed an nginx pod on node2 and created a NodePort service with nodePort=30080, targetPort=80 and port=80. Now I am trying to access nginx from the master / my base laptop, but I am unable to.
I had a similar issue on AWS, but it was resolved when I permitted traffic on 30080.
I am trying to open port 30080 on node2 but am unable to find a way. I did use the exec command and checked that nginx was installed and running.
Thanks for coming back so quickly. Pradeep
@@pradeepchawla6643 I have not tried accessing a NodePort service from a master machine, but you should be able to access it with the IP address of any of your worker nodes. So your URL will be <worker-node-ip>:30080, which will take you to the nginx service. The actual nginx pod can be running on any worker node; you don't need to target the worker node that is running the pod. Any worker node IP will redirect you to the node that is running the pod.
Make sure you don't have firewall enabled on the worker nodes, otherwise you might have to open those ports on all the worker nodes. Cheers.
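For reference, a minimal NodePort service manifest matching the ports described above; the service name and the "app: nginx" label are assumptions about how the deployment is labelled:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx          # hypothetical service name
spec:
  type: NodePort
  selector:
    app: nginx         # must match the pod's labels
  ports:
  - port: 80           # service port inside the cluster
    targetPort: 80     # container port
    nodePort: 30080    # reachable on every node's IP
```

kube-proxy opens the nodePort on every node, which is why any node's IP works regardless of where the pod is scheduled.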
Hey, I am not able to ping kmaster from my desktop. I tried this once before and was able to set everything up.
As kmaster is not pingable from my desktop, I am not able to scp .kube/config to my home directory. Can you please help with this?
Sandeep
Hi Sandeep,
Did you mean it worked when you tried previously and on your second attempt it didn't work?
The Vagrantfile provisions all three VMs with a private network in addition to NAT. So if you look at the "ifconfig" or "ip addr show" output on any of these VMs, you will see eth0 (the NAT interface) and eth1 (the host-only network adapter). The NAT interface will have an IP like 10.0.2.15, and eth1 will have 172.42.42.100.
Make sure you edit the /etc/hosts file on your desktop machine and add IP entries for kmaster; otherwise you won't be able to ping kmaster by name, although you can still ping it by IP address.
Thanks
Yeah
Hi Sandeep, have you added "172.42.42.100 kmaster.example.com kmaster" to your desktop's /etc/hosts file?
Can you ping 172.42.42.100 from your machine?
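As a sketch, the /etc/hosts entries on the desktop would look like this; the kmaster IP is from the discussion above, and the worker IPs are assumed to follow sequentially:

```text
# /etc/hosts on the host (desktop) machine
172.42.42.100 kmaster.example.com kmaster
172.42.42.101 kworker1.example.com kworker1
172.42.42.102 kworker2.example.com kworker2
```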
Hi, excellent tutorial. But I got the following issue when testing the vagrant provisioning on Win10; it seems kubectl and kubeadm are not installed. Any suggestions?
#######################
kmaster: [TASK 1] Initialize Kubernetes Cluster
kmaster: [TASK 2] Copy kube admin config to Vagrant user .kube directory
kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory
kmaster: [TASK 3] Deploy Calico network
kmaster: -bash: kubectl: command not found
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found
Hi Ziv, thanks for watching. I have tested this vagrant environment on a Windows 10 machine and it worked; a few other users also confirmed that it is working. In your case, it seems the bootstrap script failed to install the Kubernetes component binaries like kubeadm and kubectl. In order to troubleshoot, you can modify the bootstrap scripts in the vagrant-provisioning directory and remove all occurrences of ">/dev/null 2>&1", so that when you run vagrant up next time you can actually see what's going on. You should then be able to see the errors.
Please give it a go and let me know. Cheers.
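A small, self-contained sketch of that edit, run here against a temporary file so the repo scripts are untouched; the yum line below is only an illustrative example of the kind of line the bootstrap scripts contain:

```shell
# Example line of the form found in the bootstrap scripts
printf 'yum install -y kubeadm kubelet kubectl >/dev/null 2>&1\n' > /tmp/bootstrap_demo.sh
# Strip the redirection so command output is visible during "vagrant up"
sed -i 's| >/dev/null 2>&1||g' /tmp/bootstrap_demo.sh
cat /tmp/bootstrap_demo.sh
```

The same edit applied to the real scripts would be something like `sed -i 's| >/dev/null 2>&1||g' bootstrap*.sh` from inside the vagrant-provisioning directory.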
@@justmeandopensource Thanks, Venkat. I changed the provider to virtualbox and installed successfully.
@@zivhuang7314 Cool.
After running vagrant up, the following error appears:
kmaster: error: unable to recognize "/vagrant/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
I changed it to apps/v1 but the issue persists.
Hi Ahmed, thanks for watching this video. Things have changed slightly with respect to API versions for certain resources since k8s version 1.16.0. I have already updated my GitHub repo for that. Please do a git pull or a fresh clone of my kubernetes repository and then try again.
Please let me know if it worked.
Thanks.
@@justmeandopensource It's working now.
But I'm following video 34 (ELK), and when I execute the following command:
curl --insecure -sfL 172.42.42.100/v3/import/hxrf5dhlpjc84nqszcpgj2vt9ffrqjfpsp66xbcdqj58l5t9gdmfwp.yaml | kubectl apply -f -
unable to recognize "STDIN": no matches for kind "Deployment" in version "extensions/v1beta1"
unable to recognize "STDIN": no matches for kind "DaemonSet" in version "extensions/v1beta1"
Hi, I downgraded from v1.16.1 to v1.13.1.
I think the latest version has an issue.
It works as it should with version v1.13.1.
Many thanks for your videos.
Ahh, you are configuring a cluster in your Rancher, I see. A few other users have reported that problem as well. I tried it too and found the issue: in k8s v1.16.0 they changed the API versions for a few resources. For example, in your YAML manifest for a deployment you would have been using "apiVersion: extensions/v1beta1"; this has now changed to "apiVersion: apps/v1". So you have to update all your manifests. As for the URL you pasted, which Rancher gave you when you tried to import your existing cluster, the manifests provided by Rancher were for k8s versions prior to 1.16.0. Hopefully Rancher will update them.
Thanks.
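The change amounts to this, shown on a minimal hypothetical deployment; note that under apps/v1 the selector block is also mandatory:

```yaml
# Before (removed in k8s 1.16):
#   apiVersion: extensions/v1beta1
apiVersion: apps/v1      # required from k8s 1.16 onwards
kind: Deployment
metadata:
  name: nginx            # hypothetical example deployment
spec:
  selector:              # mandatory under apps/v1
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```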
Hey Venkat, instead of setting up a Kubernetes cluster with Vagrant, can I set up a Kubernetes cluster with LXC?
Hi Shravan, thanks for watching this video.
I have been using a Kubernetes cluster on LXC containers for a long time.
Please follow the videos below that I did a few months ago. The first is an introduction to LXC containers and getting started, and the second provisions a Kubernetes cluster on LXC containers. Please adjust as per your needs.
th-cam.com/video/CWmkSj_B-wo/w-d-xo.html
th-cam.com/video/XQvQUE7tAsk/w-d-xo.html
Hope this helps.
Thanks,
Venkat
@@justmeandopensource thanks a lot man
No worries. Cheers
By the way, thanks for the video. How can I automate VMware machines using Vagrant?
Hi Yogesh, thanks for watching this video. Unfortunately I haven't played with VMware as it is not open source.
www.vagrantup.com/vmware/index.html
It's a paid service to use the VMware provider in Vagrant.
Thanks.
Thank you so much sir, I will continue with VirtualBox 😊
@@yogeshasalkar2507 You are welcome. Cheers.
How can we set up two or three master nodes in one cluster?
Hi Kumar, thanks for watching. To provision a multi-master Kubernetes cluster, I would prefer Kubespray, as it makes it really simple to add or remove nodes in an existing cluster.
You can watch Kubespray related videos in the below playlist.
th-cam.com/play/PL34sAs7_26wOAqYsrIhtDaIviGlSkmfv9.html
Cheers.
@@justmeandopensource Thanks for your videos. A great thing about you is that you respond to doubts. Keep going, they are very helpful.
@@kumarvedavyas879 No worries. My pleasure replying to comments. Cheers.
Where is /etc/kubernetes/admin.conf?
Hi, thanks for watching. You can find that file on the master node, kmaster. Cheers.
@@justmeandopensource thanks
@@乌龟-q3m You are welcome.
I am getting this error
rajendar@ubuntu-elitebook:~/Desktop/kubernetes/vagrant-provisioning$ vagrant status
Current machine states:
kmaster running (virtualbox)
kworker1 running (virtualbox)
kworker2 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
rajendar@ubuntu-elitebook:~/Desktop/kubernetes/vagrant-provisioning$ vagrant ssh kmaster
vagrant@kmaster:~$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
vagrant@kmaster:~$
But if I log in as root@172.16.16.100, it works fine.
Hi Venkat, I have followed all the steps from your video. Everything works fine, except that when I ran "vagrant halt" I got this output:
vagrant halt
==> kworker2: VM not created. Moving on...
==> kworker1: VM not created. Moving on...
==> kmaster: VM not created. Moving on...
even though my VMs were running fine.
What can be the issue, @justmeandopensource?
Hi Abdul, thanks for watching this video. I never had that issue because I never did a vagrant halt on this environment. I just searched the internet, and a lot of people have had this exact issue; there were a few bug reports.
One possible reason for Vagrant to forget about the VMs it created is that Vagrant or VirtualBox was upgraded while the VMs were running. Did you upgrade your system?
You could run "vagrant global-status" to look at the VM IDs, or just do vagrant destroy and redo the environment. Cheers.
Just me and Opensource, thanks a lot for your response! You are doing such an amazing job! Thanks for sharing the good stuff.
@@abdulghaffar725 You are welcome. Cheers.
How can I access Mozilla or Google Chrome if I installed via Vagrant?
Hi Anupama, thanks for watching.
I am using this exact vagrant setup on a daily basis. May I know what it is you are trying to access from the browser?
Is your host machine Windows or Linux?
Just me and Opensource: The host machine is Linux. I want to deploy a web application in the k8s cluster and expose it to the outside world/Internet, so I need to hit the IP and port to see my application. Where should I hit the IP? Using Vagrant, I don't have Mozilla or Google Chrome. Please help me out!
Just me and Opensource: You use GCP to expose to the outside world! Is there any way I can do that with Vagrant? When we install a k8s cluster using kubeadm, we use Mozilla to hit the IP and check whether the application is running. How can I do something similar using Vagrant?
@@mommyandagastyaa
Let's take this example.
Your host machine is Linux and you have installed K8s using my vagrant environment. If so, you will have one master and two worker nodes with the IP addresses below:
kmaster: 172.42.42.100
kworker1: 172.42.42.101
kworker2: 172.42.42.102
Now, say you have deployed a web application and exposed it as a NodePort service with node port (for example) 32323.
You can access this node port on any of the nodes in your k8s cluster.
From any browser on your host machine, you can visit any of the URLs below:
172.42.42.100:32323
172.42.42.101:32323
172.42.42.102:32323
Hope this makes sense.
@@mommyandagastyaa See my previous comment. Most of my videos are based on bare metal using my vagrant environment.
Your videos are excellent; I am new to K8s. I tried this setup on a Windows machine using a git clone of the repo, but I get the status NotReady for all three nodes (k8s-master/slave-1/slave-2). Any help would be appreciated. kubectl describe node k8s-master shows: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
k8s-master NotReady
k8s-slave-1 NotReady
k8s-slave-2 NotReady
When I check cat /etc/hosts, it has
127.0.0.1 k8s-master
and similarly for the slaves.
But ifconfig -a shows 192.168.33.10/11/12 (master/slave1/slave2).
VirtualBox is complaining that the installation is incomplete. Please
run `VBoxManage --version` to see the error message which should contain
instructions on how to fix this error.
I am getting this error on vagrant up. Please help.
Hi Arvind, thanks for watching. Could you please post the exact error? Also, what is your host machine OS and VirtualBox version?
@@justmeandopensource Hi, thanks for replying.
Error message:
/home/vvdn/kubernetes/vagrant-provisioning# vagrant up
VirtualBox is complaining that the installation is incomplete. Please
run `VBoxManage --version` to see the error message which should contain
instructions on how to fix this error.
I am using Ubuntu 18.04 and my VirtualBox version is: VirtualBox Graphical User Interface Version 5.2.34_Ubuntu r133883
@@ArvindSharma-hw8rd Hmm. Have you tried running "vboxmanage --version" to see the actual failure?
@@justmeandopensource
error
kubernetes/vagrant-provisioning# vboxmanage --version
WARNING: The character device /dev/vboxdrv does not exist.
Please install the virtualbox-dkms package and the appropriate
headers, most likely linux-headers-generic.
You will not be able to start VMs until this problem is fixed.
5.2.34_Ubuntur133883
Thanks for your video. I am getting this error: Unable to connect to the server: dial tcp 192.168.1.40:6443: i/o timeout
Hi Charles, thanks for watching. Where exactly are you getting this error? Can you give more details about your setup, please?
Hello Venkat. I have Windows 10, and I'm trying to follow the steps you took, but it is failing on kmaster. Here is the error:
kmaster: [TASK 1] Initialize Kubernetes Cluster
kmaster: [TASK 2] Copy kube admin config to Vagrant user .kube directory
kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory
kmaster: [TASK 3] Deploy Calico network
kmaster: The connection to the server localhost:8080 was refused - did you specify the right host or port?
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: failed to load admin kubeconfig: open /root/.kube/config: no such file or directory
kmaster: To see the stack trace of this error execute with --v=5 or higher
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Although the master is created, it doesn't have any config file under /etc/kubernetes/ or /home/vagrant/.kube/.
Hi, thanks for watching. Although many viewers confirmed the vagrant setup works on Windows, I haven't tried it myself; I haven't used Windows for more than a decade. I can try setting it up in a VM and see if it works.
@@justmeandopensource Sir, the same is happening for me.
@@yashhirulkar909 You can try removing all the output redirection code in the bootstrap scripts.
Take a look at bootstrap.sh and remove ">/dev/null 2>&1" from all the lines that have it. Now when you do vagrant up, you will see more detailed errors, and hopefully that will give you some direction. Cheers.