you must be popular in the ants community, your fontsize is perfect for them
I'm not sure there is a benefit to the 5-minute approach; when you run commands it moves too fast to easily follow, and some of the commands in the description are truncated. Also, the t3.micro instance only has 1 GB of RAM, but when I tried to run the Kubernetes commands it complained that 2 GB is required. This needs to be updated; I'm following the steps exactly and getting many Kubernetes errors when running the sudo kubeadm init command.
Same problem. Looks like we'll need a t3.small instead of a micro.
I saw a job posting today for a Kubernetes role paying $140 per hour. That’s why I am here. 😎👍🏆
Hey, great video, thanks a lot. Do you think a k8s cert alone can get me hired?
Great video, was looking for some basic info on kubernetes and you sir delivered and then some! Great job!
Good video. It goes pretty fast though for beginners like me.
Thanks! Yeah sorry about that. There was a lot to cover and I didn't want to make the video too long.
If there's any topic in this video that you'd like me to go into more detail on, just let me know and I'll make a separate video on it.
Nice video @shiraz ... any idea how pricing compares between EC2 self-hosted versus EKS?
I'm thrilled by your tutorial: brief, straight to the point, and very helpful.
Thanks for the speedrun!
We want more vids! Do one on real-estate investing
man this is quality content, very clean explanation. can you make another videos about k8s?
Very happy to hear! Any feedback is appreciated.
I'm going to try to cover other topics, but I'll make a note to create more K8s videos.
How do you connect to each pod from url?
Great video! I had to pause, reverse, and re-watch certain points but you hit all of the relevant points like modifying the security group, enabling bridging, and setting the container networking interface. Saved me days of research. Thanks!
Awesome, very happy to hear!
I work in hospitals with a software called EPIC, but I recently took a 3 month Kubernetes class. It's extremely complex, but I have developed a fundamental understanding of the processes. However, I'm not too confident in my CLI skills as it pertains to K8's. How did you learn, any suggestions?
He's back! Love it, looking forward to watching more videos from you, keep it up!
Thanks!
excellent, on the point, very professional ... Thanks
Love your session! Can I know the day-to-day responsibilities of a Kubernetes admin?
Thanks! Yes the responsibilities may vary greatly depending on the size and needs of the company or organization you're working for. The common themes though will be around designing and maintaining the cluster(s) your company has workloads on.
As an admin, you may be called upon to make modifications to existing clusters to add new workloads, fix issues, harden security, or create new clusters for different use cases.
Hope that helps! Let me know if you have more questions.
Shiraz Sir , Thank you so much, this was very helpful !
This is gold Shiraz. Thank you very much.
Crisp to the point, one of the best explanations !
A few questions:
1. In this present setup, is it possible to communicate with Nginx from the outside world?
2. Currently Nginx is running in many pods. If one dies, will it be recreated automatically?
3. sudo kubeadm init --pod-network-cidr=10.244.0.0/16 What is this command for?
1. Absolutely! You just need to apply a few more configurations. The AWS security group for the servers needs to allow inbound requests on port 80 for HTTP (443 for HTTPS). Also, you need to configure kubernetes to take in those requests properly and pass them to the nginx pods. This page shows you how to do that: kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
2. Great question. You can actually configure kubernetes on how to behave when one of them dies, but usually people set it to recreate the pod if one dies, and that might even be the default behavior in some cases.
3. This relates to the cluster's internal network. I'm defining the network address which all the pods (i.e. nginx servers) will run on. Kubernetes networking is a more in-depth topic, but this page can give you more info on it: kubernetes.io/docs/concepts/cluster-administration/networking/ I may make a video on it too if it would be helpful.
Cheers.
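On point 2, the recreate-on-death behavior usually comes from running nginx under a Deployment, which keeps a set number of replicas alive and replaces any pod that dies. A minimal sketch (the name and replica count are just for illustration):

```yaml
# Hypothetical Deployment: the controller keeps 3 nginx replicas
# running and recreates any pod that dies or is deleted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

If you apply this with kubectl apply -f and then delete one of the pods, the Deployment spins up a replacement automatically.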
great video - really well presented - please make some more!
This is what I was looking for
Awesome demonstration. Thanks!
Hey Shiraz, great video. Just one question I want to clarify: if I have a frontend and a backend app, do I just deploy them into individual pods on one node, or do I decouple the FE and BE onto different nodes too?
Good question. It's often up to you, as long as you take the reliability considerations of the apps into account. Some real-world production scenarios have both frontend and backend pods on a single server, and then replicate it again on multiple servers for high availability. For light-weight applications, there's generally no issues.
In other cases, the backend app requires so many resources that it could interfere with the frontend app if they're hosted on the same node (e.g. the backend app consumes all the network bandwidth or disk I/O of the server). In that case, just host them on separate servers if you have more available.
In either case, stress-test the application and you'll have your answers.
Hope that helps!
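If you do decide to split them, one simple way is to label the nodes and add a nodeSelector to the pod spec. A sketch, assuming you've first labeled a node with kubectl label nodes <node-name> tier=backend (the label, names, and image here are all illustrative):

```yaml
# Hypothetical pod spec: the scheduler will only place this pod
# on nodes carrying the label tier=backend.
apiVersion: v1
kind: Pod
metadata:
  name: backend-app        # illustrative name
spec:
  nodeSelector:
    tier: backend          # matches the node label applied above
  containers:
  - name: backend
    image: my-backend:1.0  # placeholder image
```

The frontend pods would get a corresponding tier=frontend selector, keeping the two workloads on separate machines.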
Stumbled across this! Great tutorial Shiraz! =D
Thanks man!
Amazing video, simple yet explanatory. What's next? Could you create a tutorial on how to get this nginx server running on some domain?
Maybe using some Ingress?
Thanks for the clear explanation and step by step guide.
fantastic video, I absolutely loved it. It was almost a joy ride with the clock on. 🙂 Very clean explanation. I would love a whole series that covers the CKA exam tasks, just in this same style. I know I am asking for a lot. 😀
Thank you, very happy to hear!
Great, but what will happen if I have SQL Server on it and the master node shuts down or the VM fails, and I access SQL via the IP address and port of the master machine? Will I lose the service?
One of the great things about kubernetes is the separation of control plane and data plane.
When properly configured, the master node can actually go down, but your data flow to applications like SQL Server on the worker nodes should not go through the master node at all, so your application will surprisingly still function! However, pod orchestration will be down, so kubernetes will not be able to keep tabs on your SQL Server to ensure it's still running, or create new/replacement pods for it, while the master node is down.
Hope that helps!
How do I access a node port from my browser with this setup? Also, the video is super informative.
can you make like complete beginner DevOps tutorials like what is docker etc? I particularly enjoyed your jupyter in the cloud vid and now I want to build my own web server (blog, newsletter, etc.) with kubernetes -> so thanks!
You got it! Plenty more to come.
Shiraz Hazrat perfect!!
It's really amazing, but I faced a small issue while running kubeadm init, as containerd caused the trouble. Just by doing the below steps the issue was resolved:
1) rm /etc/containerd/config.toml
2) restarted containerd
Great video !
That is great, thank you for brilliant tutorial!
Wow!
Great job. Thanks so much.
fantastic bro
AWESOME VIDEO! Thanks a lot
This is a great video but I couldn't use t3 micro for the instance as kubeadm said it needed at least 1700MB of RAM. Did I miss something?
Thanks for the feedback! Looking at the kubeadm documentation now it looks like they’ve raised their minimum requirements since the time this video was published. You might still be able to get it to work on a t3.micro by installing an older version of kubeadm for testing purposes.
@@ShirazHazrat Good call.
Thanks. Helped a lot.
Shiraz, great tutorial! I wanted to make a similar tutorial that goes more into depth on the networking bits and linux commands - any issues if I shout you out in my video for the parts that inspire mine?
That was awesome!
Thanks for this video, it's very useful for beginners 👍
Fucking clear as day, would be nice to show istio for peeps
Coolest video I have ever seen. I want to implement one scenario; would you make a tutorial for that?
On-prem cluster: 1 master, 2 workers
AWS: 3 workers
And I want to manage these 3 AWS workers using the on-prem master. Please make a video on it.
nicely done
Great session thanks for making it 🙏
Thank you for your simple-to-follow tutorial. I was a kind of confused guy watching a bunch of irrelevant tutorials on Kubernetes, but yours is very precise and right to the dot. Thanks once again. May Allah bless you with more beneficial knowledge, ameen.
Thanks for the video. Can you do the cluster autoscaler for the same setup with AWS?
Is it possible with the same method, or do we need to change our settings to use the AWS cloud provider to do the cluster autoscaling?
I tried but was unable to hit the Nginx application using the public DNS.
After deploying Nginx, you also need to create some networking configurations to allow your new k8s cluster to accept and properly route incoming requests (i.e. from your browser) to the right application (in this case, nginx).
An ingress controller does this for you. You just need to add it and tell it what to do.
This page guides you on how to set it up. kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
Hope that helps!
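If you just want to reach nginx from a browser without a full ingress controller setup, a NodePort Service is the quickest option. A sketch (the name and port numbers are illustrative; nodePort must fall in Kubernetes' default 30000-32767 range):

```yaml
# Hypothetical NodePort Service: exposes the nginx pods on
# port 30080 of every node's IP address.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx          # must match your nginx pods' labels
  ports:
  - port: 80            # service port inside the cluster
    targetPort: 80      # container port nginx listens on
    nodePort: 30080     # then browse to http://<node-public-ip>:30080
```

Remember the chosen nodePort also has to be opened as an inbound rule in the instances' AWS security group, just like the other ports mentioned above.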
I created the Kubernetes cluster using AWS instances. Everything ran fine for 2 minutes and all the pods were in Running state, but after that all the pods go into CrashLoopBackOff state. I've worked on this for 24 hours with no solution. Please suggest a solution.
I'm very new to this.. How can we check if the nginx server is up and running? Do we have to port forward the pod to localhost?
Awesome video. Could you please make a video on building a k8s cluster using Kubespray?
Thanks man👍
Thanks for the video, but I want to say that we need an instance with at least 4 GiB of RAM...
While trying this approach, I had a failure of the kubeadm service. Does anyone know why?
Awesome video!! Thanks a lot!
Put Playback speed to 0.25, turn on captions(Cc), mute the speakers, full screen and relax... :)
When you created the additional pods, do they run on the same worker nodes? Don't they create additional EC2 instances?
As far as I know, they will be evenly spun up on the two worker nodes in the cluster. That is, 5 pods on each worker node.
Excellent
ECR and ECS will also do the same. How is this different from that?
Upload More AWS Videos !!!
Noted! Thanks!
Thanks so much for this video!
I needed to create a quick configuration in AWS, but all the tutorials I found presented a pretty complex (production-like) infrastructure setup with LBs and multiple networking and IAM configurations and I didn't manage to get any of them working and spent like 4 hours just troubleshooting kubelet errors.
With your video I got a cluster quickly up and running using a lot of the defaults, and it works just fine :D
Enjoyed it, thank you!
Can I know the day-to-day responsibilities of a Kubernetes admin? Also please keep it slow, Sir, as not all of us are like you; we are still in the learning stage.
Wow, awesome video! Super informative and easy to follow. Can't wait for the next one :)
Thanks! More to come!
Video thumbnail : install kubernetes in 5 mins
Video length : 15 mins
nice video
Can you please help me with setting up the k8s installation?
Still unable to set up k8s on AWS.
Hi Deepak, what part of this process are you getting stuck on?
If you’re finding this to be too cumbersome, EKS is the faster easier option, although it has its own separate learning curve as well.
@@ShirazHazrat Hi Sir,
getting an error like connection refused on port 8080
after installing Kubernetes.
Can you please guide me on how to set it up?
Gmail: kumardeepakluck@gmail.com
Could you please make a real project deployment with an SSL certificate?
You could have kept that time display in a smaller font; it is hiding the contents. We are not bothered about the time. :)
Instead of making it in 5 minutes in a hurry, if you had made it in 15 minutes, calm and smooth, it would be more appreciated!
Good to know, I'll keep that in mind. Thanks!
Shiraz bhai I really like the way you explain steps, please keep uploading videos of topics you are comfortable with, I know you may be busy with your work but plz spare some time and make good tutorials this is going to help the world community in getting the real and genuine content. Thank you for this awesome video.
where did you get 10.244.0.0/16
Yes, I'm confused too. Maybe the VPC network?
This relates to the cluster's internal network. I'm defining the network address which all the pods (i.e. nginx servers) will run on and be able to communicate with each other on. By default, we want kubernetes to have a network separate from the host's network to run its workloads on. There may definitely be cases where you want the pods to have IPs on the hosts' network (i.e. VPC network), but this is not a good practice for security and stability purposes.
Kubernetes networking is a more in-depth topic, but this page can give you more info on it. I may make a video on it too if it would be helpful.
kubernetes.io/docs/concepts/cluster-administration/networking/
Cheers.
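For context, 10.244.0.0/16 is flannel's default pod network, and if you pass a different CIDR to kubeadm init, the CNI plugin's own config has to agree. Flannel reads it from the net-conf.json in its ConfigMap, which looks roughly like this (an abridged sketch; the namespace varies by flannel version):

```yaml
# Abridged sketch of flannel's ConfigMap: the "Network" value
# must match the --pod-network-cidr passed to kubeadm init.
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }
```

So the range is configurable, as long as it's a private range that doesn't overlap the hosts' own network and both settings agree.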
@@ShirazHazrat Does that mean that we could configure any IP ranges which we would like to? 10.244.0.0/16 is just an example right? We could do like 15.23.234.0/20, etc ?
The beginning explanation was great. I don't really understand the race against the clock. If we all need to stop, slow down, rewind the video, then shouldn't you just explain it slower? I don't see who's benefitting by that clock being there
ubuntu@ip-172-31-41-83:/etc/systemd/system$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.4. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Mem]: the system RAM (953 MB) is less than the minimum 1700 MB
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Any way out of this? Docker and Kubernetes are installed, but I'm having an issue at this point.
OMG, there was no need to set up a timer. I didn't understand anything; it was too fast.
@shiraz
Your site's replication/failover/self-healing abilities are limited by your site's access to the database. The whole idea of container orchestration and containers falls apart with databases, since data has to be stored somewhere outside a "stateless" container. Containers just don't replicate databases properly. If you use container orchestration without properly replicating your data or making it highly available, you're still going to get the "single point of failure" problem, and no amount of nginx servers being spun up will fix that. Sure, you need fewer database instances than web servers/CDNs/whatever, but you still need to replicate your database, and you don't do that with kubernetes. Even in non-production environments, databases are just "copy pasted" with full manual migrations when you need to replicate them. No docker, no kubernetes, nothing like that. Just be aware of that when buying into the kubernetes fad.
at 2x, you're still wasting my fucking time
also @ 10 min you're doing everything wrong :\
imagine making a 15 min video entitled "in 5 min" and then doing everything wrong :,)
@@boople2snoot430 Do you have a better way to do it?
Thanks for the video! I'm having trouble with kubeadm init (step 4) when it's waiting for kubelet to boot up the control plane pods (right after creating pod manifests). This is the output I'm getting:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL localhost:10248/healthz' failed with error: Get "localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
** repeats the last two outputs a couple times **
When I check kubelet's status with journalctl it says:
"failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\"
I'm running a t3.small (micro doesn't have enough ram anymore)
image: Ubuntu Server 20.04 LTS (HVM), SSD Volume Type - ami-00399ec92321828f5 (64-bit x86) / ami-08e6b682a466887dd (64-bit Arm)
Only ran the commands listed in this tutorial.
Any idea what might be happening here?
The problem was the cgroup driver: the kubelet cgroup driver was set to systemd but Docker was set to cgroupfs.
Solved:
1- create the json file /etc/docker/daemon.json with the content { "exec-opts": ["native.cgroupdriver=systemd"] } and save it.
2- sudo systemctl daemon-reload
3- sudo systemctl restart docker
4- sudo systemctl restart kubelet