GKE Autopilot - Fully Managed Kubernetes Service From Google

  • Published 24 Jan 2025

Comments • 59

  • @DevOpsToolkit  3 years ago +17

    I made a false statement in this video. GKE Autopilot is NOT the first fully-managed Kubernetes service. AWS Fargate gets that award. A more precise wording would be that "GKE Autopilot is the first fully-managed Kubernetes service **that implements the full Kubernetes API**". There are some differences between the two and I'll explore them in one of the upcoming videos. Subscribe if you'd like to get a notification when I publish it.

    • @migueldias1292  3 years ago

      That's what I was thinking :)
      We use Fargate at my company, and I can't wait to try Autopilot!

    • @n4870s  3 years ago +1

      Can we limit cost in case something happens, lots of Pods get created, and we get billed a lot? That wouldn't be nice.
      Would also love to see how Secrets and ConfigMaps are defined, managed, and used by an app, for newbies.

    • @DevOpsToolkit  3 years ago +1

      @@n4870s GKE Autopilot scales the cluster, not the Pods (replicas of your apps). So, just by using Autopilot you get a guarantee that the capacity matches what your apps need. Your apps can scale uncontrollably if you have an automated scaler like, for example, a HorizontalPodAutoscaler (HPA) that does not have an upper limit specified. If you're interested in how to prevent others from creating an HPA without an upper limit, you might want to check Open Policy Agent.
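      A minimal sketch of the guard rail mentioned above: an HPA whose `maxReplicas` bounds how far the app (and therefore the Autopilot bill) can scale. All names and numbers here are hypothetical, not from the video:

      ```yaml
      # Hypothetical sketch: an HPA with an explicit upper bound so replicas
      # (and Autopilot costs, which follow Pod resource requests) stay bounded.
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: my-app               # hypothetical app name
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: my-app             # hypothetical Deployment to scale
        minReplicas: 2
        maxReplicas: 10            # the upper limit that keeps the bill predictable
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 80   # scale up before Pods are saturated
      ```

      A policy engine such as Open Policy Agent (e.g. via Gatekeeper) could then reject any HPA manifest that omits `maxReplicas`.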

  • @creative-commons-videos  3 years ago +5

    Absolutely amazing; the way the content is presented is great. I'm surprised there are so few subscribers. Anyway, nice content as usual :)

    • @DevOpsToolkit  3 years ago +3

      My theory for not having many subscribers is a) I only started uploading regularly half a year ago (even though the channel has existed much longer), and b) I do not do socials (beyond posting a link to Twitter and LinkedIn).
      I should find someone to do the marketing. The only problem is that the goal of the channel is not to earn $$$, so I do not have a budget for that.

  • @FernandoLares-gm2uj  3 years ago +1

    This is the most accurate description of Kubernetes I've ever seen. Not only accurate but real!

  • @tabenatdylan77  2 years ago +1

    Very engaging video. Thanks for breaking this down

  • @samirtahir3756  3 years ago +2

    I have been 'suffering' with Rancher for a long time lol, but GCP is finally coming to where I work!

  • @felipeschossler  3 years ago +2

    What a good explanation of this new feature, definitely +1 subscriber 😊😊

  • @santoshperumal129  2 years ago +1

    Amazing video

  • @srsh77  3 years ago +1

    Many, many thanks for this excellent demo of this game-changing feature.

  • @juancarloschristensen  3 years ago +2

    Thanks for the video Viktor!
    One interesting point of comparison is costs.
    On one hand, it would seem that GKE Autopilot clusters are more expensive than, or on par with, a standard GKE cluster running standard nodes.
    E.g., the same workload, running 24/7 at 100% resource utilization, could cost more (or about the same) on Autopilot.
    On the other hand, on a standard cluster you can use preemptible nodes + auto-scaling, which for many workloads lowers costs substantially (to 1/3 or 1/4 of the cost).
    When comparing Autopilot to a standard cluster relying extensively on preemptible nodes, the cost difference seems to be quite large.
    I do believe the cost of autopilot will go down over time as the google team collects more data and fine tunes the offering.
    I'd love to read your thoughts on the cost perspective!
    Thanks again

    • @DevOpsToolkit  3 years ago +2

      It's hard to say which one is cheaper and what the difference is since it depends a lot on usage and type of workloads. Autopilot has just been released, so I have not (yet) been running it enough to compare. Even when I do, it would be hard to compare since it's close to impossible to have the same workload in both types of clusters.
      That being said, I think that Autopilot cost should be similar to GKE standard based on E-type instances. If you compare it with a cluster using spot instances, the latter (GKE standard) should be cheaper, but only if you nail down the sizes of the nodes and the cluster autoscaler.
      Personally, I wouldn't mind paying more for Autopilot if it removes some of the burden of managing my clusters. If the difference in pricing is not big (and I don't think it is), it's worth it. Ultimately, Google is charging for the resource consumption of the Pods, but that price is internally based on the resources used by nodes. So it all boils down to how well they optimized Autopilot and how much such a setup fits someone's needs.
      If it's of any help, you can use cloud.google.com/products/calculator to get a rough idea of what each combination would cost.

  • @christianibiri  3 years ago +1

    Another great video

  • @manipal2011  3 years ago

    Amazing session... I have to learn all the security controls in GKE.

  • @desmoulins6095  2 years ago +2

    Could you compare GKE Autopilot and DigitalOcean managed Kubernetes? Thanks for all the awesome content.

    • @DevOpsToolkit  2 years ago +1

      I think that a better comparison would be DigitalOcean Kubernetes with GKE (without Autopilot) since those two are similar. Autopilot has additional features that make it "more managed" than other solutions and could be compared with, let's say, AWS Fargate.
      Anyway... adding it to my TODO list... :)

  • @TheStuzenz  3 years ago +1

    Hi, love your commentary. Keep up the good work. Question for you: how much value is there in consolidating knowledge with the likes of the CKA certification? Is the hand-configured architecture that comes under the theme of CKA going to be abstracted away to the point that this knowledge is only needed by the few?
    For context, I have started doing some of the CKA studies, mainly because it interests me (good/bad reason? I'm not sure) and it seems like an interesting way to get into cloud after having managed my own servers and Linux boxes in the past.

    • @DevOpsToolkit  3 years ago +2

      I didn't go through CKA so I cannot comment on it specifically, but more on the general level. Kubernetes administration is not going to go away with Autopilot (or with Fargate). The vast majority will be on semi-managed and self-managed k8s for a while. Now, I cannot say what "for a while" means specifically. Everything in our industry gets obsolete sooner or later. Eventually, even Kubernetes will go away, no matter whether it is self-managed, semi-managed, fully-managed, or anything in between.
      I would look at it from this perspective. If there is a certification worth taking, that's CKA. We do not yet have any alternative to k8s. No one even talks about what's coming afterwards, so it's here to stay for a while (in one form or another).
      P.S. By "go away", I really mean "not mainstream any more". Nothing ever fully goes away. Mainframes are still around.

  • @Deevg-f9e  11 months ago +1

    Very informative. I tried to create a GKE Autopilot cluster with a shared VPC private network through Terraform, but I get stuck on this error again and again: 'Error: Error waiting for creating GKE cluster: All cluster resources were brought up, but: only 0 nodes out of 1 have registered; cluster may be unhealthy.' Any suggestions for troubleshooting this?

    • @DevOpsToolkit  11 months ago

      Unfortunately, I haven't used Terraform in a while, so I'm not sure what the correct HCL would be or why that error appears :(

  • @fenarRH  3 years ago +2

    [Reference to the poster on the wall] Tyler Durden: "The things you own end up owning you." -> so go with SaaS, lol

    • @DevOpsToolkit  3 years ago +2

      I love that one. I'll use it from now on!

  • @aryadiadi6888  3 years ago +1

    Hi Viktor, thanks for the video. So... what is the difference between Autopilot and serverless?

    • @DevOpsToolkit  3 years ago

      Autopilot is a solution to get a fully managed Kubernetes cluster. Someone else needs to make sure that the cluster is always operational, has the right size, etc. You, on the other hand, are in charge of (almost) everything happening inside that cluster. Serverless can be thought of as one more layer on top of it. You do not even know or care what is inside your cluster. You just have to specify an image (or whatever else is your packaging mechanism) and that's about it.
      Take a look at Google Cloud Run (there's a bit about it in th-cam.com/video/Jq8MY1ZGjno/w-d-xo.html). It is, effectively, an additional layer on top of something similar to Autopilot.

  • @fieryinferno8352  3 years ago +2

    Can we integrate a global load balancer with Autopilot? If so, please let me know how...

    • @DevOpsToolkit  3 years ago

      I don't think there is anything specific to Autopilot. GLB should work with it just as with a normal GKE cluster. You'd probably go for Multi Cluster Ingress (MCI) and Multi Cluster Service (MCS) to set up GLB.

  • @julianthefrank  3 years ago +2

    Would it scale the nodes to 0 if there is no load?

    • @DevOpsToolkit  3 years ago +1

      Does it matter? You are paying for the resources your Pods are consuming. Google might have 100 nodes in your cluster, yet if you have no load, you are not paying anything.
      Going back to the original question... I think Autopilot keeps two nodes for system-level processes, so that is the minimum. You are not paying for those.

  • @freecomwifi5817  1 year ago +1

    Scaling up the Pod takes a significant amount of time (at least 60 seconds) due to the dependency on the new node being brought up first. This delay may not be acceptable in a production environment.

    • @DevOpsToolkit  1 year ago

      Typically, we do not scale Pods when those already running reach their limits and start crashing. Instead, you would, for example, scale up when memory consumption reaches, let's say, 80%. That scenario would cause an issue only if bringing up a new node takes more time than it takes for that memory consumption to reach 100%. If the goal is to scale up only when things go terribly wrong and instant scaling is required, the only solution is to overprovision clusters so that there's always plenty of free resources.

    • @freecomwifi5817  1 year ago +1

      @@DevOpsToolkit Thank you for your fast response. What is your opinion on whether it is more efficient to use GKE Autopilot with a large range between Pod resource requests and limits and scaling at a low percentage, or to use standard GKE with a cluster autoscaler such as Karpenter?

    • @DevOpsToolkit  1 year ago

      I don't think there is a need to scale at a low percentage. Existing replicas should be able to sustain increased traffic while new replicas are being added.
      Bear in mind that you will have a similar issue/process no matter whether you use GKE or GKE Autopilot. You need to scale both the replicas of your apps and the nodes up and down. The question is only how much of that you want to do yourself as opposed to relying on services managed by others.
      Finally, the last time I checked, Karpenter worked only with EKS.

  • @AlissonVieir4  3 years ago +1

    Does Autopilot support Knative? Or any kind of scale-to-zero approach?

    • @DevOpsToolkit  3 years ago

      As long as you do not use DaemonSets, it should work. Knative works well in Autopilot.

    • @DevOpsToolkit  3 years ago

      ... however, if you are using GCP and you want something like Knative, Google Cloud Run might be a better choice. It is a service based on Knative.

    • @AlissonVieir4  3 years ago +1

      Thank you for your response.
      Basically, what I need is a multi-container Pod that "wakes from zero" consuming Kafka events. Cloud Run has limitations around multi-container setups, and consuming Kafka is not an option as far as I know... I'll give GKE Autopilot a try!
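      For reference, the scale-to-zero part of the use case above is configured per Knative Service via an autoscaling annotation. A minimal sketch (the service and image names are hypothetical; Kafka-triggered wake-up would additionally need Knative Eventing, e.g. a KafkaSource, which is not shown here):

      ```yaml
      # Hypothetical sketch of a Knative Service allowed to scale to zero when idle.
      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: kafka-consumer                        # hypothetical name
      spec:
        template:
          metadata:
            annotations:
              autoscaling.knative.dev/min-scale: "0"  # permit scale-to-zero
          spec:
            containers:
              - image: gcr.io/my-project/consumer:latest  # hypothetical image
      ```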

  • @soubinan  3 years ago +1

    A kind of Pod-as-a-Service?

    • @DevOpsToolkit  3 years ago +1

      It's more like a Kubernetes-as-a-Service. You can apply (almost) any k8s manifest, so it's not only about Pods.

  • @perspextive  3 years ago +1

    dope

  • @jagkoth  3 years ago +1

    Going forward, there won't be any work for us, it seems. Bracing for impact.

    • @DevOpsToolkit  3 years ago +2

      There is always work. If anything, the demand for software engineers is increasing. The trick is that we all need to move at the same speed as technology so that we always provide value. 30 years ago we were managing bare metal, 20 years ago we were managing VMs, 10 years ago we started managing cloud, and since a few years ago Kubernetes, etc. There is always work, and it is constantly changing.

    • @jagkoth  3 years ago +1

      @@DevOpsToolkit You're right. But I mean infrastructure-based work.

    • @DevOpsToolkit  3 years ago +1

      I have no reason to believe that infrastructure work will disappear any time soon. It is changing all the time, but not disappearing. Now, some specialties do become obsolete, but that should worry only people who refuse to follow the tech and adapt.

  • @barefeg  3 years ago

    What about AWS Fargate?

    • @DevOpsToolkit  3 years ago

      I thought that Fargate runs on top of EKS clusters (ignoring ECS) that are created by users (with all the complexities of using EKS). Now I'm not so sure anymore since someone told me a few hours ago that Fargate can completely remove EKS creation and management. I need to double check that.

    • @barefeg  3 years ago

      @@DevOpsToolkit I don't know if that's the case, since I use unmanaged/managed nodes as well as Fargate nodes, so I already have an EKS cluster. It's true that creating an EKS cluster via the console is way harder than on Google. But it's not technically true that Autopilot is the only fully-managed solution. I'm interested in knowing how Autopilot Pods are priced with respect to resource requests/limits. In Fargate you get charged for a minimum of 1/4 CPU no matter how low you set it.

    • @DevOpsToolkit  3 years ago

      Just started working on a video that will explore Fargate and (probably) compare it to GKE Autopilot.

    • @barefeg  3 years ago

      Also, is it perhaps not allowed to use DaemonSets in Autopilot? (Same as Fargate)

    • @DevOpsToolkit  3 years ago

      You can run DaemonSets in Autopilot.

  • @denverfletcher9419  3 years ago +1

    "You can gain that experience"... well, yes, but I doubt you can gain it without making mistakes (aka "messing it up" in Viktor speak).

  • @victormendoza3295  2 years ago +1

    CNKS - ChuckNorris Kubernetes Service - infinite 9s.