For multi-cluster solutions, I have a similar summary.
1. Controller layer: Karmada, Clusternet, KubeFed (obsolete): a kind of controlled distribution/scheduling of workloads across clusters.
2. Data layer: Clusterpedia: only performs data aggregation, to give a better experience for ops monitoring and data retrieval.
3. DevOps layer: Argo CD and Flux CD connect to multiple clusters and distribute releases through CD, achieving a similar effect by other means.
4. Infra layer: Cluster API, Kubean, and other multi-cluster lifecycle management tools, only for creating the clusters themselves.
5. Logical layer: virtual multi-tenancy (vcluster, KubeZoo, etc.) makes users feel they have an independent cluster when it is actually virtual. This saves resources in some test/dev scenarios.
6. Network layer: Submariner, Istio multi-cluster, and other ingress/egress solutions.
This is just my relatively superficial understanding.
Before K8s, everybody built their own "application farms" and created their own ways to run their apps. Meaning if you hired somebody who supported one "application farm", they would not be able to support your farm on day 1, since every one of them was different. Kubernetes aligned this. Now we see the same happening with platforms: everybody is building platforms on top of k8s, and all of them use different tools and look different. They achieve the same goals, but are different.
How might KCP (sandbox status) and BACK-stack co-exist and benefit one another?
They are not competing in any way. All BACK-stack components operate in a single-cluster context, in the sense that you run them across multiple clusters and/or have control clusters. So in a way it is a fleet management stack, whereas KCP is intended to be a single, horizontally scaled API. There is no limitation stopping somebody from putting all the BACK-stack components on top of KCP, creating a unified single API to do all the management.
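To make that concrete: kcp serves each workspace as its own Kubernetes-style API endpoint under a /clusters/<workspace-path> URL, so standard clients can target a workspace as if it were just another cluster. A minimal Go sketch, assuming a made-up kcp host, workspace path, and token:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Hypothetical endpoint: kcp serves each workspace at a
	// /clusters/<workspace-path> URL that speaks the normal Kubernetes API.
	cfg := &rest.Config{
		Host:        "https://kcp.example.com/clusters/root:my-org:platform",
		BearerToken: "REPLACE_ME", // placeholder credential
	}

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Plain client-go works unchanged, so any BACK-stack component that can
	// target a cluster could in principle target a workspace the same way.
	cms, err := client.CoreV1().ConfigMaps("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("workspace has %d configmaps in default\n", len(cms.Items))
}
```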
@@mangirdasjudeikis8799 I appreciate this perspective! Do you feel that BACK-stack and KCP have a natural partnership in that way? Peanut butter and chocolate, jam and toast, etc.
I ask because it does seem like there is an opportunity to blend each to make both stronger. How would you approach the very first technical step in combining both efforts? What would be the first material win/outcome that would see both changed/improved as a result?
@@knaledge6854 As mentioned, KCP is a framework for building platforms. So I suspect somebody needs to build an opinionated platform from it first :D
Why can’t different versions of CRDs be installed in the same cluster? The resources are all versioned, similar to how k8s updates APIs over several releases.
It's more of a question for operator authors. You can, but the community does not do this: usually an operator upgrade forces you to upgrade the CRDs too. So while the intentions of the API were good, the community didn't build things as intended (the same operator supporting multiple versions), and we get to the point where the lower layer of the stack dictates the pattern.
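For reference, the Kubernetes API does support this: a single CRD can serve several versions at once, with exactly one marked as the storage version. A minimal Go sketch registering a made-up widgets.example.com CRD with two served versions:

```go
package main

import (
	"context"
	"log"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// permissiveSchema returns a wide-open schema; real CRDs should be stricter.
func permissiveSchema() *apiextensionsv1.CustomResourceValidation {
	preserve := true
	return &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
			Type:                   "object",
			XPreserveUnknownFields: &preserve,
		},
	}
}

func main() {
	// One CRD object, two served versions; only one may be the storage version.
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget",
				Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1alpha1", Served: true, Storage: false, Schema: permissiveSchema()},
				{Name: "v1", Served: true, Storage: true, Schema: permissiveSchema()},
			},
		},
	}

	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().
		Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```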
@@mangirdasjudeikis8799 Following that idea, a simple solution would be to create a tool that merges both CRDs with their different versions, plus a router controller that delegates to the right controller version given the resource's version.
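A sketch of what that router idea could look like, purely hypothetical (the Widget resource and handler map are made up): each incoming object is routed to the handler registered for its apiVersion; in practice the objects would come from dynamic informers watching each served version.

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// Handler reconciles one schema version of the resource.
type Handler func(ctx context.Context, obj *unstructured.Unstructured) error

// Router delegates each object to the handler registered for its apiVersion.
type Router struct {
	handlers map[string]Handler // keyed by apiVersion, e.g. "example.com/v1"
}

// Dispatch picks the version-specific handler for the object, if any.
func (r *Router) Dispatch(ctx context.Context, obj *unstructured.Unstructured) error {
	h, ok := r.handlers[obj.GetAPIVersion()]
	if !ok {
		return fmt.Errorf("no handler for %s", obj.GetAPIVersion())
	}
	return h(ctx, obj)
}

func main() {
	r := &Router{handlers: map[string]Handler{
		"example.com/v1alpha1": func(ctx context.Context, o *unstructured.Unstructured) error {
			fmt.Println("v1alpha1 logic for", o.GetName())
			return nil
		},
		"example.com/v1": func(ctx context.Context, o *unstructured.Unstructured) error {
			fmt.Println("v1 logic for", o.GetName())
			return nil
		},
	}}

	// In practice obj would arrive via a watch; here we fake one for illustration.
	obj := &unstructured.Unstructured{}
	obj.SetAPIVersion("example.com/v1alpha1")
	obj.SetKind("Widget")
	obj.SetName("demo")
	_ = r.Dispatch(context.TODO(), obj)
}
```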
@@barefeg It solves only one problem. There are many other problems :) We are trying to take a bit more of a holistic view.
Great talk! In some ways KCP reminds me of Teleport, but on steroids. It allows you to manage not only access but the platform itself: deployments, etc.
We're getting there :D
why would you need multiple clusters?
They said it in the beginning slides. CRDs are cluster-wide, and various teams may need a different setup for each.
Legos
All of them!