Hello Viktor, thanks for the video. I would like to see a video with Knative and KubeVirt.
Yes, yes, please make a video about models.
Also, it would be nice to have a concise "intro" to our options (tools) for training and/or fine-tuning our own self-hosted (or managed?) models.
I want to dip my toes into ML, but the technology is growing and changing so fast that it's quite difficult to get even three consistent search results (videos/blogs) explaining what is what and how to DIY. And ML seems to be the way forward, as it is rather good at automation without the need to write custom software, so we DO have to master it.
Thank you for your video, I got some ideas. I'm eagerly waiting for the Ollama with GPU in Kubernetes video.
Hi Viktor, glad that you mentioned KubeVirt.
I have not yet found a solution where one can have on-premise (or cloud-provider A) GPU servers cover the base load while, at the same time, scaling into another cloud provider B when there is demand. I couldn't get it to work and tried different solutions (Admiral, KubeVirt, Karmada), but there was always one or more roadblocks. Most of the time, the scheduler would not even try to schedule my workload since all GPUs were already in use. But if the scheduler would just go ahead and schedule it, auto-scaling would have picked that up and spawned a new GPU node.
This topic could also be expanded to the general case of how to do multi-cluster workload distribution (with auto-scaling).
As always, thanks so much for your valuable content!
Great content as always! As someone who works a lot with KServe, I would of course like to see a video clip about your preferred approach of scaling InferenceServices in prod.
Thank you for another great video! I definitely enjoyed your Ollama demo! I really would like to see a video with Knative too.
I am particularly excited about the new alpha feature of OCI images as read-only mounts. That'll take k8s to the next level for running ML algorithms.
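For anyone curious, a minimal sketch of what that could look like, assuming a cluster with the alpha ImageVolume feature gate enabled (the model image reference is hypothetical):

```yaml
# Sketch: mounting an OCI image as a read-only volume (alpha ImageVolume feature).
apiVersion: v1
kind: Pod
metadata:
  name: model-server
spec:
  containers:
    - name: server
      image: ollama/ollama
      volumeMounts:
        - name: model-weights
          mountPath: /models  # the image content appears here, read-only
  volumes:
    - name: model-weights
      image:
        reference: registry.example.com/models/llama:latest  # hypothetical OCI artifact
        pullPolicy: IfNotPresent
```

That way, model weights could be versioned and distributed like any other OCI artifact instead of being baked into the container image.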
Just FYI: a single Ollama installation and instance can run multiple models. By default one after another, since a model is unloaded once it is no longer used, but if you have enough VRAM it can now also run them at the same time.
That's true. I installed it twice only to demonstrate how sharing gpu works.
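For reference, that concurrency is controlled through environment variables. A minimal sketch of an Ollama container spec (the values are illustrative, not recommendations, and depend on available VRAM):

```yaml
# Sketch: fragment of a Kubernetes container spec running Ollama with
# concurrency enabled. OLLAMA_MAX_LOADED_MODELS and OLLAMA_NUM_PARALLEL
# are real Ollama settings; the values below are only examples.
containers:
  - name: ollama
    image: ollama/ollama
    env:
      - name: OLLAMA_MAX_LOADED_MODELS  # how many models stay loaded in VRAM at once
        value: "2"
      - name: OLLAMA_NUM_PARALLEL       # parallel requests served per model
        value: "4"
    resources:
      limits:
        nvidia.com/gpu: 1
```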
I was looking into using Knative to scale Copilot-like models to aid development and, instead of partitioning the GPU, using time-sharing, since the lack of a security boundary is not a problem.
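For context, time-sharing like that is typically enabled through the NVIDIA device plugin's time-slicing config. A minimal sketch (the replica count is just an example):

```yaml
# Sketch: NVIDIA device plugin time-slicing configuration.
# Each physical GPU is advertised as 4 schedulable nvidia.com/gpu resources;
# workloads share it in time slices, with no memory or fault isolation between them.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nvidia-device-plugin-config
data:
  config.yaml: |
    version: v1
    sharing:
      timeSlicing:
        resources:
          - name: nvidia.com/gpu
            replicas: 4
```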
Not just Ollama... differentiating between a few LLMs would be helpful in the DevOps space.
You can also use the Slurm Workload Manager or Volcano.
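For the curious, a minimal sketch of a Volcano job that gang-schedules GPU workers (the image and numbers are placeholders):

```yaml
# Sketch: a Volcano batch job requesting GPUs.
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: train-job
spec:
  minAvailable: 2        # gang scheduling: start only when both pods can fit
  schedulerName: volcano
  tasks:
    - name: worker
      replicas: 2
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: trainer
              image: registry.example.com/trainer:latest  # placeholder
              resources:
                limits:
                  nvidia.com/gpu: 1
```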
Cloud Run is a good serverless platform to run GPU workloads on instead of setting up Knative and managing it!
Oh yeah. Cloud Run is managed Knative and it's great.
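Since Cloud Run and Knative share the same serving model, here is a minimal sketch of a Knative Service that scales a GPU workload down to zero when idle (the annotation values are examples):

```yaml
# Sketch: Knative Service serving Ollama on a GPU, scaling to zero when idle.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ollama
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"  # release the GPU when idle
        autoscaling.knative.dev/max-scale: "3"  # example upper bound
    spec:
      containers:
        - image: ollama/ollama
          resources:
            limits:
              nvidia.com/gpu: 1
```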
Greetings!
A video of Ollama AI models in your presentation would be valuable content.
Great. I'll add it to my to-do list.
Thanks for the video. Can you make a video on how to do the same with an on-prem kubeadm cluster?
Unfortunately, I do not have access to on-prem clusters any more, so I would not have the means to try it out and write the instructions.
Have you tried Kaito? A video on it would be great.
Kaito seems to be focused on marketing, and that's not an area I tend to work in.
What are your recommended tools to manage GPU workloads on Kubernetes? At my org, we've configured the basics that you have here already, and now the teams are looking into AI frameworks. (Using Argo, Karpenter, and EKS to manage all the configurations discussed in your video here) Applications like Kubeflow are being discussed to help those teams move more swiftly, and I'm curious about your take on it or if you have content coming soon related to that.
In this video I explored inference, while Kubeflow is focused more on creating models. It is great, but sometimes overwhelming. I'll do my best to explore it in one of the upcoming videos.
Yes, an orchestrator definitely makes your life easier, and there are pretty good open-source ones; Airflow, Prefect, and Flyte come to mind. About Kubeflow I've heard mixed experiences.
ZenML could help make it easier to use many ML tools.
Kubeflow is the ML everything-tool (collection), but it is not very easy to deploy and maintain, in my experience. We used deployKF, which made it a bit easier, but it does not include everything.
KubeVirt with GPU would be cool, thanks!
Yes, please show the Knative option.
Thank you for sharing. Would it be possible to apply the same locally on my homelab?
If you have a GPU in your homelab, yes, you can. The setup will be more complicated, though.
What if my model requires 32 GPUs to perform inference? 😜 Let's see your K8s do that.
What would you use instead?
@DevOpsToolkit SLURM, as recommended by Nvidia. They have lots of educational material on the topic and recently updated DLI labs.