What do you think of Dagger? Can it change how we do CI (NOT CD)?
Viktor, just sending a message here from a Brazilian DevOps engineer to thank you so much for your content. God bless you and your life! You are a good person to provide such content for free; you are helping people in their jobs! Congrats... and please, keep doing your videos!
We utilize Taskfile, which is a modern equivalent of a Makefile. We are delighted with its capabilities, as it covers nearly everything you mentioned in a format combining declarative YAML and Bash. Taskfile provides us with a unified interface for both local and remote CI.
I think that the last sentence is critical. If whatever you're using works in all variations (e.g. local, remote, etc.), the main problem is solved. Most of the rest are individual preferences.
thanks for the info! didn't know about Taskfile but it looks very cool
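For reference, the kind of Taskfile described in the first comment above might look roughly like this (a hypothetical sketch; the task names and commands are invented, not from the video):

```yaml
# Taskfile.yml - hypothetical sketch of the "unified interface" pattern
version: '3'

tasks:
  test:
    desc: Run unit tests (same command locally and in CI)
    cmds:
      - go test ./...

  build:
    desc: Build the binary after tests pass
    deps: [test]
    cmds:
      - go build -o bin/app .
```

The laptop and the CI runner then both invoke the same `task test` or `task build`, which is the unified interface the comment refers to.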
I love this-- I've been looking into Dagger and I'm glad someone made a very digestible intro to Dagger!
Once again, a fantastic video! I became enamored with Dagger right after watching it. However, I later discovered that it doesn't support rootless, and in my company, privileged containers are akin to shutting down a datacenter, haha.
It would be fantastic if Dagger could, in the future, utilize pods instead of containers, especially considering that all developers these days have a local Kubernetes cluster.
I work in a Java-based environment and used the same principle, but with Gradle instead. Jenkins just runs "sh ./gradlew .." to trigger it, but the whole logic lives in Gradle and custom plugins. It does not matter whether you're running on a Windows laptop, WSL, a virtual machine, or a Jenkins agent; it always works the same. Heck, we even have a plugin to bundle JS apps for consistency. Additionally, I do not need to force everyone to learn the quirks of Jenkins, because most of them are JVM devs, so they can just debug the Gradle runtime on their machines.
"Glorified cronjobs"... made my day. I've been trying to explain this to my fellow devs and devops for some years now :) And yes, we had the huge shell script that did orchestration and was just triggered by basic pipelines in a CI system that eventually became redundant.
Viktor, you are undoubtedly a professional, but here I disagree: it's important not to forget about simplicity, the KISS principle. Imagine you coded everything beautifully in Go and then quit, and the company then struggles for a long time to find a DevOps engineer who can figure it out. And they'll be very lucky if there's documentation! That's partly why declarative descriptions are now used everywhere, and personally I like that.
Dagger supports other languages, so I would assume that you'd use the one you (and your company) are most comfortable with.
Nevertheless, that video is not primarily about Dagger but, rather, a discussion of whether we should use a declarative format to describe imperative processes. It's clear that declarative formats are great for describing the state of something; the question is whether they should also be used to design workflows.
P.S. I used Google Translate to translate your message.
This sounds great in theory (and is super cool!), but 15s cycle time for no changes is too slow. Devs' inner loop cycle time is extremely precious, so while I totally agree it would be great if all those things ran locally, I wouldn't trade 15s to get it. I'm former Netflix DevEx and we initially built something like this based on the mistake of conflating what was good for Platform with what's actually good for app devs -- it's an easy mistake to make and I believe you might've partially fallen into the same trap. We ended up abandoning containers for localdev (except for some sidecar services they didn't touch directly) a few years ago and everyone was happier. Well, everyone except Platform who had to support it!
The intent of this tool is not to make devs' lives easier, only to help with managing CI (it is an imperative way of declaring what to do, with real code), where 15s is more than enough. If you need a reproducible local environment anywhere, use Nix.
15 seconds is still a lot faster than pushing and waiting for the pipeline to run on GitHub Actions or whichever tool you're using
I thought I watched everything, but I seem to have missed a note about Dagger modules, i.e., code reuse for the win! The even cooler thing is that someone who is using Dagger with TypeScript can still call a module written in Go. How awesome is that?
Yeah. That's one of the great things that I probably did not explain well (or at all).
Not sure if I read it correctly; obviously we can run GitHub pipelines locally. The same goes for Jenkins, which provides a slim runner for that. So in that respect, both solutions are quite similar.
I'm talking about building, testing, and doing whatever else I'm doing while working locally. I tend to switch from writing code to other tasks every minute or even faster and, for that, I do not think that Jenkins and similar solutions are a good choice. A simple script like test.sh is much better, especially when combined with a tool that re-runs the process every time I make a change to the code. Given that, what I need is the ability to run a variation of the same process both locally and remotely after I push changes to Git. That's where Dagger comes in. It's a replacement for the shell script that I used locally but also inside Jenkins pipelines.
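For the curious, a minimal sketch of such a test.sh replacement written against Dagger's Go SDK (the image tag, test command, and file layout are illustrative assumptions, not from the video):

```go
// ci/main.go - a hypothetical stand-in for test.sh, runnable locally
// with `go run ./ci` and from any CI system in exactly the same way.
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	// Connect to the Dagger engine (starting it if needed).
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Mount the project source into a container and run the tests there,
	// so the environment is identical on a laptop and inside Jenkins.
	out, err := client.Container().
		From("golang:1.22").
		WithDirectory("/src", client.Host().Directory(".")).
		WithWorkdir("/src").
		WithExec([]string{"go", "test", "./..."}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```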
Thanks for the clarifications, Viktor!
@@DevOpsToolkit My tests require 32 GB of memory and 12 cores to finish in a remotely reasonable time. Let me run that on my 8-core, 16 GB laptop.
Trying to solve issues nobody has is the best example of most SaaS solutions.
@danielhd6719 that does not mean that everyone's tests require that much memory and CPU.
Very good points! Use declarative tooling when you are describing an end state. Use imperative tooling when you are describing a process. Declarative pipelines always felt counterintuitive to me
I agree with your point on imperative CI, but calling shell via Go instead of just writing it in shell seems too cumbersome. It might be very flexible, but at the same time it's not very straightforward and would take longer to get started with compared to writing plain shell scripts or lines in a YAML file. YAML is declarative, but maybe there is merit in using it for pipelines because of its human-readable nature. I guess it depends on which you value more.
That's true. If shell does it, use it. I also tend to go with shell scripts for as long as the complexity does not make them worse than other solutions.
That being said, Dagger is more than a replacement for a shell script. It allows us to work inside containers in a way similar to how we'd work with containers inside pipelines. Still, if shell does what you need it to do, there's no need to increase the complexity (or decrease portability) with anything else.
Declarative and imperative are not such distinct concepts. Declarative does not necessarily describe a state; it is simply a way of "not describing how," while imperative does describe the how. Declarative can define a set of instructions and not necessarily a state, so YAML can represent a set of instructions to run rather than a state.
I agree with the discussion about whether a pipeline should be defined by state or by instructions, but it's not about declarative vs. imperative from my point of view.
Hello, nice video again, thanks; great case on declarative. Would you wrap a Dagger task into a CI/CD task so you could benefit from parallelism, for example?
It depends... Since it's a programming language, running things in parallel is not a problem anyway.
@@DevOpsToolkit well yes and no… thanks for your reply 🤗
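To make the parallelism point above concrete, here is a hypothetical sketch of fanning out two steps with plain goroutines via the Go SDK (the image tag and commands are invented; the engine also parallelizes and caches the underlying DAG on its own):

```go
// Hypothetical sketch: run two Dagger steps concurrently with goroutines.
package main

import (
	"context"
	"os"

	"dagger.io/dagger"
	"golang.org/x/sync/errgroup"
)

func main() {
	ctx := context.Background()
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// A shared base container with the source mounted.
	base := client.Container().
		From("golang:1.22").
		WithDirectory("/src", client.Host().Directory(".")).
		WithWorkdir("/src")

	// Run vet and tests concurrently; shared layers are cached.
	g, ctx := errgroup.WithContext(ctx)
	g.Go(func() error {
		_, err := base.WithExec([]string{"go", "vet", "./..."}).Sync(ctx)
		return err
	})
	g.Go(func() error {
		_, err := base.WithExec([]string{"go", "test", "./..."}).Sync(ctx)
		return err
	})
	if err := g.Wait(); err != nil {
		panic(err)
	}
}
```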
Thanks for the video.
Any idea which software Viktor uses to create these lovely animated diagrams?
I think it's Adobe Illustrator. Not 100% sure since it's done by an agency.
"Today I will continue complaining about CI/CD pipelines..." Already sold.
This reminded me of Nuke, another tool to accomplish similar results. Great video!
What about system integration, where the microservice I am developing is part of a set of other microservices and I need to test them together in a cluster? The microservice already has unit tests, and if those pass locally I can push it to Git, the webhook triggers the CI build, and I can then test it. So far I am not convinced by this use case.
You still need pipelines that accept webhooks. What Dagger brings to the table, among other things, is being able to execute tasks both locally and remotely from inside pipelines (e.g., GitHub Actions).
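As an illustration of that split, the webhook-triggered pipeline can stay a thin shim around the same program developers run locally. A hypothetical GitHub Actions workflow (the `./ci` package path is invented, matching the sketch earlier in this thread):

```yaml
# .github/workflows/ci.yaml - hypothetical thin wrapper around a Dagger program
name: ci
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      # The same Dagger program a developer runs on their laptop.
      - run: go run ./ci
```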
What if I told you there is another tool that does everything Dagger does but doesn't require using containers? It outputs containers, AMIs, KubeVirt images, ISOs, Hyper-V, GCE, Azure, VirtualBox, QEMU images, and more. Built-in distributed building, hot rebuild, sandboxed dev environments with all tooling included. It can be run equally locally or remotely with one command.
I'd say, tell me more.
LOL
Nix has been doing this for a long time. Take a look!
Earthly is another interesting solution/tool. Maybe not the best for complex pipelines. It is like Dockerfile and Makefile had a baby 😂. Easy to use but sometimes limited.
@@ZiggleFingers there's one small but super important thing that Nix doesn't solve, which is generally one of the strongest aspects of Dagger: you can express your pipelines with *actual code* instead of having to rely on some custom DSL. In the Nix case this point is even more relevant, since the #1 frustration of users adopting Nix is the learning curve.
Dagger seems like an attempt to reimplement a very small part of Nix by people who don't understand (or don't want to fill you in on) the full problem. Each step of Dagger's development will come to the same conclusion Nix has, moving it further towards reproducible builds. They will find they need a nixpkgs equivalent (every* software package defined with inputs and outputs) to be able to build their DAG (which Nix has done for decades), then find that it makes sense to use a language to simply define inputs and outputs (hmm, Nix?). Why ship a monolithic 1 GB container when Nix can build a DAG-based container (because it knows each *file's* inputs) with up to 128 layers? If all your containers are Nix-built, they will share most of the same layers because they're ALL reproducible from the same inputs! Need to swap out one 500 KB shared dependency on 5 pods that run on a 5000-node K8s cluster? One 500 KB layer pull; there's no special logic to define this in your bespoke pipeline. Nix knows this.
The problem with our pipeline is that it uses Jenkins. And it sucks.
Is Dagger calling go-task tasks too many layers or a winning combination?
I think that using both is overkill. I'd go with one of those (in my case Taskfile or Justfile).
@@DevOpsToolkit So you define certain things twice? Once in your Dagger implementation and then again in your Taskfile? Or am I misunderstanding?
No. For a while now I've used only Taskfile for running tasks (sometimes Justfile), and I combine it with Devbox for packages.
As a side note... I had high hopes for Dagger and used it for a while but, ultimately, dropped it. Go (the language I used with Dagger) is too much for that type of operation. I feel that Taskfile is more elegant and easier for everyone to pick up.
@@DevOpsToolkit ah, OK. That's why I haven't gone down the Dagger road. I've been eyeing it for a while. I would also choose to implement it in Go, and I thought the same thing about using Go for expressing script-style commands. I have been using Taskfiles, and then I have CI/CD call the tasks. Maybe I'll skip the Dagger dive for now; too much else going on. Thanks! I'll give Justfile a peek too.
I don't understand: is it a free or open-source tool? I couldn't find information about an open-source or community version on the main site.
It's open source under the Apache license. Here's the repo: github.com/dagger/dagger
Hi. Have you ever tried batect?
I haven't 😔
I will... Adding it to my to-do list...
Not maintained anymore...@@DevOpsToolkit
Scratch that. Just noticed it's no longer maintained :/
Who is Wong?
Typo... It's fixed now... Thanks for letting me know.
This feels like you're searching for a solution at the wrong end. I'd much rather run pipelines with the same tools I use locally!
That's why I switched to Podman, Buildah, etc.
Isn't Dagger all about being able to run pipelines remotely using the same tool as when running them locally?
@@DevOpsToolkit well, yeah, but I'd rather use the same tools everywhere without adding another tool to do so ;)
Love your content, but the frequent zoom jumps make some of your videos almost unwatchable for me
# TIL
Another set of complications to help tech youtubers stay relevant
Big fat NOPE from me. You are wrong on this one. Although I agree that declarative code primarily represents a state, it really does work well with pipelines. Pipelines should be simple: run top to bottom as a DAG, in parallel, based on events. Simple conditionals as a property on a YAML section are good enough. It's super easy to read for most people and devs. Having imperative code based on a language is not inclusive, because different teams use different languages.
There's still the question "how do I run part of it locally?"
@@DevOpsToolkit yes, of course. But wouldn't the simplest answer be to run an agent on the laptop, like a GitHub Actions agent? Maybe even wrap it in a container so you don't even need to install the agent on the laptop, and make it ephemeral.
I have to agree here. Declarative is the way to go. As true as it may be that a pipeline is not exactly a state, you could say that it is a template for a chain of events. I strongly believe that it is possible to build a tool that is declarative and can be executed locally (like Argo Workflows, where you define each stage as a container that could run anywhere, for better or worse).
This does sound to me, though, like a perfect "universal" build-tool candidate. I've used Makefiles for this purpose, but we all know how limiting they are…
@robertkozak I never managed to make something like that work as fast and efficiently as simply executing a script or a binary. I might be wrong, though...
@@juanmacoo technically, I suppose, a pipeline is a state machine
You lost me at "writing in the programming language of your choice"... Pre-commit is good enough for what you talked about.
I'm not sure I understand the relation between pre-commit hooks and writing pipelines/tasks in any language.
@@DevOpsToolkit No relation, different thoughts. And not that you're wrong about any of this. Thank you for the content!
First!