Your CI/CD Pipelines Are Wrong - From Monoliths To Events

  • Published Jun 11, 2024
  • This video challenges the conventional wisdom around CI/CD pipelines, arguing that the transition from monolithic applications to event-driven architectures requires a paradigm shift in our approach. Tune in to explore the drawbacks of traditional CI/CD pipelines and discover how embracing event-driven systems can improve the Software Development Life Cycle (SDLC).
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Get one month of 🎉 VPS FREE 🎉 at 🔗 hivelocityinc.net/3SqKZZX 🔗.
    Use code "DEVOPS1" at checkout.
    #hivelocityhosting
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    #CI/CD #pipelines #SDLC
    Consider joining the channel: / devopstoolkit
    ▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
    If you are interested in sponsoring this channel, please use calendar.app.google/Q9eaDUHN8... to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).
    ▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
    ➡ Twitter: / vfarcic
    ➡ LinkedIn: / viktorfarcic
    ▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬
    🎤 Podcast: www.devopsparadox.com/
    💬 Live streams: / devopsparadox
    ▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
    00:00 Introduction
    00:59 Hivelocity (sponsor)
    01:55 What Is Software Development Life Cycle (SDLC)?
    04:52 The Difference Between One-Shot And Continuous Actions
    06:31 From One-Shot Tasks to Continuous Loops
    09:55 From Monolithic to Event-Driven Pipelines
    15:59 The Need for Tracing
    20:05 From Isolated Teams To Event Reactionists
  • Science & Technology

Comments • 94

  • @DevOpsToolkit
    @DevOpsToolkit  6 months ago +5

    How do you define CI/CD processes and tasks?

    • @cantucodes
      @cantucodes 6 months ago +1

      I would LOVE to get to a place where we are no longer running pipelines to deploy to production.
      We use Pulumi to update Kubernetes Deployments in production. It's frustrating at times because Pulumi can take a LONG time on a large stack to gather resource state before it can push to production. On top of that, deploying multiple microservices at once is impossible because Pulumi locks the stack while a Pulumi process is running. This means deploying the frontend and the backend together is a no-go for us :(
      Argo is something we are actively looking at to replace this process. Thank you for this great video!

    • @Jimmy-Ungerman
      @Jimmy-Ungerman 6 months ago +2

      The big thing we've moved to is a standardized "pipeline-templates" repository that is automatically set as the CI/CD file for new repos in GitLab. Then we try to deduce what needs to happen to these new repos by type of code, labels, tags, etc.
      Let the devs worry only about code while we worry about delivery. Hopefully we can translate this into a dev-portal-like system soon.
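A sketch of what "deduce what needs to happen by type of code, labels, tags" could look like as a dispatch table. All template paths and detection rules here are hypothetical illustrations, not a GitLab feature:

```python
# Hypothetical sketch: choose a CI/CD template from repo metadata so that
# developers never write pipeline config themselves. Names are illustrative.

def select_template(repo: dict) -> str:
    """Map repo attributes (labels, files, language) to a pipeline template."""
    labels = set(repo.get("labels", []))
    files = set(repo.get("files", []))
    if "library" in labels:
        return "templates/library.yml"    # build, test, publish a package
    if "Dockerfile" in files:
        return "templates/container.yml"  # build image, scan, deploy
    if repo.get("language") == "terraform":
        return "templates/infra.yml"      # plan, policy check, apply
    return "templates/default.yml"        # lint and test only

print(select_template({"files": ["Dockerfile", "main.go"]}))  # templates/container.yml
```

The point of the design is that the mapping lives in one place owned by the delivery team, so adding a new repo type never touches the application repos.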

    • @IvanDavletshin
      @IvanDavletshin 6 months ago +1

      @@cantucodes do you have all the things from Kubernetes to micro-services in a single Pulumi stack?

    • @cantucodes
      @cantucodes 6 months ago +1

      @@IvanDavletshin yes, that's part of our problem 😞

    • @IvanDavletshin
      @IvanDavletshin 6 months ago +3

      @@cantucodes wooowowowo :) Just split it! The application lifecycle is far more frequent than the infra lifecycle. Keep coupled only the things that are tightly coupled in their lifecycles, like network and cluster, or database and app (and even that is questionable). Then you'll get real relief :)

  • @Schollii68
    @Schollii68 6 months ago +6

    Enjoyed the video, but I only partially agree, mainly because, other than security scans, everything else needs to happen in sequence, at least at a high level. You can't scan until you have an image, you can't build an image until you've checked out the code, you can't deploy until the scan is done, and you can't run integration tests until the deployment has completed.
    Also, most commit pushes to git will fail at some point along the pipeline until every issue with one's changes has been fixed.
    E.g., maybe your test results are not deterministic, so sometimes the pipeline fails at the unit tests, other times at the integration tests, and other times both pass. You need to easily find out where in the pipeline the failure happened, and for what reason (which will typically require looking at logs).
    So when steps are executed asynchronously, you now need to add callback mechanisms to notify your pipeline of the status, and even the progress, of each step. What if a deployment step hangs because an image is not found, so the pod is in CrashLoopBackOff while k8s keeps retrying at ever-larger intervals? You need to see those kinds of events.
    Except for very large systems (1k microservices), I don't see that the added complexity is worth it. There are far fewer pieces that can break in a simple synchronous pipeline.
    The added complexity doesn't outweigh the benefits so far. I'll be one of the first to switch if this ever changes, but I'm not seeing that happening any time soon.
    Just to be clear, I thoroughly enjoy the opportunity to have this discussion. Great video, and thanks for doing all of these; they are amazing (most of them 😉).
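The ordering constraints in the comment above form a dependency graph rather than a straight line, which is what DAG-based pipelines exploit. A minimal sketch using Python's stdlib (stage names are illustrative):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# The constraints described above: the scan needs an image, the image needs
# a checkout, deploy needs a passed scan, integration tests need a deployment.
deps = {
    "build-image": {"checkout"},
    "unit-tests": {"checkout"},
    "scan-image": {"build-image"},
    "deploy": {"scan-image"},
    "integration-tests": {"deploy"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # "checkout" always comes first
```

Note that only checkout → build → scan → deploy → integration tests is a strict chain; unit tests are free to run concurrently with the image build, which is exactly the part a purely linear pipeline serializes unnecessarily.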

    • @Protagonist369
      @Protagonist369 6 months ago +1

      My thoughts exactly

  • @joebowbeer
    @joebowbeer 6 months ago +3

    Check out durable execution, e.g., from Temporal. The context is encoded in the workflow topology.

  • @StephaneMoser
    @StephaneMoser 6 months ago +6

    The lack of traceability is the reason we keep doing pipelines. I tried to use events in a revamp of our CI/CD, but it was difficult to show the flow to the developers, and it was also difficult to define the reactions to events across multiple repositories.

    • @IvanDavletshin
      @IvanDavletshin 6 months ago +2

      Same. We do GitOps, but even then we still have a step in the job to verify that the deployment happened successfully; otherwise it can easily be missed by a developer who sees the green light on the workflow run.

  • @treptunes
    @treptunes 6 months ago +1

    Thanks for giving me, as somebody from a totally different IT department, insight into this - nicely structured!

  • @emil-backmark-ericsson
    @emil-backmark-ericsson 6 months ago +4

    Maybe the Eiffel event protocol or CDEvents is what you're looking for?

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago +3

      I haven't used Eiffel. CDEvents is indeed where I'm heading.

    • @autohmae
      @autohmae 6 months ago +1

      Looks like the people from Grafana, OpenTelemetry, and CDEvents are all working on this. For example, the Grafana people created GraCIe, which uses Grafana Tempo, Grafana Loki, and Prometheus.

  • @proteusnet
    @proteusnet 6 months ago +1

    Spot on. These are the patterns I have been using for quite a while now, and they work really nicely once set up.

  • @m19mesoto
    @m19mesoto 6 months ago +2

    Jenkins Matrix + Kubernetes Build Plugin --> Artifactory Xray (reaction) + SonarQube (reaction) --> Jenkins results... We just watch all the events in the Jenkins pipeline.
    Artifactory can react to an uploaded artifact and trigger a deployment job.
    There are some ways, but nothing as decoupled as you describe it.

  • @newtondev
    @newtondev 6 months ago +2

    Thanks Viktor, this was very insightful. Unfortunately we have a lot of projects that are push-based, with huge pipelines, as opposed to the pull- and event-based model. I will be recommending this to some of our teams to see if we can get away from this historic monolithic model.
    Traceability, however, is the missing link. Our developers like to see progress through each stage in the pipeline view (the peace of mind that it went where it was supposed to go).

  • @zarbis
    @zarbis 6 months ago +1

    Great video! A couple of years ago I had exactly the same idea of event-driven pipelines and hit the same brick wall of lacking observability. It's nice to have some validation 😅

  • @SV-tc8cu
    @SV-tc8cu 6 months ago +1

    Excellent video, thank you Viktor!

  • @Bioingtutu
    @Bioingtutu 6 months ago +2

    Thank you so much for the material, you're a legend!!

  • @guai9632
    @guai9632 6 months ago +2

    There is also a caching problem. Those microservices are kind of independent, so it wouldn't be easy to reuse work done in previous steps or previous builds.

  • @ChristianSagstetter
    @ChristianSagstetter 6 months ago +2

    A real expert talking =) Thanks for the great talk.

  • @romdhan97
    @romdhan97 6 months ago +1

    I think XL Release might be what you're searching for (I only know that it orchestrates tasks; I don't know if it can be event-driven).

  • @fpvclub7256
    @fpvclub7256 6 months ago +4

    I think if we tag each execution and event with a unique tag, we should be able to write an Argo add-on that could visualize them end-to-end, similar to something like Spring Cloud Sleuth.

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago +2

      That would be one option. Another, and in my opinion probably a better one, would be to somehow treat them as traces and visualize them with Grafana Tempo or Jaeger.

    • @DavidBernard31
      @DavidBernard31 6 months ago +1

      Using a unique tag (or a trace_id, as in distributed tracing) is not a workable solution on its own, because executions are triggered independently (manually, as reactions to events, scheduled, continuously, ...) and by different systems. Having a unique tag/id means either having a central orchestrator or propagating/attaching the info to events and resources (for executions not triggered as reactions to events).
      At my current state of thinking on this topic (I've been on it for a few weeks in my spare time), my idea is rather to define rules that help correlate events and extract the graph and lifecycle of artifacts.

  • @chasim1982
    @chasim1982 6 months ago +4

    Great video. I think Kargo & Argo CD come close to the required CI/CD process. I know they're missing a lot of options, but they are doing a great job (Akuity). Can you please make a video on Kargo (Akuity)?

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago +5

      Adding it to my to-do list... 🙂

    • @chasim1982
      @chasim1982 6 months ago +2

      @@DevOpsToolkit thank you ❤️

  • @scottscoble2500
    @scottscoble2500 6 months ago +1

    I think Apache Camel would be a good orchestrator that could handle the tracing by attaching a unique ID to the Exchange.

  • @AvshaDev
    @AvshaDev 6 months ago +2

    Thanks! One thing I was wondering about is how you keep a newly pushed image in sync with the required deployment configuration. For example, the new image's code might refer to a new environment variable that is part of the updated deployment. You certainly don't want to deploy based only on a new image being pushed to the registry.

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago

      Most of the time only the image tag changes. Nevertheless, you are right: sometimes manifests change as well. Those should often work independently of each other. From a GitOps perspective, it should not matter how and what you changed in git.

    • @IvanDavletshin
      @IvanDavletshin 6 months ago +1

      We use Helm for such deployments. Whenever we deploy, we roll out a new Helm deployment (through Argo CD, so it's not technically a helm upgrade or release). The Helm values live in the same repo as the application, and one of the last steps of a release, before creating the tag, is to write the tag into the values file. That gives you a versioned app together with the Helm chart values, which also refer to the image version, so you can always use the tagged things together.
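That "write the tag into the values file before tagging the repo" step can be sketched as below. The file layout and field name are assumptions about a typical chart, and a real implementation would likely use a YAML parser rather than a regex:

```python
import re

def pin_image_tag(values_text: str, tag: str) -> str:
    """Rewrite the first `tag:` line in a Helm values file so the committed
    manifests and the image version get tagged together. Assumes a single
    image tag field, which real charts may not have."""
    return re.sub(r"(?m)^(\s*tag:\s*).*$", rf"\g<1>{tag}", values_text, count=1)

values = "image:\n  repository: registry.example.com/app\n  tag: 1.4.2\n"
print(pin_image_tag(values, "1.5.0"))
```

Because the rewrite lands in the same commit that gets the release tag, image and manifest versions can never drift apart, which addresses the sync question raised above.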

  • @AlexandruVoda
    @AlexandruVoda 6 months ago +1

    There is a point to sometimes running certain steps twice. Cosmic rays do cause bit flips and when all other causes have been eliminated, such an unlikely event may actually be the cause.

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago

      That's true but, in those cases, I prefer having the logic to re-run (loop) the task itself rather than re-running the pipeline build with all the tasks.

    • @AlexandruVoda
      @AlexandruVoda 6 months ago +1

      @@DevOpsToolkit fair

  • @santhosh003
    @santhosh003 6 months ago +1

    Great content advocating an event-driven approach for CI/CD. Can Knative help achieve this?

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago

      Knative Events can help but are not the only thing we need.

  • @guai9632
    @guai9632 6 months ago +2

    The DevOps folks in my company are making a mess even with pipelines. I can imagine what hell they'd invoke with an event-driven approach.

  • @abydossolutions258
    @abydossolutions258 6 months ago +1

    Grafana Tempo and Loki are a nice combo if you ask me :-) I know you didn't ask, but here we go... This can already be part of the cluster, like the observability built into MicroK8s... just saying...

  • @jonathanmatthews8928
    @jonathanmatthews8928 6 months ago +1

    Interesting video. IMHO the approach you're asking for is quite similar to what Concourse CI already offers, with its resource-state-based job trigger primitives. I know state != event, but it's perhaps worth a look to see if it tickles your fancy...

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago

      Thanks for the suggestion. I haven't used it yet so I'll add it to my TODO list and check it out.

  • @alexandregravem6043
    @alexandregravem6043 3 months ago +1

    Thanks for the video. I very much like this idea of event-based workflows, but your explanation about collaboration left me with a weird waterfall feeling: "dev is done, I don't care what comes next." This is exactly what DevOps as a movement was fighting against. Any ideas on how to avoid a move towards event-based workflows becoming a step back in terms of collaboration?

    • @DevOpsToolkit
      @DevOpsToolkit  3 months ago

      That came out the wrong way. It's not that the person does not care, but rather that a process can focus on something specific, and the person who designs that process can also focus on the process itself.

  • @SpectralAI
    @SpectralAI 6 months ago +1

    I’m building a product that will do all the things you said and much more. It will automate anything. I need an investor to help me get the product finished.

  • @taliosnz
    @taliosnz 3 months ago +1

    Great video - but again, it seems like another example that (looks like it) assumes a monorepo, and that the only artifact is an image?

    • @DevOpsToolkit
      @DevOpsToolkit  3 months ago

      I think it's quite the opposite. It mimics microservices.

  • @krzysztofwiatrzyk4260
    @krzysztofwiatrzyk4260 6 months ago +1

    Regarding the sponsor of this video, where does the $4 come from? I cannot find it in the Hivelocity pricing.

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago

      It's the price of the cheapest VM in www.hivelocity.net/vps/.

  • @Ruben-by4oy
    @Ruben-by4oy 6 months ago +2

    GitLab CI fits well in this scenario.

    • @IvanDavletshin
      @IvanDavletshin 6 months ago

      How? :) Can you have cron jobs there? How about manual triggers of multiple jobs if needed?

    • @autohmae
      @autohmae 6 months ago +2

      @@IvanDavletshin Yes, you can have both. But I think what's important is that you can trigger a GitLab CI pipeline by an API request (which could come from anything).
      That said, as in the video: there is, however, no tracing.

  • @fabianoslack4269
    @fabianoslack4269 6 months ago +1

    And when you say "big vendors", which three do you think are most probably hearing what you are saying? 😅

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago

      That's for them to discover :)

  • @dirien
    @dirien 6 months ago +1

    Good video; truth is spoken here. The good thing is that most tools in our k8s ecosystem support eventing and continuous everything. CloudEvents is doing great work here, creating a common specification so the different tools can react to each other.
    Good point on the tracing remark; I never thought about it, but it makes sense! I currently have no idea whether any tools already support this. Maybe Dagger could.

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago +2

      I don't think Dagger would help with that. It does solve a different problem though and I'm working on that right now. I think it'll be published in a month.

    • @DevOpsToolkit
      @DevOpsToolkit  5 months ago +1

      Here's the video about Dagger: th-cam.com/video/oosQ3z_9UEM/w-d-xo.html

  • @federiconafria
    @federiconafria 4 months ago +1

    Has anyone given Keptn a go?

    • @DevOpsToolkit
      @DevOpsToolkit  4 months ago

      I used it a while ago so I'd need to refresh my knowledge about it since it changed a lot.

  • @sergeyp2932
    @sergeyp2932 6 months ago +2

    Frankly, in this example I don't see any challenges that can't be solved relatively easily with a classic pipeline approach. Background vulnerability scanning, like other monitoring stuff, is obviously out of pipeline scope (though it may have one-shot cousins: an after-build vulnerability scan and after-deploy canary analysis). Then we have only one asynchronous task, the GitOps deploy process, which can relatively easily be made synchronous (e.g., with "argo app wait").
    Another issue is complex inter-task dependencies, but many existing CI solutions already have DAGs as an alternative to "classic" linear pipelines, to address exactly this problem.
    However, your ideas might be right and lead us to a better architecture (in most cases an async system is better than a sync one: scalability, reliability, etc.). But, I think, it will come to the mass market only when many people encounter problems that can't be solved with existing solutions.

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago +2

      One of the big advantages of GitOps is the pull-based model that allows us to block access to clusters so I don't feel that `argo app wait` is the right thing to do (otherwise, why not simply `kubectl apply`) so any task after the sync must be triggered in some other way. Similarly, if scanning is done inside registries (instead of making sure that scanning is executed no matter from where we push images), we need to wait for it to trigger yet another event. Finally, I listed only a few examples and in real-world scenarios, there tend to be many others.
      What I feel is that we are slowly moving towards a microservice-like model for the SDLC, and that requires some sort of event management (e.g., Argo Events) and some kind of tracing. The benefits and the challenges are essentially the same as with microservices, the main difference being that the SDLC is not as mature as apps are.

    • @IvanDavletshin
      @IvanDavletshin 6 months ago +1

      @@DevOpsToolkit There's another way to let your workflow system know about the completion of a deployment: Argo CD notifications. But it's not guaranteed, and it may sometimes fail for multiple reasons.

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago

      I'm not much concerned with that. Argo Events is good enough to provide notifications and triggers based on statuses of Kubernetes resources. My bigger concern is how to connect those unrelated tasks into a flow (how to visualize them in a similar way pipelines do).

    • @IvanDavletshin
      @IvanDavletshin 6 months ago +1

      @@DevOpsToolkit Grafana might be an option, but I'd rather keep them naturally disconnected.
      Since it's not a flow, you don't need to see the whole actual chain of events (or do you? :D). I think you just need to see things when they're going wrong, or see the chain of events somewhere in the logs if something goes wrong, ideally grouped by some major steps (tracing?).
      The problem is seeing when some event is not fired (or, perhaps better worded, not handled correctly) when it's supposed to be fired. Queues might be the solution here, instead of plain events. Once a queue starts filling up, you can monitor it.

    • @IvanDavletshin
      @IvanDavletshin 6 months ago +1

      But building a message broker, with observability on top, brings a whole new bunch of layers of questions/issues/points of failure........ :D

  • @LifeAfterK8s
    @LifeAfterK8s 6 months ago +2

    kargo?

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago

      Kargo is great, but it covers only a fraction of the SDLC and is, at the same time, one of the reasons why traditional pipelines no longer work well.

  • @CuriousSpy
    @CuriousSpy 6 months ago +1

    A million-dollar idea: a CI/CD constructor for dummies. Instead of a messy market with 1000 no-names, create a constructor for each result you want.
    For example:
    I want to build my frontends and backends and deploy them to a server in one region with some load.
    As a result I get a suggestion:
    "Here are some VPS providers, but you're limited to manual setup. Or here are ones with Kubernetes support, and you can use these products to build apps with Docker," etc.
    That would be much simpler than going to some random DevOps product and reading "we deliver something you don't understand, in a very stupid way, but with a smart look".

    • @CuriousSpy
      @CuriousSpy 6 months ago +1

      I would do it myself, but I'm a complete dumbass in the DevOps world. If someone has free time to spend with me creating such a database, that would be nice.

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago

      @CuriousSpy the database is easy. We could use Jaeger or something similar for a start. The problem is in creating and propagating traces through all events and tasks.

    • @CuriousSpy
      @CuriousSpy 6 months ago +1

      @@DevOpsToolkit Jaeger is just one tool. There are 1000 of these tools, and I want "one place to decide the design from the start" rather than just mastering one tool and hammering nails with a microscope.

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago

      I suggested Jaeger as a starting point. Since traces are now based on OpenTelemetry, it does not matter which storage is used, since they all support OTel. Bigger questions are a) whether OTel tracing is good enough (probably not) and, more importantly, b) how we propagate something similar to traces through SDLC tasks.
      Honestly, I don't have an answer to that one (at least not yet).

    • @CuriousSpy
      @CuriousSpy 6 months ago +2

      @@DevOpsToolkit I'm not sure we are on the same topic :)
      I'm talking about a knowledge base for all developers that helps pick the right instrument/product/tool for each job. Not just some wiki pages, but a constructor that asks relevant questions and gives relevant solutions.

  • @heldercosta6556
    @heldercosta6556 6 months ago +5

    As always the most depressing title….