How to design a modern CI/CD Pipeline
- Published May 12, 2024
- Learn how I design CI/CD pipelines. In this video, I diagram out the major components and considerations involved in creating pipelines for modern software companies.
Playlist: • DevOps - Whiteboarding...
If you are wondering about the diagramming software I used to create this video, it's www.eraser.io (sponsor of this video)
View my diagram: app.eraser.io/workspace/UHFVA...
☕ Buy me a coffee: www.buymeacoffee.com/bradmorg
🛍️ Amazon Store (Homelab/TH-cam Setup): www.amazon.com/shop/devopsjou...
☁️ $200 Digital Ocean Cloud Credits: m.do.co/c/adc24155a741
View the diagram here: app.eraser.io/workspace/UHFVA30wF6pdEb2sgrWa
I found this very helpful, great breakdown of how it works! Awesome visual and looking forward to watching the follow up video.
I'm a simple front-end dev looking to expand my understanding of the whole software process and I understood everything.
Which says a lot about your ability to explain things simply and in layman terms.
Well, this is how you lay things down. Amazing explanation there!
Well done video, looking forward to the followup video.
Really Awesome video. Thanks for sharing.
Great job man, thank you so much :)
Well laid out! It was logical and easy to understand
Thank You, this is a great explanation.
Nice vid. Looking forward to the GitOps part. I'm new to Argo and hoping it will give more clarity and a different approach, for example using mono repo vs multi repo, git vs helm sync, etc.
Nice explanation 😊
you da man buddie...hi from Argentina
Nice video! One thing I did not understand, though: after we get the container image, how do we ensure the unit tests run against that image? Is there some deploy step to a dev env where those unit tests would run?
Great video!
Great video can you do another one for DevSecOps. Thank you so much
This feels very specific to a development tech stack, without actually specifying it. (Unit tests in a container, upload to AWS.)
Great!
Nice video, I have a few questions:
1. What if there are errors in any stage/sub-stage?
2. Isn't there a difference between deployment and release?
3. Is pre-commit a bit too intrusive? I would run checks only after the pull request is approved.
4. In general, what kind of dev process does building a CI/CD pipeline follow? Scrum, or something else? Making the pipeline is writing code, but somehow I don't see it fitting well with Scrum.
5. Do we unit test the pipeline itself?
1. You should set the build to fail. You should also build in some notifications to notify your developers. Slack/Email are most commonly used.
2. The release stage makes your container image/software available on a registry. Deployment stages come after the release stage and actually get the software running on your servers.
3. Some people don't like it, but it's made to save your devs time. It follows the "fail fast" motto of pipelines. If there are linting/formatting errors, we want that to fail as soon as possible, not during a PR check or during a pipeline run (which comes later).
4. Depends on the company, it doesn't really matter. It's usually something you build and always improve on.
5. Some do, but many don't. I usually just run the pipeline after any changes and make sure it still passes. I would probably do something more if people keep breaking the pipeline with changes they make to it.
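On the release/deployment split in answer 2, here is a minimal GitHub Actions-style sketch of how the two might be separated into jobs. All names are illustrative: the registry URL, image name, and `deploy.sh` script are placeholders, not anything from the video.

```yaml
# Hypothetical workflow sketch: "release" publishes the image to a
# registry; "deploy-dev" is a later stage that gets it onto servers.
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - run: docker push registry.example.com/myapp:${{ github.sha }}
  deploy-dev:
    needs: release          # deployment only runs after release succeeds
    runs-on: ubuntu-latest
    steps:
      # placeholder: a real deploy step might use kubectl, helm, or SSH
      - run: ./scripts/deploy.sh dev registry.example.com/myapp:${{ github.sha }}
```

The key point is the `needs:` edge: release makes the artifact available, and each deployment stage consumes that same artifact rather than rebuilding it.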
@Aleks-fp1kq I would 100% include pre-commit. It is the first gate for the code to go through. It will eliminate countless PR updates that say 'fixed linting / formatting'. Also, if you include a pre-commit hook that checks for secrets, like gitleaks or Yelp's detect-secrets, you prevent any sensitive information from ever being committed in the first place.
Pre-commit is really useful to prevent the red mark of shame. I try to put it in every project I dev on so that you fail as early as possible.
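The setup described in this thread can be sketched with the pre-commit framework's config file. The hook repos below are real, but the pinned versions are just examples to vet for yourself, not a recommendation:

```yaml
# Hypothetical .pre-commit-config.yaml: formatting hygiene plus a
# secret scanner, so both fail before a commit is ever created.
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0            # pin to whatever release you have vetted
    hooks:
      - id: gitleaks        # blocks commits that contain secrets
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```

After `pre-commit install`, these run on every `git commit`, which is exactly the "fail as early as possible" gate being discussed.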
Isn't branch protection usually at the SCM level rather than part of the pipeline? At that point it's too late; the code would already be in the branch.
Does the compile stage include website code? I don't understand why and how one would compile CSS, HTML, and JS scripts.
Since they do not require compiling, you would copy those files into the Docker image you are building in the build stage.
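As a sketch of that build stage for a plain static site (base image and paths are illustrative, not from the video), the "build" really can be little more than a copy into a web-server image:

```dockerfile
# Static HTML/CSS/JS need no compile step: the build stage is just
# packaging the files into an image that can serve them.
FROM nginx:alpine
COPY ./public /usr/share/nginx/html
```

For sites that use a bundler (webpack, Vite, etc.), a compile-like bundling step would run first, and its output directory is what gets copied in.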
Here's where I think you can design the pipeline in a better way: run your tests prior to your build. If you build first and your tests then fail, you couldn't have detected it before the build, which increases your change lead time to get committed code to production.
container registry is where we have the container images with tags, right?
yes, this is the place to which you push or from which you pull container images
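Right, and the reference format those registries use is `registry/repository:tag`. A quick shell illustration of how a full image reference breaks down (the reference itself is made up):

```shell
# An image reference is <registry>/<repository>:<tag>.
# Split a hypothetical reference into its parts with shell expansion.
IMAGE="registry.example.com/myteam/myapp:1.4.2"
TAG="${IMAGE##*:}"    # text after the last ':'
REPO="${IMAGE%:*}"    # everything before the last ':'
echo "repo=$REPO tag=$TAG"
# prints: repo=registry.example.com/myteam/myapp tag=1.4.2
```

Pushing the same repository with different tags (a version, a git SHA, `latest`) is what lets deployment stages pin exactly which build they run.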
There is something I still don't understand: why do you put the container-image build step before running the unit tests?
Shouldn't we first test the code and make sure it is working, and then, if everything is OK, build the Docker image and ship the code?
exactly, and container registry is where we have the container images with tags, right?
Sometimes you must first compile the UTs (like in C++); however, I don't see why we should keep the compiled UTs in the image.
I have the same question about running UTs and coverage after the build. I don't understand it.
It's a preference. You can UT before or after building the image. My preference is to make use of multistage dockerfiles and run my CI tests within the image. When dockerfiles are built correctly it doesn't take long to build an image, and it's something you can bring to any CI environment.
Here's a good post about it www.reddit.com/r/docker/s/2o8TBzsBtK
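The multistage approach described above can be sketched like this. The runtime (Node) and file names are just an example, not the stack from the video:

```dockerfile
# Sketch: tests run in a dedicated stage inside the image build;
# the final release stage ships without any test tooling.
FROM node:20 AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

FROM build AS test
# CI can stop here with: docker build --target test .
RUN npm test

FROM node:20-slim AS release
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
```

This is how "UT after building the image" works in practice: the tests run against the exact filesystem that ships, but the compiled test artifacts never reach the release stage.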
@@DevOpsJourney, instead of a multistage Dockerfile, can't we just run the unit tests first after building the code, copy the artifacts into the Dockerfile, and build the Docker image?
Should unit tests come before the container image?
I guess it depends on whether you run the unit tests in a container or not, but running them in a container may take more time than just running the unit tests directly when you build, and you also don't need to download a container image if the unit tests fail.
@@florimmaxhuni4718 Depends on in which format you ship the application and how you test. The norm would currently be to ship release images which include the runtime and the compiled/transpiled binary for execution and not much else. For isolation/parallelization purposes on the build server it also makes sense to run every step of CI pipeline in predefined containers. The 'CI pipeline' image definition would be part of the project repository and could include everything you need for testing, debugging, development (can be used locally by devs). The 'release' image which you want to push into the registry and which is then used as the startable application in production could be way leaner, because no testing framework is needed.
We still ship release images that include the compiled/transpiled binary, but only when the build succeeds and all the tests pass; if tests fail, that's not releasable, I assume, even for dev or other environments. That's why I think it may take a lot of time, especially when creating PRs. (I guess you have methods that help with caching, but still.)
By the way, I'm talking about some languages that I work with; GitHub Actions has great support for them (one command to build or run tests/integration tests). @@T.P.87
By the way, what is the app/website you are using for this video? 😊
eraser.io. I got a link to the diagram in the description
Unit tests after creating the container image? Are you sure about that? Maybe you mean e2e tests?
It's a preference. You can UT before or after building the image. My preference is to make use of multistage dockerfiles and run my CI tests within the image. When dockerfiles are built correctly it doesn't take long to build an image, and it's something you can bring to any CI environment.
Here's a good post about it www.reddit.com/r/docker/s/2o8TBzsBtK
The title word "modern" got me there, but left me disappointed.
Not my circus, not my clowns, but my two cents is that unit tests need to be heavily de-prioritized. They make code more difficult to refactor, and they lead to people writing super bad tests just to get coverage. They waste a lot of time testing things that might not even really need it. Unit tests can be good but should be reserved for cases where a function/method has a very complex purpose and you want to verify it's doing as it should.
This is because the test you have seen\used are written badly.
@@Aleks-fp1kq If you don't think too many unit tests are a net negative, you've either not refactored often enough or not worked in an environment where SLOC and code coverage were used as performance metrics. Prioritizing unit tests where they're unnecessary complicates development and adds pointless busywork.
@@ZTGallagher Why would anyone write "too many unit tests"? Do you mean multiple tests that test the same thing? How can coverage be a performance metric?
@@Aleks-fp1kq 100% code coverage is not just impossible, it's a hindrance. So many functions simply do not need unit tests. They're too small, straightforward, or simple. They don't need it. You can cover more of the system with good integration tests that get you the same reliability. But people will write unit tests arbitrarily, and often bad hacky ones, because it's less important to have good code or develop efficiently, and more important to say "yeah we have high code coverage". Prioritizing code coverage means prioritizing volume, not quality. A system is better with just well-written integration tests. Unit tests aren't bad. They're simply extremely over-valued.
@@ZTGallagher I agree, and as I said in my first reply, you have used/encountered bad tests and practices. There is no such thing as prioritizing unit tests. You need unit tests if you are doing TDD (bottom-up/top-down). You write the unit tests to test your SUT; you don't write more than that. While doing so, we follow patterns and principles such as equivalence partitioning and triangulation ....
Downvoted because Continuous Deployment is part of CI/CD, yet you still separated them into 2 videos for the money.
😂