View the diagram here: app.eraser.io/workspace/UHFVA30wF6pdEb2sgrWa
I can't understand why you'd build the image before testing. Isn't it supposed to work the other way around: test your code first, build the image after the tests pass? Building the image takes time, and if the code under test fails, that time is wasted; you end up with a tagged image that is unusable.
Let's distinguish: testing the code is not the same as testing the image. Testing the code simply means proving the code works the way we expect, and for that the unit tests must be written with no dependencies on external resources (databases, queues, etc.); they are purely logical tests that supply mock entities to isolate the code. Testing the image means testing the application inside the environment it is expected to run in, which makes it integration testing rather than unit testing: you have to supply environment variables that fill in the definitions required to make it work in the target environment. That has nothing to do with the code, because by that point in time the code must already be proven to work.
That's my point of view, so I would love any feedback on this subject.
Here's where I think you can design the pipeline in a better way: run your tests prior to your build. If you build first and your tests fail, you won't detect the failure until after the build, which will increase your lead time for getting committed code to production.
So true. Once the linting process is complete, the unit tests should start.
Although testing the code before building a container would be quicker, both for running the tests and for debugging, containerizing the build makes sure your application is running on the same OS and environment as the deployment server. Both methods have their advantages and disadvantages.
I'm a simple front-end dev looking to expand my understanding of the whole software process and I understood everything.
Which says a lot about your ability to explain things simply and in layman's terms.
This is a great lecture. I would be honoured to see a hands-on course based on this.
I found this very helpful, great breakdown of how it works! Awesome visuals, and I'm looking forward to the follow-up video.
There is something I still don't understand: why did you put the build-the-container-image step before running the unit tests?
Shouldn't we first test the code and make sure it is working, and then, if everything is OK, ship the code and build the Docker image?
Exactly. And the container registry is where we keep the container images with their tags, right?
Sometimes you must compile the UTs first (like in C++); however, I don't see why we should keep the compiled UTs in the image.
I have the same question about the UTs and coverage coming after the build. I don't understand it.
It's a preference: you can run UTs before or after building the image. My preference is to make use of multistage Dockerfiles and run my CI tests within the image. When Dockerfiles are written correctly, it doesn't take long to build an image, and it's an approach you can bring to any CI environment.
Here's a good post about it www.reddit.com/r/docker/s/2o8TBzsBtK
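For anyone who hasn't seen the pattern, here is a minimal sketch of that kind of multistage Dockerfile. It assumes a Node project with npm scripts; the base images, paths, and commands are illustrative, not from the video:

# syntax=docker/dockerfile:1
# build stage: compile and unit test inside the image
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm test                 # if the tests fail, docker build aborts and no image is tagged
RUN npm run build

# release stage: only the runtime and the built artifact, no test tooling
FROM node:20-slim AS release
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]

Note that a failing RUN npm test stops docker build itself, which is why building "first" in this setup never leaves you with a broken tagged image.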
@@DevOpsJourney, instead of a multistage Dockerfile, can't we just run the unit tests first after building the code, then copy the artifacts in via the Dockerfile and build the Docker image?
Well, this is how you lay things out. Amazing explanation there!
Thanks for the useful video.
I just have one question here: 3:55
If you compile the code within the image, doesn't that make your container waste resources? I mean, you have to include all the libraries and tools that are used only for compilation, not at run time.
Great point. You can remove them at the end of the Dockerfile (in the same RUN layer, otherwise the files still take up space in earlier layers), or better, use a multistage Dockerfile, which is one of its main use cases. Hope this helps!
Isn't branch protection usually at the SCM level rather than part of the pipeline? At that point it is too late; the code would already be in the branch.
that is what the pull request is for
Really helpful, kudos for your work. A simple, short, and to-the-point video.
This was what every beginner needs
It was awesome, man. It gave the big picture and helped me understand the whole pipeline altogether.
Why not unit test in the source stage?
Agree. What is the reason to spend build time, which is paid for, just to find out that the unit tests have failed? 😅 I'd suggest the author try to understand how it works before offering any thoughts.
Can't run UTs without building the code.
Should the unit tests come before the container image?
I guess it depends on whether you run the unit tests in a container or not, but if we run them in a container, that may take more time than just running the unit tests directly when you build, and you also don't need to download a container image if the unit tests fail.
@@florimmaxhuni4718 Depends on the format in which you ship the application and how you test. The current norm would be to ship release images that include the runtime and the compiled/transpiled binary for execution, and not much else. For isolation/parallelization purposes on the build server, it also makes sense to run every step of the CI pipeline in predefined containers. The 'CI pipeline' image definition would be part of the project repository and could include everything you need for testing, debugging, and development (it can be used locally by devs). The 'release' image, which you push into the registry and which is then used as the startable application in production, can be much leaner, because no testing framework is needed.
We still ship release images that include the compiled/transpiled binary, but only when the build succeeds and all the tests pass; if a test fails, it's not releasable, I assume even for dev or other environments. That's why I think it may take a lot of time, especially when creating PRs. (I guess there are methods that help with caching, but still.)
By the way, I'm talking about some of the languages I work with, and GitHub Actions has great support for them (one command to build or to run tests/integration tests). @@T.P.87
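To make the "run every CI step in a predefined container" idea above concrete, here's a sketch of what it can look like in GitHub Actions; the image name and commands are placeholders, not something from the thread:

name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    container:
      image: node:20        # the 'CI image' with test tooling; devs can use the same image locally
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test       # fails here, before any release image is built or pushed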
Nice video! One thing I did not understand, though: after we get the container image, how do we ensure the unit tests run against that image? Is there some deploy step to a dev environment where those unit tests would run?
Nice video, I have a few questions:
1. What if there are errors in any stage/sub-stage?
2. Isn't there a difference between deployment and release?
3. Are pre-commit checks a bit too intrusive? I would run checks only after the pull request is approved.
4. In general, what kind of dev process does a CI/CD pipeline follow? Scrum, or...? Building the pipeline is writing code, but somehow I don't see it fitting well with Scrum.
5. Do we unit test the pipeline itself?
1. You should set the build to fail. You should also build in notifications to alert your developers; Slack/email are the most commonly used.
2. The release stage makes your container image/software available on a registry. Deployment stages come after the release stage and actually get the software running on your servers.
3. Some people don't like it, but it's meant to save your devs time. It follows the "fail fast" motto of pipelines: if there are linting/formatting errors, we want them to fail as soon as possible, not during a PR check or during a pipeline run (which come later).
4. Depends on the company; it doesn't really matter. It's usually something you build and keep improving on.
5. Some do, but many don't. I usually just run the pipeline after any changes and make sure it still passes. I would probably do more if people kept breaking the pipeline with changes they make to it.
@Aleks-fp1kq I would 100% include pre-commit. It is the first gate for the code to go through. It will eliminate countless PR updates that say "fixed linting / formatting". Also, if you include a pre-commit hook that checks for secrets, like gitleaks or Yelp's detect-secrets, you prevent any sensitive information from ever being committed in the first place.
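For anyone who wants to try that, a minimal .pre-commit-config.yaml along those lines looks like this; the revs are illustrative pins, so use whatever versions you actually want, and run pre-commit install once per clone:

repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0            # illustrative pin
    hooks:
      - id: gitleaks        # blocks commits containing detected secrets
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0             # illustrative pin
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer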
Pre-commit is really useful for preventing the red mark of shame. I try to put it in every project I dev on, so that you fail as early as possible.
This feels very specific to one development tech stack, without actually specifying it (unit tests in a container, upload to AWS).
Run your unit tests before building images / starting containers.
Wooww... explained simply, in a way anybody can understand. You are a great teacher. Requesting a video on creating declarative YAML pipeline scripts. THANK YOU
Really good visual! Thanks! I will try bringing this up in one of my meetings
I would shift the unit tests and coverage check to the branch protection step.
Thanks a lot. You have simplified this greatly, and it is easier to understand. The diagram also helps 👏
Does the compile stage apply to website code? I don't understand why and how one would compile CSS, HTML, and JS scripts.
If they don't require compiling (no bundling or transpiling step), you would simply copy those files into the Docker image you are building in the build stage.
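For a purely static site, that build can be as small as this sketch; nginx is just one common choice here, and the file names are placeholders:

# no compile step: copy the static files into a web server image
FROM nginx:alpine
COPY index.html styles.css app.js /usr/share/nginx/html/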
Wouldn't the pre-commit checks that enforce "linting" (aka syntax errors, in your context) be unnecessary if the code compiles and builds successfully on the dev's local machine? I understand the need to lint PRs after they've been pushed, to make sure they compile in the pipeline environment, but not the pre-commit linting check.
Linting is more for quality of life during code reviews. Sometimes there is unnecessary formatting noise that pollutes the PR.
Sometimes (and you’d be surprised) devs will push without actually building their code locally. It’s annoying.
Which tool are you using to build your CI/CD pipeline: Jenkins for VS Code, GitLab, Azure DevOps?
@@hjoseph777 Usually Jenkins or GitHub Actions. They can all be used to get similar results.
Well done video, looking forward to the follow-up.
I prefer running the unit tests before creating the container image, and even before the build. Why would I use space and start a container if the unit tests fail?
@@saisandep Yes, that works, but I did explain my reasoning for why I sometimes prefer to build the container first.
Amazing! And I would like to see practical examples too.
you da man buddie...hi from Argentina
Unit tests after creating the container image? Are you sure about that? Maybe you mean e2e tests?
It's a preference: you can run UTs before or after building the image. My preference is to make use of multistage Dockerfiles and run my CI tests within the image. When Dockerfiles are written correctly, it doesn't take long to build an image, and it's an approach you can bring to any CI environment.
Here's a good post about it www.reddit.com/r/docker/s/2o8TBzsBtK
The container registry is where we keep the container images with their tags, right?
Yes, it's the place you push container images to and pull them from.
Great video, man, thanks a lot 👍👍👍
Can you please design a CI/CD pipeline for embedded systems, like programming a microcontroller (STM32, EFR32, etc.)? It's dependent on the hardware, so I don't have any idea how to run CI/CD on the source code.
Thanks for the great video!!!
How would you change this for Python code, since you would not be building any artifacts? The unit tests can still remain.
Well laid out! It was logical and easy to understand
Thanks for the clarification! I thought it was "test, build, deploy" instead of "build, test, deploy". 🤐
Your container image contains the tools, SDKs, and libraries needed to test and gather code coverage? That sounds like a lot of bloat.
Edit: And then you do integration tests outside of the container? That makes no sense to me. How do you get aggregated code coverage? Do you copy the coverage report files from the container image back to the CI system to aggregate the unit + integration test coverage? Or do you only collect coverage for your unit tests?
Very insightful video!
Great job man, thank you so much :)
Nice vid. Looking forward to the GitOps part. I'm new to Argo and hoping it will give more clarity and a different approach, e.g. using a mono repo vs. multi repo, Git vs. Helm sync, etc.
Thank You, this is a great explanation.
By the way, tell us about the app/website you are using for this video 😊
eraser.io. There's a link to the diagram in the description.
Great video. Can you do another one on DevSecOps? Thank you so much
Great video!
Nice explanation 😊
perfect!
Why would you want to build it before running tests? It sounds like an inefficient way of doing it.
There are good unit tests and bad unit tests. But if you choose not to add testing to your application, you're waiting for a disaster to happen, and you will find out once everything goes haywire.
Great!
Your video is very useful for me. I will subscribe to your channel. Thank you
cool. simple. fast.
What is the point of building a container image if the unit tests fail? Seems like a waste of time.
Compile code -> run unit tests -> build the container image, and keep the image smaller by not including unit test reports, code coverage reports, and test classes.
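In GitHub Actions terms, that ordering is just a sequence of steps; the make targets and image name below are placeholders for whatever your project uses:

name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build       # compile
      - run: make test        # unit tests; a failure stops the job here
      - run: docker build -t myapp:${{ github.sha }} .   # image only built after tests pass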
I just want to point out that if it's an actual microservice, it shouldn't have dependencies on other microservices. Just saying
Never forget security…..
Not my circus, not my clowns, but my two cents is that unit tests need to be heavily de-prioritized. They make code more difficult to refactor, and they lead to people writing super bad tests just to get coverage. They waste a lot of time testing things that might not even really need it. Unit tests can be good but should be reserved for cases where a function/method has a very complex purpose and you want to verify it's doing as it should.
That's because the tests you have seen/used were written badly.
@@Aleks-fp1kq If you don't think too many unit tests are a net negative, you've either not refactored often enough or not worked in an environment where SLOC and code coverage were used as performance metrics. Prioritizing unit tests where they're unnecessary complicates development and adds pointless busywork.
@@ZTGallagher Why would anyone write "too many unit tests"? Do you mean multiple tests that test the same thing? How can coverage be a performance metric?
@@Aleks-fp1kq 100% code coverage is not just impossible, it's a hindrance. So many functions simply do not need unit tests. They're too small, straightforward, or simple. They don't need it. You can cover more of the system with good integration tests that get you the same reliability. But people will write unit tests arbitrarily, and often bad hacky ones, because it's less important to have good code or develop efficiently, and more important to say "yeah we have high code coverage". Prioritizing code coverage means prioritizing volume, not quality. A system is better with just well-written integration tests. Unit tests aren't bad. They're simply extremely over-valued.
@@ZTGallagher I agree, and as I said in my first reply, you have used/encountered bad tests and practices. There is no such thing as prioritizing unit tests. You need unit tests if you are doing TDD (bottom-up/top-down). You write the unit tests to test your SUT; you don't write more than that. While doing that, we follow patterns such as equivalence partitioning and triangulation, and principles...
Pre-commits are not a good way to strong-arm devs into practices; they don't really do anything other than be a nuisance. Sometimes you want to commit incomplete work to a branch, and this prevents that. It causes more harm than good; you should instead run these linting checks before compiling/building.
@@Gurttastic Disagree. Pre-commit follows the fail-fast methodology. Your developers will be a lot happier getting instant feedback on problems with their code rather than having to wait for a build failure.
True@@DevOpsJourney
The title word "modern" got me here, but left me disappointed.
I've never heard an accent like this which replaces I's with A's. Santax. Gathub.
Ha, the whole right hand side of this is impossible for my company. Literally no fucking testing at all. I hate it
What's modern about it? This is how CI/CD works in a generic sense: you have checks and balances for your complete CI process, then for the CD.
Pre-Commit hooks and mandatory test coverage are plain bad advice
Why?
You need to explain it more clearly by giving an actual example/implementation, not just pure theory.
@@hoangfromvietnam I have multiple videos on this
Downvoted because Continuous Deployment is part of CI/CD, yet you still separated them into 2 videos for the money.
😂