You go from your testing environment straight to the production environment. Where should one include the staging and canary stages — as pre-production before the production environment, or do you include those two stages in your testing environment?
Also, as someone else mentioned, I believe you have mistaken Canary for health checks/probes. The one box environment seems to be using a liveness probe for validation.
Hi, you said each developer should have their own local environment, but honestly this did not work in our case. Our project had 6 or so microservices, and we (as FE devs) did not care at all how they ran. However, we had to run local versions, which ate almost all the RAM and disk space in Docker. What would you suggest in this kind of situation? The same applies to even bigger projects with 10-40 microservices. And if this is possible to solve in the cloud (AWS/GCP), please tell us the exact mechanism that allows it. Thanks in advance, the video is great!
Curious what your thoughts are on 1box or blue/green deployments when DB schema changes are involved in your release? Especially when some of those schema changes involve data migration and/or breaking changes to previous releases
That's exactly why we never added this to my enterprise project. But it would be nice to find a way around this problem; in that case we would definitely adopt the B/G deployment.
If I need to introduce any BC break, I would personally create a new table and write on both tables for the duration of the blue/green deployment while still reading off the old one. When I'm satisfied with the writing part, I would do a new blue green to switch the reading from the new table. Concepts like CQRS probably make this kind of process easier though.
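The dual-write approach described above can be sketched like this (all names are hypothetical; in-memory dicts stand in for the two tables):

```python
class DualWriteRepository:
    """Phase 1 of a breaking schema change: write to both the old and
    the new table, but keep reading from the old one. Once the new
    table is verified, a second blue/green deploy flips reads over."""

    def __init__(self, old_table, new_table, read_from_new=False):
        self.old_table = old_table
        self.new_table = new_table
        self.read_from_new = read_from_new

    def save(self, key, record):
        # Write both copies so either table can serve reads later.
        self.old_table[key] = record
        self.new_table[key] = self.transform(record)

    def load(self, key):
        table = self.new_table if self.read_from_new else self.old_table
        return table.get(key)

    @staticmethod
    def transform(record):
        # Placeholder for the old-schema -> new-schema mapping.
        return dict(record, schema_version=2)
```

The second deploy is just the same class constructed with `read_from_new=True`; rolling back is flipping the flag back, since the old table never stopped being written.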
I love the idea of 1box; it's a new concept to me. I don't fully understand how this would fit into the full cycle of CI though. It sounds like if the tests pass in dev, you release to 1box, but when do you release code to full-blown production? Would you have it running at a set time, like 9am each day?
Would be great to see a video of this ideal pipeline built with CDK. It almost feels like every tutorial for CodePipeline just has the basic beta-to-prod design. Great video though!
Great vid. Maybe you could add details on how the prod (1box) and "big" prod work with the DB (unified, I assume; any notes on rollback rules for incorrect updates?)
Nice informative video. I am from a QA background. I would like to know why integration tests need to run in a separate Test/QA environment, and what if we run those tests in the DEV environment itself? Is it only to avoid test data dependencies for integration or E2E tests, or are there other reasons behind running the integration tests in a separate environment? Thanks.
I have a question: how is the 1box relevant when we roll out a new feature? Wouldn't the 10% of traffic be a bottleneck in production? Or do we expect to test production again?
Nice video. I just think the canary concept is a bit different from what I have seen: canary testing is deploying new code to a limited number of components instead of hitting everything in production. By the way… I have no idea why it is called canary 😂
I googled it and this is what I found, RIP all those canaries 😰 Google: The origin of the phrase is from the phrase “Canary in the coal mine”, in which coal miners would bring a caged canary bird into the coal mine to detect if the level of toxic gas was too high.
What if you update the frontend and backend, and the new frontend is not compatible with the old backend? Then when it automatically rolls back, the whole system will crash and trigger the alert, roll back again, and eventually roll all the way back to the first version?
Can anyone help me understand how you would do one box with a separate backend/frontend? I would need the 10% to get both the new frontend and access to the new backend. The backend and frontend have different hostnames, and I don't know how to carve out groups of users by DNS when there are multiple hostnames in play.
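One way around the DNS problem (a sketch, not tied to any particular load balancer) is to derive the cohort from a stable user identifier rather than from DNS, so the frontend and backend independently agree on who is in the 10% without hostname-level coordination:

```python
import hashlib

def in_canary_cohort(user_id: str, percent: int, salt: str = "release-42") -> bool:
    """Hash a stable user id into a 0-99 bucket. The same user lands in
    the same bucket on every service, so frontend and backend agree on
    cohort membership without any DNS tricks. `salt` is a hypothetical
    per-release value so cohorts reshuffle between releases."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

Both services call the same function with the same inputs, so a user routed to the new frontend is also routed by the backend to its new version.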
I don't understand how one can assure high-quality code delivery with structural code coverage of 90+%. Anything below 100% is only acceptable when portions of the delivered product are not intended to be used in operation, e.g. some functions are intentionally not available.
Make sure nothing bad happens, then make sure nothing bad happens, again, then make sure nothing bad happens, again! THEN MAKE SURE NOTHING BAD HAPPENS - THEN MAKE. SURE. NOTHING. BAD. HAPPENS. But then, implement a system to detect that something bad has happened, because it always happens. Next step: Keep an eye on it personally in case the system didn't detect it automatically.
Can the environments be separated by different regions instead of accounts? And by account, do you mean the main account (my company only has one of these) or an IAM account?
This was incredibly informative, logical, and easy to understand and follow. I don’t think I’ve ever seen such a clear, comprehensible step-by-step basic structuring of, as you put it, the ideal CI/CD pipeline. I really appreciate it. Thanks so much and I look forward to your subsequent videos.
Well that will be a great addition to our on-boarding procedure ! You described the ideal CI/CD pipeline in such a smooth yet accurate manner and in a 22 minutes video rather than a 2h one ! Well done!
Hi, great video. I think you may be conflating Canary and Health Checks. Canary (Deployments) are what you refer to as your 1box approach. The function which you run periodically to check system is a Health Check.
+1
Yeah, and I think his 1box example is blue/green deployment.
There is also such a thing as "canary alarms", and the explanation in the video pretty much matches what I've seen in my experience.
Actually there are multiple types of canaries in software development. I've seen this definition of canary in practice, where you have some sort of test running continuously from customer's view. Another version I have seen is "canary deployment" which is where you deploy a small % of the code (similar to 1box here) and rollback based on how it performs in that % rollout. Both are correct, which is why the video explains their definition. I've actually seen more of the first definition, but I don't think it's a standard across the industry.
Amazing 30,000 ft view of the CI/CD process and what the purpose of every step is: not just how to do it but why and what you're going to accomplish. Love it!
15:30 Errors over an extended period of time may also be indicative of a platform issue rather than an issue with the code.
Just because the "canary" detects an issue, this does not mean that a rollback should be performed.
Your videos are great, I recommend these to junior developers on my team all the time. You deserve waaay more views for these man. Thanks again.
Thanks so much Robert, really glad you enjoyed and your recommendations :)
From your video, I have 80% confidence you are (or were) an Amazon SDE. Some of the terms you mentioned are used by Amazon, but not in a lot of other places.
You caught me! Check out th-cam.com/video/TqJjiP88OEQ/w-d-xo.html :)
And Amazon has one of the most advanced Pipelines I've ever seen. Think Route 53 deploying changes all over the world. Coming from R53 team myself.
@@BeABetterDev Ahaha, that's why I feel some terms are so familiar. From AWS Identity.
@@EvgenyGoldin How does Amazon Pipelines compare to the systems used by other companies you've worked at?
What terms? Asking because everything in our org use the same terms and we are definitely not Amazon.
Dude, I've been watching your stuff and I must say that you are a freaking legend for putting so much of yourself into all of this. Thank you! You are the reason why a lot of people will be able to make a better life for themselves, really.
Thank you so much Christopher for your kind words !
Let me help improve that whole ideal pipeline and simplify it:
The ideal pipeline has these 4 main stages:
Test -> Build -> Scan -> Deploy
We have 3 different environments where those 4 main stages take place, which are:
Dev, Test, Prod
The Stages:
*Test*
This stage could encompass your:
- unit testing
- performance/load testing ( kind of difficult to implement/automate )
- integration testing ( kind of difficult to implement/automate )
*Build*
This stage could encompass:
- building code artifacts
- building container images
*Scan*
This stage could encompass:
- scanning for code coverage results
- scanning your code with a tool that covers OWASP standard vulnerability scanning (SonarQube, Whitesource, Findbugs, Checkmarx)
- Image vulnerability scanning (Clair, Trivy, Snyk)
*Deploy*
This stage could encompass:
- Deploying your service with the correct config and version
- Updating your deployment config
*General*
In general we also include a controller stage that handles some of the pipeline parameters or rules based on which environment you are pointing to, which is usually determined by particular code branches. This controller/preflight stage is usually at the very start of the pipeline.
For example we follow the old gitflow model since the team is not yet mature enough, so the branches that follow the rules of each environment are:
master (branch) - prod (environment)
hotfix (branch) - prod (environment)
release (branch) - test (environment)
bugfix (branch) - dev (environment)
dev (branch) - dev (environment)
feature (branch) - dev (environment)
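That controller/preflight step can be as simple as a lookup from branch prefix to target environment. A minimal sketch of the gitflow mapping above (the function name is hypothetical):

```python
def target_environment(branch: str) -> str:
    """Map a gitflow branch name to the environment its pipeline
    deploys to, per the branch -> environment table above."""
    mapping = {
        "master": "prod",
        "hotfix": "prod",
        "release": "test",
        "bugfix": "dev",
        "dev": "dev",
        "feature": "dev",
    }
    # feature/login, hotfix/urgent etc. match on the prefix before "/".
    prefix = branch.split("/", 1)[0]
    return mapping.get(prefix, "dev")
```

A real controller would also resolve environment-specific parameters (credentials, config, approval rules), but the branch lookup is the core of it.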
Very well said.
Thanks! I can see the content came from a very experienced and down-to-earth DevOps engineer. I learned a lot from it, really appreciate it. Especially that you talked a lot about developer experience, which I also think is very important. Most people talk about DevOps only from a cloud-engineer angle, or only talk about the CI/CD pipeline.
It's missing static analyses, including linting, code quality, security/dependency scanning, etc. It's also missing a stage for a small number of system/smoke tests. And while not technically a part of CI, if you're including monitoring/alerts, then logging should be included as well. (There's a big fuzzy area where CD blends into Ops, so it's not always clear where to draw the line, but your video is a distinctly "dev" perspective -- I wouldn't bisect Ops.)
As far as integration tests go -- they test the integration between components, so for n components there are up to 2^n - n - 2 possible test combinations. Example: for 3 components a-b-c, there are 3 combos -- ab, ac, and bc. (The individual ones a--, -b-, and --c are unit tests, and the whole group "abc" is a system test). For each integration test combo, there are 1 or more components that are "left out". These components are to be mocked/faked/stubbed. **Not every combo should be tested!** Only the ones that make sense or are easy. Most people bend the definition of integration test to mean whatever they want, but that's how you get systematic about them. List out all the components and draw various "system under test" combinations. Do the ones that make sense.
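That enumeration can be done mechanically; a minimal Python sketch (the function name is made up):

```python
from itertools import combinations

def integration_combos(components):
    """All subsets that are neither a single component (unit tests) nor
    the full set (system test): the candidate integration-test targets.
    For n components that's 2**n - n - 2 combos; the components left
    out of each combo are the ones to mock/fake/stub."""
    n = len(components)
    combos = []
    for size in range(2, n):  # subset sizes 2 .. n-1
        combos.extend(combinations(components, size))
    return combos
```

Listing them all is only the starting point for picking the handful that make sense; the count grows fast (10 combos already at 4 components).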
I agree. Parallelism in the deploy is also essential. You don't want to run all this stuff sequentially across all your environments -- that would take a long time. Parallelism is key for efficient pipelines.
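A minimal sketch of fanning independent deploys out in parallel (the `deploy` function is a stand-in for the real per-environment deploy step):

```python
from concurrent.futures import ThreadPoolExecutor

def deploy(environment: str) -> str:
    # Stand-in for the real deploy step (e.g. a CLI or API call).
    return f"{environment}: deployed"

def deploy_all(environments):
    """Run independent environment deploys concurrently instead of one
    after another; total wall time approaches the slowest single
    deploy rather than the sum of all of them."""
    with ThreadPoolExecutor(max_workers=len(environments)) as pool:
        return list(pool.map(deploy, environments))
```

This only applies to deploys with no ordering dependency between them (e.g. regional replicas of the same stage); stages that gate on each other still run sequentially.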
Sir, can you please provide me a link or source where I can learn everything.
I agree. A full CI needs more checks, such as lint, code quality, security scanning, and even tagging/versioning. CD also varies depending on the platform you are deploying to -- K8s, serverless, etc.
@@chandrashekar-us6ef +1
Just another dev that doesn’t think about security and compliance
Great video! One additional thing you can add, either as part of PR approval or the build step, is code linters / code analysis like Checkstyle or SonarQube.
This is a really good overview. Thanks
As pointed out several times, your canary testing is a health check. No biggie; obviously people got it.
However, the important difference is that a canary test lets you send REAL traffic through your newly released build (say 10%) to see what happens with just a small subset of your user base. If it blows up, it didn't blow up for everybody and you can recover. Being able to do this type of testing in production also opens the door to more experimentation. You can run A/B tests to see which UI your users like better, or whether anybody is interested in a new feature.
All of this stuff (health checks, Canary testing, A/B testing, etc) is enabled by having a functioning CI/CD pipeline.
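A rough sketch of how a router might split traffic for canary/A-B testing (the names and the 10% default are illustrative; real routers typically also pin a given user to one version via sticky sessions or a user-id hash):

```python
import random

def route_request(canary_percent: int = 10, rng=random):
    """Send roughly `canary_percent`% of requests to the new build and
    the rest to the stable fleet. `rng` is injectable for testing."""
    return "canary" if rng.random() * 100 < canary_percent else "stable"
```

Rolling back is then just setting `canary_percent` to 0, and a full rollout is ramping it to 100.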
Couldn't agree more with this!
Our pipeline is
[Jira task check, version check] -> [unit tests, Sonar scan, Checkmarx scan] -> [build, publish] -> [deploy to dev] -> [integration tests, load tests] -> [deploy to prod] -> [update Jira board]
Looks great!
Kindly share your code of pipeline
Great to see such videos exist on YouTube.
You spoke about canary after I asked the question. Thanks. I think staging occurs prior to canary.
this was sooooo helpful. Thanks! I took thorough notes and am about to share them with the rest of my team. This was awesome.
Glad it helped!
Thanks for the great video. In terms of canary deployment and deployment progression a great tool to look at is Argo Rollouts that can handle this for you.
One of the best-explained videos I have watched on the CI/CD pipeline. Liked and subscribed. Now I am just gonna binge-watch all of your videos!! Thank you.
Thanks so much and welcome to the channel!
Only thing you missed was code analysis for vulnerabilities in the build task. Something along the lines of Coverity or Black Duck. Otherwise this almost exactly matches my company's pipeline.
This is a really good point Dan! Thanks for calling this out.
Good explanation of what a basic multi-AZ CI/CD pipeline should be. Thanks man, you rock!
Your channel is pure gold
That's a good start and you've got all the basics covered, but there are a few critical things you're missing for a large real-world application.
It's a good idea to think about the different major components of your application (Web App, API, Database) and break up those things into different pipelines so that a one line change of your front end doesn't take forever to run.
Also, I actually hate the word pipeline because it implies a strict first-in-first-out approach to code flowing across the architecture. The build process should build artefacts (npm packages, jars, Docker containers). The deployment process should deploy artefacts. These two processes should not be tightly coupled. Any release should be capable of being deployed into any environment. Sure, there are a bunch of reasons why you can't deploy a 2.3 API release into an environment with a 1.1 database, but once the application is mostly built it becomes quite stable, and you'll find there's quite a lot of compatibility between artefacts of different versions.
Any release into any environment means you can do a couple of things;
- If there's an urgent critical production problem, then the fix can be built from a branch taken off the production release without interfering with whatever dodgy/unstable major release you might be working on.
- Performance optimisation usually involves working in a production-like environment (you can't test performance on your dev machine), so you may need a dedicated performance-testing environment, or to allocate another environment to performance testing for a period of time. Performance optimisation involves making experimental changes, so you'll want these on a branch, and you don't want to break the main branch until they are stable.
So you need to be able to deploy branched code into any environment = any release into any environment.
It does lead to a "version compatibility matrix" problem of knowing which versions will work together, but in practice that's usually pretty easy and things like semantic versioning can help.
Hi Richard,
Thank you for the detailed reply and added tips. I agree with many of your points, especially about separating the layers into different pipelines - this is something I definitely should have mentioned.
1box. SO HELPFUL.
Great video. I would also add code quality analysis tools just to catch the bad code/vulnerable plugins etc before going to production.
The point 3) canary in the prod env stage is also called synthetic monitoring. It simulates traffic against the env in real time to ensure the systems are up and behaving as expected. Something similar to New Relic or IBM Cloud Synthetics.
I liked this video, but I think it misses a few things needed to describe a really efficient pipeline:
- Review branches are not necessary, but are prescribed here. We can achieve the "four eyes" principle in source with pairing and co-signed commits
- The "build" step is a perfect place to run linters (and other static analysis tooling). We want to fail hard and fast on not passing linters
- The testing stage is typically the most difficult to implement, and we usually want to run some kind of end-to-end and/or functional testing there. Deploying a preview/test environment for the test stage is usually one of the more complex parts of a pipeline (how do you handle routing and databases? And cleanup on success or failure?). Also, using E2E testing in CI reduces the need for as much pre-prod and canary deployment time. Shift that confidence left.
Otherwise there is a lot of good advice here. I like the tech-stack neutral approach.
Also you should add a stage/step to CVE check your artifact.
Good call out Chris!
Not sure about the one box, but the blue/green method of deployment is the best out there.
Well explained buddy! You are changing lives and thank you
1box in Azure is deploying to a slot and redirecting a small percentage of connections to it. It could also be done with blue/green deployment, so the rollback is just swapping the release version and the previous release.
10:01 No matter how full-blown or sophisticated your CI/CD pipeline is, you should still perform a gradual rollout (what OP refers to as 1box), because production environments almost always have properties that cannot be fully replicated in test environments, so there is always a potential for a surprise in production.
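A gradual rollout like the one described above can be sketched as a simple plan of how many hosts run the new build at each stage (the percentage schedule here is illustrative):

```python
def rollout_stages(total_hosts: int, percents=(1, 10, 50, 100)):
    """Gradual rollout plan: (percent, hosts newly added) per stage.
    The first small stage is the '1box' step; the pipeline bakes at
    each stage before widening to the next percentage."""
    stages = []
    deployed = 0
    for pct in percents:
        target = max(1, total_hosts * pct // 100)  # at least one box
        stages.append((pct, target - deployed))
        deployed = target
    return stages
```

Between stages a real pipeline would watch error-rate alarms during the bake period and roll back instead of widening if they fire.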
Couldn't have said it better myself!
I'm 50/50 on this. If that is the case, it means your application code is too tightly coupled to production data and you don't have actually useful tests that model real-world use cases; if you have to rely on live users to essentially test your application, then something has gone horribly wrong.
I do agree there can be edge cases that sometimes happen that literally no one even imagines, and that's fine, because in those cases those edges get added to the test suite.
Clear and Concise! Thanks!
Good overall approach. However, each CI/CD pipeline has its own set of needs, and this is only good for typical app deployment. If the app being deployed is more than a basic app, meaning it uses specific services such as Elasticsearch or Kafka, then these services should also be added to the test environment.
But I like the overall approach. It is clean and almost bullet proof.
Thanks
I believe the right term, instead of "Canary", is "Continuous Testing".
Thank you for this amazing video
You're very welcome!
Another masterpiece, appreciations
Thank you!!!
Thank you. This is quite informative.
Great video! Thanks for that. Besides, I don't think "ideal" really exists in any field.
Really nice tutorial, just two questions: how do you handle multiple commits during the bake period on the 1box prod? Should deployment of the next version wait, or do you always roll out the newest version and then downgrade it? The second thing is about DB schema changes: since prod has only one DB, how do you handle the situation where 1box uses a different version than the other hosts?
Hi! Thanks a lot for this.
When are you going to share the next part, about adding the unit test stage for Lambda to the pipeline?
Thanks for the info and loving your video especially as a new junior dev just starting out :)
This was excellent, and you did a fantastic job covering what a pipeline should do in just minutes. Awesome! And thanks for making this. It's super handy to share.
Glad you enjoyed it Andrea!
Awesome video! It's really hard to learn these things, if you haven't worked in a large scale company.. Will try to get our startup to implement a similar pipeline!
Great. Please share the video where you build this on AWS.
Prod 1box: CUG (closed user group), where code runs on the prod database but only a fraction of the entire user base can use that environment.
Can you tag me the next video where you started the journey of building this project on AWS like you stated at the end, I really enjoyed this video so I want to continue the next one.
It's very informative for beginners. Could someone help me figure out how integration tests are usually run after an environment is deployed? Are they run automatically or manually?
Very helpful, thank you.
Hello, how do I do a partial deploy in production if I made database changes (data and infrastructure), and then roll it back? P.S. Thank you, I show your videos to my coworkers :)
Great explanation! May I know what tool are you using to draw on the screen? Is it through ipad or some sort of a diff tablet ?
Thanks!
Cheers
You go from your testing environment to the production environment. Where should the staging and canary stages go: as pre-production environments before production, or do you include those two stages in your testing environment?
Also, as someone else mentioned, I believe you have mistaken Canary for health checks/probes. The one box environment seems to be using a liveness probe for validation.
Thanks for clarifying this Nadir. I've cleared this up in a recent follow up video.
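For reference, the "canary as continuous test" definition discussed in this thread amounts to periodically exercising the system from the customer's point of view and alarming on consecutive failures. A minimal sketch (the `check` callback and the thresholds are assumptions for illustration):

```python
def run_canary(check, max_runs, failure_threshold=3):
    """Synthetic monitoring loop.

    `check` performs one synthetic customer-style request and returns
    True on success. Consecutive failures past the threshold raise the
    alarm (in practice: page on-call and/or trigger an automated rollback).
    """
    failures = 0
    for _ in range(max_runs):
        if check():
            failures = 0
        else:
            failures += 1
            if failures >= failure_threshold:
                return "alarm"
    return "healthy"
```

Requiring several consecutive failures before alarming is a common way to avoid paging on a single transient blip.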
1 box could be canary deployment strategy
Could you share a small version example of this pipe? It could be useful
thanks for the video
Hi, you said each developer should have their own local environment; honestly, this did not work in our case. Our project had 6 or so microservices, and we (as FE devs) did not care at all about how they run. However, we had to run local versions, which ate almost all the RAM and disk space in Docker. What would you suggest in this kind of situation? This also applies to even bigger projects with 10-40 microservices. And if this is possible to solve in the cloud (AWS/GCP), please tell us the exact mechanism that allows it. Thanks in advance, the video is great!
Curious what your thoughts are on 1box or blue/green deployments when DB schema changes are involved in your release? Especially when some of those schema changes involve data migration and/or breaking changes to previous releases
same question
That's why we never added this to my enterprise project. But it would be nice to find a way to get around this problem... in that case we would definitely adopt B/G deployment.
If I need to introduce any BC break, I would personally create a new table and write to both tables for the duration of the blue/green deployment while still reading off the old one. When I'm satisfied with the writing part, I would do a new blue/green to switch the reads to the new table.
Concepts like CQRS probably make this kind of process easier though.
I would hope your DB library layer would be architected to be able to navigate the various versions of your db schema gracefully.
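The dual-write migration strategy described in the replies above can be sketched with a small repository wrapper. The table objects and class name here are hypothetical, just to show the shape of the pattern:

```python
class DualWriteRepo:
    """Bridge an incompatible schema change across a blue/green deployment:
    write to both the old and new table, keep reading from the old one,
    then flip reads to the new table in a later deployment."""

    def __init__(self, old_table, new_table, read_from_new=False):
        self.old = old_table          # store/DAO for the legacy schema
        self.new = new_table          # store/DAO for the new schema
        self.read_from_new = read_from_new

    def save(self, key, value):
        # Dual write during the migration window keeps both schemas current.
        self.old[key] = value
        self.new[key] = value

    def load(self, key):
        table = self.new if self.read_from_new else self.old
        return table[key]
```

The rollout then has three safe steps: deploy the dual-write code, backfill and verify the new table, then flip `read_from_new` (and eventually drop the old table) in a later deployment.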
I love the idea of 1box; it's a new concept to me. I don't fully understand how this fits into the full CI cycle, though. It sounds like if the tests pass in dev, you release to 1box, but when do you release the code to full-blown production? Would you have it running at a set time, like 9am each day?
Talks about "2" reviewers.... living the dream of a team being properly resourced... :-P
Thank you for the video.
Very nice!
Thank you! Cheers!
Thanks for this great concept.
How you suggest handle db migrations ?
amazing video!!!
Glad you liked it!!
Thank you!
Would great to see a video of this ideal pipeline in a cdk video. Almost feels like every tutorial for code pipeline just has the basic beta to prod design. Great video though!
Awesome video. Thank you.
You're very welcome Anthony!
Concerning the test part, I think it would be better to separate it from the production phase.
Thanks for your video. 1box? Just call it a canary release.
Good call!
Do you have a video that covers this throughout the project?
I liked your gear.
Thanks!
How about ideal CICD pipeline for monorepos? Thanks
Great idea thanks David
Couldn't have come at a better time. Great video, thank you!
Thanks a lot👍
You're very welcome Med!
Great vid.
Maybe you could add details on how prod (1box) and "big" prod work with the DB (unified, I assume; any notes on rollback rules for incorrect updates?).
Great stuff
Great vid !
Thanks Brijesh!
Nice informative video. I am from a QA background. I would like to know why integration tests need to be run in a separate Test/QA environment, and what if we run those tests in the DEV environment itself? Is it only to avoid test data dependencies for integration or E2E tests, or are there other reasons for running the integration tests in a separate environment? Thanks.
I have a question: how is the 1box relevant when we roll out a new feature? Wouldn't the 10% traffic be a bottleneck in production? Or do we expect to test production again?
In an ideal CI/CD pipeline, linting and other similar static analysis techniques should gate promotion to integration/CI testing.
Good point Ben. I made some amendments in the follow up to this video.
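That static-analysis gate can be as simple as a script that runs each check and blocks promotion on any non-zero exit. A minimal sketch (the specific checks you'd run are up to your stack; the ones below are placeholders):

```python
import subprocess
import sys

def gate(checks):
    """Run each static-analysis command; any non-zero exit blocks promotion."""
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            return False  # fail the pipeline stage, do not promote
    return True

# Example check list: substitute your real linters (flake8, eslint, etc.).
checks = [
    [sys.executable, "-c", "print('lint placeholder')"],
]
```

The CI system then promotes the build to the integration stage only when `gate(checks)` returns True.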
Man I love you
The one box thing is called a canary deployment
Nice video. I just think the canary concept is a bit different from what I have seen: canary testing is deploying new code to a limited number of components instead of hitting everything in production.
By the way…. I have no idea why it is called canary 😂
I googled it and this is what I found, RIP all those canaries 😰
Google:
The origin of the phrase is from the phrase “Canary in the coal mine”, in which coal miners would bring a caged canary bird into the coal mine to detect if the level of toxic gas was too high.
Poor little fellas!
I think the Canary, in the industry, means the 1box
What if you update the frontend and backend, and the new frontend is not compatible with the old backend? Then when it automatically rolls back, the whole system will crash and trigger the alert, and roll back again, eventually rolling all the way back to the first version?
How do you roll out your change to one box when you've made database changes and the code relies on them? It's a common case.
Great vid, but I'm an Infra Engineer; how should I work with CI/CD in the best way?
What tool do you use for the whiteboarding?
Hi Viktor, I'm using Photoshop.
Can you explain how to compile code for Node.js please? ;)
I think hitting every API every minute will increase server expenses a lot; or is it necessary?
Can anyone help me understand how you would do 1box with a separate backend/frontend? I would need the 10% to get both the new frontend and access to the new backend. The backend and frontend have different hostnames, and I don't know how to carve out groups of users by DNS when there are multiple hostnames in play.
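One common way to solve this (an assumption on my part, not something from the video) is to stop carving by DNS and instead bucket users deterministically by hashing a stable user ID. If both the frontend router and the backend router apply the same function, the same 10% of users land on the new version of both services regardless of hostname:

```python
import hashlib

def in_canary(user_id, percent=10):
    """Deterministically assign a user to the canary cohort.

    Hashing a stable identifier (user ID / session cookie) means every
    service, on any hostname, makes the same routing decision for the
    same user, keeping frontend and backend versions in sync.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100 < percent
```

In practice this logic would live in a shared load balancer, edge proxy, or feature-flag service rather than in each application.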
Can 1box also be described as blue/green deployment?
How about building on a golden image and using tools like Twistlock for container security scanning?
Maybe also adding a lint stage, plus a staging env instead of the prod env.
Good addition thank you !
What's the difference between blue/green deployment and 1box?
I don't understand how one can assure high-quality code delivery with structural code coverage of only 90+%. Anything below 100% should only be allowed when portions of the delivered product are not intended to be used in operation, e.g., some functions are intentionally not available.
What do you think code coverage accomplished?
Make sure nothing bad happens, then
make sure nothing bad happens, again, then
make sure nothing bad happens, again! THEN
MAKE SURE NOTHING BAD HAPPENS - THEN
MAKE. SURE. NOTHING. BAD. HAPPENS.
But then, implement a system to detect that something bad has happened, because it always happens.
Next step: Keep an eye on it personally in case the system didn't detect it automatically.
Can the environments be separated by different regions instead of accounts? And by account, do you mean the main account (my company only has one of these) or an IAM account?
looks like an aws pipeline :)
Shhhhh don't tell anyone ;)