I've worked at the Dutch railways, where our software supplied travellers with information about their journeys. Everything ran in Docker containers, and inside those containers the configuration was updated every 20 minutes using Puppet.
This meant that when a new train arrived from the factory, after installing the base system and configuring its identity, it was automatically brought up to date with everything it needed.
For me, this was a great example of how embedded systems (given they're connected) can also benefit from this kind of technique.
At my company we use Helm / k8s with Docker to deploy our immutable infrastructure. However, we depend on an outside partner (who hosts a different service that we use) and they... often make changes that end up impacting our environment.
It's a different problem (interfaces and system borders, which you already covered in another video), and I'm very glad we have no issues on our end!
Great video Dave. I've been working on a project using Amazon CDK, and I have a single repo with infra and code, so I can just run cdk deploy and it's all pushed. As you commented, it is slow. These days it takes about 5 minutes to make a simple code push because of all of the checks CDK does - too slow for development. However, using TDD I've been able to make this less of a problem. All the code is in Lambda, so I can mock incoming requests reliably, and I can also mock requests to other Amazon services. So I can make changes rapidly locally running one test, then run the entire suite locally, and then deploy to AWS. The moto library for mocking AWS API calls is awesome. I'd never go back to development of 2010 or earlier.
That said, I see so many developers who don't get IaC. They see it as a DevOps job and they just want to write code. Amazon CDK / CloudFormation is great for declarative infrastructure: it only does what needs to be done and will remove deleted infrastructure. The best part of CDK is that you express your infra in JS/Python/C#/Java etc., so you can use inheritance etc. in your infra. So there is no reason for a developer not to do this. I find a programming language works better than YAML, but I understand that a lot of sysadmins probably prefer configuration files.
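For readers who haven't used moto, here is a minimal sketch of the kind of test described above - the handler, bucket name and event shape are invented for illustration, not taken from the commenter's project:

import json

import boto3
from moto import mock_aws  # moto >= 5; older releases expose per-service decorators like mock_s3


def handler(event, context):
    # Hypothetical Lambda handler: persist the incoming payload to S3.
    s3 = boto3.client("s3", region_name="eu-west-1")
    s3.put_object(Bucket="orders", Key=event["id"], Body=json.dumps(event))
    return {"statusCode": 200}


@mock_aws
def test_handler_stores_payload():
    # moto intercepts the boto3 calls, so the test never touches a real AWS account.
    s3 = boto3.client("s3", region_name="eu-west-1")
    s3.create_bucket(
        Bucket="orders",
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )

    response = handler({"id": "42", "status": "created"}, context=None)

    assert response["statusCode"] == 200
    stored = json.loads(s3.get_object(Bucket="orders", Key="42")["Body"].read())
    assert stored["status"] == "created"

Run locally with pytest; the whole suite stays fast because nothing ever leaves the machine.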
Yeesh, Dave, great content as always. But it sounds like you're talking through a tube.
I thought the same thing. But the content was so good that I set aside the sound quality and watched the whole thing anyway.
Thank you for sticking with us!
Unfortunately, the whole team went down with Covid, so we did the best we could in the circs - not up to our usual production standards, sorry!
@@ContinuousDelivery While the sound is indeed a bit worse, I think the video recording is better - maybe sometimes a little overlit, but you stand out more from the dark background. And it's not only the shirt colour, which suits you well, but also the edges (keying?) look better, and so does the movement (higher keyframe rate?).
I'm not an expert and this is a very subjective opinion, but my impression was that you have new equipment and just haven't got the sound set up well yet - the overall effect is very good.
@@ContinuousDelivery hope you all feel better soon!
@@stephenbutler3929 Thanks, we're all on the mend now.
Hi Dave, first of all I just want to thank you for all the great content you provide. It's always super insightful!
This particular episode made me think again a lot about an "issue" I have with IaC regarding testability. Just like you, I like to test everything I do as much as I can, always trying to maximize the value each test can provide in terms of effective feedback.
That's why, regarding IaC, I have a small conceptual issue. I feel like a 'unit testing approach' does not apply well to that kind of code, in the sense that, in very fine-grained tests, we try to remove/mock external dependencies or actors to focus on the business logic. That's a thought I came across when writing Jenkins pipeline code. Scripts are mostly a succession of side effects, and trying to abstract away external actors just leads to what I call 'interaction tests', where you basically check via mock verifications that you called the right dependency with the right arguments... the kind of tests that you described in previous videos as coupled to the production code and hard to maintain, and I totally agree with that.
Then, if the 'unit test' approach is not suitable, the next one up would be more about 'integration testing' - by which I mean tests where you actually don't mock away external actors. But those are usually more costly, slower, and harder to maintain due to interactions and coupling between services and environments, etc...
Hence, I was wondering if you had insights or opinions on that particular subject of efficient/effective IaC testability? Maybe there's room for a whole video here :p
Again, thank you very much for all your work!
Test-driven IaC is still pretty "bleeding edge" as far as I can tell. I think that the difficulty is the same as for any code: testing at the edges of the system, where there is input and output. I think that the solutions are the same too - try to marginalise the inputs and outputs so that you can fake them and test the rest of the code. The problem with IaC is that it is a lot about the I/O.
I worked on a team where we did this, and got some real value out of it, but it was always a bit more tactical than regular TDD. We were using TeamCity for build management, and did most of our glue automation with Unix shell scripts. We got decent TDD in place for the shell scripts, we had a working deployment pipeline for our deployment pipelines, and we designed the pipeline around a collection of simple adapters, so that we could test the logic in TeamCity with unit-like tests that talked to the adapters. Then we had some acceptance tests, with a real mini-project that had a handful of real unit tests and a handful of real acceptance tests, so that we could run the pipeline and see it report success, report failure, and so on. As I said, it wasn't really elegant, but it did work pretty well for us.
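As a loose illustration of that adapter idea (not the actual TeamCity/shell setup - every name here is invented), the shape in Python would be something like:

class BuildServerAdapter:
    """Thin seam between the pipeline glue logic and the real build server."""

    def trigger(self, job: str) -> None:
        raise NotImplementedError

    def status(self, job: str) -> str:
        raise NotImplementedError


class FakeBuildServer(BuildServerAdapter):
    # Records interactions in memory so the logic can be tested without
    # touching the real build server.
    def __init__(self, results: dict[str, str]):
        self.results = results
        self.triggered: list[str] = []

    def trigger(self, job: str) -> None:
        self.triggered.append(job)

    def status(self, job: str) -> str:
        return self.results[job]


def run_stage(server: BuildServerAdapter, job: str) -> bool:
    # The glue logic under test: trigger a job and report success or failure.
    server.trigger(job)
    return server.status(job) == "SUCCESS"


def test_run_stage_reports_failure():
    fake = FakeBuildServer({"deploy": "FAILURE"})
    assert run_stage(fake, "deploy") is False
    assert fake.triggered == ["deploy"]

The unit-like tests exercise the glue logic against the fake; only the thin real adapter needs the slower acceptance tests against the actual build server.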
15:00 demonstrates why Dave is such a great systems thinker, not that anyone here needed that reassurance.
I would say another key benefit is that IaC gets your IT teams off of menial busy work and allows them to work on creativity- and innovation-driven projects. That "pushing the button", which provides auditability & other good bits, frees up your FTEs to perform complex tasks that require a human brain - win-win in my book.
Great video as always Dave.
Dave, make sure you have a healthy lifestyle. We want you around for a long time
Thanks
When I was hired as Team Lead, I established an IaC practice. Now it is a requirement for any solution and project delivered to customers. I have never regretted this decision. We made some smart choices and developed tools and tech internally, without relying on and hiding details behind 3rd-party packages such as Terraform. Knowledge and productivity increased dramatically. This must be a requirement. I developed the initial set of IaC tools myself and then passed it on to others. Now the team cannot go back to manual deployments of infra/networks/security.
So Terraform is 3rd party for you, but if you develop new tools for the same purpose, you think that's smart? Behind Terraform is a world-leading company with lots of really smart devs. It's usually not smart to reinvent the wheel.
@@malizios821 To answer your question. This choice was neither smart nor stupid. To explain:
1. You are correct. I had a choice: Terraform or my team. My first instinct was to use Terraform. Unfortunately, at that time Terraform was underdeveloped and could not handle everything we needed. A 3rd-party vendor also involves costs and hides the knowledge under the hood. If I worked at a company that mainly consumes dev-productivity tools, then something like Terraform would be my choice. But we are software developers; we need to satisfy our own needs and must have a very deep understanding of infra. Our choice was PowerShell-based automation. Need to fix provisioning of some sort - no prob. Need to do privileged operations - no prob. Secure environments - no prob. Native integration with Azure - no prob. Full-featured, C#-like automation - no prob. No additional language skills required.
2. We pass IaC capabilities to our customers by default, so customers are not locked in. To do that we must rely on source code without relying on a 3rd party, unless the customer already has Terraform. Many customers see that as a competitive advantage (no lock-in, no additional package to train on). It depends on the customer environment, but IMHO basic admin skills should be sufficient to make use of IaC. For typical enterprise customers I often recommend 3rd-party packages, like Terraform.
3. There is a lag risk. We focus on Azure. When Azure introduces a new feature, we would need to wait and make sure that Terraform supports it. If we handle it ourselves, we can use it the next day.
Great introduction and overview on the topic! But can you please add chapters to the video? That would be great.
5:38 Ehm, a clear pro is that if you need to get something online for a meeting with a client in 30 minutes, you don't need to know how to puppeteer infrastructure via APIs. If the demo system needs to be online, it needs to be online. Fast. You don't always need repeatability. For the rest: use Pulumi.
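For anyone who hasn't seen Pulumi, a minimal hypothetical program looks something like this (Python with the AWS provider; the resource name and tag are made up):

# Hypothetical Pulumi program - run with `pulumi up` after creating a project
# (`pulumi new aws-python`) and configuring AWS credentials.
import pulumi
import pulumi_aws as aws

# The infrastructure is declared in ordinary, version-controlled code
# rather than clicked together in a console.
demo_bucket = aws.s3.Bucket("demo-bucket", tags={"purpose": "client-demo"})

pulumi.export("bucket_name", demo_bucket.id)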
audio is bad
I am late to this one, but I have one question: does the principle of not branching in Git and working only on the main branch still apply to things like Terraform and IaC?
Would you recommend a GitOps approach for IaC? I attended an Istio workshop last year, and the presenter told us he heavily recommends combining Istio with GitOps.
If so, does your "usual" recommendation of a trunk-based branching model with pair programming still hold up to IaC+GitOps?
Tbh, I get kind of nervous thinking about changing the foundation of my production system just by committing something.
Yeah, nervous - and for us it's not allowed, since it hasn't gone through governance. Of course, you can have gates where GitOps requires a manual or some other promotion approval mechanism.
GitOps is ok, though I don’t think it adds anything significant to the concepts, so sure, certainly good enough, no different to any other IaC approach though.
One question - how would you describe IaC without GitOps? From what I understand, it's the same, just that the configuration job runs on a click rather than on a push, isn't it?
In my projects, we keep the app source code in the repo right next to the dependent infrastructure definitions. This way, when you make some change in the source code which requires infrastructure changes (like updating framework runtimes), you do it in one commit and it's deployed together. If my understanding of GitOps is correct, to work in the GitOps way we would need to create a separate repository and track dependencies between them. That just sounds unnecessarily complex.
The only place where I use an independent, infrastructure-oriented repository is when the infrastructure is not related to a particular product. E.g. at an NGO where I volunteer, we use a repo to keep 1000+ DNS entries (it's public BTW - github/itwzhp/infrastruktura), but in other cases I can't see particular benefits to this approach.
@@qj0n No difference. How config changes are executed is a choice of pipeline design. My default is that infrastructure config changes flow through the same pipeline as application changes, are tested with the version of the application that they relate to, and are deployed as part of the application deployment. If there are no changes to the app, config changes flow through the pipeline in the same way, are tested with the current version of the app, and are deployed at the end of the pipeline. No special cases; whether the prod deployment is automated or manually triggered is really just a choice of how your pipeline is configured. I agree with you - I much prefer that my infrastructure config lives alongside the app, so that it is clear which versions of the app this config has been tested with.
Excellent !!
Considering configuration management systems like Puppet/Ansible/Salt/Chef etc. - this is an interesting scenario in which I have fully embraced trunk-based development for all associated modules. For the conglomeration aspects, such as a Puppet control repo, do you feel it is preferable to still utilize a version-control-everything approach vs. a trunk-based development approach towards it?
I don't see a "vs"? I use TBD for IaC changes. My preference, because it is simplest, is to keep everything in the same repo: source, config, IaC code, everything.
@@ContinuousDelivery Hi Dave - perhaps I worded my question poorly, or, even more likely, I've missed a key aspect of TBD.
As I interpret it, TBD means that all component aspects (such as modules) are themselves on trunk, and the conglomerate repository that pulls them all in is also on trunk, allowing it to be deployable and in line based upon syntax, unit, acceptance, etc. testing in the pipeline - for not only the component modules but also the conglomerate (such as a Puppet control repo). Thus there are CI/CD pipelines for each component module and for the enabled aspects of the conglomerate (elaborated below).
Looking at a more specific example:
If I were IaC'ing Puppet code, then in the eyes of TBD every manifest/module would inevitably be hidden behind a feature flag and enabled as needed - enabling sudo custom configs, HAProxy configurations, etc. At that level, would version-locking any of the component aspects make any sense? If we (the royal we being devs, etc.) are disciplined in TBD, should not all components continuously be on main/trunk/master?
@@ContinuousDelivery if preferable, I can happily email you with a more concrete "real world" scenario in which my confusion/question will hopefully be outlined
@@crushingbelial I am still not sure that I understand what you are describing. At the moment this seems to me like a complicated way to describe a simple idea.
TBD = using version control to simplify the relationships between things. Have one repo that contains everything. So the 'current' version is definitive: we know that this IaC config works with this version of the app, because they are all in one repo and usually, ideally, were all tested together. That is all TBD is about.
Now you may add things like feature flags for some kinds of change if you really want to, but the easiest use of TBD is to make every change atomic, so that every change keeps the system correct. Sometimes, for complex changes that you want to drip into prod over a series of releases, more complex things like FFs can help, but this is only peripherally related to IaC.
@@ContinuousDelivery Thank you for the simplification - that actually did help me out. I think I may be conflating some details with TBD rather than the core aspects you outlined
The link to the IaC book seems to be missing
Thanks for spotting that - I have added this to the details: "Infrastructure As Code", by Kief Morris ➡️ amzn.to/3ppZXxJ
How do you do IaC with databases? From experience, those usually get set up once and then stay like that until the data has to be moved to a new architecture of some kind.
There's a lot of stuff that still needs to be managed, even without architecture changes - databases have a lot of configuration for optimising performance, automated backups, logging, etc. IaC can also manage DB users, and in some cases you can even change the underlying hardware (more RAM? more CPUs?). With IaC you can easily manage all of this to e.g. propagate changes from your perf env to prod, clone environments, etc.
But overall yes, IaC works much better with immutable infrastructure, which can be re-created on the fly. DBs are not exactly immutable :)
You script everything, and employ data-migration strategies. I talk about some aspects of this here: th-cam.com/video/JPfbjKl9jbw/w-d-xo.html
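As one hedged illustration of "script everything" on the database side, a tiny idempotent migration runner might look like the sketch below (SQLite and the table names are just for the example; real projects often reach for tools like Flyway or Liquibase instead):

import sqlite3

# Ordered, versioned migrations; each runs exactly once per environment.
MIGRATIONS = [
    ("001_create_journeys", "CREATE TABLE journeys (id TEXT PRIMARY KEY, destination TEXT)"),
    ("002_add_platform", "ALTER TABLE journeys ADD COLUMN platform TEXT"),
]


def migrate(conn: sqlite3.Connection) -> None:
    # Track what has already been applied, so re-running the script is safe.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    for name, statement in MIGRATIONS:
        if name not in applied:
            conn.execute(statement)
            conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))
    conn.commit()


if __name__ == "__main__":
    # The same script runs in every environment, so schemas never drift.
    migrate(sqlite3.connect("app.db"))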
In other words, you're running your software in different fields, and some fields contain all cows, some contain 2 sheep and 1 cow, and another contains 3 cows, 2 sheep and a giraffe.
In the end, it comes down to laziness:
I'm way too lazy to log into all of my environments and go happy bug-hunting.
I just made the entire config a part of the CI/CD pipeline, so that even if my infrastructure ever burns down, I can rebuild from scratch without doing any work.🤷
What are the usual technologies that compose an IaC environment?!
All sorts: Helm, k8s, Terraform, Docker, Puppet, Chef, shell scripts… Whatever it takes, whatever makes sense for your tech. I think that the philosophy is much more important than the tech, but then I nearly always say that 😉😁
There are different tools for different infrastructure. The ones I know and can recommend:
- for Azure cloud: Azure Resource Manager + Bicep (alternatively, you can use Terraform if you want to be cloud-independent)
- for on-prem machines: Ansible
- for DNS: dnscontrol
- for Docker: docker-compose files loaded into your preferred orchestrator
- for CI: YAML files in the repo (most tools support this; TeamCity is the exception, using Kotlin to define pipelines)
I also tried to use SaltStack, but it's really hard to use once you know Ansible and see how SaltStack should have been designed ;)
This view of continuous delivery is limited to the delivery of software in a limited environment. When you are setting up continuous delivery for software running on hardware products that the company creates, you have to test the software across infrastructure with the necessary hardware topological variation, and so you can only partially approach config synchronization.
A favourite real-world example Dave and his co-author Jez Humble like to bring up is HP. Their success story of a complete reorganization around continuous delivery speaks to your concerns. They've managed to deal with the numerous hardware variants they produce and have to support.
@Abe Dillon HW dependencies? Can you containerize different HW models with different processors, different audio encoders, and different network connectivity configurations?
@Abe Dillon I also worked at Dell on OneFS clusters. We had virtualized clusters, but you can't just test on a virtualized cluster. Eventually you have to throw your code onto real HW with real network card variation, real disk variation, real power controllers, real power supplies, etc. It's a different story when you make both the HW and the software.
@Abe Dillon Example: we had a bug that was thrashing SSDs in the clusters and wearing them out. If you test only on virtual clusters you might never catch a bug that is thrashing your HW, but if you test on real HW configurations you find things you cannot catch in a virtualized environment. Companies cannot afford to ship millions of dollars of HW products only to recall them, or to ship a multi-million-dollar 20 PB cluster to a client only to have it brick on them. You have to test on your HW. In the cases I am referring to, the software is an image of an OS, so you don't run a container on the HW; it's spec'd as close as possible. You run the image you're going to ship on the HW, just like the customer will get it.
@@jangohemmes352 You can do continuous delivery, but full infrastructure as code is not attainable when you're shipping HW. You end up having to make your infrastructure adapt to the HW environment instead of making your code adapt to the HW.
sounds like kubernetes or docker
btw there's something off about the audio codec on this video, for me at least. Sounds like Dave is down a manhole.
Same for me, not the usual quality of his vids.
Unfortunately, the whole team went down with Covid, so we did the best we could in the circs - not up to our usual production standards, sorry!
@@ContinuousDelivery Sorry to hear that, hope they get better soon.
5:40 I find it hard to agree with you here, really. Pro: sometimes you just have to prioritize your efforts, and the learning curve for this automation can have a lesser priority. And another thing: the fact that this is a common approach is not a con. We all breathe too, and I've never heard of anyone who stopped doing so because it's too common :)
content's really good but the audio is terrible. Just a bit of feedback (pun intended).
Bro, I love you, but I can never hear you in videos - I have to max out the volume.