*Modern YouTubers jump cutting every misplaced vowel and topic change* >Dave enunciating every word of a technical topic for four minutes straight without ever breaking eye contact
Dave, thank you. I’ve held jobs including help desk tech, network admin, systems engineer, and cloud architect since 2013. I’ve asked half a dozen people how containers were different than VMs, and nobody has ever been able to answer the question like you have. Your statement “if you get ring 0 access on the container, you get it for the whole machine” made it click for me.
My favorite guy from the MS-DOS and Win95 days, explaining something I've been curious about. I have only used virtual machines, but I've supported systems using Docker.
A bare metal server is a house: you have your plot of land and your house, and it is all yours. A VM host is an apartment block: each server is a suite, but they share the infrastructure (plumbing, stairs, the building door). A container is a bed in an army barracks: you share everything.
That little hint about Kubernetes at 12:45 has me salivating for your explanation in the NEAR future. In the meantime, thanks for making this make sense in the plainest way possible. I get it. I finally get it. Thanks Dave.
In a lot of ways, modern virtualization and containerization technology in current operating systems are direct descendants of the IBM VM legacy. Big Iron showed us what was possible, and Moore's Law guaranteed that that technology would one day fit in your pocket.
@@jovetj I was an operator on DOS/VSE and then we did a conversion to MVS. We had VM. DOS/VSE on one virtual machine and MVS on the other. My head was spinning when working on it. Just unbelievable stuff.
@@rudycramer225 My head would be spinning, too! Radically different operating systems, really. Where I worked, we were a VSE/ESA shop, and while I can understand that may not be big or robust enough for everyone, it was easier to understand and work with, and felt like a more cohesive product. With that being said, having VM under everything surely helped keep lots of things about VSE out of the weeds.
I was a VM Sysprog from early 1980s to 2004. We had a large CMS user base, and ran MVS guests, and VSE guests - all communicating with each other. It was the most fantastic and flexible platform. My happiest working days.
Just a note: you technically don't need to learn how to create Dockerfiles. Just like with a VM, you can start a simple base container (Debian or Ubuntu or something), open an interactive shell inside it, and configure it however you want, like you would on a normal system. Then you run "docker commit" on that container to get an image with all the changes you made, which you can use much like you'd use VM snapshots.
While this is true, I would still use Dockerfile to define what your container is and does. When building it via the Dockerfile, if you make a line change, all previous steps are cached by default and don’t have to be redone, saving lots of time for tweaks during the build stage.
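For anyone curious what that caching looks like in practice, here is a minimal hypothetical Dockerfile; the base image and `app.py` are placeholders:

```dockerfile
# Layers are cached top-down, so the slow, rarely-changing steps go first.
FROM debian:bookworm-slim

# This apt layer is rebuilt only if this line (or a line above it) changes.
RUN apt-get update && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*

# The app is copied last: editing app.py invalidates only this layer and the
# ones after it, so the expensive apt layer above is reused from cache.
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

Re-running `docker build` after touching only app.py should then skip straight to the COPY step.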
"...and configure it however you want like you normally would on a normal system," and additionally needing about 50 times as long because of obscure 'safety' measures and extra configuration needed in places like networking or hardware-related stuff one never heard of before. Docker is painful and very inefficient for personal use. In not a single one of the roughly ten times I tried it to run a service on one of my computers, servers or SBCs was I actually using the Docker solution in the end: either it still would not work after wasting dozens of hours (90%), or I was simply too exhausted and annoyed, even after it finally worked (the remaining 10% of cases).
As someone who has been kicking around the IT industry in a variety of roles for 30 years, I understand all the words coming out of your mouth, but this particular video for some reason gave me an unanticipated appreciation for why, in so many far-future sci-fi settings, technology is more or less treated like magic. Not because it is hyper-advanced and thus indistinguishable from magic in the Clarke sense. I'm talking settings like Warhammer 40K or Fading Suns or even BattleTech, where a lot of it is either possible now or will be in our near future: the in-game civilizations had peaked at something far beyond us today but then collapsed and regressed all the way back, and the people who understood the technology and how to create and operate it were almost all wiped out, so extremely few people remain who can even keep the existing stuff running, much less invent new stuff. In some cases, the knowledge of how to maintain the technology has become ritualized into religious-like ceremonies. Which is to say, even without an intentional targeting of the people who can create and maintain these technologies, how many people today actually, truly, fully comprehend how this stuff works? Much less can build it from scratch in a clean room? So very few... And yet so much of the world today is critically reliant on this stuff! Kind of nerve-racking when you really think about it.
There are a lot more people than you think who can maintain this tech, especially with the tools that we have now. When I started, you learnt new tech from magazines, books, clubs, co-workers and, if you were in a company, specialised training courses. Now we have GitHub, YouTube, Stack Overflow, Reddit, LLMs... It's comparatively easy to get up to speed with most technology stacks without leaving your home.
@@damiendye6623 I don't need to remember how to do it in assembler, because I can do the operation in C and look at the disassembly output in Visual Studio. And that's just off the top of my head; a Google search or asking an LLM will yield many methods!
It is not like I'm deifying Dave; it's just this phenomenon where I have already used a lot of hypervisors, troubleshot many problems with containers, and got my knowledge in shreds and patches. And then I listen to this summary by Dave and everything goes into its designated place in my head, forming a solid structure of knowledge. It happens from time to time to us specialists: the last book we read on some subject seems so crystal clear, as if it was specifically tailored for our brain to understand, when in reality the critical mass of knowledge in the brain has reached the saturation point and we finally got to the level where we understand what the author wanted to say in every paragraph.
i've been amazed how quickly Docker has been adopted and improved... in a 'past life,' i signed up for dotCloud and still recall how rapidly a container could be configured and launched... great work, Docker team.
whether you use VM's or containers, having dealt with oh-so-carefully managed 'precious snowflake' environments, we can all be glad for these advancements.
@@rekall76 I've spent most of my life in middleware... where Microsoft .NET app pools and Java JVMs have been the more-efficient little brothers to what has become 'Docker' for the past decades. Back in mainframe days, we called these LPARs... new generations, new names, same concepts. :)
It's not really Docker's accomplishment but the technology that created the whole "container" thing, called LXC, which was essentially built by one guy. Docker is simply a "nice" GUI with some convenience tools, but in essence you can do all of that with LXD.
I really appreciate you taking the time to make the videos. Even if I know the information, I still tend to learn a thing or two from you and always enjoy the show. Thanks again
Man, your videos are great!!!! You’re technical, but your use of analogies shows your deep understanding of computer science… it’s excellent for learning!!!! thank you so much for creating these videos!!!!
I chose to set up a VM. It's just something I am more comfortable with at the moment. I created a VM template that I spent quite some time getting set up near perfect for my development environment, and then I make a copy of it every morning to work in. It's like a clean slate every day. Anything I need to save I save online, so there is not an issue with losing something. Sometimes during the day I even load the template again if what I was working on got quite cluttered. It normally takes about 10 minutes to make a copy of the VM template, so it gives me time to take a break and clear my head.
I maintained our container stack (docker, containerd, runc) at a major linux distribution for about six months. I'm gonna keep this video in my pocket now, because it's a really good overview of some very technical topics that people tend to have misconceptions about (starting with containers are not virtual machines).
As an older dude who got diagnosed with ASD/ADHD at age 59, I often come to your channel for explanations of things that I just can't seem to digest from others. Thanks.
@@nicholasneyhart396 Agreed. I'm 22 and ASD (formerly PDD-NOS), and he explains things the way I do, which sounds condescending as hell to neurotypical PC nerds: I assume others know nothing, just the bare-metal basics. Everybody is not me and does not have the same interests (or the same path down the same interest).
I'm not a SW engineer, but I had heard about Docker environments and had no idea what they were. Now, having seen your video about Docker and VMs, I have a better understanding of what they are. Many thanks.
Even if you're not a software engineer (I'm not one either) you can still utilize the power of VMs and containers to your advantage. For example, you could set up a Raspberry Pi as a cheap and low-powered Docker host at home to run Pi-Hole and Unbound, providing you with your own recursive DNS (Unbound) and network-level ad blocking (Pi-Hole), instead of relying on your ISP's DNS or some third party like Google or Cloudflare. There are plenty of tutorials on YouTube on how to do this. Besides blocking advertisement domains at the network level, meaning you won't see as many ads on websites (though it still won't block YouTube ads, for example), the main benefit of a setup like that is that your ISP or the third party are unable to collect data about your browsing habits from the sites you visit and how frequently you visit them.
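As a rough sketch of what such a setup can look like with Docker Compose — the image names, ports, and volume paths here are assumptions to check against each image's current documentation:

```yaml
# docker-compose.yml — hypothetical sketch for a Pi-hole + Unbound host
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"          # Pi-hole web admin UI
    volumes:
      - ./pihole:/etc/pihole # persist settings across container rebuilds
    restart: unless-stopped

  unbound:
    image: mvance/unbound:latest   # a commonly used community image
    restart: unless-stopped
```

After `docker compose up -d`, you would point Pi-hole's upstream DNS at the `unbound` service (port 5335 in many guides) via the admin UI or the image's environment variables.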
@@BigA1 As I said, there are plenty of good tutorials on YouTube on the subject. YouTube comments can be difficult about links, but I'd recommend a video called "you're using pi-hole wrong" by Craft Computing. He uses Proxmox though, as do I. I'm not much into Docker (yet).
@@RudyBleeker I wonder why you would use a Docker layer and not run Pi-Hole and Unbound directly on the Raspberry Pi? This is the question I always encounter with Docker, unless I want to transfer software or share my setup with somebody.
I have nothing but the fondest feelings for you, good sir, and I think it's a true crime that you do not have millions of subscribers. You feed the insatiable beast that is my quest for deep understanding, and you do it so concisely.
Awesome…and I mean awesome explanation of the differences. My whole background has been with VMWare and HyperV. I’ve been trying to get a clear explanation of this precise thing and this has been the best and most clear.
I find this guy amazing to listen to. I am a retired Systems Architect, which I did for a Fortune 50 company for 17 years, designing networks, apps, MS solutions and other crazy stuff with absolutely insane budgets. I did a lot of cool stuff. Now I have a different career, but I still do the IT side for my family business, now that I am involved in it heavily; I use IT to automate a lot of stuff using many different solutions, so I use a lot of tech. Guys like this make it fun to this day, even though MS is a cancer. Thanks Dave, for these straightforward videos, which I play on my second monitor while doing stuff. Edited for grammar. Not sure what was wrong with me when I wrote this.
When I saw the title "Docker vs VMs" (in the thumbnail tile, not on the video itself) I immediately thought VMS as in VAX/VMS. However, I wasn't disappointed by this video! 😊
Actually, you can also create a container without a declarative Dockerfile. You can build the container from a base image yourself by interacting with it using bash, for example: installing what you need manually with a package manager such as apt, and in the end taking a snapshot of the final container state into a new image that you can share. All without a declarative Dockerfile.
@@fritsdaalmans5589 I was just replying to the fact that a Dockerfile is not actually required, contrary to what is said in the video, that's all. If you want to discuss the pros and cons of using declarative Dockerfiles vs creating and storing Docker images manually, that's another discussion.
If you ever need to explain the difference between a container and a VM to anyone who is not an OS engineer take a snapshot at 8:30 . Your supervisor will think you are really smart. Gold quality content - thanks for sharing 👍
This is a very, very good video, especially for anybody who's maybe heard of virtual machines or containers but doesn't really understand what they mean.
As a guy running a few servers in a homelab, the VM vs containers argument turned out to be peanut butter vs chocolate. Both are great, and sometimes they go well together. My particular professional use case leans heavily towards VMs - think ERP software best-suited to infrastructure supporting very large databases, where millisecond latency in certain scenarios can lead to critical output taking too long to reach the key stakeholders. My personal use case is leveraging containers to add a few small workloads for the day-to-day stuff, like managing my digital media or just trying some new widget and destroying it if it doesn't meet my needs. Your conclusion was spot on, Dave. If it needs to scale one way or the other, pick the right solution.
peanut butter vs chocolate - for a non-american this comparison says volumes. For me peanut butter is what I buy to catch mice, while chocolate is one of the prime achievements of civilization :)
I think Docker networking deserves a whole video. One thing that threw me off was that host names can only be used within the container even if the different containers are added to the same network.
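For anyone else bitten by this: container names resolve via Docker's embedded DNS only on user-defined networks, not on the default bridge. A rough sketch (needs a running Docker daemon; the names and images are arbitrary):

```shell
docker network create appnet
docker run -d --name web --network appnet nginx

# On the same user-defined network, the name "web" resolves:
docker run --rm --network appnet alpine ping -c 1 web

# On the default bridge, it does not:
docker run --rm alpine ping -c 1 web   # fails: name cannot be resolved
```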
I saw a VM as a manufactured home. It's portable, but not quite made like a conventional home. It's a self-contained unit that can be placed on your land (storage space), and it operates independently but shares resources, like utilities, with the land. It can be replicated or moved, and if you have the land, utilities, and means of transportation, you can share it. A Docker container, meanwhile, is a movie set of one specified room (the program) of a home (manufactured or conventional). It has the bare necessities to resemble and function as said room, but it's very lightweight and very easy to replicate with a blueprint (image) of said room, and the crew does all the work. Docker Compose would be the project manager that helps set up different movie sets using your available foundation (kernel) to try to make the rooms resemble the manufactured or conventional rooms. Proxmox is like a trailer park 😅
I have to say that you are using a really good lens for your videos. The circle of confusion is really pronounced with a good focus and shallow depth of field.
That was fantastic, thank you; you have connected so many dots for me! The perfect natural follow-up would be Docker running in the Kubernetes environment, as you alluded to. Please consider that for your next project. Thank you so much, you rock.
To be absolutely real, Dave's dialogue on the recent lecture-esque videos, like the OPNsense video and this one, has been really good. Goes to show that the process keeps improving, and I am ALL HERE FOR IT... giga banger, and I will always hope for the next video.
Containers are great for horizontal scaling as well. Using an orchestrator like Kubernetes you can respond in real time to changing demand and scale out or back your cluster based on that demand. Using containers it's super easy to just clone an application container and deploy a completely functional application. Tearing down is just as clean since it will send termination signals to the pod to give your application an opportunity for a graceful shutdown.
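As a concrete example of that demand-driven scaling, a Kubernetes HorizontalPodAutoscaler manifest might look like this — the `web` Deployment name, replica bounds, and CPU threshold are placeholders:

```yaml
# Hypothetical HPA: scales the "web" Deployment between 2 and 10
# replicas to hold average CPU utilization near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Scale-down then triggers the graceful-shutdown path described above: each terminating pod gets SIGTERM and a grace period before it is removed.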
Dave, Very well done summary of VMs and Docker. I will be sharing this video with people that need a good explanation. I have tried but I believe you did it better without getting into the weeds.
Thank you so much. I've used Proxmox for many years and even containers on it, but didn't realize that Docker was just a different brand of containers. I always thought it was a different VM stack.
Docker containers and the Linux Containers (LXC) you and I use on Proxmox are not the same thing. Docker containers are "application containers", meaning they package the least possible amount of stuff to make an application work. At its core it's purely the application binary and the libraries it depends on; everything else is overhead. Linux Containers, or LXC, are "system containers", meaning they contain most of what makes a Linux operating system run, except for the Linux kernel, which they share with the host. They fall somewhere in between VMs and Docker containers.
Thanks Dave for the explanation. I'm a retired Windows Server engineer. I was responsible for maintaining VMs under either VMware or Microsoft Hyper-V. It was good to be able to vMotion VMs or migrate VMs from one physical host to another in our VM farms so that a host server could be maintained, even rebooted, without affecting any running VMs or applications. I don't believe that same ability exists with Docker containers, so any application built in a Docker container would need redundancy at a different level, like network load balancing, clustering or something similar. Having to support many hundreds of VM servers with single-instance applications running would have meant planning the monthly patching and maintenance of the guests at the same time as the host; in the VM world the guest OSes could be moved around to different hosts at will, and we could patch the hosts at different times from the guests. Also, any issue caused by patching would only affect one guest, so rolling back the patching of one guest didn't affect all VMs, whereas I'm guessing patching the Docker host might affect all Docker containers... But for sharing applications between developers or end users in an enterprise, I do understand the benefits of having a smaller file size to share and move around. Thanks again for the technical information you share in these videos. Having been in the IT Windows world, I find them very interesting...
Well, if you want to move your Docker containers around, that's what a Kubernetes cluster is for. Or, if you are less adventurous, you can set up your Docker on a VM and move that VM around. Or go completely insane and set up a Kubernetes cluster on a bunch of VMs for the ultimate redundancy.
I haven't had to do server administration for a few years; at the time, Docker and Kubernetes were still young projects. VMware was pretty mature, and vMotion was a tool we used a lot, which of course wasn't free, but was not difficult to set up. Much the same can be done with the open source Linux tools, but when I was doing the job it was more manual work. The open source tools I liken to running Exchange Server and Active Directory: everyone tells you it is easy, but when you do it you find out how much work there really is.
In the commercial environment you have to squeeze out that last 0.0000001% uptime but for me and my home lab Docker makes a lot more sense for most projects. If something has to go down and I need redundancy I just spin everything up on a different piece of hardware temporarily. That's rarely needed because my only "customer" is my wife. I usually just do updates when she's asleep. 😆
@@vitoswat I can go two steps further. I have an environment that is currently VMs running Docker containers inside of RH OSV on ESXi. It is basically Docker on VMs running inside of RH OpenShift (Kubernetes), which is running on VMs that are on an ESXi 8 cluster. Admittedly we are working on migrating the setup out of ESXi and onto direct RH OSV clusters, but the original setup still blows my mind; I inherited it a few months ago.
With a load balancer in front of Docker, you can create new Docker instances and move them around as you like, the same way you can move a VM from one host to another. So you can empty one host, update it, move Docker instances back to it, and then update another host.
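Docker's own Swarm mode bakes that pattern in: the routing mesh load-balances across replicas, so a node can be drained, patched, and brought back. A rough sketch (node and service names are placeholders):

```shell
docker node update --availability drain node1    # tasks reschedule onto other nodes
# ...patch and reboot node1 while its containers run elsewhere...
docker node update --availability active node1   # node1 accepts tasks again

docker service update --image myapp:1.2 web      # rolling image update across replicas
```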
Having used VMs for a number of years as a software tester, they allow the complete environment and configuration to be reproduced. Especially when I get a new drop of software, I just roll back to the snapshot of the OS without the app installed (I don't always trust the app uninstall). It doesn't sound like Docker will give that same level of control.
6:30 you can create a scratch docker container, exec a shell inside it and make changes, then run "docker commit" to create an image from the container. Just the same as you would with a VM and snapshotting. The Dockerfile is equivalent to the Vagrantfile of the VM world, it's just there to make it easier to track changes to an image in source control, it's not a necessary part of the infrastructure.
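Spelled out, that interactive snapshot workflow looks roughly like this (the image and container names are placeholders):

```shell
docker run -it --name builder debian:bookworm bash   # start from a plain base image
#   ...inside the shell: apt-get install things, edit configs, then exit...
docker commit builder mytools:v1                     # freeze the container's filesystem as an image
docker run -it mytools:v1 bash                       # new containers now start from that snapshot
```

The trade-off against a Dockerfile is reproducibility: the committed image works, but nothing records how it was made.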
I really liked the house vs apartment building analogy. I'ma use that in the future when I explain these concepts. ALSO: MicroVMs are an interesting and slightly unexpected way to get isolation without sacrificing performance. They're what Amazon uses for Lambda functions.
David, man! As always, a great video about this fundamental technology for development. So much better explained than in class at my university. You could do a part two detailing specific or well-known use cases for both technologies, and even get your hands dirty with some basic code, which would be really awesome. Then you should compare Docker vs Kubernetes. You are one of my favorite YT teachers. Thank you so much and have a nice day!
Been looking into this, so thanks for the information. I had heard of Distrobox and wondered how the underlying technology worked. Mainly to see about trying to create a NixOS setup with Arch in a Distrobox for testing, and to have an easily reproducible setup and backup.
Another use for VMs, which was touched upon but not explicitly stated, is running a system with a different OS than your host. For example, if your host OS is Linux, but you need a Windows system to support clients, or if you have a Windows 10 system, but still have software which doesn't support anything beyond Windows 7. Or, of course, if you have any DOS programs or 16-bit Windows programs that you still need/want to keep around.
No one knows what Kubernetes is. Not even the creators and maintainers. Kubernetes orchestrates containers across machines. With Docker, you have a single machine where you can run a container (without Swarm mode). With Kubernetes, you can run containers on multiple machines. You essentially treat a cluster as a single machine when loading container configuration.
If someone asks, you say it's Docker on steroids. In layman's terms, it's a swarm of high-availability containers, so you'd use it where demand might be highly variable.
Kubernetes and other platforms like OpenShift are schedulers for containers across one or more servers. They let you determine resource priority for containers, as well as networking and security. They're basically private cloud frameworks.
Really nice explanation! This is one of the interview questions I give potential systems engineers. It really is surprising how many people don't understand the tools they use every day.
Another great video Dave! I use Docker to deploy my application. One other advantage is that rolling back a release is easy: just go to the previous Docker version. Not that I ever have to do that.
I misread the flyer for the video: I read Docker vs VMS. I have always thought there was quite a lot of similarity between Docker containers and VMS processes (each with ports for comms). They had references out to shared images (like DLLs) which could be shared between different processes, even if instantiated at different addresses in the address space, using position-independent code. Nothing really new under the sun.
@7:59, and again @13:21 Dave, I have been searching high and low for that specific security aspect between Docker and VMs. I wanted the improved performance of Docker, but not with that risk. I want a sandbox of sorts, to keep various activities from seeing each other. Social media kept separate from banking, kept separate from gaming, kept separate from testing a download, etc. And even with social media, I want facebook kept 100% isolated from anything and everything (I stopped using facebook years ago, due to its aggressive spying / tracking of everything). I assumed that using separate VMs was the way to go, but was not sure if Docker was just as secure, and if I should learn Docker. Now I know not to struggle with learning Docker. I guess I was looking for a QubesOS style environment, without running QubesOS. Your video was exactly what I was seeking, for months. Thank you.
I really enjoy using Docker containers. I have a home lab setup running openSUSE Leap. I have a container for Home Assistant, PiHole, Traefik reverse proxy, qbittorrent, Prometheus, NodeExporter, Grafana, CAdvisor, Plex and I manage them with Portainer that's also in a container. Maybe I'll add more containers for fun 😊
I cannot express how well this video has helped me understand exactly what Docker is. I've been trying to understand WHAT it is, but everything else that I've found has only really explained how to create one, or what other people use it for. Based on the information you provided, I would surmise that Docker is best suited for things like Home Assistant, since it has fairly low resource requirements and there aren't really any security concerns. For something like a NAS, I would need to do more research, but I assume a VM would be more suited to the task because it needs more processing power to run efficiently, and providing that with dedicated resources gives it a comfortable 'floor' of performance, so to speak. Similar with Jellyfin, etc., especially in my case, because my home server doesn't have a lot of resources (yet) to share around, and ensuring a minimum consistent experience is important for watching videos. For something like software routing, Docker might seem appealing resource-wise, at least for a smaller network, but I would have security concerns, given that my router is the first thing external threats would interface with: if someone were able to exploit the routing software, they would be able to use that to gain access to the other Docker containers.
I work in the IT industry (though not as an IT professional), so I love consuming this kind of content. It's stuff like this that the world really runs on, and even if VMware is on its way out, VMs are here to stay.
I still remember the first time I rebuilt the gateway container on my production server. I'd been enhancing and polishing its Dockerfile on my local dev server for weeks, catching and correcting build bugs, before I finally had the courage to pull and deploy it to production. I don't recall worrying that much since I defended my PhD thesis 😂
Nice video, Dave. A little personal feedback: with stuff like this, the more visuals the better. Visuals help me a lot with so many different technical nuances; I can pause and let it sink in more. :)
Funnily enough I have not installed Docker in the last four years. The underlying technology, like you pointed out, is all about Linux containers and Docker just provided the first widespread command line tools and configuration file format, plus the Docker Hub ecosystem. On my workstation, natively installed Podman provides 1:1 compatibility and I end up mostly using it through VS Code devcontainers anyway. In production, Kubernetes' little brother k3s does the same directly without Docker. While folks think containers are just a hip'n'cool way to run modern software, I disagree. It's great especially for building legacy software because you can just pull an old Linux image and build the software there instead of going through the hassle of finding all the old libraries for your current OS.
Thanks, Dave! That was a nice, concise and clear explanation. Hypervisors I know well (ESXi/vCenter is my $dayjob, with a dozen sites and hundreds of VMs across the globe.) I just spun up a new ESXi box on an old work computer (i7) so that I can play with Microsoft AD and joining RHEL to the domain, something that I'll be needing for work and getting my RH certs. Since I'm going to be taking as many RH classes and certs as this 63-year-old brain can stand, I know that Docker and Kubernetes are in my future. Your LED server will likely be one of my first attempts, since you've also got me playing with ESP32. I've enjoyed your videos and hope you keep it up. Back when you were pounding on Win95 code, I was herding modems for Wolfenet; I'm sure we could tell each other stories.
I've spent so many years trying to explain to my research colleagues the difference between the two, and trying to get them to understand what it means for making my life easier on the system deployment side vs the development side. (Honestly, I just think they don't care and disengage.) Ty for breaking it down so I can just link them your video over and over in hopes they'll listen to another person ❤
Been wondering about this. Thanks for the explanation. I think I will have to watch this a couple more times before it all falls into place. Computing was pretty simple when I started in it in 1966. Even Linux was simple when I discovered it around 1995.
@@RikuRicardo My usual answer was "Well if you didn't install extraneous shit on your computer, we wouldn't have this issue." because it almost always came down to some non-standard software on their dev machines. It drove me batty. The simple fact was, if it didn't work on a brand new clean install with nothing but the base server software, then it was their problem to figure it out.
Awesome video. I knew this stuff, but the explanation was clean. A rather short video where you set up a VM or Docker for typical applications would be nice. I prefer VMs. As I work with custom embedded systems it's not just an editor but a rather complex development environment. And more often than not that environment has more value in-house than the first-view products being shipped using it.
Modern-day experience: running a VM through Oracle's VirtualBox really needed stuff to be enabled and disabled, and Hyper-V was one of the things that had to be disabled. It really slowed down my VM and made it extremely laggy. After following some guides, the VM eventually was workable, and I am now running a Windows Server hosting a web application through IIS without problems.
Hey man, you are clearly an awesome expert in this field, and I believe this is a fantastic video; you are sharing a wealth of super important knowledge here. It would be absolutely fantastic if you used a couple of diagrams, maybe a UML deployment diagram, and just highlighted the parts as you talked about them: seeing multiple guest and host OS parts in a VM, and not in Docker, would help in understanding the differences. Explaining a tiny bit of the history of Docker, as far as hyper-scaling a web server from 1 machine to 1000 and back down to 500 when you don't need state on everything, would be handy, perhaps. As would using a new image version, again when state isn't a huge concern... really just saying a diagram would be super handy here... but also saying thank you 🙏
Hello Dave. I'm an autistic person too. I also struggle with the side effects of high IQ. Could you make a segment on TSR (Terminate and Stay Resident) programs? For some reason I found great pleasure in making TSRs that slept silently in the background back in the good old MS-DOS days. Why did Microsoft introduce TSRs? Were they popular with other users? TSRs are kind of a strange bird. Who at Microsoft invented them, and what was the primary use-case? Did you make / work on the TSR architecture? It was kind of magical that you could "terminate" a program but it was still in the background, and if you assigned a hot-key (say a function key) to it, the background program would immediately be back again without the initial load time.
It's been interesting learning about the cgroups subsystem underlying Docker, might be a fun topic for a video. Namely, whether a user space exploit giving "ring 0"/"root" is still constrained.
Very good information - but I was constantly losing focus because of the spinny light thingy over your left shoulder. I don't know what it is but now I need one.
This was a very thorough and handy video... I wish the links mentioned in the video were included, though... But I'll subscribe to this channel for sure...
Great video, Dave! I've started using Terraform at work to spin up resources locally within Docker while also creating resources and VMs in AWS to interface with. I would love to hear your take on IaC and maybe your interpretation of pros/cons leveraging code for managing infrastructure. Cheers 🍻
I've been supporting Microsoft stuff since NT4, and I gotta say, it was way better when Dave and the lads were at the keyboard. Only installed my first Docker instance a couple of months ago, so this was a nice watch. Cheers again, Dave!
A hypervisor is "the OS for operating systems". It controls and shares the hardware resources between OSes, just as an ordinary OS controls and shares the hardware resources between processes/programs.
I set up a Docker container and run my C code from my PC now. I ran out of old PCs to put Linux on. So fun! I feel like a VM used too much of my computer's resources.
I build out infrastructure with VMs. Some VMs just happen to be container hosts too. It's easier to manage, and for disaster recovery you just migrate the VMs to another host.
7:59 regarding possible security exploits, I wouldn't quite say that a VM is fully isolated in all cases. Great explainer, I did have a little flashback to the days of using Thinstall/ThinApp for portable apps. Not quite the same as either a VM nor a Container, but somewhere in between.
Correct. It needs to talk to the host OS as well. Usually installing VM tools creates the possibility of extra security exploits. With containers you of course share the real kernel, and if you run rootful you create extra risks.
@@edwinkm2016 Yeah, or using bridged networks, or that VMWare just last month patched numerous escape exploits in esxi, fusion, workstation and whatever their cloud one is. Those were all rated critical.
*Modern TH-camrs jump cutting every misplaced vowel and topic change*
>Dave enunciating every word of a technical topic for four minutes straight without ever breaking eye contact
Unfortunately, though, falling into the modern TH-camr fetish for making the music at least as important as the content.
I don't know anything about programming, I'm here for coherent sentences.
Obviously brains help
That's the 'tism.
@@darylnd I didn't notice the soundtrack until you highlighted it… Thanks?
Dave, thank you. I’ve held jobs including help desk tech, network admin, systems engineer, and cloud architect since 2013. I’ve asked half a dozen people how containers were different than VMs, and nobody has ever been able to answer the question like you have.
Your statement “if you get ring 0 access on the container, you get it for the whole machine” made it click for me.
Thanks, it means a lot when I know I reached someone and they "got" what I was saying!
The way I always explain it is that containers virtualize userland only, while VMs virtualize kernelspace as well.
My favorite guy from the MS DOS and Win95 days, explaining something i've been curious about. I have only used virtual machines, but i've supported systems using docker.
Hahaha. I have a similar story. I wonder how many people found themselves supporting Docker without having any real idea how it worked.
Dave ask for a comment!
Us fans need to find a simple phrase or emoji to spam our favourite Win95-era-to-current tech channel.
@@Michael_Brock 📎 Would clippy do? 📎 😆 Oh, I forgot to put it in the form of 📎* --- A comment by an old guy --- *📎
whats a docker? im an oldschool vmware/virtual pc user lol
I've had far fewer bugs and problems with VMs than with Docker, but then, Docker uses VMs, so it's ironic.
a bare metal server is a house. you have your plot of land and your house. it is all yours.
a VMhost is an apartment block. each server is a suite, but share the infrastructure (plumbing, stairs, building door).
a container is a bed in an army barracks. you share everything.
Well defined! Thank you.
Saving this
What, then, is an app.pool or a JVM? :)
@@JonRowlison probably just an orgy.
@@JonRowlison A pool party? And someone pissed in the pool.
That little hint about Kubernetes at 12:45 has me salivating for your explanation in the NEAR future.
In the meantime, thanks for making this make sense in the plainest way possible. I get it. I finally get it.
Thanks Dave.
I worked with IBM's VM in the early 70's. Big iron cannot be beat. What wonderful machines they were and still are. Amazing!
I worked with VM/ESA in the 90s. Love it! Hell of an OS!
In a lot of ways, modern virtualization and containerization technology in current operating systems are direct descendants of the IBM VM legacy. Big Iron showed us what was possible, and Moore's Law guaranteed that that technology would one day fit in your pocket.
@@jovetj I was an operator on DOS/VSE and then we did a conversion to MVS. We had VM. DOS/VSE on one virtual machine and MVS on the other. My head was spinning when working on it. Just unbelievable stuff.
@@rudycramer225 My head would be spinning, too! Radically different operating systems, really. Where I worked, we were a VSE/ESA shop, and while I can understand that may not be big or robust enough for everyone, it was easier to understand and work with, and felt like a more cohesive product. With that being said, having VM under everything surely helped keep lots of things about VSE out of the weeds.
I was a VM Sysprog from early 1980s to 2004. We had a large CMS user base, and ran MVS guests, and VSE guests - all communicating with each other. It was the most fantastic and flexible platform. My happiest working days.
Just a note: you technically don't need to learn how to create Dockerfiles. Just like with a VM, you can create a simple base container (like Debian or Ubuntu), open an interactive shell inside the container, and configure it however you want, as you normally would on a normal system. After that you can run "docker commit" on that container to get an image with all the changes you performed, which you can use similarly to how you'd use VM snapshots.
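Roughly, that workflow looks like this; the container and image names (`build-box`, `my-nginx:v1`) and the base image tag are just examples:

```shell
# start a throwaway container from a base image and configure it by hand
docker run -it --name build-box debian:bookworm bash
#   ...inside the container: apt-get update && apt-get install -y nginx; exit...

# snapshot the container's filesystem into a new, reusable image
docker commit build-box my-nginx:v1

# the committed image now runs like any other
docker run --rm -d --name web my-nginx:v1 nginx -g 'daemon off;'
```

As the replies note, a Dockerfile makes the same result repeatable and trackable in source control, so commit-built images are best kept for experiments.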
Really good point, and even though I’m pretty experienced with Docker, I learned something from you today. Thanks!
But… highly inadvisable. The purpose of using the Dockerfile is to make it repeatable, and to track changes via source control.
While this is true, I would still use Dockerfile to define what your container is and does. When building it via the Dockerfile, if you make a line change, all previous steps are cached by default and don’t have to be redone, saving lots of time for tweaks during the build stage.
docker is so cool that I wish I had a need for it
"...and configure it however you want like you normally would on a normal system" — and additionally needing about 50 times as long because of obscure 'safety' measures and additional configuration of things like network or hardware-related stuff one never heard of before. Docker is painful and very inefficient for personal use. Of the roughly ten times I tried it to run a service on one of my computers, servers or SBCs, I never actually used the Docker solution in the end, because either it still would not work after wasting dozens of hours (90% of cases), or I was simply too exhausted and annoyed, even after it finally worked (the remaining 10%).
As someone who has been kicking around the IT industry in a variety of roles for 30 years, I understand all the words coming out of your mouth, but this particular video for some reason gave me an unanticipated appreciation for why, in so many far-future sci-fi settings, technology is more or less treated like magic. Not because it is hyper-advanced and thus indistinguishable from magic in the Clarke sense. I'm talking settings like Warhammer 40K or Fading Suns or even BattleTech, where a lot of it is either possible now or will be in our near future: the in-game civilizations had peaked at something far beyond us today but then collapsed and regressed all the way back, and the people who understood the technology and how to create and operate it were almost all wiped out. So extremely few people remain who can even keep the existing stuff running, much less invent new stuff. And in some cases, the knowledge of how to maintain the technology has become ritualized into religious-like ceremonies.
Which is to say, even without an intentional targeting of people who can create and maintain these technologies, how many people today actually truly fully comprehend how this stuff works? Much less can build it from scratch in a clean room? So very few... And yet so much of the world today is critically reliant on this stuff! Kind of nerve-racking when you really think about it.
There are a lot more people than you think who can maintain this tech, especially with the tools that we have now.
When I started, you learnt new tech from magazines, books, clubs, co-workers and, if you were in a company, specialised training companies.
Now we have GitHub, TH-cam, Stack Overflow, Reddit, LLMs... It's comparatively easy to get up to speed with most technology stacks without leaving your home.
It's like everything else in computing; abstracted to the point where very few people actually know what's going on.
@@rjy8960 Let's see if this is true.
Does anyone know what an Array is?
If you do then you have mastered the core data structure of computers.
@@TheReferrer72 but can you do it in assembler if not then no you haven't
@@damiendye6623 I don't need to remember how to do it in assembler because I can do the operation in C and look at dissembler output in Visual Studio.
That's off the top of my head; a Google search or asking an LLM will yield many methods!
I love this channel, no frills just quality content throughout
Presented with lowest entropy. Every sentence conveys something. Thank you.
It is not like I'm deifying Dave, it's just this phenomenon where I had already used a lot of hypervisors, troubleshot many problems with containers, and had my knowledge in shreds and patches. And then I listen to this summary by Dave and everything goes into its designated place in my head, forming a solid structure of knowledge. It happens from time to time to us specialists: the last book we read on some subject seems so crystal clear, like it was specifically tailored for our brain to understand, when in reality the critical mass of knowledge in the brain just reached the saturation point and we finally got to the level where we understand what the author wanted to say in every paragraph.
Dave... you did it again. A masterful explanation that is sufficiently technical but not overwhelming. Thank you for this.
i've been amazed how quickly Docker has been adopted and improved... in a 'past life,' i signed up for dotCloud and still recall how rapidly a container could be configured and launched... great work, Docker team.
whether you use VM's or containers, having dealt with oh-so-carefully managed 'precious snowflake' environments, we can all be glad for these advancements.
@@rekall76 I've spent most of my life in middleware... where Microsoft .NET App.Pools and Java JVMs have been the more-efficient little brothers to what has become 'Docker' for the past decades. Back in mainframe days, we called these LPars... new generations, new names, same concepts. :)
Thanks! 😊
It's not really Docker's accomplishment but the technology behind the whole "container" thing, called LXC, which was essentially built by one guy. Docker is simply a "nice" GUI with some convenience tools, but in essence you can do it all with LXD.
*libcontainer* and *dockerd* are a lot more than 'simply a nice gui'
You ask and you shall receive. Thumbs up given brother.
I really appreciate you taking the time to make the videos. Even if I know the information, I still tend to learn a thing or two from you and always enjoy the show. Thanks again
My pleasure!
Man, your videos are great!!!! You’re technical, but your use of analogies shows your deep understanding of computer science… it’s excellent for learning!!!! thank you so much for creating these videos!!!!
I chose to set up a VM. It's just something I am more comfortable with at the moment. I created a VM template that I spent quite some time getting near perfect for my development environment, and then I make a copy of it every morning to work in. It's like a clean slate every day. All the things I need to save, I save online, so there is not an issue with losing something. Sometimes during the day I even load the template again if the things I worked on got quite cluttered. It normally takes about 10 minutes to make a copy of the VM template, so it gives me time to take a break and clear my head.
I maintained our container stack (docker, containerd, runc) at a major linux distribution for about six months. I'm gonna keep this video in my pocket now, because it's a really good overview of some very technical topics that people tend to have misconceptions about (starting with containers are not virtual machines).
"Get ready to drink from the firehose of knowledge" What a *vivid* metaphor! 🤓😁🤭
As a older dude who got diagnosed with ASD/ADHD at age 59, I often come to your channel for explanations of things that I just can't seem to digest from others. Thanks.
I am only 20, but his patient demeanor helps me understand things I struggle with as well.
@@nicholasneyhart396 Agreed. I'm 22 and ASD (formerly PDD-NOS) and he explains things the way I do,
which sounds condescending as hell to neurotypical PC nerds.
I assume others know nothing, just the bare-metal basics. Not everybody is me and has the same interests (or took the same path down the same interest).
You should see his video about having something like that too. Maybe that's why you connect.
I'm not a SW engineer but had heard about Docker environments and had no idea as to what they were. Now seeing your video about Docker an VM, I now have a better understanding as to what they are. Many thanks.
Even if you're not a software engineer (I'm not one either) you can still utilize the power of VMs and containers to your advantage. For example you could set up a Raspberry Pi as a cheap and low powered docker host at home to run Pi-Hole and Unbound, providing you with your own recursive DNS (Unbound) and network-level ad blocking (Pi-Hole), instead of relying on your ISP's DNS or some third party like Google or Cloudflare. There are plenty of tutorials on TH-cam on how to do this.
Besides blocking advertisement domains at the network level, meaning you won't see as many ads on websites (though it still won't block TH-cam ads, for example), the main benefit of a setup like that is that your ISP or the third party are unable to collect data about your browsing habits from the sites you visit and how frequently you visit them.
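For a flavour of how little is involved, a Pi-Hole container on a Pi can be started with a single command; the timezone, volume path, and image tag below are illustrative, so check the official Pi-Hole Docker docs for the full set of options:

```shell
# run Pi-Hole as a container: DNS on port 53, admin web UI on port 80
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
  -e TZ=Europe/Amsterdam \
  -v ./etc-pihole:/etc/pihole \
  --restart unless-stopped \
  pihole/pihole:latest
```

Point your router's DHCP at the Pi's IP as the DNS server and every device on the network gets the ad blocking with no per-device setup.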
@@RudyBleeker Sounds great, where is the best place to learn (and use) such SW tools?
@@BigA1 As I said there are plenty of good tutorials on TH-cam on the subject. TH-cam comments can be difficult about links, but I'd recommend a video called "you're using pi-hole wrong" by Craft Computing. He uses Proxmox though, as am I. I'm not much into Docker (yet).
@@RudyBleeker I wonder why you would use a Docker layer, and not run Pi-Hole and Unbound directly on the Raspberry Pi? This is the question I always encounter with Docker, unless I want to transfer software or share my setup with somebody.
I have nothing but the fondest feelings for you, good sir, and I think it's a true crime that you do not have millions of subscribers. You feed the insatiable beast that is my quest for deep understanding, and you do it so concisely.
head. just provide it to dave. he will be happy from your head.
Dave, thanks for the explanation. I'm learning how limited my education has been.
"Drinking from the firehose of knowledge..." I love it! ❤️
Awesome…and I mean awesome explanation of the differences. My whole background has been with VMWare and HyperV. I’ve been trying to get a clear explanation of this precise thing and this has been the best and most clear.
I find this guy amazing to listen to. I am a retired Systems Architect which I did for a fortune 50 company for 17 years designing both Networks, Apps, MS Solutions and other crazy stuff with absolutely insane budgets. I did a lot of cool stuff. Now I have a different career, but still do the IT side of it for my family business now that I am involved in it heavily, I use IT to automate a lot of stuff using many different solutions, so I use a lot of tech.
Guys like this make it fun to this day even though MS is a cancer.
Thanks Dave, for these straight forward videos which I play on my second monitor while doing stuff.
Edited for Grammar. Not sure what was wrong with me when I wrote this.
Compliments Dave. Among the most concise descriptions of this I have seen.
When I saw the title "Docker vs VMs" (in the thumbnail tile, not on the video itself) I immediately thought VMS as in VAX/VMS. However, I wasn't disappointed by this video! 😊
Actually, you can also create a container without a declarative Dockerfile. You can build the container from a base image yourself by interacting with it using bash, for example, installing what you need manually using a package manager such as apt, for example, and in the end take a snapshot of the final container state into a new image that you can share. All without a declarative Dockerfile.
But in that case, you have to export and keep your Docker image as a file, which is huge, as opposed to the Dockerfile, which is tiny.
@@fritsdaalmans5589 I was just replying to the fact that a dockerfile is not actually required, contrary to what is said in the video, that's all. If you want to discuss the pros and cons of using declarative docker files vs creating and storing docker images manually, that's another discussion.
If you ever need to explain the difference between a container and a VM to anyone who is not an OS engineer take a snapshot at 8:30 . Your supervisor will think you are really smart. Gold quality content - thanks for sharing 👍
This is a very, very good video, especially for anybody who's maybe heard of virtual machines or containers but doesn't really understand what they mean.
As a guy running a few servers in a homelab, the VM vs containers argument turned out to be peanut butter vs chocolate. Both are great, and sometimes they go well together.
My particular professional use case leans heavily towards VMs - think ERP software best-suited for infrastructure supporting very large databases, and millisecond latency in certain scenarios can lead to critical output taking too long to reach the key stakeholders.
My personal use case is leveraging containers to add few small workloads to make the day-to-day stuff like managing my digital media or just trying some new widget and destroying it if it doesn't meet my needs.
Your conclusion was spot on, Dave. If it needs to scale one way or the other, pick the right solution.
peanut butter vs chocolate - for a non-american this comparison says volumes. For me peanut butter is what I buy to catch mice, while chocolate is one of the prime achievements of civilization :)
This video helped me understand the difference between Emulators, VMs and Docker Containers; and so, I want to thank you for that.
I think Docker networking deserves a whole video. One thing that threw me off was that host names can only be used within the container even if the different containers are added to the same network.
I saw a VM as a manufactured home. It's portable but not quite made like a conventional home. It's a self-contained unit that can be placed on your land (storage space), and it operates independently but shares resources, like utilities, with the land. It can be replicated or moved, and if you have the land, utilities, and means of transportation, you can share it.
While a Docker container is a movie set of a specific room (program) of a home (manufactured or conventional). It has the bare necessities for it to resemble and function as said room, but it's very lightweight and very easy to replicate with a blueprint (image) of said room, and the crew does all the work. Docker Compose would be the project manager that helps set up different movie sets using your available foundation (kernel) to try and make the rooms resemble the manufactured or conventional rooms.
Proxmox is like a trailer park 😅
I have to say that you are using a really good lens for your videos. The circle of confusion is really pronounced with a good focus and shallow depth of field.
That was fantastic thank you you have connected so many dots for me!
The perfect natural follow-up would be Docker containers running in the Kubernetes environment, as you alluded to. Please consider that for your next project. Thank you so much, you rock!
To be absolutely real, Dave's dialogue on the recent lecture-esque videos like the opnsense video and this one have been really good
Goes to show that the process keeps improving and I am ALL HERE FOR IT... giga banger and i will always hope for the next video
Containers are great for horizontal scaling as well. Using an orchestrator like Kubernetes you can respond in real time to changing demand and scale out or back your cluster based on that demand. Using containers it's super easy to just clone an application container and deploy a completely functional application. Tearing down is just as clean since it will send termination signals to the pod to give your application an opportunity for a graceful shutdown.
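The scale-out/scale-back loop described above is a couple of kubectl commands; the deployment name `web` and the thresholds here are illustrative:

```shell
# scale a deployment out to meet demand
kubectl scale deployment/web --replicas=10

# or let the cluster do it automatically based on CPU utilization
kubectl autoscale deployment/web --min=2 --max=20 --cpu-percent=80

# scaling back down sends SIGTERM to each pod first,
# giving the application its chance at a graceful shutdown
kubectl scale deployment/web --replicas=2
```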
Dave, Very well done summary of VMs and Docker. I will be sharing this video with people that need a good explanation. I have tried but I believe you did it better without getting into the weeds.
Thank you so much. I've used Proxmox for many years and even containers on it, but didn't realize that Docker was just a different brand of containers. I always thought it was a different VM stack.
Docker containers and the Linux Containers (LXC) you and I use on Proxmox are not the same thing.
Docker containers are "application containers", meaning they package the least possible amount of stuff to make an application work. At its core it's purely the application binary and the libraries it depends on; everything else is overhead.
Linux Containers or LXC are "system containers", meaning they contain most of what makes a Linux operating system run, except for the Linux kernel which it shares with the host. They fall somewhere in between VMs and Docker containers.
Thanks Dave for the explanation. I'm a retired Windows Server engineer. I was responsible for maintaining VMs under either VMware or Microsoft Hyper-V. It was good to be able to vMotion VMs or migrate VMs from one physical host to another in our VM farms so that a host server could be maintained, even rebooted, without affecting any running VMs or applications. I don't believe that same ability exists with Docker containers, so any application built in a Docker container would need redundancy at a different level, like network load balancing, clustering or something similar. Having to support many hundreds of VM servers with single-instance applications running would mean planning for monthly patching and maintenance of the guests at the same time as the host, while in the VM world the guest OSes could be moved around to different hosts at will and we could patch the hosts at different times from the guests. Also, any issues caused by patching would only affect one guest, so rolling back the patching of one guest didn't affect all VMs, whereas I'm guessing patching the Docker host might affect all Docker containers…
But for sharing applications between developers or end users in an enterprise, I do understand the benefits of having a smaller file size to share and move around. Thanks again for the technical information you share in these videos. Having been in the IT Windows world, I find them very interesting…
Well, if you want to move your Docker containers around, that is what a Kubernetes cluster is for. Or if you are less adventurous, you can set up your Docker on a VM and move that VM around. Or go completely insane and set up a Kubernetes cluster on a bunch of VMs for the ultimate redundancy.
I haven’t had to do server administration for a few years; at the time Docker and Kubernetes were still young projects. VMware was pretty mature and V-motion was a tool we used a lot, which of course wasn’t free, but was not difficult to set up. Much the same can be done with the open source Linux tools, but when I was doing the job, was more manual work. The open source tools, I liken to running Exchange Server and Active Directory - everyone tells you it is easy, but when you do it you find out how much work there really is.
In the commercial environment you have to squeeze out that last 0.0000001% uptime but for me and my home lab Docker makes a lot more sense for most projects. If something has to go down and I need redundancy I just spin everything up on a different piece of hardware temporarily. That's rarely needed because my only "customer" is my wife. I usually just do updates when she's asleep. 😆
@@vitoswat I can go 2 steps further. I have an environment that is currently VMs running Docker containers inside of RH OSV on ESXi. It is basically Docker on VMs running inside of RH OpenShift (Kubernetes), which is running on VMs that are on an ESXi 8 cluster. Admittedly we are working on migrating the setup out of ESXi and onto direct RH OSV clusters, but the original setup still blows my mind when I think of inheriting it a few months ago.
With a load balancer in front of the Docker hosts, you can create new Docker instances and move them around as you like, the same way you can move a VM from one host to another.
So you can empty one host, update it, move the Docker instances back to it, and then update another host.
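One way to get that empty-and-update flow is Docker's built-in Swarm mode (Kubernetes achieves the same with cordon/drain); the service name, replica count, and node name `host1` below are illustrative:

```shell
# run a replicated service behind Swarm's built-in load balancing
docker service create --name web --replicas 3 -p 80:80 nginx

# drain a node: its containers are rescheduled onto the other hosts
docker node update --availability drain host1

# ...patch and reboot host1, then bring it back into rotation
docker node update --availability active host1
```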
Nice and clear Dave. Would love a follow-up adding LXC's and venvs into the mix
Having used VMs for a number of years as a software tester, they allow the complete environment and set configuration, which can be reproduced. Especially when I get a new drop of software, I just roll back to the snapshot of the os without the app installed (I don’t trust the app uninstall sometimes). Doesn’t sound like docker will do that same level of control.
I knew most of this, but hearing an explanation of both reinforced some of my beliefs on how they worked and are different.
Built a cluster of dell c6100s because I was inspired by you and similar channels. Keep up the great content.
What other channels do you like that are similar to Dave's? I like Ben Eater.
6:30 you can create a scratch docker container, exec a shell inside it and make changes, then run "docker commit" to create an image from the container. Just the same as you would with a VM and snapshotting. The Dockerfile is equivalent to the Vagrantfile of the VM world, it's just there to make it easier to track changes to an image in source control, it's not a necessary part of the infrastructure.
I really liked the house vs apartment building analogy. I'ma use that in the future when I explain these concepts.
ALSO: MicroVMs are an interesting and slightly unexpected way to get isolation without sacrificing performance. They're what Amazon uses for Lambda functions.
David, man! As always great video about this fundamental technology for development. So much better explained than at class at my university. You can do a part two video detailing specific or known use cases for both technologies, and even get your hands dirty with some basic code, which would be really awesome. Then you should compare Docker vs Kubernetes. You are one of my favorite YT teachers. Thank you so much and have a nice day!
Been looking into this, so thanks for the information. I heard of Distrobox and wondered how the underlying technology worked. Mainly to see about trying to create a NixOS setup with Arch in a Distrobox for testing, and being able to have an easily reproducible setup and backup.
Another use for VMs, which was touched upon but not explicitly stated, is running a system with a different OS than your host. For example, if your host OS is Linux, but you need a Windows system to support clients, or if you have a Windows 10 system, but still have software which doesn't support anything beyond Windows 7. Or, of course, if you have any DOS programs or 16-bit Windows programs that you still need/want to keep around.
now explain what Kubernetes is
No one knows what kubernetes is. Not even the creators and maintainers.
Kubernetes orchestrates containers on machines. With Docker, you have a single machine where you can run a container (without swarm mode). With Kubernetes, you can run containers on multiple machines. You essentially treat a cluster as a single machine when loading container configuration.
@@JacobSantosDev "No one knows what kubernetes is."
**Goes on to explain what kubernetes is**
If someone asks, you say it's Docker on steroids. In layman's terms, it's a swarm of high-availability containers, so you'd use it where demand might be highly variable.
Kubernetes and other platforms like OpenShift are schedulers for containers across one or more servers. They let you determine resource priority for containers, as well as networking and security. They're basically private cloud frameworks.
Yah
Really nice explanation! This is one of the interview questions I give potential systems engineers. It really is surprising how many people don't understand the tools they use every day.
Another great video, Dave! I use Docker to deploy my application. One other advantage is that rolling back a release is easy: just go to the previous Docker version. Not that I ever have to do that.
I mis-read the flyer for the video. I read Docker vs VMS. I have always thought there was quite a lot of similarity between docker containers and VMS processes (each with ports for comms). They had references out to shared images (like DLLs) which could be shared between different processes even if instantiated at different addresses in the address space using position independent code. Nothing really new under the sun.
@7:59, and again @13:21
Dave, I have been searching high and low for that specific security aspect between Docker and VMs.
I wanted the improved performance of Docker, but not with that risk.
I want a sandbox of sorts, to keep various activities from seeing each other. Social media kept separate from banking, kept separate from gaming, kept separate from testing a download, etc. And even with social media, I want facebook kept 100% isolated from anything and everything (I stopped using facebook years ago, due to its aggressive spying / tracking of everything).
I assumed that using separate VMs was the way to go, but was not sure if Docker was just as secure, and if I should learn Docker. Now I know not to struggle with learning Docker.
I guess I was looking for a QubesOS style environment, without running QubesOS.
Your video was exactly what I was seeking, for months. Thank you.
I really enjoy using Docker containers. I have a home lab setup running openSUSE Leap. I have a container for Home Assistant, PiHole, Traefik reverse proxy, qbittorrent, Prometheus, NodeExporter, Grafana, CAdvisor, Plex and I manage them with Portainer that's also in a container. Maybe I'll add more containers for fun 😊
When you're 1-2 sec into a video and you gotta Like it ;) ! Really some awesome content, Dave! Thanks for your stories also!
I cannot express how well this video has helped me understand exactly what Docker is.
I've been trying to understand WHAT it is, but everything else that I've found has only really explained how to create one, or what other people use it for.
Based on the information you provided, I would surmise that Docker is best suited for things like Home Assistant since it has fairly low resource requirements and there aren't really any security concerns.
For something like a NAS, I would need to do more research, but I assume a VM would be more suited to the task because it needs more processing power to be able run efficiently, and providing that with dedicated resources gives it a comfortable 'floor' of performance, so to speak. Similar with Jellyfin, etc, especially in my case because my home server doesn't have a lot of resources(yet) to share around, and ensuring a minimum consistent experience is important for watching videos.
For something like software routing, Docker might seem appealing resource-wise, at least for a smaller network, but I would have security concerns given that my router is the first thing external threats would interface with; if someone were able to exploit the routing software, they would be able to use that to gain access to the other Docker containers.
Glad you found it useful!
I like this Peter Griffin part at the end! One day I will learn Kubernetes. For now, Docker Swarm seems magical!
I work in the IT industry (not as an IT professional) so I love consuming this kind of content. It's stuff like this that the world really runs on, and even if VMware is on its way out, VMs are here to stay.
in my world, it is PR/SM, z/VM, zCX - so much fun (i.e., IBM z16 boxes)
Another good use for VMs, running and testing different OSs.
I still remember the first time I recompiled the gateway container on my production server. I'd been enhancing and polishing its Dockerfile on my local dev server for weeks, catching and correcting compilation bugs, before I finally had the courage to pull and deploy it on production. I don't recall worrying that much since I was defending my PhD thesis 😂
Nice video, Dave. A little personal feedback: with stuff like this, the more visuals the better. Visuals help me a lot with so many different technical nuances; I can pause and let it sink in more. :)
Great channel and content. Thanks from an MS alumnus from Brazil
Funnily enough I have not installed Docker in the last four years. The underlying technology, like you pointed out, is all about Linux containers and Docker just provided the first widespread command line tools and configuration file format, plus the Docker Hub ecosystem. On my workstation, natively installed Podman provides 1:1 compatibility and I end up mostly using it through VS Code devcontainers anyway. In production, Kubernetes' little brother k3s does the same directly without Docker.
While folks think containers are just a hip'n'cool way to run modern software, I disagree. It's great especially for building legacy software because you can just pull an old Linux image and build the software there instead of going through the hassle of finding all the old libraries for your current OS.
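That legacy-build trick is mostly a matter of pinning an old base image. A minimal hypothetical Dockerfile sketch (project layout and distro version are assumptions):

```dockerfile
# Hypothetical example: build a legacy C project against the libraries
# that shipped with an old distribution release, instead of hunting
# down period-correct libraries for a modern host OS.
FROM ubuntu:14.04                 # old base image with era-appropriate libs
RUN apt-get update && apt-get install -y build-essential
WORKDIR /src
COPY . .
RUN make                          # builds with the old toolchain/libraries
```

The resulting image can then run the binary anywhere, or just serve as a throwaway build environment.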
I saw VMS and read DEC. Miss that company.
Your channel is a breath of clean air in today's YouTube
As usual, it was an excellent video. Thank you for providing so much helpful information.
Thanks, Dave! That was a nice, concise, and clear explanation. Hypervisors I know well (ESXi/vCenter is my $dayjob, with a dozen sites and hundreds of VMs across the globe.) I just spun up a new ESXi box on an old work computer (i7) so that I can play with Microsoft AD and joining RHEL to the domain, something that I'll be needing for work and getting my RH certs. Since I'm going to be taking as many RH classes and certs as this 63-year-old brain can stand, I know that Docker and Kubernetes are in my future. Your LED server will likely be one of my first attempts, since you've also got me playing with ESP32.
I've enjoyed your videos and hope you keep it up. Back when you were pounding on Win95 code, I was herding modems for Wolfenet, I'm sure we could tell each other stories.
I've spent so many years trying to explain to my research colleagues the difference between the two, and trying to get them to understand the differences when it comes to making my life easier on the system deployment side vs. the development side. (Honestly, I just think they don't care and disengage.) Thank you for breaking it down so I can just link them your video over and over in hopes they'll listen to another person ❤
Man, I did some MS exams with stuff about Docker, but still didn't understand it... in sentence #2 here it's clear. Thnx!!
This was great! Clarifying, and encouraging me to make attempts. A genuine blessing.
Been wondering about this. Thanks for the explanation. I think I will have to watch this a couple more times before it all falls into place. Computing was pretty simple when I started in it in 1966. Even Linux was simple when I discovered it around 1995.
"It works on my machine" - my most hated line from my time in QA and build management. 😂
Well, your machine belongs to me now. This is now your problem. 🤣
@@RikuRicardo the billboards in Nashville said it was my washing machine. But seems like there's an abundance of problems to be fixed so maybe when the washer stops, we can take it out and figure out what to do with it all. Lol
@@RikuRicardo My usual answer was "Well if you didn't install extraneous shit on your computer, we wouldn't have this issue." because it almost always came down to some non-standard software on their dev machines. It drove me batty. The simple fact was, if it didn't work on a brand new clean install with nothing but the base server software, then it was their problem to figure it out.
@tradde11 Oh God, yes. It might work as coded, but it's not coded in line with the design specs. LOL
Thank you for this video. I am starting to learn about docker and this was a great overview.
Awesome video. I knew this stuff, but the explanation was clean. A rather short video where you set up a VM or Docker for typical applications would be nice.
I prefer VMs. As I work with custom embedded systems it's not just an editor but a rather complex development environment. And more often than not that environment has more value in-house than the first-view products being shipped using it.
A modern-day experience: running a VM through Oracle's VirtualBox really needed stuff to be enabled and disabled, and Hyper-V was one of the things that had to be disabled.
It really slowed down my VM and made it extremely laggy. After following some guides, the VM eventually became workable, and I am now running a Windows Server hosting a web application through IIS without problems.
And I know this is Docker vs. VMs, but still.
Hey man, you are clearly an awesome expert in this field, and I believe this is a fantastic video; you are sharing a wealth of super important knowledge here. It would be absolutely fantastic if you used a couple of diagrams, maybe a UML deployment diagram, and just highlighted the parts as you talked about them. Seeing multiple guest and host OS parts in a VM, and not in Docker, would be helpful for understanding the differences. Explaining a tiny bit of the history of Docker would be handy too, as far as hyper-scaling a web server from 1 machine to 1000 and back down to 500 when you don't need state on everything, as would be using a new image version, again when state isn't a huge concern. Really just saying a diagram would be super handy here... but also saying thank you 🙏
Thanks Dave. I was wondering on the differences. Since normally I have worked with Type 1 & 2 hypervisors.
Hello Dave. I'm an autistic person too, and I also struggle with the side effects of a high IQ. Could you make a segment on MS-DOS TSR (Terminate and Stay Resident) programs? For some reason I found great pleasure in making TSRs that slept silently in the background back in the good old MS-DOS days. Why did Microsoft introduce TSRs? Were they popular with other users? TSRs are kind of a strange bird. Who at Microsoft invented them, and what was the primary use case? Did you make / work on the TSR architecture? It was kind of magical that you could "terminate" a program but it was still in the background, and if you assigned a hot-key (say a function key) to it, the background program would immediately be back again without the initial load time.
It's been interesting learning about the cgroups subsystem underlying Docker, might be a fun topic for a video. Namely, whether a user space exploit giving "ring 0"/"root" is still constrained.
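Docker does expose some of those cgroup and capability constraints directly on `docker run`; a sketch of commonly used flags (values are illustrative, and this assumes a running Docker daemon):

```shell
# Illustrative hardening sketch.
# --memory/--cpus: cgroup limits on memory and CPU.
# --pids-limit: caps process count (fork-bomb protection).
# --cap-drop ALL: drops all Linux capabilities.
# --read-only: read-only root filesystem.
docker run -d --name sandboxed \
  --memory 256m --cpus 0.5 \
  --pids-limit 100 \
  --cap-drop ALL \
  --read-only \
  alpine:3 sleep infinity
```

These reduce the attack surface, though per Dave's point in the video, a genuine kernel-level (ring 0) exploit bypasses all of it, since the kernel is shared with the host.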
This is a good explanation. IT people who still don't know about containers in 2024 are way behind the curve.
Very good information - but I was constantly losing focus because of the spinny light thingy over your left shoulder. I don't know what it is but now I need one.
I had a process that I needed to add, and I was only able to do it with the version that Dave coded
(Referring to task manager)
Looking forward to your kubernetes vs docker video. I really like the metaphors.
This was a very thorough and handy video... I wish the links mentioned in the video were included, though... But I'll subscribe to this channel for sure...
Terrific explanation Dave!
Great video, Dave! I've started using Terraform at work to spin up resources locally within Docker while also creating resources and VMs in AWS to interface with. I would love to hear your take on IaC and maybe your interpretation of pros/cons leveraging code for managing infrastructure. Cheers 🍻
Thanks for clarifying it in terms even I can understand!
I've been supporting Microsoft stuff since NT4, and I gotta say, it was way better when Dave and the lads were at the keyboard. Only installed my first Docker instance a couple of months ago, so this was a nice watch. Cheers again, Dave!
Thank you for making a video on docker. Love your content!
A hypervisor is "the OS for operating systems." It controls and shares the hardware resources between OSes, just as an ordinary OS controls and shares the hardware resources between processes/programs.
New lighting looks very good.
I set up a Docker container and run my C code from my PC now. I ran out of old PCs to put Linux on. So fun! I felt like a VM used too much of my computer's resources.
I'm liking this ahead of watching because I've wanted somebody to do a cross reference of these 2 in a meaningful way! \m/
I build out infrastructure with VMs. Some VMs just happen to be container hosts too. It's easier to manage, and for disaster recovery you just migrate the VMs to another host.
7:59 regarding possible security exploits, I wouldn't quite say that a VM is fully isolated in all cases.
Great explainer, I did have a little flashback to the days of using Thinstall/ThinApp for portable apps. Not quite the same as either a VM nor a Container, but somewhere in between.
Correct. It needs to talk to the host OS also. Installing VM tools usually creates the possibility of extra security exploits. With containers you of course share the real kernel, and if you run rootful you create extra risks.
@@edwinkm2016 Yeah, or using bridged networks, or that VMWare just last month patched numerous escape exploits in esxi, fusion, workstation and whatever their cloud one is. Those were all rated critical.
Going to comment for the engagement. Thanks for your short and informative vids, Dave!
Excellent video. In my job I want to push the docker approach and now I have more arguments for that :)