Jonathan Blow on Docker

  • Published Jan 7, 2025

Comments • 179

  • @Patrick-ig6tt
    @Patrick-ig6tt 10 months ago +301

    Let me guess. Jonathan thinks Docker is the best thing ever and his next game will be distributed exclusively in a container. Did I get it right?

    • @YOSFP
      @YOSFP 10 months ago +38

      can't contain the excitement

    • @Caboose2563
      @Caboose2563 10 months ago +11

      No, no -- you're thinking of LSP

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 10 months ago +39

      Sounds like him. Thank you, you just saved me from having to spend 3 whole minutes of my life listening to Jonathan being overly excited and positive about some random technology or methodology yet again. I'll go learn prompt engineering from shorts for an hour or four now.

    • @monad_tcp
      @monad_tcp 10 months ago +2

      Well, his next game is a container because he just statically linked everything and just loads data from the ELF .data section. Who needs file systems anyway? File systems were a mistake.
      Problem solved.

    • @sensitiveecoterrorist
      @sensitiveecoterrorist 10 months ago +1

      hELL yEA

  • @bloody_albatross
    @bloody_albatross 10 months ago +68

    Well, these days "regular software" is an electron app that bundles its own web fonts. XD

    • @ar_xiv
      @ar_xiv 10 months ago +16

      and you just have to put up with horrible performance, zero OS integration, and 3-14 updates a day for some reason, and God help you if your operating system is more than a few years old. Oh, and it's always secretly a Chrome browser

    • @liquidsnake6879
      @liquidsnake6879 9 months ago

      @@ar_xiv Sure, and it works exactly the same on any OS 99% of the time, doesn't require massive gigabyte downloads of 200 day-1 patches to fix completely broken release software, and it works seamlessly over the internet while ensuring your underlying system's safety. All this high praise of game developers must not be about the same ones I've grown acquainted with as a consumer of games lol
      Games are being released broken, really completely broken: crashing to desktop repeatedly, lagging heavily, with crap internet connectivity if any at all, enabling hackers to gain access to your system through RCE like Dark Souls 3. It's wild lol

    • @ar_xiv
      @ar_xiv 9 months ago

      @@liquidsnake6879 what's the difference between 200 day-one patches and a patch a day for 200 days?

    • @nousquest
      @nousquest 9 months ago +3

      "Just get more RAM"

    • @DiegoSandoval-cs5oo
      @DiegoSandoval-cs5oo 6 months ago +1

      Bundling your own fonts is the right way to do it, though.

  • @marcotroster8247
    @marcotroster8247 10 months ago +56

    The sad truth is, for 99% of projects there's no business case to deliver super polished stuff, at least not right away. You gotta stitch some crap together and make it look pretty to the outside with Docker. It is what it is.
    I guess people forgot how shitty those hand-crafted Windows IIS deployments with loads of manual testing used to be. Having a 100% scripted way to deploy is actually a huge win. Wouldn't wanna go back.

    • @ethograb
      @ethograb 10 months ago +5

      I do have to admit, the vision of a future where all applications are static binaries with standard and unchanging hooks into the OS is something that I REALLY want. I can at least strive for it on my own, I guess 🙂.

    • @marcotroster8247
      @marcotroster8247 10 months ago +5

      @@ethograb I'm a bit bruised by DevOpsing an Unreal Engine game. 3-hour build times, etc. Won't ever touch other people's CMake stuff again. It's basically impossible to install valid versions of build tools that fit together. Such a hilarious waste of precious life time. So no, I prefer Docker.

    • @gruntaxeman3740
      @gruntaxeman3740 10 months ago +1

      No one sane used Windows IIS.

    • @gruntaxeman3740
      @gruntaxeman3740 10 months ago

      @@ethograb
      The closest thing is to make your application run in the browser. That really is a standard.
      In my opinion, it's the improvement in browser technology that makes it almost always irrelevant what OS there is. It looks like it all started with jQuery, as it helped make cross-browser code and HTTP requests.

    • @ethograb
      @ethograb 10 months ago

      @@marcotroster8247 So long as it compiles to a static binary I can forgive you ;)

  • @debajyatidey9468
    @debajyatidey9468 10 months ago +96

    Jonathan never looks like a real world developer. He looks like a sci-fi movie antagonist.
    EDIT: 90 LIKES!
    MOM! I'm famous!

    • @bernardcrnkovic3769
      @bernardcrnkovic3769 10 months ago +11

      He looks like Brain from Pinky and the Brain

    • @wacky.racoon
      @wacky.racoon 10 months ago +1

      He looks like Kane

    • @smallsnippets
      @smallsnippets 10 months ago

      @@wacky.racoon I guess you mean Kwai Chang Caine

    • @wacky.racoon
      @wacky.racoon 10 months ago

      @@smallsnippets I was thinking more like Kane, from the Command & Conquer game series

  • @tc2241
    @tc2241 10 months ago +19

    As an ex-systems engineer: the pitfalls with containers are vastly overshadowed by the pitfalls with VMs or bare-metal nodes

    • @DiegoSandoval-cs5oo
      @DiegoSandoval-cs5oo 6 months ago

      The pitfalls in VMs are also a workaround for the problem that Jon is referring to.
      And the pitfalls in bare-metal nodes are the direct result of the problems that Jon is talking about.
      If running a website were as simple as executing a statically linked server binary and passing it a directory with your website's code and configuration (a sketch of what that could look like follows below), then VMs and Docker wouldn't have existed in the first place.
      The actual problems that originated all of this are:
      - Dynamic linking, and the expectation that the user has to install libraries as dependencies before they can run your software.
      - Essential libraries whose API contract changes out from under your feet every time a new version of your OS is released.
      - The way Linux package managers work.
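
      A minimal sketch of what that statically linked server could look like (my own illustration, not anything from the video or this thread; the program name, the port, and the choice of Go are assumptions): a self-contained HTTP server that takes the site directory as its only argument. Built with CGO_ENABLED=0 go build, it produces a single statically linked binary.

          // staticserve.go: hypothetical statically linked web server sketch.
          package main

          import (
              "log"
              "net/http"
              "os"
          )

          func main() {
              if len(os.Args) < 2 {
                  log.Fatal("usage: staticserve <directory-with-site>")
              }
              dir := os.Args[1]

              // Serve the given directory over HTTP; no shared libraries,
              // no package manager, no container runtime involved.
              http.Handle("/", http.FileServer(http.Dir(dir)))
              log.Printf("serving %s on :8080", dir)
              log.Fatal(http.ListenAndServe(":8080", nil))
          }

      Deploying it then amounts to copying the binary plus the directory, which is roughly the "just copy the program" world being described.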

  • @sarjannarwan6896
    @sarjannarwan6896 10 months ago +34

    It isn't just about interoperability. It also helps us package our stuff up into one image. Makes it easy to share the code + infra requirements, e.g. I can just download a docker image for a web server deployment instead of doing that all manually.

    • @chrisalexthomas
      @chrisalexthomas 10 months ago +7

      Boom! exactly right. Also, the more complex the website, the more parts it has, the more developers you need, the more problems you run into when they all try to edit the same codebase

    • @NukeCloudstalker
      @NukeCloudstalker 10 months ago +23

      @@chrisalexthomas The out of hand complexity is the issue. The solution is to address that complexity, not enable further complexity by layering over it to avoid dealing with it until you reach the pain-threshold again, only to layer over it again, ad infinitum.

    • @chrisalexthomas
      @chrisalexthomas 10 months ago +7

      @@NukeCloudstalker I agree. But the solution to this complexity is not to hand-roll your own solutions, because whilst that might work at the small scale, it won't work at the other end. Also, Docker does not make the problem more complex. It actually makes it simpler, in that you eliminate a whole class of problems caused by modifying the host machine, in favour of building a virtual machine (of sorts) using code that can be documented in your git repo.
      Were you a developer in the 2000s, when you hand-rolled your architecture, custom-edited your configuration, then had the pain and suffering of needing to tweak your machine all the time in non-repeatable ways? Or, heaven forbid, had to configure your partner's machine in the same way, but you'd forgotten all the edits you made to get your current setup working, and now your friend's machine can't work and you can't figure out which setting is wrong?
      Well, Docker eliminates ALL of those problems with something reliable, stable, and repeatable. Big win.

    • @peezieforestem5078
      @peezieforestem5078 10 months ago +4

      @@chrisalexthomas You're substituting the terms here. You're using complexity in the sense of using the system, but the issue is more like the fundamental computational complexity. To give you an example, turning on the engine in your car can be done with the press of a button or a turn of a key, which makes it a simple system to operate, but the underlying mechanism is very complex, which makes it a complex system.
      The best way to differentiate these complexities is to consider the places where things can go wrong. If you think in those terms, it becomes clear that docker is another layer where there could be bugs, and thus, it is an additional complexity layer.

    • @NukeCloudstalker
      @NukeCloudstalker 10 months ago

      @@chrisalexthomas Look at it this way: Docker is equivalent to putting a band-aid on a festering wound.
      Sure, you prevented bad stuff from getting into the wound, or blood from bleeding out; you "removed" the complexity.
      But it is still there. It is still causing problems; they're just not immediately causing you any tangible issues (yet).
      Eventually the band-aid needs replacement, because the wound is now a different beast, and sooner or later you'll realize it's too late to save the limb the wound was on, and that the only sane way to deal with the problem is to discard the entire thing. The issue was manageable at the point of band-aiding, had people treated it with foresight then, but now it is too late, and you either apply an even larger band-aid to hide the necrotized flesh surrounding the initial wound, or you cut off the limb, as it is wholly unusable.
      Not a perfect analogy, but the point is that hiding complexity beneath layers and working on top of that layer cements the problems hidden beneath that layer, and even allows them to get worse, up until the point where the layer above can no longer manage the complexity itself without becoming similarly complex to deal with.
      That's why working from fundamentals is important. Docker itself doesn't make "the problem" more complex; it changes the problem, and makes the complexity you were dealing with before something you just ignore, which can and will fester unless the roots of that complexity are dealt with (and Docker obviously does not do that; it just provides a layer for working around it).
      I never said it didn't solve "a problem", but I will say it doesn't solve the right/real problem(s). In fact, it hides those problems, which results in less being done about them in the longer run.

  • @techforserious60
    @techforserious60 10 months ago +10

    Thing is, it's hard to deny the power of Docker when you've got a company trying to deploy something and loads of people have to work on test environments. Docker lets you get those up and running; how else would you do it?
    Either you don't do it at all, in which case you maybe have everyone interacting with the same environment spun up by one devops guy, or you basically run the same set of scripts loads of times on everyone's machine, the same set of scripts that you would have just given to Docker to run for you via containerization.

    • @CianMcsweeney
      @CianMcsweeney 10 months ago +2

      His point is that docker is a solution to a problem that shouldn't exist, which I agree with

    • @techforserious60
      @techforserious60 10 months ago +5

      @@CianMcsweeney My point was essentially that Docker is a solution to the problem of having many people needing to work on test environments. As JBlow didn't mention that problem (as I understand from this video), are you then saying that this problem also shouldn't exist?
      If so, how would you go about working with multiple teams who need to try out different things on various copies of the main application? Keep in mind, it's not always just devs using these environments, but testers, BAs, architects, automation scripters, so they won't all have a local dev environment to work with

    • @ZakWhaley
      @ZakWhaley 10 months ago

      Yeah, and personally, for gamedev specifically I prefer to use containers to get *more* specific and explicit (e.g. hacking together simple build, deploy, and test scripts), not to be more abstract or generic like he's suggesting.
      It's much easier to Wild West things like game devs are wont to do when you can easily tear down and recreate something. Plus it helps for reproducibility to ensure you have actually captured what your dependencies are.

    • @evergreen-
      @evergreen- 10 months ago

      Did you watch the video? He literally said that back in the day they just copied the program and it worked everywhere

  • @justadude8716
    @justadude8716 4 months ago

    At work we were blessed with Docker because it let me work on two embedded projects at once, since they needed two different compiler versions

  • @CrazyMineCuber
    @CrazyMineCuber 10 months ago +14

    Looks like Jonathan would love Nix and NixOS.

    • @necraul
      @necraul 10 months ago +28

      Jonathan isn't capable of loving.

    • @echobucket
      @echobucket 10 months ago +7

      Nix is just adding another complex "fix" on top of a broken system.

    • @dirtysmoky
      @dirtysmoky 10 months ago

      @@necraul you're a loser

  • @davidboeger6766
    @davidboeger6766 10 months ago +8

    He has a point but is also missing several important ones. Docker (really, containers in general) is primarily a form of namespacing, which I have to imagine he doesn't have a problem with (surely Jai has namespaces, right?). I guess you can say name conflicts aren't a fundamental problem and we should all just pick unique names, but come on, he has to know that doesn't scale to large teams and industries.
    But in addition to namespacing, containers provide security features like isolation, resource allocation, privileges, etc.
    And sure, you could argue that programs could just implement container orchestration logic internally, stuff like keepalive, service discovery, load balancing, etc., but that's a ton of generic boilerplate that solutions like Kubernetes take care of on behalf of all containers, in a configurable way that is independent of program logic.
    It seems to me like Jonathan Blow would argue that design patterns aren't solutions to fundamental problems at this point. I guess they're not technically necessary to move bits around, but they're elegant ways of solving recurring problems in programming. That's basically what containers and orchestration platforms are: design patterns for services. And just like other design patterns, sometimes they're appropriate, other times they're not. But saying they don't solve anything is a bit silly, given they're clearly better than DLL Hell and rolling your own network layer for every application.

    • @ImaginaryNumb3r
      @ImaginaryNumb3r 10 months ago

      Very well put. As much as I find his takes interesting, he is absolutely within his own bubble and simply doesn't see the problems other industries are facing.
      While his personal experiences are more than valid, he (as is common in our industry) thinks that his problems are the same ones everybody else is facing.
      As such, his premise is already flawed.
      Okay, to be fair, he is aware of other industries, but those would be frontend/web-related. Of course he would think the rest of the world is horrible lol.

    • @perthhi1
      @perthhi1 10 months ago

      "It seems to me like Jonathan Blow would argue that design patterns aren't solutions to fundamental problems at this point."
      Hah, he actually does argue that; see his rant on MVC.

  • @thebaysix
    @thebaysix 10 months ago +7

    I think he has a point, and the proliferation of layers upon layers of code IS a real problem. That said, some people do need to build/run/share code cross-platform and at scale, and for that Docker does the job. As technology moved forward out of the 80s and 90s, the creation of different platforms, OSs, environments, etc. was inevitable. Could we have made different or better decisions in building our platforms? Sure. Is it good to keep in mind the downsides of too much code and lack of compatibility? Yes. But Jon seems to extrapolate from his gamedev world to an alternate reality where everyone is just running Linux, which was simply never going to happen and shouldn't have happened. He's a smart guy and I like hearing his thoughts, but he seems to talk with a lack of nuance on this topic.

    • @monad_tcp
      @monad_tcp 10 months ago +5

      Sharing multiplatform code is better done via the matrix of compilation, not Docker. (A sketch of that follows after this comment.) Docker solves the problems of the stupid amount of bullshit you need, because Linux distros create that problem: every single one of them does something slightly different; it's hell, basically. Linux has basically NO binary compatibility, and that's not even a problem with the kernel, but with the distros.
      One counter-example: every binary made in Go comes with everything (it doesn't even need the "CRT", aka libc6), so it will run on any Linux distribution by just copy-pasting. Go programs rely only on syscalls; they don't use user-mode shared libraries like libc6.
      This is how programs should run.
      Even better, unikernels: the program comes with the kernel needed to operate the environment, minus the drivers, which run on LPAR0, aka the root partition of the machine.
      MirageOS is a good idea.
      I don't think Jon is extrapolating to a world where everyone is running Linux; he even says that.
      There's no way to have code run on multiple platforms without recompiling it or having a virtual machine like Java.
      Docker doesn't even solve that problem, actually, because you can't use a Docker container made for AMD64 on ARM64; you basically can't. Docker doesn't solve this problem.
      If Linux weren't an awful operating system that relied so much on file systems, none of those problems would exist. (Tanenbaum was always right: monolithic kernels were a mistake, microkernels are what operating systems should be; that's why we have hypervisors now. Linux is just a bad system.)
      I don't think he's lacking nuance; he's approaching the problem top-down and holistically, instead of solving technical problems that would never need to exist if the entire engineering of the machine/system had been designed top-down for what we actually use computers for in 2024 instead of how we used them in 1980.
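
      A small sketch of that compilation-matrix idea (my own illustration, assuming Go; the file and binary names are made up): the same source is compiled once per target with GOOS/GOARCH, instead of shipping an amd64-only container image.

          // hello.go: hypothetical cross-compilation "matrix" example.
          // One source tree, one native binary per target, built with e.g.:
          //
          //   GOOS=linux   GOARCH=amd64 go build -o hello-linux-amd64   hello.go
          //   GOOS=linux   GOARCH=arm64 go build -o hello-linux-arm64   hello.go
          //   GOOS=windows GOARCH=amd64 go build -o hello-windows.exe   hello.go
          package main

          import (
              "fmt"
              "runtime"
          )

          func main() {
              // Reports the platform this particular binary was compiled for.
              fmt.Printf("hello from %s/%s\n", runtime.GOOS, runtime.GOARCH)
          }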

    • @thebaysix
      @thebaysix 10 months ago +1

      @@monad_tcp Thanks for the thoughtful reply, and sorry if I misquoted him. The main thing I want to get across is that top-down problem solving like he is talking about is a pipe dream AT AN INDUSTRY SCALE. There are simply going to be different solutions and platforms in a free market. And that's a good thing.
      I do like the idea of solving it at the compilation level though, instead of building entire containerized workspaces that pretend you're in the same environment when you're really not.

    • @gruntaxeman3740
      @gruntaxeman3740 10 months ago

      @@monad_tcp
      "Sharing multiplatform code is better done via the matrix of compilation, not Docker."
      Nope. There are plenty of developers, they have all kinds of systems, and they like to develop locally. Development can be done anywhere and then deployed to the server.
      "Docker solves the problems of the stupid amount of bullshit you need, because Linux distros create that problem: every single one of them does something slightly different; it's hell, basically. Linux has basically NO binary compatibility, and that's not even a problem with the kernel, but with the distros."
      I didn't know that macOS could run the same OS-level C++ binaries as Windows. They are not binary compatible on that level.
      Docker doesn't fix anything in Linux. The idea of Docker is that you don't need a full virtual machine while still getting "VM image" style deployment, like VMware offered a long time ago. However, it standardizes Linux as the base technology for servers. So there was a lost opportunity for VMware here.
      "One counter-example: every binary made in Go comes with everything (it doesn't even need the "CRT", aka libc6), so it will run on any Linux distribution by just copy-pasting. Go programs rely only on syscalls; they don't use user-mode shared libraries like libc6."
      So why not use Go then?
      "There's no way to have code run on multiple platforms without recompiling it or having a virtual machine like Java."
      What is the problem with using a virtual machine then?
      "If Linux weren't an awful operating system that relied so much on file systems, none of those problems would exist."
      The success of Linux is based on the same thing as the success of Unix, which started in the 60s: a very simple architecture that works, not some overengineered, complex crap. When things are overengineered, developers don't like it and find a simpler way.
      Design choices made back in the 60s have proven to be very flexible and have led to an extremely long-lived foundation on which the whole world rests.

  • @Terszel
    @Terszel 10 months ago +4

    "We somehow made a world where you need to do that"
    I can't just copy files from localhost to some cloud machine; it needs auth. I don't want to tie myself to my cloud provider, so I'll use package managers and containers. I need to actually provision the cloud hardware I'm using, because nothing is free, so I'll use Terraform, etc.
    It is more complex, it is a headache and a PITA, but it exists for a reason.

  • @abdullahnadeem1823
    @abdullahnadeem1823 9 months ago

    Jonathan Blow is a legend

  • @rodjenihm
    @rodjenihm 10 months ago +9

    Docker is awesome

  • @chrisalexthomas
    @chrisalexthomas 10 months ago +48

    Jonathan is kind of short-sighted in his opinion about Docker; I'll explain why. He's right that shared libraries are a problem we created ourselves, and there are solutions to it that don't require Docker. But Docker is not just about solving that problem. One of the problems Docker solves is how to package up everything an app needs, including infrastructure and deployment, especially using automation and without having to care what else is installed. Mostly this is aimed at server-like software where you aren't just dropping a binary onto a hard drive, but configuring network access, ports, etc. Operating systems don't really provide a standardised way to do that, so Docker solves those problems. If I have a Docker daemon running, I can pull a repo and `docker compose up`, and it'll make itself available on my machine with a predictable configuration that I can share with everybody, without having to care what that user's local machine is like. This is a lot harder to solve, and it's the problem he doesn't talk about in this video.

    • @DrandilonOriculus
      @DrandilonOriculus 10 months ago +8

      Docker is king

    • @chrisalexthomas
      @chrisalexthomas 10 months ago +5

      @@DrandilonOriculus absolutely, when you need to package up a complex app, work on it with colleagues and avoid the "it works on my machine" drama, it's really the only tool we have

    • @gammalgris2497
      @gammalgris2497 10 months ago +1

      It automates certain processes, but don't ignore the prerequisites that must exist for those processes to work in a meaningful way. Without testing that a piece of software runs in a certain environment with certain versions of its dependencies, the automation is meaningless. In the end, a piece of software will have a defined environment within which it runs properly. The biggest unsolved problem is keeping software in sync with the required environment. An OS changes in small steps. Third-party software/frameworks/libraries change in small steps. So it's best to keep dependencies and tooling to a minimum, and it will save you a significant amount of time. Docker, although useful, does things you could also do otherwise.

    • @lucsalander
      @lucsalander 10 months ago +1

      would you have a good tutorial article about Docker?

    • @tapwater424
      @tapwater424 10 months ago +2

      Listening on ports and minimizing file access in a portable fashion shouldn't actually be a daunting task. We don't have to manually remap memory addresses to make sure programs work together but for some reason we're forced to do this with e.g. ports.

  • @Dexterdevloper
    @Dexterdevloper 10 months ago

    Great, thank you.

  • @Auhuro
    @Auhuro 8 months ago

    Browser?

  • @voltairespuppet
    @voltairespuppet 10 months ago

    You are also likely to have only one game running, and few people would begrudge shutting down most other apps. If you use multiple apps at the same time, you want them to share as many resources as possible; the combined experience is the important part.

  • @monad_tcp
    @monad_tcp 10 months ago +3

    Docker solves the problem that having shared file systems was an awful idea. File systems are literally data structures, and data structures should always be private to the programs that use them.

    • @youtubeenjoyer1743
      @youtubeenjoyer1743 10 months ago +1

      You don't need the whole docker machine just to have a "private filesystem". Docker is a crutch taped to a swiss army knife, unfortunately.

    • @ArthurSchoppenweghauer
      @ArthurSchoppenweghauer 10 months ago +2

      Enjoy fixing os level vulnerabilities in base images and then testing your applications in those containers. I'm sure all of that time spent fixing problems you created by piling more shit on top of your existing stack of garbage is well spent.

    • @guilhermecarvalhotrindade2625
      @guilhermecarvalhotrindade2625 10 months ago +1

      File systems are not only data structures; they are also services. Any general-purpose service (like an OS-level FS) will quickly run into limitations on edge cases for different consumers of the service, because each consumer has different needs and assumptions.

    • @gruntaxeman3740
      @gruntaxeman3740 10 months ago +1

      You don't need Docker to create a private filesystem. Docker solves a different thing.

  • @Otomega1
    @Otomega1 10 months ago +22

    The entire history of programming could be summed up as not migrating older systems to newer systems, out of laziness.
    Docker is a prime example: people would rather have virtualization inception than recognize the system was imperfect from the beginning.

    • @monad_tcp
      @monad_tcp 10 months ago +1

      Yes, multi-user operating systems are imperfect; we need better systems.
      Docker shouldn't be necessary, and operating systems should isolate processes better, like virtual machines do. Maybe having operating systems is an entirely wrong idea to begin with. We should have hardware with hypervisors capable of running multiple programs directly.

    • @monad_tcp
      @monad_tcp 10 months ago +2

      I'm with Blow on this one: games are doing it right by using unikernels (at least the ones that run on Xbox; I don't know how PlayStation runs its software, but Xbox games always run inside a hypervisor with 3 VMs: the game, the HUD/menu/UI [the visual part of the OS, aka the WDM], and a hardware VM responsible for actually driving the hardware, where the drivers live. It's so simple and fast).

    • @Otomega1
      @Otomega1 10 months ago

      @@monad_tcp Thanks for your comment, I'm gonna read about how the Xbox works at the software level.

    • @gruntaxeman3740
      @gruntaxeman3740 10 months ago

      Of course the system is not perfect.
      You also need to realize you can't build anything that depends on a system if that system is not stable. That means we build imperfect systems and don't change them much, so other people can depend on them. And when we do need to change that imperfect system, we add a layer and move code to depend on that layer instead, so we have the freedom to make changes below it.
      I don't mind at all making my code run in Docker. I can then be as lean as possible, and I know I can deploy that container to run almost anywhere. Before Docker, I had to specify the operating system version.

    • @trumpetpunk42
      @trumpetpunk42 10 months ago +1

      @@monad_tcp You're talking about unikernels? MirageOS etc.? Edit: ah, I see you already mentioned unikernels in another comment. I didn't know the Xbox was like this too.

  • @logantcooper6
    @logantcooper6 10 months ago +1

    This guy has never had to deploy software to Windows and Linux at the same time, apparently.

    • @c4llv07e
      @c4llv07e 9 months ago

      Jai works on Windows and Linux at the same time

  • @khangle6872
    @khangle6872 10 months ago +3

    Man, I loved Docker, but blowing it would be a bit much

  • @elieobeid77
    @elieobeid77 10 months ago +57

    Jonathan sees everything from a game developer / C developer perspective. He always fails to see the world outside of that little box.

    • @dzivba
      @dzivba 10 months ago +14

      And always has harsh opinions yet offers no alternative solutions

    • @TheOnlyJura
      @TheOnlyJura 10 months ago

      @@dzivba He offered to ship entire bundles.
      I have been at numerous companies that used Docker all the time;
      I took their code, de-containerized it, and continued development without Docker, without any problems.

    • @joseduarte9823
      @joseduarte9823 10 months ago +3

      Thank you!!! And it’s almost like he’s proud of his narrow view. Dude doesn’t understand that multiple perspectives make him BETTER

    • @monad_tcp
      @monad_tcp 10 months ago

      @@dzivba You don't need alternative solutions to this imagined problem of sharing DLLs and operating systems and docker/namespaces/containers.
      The solution is getting rid of multi-user operating systems and containers altogether.
      He's kind of right on this one. The ideal solution is what every game does: they all use unikernels, that is, they come with the kernel responsible for managing only the hardware, and everything else is done inside the code of the application.
      But what about sharing of resources? You don't need operating systems for that; the IBM S/360 could run multiple programs on the same machine through this amazing invention: hypervisors!
      Basically his harsh opinion is the correct one, and the solution is something like MirageOS (or maybe Qubes).
      We live in a world with so many cores and CPUs that we could literally dedicate entire CPUs to specific software; that would make things so much simpler.
      I'm with him on this hill.
      You guys are the ones failing to see outside the box of how computers/desktops were in the 1990s and to think instead about other ways of doing computing; that's what computers are for.

    • @youtubeenjoyer1743
      @youtubeenjoyer1743 10 months ago +13

      @@dzivba The alternative is not having 9000 dynamic dependencies for every single program.

  • @ske2004
    @ske2004 7 months ago

    I worked with Docker at a job. Even though he's right that it's accidental complexity, there's absolutely no way people will be writing a custom distributed server system for their applications. The reason for Docker's existence is simple: servers started with an OS, but to distribute the jobs you now need multiple servers. So Docker does that by emulating what such servers would do, and indeed the only way to do it reliably is by having a bunch of virtual machines.
    I can see where he's coming from, but it's an out-of-touch take. I think we should improve software. However, it's too pessimistic to deny the usefulness of Docker in the current times.

  • @mx338
    @mx338 9 months ago

    Containers are a great technology; the problem is people who just run Docker containers without knowing how they work.

  • @joewhiteakeriii5568
    @joewhiteakeriii5568 10 months ago +3

    This hot take is dead ass wrong and is not helpful. Different apps in different languages and frameworks being managed and deployed in a consistent way as a container is not at all stacking bad ideas and isn’t the result of any human error.

  • @GarrethandPipa
    @GarrethandPipa 10 months ago +1

    That's bullshit. I have been programming at least as long as Jon. Computers were shit, and WE cared as much about speed as the graphics guys did, because it was a necessity. It shifted with the WYSIWYG IDE, where speed of development became the one and only concern, to this day to the detriment of software as a whole. What was once 6 months to create a UI could be done in a day. Why write a library when you can just use someone else's code? The crap they teach in college today is 80% nothing-burger and 20% programming.

  • @Gruak7
    @Gruak7 10 months ago

    Nix is the solution, the holy grail.

    • @therealvbw
      @therealvbw 10 months ago

      It solves a load of problems I don't have, coming from stable Debian

  • @oscarhagman8247
    @oscarhagman8247 10 months ago +1

    spent like 2 seconds talking about docker lol

  • @liquidsnake6879
    @liquidsnake6879 9 months ago

    How did we create the problem of unnecessarily shared concerns mixed together in a shared system that doesn't need to be shared at all? It's not just DLLs: if I only have one service using Java or whatnot, why the f*** does my entire server need to have it installed and become vulnerable through it? Why would that even cross your mind? And if I decide to move hardware or cloud provider, I need to manually install all that crap again, instead of running literally 3 commands to provision the entire thing automatically? That mindset is why games come out completely broken and require gigabytes of patching to run at least half-decently, and you're mumbling about precision...

  • @goody8321
    @goody8321 10 months ago +2

    "We are building a stack of problems": I hate this take. It's as if we were all dumb and lazy people who had some special problem with those who "warned" us and just made our day worse! There are reasons why we made these bad designs, and we are constantly trying to fix them and come up with new ideas. Cut the bullshit and just learn and contribute.

    • @goody8321
      @goody8321 10 months ago +1

      Yeah, a big part of programming right now is interoperating with others! That's a fact, something we were inevitably going to face. Look at cells and their environments: it's everywhere!

    • @goody8321
      @goody8321 10 months ago +1

      And yeah, to finish on this: some other programming cultures think a different way, because the worst mistake we made was to name it "programming" in the first place, like it was a skill in itself, when it was really the doorway to digital design with all its different needs. So yeah, we think in different ways, and there need to be some bridges.
      By the way, I'm testing Nix; you can call me dumb later.

  • @anasouardini
    @anasouardini 10 months ago

    I hope this guy never studies anything lower-level than he does currently, or else he'll be roasting the entire tech industry!
    Everything sucks and then we move on. I can find the "bad" in almost every field of expertise; defects just exist, they can't not exist. The world is changing in a parabolic way.

  • @bobby9568
    @bobby9568 10 months ago +11

    The world doesn't care about smart people, Jonaboy

  • @vitiok78
    @vitiok78 10 months ago

    Evolution itself is a stack of bad ideas. But it works and works well.

  • @4.0.4
    @4.0.4 9 months ago

    Docker only exists because of Python being such a mess. It solved the "works on my machine" broken nature of Python.

    • @trex511ft
      @trex511ft 2 months ago

      That's a shame; my brief contact with Python was positive. Way more friendly than JavaScript, for instance.

    • @4.0.4
      @4.0.4 2 months ago +1

      @@trex511ft I'm trying to learn it right now. It's kinda painful how fragile everything is: like, you need exactly Python 3.10 and not 3.9 or 3.11 or everything breaks, needing Conda, and then Conda throws some weird errors or is slow as hell to "solve environment".
      By comparison, Deno and Bun sound like heaven to work with...

  • @surfingbilly9654
    @surfingbilly9654 10 months ago +4

    Has this guy ever said he likes something?

    • @meanmole3212
      @meanmole3212 10 months ago +5

      Of course...
      chai latte

    • @DLGWare
      @DLGWare 10 months ago +5

      Don't be ridiculous... he also said he liked a water filter he bought on Amazon

    • @meanmole3212
      @meanmole3212 10 months ago +2

      @@DLGWare I forgot! But that is it.

    • @Jonathang5730
      @Jonathang5730 10 months ago +2

      The smell of his own farts ...

  • @TheLostDriver
    @TheLostDriver 10 months ago +2

    Why does this Jon Blow guy act like he's smart? Anyone who tries to talk and act like they're smart is probably just non-social and spends all their time learning.

  • @MrSwac31
    @MrSwac31 9 months ago

    Shit take, as always

  • @Wurstfinger-rl1zi
    @Wurstfinger-rl1zi 10 months ago

    I feel like the kind of deployment he's talking about works perfectly fine for simple cases, but in the time that's passed since then, a lot of complexity has been added to our tech stacks because it's necessary, given how much more applications have to deliver compared to the early days. Comparing applications nowadays to applications from the 80s just ignores the fact that all of us are standing on the shoulders of giants to be able to make applications that can withstand modern challenges.