"Stop Writing Dead Programs" by Jack Rusher (Strange Loop 2022)

  • Published on 24 Dec 2024

Comments •

  • @marinoceccotti9155
    @marinoceccotti9155 2 ปีที่แล้ว +437

    When you realize computer science has become a culture, with even an archaeology department...

    • @DogeMultiverse
      @DogeMultiverse 2 ปีที่แล้ว +30

      In 50 years, people will look back at our time and laugh, the way we laugh at Fortran.

    • @TheAlison1456
      @TheAlison1456 2 ปีที่แล้ว +11

      professions are cultures

    • @almari3954
      @almari3954 ปีที่แล้ว

      ​@@DogeMultiverse No, totally disagree. We're still digging ourselves into a hole. We first need to get out of it. Watch: "The Mess We're In" by Joe Armstrong

  • @artdehls9100
    @artdehls9100 2 ปีที่แล้ว +1076

    I remember reading this story about a company that was trying to send a program to a customer in France. Trying, because every time they did, it would fail to run on the customer's hardware. Finally they sent someone with a case that contained the program to sort things out. When he went through customs he dutifully declared the program as an imported product, whereupon the customs official pulled a few cards out as a required "sample" of the imported product. Oh joy.

    • @kurax1
      @kurax1 2 ปีที่แล้ว +103

      You can't really know if the food tastes good until you have some. That's what went through the customs officer's mind, I think.

    • @PMA65537
      @PMA65537 2 ปีที่แล้ว +62

      I had a program to run remotely on lots of servers and something in the shell and terminal setup was eating a few of my characters. I added multiple lines of #### for a NOP slide to overcome that.

    • @totheknee
      @totheknee 2 ปีที่แล้ว +25

      This is awesome. They could put throwaway code onto a few cards, like some superfluous OOP or Rust borrow-checker border patrol or whatever they call it, and the rest of the program could still run in France.

    • @christiansitzman5601
      @christiansitzman5601 2 ปีที่แล้ว +80

      My blood started to boil just reading this.

    • @jesustyronechrist2330
      @jesustyronechrist2330 2 ปีที่แล้ว +47

      Actually kinda funny parallel to Docker and the whole "ship the whole machine that the code works on" meme

  • @pkphilips2
    @pkphilips2 2 ปีที่แล้ว +192

    This guy speaks so fast... basically about 5 presentations in the time for 1, but somehow he is completely understandable and keeps the attention of the audience!

    • @judaronen
      @judaronen ปีที่แล้ว +3

      Watched it 2x… 😛

  • @ALaModePi
    @ALaModePi 2 ปีที่แล้ว +392

    I was almost through this entire lecture when I realized that all these issues sound like "when you're a hammer, everything looks like a nail." We were trained by a thousand editors and programming languages to approach problems in a particular way instead of asking, "What is the best tool to approach the type of problem I'm working on?" Thanks for showing some really good tools and challenging us to make tools that are equally good for working with certain types of problems and data sets.

    • @57thorns
      @57thorns 2 ปีที่แล้ว +14

      But it also triggers my silver-bullet detector.
      While I agree C++ is a bloody mess, you can still write reliable real-time programs in it.
      Of course, you can't use dynamic memory allocation (apart from the stack used for function calls) and you have to be careful about which standard libraries you use.
      And C++ is a pain syntactically.
      I wonder how Python works in real-time systems with digital and analog inputs?

    • @BillClinton228
      @BillClinton228 2 ปีที่แล้ว +17

      "The best tool for the job" largely depends solely on what the most senior programmer in the company is familiar with. It rarely has anything to do with tech and more to do with politics. These guys have usually been with the company since the beginning and the executives know him and trust him, so he has carte blanche to do as he pleases, so if he thinks the best tool for the job is Cobol or Delphi then that's exactly what will be used as long as it delivers software that makes money for the company.
      Sorry to burst your tech-utopia bubble, but politics and profits are way more important than the "tools". If management agrees that the latest and greatest tech needs to be used to write good software, then that's what will happen; if they agree that the legacy code is working fine and doesn't need to be rewritten with the latest tools, then sorry for the 20-year-old junior intern, but you will need to learn the ancient tech to work there, and it will look terrible on your CV, but that's just how it is.

    • @ZahrDalsk
      @ZahrDalsk 2 ปีที่แล้ว +4

      @@57thorns
      >And C++ is a pain syntactically.
      I love C++'s syntax, personally. It just feels natural and easy to understand.

    • @ridespirals
      @ridespirals 2 ปีที่แล้ว +6

      I'm a big fan of the idea of "the right tool for the job"; I hate when people try to force solutions into a system just to reduce the total number of systems/languages in use. My current company does that: everything in JavaScript, even when other frameworks or languages would be better.

    • @robertwilson3866
      @robertwilson3866 ปีที่แล้ว

      You can do what he talks about in the video really quickly by just asking ChatGPT.

  • @itsdavidmora
    @itsdavidmora ปีที่แล้ว +6

    For those wondering, the title is likely a reference to Bret Victor's 2013 "Stop Drawing Dead Fish" talk.

  • @Tiddo1000
    @Tiddo1000 2 ปีที่แล้ว +350

    I think the biggest problem with all visual examples is that they work great for data-science or theoretical algorithms, but far less for your run-of-the-mill "corporate programming" such as (web)services. When building services, almost all of the programming is about creating a model of the real world, and not so much about visualizing and transforming data. All those examples of graphs, tables, flows etc. work really well for data-science (hence things like Jupyter are so popular there), but they don't generalize to domain modeling very well. I would absolutely love to have some sort of interactive and visual environment to build and maintain domain models, but I've yet to come across anything like that.

    • @ABuffSeagull
      @ABuffSeagull 2 ปีที่แล้ว +6

      I feel like Dark Lang is pretty close to what you're describing, and it seems really cool, but I'm not quite ready to have so little ownership of the tech stack

    • @lkedves
      @lkedves 2 ปีที่แล้ว +21

      Then it may please you that _informatics started with such tools,_ like the Sketchpad from Ivan Sutherland (but it's better to learn about it from Alan Kay because the original demos don't really explain the difference between "before" and "after") or the NLS from Douglas Engelbart (look up the Mother of All Demos, pay some attention to the date or the hint at the end that ARPANet "will start next year"...) Unfortunately, Engelbart's Augmenting Human Intellect Report is a very hard read, the whole field lost the point and the result is what we have today.

    • @alex987alex987
      @alex987alex987 2 ปีที่แล้ว +12

      And not for the lack of trying. I've watched or read pretty much this talk at least five times in the last 30 years.

    • @lkedves
      @lkedves 2 ปีที่แล้ว +10

      Results like: we have the ultimate communication infrastructure, but people feel no pain when they
      - limit themselves to a single bit, "Like", and think that any number of likes can ever be worth a single statement;
      - repeat the same statements picked up here and there without processing them, and pretend that this is the same as a dialog;
      - rip off and descope the "Stop Drawing Dead Fish" lecture (Bret Victor, 2013) in 2022. It's not about coding and punch cards but about our very relationship with information systems (in machines, libraries, human communities and within our own brains).
      _"Why do my eyes hurt? You have never used them before."_ (The Matrix, 1999)

    • @RaZziaN1
      @RaZziaN1 2 ปีที่แล้ว +2

      Domain modelling is a bunch of graphs: CQRS, DDD and so on. It's all just processes and workflows.

  • @JackRusher
    @JackRusher 2 ปีที่แล้ว +30

    @11:06 rank polymorphism; I misspoke in the heat of the moment.

  • @AdrianBoyko
    @AdrianBoyko 2 ปีที่แล้ว +167

    My high school had Apple ][s and UCSD Pascal but the teacher didn’t want to learn a new language so we had to do Fortran on punched cards, instead. The cards would go to a university about 30 minutes away but the results took a week to come back.

    • @johnmoss4624
      @johnmoss4624 2 ปีที่แล้ว +11

      A week. wow!

    • @JackRusher
      @JackRusher 2 ปีที่แล้ว +2

      😱

    • @KaiHenningsen
      @KaiHenningsen 2 ปีที่แล้ว +14

      And then you learn that there was a FORTRAN available for the ][s UCSD system and weep.
      I once wrote a punched card Pascal program (for a uni course before terminals became available for those) by first developing in UCSD, then going to the card punch with the resultant listing. (I'm not sure, it might have been the 7 billionth implementation of Life.)

    • @TheAntoine191
      @TheAntoine191 2 ปีที่แล้ว +9

      @@KaiHenningsen Also, people often hate on Fortran because they had to use the 78 version and its practices. Modern Fortran is OK in my opinion.

    • @username4699
      @username4699 2 ปีที่แล้ว +1

      @@TheAntoine191 I think deeming it "OK" is valid for those who still must maintain programs in it, but there are still too many leftover - or even new - oddities that prevent it from being used in the ways that C is still useful. Some of these being: if you want an array of pointers to some data type, you have to use a structure; the lack of a true way to define/typedef custom data types; the intense duplication and verbosity required when declaring methods on classes; the syntax for declaring subroutine/function arguments; and the lack of a literal syntax for nested data structures (like assigning a value to an array that exists as a field inside of a structure, all at once). However, other old, largely forgotten languages like Ada, Modula-2/3 and modern variants of Pascal (Free Pascal and Delphi), certainly do have many redeeming qualities and are still very usable to this day, sometimes more so than mainstream/popular solutions even, Ada being the biggest tragedy out of the ones mentioned, in my opinion.

  • @michaelgfotiades
    @michaelgfotiades 2 ปีที่แล้ว +125

    My first programming class used punched cards running FORTRAN on a Sperry/Rand UNIVAC computer (IBM 360 clone). As a consultant over the subsequent decades I would carry a little history kit to show the newbies - some punched cards, a coding pad (80 columns!), 9-track tape, 8" floppies, and a little bag of coal as a sample of what we had to keep shoveling into the back of the computer to keep up a good head of steam. As my friend called it - "The age of iron programmers and wooden computers."

    • @r0cketplumber
      @r0cketplumber 2 ปีที่แล้ว +18

      You had coal? We had to scavenge for firewood.

    • @phinhager6509
      @phinhager6509 2 ปีที่แล้ว +14

      @Eleanor Bartle not in computer labs.

  • @hlprmnky
    @hlprmnky 2 ปีที่แล้ว +19

    It isn’t every day I see a conference talk that reminds me why I want to work on being a better programmer. Thank you.

    • @kentbull
      @kentbull ปีที่แล้ว

      agreed

  • @TheMrKeksLp
    @TheMrKeksLp 2 ปีที่แล้ว +134

    I have to say some things about this talk really irked me, like the implication that APL has superior syntax because, for this very specific use case, it happens to be quite readable and more terse than the alternatives.
    Most choices are a compromise one way or the other. Compiled languages might be "dead programs", but that's the price you pay for function inlining, aggressive code optimization, clever register allocation, known static stack layout and so on. That's why compiled languages are fast and static rather than slow and dynamic. It's all a trade-off.
    In fact, just yesterday I had an idea for code hot-reloading in Rust. One limitation that immediately came to mind is that every control flow that crosses the module border would have to use dynamic dispatch, mostly preventing any meaningful optimization between the two.

    • @Nesetalis
      @Nesetalis 2 ปีที่แล้ว +43

      Yeah, this exact exchange is what I was thinking about while listening to him. Compiling isn't a bad thing, it's an optimization. I use Python for rapid prototyping, for instance, but when I'm done playing and ready to do some work, I write my final version in C++, because it's fast. Yes, I've spent days compiling libraries before, but once they were compiled, I didn't have to worry about them, didn't have to wait for my computer to chug and choke on the complex human-readable parsing. Computers are not humans; don't feed them human.
      This whole mentality is an offshoot of the "just throw more hardware at it" camp, one I find regrettable.

    • @jrdougan
      @jrdougan 2 ปีที่แล้ว +54

      @@Nesetalis The problem is that most languages don't have both an optimized and unoptimized (introspectable) version. I want to be able to do both without changing language. I expect he does as well.

    • @duncanw9901
      @duncanw9901 2 ปีที่แล้ว +5

      @@jrdougan Then use Haskell 😈
      (admittedly, GHCi is nowhere near LISP levels of interactivity. But, it's better than nothing)

    • @gamekiller0123
      @gamekiller0123 2 ปีที่แล้ว +11

      @@jrdougan I don't think that would be enough for him. It seems like he wants introspection in production. I don't see how this is possible without making some major tradeoffs, like globally turning off optimizations or annotating the things that can be introspected.
      In fact it seems like he even wants to make the code modifiable at runtime (not necessarily the production code, though).

    • @janekschleicher9661
      @janekschleicher9661 2 ปีที่แล้ว +3

      @@gamekiller0123 I mean, why not? Basically we're already doing it, just in a slow way. In bigger projects you usually don't just deploy and overwrite your previous version: you deploy it, let it run through the staging/production pipeline, make it available first via an internal route for the programmers and the integration-testing pipeline, then canary it to a small part of the users and monitor it. If nothing fails, you make it available to a significant part of the users (routing them to the new version while still keeping the old one), and if monitoring shows nothing wrong, you make it the default, stop serving the previous version, and finally, some time later, make a deployment to get rid of the deprecated functionality.
      So the effect is that we are changing the runtime without really switching it off (if we regard the executed distributed environment as one unit of execution). But the whole process is slow (we are talking about hours to see the first changes and days until everything is finished -> very punch-card-like) and hard to debug and monitor (even with tools like distributed tracing or Kafka or whatever).
      There wouldn't be anything wrong or scarier if the programming model just allowed these changes to be made directly in the runtime (probably still keeping different versions) instead of doing it at the microservice level with the help of container runtimes, routing services and complicated tools for introspection. Just doing what the language should do for us ends up involving Docker, Kubernetes, API gateways, Prometheus, DataDog, Kafka, a CI/CD pipeline, and many things I've probably missed on the fly. In the end, most companies are in high demand for DevOps engineers to optimize this process (-> punch-card operators are back) because the complexity is too high to really expect the programmers to handle it while they are trying to solve a completely different problem (the business case).

  • @davedouglass438
    @davedouglass438 2 ปีที่แล้ว +45

    One of my more unmistakable descents into IT Madness:
    At Conrail, I had to write out my COBOL programs on 14-inch green-and-white coding sheets, and send them over to the 029 experts in the Punchcard Department.
    Next day, when they'd dropped the code into my Shared Storage, it would contain so many errors that I had to spend an hour fixing it...
    So I took to typing my code directly into Shared Storage, using my handy-dandy SPF Editor...
    and was REPRIMANDED for wasting my Valuable Professional Computer-Programmer Time.

    • @MarcTompkins
      @MarcTompkins ปีที่แล้ว

      SPF Editor! Now, _THAT_ brings back memories.

  • @SamGarcia
    @SamGarcia 2 ปีที่แล้ว +326

    As a sort of self taught programmer, now I understand the purpose of notebooks. Thank you for that.

    • @pleonexia4772
      @pleonexia4772 2 ปีที่แล้ว +6

      Can you explain for me please

    • @DanH-lx1jj
      @DanH-lx1jj 2 ปีที่แล้ว +57

      @@pleonexia4772 You load the large dataset once and edit/rerun the code on it over and over instead of reloading the dataset every time you want to make a change to the code.
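      A rough sketch of that workflow in notebook-cell form (pandas is the usual tool here; the file name and columns are invented for illustration):

          # Cell 1 -- run once; loading the data is the slow part.
          import pandas as pd
          df = pd.read_csv("events.csv")   # hypothetical dataset

          # Cell 2 -- edit and re-run as often as you like; df stays in memory.
          summary = df.groupby("user_id")["amount"].sum()
          print(summary.head())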

    • @wumi2419
      @wumi2419 2 ปีที่แล้ว

      @@DanH-lx1jj And then you still rerun everything if you ran the cells in the wrong order at some point.

    • @foobarbecue
      @foobarbecue 2 ปีที่แล้ว +21

      Make sure you also get familiar with breakpoint debugging and stepping through running code. Absolutely essential for a self-taught programmer in the "popular" languages.

    • @auntiecarol
      @auntiecarol 2 ปีที่แล้ว +14

      @@pleonexia4772 look up Don Knuth and literate programming. Pretty common in Emacs circles to write executable code in blocks in org-mode (a kind of "markdown"), a precursor of these notebooks.

  • @ianglenn2821
    @ianglenn2821 2 ปีที่แล้ว +210

    From 18:00 to 19:35 is such a good sequence haha, I finally understand VI keybindings

    • @gargleblasta
      @gargleblasta 2 ปีที่แล้ว +8

      It was a revelation...

    • @eugenetswong
      @eugenetswong 2 ปีที่แล้ว +4

      Yeah, it makes VI look logical. When I first saw VI, I could never understand how people accomplished anything, but my boss [i.e.: my uncle] kept pressuring me to use it.

    • @tinkerwithstuff
      @tinkerwithstuff 2 ปีที่แล้ว +12

      @@eugenetswong But the fact that a subculture of people has been using, for decades, roughly IBM-compatible keyboards with editor software that's totally mismatched to them is kinda hilarious.

    • @monad_tcp
      @monad_tcp 2 ปีที่แล้ว +12

      @@tinkerwithstuff It really is. I started learning computers when I was 8 on DOS 6.22, and edit_com just felt natural for the IBM PC keyboard.
      When I came to the Unix world, their stupid editors always felt "wrong", anachronistic.
      Why can't I have edit_com? Every sane editor I ever used on PCs with Windows or OS/2 Warp was like that. (And yes, I installed OS/2 Warp when I was 10 on my PC.)
      Linux/Unix always felt like going into the past, to a museum.
      That can't be true; why would anyone ever want to use vi/vim?
      Emacs at least made sense: you call everything with "command", which is `ctrl`, like every modern keyboard shortcut in any GUI program like qbasic or edit_com or msword.
      Then I found nano; well, that solves the problem.
      But the more I studied Unix/C, the more I felt like I was in a museum. Like, why? Why must I program my supercomputer x86 from 2007 like a freaking PDP-11?
      Don't get me started on how brain-damaged writing shell scripts is. I HATE IT. Why can't you "unixy/linuxy" guys just use Perl or Python?
      And the peak of my Unix journey was autotools. FSCK IT!
      No, I'd had enough. Even CMake is better than that, even ".bat" and "nmake". I'll never, ever, ever use it; just reading the docs gives me headaches. Why, why do you have 3 abstraction levels of text generation? It's absurd; it's literally easier to write the commands manually (in `nano`) and ctrl-c ctrl-v them to get the freaking binary.
      And when I'm choosing libraries for C++, I choose NOT to use any that only provide a build script for autotools.
      Let's also ignore how all code carrying the "GNU" label is basically horribly written from a 2010 perspective, and I've read a lot, A LOT of C/C++ code. It's just amateur code, not professional, by modern standards. It baffles me that people think they are good.
      If it's from a GNU project, the code is basically a bodge. An example is "screen": not only is the code really bad, the user interface of the tool is really, really bad, like a circular saw plugged into an angle grinder that hangs from the ceiling by its cable; no wonder you keep losing your arms.
      And those horrible, horrible things are worshipped as if they were the holy grail of the Church of C, or should I say the Church of the PDP-11. I understand the historical importance of such things, but they are super anachronistic; it's like driving a Ford Model T day to day: it's not good, it was good for its time, but I prefer my modern 2019 Peugeot.
      I wanted to do computing, not archaeology of old computing systems. That's what Unix always felt like.
      I like knowing it and experimenting with it, but I don't want to use it in my day-to-day job; but is there any other option?

    • @achtsekundenfurz7876
      @achtsekundenfurz7876 2 ปีที่แล้ว +16

      The one thing I don't get is his hate on "fixed width", though.
      Whenever I program in a new environment that uses proportional fonts, I switch to something with fixed width, because without it, numbers don't line up any more. A 1 takes less screen space than a 2 without fixed width, and the code looks ugly. Even worse if you depend on white space, like Python...

  • @red-switch
    @red-switch 2 ปีที่แล้ว +8

    I loved this talk, but I don't know why the speaker sounds as though typing is somehow a waste of time or insignificant. Most web devs use TypeScript or Babel because otherwise you wouldn't catch a lot of errors while writing the program.
    Type checking has nothing to do with making the programming experience interactive, and in fact it would aid it.

  • @vitasartemiev
    @vitasartemiev 2 ปีที่แล้ว +27

    Every popular language without static types eventually gets static type support, but worse than if it had had it from the start. Have you tried debugging 50 TLOC+ Python codebases without type annotations? It's infuriating. Type systems are a must. They don't need to be rigid or obtuse, but there has to be some mechanism for the programmer to know at a glance what to expect.
    Also, "build buggy approximations first" is objectively wrong. Everybody knows that managers generally don't allocate time for bugfixes and refactoring. If you teach all programmers to write buggy approximations, you're going to have to live with code that is 70% buggy approximations. Maybe he's talking about TDD like that, but it comes off wrong.
    Also, I don't understand why he says debuggability is mutually exclusive with correctness - it's not... Yes, interactive code is cool, but correct, readable interactive code where all type-driven possibilities are evident at a glance is 10x cooler.
    Also, Rust has a REPL mode. A lot of compiled languages do. Isn't that exactly what he wants?
    Also, what does he mean by debugging code in production? I really wish he'd elaborate on that.
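    A minimal sketch of the kind of annotations meant here, in Python (the names are invented for illustration); a checker such as mypy can verify them without running the code:

        from dataclasses import dataclass

        @dataclass
        class Order:
            order_id: int
            total_cents: int

        def total_revenue(orders: list[Order]) -> int:
            # The signature alone tells the reader what to pass in and what comes back.
            return sum(o.total_cents for o in orders)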

    • @janisir4529
      @janisir4529 2 ปีที่แล้ว +2

      That's an issue with managers, not with coding methodology. Not that I agree much with what he says in this talk, but I've heard some horror stories about managers.
      And I suppose debugging in production means attaching a debugger to the currently running server, or to the client on the customer's machine?

    • @YDV669
      @YDV669 2 ปีที่แล้ว +3

      Debugging code in production is where you buy a system that promises it because when a program crashes, it just falls back to the interpreter prompt so you can look at all your variables and code, and then you write an entire point-of-sale system in said system and deploy it to 100 stores, only to discover that you can't dial into the stores to connect to the crashed system because they have just one phone line and they need that for the credit card machines.

  • @coderedart
    @coderedart 2 ปีที่แล้ว +160

    I do agree that having runtime reflection is a great thing, so that we can look at the environment/state over time.
    But I hard disagree with most of the other points in this talk.
    1. Comparing C / C++ / Rust / Zig with Lisp / Clojure etc. is just plain wrong.
    Anyone can see that these languages are targeted at different use cases. They are manually memory-managed low-level languages for precise control and peak performance, to extract everything out of the hardware. Literally just a step above assembly.
    2. This talk conveniently skips over things like garbage collection (and performance in general), except for a reference to a tweet about devs being okay with stop-the-world compile times but not stop-the-world garbage collection. Games or apps sensitive to latency (real-time music/video editing, trading, etc.) just cannot afford that garbage-collection pause, no matter what. But devs can very much afford that compile time.
    3. Saying Rust and other ML-family languages don't improve software is also debatable. Rust's type system turns runtime errors into compile-time errors, making the software more reliable. Rust is in trial mode in the Linux kernel because it provides a proper safe type system that C doesn't.
    Most of the talk is about debugging, viewing runtime state and live interactive coding, which is more about the tooling surrounding the language than about the language itself. We definitely need better tooling, and many projects shown in the video are good examples of what might be possible in the future. For anyone interested, I recommend the talk about the Dion format at dion.systems/blog_0001_hms2020.html, which also talks about editing syntax trees in a custom IDE instead of a text language with syntax.
    Rust has been getting decent tooling to improve developer experience. github.com/jakobhellermann/bevy-inspector-egui, for example, shows all the game state live AND allows you to modify it. There's github.com/tokio-rs/console for async server apps to provide a look into the runtime state. You can always add a scripting language to your application (like Lua) and query any state you want. There are other initiatives like LunarG's GfxReconstruct, which will dump all the Vulkan state so that the developer can reproduce the GUI/graphics state exactly on his machine by receiving the Vulkan dump from the user. People are working on lots of cool ideas to help with debugging state machines.
    Although I think a custom Rust-specific IDE would go a long, long way.

    • @IamusTheFox
      @IamusTheFox 2 ปีที่แล้ว +51

      Not a Rust guy, but Rust is a great example of how he missed the point of static typing. It's feedback at compile time. Runtime errors are errors caught by the end user if you are not careful.

    • @dixaba
      @dixaba 2 ปีที่แล้ว +50

      All that "types slow devs down" talk sounds like dynamically typed languages are better. Maybe they are... until your fancy no-explicit-types JS or Python or whatever app crashes in the middle of its logic because, for example, you forgot to parse a string into a number. Languages with static types (even ones as limited as C) just won't allow you to run such nonsense at all. Types are helpful for reliability; TypeScript, Python typing, etc. confirm this. Better to slow down 1 developer team than to have 1000 customers with "oh no, I made an oopsie" crashes.

    • @fat4eyes
      @fat4eyes 2 ปีที่แล้ว +37

      Thank you. Way too many people who don't actually work on real systems completely ignore performance and maintainability and focus way too much on writing code 'quickly'.

    • @tajkris
      @tajkris 2 ปีที่แล้ว +10

      Ad 1. And how does being a low-level, manually memory-managed language built for peak performance stop you from having nice things like runtime-modifiable code and introspection into a live system? Those are orthogonal concepts and they aren't mutually exclusive. The C++ approach is 'pay only for what you use', but there doesn't seem to be much to 'buy' when you actually would like to pay for those niceties.
      Ad 2. It's not that devs can afford the compile time, it's that they have to in some languages. E.g. you can run Haskell or OCaml in an interactive shell while developing, but compile to get better performance for release. JIT compilers exist for various languages, so it's not like you cannot have a runtime-modifiable system that performs well. C# has a garbage collector, but you can use manual memory management to some extent when you really need to (high perf or interoperability with C/C++). It's an attitude problem: the designers of the language(s) decided that it's not of enough value. The point of this talk as I see it is to highlight the value of the things presented and get language designers to think about such use cases.
      Ad 3. This is only an improvement in an environment with forced compile/run cycles. You type something, launch the compiler (or your IDE launches it in the background), wait between 0.5 s and 60 minutes for it to compile, and you get an error about a wrong type. You fix it, compile again, run it, and spend between a second and a minute verifying that it works as expected (i.e. ruling out problems that weren't caught by the type system).
      Now compare it to: you type something while your program is running, you see clearly incorrect results on screen, and on top of that you get an error. You modify the code while the system is still running and you see correct results on screen.
      IMO the second workflow is much better and more efficient.
      Also, look at TypeScript or Python - you can rapidly prototype your code omitting the types, or add type annotations for additional safety.
      TL;DR: compiled/statically typed vs interpreted/dynamically typed - you could have both and achieve high efficiency in development as well as high performance at runtime; there's no need to limit yourself.

    • @janisir4529
      @janisir4529 2 ปีที่แล้ว +5

      Those high level tools look so fragile, they'd never make back the time invested into them.

  • @soppaism
    @soppaism 2 ปีที่แล้ว +35

    It's all spot on. Optimally, we would spend all of our time in solving the actual problem at hand, instead of spending most of it fighting the details that emerge from our choice of tools/solutions.

  • @curls6778
    @curls6778 2 ปีที่แล้ว +111

    Now I wish there were a part 2 of this talk that went into more detail about modern language options that tackle these issues. A lot of the options mentioned seem near impossible to set up in a dev environment because the tooling is so outdated that I have to spend more time getting the environment to work than even thinking about programming in it.
    It especially seems like there are no options whatsoever when it comes to hard real-time applications like audio synthesis.

    • @sid6645
      @sid6645 2 ปีที่แล้ว +7

      Yeah, it's a peek at the future, if people decide to pick it up. I hope it comes to fruition, because bringing coders closer to their code will only make it easier to see what actually goes on, past the abstraction of language syntax, semantics and language-specific quirks.

    • @JackRusher
      @JackRusher 2 ปีที่แล้ว

      @Curls Check out this talk: th-cam.com/video/yY1FSsUV-8c/w-d-xo.html

    • @JamesGroom
      @JamesGroom 2 ปีที่แล้ว +4

      I still haven't used it myself but you might be interested in Sonic Pi

    • @hmd-unpo
      @hmd-unpo 2 ปีที่แล้ว +5

      Supercollider is an advanced audio synthesis tool. Faust is a nice DSL for audio.

    • @thewafflemancer
      @thewafflemancer 2 ปีที่แล้ว +3

      Yeah SuperCollider, TidalCycles, Max MSP and PureData are great examples of this

  • @jonstarritt848
    @jonstarritt848 2 ปีที่แล้ว +18

    The history of the vi arrow keys and the home/~ connection blew my mind! Now it's time to go down the Unix history rabbit hole.

  • @netchkin
    @netchkin 2 ปีที่แล้ว +30

    This talk is so engaging it made me spontaneously clap along with the audience while watching it at home.

  • @MrSaemichlaus
    @MrSaemichlaus 2 ปีที่แล้ว +10

    It's very hard to carve a statue with a can opener. Selecting the right tool is key to success. But then most people also have an employee mindset; they are not toolmakers. It's good to see what other methodologies are out there in order to set the right expectations in the users of programming environments and languages.

  • @chat-gpt-bot
    @chat-gpt-bot 2 ปีที่แล้ว +42

    The idea of visual programming was not ignored; it has been tried over and over and has failed in many ways. The keyboard remains the best input device available, and programming languages are structured around that, for program input and algorithm expression. The visual cortex can process much more, but the human mind cannot express ideas faster than it can speak them or type them.
    What we need are not non-text languages; we need better code visualization tools that take existing text code and annotate it in an easy-to-understand visual format. The entire compiled-artifact diatribe becomes irrelevant if the programming environment has an Edit&Continue feature that recompiles a minimal amount of text code, then relinks and reloads the parts affected, so you can continue debugging from the same state, or from some saved intermediary state before the bug manifested.

    • @Waitwhat469
      @Waitwhat469 2 ปีที่แล้ว +4

      The Edit&Continue bit was exactly what came to mind for me when he mentioned that as well. A cool example of a large program that needs to not fail while doing this is the Linux kernel when live patching is used!

  • @ekted
    @ekted 2 ปีที่แล้ว +109

    It's not just the code itself that can have a lot of "this isn't part of the actual problem" problems. All of the "technical bureaucracy" (certificates, hosting, provisioning, deploying, releasing, building, source control, branches, pull requests, code reviews, unit/integration tests) contributes in a big way to stuff not part of the actual problem. In addition, "corporate bureaucracy" (development process, useless roles, incompetence, corruption) is a killer. At the end of the day, maybe 5% of your mental effort goes to solve the real problem, and the end result is ruined by the other 95%. Solving a problem with 5 lines of code versus 1000 lines just gets lost in all the other noise.

    • @christophersavignon4191
      @christophersavignon4191 2 ปีที่แล้ว +30

      Imagine a craftsman complaining that one needs to know metalwork to craft woodworking tools. Or a soldier moaning that all those logistics officers are not contributing because they don't fight. You'd just laugh at them.
      Creating tools has always been an investment, spending effort on one task to make another task easier. Teamwork has always required coordination. IT is no exception.
      If you become able to multiply your workforce by 50 and spend 10% of that on the "actual problem", you have quintupled your progress. If you don't want to coordinate a team, your only other choice is to work solo. And while it sounds intriguing not to deal with 49 other lunatics and their code that conflicts with everything, including your sanity, it will really slow you down, more than team coordination ever could.

    • @Dyllon2012
      @Dyllon2012 2 ปีที่แล้ว +1

      I think your argument applies to just reducing LoC, but better abstractions can also eliminate certain types of mistakes. For example, a hash function builder reduces the chance that some hash function is written incorrectly and produces collisions.

    • @LC-hd5dc
      @LC-hd5dc 2 ปีที่แล้ว +3

      docker's imperative config and uninspectable (possibly even malware-ridden?) root containers to me is already part of that legacy mentality, people just slap it in because everyone else is doing it, not because it gets the job done the best. imperative config and orchestration is the way to go to eliminate most of the issues you mentioned in "technical bureaucracy" as you call it.
      "corporate bureaucracy" is just capitalist problems. and incompetence has nothing to do with this discussion. neither of these will be solved with better programming tools.

    • @Sonsequence
      @Sonsequence 2 ปีที่แล้ว +3

      Have you ever led a team where you were free to remove that technical bureaucracy? I am, and I haven't removed it. For each of the items you list I asked how we could shrink the footprint, but removing it entirely would have gone badly.
      Certificates, hosting: be maximally unoriginal in your cloud provider setup.
      Source control: yes. Have you tried scp on text files instead?
      Branches: trunk only, except for short-lived ones just to hold pull requests.
      Pull requests, code review: so much more powerful than mere quality assurance. But yes, very expensive, so it's always good to figure out when and where to skip them.

    • @Sonsequence
      @Sonsequence 2 ปีที่แล้ว

      @@LC-hd5dc I guess you meant to say declarative config is the solution?

  • @alst1777
    @alst1777 2 ปีที่แล้ว +23

    I really can't disagree more with the "visual paradigm superiority" part, as well as the backward-compatibility stance of this talk. The opposite of backward compatibility is complete chaos, and retaining it for a long time is totally worth it. I'm a long-time vi and Unix user, but I came from a Windows background and initially a lot of things didn't make sense to me. I'm in digital art nowadays, and after learning and embracing the simplicity of vim and the bash shell I can do things naturally: working with all sorts of files, writing scripts for almost all my purposes - converting images and media files, custom backup scripts, 3D modeling and animation and many more. On Windows and Mac you can use a nice GUI, but it comes at the huge cost of being burdensome to use, resisting scripting (try writing something to automate a process that requires clicking a button in some program that doesn't support a command-line interface) and so on and so forth. Our technology exists today thanks to "dead" programs that cared enough to support a wider variety of interfaces.
    The text medium may be less fancy than a nice web page with all sorts of graphics, but the fancy version can take it too far and turn into a presentation that tries to convey the idea via pictures while lacking the precision of a concise text description. Someone said, "Writing is nature's way of telling us how lousy our thinking is." If that's not convincing enough: one of the most successful companies, Amazon, intentionally discourages a presentational style of conveying information about new ideas or technologies in favor of writing it up in a short and concise manner - if you're interested, read the article "How Amazonians share their ideas". So, if you're new to programming, take this talk with a grain of salt. Clarity of thought is indispensable when you work on a complicated design, and I'd argue it is hardly achievable if you can't produce or consume good old written content.
    I can't agree with the "spec is always incorrect" argument. While I agree that a spec is impractical for a complete program, it could actually be useful for some of its parts. For example, a consensus protocol like Paxos can be described in quite strict terms and proven, and its implementation can to some extent be decoupled from the main program. Programming is about combining multiple parts into a whole, and some parts (cryptography, distributed protocols ensuring the liveness and robustness of the system) may be a great fit for actually writing the spec.
    I also can't agree with "programming is about debugging" - it couldn't be farther from real-world programs running on your mobile devices or in big datacenters. Logging and metrics are what actually matter to give you introspection into what your program and your users are doing. Also the ability to recover quickly - e.g. issue a patch. I'd change this stance to "programming is about testing" when it comes to professional programming, as big distributed programs can be too hard to debug, and reproducing a particular debug sequence can be both hard and impractical.

    • @aggregateexception5304
      @aggregateexception5304 2 ปีที่แล้ว

      Thoroughly agree with your point on the "spec is always incorrect" argument. In the video, the example of an array goes to array[i] => array[i] + 1; this is a clearly defined spec. It might not be the best real-world example, but it at least proves a counterexample exists. I'm not sure I follow "logging and metrics are what actually matter": to my mind this is equivalent to debugging; be it core dumps or just a red/green light, debugging is core to development. (Yes, I have had times where I only had an "it worked" or "it didn't work" to go on, because of my company's insistence on working with AWS and outsourcing the access to someone else, which would take me a week for approval (who knows why). It is painful.) From my experience, metrics are second to getting it to work. The client doesn't care how long it takes as long as it isn't more than a couple of hours. But that may well be my limited experience talking; I have only worked in a handful of small to medium-sized domains. It is worth taking into account that not every dev job is dealing with Google/Netflix levels of traffic; some see maybe 100 people a day. (Not to say your point isn't valid in your domain, but the speaker's point isn't necessarily invalid in all domains, as much as I disagree with many of his other points.)

  • @metalpachuramon
    @metalpachuramon 2 ปีที่แล้ว +50

    I never even stopped to think about it; now I have a name for it: introspection. Before studying the hard theory behind regular expressions, I never actually understood them and just resorted to copying one from Stack Overflow. After learning the theory, I still don't write them punch-card style; instead I like using websites where you can test them in place, see explanations and so on. Now I don't feel bad for wanting to attach a Java debugger to a live server haha
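    For what it's worth, the same kind of in-place checking works locally with a few test strings; a small sketch in Python (the pattern and samples are invented for illustration):

        import re

        # Deliberately simple email-ish pattern, for illustration only.
        pattern = re.compile(r"(?P<user>[\w.+-]+)@(?P<host>[\w-]+\.[\w.-]+)")

        for sample in ["alice@example.com", "not-an-email"]:
            m = pattern.search(sample)
            print(sample, "->", m.groupdict() if m else "no match")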

    • @brunodantasm
      @brunodantasm 2 ปีที่แล้ว +7

      Indeed, that's the same point that game devs John Carmack and Jon Blow make. The debugger is the most useful environment there is.
      Also note that regex is amusingly not the same thing as formal language theory's regular language. After I learned that I started to forgive myself for having a hard time with them. en.m.wikipedia.org/wiki/Regular_expression#Patterns_for_non-regular_languages

    • @Favmir
      @Favmir 2 ปีที่แล้ว +8

      Yeah I open up Regexr website every time I need to write a regex. Would be great if IDEs at least tried to help you with the visualization.

    • @Duconi
      @Duconi 2 ปีที่แล้ว +5

      Debugging in a live environment is very problematic. Just imagine the order process of a web shop: you debug it in execution, mess things up accidentally, and because you stopped the process it isn't executed, and other orders aren't coming through either. There is a much better way: write tests. I sometimes don't even try out my changes manually. It's tested, and if I had broken something the chances are high that some test would find it. Some testing frameworks even have watchers that execute the tests every time you save your file, so you immediately see if your changes work. If you have proper tests, there isn't much in production that can cause it to fail. So instead of debugging a live server I would rather set up the development process in a way that finds bugs before they reach live. That at least works really well for me.
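      A tiny sketch of that habit in pytest style (the function is invented for illustration; a watcher such as pytest-watch can rerun it on every save):

          def apply_discount(total_cents: int, percent: int) -> int:
              # Integer math on cents to avoid floating-point surprises.
              return total_cents - total_cents * percent // 100

          def test_apply_discount():
              assert apply_discount(1000, 10) == 900
              assert apply_discount(999, 0) == 999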

    • @soheil5710
      @soheil5710 2 ปีที่แล้ว +4

      @@Duconi Nobody intentionally writes bugs. Prevention is good but not perfect.
      Don't you still need a minimally disruptive way to fix the live environment?

    • @BosonCollider
      @BosonCollider ปีที่แล้ว +1

      @@brunodantasm It depends on which kind of regex you are dealing with. Regexes from SQL or grep are real regexes. The ones in many scripting languages that use the perl syntax are fake regexes and can be six orders of magnitude slower on hard inputs

  • @ianmclean9382
    @ianmclean9382 2 ปีที่แล้ว +8

    I realized that my habit of printing out variables and whatever information is being calculated in what the speaker calls "dead languages" is exactly the point he's making. There need to be easier ways to observe the data and processes we write as they run.

    • @Taladar2003
      @Taladar2003 2 ปีที่แล้ว +1

      On the other hand printing out values is a lot more productive than that nonsense single-step debugging. Give me a printout of two runs of the program and a diff tool any time over stepping through it for hours trying to remember what the debugger displayed 1000 steps ago in the last program execution.

  • @rumisbadforyou9670
    @rumisbadforyou9670 2 ปีที่แล้ว +12

    The only question I have is: "How do you mix the heavy optimizations of Rust/C++ with the powerful debugging and on-the-fly editing of Smalltalk?"
    If you have an answer, I'm willing to switch.
    In my experience JIT-compiled code is always slower than AOT-compiled code. (And "lol just get a more powerful PC" or "stop running server workloads on a 10 y.o. laptop" are not valid arguments.)
    If somebody has an example of performance-dependent software written in Smalltalk/Lisp-like languages, like ray tracing or video encoding, I'd like to take a look and compare it to more conventional solutions.

    • @D0Samp
      @D0Samp 2 ปีที่แล้ว

      Also, even if JIT comes close to native compilation (at least as long as the latter does not make use of profiling and other advanced optimizations) in either responsiveness or throughput, you typically pay for it in higher RAM usage, which is unfortunately the most limited resource in shared computing in multiple ways. Contemporary Java comes to mind there; even though on-the-fly editing is obviously not a thing there, I'm already grateful for a REPL.

    • @LC-hd5dc
      @LC-hd5dc 2 ปีที่แล้ว +2

      How about this: JIT while you're working on the code, and then AOT when you want a prod release?
      I definitely don't agree with his suggestion that we want JIT in production.

    • @lanthas5744
      @lanthas5744 2 ปีที่แล้ว

      As of Visual Studio 2022, you can use Hot Reload to change C++ applications while they're running. I'm actually quite surprised he didn't bring this up.

    • @pierremisse1046
      @pierremisse1046 2 ปีที่แล้ว +1

      One solution for combining the heavy optimizations of Rust/C++ with the capabilities of Smalltalk is to use twin software (or simulation).
      It works fairly well; recent Smalltalk distributions have been built using such an approach for more than two decades now.
      They code their VM in Smalltalk (OpenSmalltalk-VM/Pharo) and generate C code from it.
      There's also RPython, which does similar things.
      This approach is loved by some, hated by others.
      Is this an example you would consider performance-dependent software?

    • @rumisbadforyou9670
      @rumisbadforyou9670 ปีที่แล้ว

      @@pierremisse1046 I guess I'll try Pharo after learning some Smalltalk. But from reading about it a little, it still sounds like it'll bring some runtime overhead that might be difficult for the compiler to optimize. But I'll give it a go. If transpiled C will be faster than native JS, I'd consider it a win for Pharo.

  • @alanr4447a
    @alanr4447a 2 ปีที่แล้ว +13

    1:30 The 80-column "Hollerith" punch card design is an advancement over the original design, which had the same number of ROWS (12) but what I think were just 27 columns (circular holes rather than narrow rectangles), designed by the man named Hollerith himself for tabulating the 1890 U.S. census, decades before there were "computers".

    • @LemuriaGames
      @LemuriaGames 2 ปีที่แล้ว +1

      And before that the predecessors of punchcards were used to "program" weaving looms.

  • @Verrisin
    @Verrisin 2 ปีที่แล้ว +42

    I would love this, but give me a language and IDE that properly complete symbols for me, are context-aware, are _interactive programming_ before I've even written the code.
    - That's why I like types. Kotlin, C#... They are helpful sooner. They catch nearly all typos. In fact, I always tab-complete, so I never have to worry about typos.
    - I tried Elixir because the Erlang model is so great, and I made dumb mistakes right away (typos, wrong symbol, etc.), all costing lots of time to go back and fix. They were only found by running tests, instead of before I even made them.
    - An environment that lets me make mistakes is worse than one where I notice them ~live. Worse still is only type checking (and no help) at compile time. Even worse is only getting errors at runtime, which, sadly, for many reasons is where I would end up when trying Clojure. A lot of things are a problem to do in the REPL; say I need to inspect an argument to some callback. In Kotlin, I at least see the full type spec, and the IDE is helpful. In Clojure, I need to mock-trigger the callback, hope it roughly matches production, hope I can "suspend" inside the callback and hand-craft a reply, and that's even worse: how do I know what reply it wants? Reading docs is tedious. Filling out a clear type "template" provided by the IDE is really nice and simple in comparison.

  • @chrishamilton1728
    @chrishamilton1728 2 ปีที่แล้ว +13

    Dynamic, interpreted languages are better than statically typed, compiled ones?
    Now that is a hot take.
    Not a good take, but a hot one.

    • @fakt7814
      @fakt7814 2 ปีที่แล้ว +1

      They have the potential to be much better in some important aspects, like debuggability and prototyping. But most scripting languages did not go very far beyond static languages in these aspects, which does not make very much sense. Why sacrifice performance and stability for practically nothing? That's why dynamic interpreted languages are often perceived as inferior to static ones. It's because most of them initially were either a replacement for shell scripting or were developed to solve a very specific task (like JavaScript) and then accidentally grew bigger and became more significant. It's no wonder that the most advanced languages in this regard are Lisps, because they were designed as an AI research tool from the start.

    • @no-defun-allowed
      @no-defun-allowed 2 ปีที่แล้ว

      1960 Lisp I called, wants its compiler back.

    • @Crazy_Diamond_75
      @Crazy_Diamond_75 2 ปีที่แล้ว

      For understanding, debugging, and visualizing your program in real time? Yes, absolutely.

  • @matju2
    @matju2 2 ปีที่แล้ว +65

    APL actually has a shortcut for making a list like 1 2 3 4, such that you can write the example program in only 4 characters: 1+ι4 (that's the Greek iota letter) instead of 1+1 2 3 4
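    The same shape of program in numpy (an array library for Python that borrows this APL-style thinking), as a rough point of comparison:

        import numpy as np

        print(np.arange(1, 5) + 1)   # iota 4 gives [1 2 3 4]; adding 1 yields [2 3 4 5]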

    • @thoperSought
      @thoperSought 2 ปีที่แล้ว +8

      I know a lot of people love APL, but it seems too terse to really be readable to me

    • @HoloTheDrunk
      @HoloTheDrunk 2 ปีที่แล้ว

      @@thoperSought APL is part of the "fun thought experiment but the next guy will just want to shoot himself while reading your code" languages. No sane person would use it for large software (or at least I hope so).

    • @NamasteProgramming
      @NamasteProgramming 2 ปีที่แล้ว +23

      ​@@thoperSought It is easy, all you need is a keyboard with 200 buttons

    • @abrudz
      @abrudz 2 ปีที่แล้ว +3

      @@thoperSought What if your reading speed is reduced by 80% but the amount of code is only 10% of the alternative?

    • @EvincarOfAutumn
      @EvincarOfAutumn 2 ปีที่แล้ว +4

      @@thoperSought The “expert-oriented” terseness of APL/J/K is scary at first, but it soon pays off, because the core languages are so tiny that you can become an expert surprisingly quickly. There are only ~5 syntax rules and ~30 symbols to learn, depending on how you count. Beyond that, basically all of the problem-solving skills are transferable to other languages, especially to APL alternatives like numpy/R/Julia/Excel.

  • @xerzy
    @xerzy 2 ปีที่แล้ว +25

    Two things:
    1) let the compiler blow up on the dev rather than the program on the user (especially if you seek the lowest runtime overhead, or you ARE making the runtime)
    2) you can start getting this future today, with current languages, using Jupyter notebooks and the like (e.g. literate Haskell)

    • @trapfethen
      @trapfethen 2 ปีที่แล้ว +9

      Yeah, it might be interesting if we could develop a language that runs in a live runtime during development (for interactivity, visualization, etc.) but can compile for deployment, because there are instances where interactivity just isn't necessary and the required abstraction and overhead are nothing but dead weight.

  • @DominikRoszkowski
    @DominikRoszkowski 2 ปีที่แล้ว +41

    What I really enjoy about Dart is that, even though it's punch-card compatible, thanks to hot reload I usually need to compile the program just a couple of times a day, when I pick up some new thing. Most of the time code can be reloaded in real time with an incredibly short feedback loop. I still wish there were more features to help visualize the structure and relationships of code, but it's already so much better than most of the tools in the mobile ecosystem.

    • @erikjohnson9112
      @erikjohnson9112 2 ปีที่แล้ว

      I've been chasing live system programming for years. Dart provides a lot of what I am looking for, as well as Python with hot reloading (see a project called Reloadium).
      One of my ideas for my own system (that has yet to be written) is a little similar to the last example in this video. There are nodes which represent your program and there are "sparks" of execution so you can see data flow through the system.

  • @permartin5819
    @permartin5819 2 ปีที่แล้ว +6

    Getting the night's production jobs loaded (via punch cards) as quick as possible was aided by the operators removing the rubber bands and lining up the "decks" on the counter. That is, until the night when the HALON system was accidentally triggered, sending the cards everywhere. It took quite a while to find cards stranded under equipment. Fortunately the strips on the sides of the cards helped. But it was a long, long night putting everything back together.

    • @rodschmidt8952
      @rodschmidt8952 2 ปีที่แล้ว

      Suddenly I think... Was there a method for making backup cards? Sure, read the cards and punch them. But did anybody do this?

  • @UristMcFarmer
    @UristMcFarmer 2 ปีที่แล้ว +20

    I've watched to 11:46 at this point... and I'm getting a smell. I'm not saying he's not correct overall, but in his first two examples (assembly and C) he's writing a reusable function that, given an array, creates a new array, stores into the new array the values of the input array incremented by 1, and then returns the new array. In his last three examples (Lisp, Haskell and APL) he's hard-coded the array as a literal, and the results of the function aren't being returned into a variable for further use. He's NOT doing the same thing. He's purposefully left out the 'boilerplate' or 'ceremony' code or whatever you call it to make the difference seem more dramatic than it really is.

    • @randyjackson7584
      @randyjackson7584 2 ปีที่แล้ว +1

      ld (hl),a ; inc a ; ld hl,a ; inc (hl), something like that in a loop is what those other examples seem like, basically running through the memory and incrementing

    • @SgtMacska
      @SgtMacska 2 ปีที่แล้ว +6

      The more general case is even shorter in Haskell:
      f = map (+1)
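      For comparison, the reusable take-a-list-and-return-a-new-list version the parent comment describes also stays short in Python (a sketch, not taken from the talk):

          def incremented(xs: list[int]) -> list[int]:
              # Returns a new list; the input is left untouched.
              return [x + 1 for x in xs]

          print(incremented([1, 2, 3, 4]))   # [2, 3, 4, 5]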

  • @Waitwhat469
    @Waitwhat469 2 ปีที่แล้ว +15

    I have to admit, the idea of messing with the runtime sounds nightmarish to a sysadmin and security guy. Great tools in the dev env, but in production it seems like a system that limits checks and requires increased trust in the devs.
    Mind you, I'm in the sysadmin camp that holds that the greatest benefit of IaC and CaC is that you move AWAY from "click here to do this" administration and towards more formally tested and explicit processes.

    • @LC-hd5dc
      @LC-hd5dc 2 ปีที่แล้ว

      finally some sense in these comments lol.
      i'm curious, what other options would you suggest for runtime introspection? usually what i've seen is slapping in logging statements everywhere, but i have to assume there's a better way

    • @Waitwhat469
      @Waitwhat469 ปีที่แล้ว

      Logging, metrics, and tracing are the only things I can think of, but it would be nice if you could clone a running container stick it in a mock environment and step through the process.

  • @icarvs_vivit
    @icarvs_vivit 2 ปีที่แล้ว +9

    >It's what your processor wants
    No, it's what high-level languages want.
    Just don't link against malloc; allocate the new array on the stack.
    Better yet, don't link code you could simply assemble: include it directly in the main file (with an assembler like NASM, via the 'incbin' directive) and call it without some rigid ABI.
    Ironically, he is shoehorning an archaic concept like "programs must be linked" into an assembly program that doesn't need it. You don't actually need a linker in assembly unless you specifically want a dynamically linked library; that's a pile of high-level assumptions imported into assembly language programming.
    This sort of thing makes me think he didn't actually write this in assembly, but in C or something, and then cleaned up the ASM output a bit.

    • @LC-hd5dc
      @LC-hd5dc 2 ปีที่แล้ว +1

      maybe i'm too green regarding ARM assembly or asm in general, but i understood "it's what your processor wants" as referring to the stack alignment (see 9:10), not about linking to libc to have malloc available or whatever. but i agree a trivial program like this didn't need malloc and it was likely adjusted output from llvm

  • @KafeinBE
    @KafeinBE 2 ปีที่แล้ว +4

    Why do you need interactivity and visual representations if you can simply automate the checking of your program instead? It seems all of this is just building slow and complicated tools to _undo_ automation, to let a human inspect data that a computer is much better equipped to inspect. In every case of debugging presented here, it would have been far more efficient to write a test checking for the thing the human was looking for in the visualization.
    Don't get me wrong, interactive and dynamic environments have their place: learning, data science, and in general areas where the act of writing the problem down in a programming language is overwhelming to the programmer. Few professional software developers are in that situation very often. Let's not forget the huge downsides. The main one for me is that repeatability is essentially thrown out the window. I _want_ my program to start from a blank slate. I'm a human, I can't deal with state. I want all my data to be immutable, I want the inputs pre-defined so that I can predict what the outputs should be. If the problem is incompatible with this, I can break it down into smaller problems that _are_, and have automated testing for those.
    Similarly, I want my software to be easy to version control. Graphs are great when they're all made from the same program and the same data. Graphs are awful when it comes to comparing two of them with each other. Anything that isn't text is awful to diff, and there are plenty of ways to design text formats that are awful to diff too.
    I've developed in notebooks before. When I knew what I wanted to do, it was a bad experience. I would come back to the notebook every morning having no idea what the hell I was doing. Something was running, but I had no way to know if it was right or wrong except my own memory. If I had formalized my problem in terms of inputs, outputs and a function from one to the other, I would have written tests and my intent would have been clear.
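    The formalization I mean is nothing fancy. A minimal pytest-style sketch, where normalize_prices just stands in for whatever the notebook was computing:
    # test_pipeline.py -- pin the behaviour down once, re-run it forever with pytest
    def normalize_prices(raw):
        lo, hi = min(raw), max(raw)
        span = hi - lo
        return [0.0 if span == 0 else (x - lo) / span for x in raw]
    def test_scales_to_unit_range():
        assert normalize_prices([10.0, 20.0, 30.0]) == [0.0, 0.5, 1.0]
    def test_handles_constant_input():
        assert normalize_prices([5.0, 5.0]) == [0.0, 0.0]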

    • @isodoublet
      @isodoublet 2 ปีที่แล้ว +2

      Live coding is terrible for data science too. You really want your analyses to be repeatable.

    • @LC-hd5dc
      @LC-hd5dc 2 ปีที่แล้ว

      sounds like you want functional programming.

  • @verigone2677
    @verigone2677 2 ปีที่แล้ว +2

    Sooo... I work in the Energy industry, we just retired our last VAX in the last 18 months...though we still have a bunch of virtualized VAX for historic documentation. We also just replaced a real time system that had one of the very first mice ever made (it was actually a Trackball and it was MASSIVE).

  • @mikecole2837
    @mikecole2837 2 ปีที่แล้ว +4

    The fact of the matter is that all of our hardware infrastructure expects the user to program in either ASM or C. Runtime environments are expensive and not available at the bare metal level without a ton of headaches. Lua is promising but it's written in C. I agree that modern hardware introduces many problems that don't have anything to do with solving the business problems that make us money. Maybe more people should become computer engineers and devise an ISA that allows for visual and runtime feedback natively.

  • @anatolydyatlov963
    @anatolydyatlov963 2 ปีที่แล้ว +16

    Yes! That's exactly what I've been saying, but when I began criticizing my uni for teaching Pascal for a whole year, I almost got cancelled for "Not respecting the history of programming" and "Not understanding that you have to start from the basics".

    • @pulsarhappy7514
      @pulsarhappy7514 2 ปีที่แล้ว +1

      haha I also started with Pascal, it's not that bad tho, it's really nothing like fortran and assembler, but it is not very visual I'll admit.

    • @sutech
      @sutech 2 ปีที่แล้ว

      Reminds me of my high school, where we were about to be taught Pascal, but the whole class decided "No. We want to learn C." And the teacher said "But I don't know C." Another student said "I know C." and he started to teach us, which was awesome. To be fair, I had trouble understanding pointers, and only after I learned programming in assembler (a different class, for programming microcontrollers) did it click in my head and I finally understood.

  • @Woodside235
    @Woodside235 2 ปีที่แล้ว +11

    I dislike that tweet towards the beginning about how programmers who feel good about having learned something hard will oppose things that make it easier, for several reasons. Firstly, it could be used to automatically dismiss criticism of something new as the ravings of a malding old-timer. Secondly, it paints experienced programmers as smug ivory-tower know-it-alls. Thirdly, it implies that behavior is unique to programmers. Do old-time programmers sometimes look down from their ivory towers and scoff at their lessers? Absolutely, and I am no fan of that either. But the tweet taken at face value could lead to someone with a new idea (or something they believe is a new idea) being arrogant.
    The bit with the increasingly smaller ways to write an incremented array ignores the fact that the more of the explicit semantics you strip away, the less clear it is what the program is _actually doing_ beyond the high-level cliff notes. This can lead to extremely painful debug sessions, where the code you write is completely sound at a high level, but the syntactic sugar is obfuscating a deeper problem. Lower-level languages have more semantics than they really need, but the upshot is that they allow more transparency. They're often more difficult for debugging average issues, but significantly easier for debugging esoteric issues if you slow down and go line by line. Not to mention they make very specific optimizations easier as well.
    A lot of the ideas in this video have been tried and didn't stick around not because of adherence to tradition, but because they simply were not as effective. Visual programming in particular. It has the same problem as high level languages in that it's easy to capture the essence of the program, but not the details. Ideally you would have both a visual representation side by side with the text-based semantics.

    • @LC-hd5dc
      @LC-hd5dc 2 ปีที่แล้ว +1

      tbh, C or even asm is still obfuscating stuff from you. i would say it's more a matter of knowing the hardware you're running on and knowing the quirks of the language and the compiler. (which would naturally take years.) blaming the language is not entirely correct imo.

  • @michaellatta
    @michaellatta 2 ปีที่แล้ว +7

    Smalltalk was one of the best live coding environments. You could change the source of active stack frames. The issue was delivering the program to “production” on one of hundreds of servers.

    • @sidneymonteiro3670
      @sidneymonteiro3670 2 ปีที่แล้ว +1

      The issue is how to do product development with a team of developers on the same code base for testing and releases.

    • @edwardog
      @edwardog 2 ปีที่แล้ว

      Would the team be working with a CI server?

    • @edwardog
      @edwardog 2 ปีที่แล้ว

      Was it an issue of it being unclear how to address the server in question?
      I’m also curious how you feel it compares to today’s approach of using containers/images

  • @genericdeveloper3966
    @genericdeveloper3966 2 ปีที่แล้ว +11

    Those graphical representations may help some people, but they just seem like more work to interpret, as they are like a new language in themselves. They should be used only when they are the better alternative for comprehension for the average dev.

    • @LC-hd5dc
      @LC-hd5dc 2 ปีที่แล้ว +6

      yeah as far as i can tell, most of them were just showing nesting levels...
      ultimately they seem more like teaching tools than daily programming tools.

    • @rv8891
      @rv8891 2 ปีที่แล้ว +2

      Wouldn't that be because most devs use the 'traditional' code representation? In a world where programming is canonically done in brightly-colored balloons connected by lines, trying to put it in a single sequential file might be the thing that's "hard to interpret". I think there's something to be gained here using visual, spatial and interactive programming, although I have not yet seen a version that sparks joy.
      Maybe functions as code in a bubble, and jump points (function call, return, goto) as lines between bubbles? It would visualize program flow without giving up the details you need to actually program. IDK, but it's an interesting problem.

    • @Taladar2003
      @Taladar2003 2 ปีที่แล้ว +1

      @@rv8891 The problem with graphical representations is that they are bad at abstraction and that they are hard to process by tools. Code is all about abstraction and tools to help you work with it.

  • @thomas-hall
    @thomas-hall ปีที่แล้ว +1

    A really strong start: we do a lot of dumb stuff for historical reasons. But the second half seems to totally ignore performance, and honestly anything outside his own web realm. The reason programs start and run to completion is that that's what CPUs do. You can abstract that away, but now you're just running user-level code and turning it into an undebuggable, unmodifiable language feature. Sure, functional languages look neat, but where are your allocations? How are they placed in memory? Are you going to be getting everything from cache or from cold RAM?

  • @DevineLuLinvega
    @DevineLuLinvega 2 ปีที่แล้ว +179

    Terrific talk, laughed, and then I cried, then I was hopeful again.
    I won't turn to carpentry just yet.
    Thanks Jack.

    • @seismicdna
      @seismicdna 2 ปีที่แล้ว +6

      woah cool to see u here lol. seems like some core tenets of the philosophy that underpins your work are well represented here

    • @DevineLuLinvega
      @DevineLuLinvega 2 ปีที่แล้ว +2

      @@seismicdna I think we share a lot of similar ideas, I was fortunate to stay with Jack in Berlin a few years back, and meet Szymon Kaliski too. I was sad to hear that Strange Loop was stopping after this year, I've been dreaming of attending.

    • @JackRusher
      @JackRusher 2 ปีที่แล้ว +2

      @@DevineLuLinvega There will be one more next year. You should give a talk!

    • @calder-ty
      @calder-ty 2 ปีที่แล้ว +4

      I'll need to watch again to digest further. Working with a data team as their engineer is both a blessing and a curse.
      I've seen some of the benefits of the interactivity that Jack talks about. Particularly with data pipelines sometimes the easiest way to debug it is to pull open the notebook and run it until it breaks and inspect. It's also easy for analysts with little programming experience to write things and get started and explore.
      It's a curse because it does make it so easy that I'm often tasked with fixing and maintaining a heap of poorly designed programs written by many times the people than myself, with little to no consistency.
      Many of the perks that Jack mentions are useful for scientists/analysts for whom programming is merely a means to the end of getting their analysis done. Not having to worry about types is nice if you just want it to work. As an engineer, working with typed systems means I _don't_ have to keep the mental "working memory" of what nuances of my interface I have dynamically programmed whenever I jump in to make a change down the line.
      Like I said, will have to watch again to really understand.

    • @DevineLuLinvega
      @DevineLuLinvega 2 ปีที่แล้ว +2

      @@JackRusher I'd love to! I'll try to get in touch with the event's team.

  • @janisir4529
    @janisir4529 2 ปีที่แล้ว +16

    Lecturer: Talks about debugging with fancy visualizations
    Me: Cries in performance

  • @newogame1
    @newogame1 2 ปีที่แล้ว +33

    I see this often, but it usually falls apart when you approach higher levels of complexity. There are many graphical programming languages; you could even call Photoshop a programming language. The problem is there are tons of experiments, but none of them really create anything "new". They spend their time trying to copy functionality from C. Stop copying C in your GUI language.

    • @ara.foundation
      @ara.foundation 2 ปีที่แล้ว +1

      Hmm, sounds like it's better to design this kind of programming language together with a UI/UX designer.

    • @nifftbatuff676
      @nifftbatuff676 2 ปีที่แล้ว +4

      Yeah, this is my experience too. Graphical programming looks great only on small, simple problems. It is much harder to use and a waste of time when you need to solve real-world complex problems.

    • @theq68
      @theq68 2 ปีที่แล้ว +1

      The issue with this kind of presentation is exactly that: it convinces management that the new shiny language is the solution to all the company's problems, but the sad reality is that complex problems are complex in any language, and learning the new shiny language takes longer than solving them. Creating tools in your own language that solve your problems is the current solution.

    • @TimeLemur6
      @TimeLemur6 2 ปีที่แล้ว +1

      @@nifftbatuff676 Automate for Android comes to mind. Fantastic app; I use it for a bunch of stuff I can't be bothered to write Java for or to browse through Google's API docs for. But large programs are an absolute nightmare when everything is drag and drop.

    • @gunkulator1
      @gunkulator1 2 ปีที่แล้ว +2

      Agree with this. You need the right tool for the job but a specialized graphical tool is really only good for solving problems that can be modeled graphically. I have wasted many hours with new tools that are supposed to bring about a new paradigm in how we program and in the end we always end up abandoning them because they never quite fit the problem at hand. The seemingly small gap between the cool demo example and what you actually need to accomplish ends up becoming an impassable chasm. In the end, tools are built by biased people who are thinking in terms of how to solve problems A, B and C but I'm stuck trying to solve problems X, Y, and Z or else a whole new class of problems, #, % and ^ that no one has ever considered before.

  • @rs232boy
    @rs232boy 2 ปีที่แล้ว +7

    This talk is fun to watch and the speaker is good, but I don't really agree with the whole argument. He spends so much time criticizing things that are what they are because of technical and physical limitations. Don't you think the people who punched Fortran onto cards would have loved to each have a personal computer to type their programs on? Punch cards were a thing because a company or a school could only afford one computer, which was a mess of 10M transistors soldered together by hand. Then machine code? FFS, it is optimized for the CPU silicon, which is a physical thing. How many thousands of scientists work on better hardware architectures? So stupid of them not to have silicon that takes images as input. /s Same thing with C: it is a portable freaking assembler and it is very good at it. Then you finally have higher-level languages (which are written in C, surprise!) and they have all been trying interactive and visual things, like, forever! Graphical desktops, debuggers, graphical libraries, Jupyter notebooks. Some of them are good ideas, others are weird and fade away, but it's not like people aren't trying while still being attached to a physical world of silicon. So what is his point?

  • @metalsoul12
    @metalsoul12 2 ปีที่แล้ว +6

    I feel like there’s a few arguments being made here, two of which are: program state visualization is good and less code is better. I agree with the first, debugging of compiled languages has a *lot* of room for improvement. If you think the most terse syntax is always best, please suggest your favourite golfing language during your next meeting :)
    Programmers today are wildly diverse in their goals and there’s no hierarchy on which all languages exist. An off-world programmer will need the ability to change a deployed program, one researcher might be looking for the language that consumes the least energy for work-done, an avionics programmer wants the language and libraries that are the cheapest and fastest to check for correctness. If you feel that all the features discussed in the presentation should all exist in one language maybe you don’t hate Stroustrup’s work as much as you think.

    • @LowestofheDead
      @LowestofheDead 2 ปีที่แล้ว +1

      To be fair, he doesn't want less code _in general,_ just less code _on things unrelated to your problem._ Hence all the descriptions of physical punch cards, which take so much effort to get into position, and all that effort has nothing to do with your programming problem.
      "If you feel that all the features discussed in the presentation should all exist in one language"
      He isn't demanding that one language should exist for all programmers, he's saying that good developer user experience and visualizers should exist for all. Because every programmer, no matter their goals, needs to read code and understand how it works.

  • @ssddblade
    @ssddblade 2 ปีที่แล้ว +73

    What a great talk, thanks Jack. I agree with most of what you said. I just don't know what to do about it. I think our industry as a whole is stuck in a local maximum, and I don't know how to get out of it.

    • @esobrev
      @esobrev ปีที่แล้ว

      it’s up to us to create the solution.

  • @Georgggg
    @Georgggg 3 หลายเดือนก่อน +2

    Not long ago I started using emojis 🙂🔈⚠️⛔ in terminal output.
    Reading logs has never felt better

  • @Motive9366
    @Motive9366 2 ปีที่แล้ว +13

    I don't really understand what he has against static artifacts being generated like with compiling Go. Relatively speaking, Go is easy to deploy to a different environment because of this.

    • @SimGunther
      @SimGunther 2 ปีที่แล้ว +1

      @Georgios Kotzampopoulos Flowboard (paper by Brocker, Schafer, Remy, Voelker, Borchers) is a great start for live, embedded programming.

    • @JackRusher
      @JackRusher 2 ปีที่แล้ว +12

      @Georgios Kotzampopoulos I've written many embedded systems, bootloaders, &c, in FORTH. It provides a completely interactive development environment with excellent machine sympathy that can be bootstrapped on new hardware in ~1000 lines of C. The entire operating system of the Lisp machine was written in Lisp. The NASA projects I mentioned were mission critical autonomous vehicles sent into space. These techniques pre-date the web by decades and are useful for much more than that sort of programming.

    • @Knirin
      @Knirin 2 ปีที่แล้ว +1

      @Georgios Kotzampopoulos Not even every data program works well as a live programming example. Some data-crunching processes are going to be batched up because the dataset is simply so large.

  • @curls6778
    @curls6778 2 ปีที่แล้ว +21

    In the multimedia programming world there are Pure Data and Max/MSP, which are very similar to his last examples and very commonly used by artists. This talk helped me understand why I keep coming back to those for projects where I have to iterate on ideas very quickly.

    • @matju2
      @matju2 2 ปีที่แล้ว +2

      Unfortunately, those two are a lot more stateful than the average non-visual language, because every function has been turned into some kind of object class where, if it has more than one argument, every argument after the first is an instance variable that has to be set before sending the first one. And if you ever want to set the first argument without running the function, or run the operation without setting the first argument, you have to use special cases like "set $1" and "bang", IF they happen to be supported by that given class. Then, to manage all of this, you have to sprinkle a lot of [t b a] and [route stuff] objects and connect them with lines that quickly get hard to follow. The DSP subsystem (~) is the exception to this, but only because it has a fixed data rate, and when you try to control that subsystem at runtime you are back to the non-DSP objects I described above.

  • @d.jensen5153
    @d.jensen5153 2 ปีที่แล้ว +17

    Entertaining but deliberately simplistic. How desirable would Linux be if its kernel were written in Haskell?

    • @SgtMacska
      @SgtMacska 2 ปีที่แล้ว +7

      Eliminating an entire class of exploitable bugs? That would be amazing

    • @wumi2419
      @wumi2419 2 ปีที่แล้ว +10

      @@SgtMacska also probably eliminating multiple classes of machines due to performance

    • @wumi2419
      @wumi2419 2 ปีที่แล้ว +1

      @@SgtMacska Haskell definitely has its uses in low-level applications though. In relation to security, it's a lot easier to prove that Haskell code and the compiler are mathematically correct (which is a requirement for some security standards), and therefore that the runtime is secure, than to prove the same for another language. In general, Haskell's clear separation of pure parts is very good for security, since that's a large part of the codebase where you have no side effects.

    • @fakt7814
      @fakt7814 2 ปีที่แล้ว +1

      Performance-critical applications should be written in something like C or Rust (but not C++, f**k C++): cases where you know what you need to do beforehand and where optimization and fitting the code to the hardware are the main concerns, not modelling, getting insights, or experimenting. The talk was mostly about development environments, and it doesn't make much sense for a kernel to be wrapped up in a notebook-like environment, because by definition the kernel runs on bare metal. But even there, OS devs can benefit from modeling kernel routines in a more interactive environment, using something like a VM, before going to the hardware directly. Well, they already use VMs; developing a bare-metal program from scratch without a VM is an insane idea. What I'm talking about is not a traditional VM but a VM-like development tool that trades the VM's strictness for interactivity and debuggability. Of course, code produced in such an environment should be modified before going to production, if not rewritten entirely, but we kind of do that already by first writing a working program and only then optimizing it.

    • @julians.2597
      @julians.2597 9 หลายเดือนก่อน +1

      we should eschew the kettle, for how desirable is it to make chilli in a kettle?

  • @draconicepic4124
    @draconicepic4124 2 ปีที่แล้ว +11

    I know that in his opinion live programming languages are appealing, but they aren't always practical. These types of languages have a great deal of overhead and aren't suitable for certain applications. The best example of this is operating systems. In this talk he bashes on Rust a little, but the simple truth is that it was never made for this purpose. I know people want the "One Programming Language to rule them All!" so they don't have to learn multiple languages, but reality isn't so kind. Certain languages are simply better at some tasks than others.

  • @Omnifarious0
    @Omnifarious0 2 ปีที่แล้ว +5

    Food for thought, though he glosses over why things like edit/compile/link cycles still exist. There are costs to things, and sometimes those costs aren't worth the benefit.

  • @Desi-qw9fc
    @Desi-qw9fc 2 ปีที่แล้ว +3

    R is also a one-liner: 1 + c(1, 2, 3, 4). Or even better: 1 + 1:4.
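    For comparison, the same shape in Python (NumPy assumed for the second form):
    [x + 1 for x in [1, 2, 3, 4]]   # plain Python: [2, 3, 4, 5]
    import numpy as np
    1 + np.arange(1, 5)             # closest to R's 1 + 1:4 -> array([2, 3, 4, 5])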

    • @artiefischel2579
      @artiefischel2579 2 ปีที่แล้ว

      Every time I work in R I feel like I'm back in the era of magtapes and getting your printout from the operator at the computer center. I reflexively look down to check my pocket for a pocket protector. ;-)

    • @julians.2597
      @julians.2597 9 หลายเดือนก่อน +1

      it is also an array language at heart after all

  • @alanr4447a
    @alanr4447a 2 ปีที่แล้ว +2

    At the high school I attended in the 1970s we used punch cards typed on a KEYpunch machine (not a "card punch"), and we fed them into the card reader and took the line-printer printouts ourselves (much faster than a teletype, although that was also an option for program output; program LISTINGS were always via line printer), so not all setups were equally primitive. The reader could also read either actual punches or pencil marks, and we developed "code cards" that let us write code with pencil marks (called "mark sense") so we weren't limited to the bottleneck of one or two keypunch machines for everyone to use. I myself wrote the program to generate punched cards from marked cards, which was used at the school for several years after I graduated.

  • @5pp000
    @5pp000 2 ปีที่แล้ว +4

    That's a line printer, not a Teletype. (And yes, I too wrote Fortran on punch cards.)

  • @JerryAsher
    @JerryAsher 2 ปีที่แล้ว +2

    We have to remember that the width of punch cards and the number of columns on a punch card go back to the width of a horse's butt in Rome

  • @laalbujhakkar
    @laalbujhakkar 2 ปีที่แล้ว +5

    Hey, I'm still using a TTY terminal on an M2 MacBook that's faster than a Cray-2 supercomputer

  • @GordieGii
    @GordieGii 2 ปีที่แล้ว +1

    I have used an IBM 029 key-punch. When I was in high-school (about 1980) we used bubble cards, but the near-by university had key-punches so we would go there to type in long programs. We still had to send the card decks to the school board computer center (overnight), because we didn't have an account at the university.

  • @flyLeonardofly
    @flyLeonardofly 2 ปีที่แล้ว +5

    Incredible talk. I noticed the homage to Bret Victor's "Stop Drawing Dead Fish"!

  • @cesarromeroalbertos3839
    @cesarromeroalbertos3839 2 ปีที่แล้ว +8

    I agree with the main concept of the talk; I'm always fighting with people over this stuff. That said, I'm a videogame programmer and usually work in Unity, so there isn't much choice (and even outside Unity most games use C++; Unity is C#). The thing is, in game development we have tools to implement and do many of the things you describe. We can change variables at runtime, we can create different tools and graphs to see what's happening at runtime, visualize stuff, etc. Of course it's not exactly the same as the examples in the talk, and these things are implemented because of the nature of how a videogame works rather than for a better programming experience. Just wanted to point out a curious case of how game engines get a bit closer to this idea for different reasons.

    • @naumazeredo6448
      @naumazeredo6448 2 ปีที่แล้ว +6

      Most of his examples are about tools, not the programming languages themselves. He presents the issues as programming-language issues, but in reality most of them are a lack of tooling around the languages.
      Game engine editors (not game engines) are made exactly to address most of these issues. I agree with him that language ecosystems lack some basic tools, but these are also completely program-specific. For games you need a four-float type to store colors; should the language know about this and have a way to visualize colors in its own editor, even though the majority of developers might be using the same language to write CLI/daemon programs? Does keeping the state of a program make sense when you're shipping a game to players? It totally makes sense while you're developing, for fast iteration and debugging, but when you release and publish the game you need to compile, disable hot reloading, disable debug asserts, etc., since the client (the player) won't need any of this and all of it adds a performance cost.

    • @homelessrobot
      @homelessrobot ปีที่แล้ว

      @@naumazeredo6448 It's because a lot of programming language communities (at the encouragement of their developers) think of these things as language issues, because they have yet to witness the beauty of a programming language getting out of a better tool's way and sitting on the sidelines for a play or two. If there is a job to be done in software development, it's assumed to be something to do with a programming language, and specifically MY programming language.

    • @HrHaakon
      @HrHaakon ปีที่แล้ว

      Check out GOAL, the Lisp they made Jak and Daxter with, and your mind will be blown.

  • @davidnmfarrell
    @davidnmfarrell 2 ปีที่แล้ว +34

    I don't think image-based computing is enough of a win to justify the switching costs in most cases. The feedback loop is very fast when developing "dead programs"; it's not like we push to CI/CD every time we want to see a change reflected in the program. Then there are the downsides of images, like not having version control. Instead of "othering" mainstream programmers as ignorant, build something so incredibly better that no one can ignore it. But that's a lot harder than giving talks about how everyone is doing it wrong.

    • @SeanJMay
      @SeanJMay 2 ปีที่แล้ว +15

      That's the problem... How often do you fill your codebase up with GOTO followed by a literal instruction number?
      The answer should be never... but when Dijkstra published “GOTO Considered Harmful” it took a literal generation of people to die off (and new people not being taught it, de facto) for it to become normal to follow. But structured programming via if/else/for/switch, and running through a compiler, also isn't the end of history. But we keep teaching like it is. And generations of developers will need to move on (one way or the other), before other techniques gain widespread adoption.
      It's “Don’t reinvent the wheel”; don't question or rethink the monolith that you have.
      Well, why not? Is a giant slab of granite that's been having corners fractally chipped away into smaller corners, and then polished and stood on its side really what we should be putting on modern cars, or airplane landing gear?
      Would we have bicycles if each tire was 36 inches tall and 300lbs? Would we have pulleys and gearing? Water wheels wouldn't have done a whole lot of milling...
      Maybe the concept of the wheel is valuable, but the implementation should be questioned regularly...
      Lest we perform open-heart surgery with pointy rocks, and give the patient some willow bark to chew on, through the process.

    • @SimGunther
      @SimGunther 2 ปีที่แล้ว +1

      But you could version control an ascii encoded image or at least a text encoding of the image environment/code, correct? I haven't heard many people talking about that AFAIK.
      Image based programming should be an exploratory opportunity to mold the code to our will as we get a better picture (figuratively and literally) of the system we want to make before pushing forward towards "batch mode" programming for your final code output. Maybe there ought to be a batch calculation mode for huge data crunching before integrating that hard coded final answer onto the "live" portion of the code. In fact, Godbolt does present a great opportunity for exploratory batch programming if you're working with small bundles of C/C++ code and you want A/B comparisons of different algorithm implementations.

    • @davidnmfarrell
      @davidnmfarrell 2 ปีที่แล้ว +5

      ​@@SeanJMay Images have been around for at least 40 years. I think a more realistic assumption, rather than ignorance or cultural resistance is that whatever benefits they offer are not compelling for the mainstream to adopt. But rather than debating whether they are better or not, you could be building the future with them right now!

    • @davidnmfarrell
      @davidnmfarrell 2 ปีที่แล้ว +2

      @@SimGunther Yep that's possible, Squeak keeps a text representation of the code (not just bytecode) and tracks every change to the image so errors can be undone etc.

    • @JackRusher
      @JackRusher 2 ปีที่แล้ว +1

      At no point did I say you should use image-based systems. In fact, I showed a number of environments that use source files that can be checked into revision control to achieve benefits similar to those systems. :)

  • @Sonsequence
    @Sonsequence 2 ปีที่แล้ว +3

    He's wronger than he is right. I'd love to be convinced, but I think most of these prescriptions would bring marginal improvement or go backwards. The better a visual abstraction is for one problem, the more specific it is to that problem and the more confusing for others. The more power you give a human operator to respond interactively on a production system, the more likely they are to rely on such an untenable practice. The one thing I'd like out of all this is the ability to step into prod and set a conditional breakpoint which doesn't halt the program but records some state so I can step through it afterwards.
    EDIT: Reached the end of the video and thought his Clerk project, and the idea of notebooks being part of production code, is fairly nice and much more limited than the earlier hyperbole.

  • @jakykong
    @jakykong 2 ปีที่แล้ว +1

    My one real criticism of this talk is that there _is_ in fact value in being consistent over time. Change what needs to be changed, and the core thesis here (TL;DR always look for better ways to do things independently of tradition) is basically right, but sometimes arbitrary things that are historically contingent aren't bad things.
    The 80 column thing is a good example to me. It's true that we _can_ make longer lines now and sometimes that seems to have benefits, but the consistency of sticking to a fixed fairly narrow column width means highly matured toolsets work well with these, whether that's things like font sizes and monitor resolutions, indentation practices (esp. deeply nested stuff), or even just the human factor of being accustomed to it (which perpetuates since each new coder gets accustomed to what their predecessors were accustomed to by default) making it if not more comfortable, at least easier to mentally process.
    Maybe there is some idealized line width (or even a language design that doesn't rely on line widths for readability) that someone could cook up. And maybe, if that happens, there would be some gain from changing the 80 column tradition. But until then, there is value in sticking to that convention precisely _because_ it is conventional.
    Don't fix what ain't broke -- but definitely _do_ fix what _is_.

    • @jakykong
      @jakykong 2 ปีที่แล้ว +1

      Rather, let me clarify by addressing specifically the "visual cortex" thought. It's absolutely true that we should explore graphics and pictures and how they can be useful - but it's not by any means obvious to me that it's actually worth dismissing 80 columns for being antiquated until and unless graphical systems actually supplant conventional linear representations.

  • @glennzone
    @glennzone 2 ปีที่แล้ว +2

    At 38:34, Jack Rusher makes a reference to a system that works like a physics simulation, where you have control over sliders and such to modify the parameters and visualize the outcomes.
    What tool is this, and where can I find it / how can I build it?

    • @halvarf
      @halvarf 2 ปีที่แล้ว

      It's just something someone did in Clerk, AFAIU. You'd need to look at Clerk.

    • @JackRusher
      @JackRusher 2 ปีที่แล้ว

      Here's a generated notebook you can run in your browser:
      snapshots.nextjournal.com/clerk-demo/build/f8112d44fa742cd0913dcbd370919eca249cbcd9/notebooks/sicmutils.html
      There's a link at the top to the source that generated it. :)

  • @xcoder1122
    @xcoder1122 2 ปีที่แล้ว +4

    25:00 Actually, there is a formal proof that a well-typed ML program does not cause runtime errors. Not that this proof is typically used, but theoretically you can ensure this property with ML programs, which is impossible to do with non-ML programs.
    28:30 The same thing works with LLDB and a C program. I can set a variable, I can inspect it, and it will tell me the value. And this works for every compiled C program as long as it still has the DWARF debug information available, which can be in the compiled binary itself or in a separate file. So you can actually ship a C program compiled with LLVM, keep the DWARF debug information, roll the program out to the client, and if it then fails at the client, all you need to do is copy the DWARF file next to the binary and attach LLDB to the running program, and you can inspect or change whatever variable you want in the running process. But that has two downsides. Number one is that you must not strip the compiled binary, which is usually done for several reasons. And number two is that this breaks trust: having a signed program whose signature got verified on load is useless if you are later allowed to modify the program in RAM, because if you can do that, a virus can do so as well. Using unprotected, easy-to-change, inspectable LISP code for everything totally ignores the majority of business models of software companies today, as well as the needs of their customers. Fine if you only code for yourself or for fun or just want to run code on your own server, pretty much useless if you want to make a living out of it.
    33:30 Finally an accurate description of C++. And if you want to see that happen again, look at Swift instead. The idea behind Swift 1.0 was marvelous, the language not so much. But with Swift 3 they had most of the bad design decisions sorted out, and all it needed then was better compilers, better cross-platform support (e.g. Windows) and way more standard APIs. What did they do instead? Keep adding tons and tons more features to the language, some of them very complicated, hard to understand, and thus also hard to use correctly or in a meaningful way, while at the same time dropping stuff that was perfectly okay, with very weak reasoning. And the bad thing is, they are adding most new features as language constructs, not just as more core library. I really liked the freshness of Swift, but now it's just another C++.

  • @totheknee
    @totheknee 2 ปีที่แล้ว +4

    31:52 - This is not necessarily the case. Check out Casey Muratori's implementation of hot reloading a dynamic library during runtime. First iteration done in less than an hour of coding: th-cam.com/video/oijEnriqqcs/w-d-xo.html
    Later on, he has live editing of engine code that he put in a loop to tweak values. th-cam.com/video/xrUSrVvB21c/w-d-xo.html
    Obviously this is the first draft, but it shows simple live code editing in C with less than 2 seconds of turnaround time. This time could be improved if compiler designers cared at all about it, rather than spending all their time stroking their ego on esoteric compiler benchmarking.

    • @Taladar2003
      @Taladar2003 2 ปีที่แล้ว

      The problem with live coding is that you are stuck with all the screwed up state from your previous iterations and you only ever have one state and constantly need to think of ways to modify it to get it to test all the branches in your code you care about. Then you end your session and get to do it all over again the next day from scratch. That time is better spent on static analysis and tests.

  • @amasonofnewreno1060
    @amasonofnewreno1060 2 ปีที่แล้ว +9

    I find your view of things very interesting. I notice there is a lot of activity again in the "visual programming" space. However, while I agree with the sentiment to some extent, I think textual models are still going to persist.
    I would love to offer a notebook interface to "business" people, so that they can simulate the system before bothering me (it would sure shorten the feedback loop). But for the most part I think 70-80% of any codebase I've worked with is "data shenanigans", and while I do like textual data to be visually adequate (formatted to offer a better view of the problem), I don't find it enticing to expose it.
    Another problem I find is that UIs are, and likely always will be, a very fuzzy, not-well-defined problem. There is a reason people resort to VS Code: it is a text editor. So you also have this counter-movement in devtools (counter to highly interactive/visual programming), returning to more primitive tools, as they often offer more stable foundations.

    • @jones1618
      @jones1618 2 ปีที่แล้ว

      I agree about a better notebook-like system modeling tool for business people.
      As a developer, whenever I have to do any spreadsheet work, I'm also struck by how immediate & fluid it is compared to "batch" programming ... but ... also clunky and inflexible to lay out or organize. I'd love to see a boxes-and-wires interface where boxes could be everything from single values to mini-spreadsheets and other boxes could be UI controls or script/logic or graphical outputs, etc.
      Now that I think about it, I'm surprised Jack didn't mention Tableau, which provides a lot of the immediate & responsive interaction he wants to see in future IDEs.

  • @Hyo9000
    @Hyo9000 2 ปีที่แล้ว +15

    I think he conflates type safety with bad dynamic ergonomics. We can do better, and have both decent types and decent ergonomics

  • @stefanwullems
    @stefanwullems 2 ปีที่แล้ว +38

    This absolutely blows my mind. I've been daydreaming about my ideal programming language for a while now, and it basically boiled down to interactive visuals in the way Leif made them, combined with a notebook view of your program like Clerk. I'm so excited to see other people have made these things already :D

    • @IARRCSim
      @IARRCSim 2 ปีที่แล้ว +23

      I don't think those are properties of the programming language. Visualization and interactive visualization are features of a code editor or integrated development environment. Development tools for a lot of existing programming languages could do that if they just implemented those features. Those features would also be more useful for some languages than others. The features would be more difficult to implement for some than others too.
      The video makes it sound like the language and its development tools are completely tied together. If you're choosing a language to learn or use in a project, you might as well group the language and its tools together. If you're tempted to invent a new programming language because you want to use lots of visualization, the distinction is important. You can always make new tools and new features for an old language without changing the old language. Inventing a new language that no one uses doesn't help anyone else. Inventing tools for popular existing languages will much more likely cause others to benefit from your creation.

    • @LC-hd5dc
      @LC-hd5dc 2 ปีที่แล้ว +2

      @@IARRCSim yeah, like sure all the ASM boilerplate is annoying, but people could write tools to automate that boilerplate as you're typing and fold it away for visual convenience. as an example. i'm sure someone's already done it and i just haven't really looked myself.

  • @MortenSkaaning
    @MortenSkaaning 2 ปีที่แล้ว +10

    hmm, another "gadget language" talk with 100 different little demos doing 100 different little cute things.
    Can any of the audio/visual languages talk to the audio or GPU devices? And I don't mean via FFI.
    How much of this talk could simply be replaced by C++ with Dear ImGui and Live++? Sure, Live++ costs money, but Smalltalk was quite expensive back in the day.
    Why doesn't someone integrate all these wonderful cute features into the same language so we can all ascend to heaven, as if all our programming problems were that we didn't have enough anti-aliased animations to look at for a compute task that takes 10 trillion cycles.

  • @janekschleicher9661
    @janekschleicher9661 2 ปีที่แล้ว +3

    Besides spreadsheets, I think the other very popular implementation of live programs that can be changed (and inspected) while running is the relational database management system. They are robust, easy to use and experiment with, very often hold most of the business logic, and are easy to change, self-documenting, self-explaining (EXPLAIN, DESCRIBE, etc.), highly optimized (partly automatically, and you can give many hints to improve the optimization, comparable to adding types in programming), and secure. Indeed, the possible constraints are much better than in usual programming languages (with type systems and similar) in terms of expressibility, performance, durability, versioning, self-explanation, transactional behaviour and atomicity. They also allow grants at different levels of detail, while in classic programming a programmer can very often see everything and change everything, or nothing, but not much in between (whereas here, yes: you can experiment on the running system, but it's guaranteed that you can't change or break anything if you only have rights to select from other namespaces and to create views and similar in your own namespace, with no rights for altering things, and you get deprioritized automatically versus the production system when performance is in doubt).
    This liveness of the system might be one part of the story of why an incredible amount of business logic lives inside such databases and not in classical application code.
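    As a tiny illustration of that liveness using nothing but Python's built-in sqlite3 (the table and data here are made up):
    import sqlite3
    con = sqlite3.connect(":memory:")   # a live system you can poke at interactively
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    con.executemany("INSERT INTO orders (total) VALUES (?)", [(9.5,), (12.0,), (3.25,)])
    # Inspect the running system: its schema, its data, and the planner's reasoning.
    print(con.execute("SELECT name, sql FROM sqlite_master").fetchall())
    print(con.execute("SELECT count(*), sum(total) FROM orders").fetchone())
    print(con.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE total > 10").fetchall())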

    • @LC-hd5dc
      @LC-hd5dc 2 ปีที่แล้ว +2

      you're not entirely wrong, but i think the reason why people use it is because the abstractions are easier to grok at first glance. and the limitations of the languages and tools mean that you can't get completely lost.
      i still don't agree about the "liveness" of the system being key necessarily though. the whole "compile" vs "run" notion is just an abstraction; i'm not gonna get into the whole "compiled" sql statements topic, but what i will say is that you're going from "i'm waiting hours for code to compile" vs "i'm waiting hours for this statement to run" i don't really see much benefit there. the approachability benefits come from decent tooling imo (like having a small sample of data and seeing how it flows through your query), which programming tools can also implement.

    • @Sonsequence
      @Sonsequence 2 ปีที่แล้ว

      Interesting points there. I cut my teeth in a naive environment where all the backend code lived in the RDBMS server. It was very productive and carried a lot of the efficiency and elegance you note. But it was also cowboyish, error-prone, and felt more like jazz improv than crafting a great symphony. When I then went and studied proper software engineering, I greedily ate up every sanity-restoring technique.

  • @josephlunderville3195
    @josephlunderville3195 2 ปีที่แล้ว +14

    I don't get the point. People have tried all the ideas presented here in various languages. If you don't understand the reasons behind standards, or why a mediocre standard that's actually standard is often more important to have than a "superior" one that doesn't develop consensus, you're missing a dominant part of the picture.
    For example, the reason TTYs are 80 columns wide is essentially because typewriters were. Typewriters weren't 80 columns wide because of computer memory limitations, they were that wide because of human factors -- that's a compromise width where long paragraphs are reasonably dense, and you also don't have too much trouble following where a broken line continues when you scan from right to left. Positioning that decision as just legacy is missing some rather important points that are central to the talk, which purports to be about human factors.
    I could start a similar discussion about why people do still use batch processing and slow build systems. There are a few good points in here, and if what you want is comedy snark I guess it's okay. But most of the questions raised have been well answered, and for people who have tried interactive programming and been forced to reject it because the tools just don't work for their problems, this talk is going to sound naive beyond belief.
    The presenter seems particularly ignorant of research into edit-and-continue, or workflows for people who work on systems larger than toys. The human factors and pragmatic considerations for a team of 10 working for 2 years are vastly different than someone working alone on data science problems for a couple months at a time.
    The one thing I'll give the presenter is that everyone should give the notebook paradigm a try for interactive programming.

  • @timothy8428
    @timothy8428 2 ปีที่แล้ว +2

    Building buggy approximations is my specialty.

  • @matthewspradling2046
    @matthewspradling2046 2 ปีที่แล้ว +4

    Use interactive programming to write the interpreter for the programming language.

  • @RobSwindell
    @RobSwindell 2 ปีที่แล้ว +2

    The "simple transformation" comparison didn't seem fair to the C language, since you were doing an in-place transformation in the other languages while transforming a copy in C. It would have been much simpler to write, understand, and explain an in-place transformation in C as well.

  • @AG7SM
    @AG7SM 2 ปีที่แล้ว +3

    I still have warm memories of being able to do commercial Smalltalk development. Nothing else has ever felt quite the same.

    • @georgiosdoumas2446
      @georgiosdoumas2446 2 ปีที่แล้ว

      I don't know anything about Smalltalk, so: what kind of applications were you producing? Who were the customers? What computers ran those applications? What years? Why did Smalltalk not become popular for corporate/business applications the way C, C++, Java and C# did?

    • @pierremisse1046
      @pierremisse1046 2 ปีที่แล้ว +1

      Smalltalk still exists; Squeak, Pharo and VisualWorks are just a few examples!

  • @flyingsquirrel3271
    @flyingsquirrel3271 2 ปีที่แล้ว +1

    There are a lot of gems in this talk and I like the really "zoomed out" perspective. But talking about all the "traditional" programming languages we use, I couldn't agree less with this statement:
    27:24 "And I think it's not the best use of your time to prove theorems about your code that you're going to throw away anyway."
    Even though you might throw away the code, writing it obviously serves a purpose (otherwise you wouldn't write it). Usually the purpose is that you learn things about the problem you're trying to solve while writing and executing it, so you can then write better code that actually solves your problem after throwing away the first code. If this throwaway-code doesn't actually do what you were trying to express with the code you wrote, it is useless though. Or worse: You start debugging it, solving problems that are not related to your actual problem but just to the code that you're going to throw away anyway. "Proving theorems" that can be checked by a decently strong type system just makes it easier to write throwaway-code that actually helps you solve your problem instead of misleading you due to easily overlooked bugs in it.

  • @Pesthuf
    @Pesthuf 2 ปีที่แล้ว

    Ever since I worked with flutter, I refuse to use any UI framework that doesn't have stateful hot reload.
    How much time is wasted by people completely recompiling, restarting and re-inputting stuff on every change...

  • @HowardLewisShip
    @HowardLewisShip 2 ปีที่แล้ว +28

    This was an outstanding talk; interesting but with good humor. I think I need to go take a peek at Clerk.

  • @antinominianist
    @antinominianist 7 หลายเดือนก่อน

    Sometimes notebook/live coders create programs only they can run, because there is a certain dance you have to do with the machine, which acts like a magical incantation that produces the output.
    The reason for Haskell-style types is that the compiler will help you find the missing piece that fits the hole. Due to my low working memory, I love machines that think for me.
    With Unicode, most languages already support glyphs which are weirder than cosplaying a TTY.

  • @johnywhy4679
    @johnywhy4679 2 ปีที่แล้ว

    26:55 I think the space probes example is very important. If you want to design something robust, subject it to high risk, high cost conditions. Even imaginary ones. But then, do you always need space-probe technology?

  • @eric-seastrand
    @eric-seastrand 2 ปีที่แล้ว

    The algorithm sent me here. What a fascinating take on compiled/batch vs interactive programming.

  • @blablubb7937
    @blablubb7937 2 ปีที่แล้ว

    Is there a link to the bioinformatics example around 38:10? It's not in the transcript, unfortunately.

  • @AleksoLaĈevalo999
    @AleksoLaĈevalo999 2 ปีที่แล้ว +1

    Given the topic, it's weird that he didn't mention Scratch. I mean, nobody is using it and it's silly, but it fits his point about visualization perfectly.

  • @homelessrobot
    @homelessrobot ปีที่แล้ว

    Zig's direction is for this to be a part of the tooling, but not the language. In the same way that it has safe/debug build modes that do more dynamic checks and hold on to more metadata, it's eventually going to have a whole safe/debug target and execution model that is used primarily on the developer's machine to get instrumentation into both the running program and the compilation process itself, because in this mode they are the same thing happening at the same time: the program is running for the purpose of further developing its source code, as is the compiler. We don't really care about the performance or portability of the running program so much as the performance of iterating on the static and dynamic analysis loops.

  • @magictrickdev
    @magictrickdev 2 ปีที่แล้ว +2

    I can't agree with the assumption that higher-order programming languages are superior because they are more concise and expressive for accomplishing some task. *However*, I absolutely agree with the point he is trying to make. Perpetuating old ideas and popular methods simply because they are perpetuated is not conducive to adapting to the needs of modern software. Every approach to development, be it functional, OOP/OOD, DDP, or some hip new flavor-of-the-week asynchronous programming with a trendy name, has its uses. Within each of these techniques are principles that call for some dogmatic pattern and assume all other approaches are downright wrong.
    One detail I would like to pick apart about this presentation is the assumption that dynamic memory management in C/C++ means paired calls to malloc/free or new/delete. This is false; malloc/free and new/delete are not the *only* ways to manage dynamic memory. Spend enough time in C and C++ and take an interest in memory management, and you will learn very quickly that generic allocators are more often than not the least desirable method for your particular application.
    For example, one of the simplest and oldest forms of memory management in C, the monotonic allocator (or stack/arena allocator), is not only the fastest method of management, but also decouples that pesky malloc/free pairing that people continually rag on for its unnecessary mental work. Not only that, the dogmatic lore around C/C++'s memory management facilities makes it sound like demons will crawl out of your speakers if you don't issue a call to free for every call to malloc. When your program terminates, all memory claimed by the program is automatically reclaimed by the operating system, so you don't need to free your memory before main returns. The "for every malloc, you must free" principle is perpetuated not because it is a requirement, but because malloc is frequently misused, to the point that it's easier to simply state it as a rule than to explain the inner workings of virtual memory and dynamic storage lifetimes.
    The point is that we shouldn't make assumptions about how things are done and what the correct ways are. We should be open to exploring new (and old) ideas and using them to exploit computers to their maximum potential.
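
    As a concrete illustration of the monotonic (arena) idea described above, here is a minimal sketch in C11. The names (Arena, arena_alloc, and so on) are made up for the example and don't refer to any particular library: allocations bump a pointer inside one pre-reserved block, and everything is released with a single reset or destroy rather than per-allocation frees.

        #include <stdalign.h>   /* alignof (C11) */
        #include <stddef.h>     /* size_t, max_align_t */
        #include <stdlib.h>     /* malloc, free */

        typedef struct {
            unsigned char *base;     /* start of the reserved block */
            size_t         capacity; /* total bytes reserved        */
            size_t         used;     /* bytes handed out so far     */
        } Arena;

        int arena_init(Arena *a, size_t capacity) {
            a->base = malloc(capacity);   /* one call to the generic allocator */
            a->capacity = capacity;
            a->used = 0;
            return a->base != NULL;
        }

        void *arena_alloc(Arena *a, size_t size) {
            /* Round the request up so every allocation stays suitably aligned. */
            size_t align   = alignof(max_align_t);
            size_t rounded = (size + align - 1) & ~(align - 1);
            if (rounded > a->capacity - a->used) return NULL;  /* arena exhausted */
            void *p = a->base + a->used;
            a->used += rounded;
            return p;                     /* note: no matching per-allocation free */
        }

        void arena_reset(Arena *a)   { a->used = 0; }   /* release everything at once */
        void arena_destroy(Arena *a) { free(a->base); a->base = NULL; a->capacity = 0; }

    In a typical use, a caller allocates freely while building up a frame, parse tree, or request, then calls arena_reset once instead of tracking individual frees.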

    • @LC-hd5dc
      @LC-hd5dc 2 years ago

      The reason that "you must free every malloc" comes up so much is that if you don't free enough, then for a sufficiently long-running program that keeps allocating memory (browsers seem to love doing this) you'll eventually run out of memory. Or, more realistically, the OS will start paging memory out, performance will drop, and eventually the OS will decide your process is too intensive and just kill it outright. Also, what if you're running on a non-modern OS? In kernel mode? Or if you're writing a library? You are reasonably expected to call free in those cases.
      Sure, there are other options (alloca is one I can think of), but if you're using malloc you should be freeing, full stop.
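
      To make that failure mode concrete, here is a small hypothetical sketch in C of the kind of long-running loop being described; the serve_forever name and the 64 KiB scratch buffer are made up for the example:

          #include <stdlib.h>
          #include <string.h>

          /* One scratch buffer is allocated per iteration. Without the free(),
             a process that loops indefinitely grows by 64 KiB per iteration
             until the OS starts paging it out or kills it. */
          void serve_forever(void) {
              for (;;) {
                  char *scratch = malloc(64 * 1024);
                  if (!scratch) break;            /* allocation failed: stop or back off */
                  memset(scratch, 0, 64 * 1024);  /* ... stand-in for handling one request ... */
                  free(scratch);                  /* omit this line and the process leaks */
              }
          }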

  • @MarkRuvald
    @MarkRuvald 2 years ago +12

    Clerk is then basically the same as Mathematica, just in Lisp and with fewer functions.

    • @sidneymonteiro3670
      @sidneymonteiro3670 2 years ago

      You nailed it.
      The absence of references to Mathematica in this presentation really stings.

  • @siquod
    @siquod 2 years ago +2

    I want backwards debugging! Why should I only be able to figure out what went wrong after it did?

    • @JackRusher
      @JackRusher 2 years ago

      💯

    • @dougmerritt5420
      @dougmerritt5420 2 years ago +1

      Debugging that allows stepping backwards in time has in fact been implemented many times for many languages in many environments -- which is not to say that it's super common or easily available where one wishes, of course.

  • @Zen-rw2fz
    @Zen-rw2fz 2 years ago +1

    I usually do this to better understand computing. I don't even work as a developer of any sort, so I'm just doing this as a hobby, and it's fun to have these challenges.

  • @perfectionbox
    @perfectionbox 2 years ago +1

    Iterative environments that combine coding with runtime are nice (like spreadsheets), but they're not terribly amenable to team projects, and maybe not to code revision systems either.

    • @lepidoptera9337
      @lepidoptera9337 1 year ago

      That depends on the problem. Basically all of the CGI movies today are being created with visual design environments that define the effects and their dependencies using graphs. Electrical engineers have been designing circuits using graphs for over a century. If graphs are a good description of the problem, it's all good. If they are not OR if heavy editing is required, then graphical design environments are a real pain.

  • @toddmarshall7573
    @toddmarshall7573 2 years ago

    10:55 "...can you give me an instance where this operation is even more concise?...": Yes
    GLEE: 1..4 +1 $; Result: 2 3 4 5
    27:02 "...programming is actually a design discipline...": I call it software sculpture.
    27:22 "...build buggy approximations along the way...": optimize early, arrive late. My process is tunneling: work with what you've got and move it toward what you want; move what you want toward what you've got. They meet somewhere in the middle and guarantee success.
    27:42 "...continuous change...": Classical practice tries to "trap" the problem through exhaustive analysis and sign-off. Successful practice "tracks" the problem...until those who know what they want (i.e. they recognize it when they see it...and when they don't see it, they know what they want to be different) have what they want.

  • @Ceelvain
    @Ceelvain 2 years ago +3

    Yeah, I'm really unconvinced by most of that talk. Although some ideas are worth drawing from.
    Computers *are* batch processors. Every program will have to cold-start at least once in a while. That's the very reason we still write programs. That's even the reason they're called programs: it's a schedule, like a TV program.
    If all you care about is the result for some particular data, then sure: do things by hand, or use a calculator, or a spreadsheet, or a notebook. But rest assured that what you're doing is not programming if you don't care about having a program that can be re-run from scratch.
    And unfortunately, outside of some trivial cases, we can't just fix a program and apply the changes to a running program without restarting it from scratch. But having some kind of save-state + run new lines of code could help with the debugging.
    Also, any kind of visual representation of programs and data won't scale beyond trivial cases. Visual programming is nothing new, yet it hasn't taken over the world. Why? Because the visual representation becomes cluttered for anything beyond the equivalent of a handful of lines.
    The tree-matching example is nice and helpful, but very specific, and I doubt it could find wide use in practice. Most of the time, any visual representation deduced from the code would be unhelpful, because the basic algorithm is hidden among a ton of edge-case and exception handling, and an automated drawing tool wouldn't know which code path to highlight.
    I do agree that types *do* get in the way of fast prototyping, when you discover what you wanna write as you write it. And that's why I love Python. Fortunately, not all programs are like that. Many programs just automate boring stuff. And even those programs that do something new usually have a good chunk of them that is brain-dead handling of I/O and pre/post-processing. Those parts of the code could (and probably should) be statically typed to help make sure they're not misused in an obvious way.

  • @KaiHenningsen
    @KaiHenningsen 2 years ago +6

    """"""""Hah. When he says batch processing is bad because it takes forever, I think batch processing is good because it doesn't force me to sit around, doing the same stupid thing for even longer than the batch takes ... based on this, I might argue that GUIs are dead, because they're resistant to automation (not that they have to be, but designs to avoid that never seem to really take off).
    It's easy to point to problems. It's much harder to come up with solutions. And then it's much harder still to come up with solutions that can actually pay for their development.
    And similar arguments can be made on more of his points.
    And if your terminal.app insists on 80 character width (I kind of doubt it), use a terminal app that doesn't, which is almost all of them. (And I remember the old standard output width as 132, paper was wider than 80.)
    Oh, yeah. The visual cortex exists. And you know what? It loves fixed-width text when programming, and it hates variable-width text. Now for uses where you mainly do free-flowing text, things are often different, but that's not what any halfway reasonable programming language looks like. There's a damn good reason that GUI-based programming hasn't taken over the world. It's not as if people haven't tried. But for most applications, it's shit. Well, chocolate-covered shit - it looks nice, but I still don't want to eat it. (And your periodic table seems to be a lot harder to read than the standard one. And even as a pure teaching tool - those curves don't represent any actual physical reality, they are a distraction.)
    You know? Many of these features you laud are what I hate about spreadsheets, and what I think of as utterly anti-ergonomic. Don't get me wrong, spreadsheets are powerful - but the way they are programmed is shit. You can't reliably see the whole thing unless you go over every cell and look at what formula may be in it; it's far too easy to wipe out part of your logic by changing a cell you were not aware had logic in it, or trying to insert data without realizing where you'd first need to replicate logic (because of course if you want to convert some columns into a new column, every cell in that new column needs its own copy of the logic); and on and on.
    _"Note, in particular, the complex double float at the bottom that shows you a geometric interpretation"_ I can barely identify where I ought to look; I can barely read some (but certainly not all) of the text, and what I'm guessing is some kind of graph is just some unrecognizable points and lines. _"So this is amazing. This is also 1980s technology."_ Well, I'm unsure from when 720p videos are, but it's 2022. Video doesn't have to be this grainy.
    "If your error handling is the program stops then it's pretty hard to recover, right." Wrong. There's a reason that that is exactly how your beloved Erlang does it. And I don't expect my programming environment to give me ten dozen ways of handling a runtime error during development, even though a reasonably good debugger can do this even for languages like C. I do expect it to tell me about most possible bugs before runtime even starts, optimally concurrent with writing the code. (That's also a place where types come in handy - the better I explain to the machine what I want to do, the easier it can tell me if I'm doing it wrong.)
    "It's quite colorful and beautiful probably for some of you garish but I like it" ... I have no problem with the colors, I like syntax coloring and there are only so many visually distinct enough colors to work with - but I can't easily wrap my head around how the code arrangement relates to the logic, and I really don't like the partially-obscured parts.

  • @NFSHeld
    @NFSHeld 2 years ago +1

    Being a C# developer, I am not a huge fan of Python's principles for programming. But I really do see the value that Python provides within the ML world.
    Imagine having a list of single numbers or pairs of numbers, and you want to get the absolute difference when it's a pair.
    Python (reciting from memory): x = list.diff().abs().dropna()
    C# using LINQ: x = list.Where(p => p.Count == 2).Select(p => Math.Abs(p[1] - p[0]));
    Python is so much "cleaner" at communicating what you are doing. And then you add just another three lines of code to calculate a density function, plot it, and query the 95% quantile, all within the notebook. That's really cool.

    • @HoloTheDrunk
      @HoloTheDrunk 2 years ago

      Now if only Python allowed you to put those on different lines without resorting to Bash-era backslashes, wouldn't that be nice? 🙃(probably my 2nd or 3rd biggest gripe with Python ever)

    • @hunterpayne6167
      @hunterpayne6167 2 years ago +3

      Python is a terrible language for ML. Good languages for ML are functional. Just look at the arguments (i.e. hyperparameters) to any ML algorithm: all of them are either booleans, doubles, or functions. Python is used because physicists are smart people but terrible programmers who never learned any other languages. The ML research community (computer scientists) kept functional languages alive for 30 years pretty much by themselves. They didn't do that for fun; they did it because functional languages are the best for AI/ML programs.
      Python is only popular in data science (it isn't popular in the ML research community) because universities are cheap with IT support and because physicists think they can master other fields without any of the necessary training, background, or practice. Python is a sysadmin's language designed to replace Perl. It is good at that. But since sysadmins are the only type of IT support universities provide to their researchers, guess which language they could get advice/help/support for?

    • @LC-hd5dc
      @LC-hd5dc 2 years ago

      Honestly, LINQ is really nice if you're coming from the world of SQL. I'm not saying C# is perfect (I don't even use C# these days), but LINQ has never been a point of complaint for me. Plus, if you think LINQ is bad, check out Java...

    • @MrNatsuDragneel
      @MrNatsuDragneel 2 years ago

      @LC-hd5dc This list.diff().abs().dropna() can be done in C# with five minutes of coding, and you can reuse that .cs file in every program and subroutine.

    • @NFSHeld
      @NFSHeld 1 year ago

      @LC-hd5dc Oh, I absolutely love LINQ, but sometimes the separation of "what to do" and "what to do it on" makes things complicated, and unless you want to create 50 extensions, it'll be more verbose but still less clear what a LINQ expression does.