@@DogeMultiverse No, totally disagree. We're still digging ourselves into a hole. We first need to get out of it. Watch: "The Mess We're In" by Joe Armstrong
I remember reading a story about a company that was trying to send a program to a customer in France. Trying, because every time they did, it would fail to run on the customer's hardware. Finally they sent someone over with a case containing the program to sort things out. When he went through customs he dutifully declared the program as an imported product, whereupon the customs official pulled a few cards out as a required "sample" of the imported product. Oh joy.
I had a program to run remotely on lots of servers and something in the shell and terminal setup was eating a few of my characters. I added multiple lines of #### for a NOP slide to overcome that.
This is awesome. They could put throwaway code onto a few cards, like some superfluous OOP or the Rust borrow checker border patrol or whatever they call it, and the rest of the program could still run in France.
This guy speaks so fast... basically about 5 presentations in the time of 1, but somehow he is completely understandable and he keeps the attention of the audience!
I was almost through this entire lecture when I realized that all these issues sound like "when you're a hammer, everything looks like a nail." We were trained by a thousand editors and programming languages to approach problems in a particular way instead of asking, "What is the best tool to approach the type of problem I'm working on?" Thanks for showing some really good tools and challenging us to make tools that are equally good for working with certain types of problems and data sets.
But it also triggers my silver bullet detector. While I agree C++ is a bloody mess, you can still write reliable real-time programs in it. Of course, you can't use dynamic memory allocation (apart from the stack used for function calls) and you have to be careful about which standard libraries you use. And C++ is a pain syntactically. I wonder how Python works in real-time systems with digital and analog inputs?
"The best tool for the job" largely depends solely on what the most senior programmer in the company is familiar with. It rarely has anything to do with tech and more to do with politics. These guys have usually been with the company since the beginning and the executives know him and trust him, so he has carte blanche to do as he pleases, so if he thinks the best tool for the job is Cobol or Delphi then that's exactly what will be used as long as it delivers software that makes money for the company. Sorry to burst your tech utopia bubble but politics and profits are way more important than the "tools"... if management agrees that the latest and greatest tech needs to be used to write good software then thats what will happen, if they agree that the legacy code is working fine and doesnt neeed to be written in the latest tools then sorry for the 20 year old junior intern but you will need to learn the ancient tech to work there and it will look terrible on your CV but that's just how it is.
I'm a big fan of the idea of "the right tool for the job." I hate when people try to force solutions into a system just to reduce the total number of systems/languages in use. My current company does that: everything in JavaScript, even when other frameworks or languages would be better.
I think the biggest problem with all visual examples is that they work great for data-science or theoretical algorithms, but far less for your run-of-the-mill "corporate programming" such as (web)services. When building services, almost all of the programming is about creating a model of the real world, and not so much about visualizing and transforming data. All those examples of graphs, tables, flows etc. work really well for data-science (hence things like Jupyter are so popular there), but they don't generalize to domain modeling very well. I would absolutely love to have some sort of interactive and visual environment to build and maintain domain models, but I've yet to come across anything like that.
I feel like Dark Lang is pretty close to what you're describing, and it seems really cool, but I'm not quite ready to have so little ownership of the tech stack
Then it may please you that _informatics started with such tools,_ like the Sketchpad from Ivan Sutherland (but it's better to learn about it from Alan Kay, because the original demos don't really explain the difference between "before" and "after") or the NLS from Douglas Engelbart (look up the Mother of All Demos, and pay some attention to the date, or the hint at the end that ARPANet "will start next year"...). Unfortunately, Engelbart's Augmenting Human Intellect report is a very hard read; the whole field lost the point, and the result is what we have today.
Results like: we have the ultimate communication infrastructure, but people feel no pain in
- limiting themselves to a single bit, "Like", and thinking that any number of likes can ever be worth a single statement;
- repeating the same statements picked up here and there without processing, and pretending that it is the same as a dialog;
- ripping off and descoping the "Stop drawing dead fish" lecture (Bret Victor, 2013) in 2022. It's not about coding and punch cards but our very relationship with information systems (in machines, libraries, human communities and within our own brains).
_"Why do my eyes hurt? You have never used them before."_ (The Matrix, 1999)
My first programming class used punched cards running FORTRAN on a Sperry/Rand UNIVAC computer (an IBM 360 clone). As a consultant over the subsequent decades I would carry a little history kit to show the newbies - some punched cards, a coding pad (80 columns!), 9-track tape, 8" floppies, and a little bag of coal as a sample of what we had to keep shoveling into the back of the computer to keep up a good head of steam. As my friend called it - "The age of iron programmers and wooden computers."
My high school had Apple ][s and UCSD Pascal but the teacher didn’t want to learn a new language so we had to do Fortran on punched cards, instead. The cards would go to a university about 30 minutes away but the results took a week to come back.
And then you learn that there was a FORTRAN available for the ][s UCSD system and weep. I once wrote a punched card Pascal program (for a uni course before terminals became available for those) by first developing in UCSD, then going to the card punch with the resultant listing. (I'm not sure, it might have been the 7 billionth implementation of Life.)
@@TheAntoine191 I think deeming it "OK" is valid for those who still must maintain programs in it, but there are still too many leftover - or even new - oddities that prevent it from being used in the ways that C is still useful. Some of these being:
- if you want an array of pointers to some data type, you have to use a structure;
- the lack of a true way to define/typedef custom data types;
- the intense duplication and verbosity required when declaring methods on classes;
- the syntax for declaring subroutine/function arguments;
- and the lack of a literal syntax for nested data structures (like assigning a value to an array that exists as a field inside of a structure, all at once).
However, other old, largely forgotten languages like Ada, Modula-2/3 and modern variants of Pascal (Free Pascal and Delphi) certainly do have many redeeming qualities and are still very usable to this day, sometimes even more so than mainstream/popular solutions, Ada being the biggest tragedy out of the ones mentioned, in my opinion.
I would love this, but give me a language and IDE that properly completes symbols for me, is context aware, is _interactive_ before I've even finished writing.
- That's why I like types. Kotlin, C#... they are helpful sooner. They catch nearly all typos. In fact, I always tab-complete, so I never have to worry about typos.
- I tried Elixir because the Erlang model is so great, and I had dumb mistakes right away (typos, wrong symbol, etc.), all costing lots of time to go back and fix. Only found by running tests, instead of before I even made them.
- An environment that lets me make mistakes is worse than one where I notice them ~live. Worse still is only type checking (and no help) at compile time. Even worse is only getting errors at runtime, which, sadly, for many reasons, is where I would end up when trying Clojure. A lot of things are a problem to do in the REPL, say I need to inspect the argument to some callback. In Kotlin, I at least see the full type spec, and the IDE is helpful. In Clojure, I need to mock-trigger the callback, hope it roughly matches production, hope I can "suspend" inside the callback and hand-craft a reply, and that's even worse: how do I know what reply it wants? Reading docs is tedious. Filling out a clear type "template" provided by the IDE is really nice and simple in comparison.
One of my more unmistakable descents into IT Madness: At Conrail, I had to write out my COBOL programs on 14-inch green-and-white coding sheets, and send them over to the 029 experts in the Punchcard Department. Next day, when they'd dropped the code into my Shared Storage, it would contain so many errors that I had to spend an hour fixing it... So I took to typing my code directly into Shared Storage, using my handy-dandy SPF Editor... and was REPRIMANDED for wasting my Valuable Professional Computer-Programmer Time.
@@pleonexia4772 You load the large dataset once and edit/rerun the code on it over and over instead of reloading the dataset every time you want to make a change to the code.
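To make that concrete, here's a minimal Python sketch of the workflow (the file and column names are just made up for illustration): in a notebook or REPL you run the expensive load once, then keep re-editing and re-running the cheap part against the data that's already in memory.

# cell/chunk 1: run once; the expensive result stays in memory
import pandas as pd
df = pd.read_csv("big_dataset.csv")   # hypothetical file that takes minutes to load

# cell/chunk 2: edit and re-run this as often as you like
def summarize(frame):
    # tweak this logic freely; df above is never reloaded
    return frame.groupby("category")["value"].mean()

print(summarize(df))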
Make sure you also get familiar with breakpoint debugging and stepping through running code. Absolutely essential for a self-taught programmer in the "popular" languages.
@@pleonexia4772 look up Don Knuth and literate programming. Pretty common in Emacs circles to write executable code in blocks in org-mode (a kind of "markdown"), a precursor of these notebooks.
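If it helps to see it, this is roughly what such a block looks like in org-mode (a minimal sketch using the standard Babel syntax; C-c C-c inside the block runs it and drops the output underneath, right next to your prose):

#+begin_src python :results output
# plain Python living inside an org document
squares = [n * n for n in range(5)]
print(squares)
#+end_src

#+RESULTS:
: [0, 1, 4, 9, 16]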
It's all spot on. Optimally, we would spend all of our time solving the actual problem at hand, instead of spending most of it fighting the details that emerge from our choice of tools/solutions.
Yeah, it makes VI look logical. When I first saw VI, I could never understand how people accomplished anything, but my boss [i.e.: my uncle] kept pressuring me to use it.
@@eugenetswong But the fact that a subculture of people has been using, for decades, ~IBM-compatible keyboards with editor software that's totally mismatched to them is kinda hilarious.
@@tinkerwithstuff It really is, as I started learning computers when I was 8yo on DOS 6.22. edit.com just felt natural for the IBM PC keyboard. When I came to the Unix world, their stupid editors always felt "wrong", anachronistic. Why can't I have edit.com? Every sane editor I ever used on PCs with Windows or OS/2 Warp was like that. (And yes, I installed OS/2 Warp when I was 10yo on my PC.) Linux/Unix always felt like going to the past, to a museum. That can't be true - why would anyone ever want to use vi/vim? Emacs at least made sense: you call everything with "command", which is `ctrl`, like every modern keyboard shortcut in any GUI program like QBasic or edit.com or MS Word. Then I found nano - well, that solves the problem.

But the more I studied Unix/C, the more I felt like I was at a museum. Like, why? Why must I program my supercomputer x86 from 2007 like a freaking PDP-11? Don't get me started on how brain-damaged writing shell scripts is. I HATE IT. Why can't you "unixy/linuxy" guys just use Perl or Python? And the top of my Unix journey was autotools. FSCK IT! No, I've had enough; even CMake is better than that, even ".bat" and "nmake". I'll never, ever, ever use it - just reading the docs gives me headaches. Why, why do you have 3 abstraction levels of text generation? It's absurd; it's literally easier to write the commands manually (in `nano`) and ctrl-c ctrl-v them to get the freaking binary. And when I'm choosing libraries for C++, I choose NOT to use any that only provide a build script for autotools.

Let's also ignore how all code carrying the "GNU" label is basically horribly written, from a 2010 perspective - and I've read a lot, A LOT of C/C++ code. It's just amateur code, not professional, by modern standards. It baffles me that people think they are good. If it's from a GNU project, the code is basically a bodge. An example is "screen": not only is the code really bad, the user interface of the tool is really, really bad, like a circular saw plugged into an angle grinder that hangs from the ceiling by its cable - no wonder you keep losing your arms. And those horrible, horrible things are worshiped as if they were the holy grail of the `Church of C`, or should I say the `Church of PDP-11`. I understand the historical importance of such things, but they are super anachronistic. It's like driving day-to-day in a Ford Model T: it's not good, it was good for its time, but I prefer my modern 2019 Peugeot. I wanted to do computing, not archeology of old computing systems. That's what Unix always felt like. I like knowing it, and experimenting with it, but I don't want to use it in my day-to-day job - but is there any other option?
The one thing I don't get is his hate on "fixed width", though. Whenever I program in a new environment that uses proportional fonts, I switch to something with fixed width, because without it numbers don't line up any more. A 1 takes less screen space than a 2 without fixed width, and the code looks ugly. Even worse if you depend on white space, like Python...
I have to say some things about this talk really irked me. Like the implication that APL has superior syntax because, for this very specific use case, it happens to be quite readable and more terse than the alternatives. Most choices are a compromise one way or the other. Compiled languages might be "dead programs", but that's the cost you pay for function inlining, aggressive code optimization, clever register allocation, known static stack layout and so on. That's why compiled languages are fast and static and not slow and dynamic. It's all a trade-off. In fact, just yesterday I had an idea for code hot-reloading in Rust. One limitation that immediately came to mind is that every control flow that crosses the module border will have to use dynamic dispatch, mostly preventing any meaningful optimization between the two.
Yeah, this exact exchange is what I was thinking about while listening to him. Compiling isn't a bad thing, it's an optimization. I use Python for rapid prototyping, for instance, but when I'm done playing and ready to do some work, I write my final version in C++, because it's fast. Yes, I've spent days compiling libraries before, but once they were compiled I didn't have to worry about them, didn't have to wait for my computer to chug and choke on parsing complex human-readable source. Computers are not humans, don't feed them human. This whole mentality is an offshoot of the "just throw more hardware at it" camp, one I find regrettable.
@@Nesetalis The problem is that most languages don't have both an optimized and unoptimized (introspectable) version. I want to be able to do both without changing language. I expect he does as well.
@@jrdougan I don't think that would be enough for him. It seems like he wants introspection in production. I don't see how this is possible without making some major tradeoffs like globally turning off optimizations or annotating things that can be introspected. In fact, it seems like he even wants to make the code modifiable at runtime (not necessarily the production code, though).
@@gamekiller0123 I mean, why not? Basically we're already doing it, just in a slow way. In bigger projects you usually don't just deploy and overwrite your previous version: you deploy it, let it run through the staging/production pipeline, then make it available first only via an internal route for the programmers and the integration testing pipeline, then canary it to a small share of users and monitor it. If nothing fails, you make it available to a significant share of users (routing them to the new version while still keeping the old one), and if monitoring shows nothing wrong, you make it the default, stop serving the previous version, and finally do another deployment some time later to get rid of the deprecated functionality.

So, in effect, we are changing the runtime without really switching it off (if we regard the running distributed environment as one executing unit). But the whole process is slow (we are talking hours to see the first changes and days until everything is finished -> very punch-card-like) and hard to debug and monitor (even with tools like distributed tracing or Kafka or whatever). There wouldn't be anything wrong or scary about a programming model that allowed these changes to be made directly in the runtime (probably still keeping different versions), instead of doing it at the microservice level with the help of container runtimes, routing services and complicated tools for introspection. Just doing what the language should do for us ends up requiring knowledge of Docker, Kubernetes, API gateways, Prometheus, DataDog, Kafka, a CI/CD pipeline, and many things I've probably missed off the top of my head. In the end, most companies are now in high demand for DevOps engineers to optimize this process (-> punch card operators are back), because the complexity is too high to really expect the programmers to handle it while they are trying to solve a completely different problem (the business case).
I do agree that having runtime reflection is a great thing, so that we can look at the environment/state over time. But I hard disagree with most of the other points in this talk.
1. Comparing C / C++ / Rust / Zig with Lisp / Clojure etc. is just plain wrong. Anyone can see that these languages are targeted at different use cases. They are manually memory-managed, low-level languages for precise control and peak performance to extract everything out of the hardware - literally just a step above assembly.
2. This talk conveniently skips over things like garbage collection (and performance in general), except for a reference to a tweet about devs being okay with stop-the-world compile times but not stop-the-world garbage collection. Games or apps sensitive to latency (real-time music/video editing, trading, etc.) just cannot afford that garbage collection pause no matter what. But devs can very much afford that compile time.
3. Saying Rust and other ML-family languages don't improve software is also debatable. Rust's type system turns runtime errors into compile-time errors, making the software more reliable. Rust is in trial mode in the Linux kernel because it provides a proper safe type system that C doesn't.
Most of the talk is about debugging, viewing runtime state and live interactive coding, which is more about the tooling surrounding the language than the language itself. We definitely need better tooling, and many projects shown in the video are good examples of what might be possible in the future. For anyone interested, I recommend the talk about the Dion format dion.systems/blog_0001_hms2020.html which also covers editing syntax trees in a custom IDE instead of a text language with syntax. Rust has been getting decent tooling to improve developer experience: github.com/jakobhellermann/bevy-inspector-egui for example shows all the game state live AND allows you to modify it, and there's github.com/tokio-rs/console for async server apps to provide a look into the runtime state. You can always add a scripting language to your application (like Lua) and query any state you want. There are other initiatives like LunarG's GfxReconstruct, which will dump all the Vulkan state so that the developer can reproduce the GUI/graphics state exactly on his machine by receiving the Vulkan dump from the user. People are working on lots of cool ideas to help with debugging state machines. Although I think a custom Rust-specific IDE would go a long, long way.
Not a Rust guy, but Rust is a great example of how he missed the point of static typing. It's feedback at compile time. Runtime errors are errors caught by the end user if you are not careful.
All that "types slow devs" sounds like dynamically typed languages are better. Maybe they are... until your fancy no-explicit-types JS or Python or whatever app crashes in the middle of logic because, for example, you forgot to parse string into number. Languages with static types (even as limited as C) just won't allow you to run such nonsense at all. Types are helpful for reliability, TypeScript, Python typing, etc. confirm this. Better slow down 1 developer team than have 1000 customers with "oh no, I made an oopsie" crashes.
Thank you. Way too many people who don't actually work on real systems completely ignore performance and maintainability and focus way too much on writing code 'quickly'.
Ad 1. And how does being low-level and manually memory-managed for peak performance stop you from having nice things like runtime-modifiable code and introspection into the live system? Those are orthogonal concepts and they aren't mutually exclusive. The C++ approach is 'pay only for what you use', but there doesn't seem to be much to 'buy' when you actually would like to pay for those niceties.

Ad 2. It's not that devs can afford the compile time, it's that they have to in some languages. E.g. you can run Haskell or OCaml in an interactive shell while developing, but compile to get better performance for release. JIT compilers exist for various languages, so it's not like you cannot have a runtime-modifiable system that performs well. C# has a garbage collector, but you can use manual memory management to some extent when you really need to (high performance or interoperability with C/C++). It's an attitude problem: the designers of the language(s) decided that it's not of enough value. The point of this talk as I see it is to highlight the value of the presented things and get language designers to think about such use cases.

Ad 3. This is only an improvement in an environment with forced compile/run cycles. You type something, launch the compiler (or your IDE launches it in the background), wait between 0.5 s and 60 minutes for it to compile, and you get an error about a wrong type. You fix it, compile again, run it, and spend between a second and a minute verifying that it works as expected (i.e. ruling out problems that weren't caught by the type system). Now compare it to: you type something while your program is running, you see clearly incorrect results on screen and on top of that you get an error. You modify the code while the system is still running and you see correct results on screen. IMO the second workflow is much better and more efficient. Also, look at TypeScript or Python - you can rapidly prototype your code omitting the types, or add type annotations for additional safety.

TLDR: compiled/statically typed vs interpreted/dynamically typed - you could have both and achieve high efficiency in development as well as high performance at runtime; there's no need to limit yourself.
I never even stopped to think about it; now I have a name for it: introspection. Before studying the hard theory behind regular expressions, I never actually understood them and just resorted to copying one from Stack Overflow. After learning the theory, I still don't write them punch-card style; instead I like using websites where you can test them in place, see explanations and so on. Now I don't feel bad for wanting to attach a Java debugger to a live server haha
Indeed, that's the same point that game devs John Carmack and Jon Blow make. The debugger is the most useful environment there is. Also note that regex is amusingly not the same thing as formal language theory's regular language. After I learned that I started to forgive myself for having a hard time with them. en.m.wikipedia.org/wiki/Regular_expression#Patterns_for_non-regular_languages
Debugging in a live environment is very problematic. Just imagine the order process of a web shop: you debug it in execution, accidentally mess things up, and since you stopped the process it's not executed, and other orders aren't coming through either. There is a much better way: write tests. I sometimes don't even try out my changes manually. It's tested, and if I had broken something the chances are high that some test would catch it. Some testing frameworks even have watchers that execute the tests every time you save your file, so you immediately see if your changes work. If you have proper tests, there isn't much in production that can cause it to fail. So instead of debugging a live server I would rather set up the development process in a way that finds bugs before they reach live. That at least works really well for me.
@@Duconi Nobody intentionally writes bugs. Prevention is good but not perfect. Don't you still need a minimally disruptive way to fix the live environment?
@@brunodantasm It depends on which kind of regex you are dealing with. Regexes from SQL or grep are real regexes. The ones in many scripting languages that use the Perl syntax are fake regexes and can be six orders of magnitude slower on hard inputs.
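You can feel the difference yourself with a small sketch against Python's backtracking re module (a DFA-based engine like grep's or RE2 handles the same input in linear time):

import re, time

pattern = re.compile(r"(a+)+$")   # classic catastrophic-backtracking pattern
text = "a" * 24 + "!"             # almost matches, so the engine backtracks

start = time.perf_counter()
print(pattern.match(text))        # None, but only after ~2^24 failed attempts
print(f"{time.perf_counter() - start:.2f}s for a 25-character string")
# each extra 'a' roughly doubles the time; grep stays effectively instant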
Now I wish there were a part 2 of this talk that goes into more detail on modern language options that tackle these issues. A lot of the options mentioned seem near impossible to set up in a dev environment, because the tooling is so outdated that I have to spend more time getting the environment to work than even thinking about programming in it. It especially seems like there are no options whatsoever when it comes to hard real-time applications like audio synthesis.
Yeah, it's a peek at the future, if people decide to pick it up. Hope it comes to fruition, because bringing a coder closer to their code will only make it easier to see what actually goes on, past the abstraction of language syntax, semantics and language-specific quirks.
What I really enjoy about Dart is that, even though it's punch-card compatible, thanks to hot reload I usually need to compile the program just a couple of times a day when I pick up some new thing. Most of the time code can be reloaded in real time with an incredibly short feedback loop. I still wish there were more features that would help visualize the structure and relationships of code, but it's already so much better than most of the tools in the mobile ecosystem.
I've been chasing live system programming for years. Dart provides a lot of what I am looking for, as well as Python with hot reloading (see a project called Reloadium). One of my ideas for my own system (that has yet to be written) is a little similar to the last example in this video. There are nodes which represent your program and there are "sparks" of execution so you can see data flow through the system.
It's very hard to carve a statue with a can opener. Selecting the right tool is key to success. But then most people also have an employee mindset, they are not toolmakers. It's good to see what other methodology is out there in order to set the right expectations in the users of programming environments and languages.
The idea of visual programming was not ignored; it has been tried over and over and failed in many ways. The keyboard remains the best input device available, and programming languages are structured around that: program input and algorithm expression. The visual cortex can process much more, but the human mind cannot express ideas faster than it can speak them or type them. What we need are not non-text languages; we need better code visualization tools that take existing text code and annotate it in an easy-to-understand visual format. The entire compiled-artifact diatribe becomes irrelevant if the programming environment has an Edit & Continue feature that recompiles a minimal amount of text code, then relinks and reloads the parts affected, so you can continue debugging from the same state, or some saved intermediary state before the bug manifested.
The Edit & Continue bit was exactly what came to my mind as well when he mentioned that. A cool example of a large program that needs to not fail while doing this is the Linux kernel when live patching is used!
APL actually has a shortcut for making a list like 1 2 3 4, such that you can do the example program in only 4 characters: 1+⍳4 (that's the APL iota symbol) instead of 1+1 2 3 4
@@thoperSought APL is part of the "fun thought experiment but the next guy will just want to shoot himself while reading your code" languages. No sane person would use it for large software (or at least I hope so).
@@thoperSought The “expert-oriented” terseness of APL/J/K is scary at first, but it soon pays off, because the core languages are so tiny that you can become an expert surprisingly quickly. There are only ~5 syntax rules and ~30 symbols to learn, depending on how you count. Beyond that, basically all of the problem-solving skills are transferable to other languages, especially to APL alternatives like numpy/R/Julia/Excel.
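For example, the 1+⍳4 mentioned above carries over almost one-to-one to numpy (a rough sketch; note that APL's ⍳ is 1-based by default, hence the arange bounds):

import numpy as np

# APL: 1+⍳4  →  2 3 4 5
print(1 + np.arange(1, 5))   # [2 3 4 5]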
1:30 The 80-column "Hollerith" punch card design is an advancement over the original card with the same number of ROWS (12) but what I think were just 27 columns (circular holes rather than narrow rectangles), designed by Hollerith himself for tabulating the 1890 U.S. census, decades before there were "computers".
It's not just the code itself that can have a lot of "this isn't part of the actual problem" problems. All of the "technical bureaucracy" (certificates, hosting, provisioning, deploying, releasing, building, source control, branches, pull requests, code reviews, unit/integration tests) contributes in a big way to stuff not part of the actual problem. In addition, "corporate bureaucracy" (development process, useless roles, incompetence, corruption) is a killer. At the end of the day, maybe 5% of your mental effort goes to solve the real problem, and the end result is ruined by the other 95%. Solving a problem with 5 lines of code versus 1000 lines just gets lost in all the other noise.
Imagine a craftsman complaining that one needs to know metalwork to craft woodworking tools. Or a soldier moaning that all those logistics officers are not contributing because they don't fight. You'd just laugh at them. Creating tools has always been an investment, spending effort on one task to make another task easier. Teamwork has always required coordination. IT is no exception. If you become able to multiply your workforce by 50 and spend 10% of that on the "actual problem", you have quintupled your progress. If you don't want to coordinate a team, your only other choice is to work solo. And while it sounds intriguing not to deal with 49 other lunatics and their code that conflicts with everything, including your sanity, it will really slow you down, more than team coordination ever could.
I think your argument applies to just reducing LoC, but better abstractions can also eliminate certain types of mistakes. For example, a hash function builder reduces the chance that some hash function is written incorrectly and produces collisions.
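A made-up Python illustration of that: letting dataclasses derive __eq__ and __hash__ together removes the whole "equal objects that hash differently" class of bugs that hand-written versions invite.

from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: int
    y: int
    # __eq__ and __hash__ are generated from the same field list,
    # so equal points can never accidentally hash differently

p, q = Point(1, 2), Point(1, 2)
print(p == q, hash(p) == hash(q))   # True True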
Docker's imperative config and uninspectable (possibly even malware-ridden?) root containers are to me already part of that legacy mentality; people just slap it in because everyone else is doing it, not because it gets the job done best. Declarative config and orchestration is the way to go to eliminate most of the issues you mention under "technical bureaucracy", as you call it. "Corporate bureaucracy" is just capitalist problems, and incompetence has nothing to do with this discussion. Neither of those will be solved with better programming tools.
Have you ever led a team where you were free to remove that technical bureaucracy? I have, and I haven't removed it. For each of the items you list I asked how we could shrink the footprint, but removing it entirely would have gone badly. Certificates, hosting: be maximally unoriginal in your cloud provider setup. Source control: yes. Have you tried scp on text files instead? Branches: trunk only, except for short-lived ones just to hold pull requests. Pull requests, code review: so much more powerful than mere quality assurance. But yes, very expensive, so it's always good to figure out when and where to skip.
Two things: 1) let the compiler blow up on the dev rather than the program on the user (especially if you seek the lowest runtime overhead, or you ARE making the runtime); 2) you can start getting this future today, with current languages, using Jupyter notebooks and the like (e.g. literate Haskell).
Yeah, it might be interesting if we could develop a language that runs live during development (for interactivity, visualization, etc.) but can compile for deployment. Because there are instances when interactivity just isn't necessary and the required abstraction and overhead is nothing but dead weight.
I realized my habit of printing out variables and what information is being calculated in what the speaker calls "dead languages" is exactly the point he's making. There needs to be easier ways to observe the data and processes we write as it runs.
On the other hand printing out values is a lot more productive than that nonsense single-step debugging. Give me a printout of two runs of the program and a diff tool any time over stepping through it for hours trying to remember what the debugger displayed 1000 steps ago in the last program execution.
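A minimal Python sketch of that workflow (the file names and traced values are just placeholders): dump whatever you care about from each run to a file, then diff the two traces instead of single-stepping.

import difflib

def run(label, xs):
    # "printf debugging": one traced value per line
    with open(f"trace_{label}.log", "w") as f:
        for x in xs:
            f.write(f"step {x}: result={x * x}\n")

run("before", range(5))
run("after", range(6))   # pretend this is the changed program

with open("trace_before.log") as a, open("trace_after.log") as b:
    print("".join(difflib.unified_diff(a.readlines(), b.readlines(), "before", "after")))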
Every popular language without static types eventually gets static type support, but worse than if it had it from the start. Have you tried debugging 50k+ line Python codebases without type annotations? It's infuriating. Type systems are a must. They don't need to be rigid or obtuse, but there has to be some mechanism for the programmer to know at a glance what to expect. Also, "build buggy approximations first" is objectively wrong. Everybody knows that managers generally don't allocate time for bugfixes and refactoring. If you teach all programmers to write buggy approximations, you're going to have to live with code that is 70% buggy approximations. Maybe he's talking about TDD like that, but it comes off wrong. Also, I don't understand why he says debuggability is mutually exclusive with correctness - it's not... Yes, interactive code is cool, but correct, readable interactive code where all type-driven possibilities are evident at a glance is 10x cooler. Also, Rust has a REPL mode; a lot of compiled languages do. Isn't that exactly what he wants? Also also, what does he mean by debugging code in production? I really wish he'd elaborate on that.
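Even a couple of annotations go a long way in a big codebase; a hypothetical sketch of the "know at a glance" effect I mean, where the signature alone tells the next reader what goes in and what comes out:

from typing import TypedDict

class Order(TypedDict):
    id: int
    items: list[str]
    total: float

def apply_discount(order: Order, percent: float) -> Order:
    # no spelunking through call sites to learn what "order" holds
    return Order(id=order["id"],
                 items=order["items"],
                 total=order["total"] * (1 - percent / 100))

print(apply_discount({"id": 1, "items": ["book"], "total": 20.0}, 10))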
That's an issue with managers, not coding methodology. Not that I agree much with what he says in this talk, but I've heard some horror stories about managers. And I suppose debugging in production means attaching a debugger to the currently running server, or to the client on the customer's machine?
Debugging code in production is where you buy a system that promises it because, when a program crashes, it just falls back to the interpreter prompt so you can look at all your variables and code - and then you write an entire point-of-sale system in said system and deploy it to 100 stores, only to discover that you can't dial into the stores to connect to the crashed system because they have just one phone line and they need that for the credit card machines.
Getting the night's production jobs loaded (via punch cards) as quickly as possible was aided by the operators removing the rubber bands and lining up the "decks" on the counter. That is, until the night when the HALON system was accidentally triggered, sending the cards everywhere. It took quite a while to find cards stranded under equipment. Fortunately the strips on the sides of the cards helped. But it was a long, long night putting everything back together.
What a great talk, thanks Jack. I agree with most of what you said. I just don't know what to do about it. I think our industry as a whole is stuck in a local maximum, and I don't know how to get out of it.
I loved this talk, but I don't know why the speaker makes it sound as though typing is somehow a waste of time or insignificant. Most web devs use TypeScript or Babel because otherwise you wouldn't catch a lot of errors while writing the program. Type checking doesn't conflict with making the programming experience interactive, and in fact would aid it.
The fact of the matter is that all of our hardware infrastructure expects the user to program in either ASM or C. Runtime environments are expensive and not available at the bare metal level without a ton of headaches. Lua is promising but it's written in C. I agree that modern hardware introduces many problems that don't have anything to do with solving the business problems that make us money. Maybe more people should become computer engineers and devise an ISA that allows for visual and runtime feedback natively.
In the multimedia programming world there are Pure Data and Max/MSP, which are very similar to his last examples and very commonly used by artists. This talk helped me understand why I keep coming back to those for projects where I have to iterate on ideas very quickly.
Unfortunately, those two are a lot more stateful than the average non-visual language, because every function has been turned into some kind of object class where, if it has more than one argument, every argument after the first is an instance variable that has to be set before sending the first argument. And if you ever want to set the first argument without running the function, or run the operation without setting the first argument, you have to use special cases like "set $1" and "bang", IF they happen to be supported by that given class. Then, to manage all of this, you have to sprinkle a lot of [t b a] and [route stuff] objects and connect them with lines that quickly get hard to follow. The DSP subsystem (~) is the exception to this, but that's only because it has a fixed data rate, and when you try to control that subsystem at runtime you have to use the non-DSP objects I described above.
@@seismicdna I think we share a lot of similar ideas, I was fortunate to stay with Jack in Berlin a few years back, and meet Szymon Kaliski too. I was sad to hear that Strange Loop was stopping after this year, I've been dreaming of attending.
I'll need to watch again to digest further. Working with a data team as their engineer is both a blessing and a curse. I've seen some of the benefits of the interactivity that Jack talks about. Particularly with data pipelines, sometimes the easiest way to debug is to pull open the notebook, run it until it breaks and inspect. It's also easy for analysts with little programming experience to write things, get started and explore. It's a curse because it makes things so easy that I'm often tasked with fixing and maintaining a heap of poorly designed programs written by many times more people than just me, with little to no consistency. Many of the perks that Jack mentions are useful for scientists/analysts for whom programming is merely a means to the end of getting their analysis done. Not having to worry about types is nice if you just want it to work. As an engineer, working with typed systems means I _don't_ have to keep in mental "working memory", whenever I jump in to make a change down the line, which nuances of my interface I programmed dynamically. Like I said, I will have to watch again to really understand.
Smalltalk was one of the best live coding environments. You could change source of active stack frames. The issue was delivering the program to “production” on one of 100s of servers.
Was it an issue of it being unclear how to address the server in question? I’m also curious how you feel it compares to today’s approach of using containers/images
At the high school I attended in the 1970s we used punch cards typed in a KEYpunch machine (not "card punch"), and we fed them into the card reader and took the lineprinter (much faster than a teletype, although that was also an option for program output - program LISTING was always via lineprinter) printouts ourselves, so not all setups were equally primitive. Also, the reader was able to read either actual punches or pencil marks, and we developed "code cards" to allow us to make code with pencil marks (called "mark sense") so we weren't limited to the bottleneck of one or two keypunch machines for everyone to use, and I myself wrote the program to generate punched cards from marked cards, used at the school for several years after I graduated.
They have the potential to be much better in some important aspects like debuggability and prototyping. But most scripting languages did not go very far beyond static languages in these aspects, which does not make very much sense. Why sacrifice performance and stability for practically nothing? That's why dynamic interpreted languages are often perceived as inferior to static ones. It's probably because most of them initially were either a replacement for shell scripting or were developed to solve a very specific task (like JavaScript) and then accidentally grew bigger and became more significant. It's no wonder that the most advanced languages in this respect are Lisps, because they were designed as an AI research tool from the start.
Really can't disagree more with the "visual paradigm superiority" part, as well as the backward-compatibility stance of this talk. The opposite of backward compatibility is complete chaos, and retaining it for a long time is totally worth it. I'm a long-time vi and Unix user, but I came from a Windows background and initially a lot of things didn't make sense to me. I'm in digital art nowadays, and after learning and embracing the simplicity of vim and the bash shell I can do things naturally: working with all sorts of files, writing scripts for almost all my purposes - converting images and media files, custom backups, 3D modeling and animation and many more. In Windows and macOS you can use a nice GUI, but it comes at the huge cost of being burdensome to use, resisting scripting (try automating a process that requires clicking a button in some program that doesn't support a command-line interface) and so on and so forth. Our technology exists today thanks to "dead" programs that cared enough to support a wider variety of interfaces.

The text medium can be dressed up - a fancy web page with all sorts of graphics - but taken too far it turns into a presentation that tries to convey the idea via pictures while lacking the precision of a concise text description. Someone said "Writing is nature's way of telling us how lousy our thinking is". If that's not convincing enough: one of the most successful companies, Amazon, intentionally discourages a presentational style of conveying information about new ideas or technologies in favor of writing it down in a short and concise manner - if you're interested, read the article "How Amazonians share their ideas". So, if you're new to programming, take this talk with a grain of salt. Clarity of thought is indispensable when you work on a complicated design, and I'd argue it's hardly achievable if you can't produce or consume good old written content.

Can't agree with the "spec is always incorrect" argument either. While I agree that a spec is impractical for a complete program, it can actually be useful for some of its parts. For example, the consensus protocol Paxos can be described in quite strict terms and proven, and its implementation can to some extent be decoupled from the main program. Programming is about combining multiple parts into a whole, and some parts (cryptography, distributed protocols ensuring liveness and robustness of the system) may be a great fit for actually writing a spec.

Also can't agree with "programming is about debugging" - it couldn't be farther from real-world programs running on your mobile devices or in big datacenters. Logging and metrics are what actually matter to give you introspection into what your program and your users are doing. Also the ability to quickly recover - e.g. issue a patch. I'd change this stance to "programming is about testing" when it comes to professional programming, as big distributed programs can be too hard to debug, and reproducing a particular debug sequence can be both hard and impractical.
Thoroughly agree on your point about "spec is always incorrect". In the video the example of an array goes array[i] => i + i; this is a clearly defined spec. It might not be the best real-world example, but it at least proves a counterexample exists. Not sure if you could elaborate on "logging/metrics is what actually matters": to my mind this is equivalent to debugging, be it core dumps or just a red/green light; debugging is core to development. (Yes, I have had times where I only had an "it worked" or "it didn't work" to go on because of my company's insistence on working with AWS and outsourcing the access to someone else, which would take me a week for approval (who knows why). It is painful.) From my experience, metrics are second to getting it to work. The client doesn't care how long it takes as long as it isn't more than a couple of hours. But that may well be my limited experience talking; I have only worked in a handful of small to medium-sized domains, but it is worth taking into account that not every dev job is dealing with Google/Netflix levels of traffic - some are maybe 100 people a day (not to say your point isn't valid in your domain, but the speaker's point isn't necessarily invalid in all domains, as much as I disagree with many of his other points).
I have used an IBM 029 key-punch. When I was in high-school (about 1980) we used bubble cards, but the near-by university had key-punches so we would go there to type in long programs. We still had to send the card decks to the school board computer center (overnight), because we didn't have an account at the university.
Sooo... I work in the energy industry; we just retired our last VAX within the last 18 months... though we still have a bunch of virtualized VAXes for historic documentation. We also just replaced a real-time system that had one of the very first mice ever made (it was actually a trackball and it was MASSIVE).
Food for thought, though he glosses over why things like edit/compile/link cycles still exist. There are costs to things, and sometimes those costs aren't worth the benefit.
Yes! That's exactly what I've been saying, but when I began criticizing my uni for teaching Pascal for a whole year, I almost got cancelled for "Not respecting the history of programming" and "Not understanding that you have to start from the basics".
Reminds me of my high school, where we were about to be taught Pascal, but the whole class decided "No. We want to learn C." And the teacher said "But I don't know C." Another student said "I know C." and he started to teach us, which was awesome. To be fair, I had trouble understanding pointers, and only after I learned programming in assembler (a different class, for programming microcontrollers) did it click in my head and I finally understood.
This is a wonderful talk and I think it underlines a lot of the weird things that non-programmers start finding in programming languages. While I was self-learning I was originally drawn to Lisp because it was so different and because it does have some superpowers compared to other languages. It seems so wrong that later languages did not incorporate a lot of the stuff that was innovative in Lisp, even to this day. Erlang, Elixir, CLisp, Scheme, Clojure, ClojureScript are all wonderful and make my life a lot easier as a self-taught dev. Elixir Livebook is wild
I have to admit, the idea of messing with the runtime as a sysadmin and security guy sounds nightmarish. Great tools in the dev environment, but in production it seems like a system that limits checks and requires increased trust in the devs. Mind you, I'm in the sysadmin camp that thinks the greatest benefit of IaC and CaC is that you move AWAY from click-here-to-do-this administration and towards more formally tested and explicit setups.
finally some sense in these comments lol. i'm curious, what other options would you suggest for runtime introspection? usually what i've seen is slapping in logging statements everywhere, but i have to assume there's a better way
Logging, metrics, and tracing are the only things I can think of, but it would be nice if you could clone a running container stick it in a mock environment and step through the process.
The only question I have is: "How do you mix the heavy optimizations of Rust/C++ with the powerful debugging and on-the-fly editing of Smalltalk?" If you have an answer, I'm willing to switch. From my experience, JIT-compiled code is always slower than AOT-compiled. (And "lol just get a more powerful PC" or "stop running server workloads on a 10 y.o. laptop" are not valid arguments.) If somebody has an example of performance-dependent software written in Smalltalk/Lisp-like languages, like ray tracing or video encoding, I'd like to take a look and compare it to more conventional solutions.
Also, even if JIT comes close to native compilation (at least as long as the latter does not make use of profiling and other advanced optimizations) in either responsiveness or throughput, you typically pay for it in higher RAM usage, which is unfortunately the most limited resource in shared computing in multiple ways. Contemporary Java comes to mind there; even though on-the-fly editing is obviously not a thing there, I'm already grateful for a REPL.
how about this - JIT while you're working on the code, and then AOT when you want a prod release? i definitely don't agree with his suggestion that we want JIT in production.
As of Visual Studio 2022, you can use Hot Reload to change C++ applications while they're running. I'm actually quite surprised he didn't bring this up.
One solution for combining the heavy optimizations of Rust/C++ with the capabilities of Smalltalk is to use twin software (or simulation). It works fairly well; recent Smalltalk distributions have used such an approach for more than two decades now. They code their VM in Smalltalk (OpenSmalltalk-VM/Pharo) and generate C code from it. There's also RPython, which does similar things. This approach is loved by some, hated by others. Is this an example you consider to be performance-dependent software?
@@pierremisse1046 I guess I'll try Pharo after learning some Smalltalk. From reading about it a little, it still sounds like it'll bring some runtime overhead that might be difficult for the compiler to optimize. But I'll give it a go. If the transpiled C turns out faster than native JS, I'd consider it a win for Pharo.
I agree with the main concept of the talk - like, I'm always fighting with people over this stuff. That said, I'm a videogame programmer, and I usually work in Unity, so not much choice (even if I didn't, most games use C++; Unity is C#). The thing is, in game development we already have tools to implement and do many of the things he describes. We can change variables at runtime, we can create different tools and graphs and stuff to see what's happening at runtime, visualize stuff, etc. Of course it's not exactly the same as the examples in the talk, and these things are implemented due to the nature of how a videogame works rather than for a better programming experience. Just wanted to point out a curious case of how game engines get a bit closer to this idea for different reasons.
Most of his examples are about tools, not the programming languages themselves. He presents the issues as programming-language issues, but in reality most of them are a lack of tooling around programming languages. Game engine editors (not game engines) are made exactly to address most of these issues. I agree with him that language ecosystems lack some basic tools, but these are also completely program-specific. For games you will need a four-float type to store colors; should the language know about this and have a way to visualize the colors in its own editor, even though the majority of developers might be using this same language to code CLI/daemon programs? Does keeping the state of a program make sense when you're shipping a game to players? It totally makes sense when you're developing, for fast iteration and debugging, but when you need to release and publish the game, you need to compile, disable hot reloading, disable debug asserts, etc., since the client (the player) won't need any of this and all of it adds a performance cost.
@@naumazeredo6448 It's because a lot of programming language communities (at the encouragement of their developers) think of these things as language issues, because they have yet to witness the beauty of a programming language getting out of a better tool's way and sitting on the sidelines for a play or two. If there is a job to be done in software development, it's something to do with a programming language, and specifically MY programming language.
Those graphical representations may help some people, but they just seem like more work to interpret, as they are like a new language in themselves. They should be used only when they are the better alternative for comprehension for the average dev.
yeah as far as i can tell, most of them were just showing nesting levels... ultimately they seem more like teaching tools than daily programming tools.
Wouldn't that be because most devs use the 'traditional' code representation? In a world where programming is canonically done in brightly-colored balloons connected by lines, trying to put it in a single sequential file might be the thing that's "hard to interpret". I think there's something to be gained here with visual & spatial & interactive programming, although I have not yet seen a version that sparks joy. Maybe functions as code in a bubble, and jump points (function call, return, goto) as lines between bubbles? It would visualize program flow without giving up the details you need to actually program. IDK, but it's an interesting problem.
@@rv8891 The problem with graphical representations is that they are bad at abstraction and that they are hard to process by tools. Code is all about abstraction and tools to help you work with it.
This absolutely blows my mind. I've been daydreaming on my ideal programming language for a while now and it basically boiled down to interactive visuals in the way leif made them, combined with a notebook view of your program like clerk. I'm so excited to see other people have made these things already :D
I don't think those are properties of the programming language. Visualization and interactive visualization are features of a code editor or integrated development environment. Development tools for a lot of existing programming languages could do that if they just implemented those features. Those features would also be more useful for some languages than others. The features would be more difficult to implement for some than others too. The video makes it sound like the language and its development tools are completely tied together. If you're choosing a language to learn or use in a project, you might as well group the language and its tools together. If you're tempted to invent a new programming language because you want to use lots of visualization, the distinction is important. You can always make new tools and new features for an old language without changing the old language. Inventing a new language that no one uses doesn't help anyone else. Inventing tools for popular existing languages will much more likely cause others to benefit from your creation.
@@IARRCSim yeah, like sure all the ASM boilerplate is annoying, but people could write tools to automate that boilerplate as you're typing and fold it away for visual convenience. as an example. i'm sure someone's already done it and i just haven't really looked myself.
This talk is fun to watch and the speaker is good, but I don't really agree with the whole argument. He spends so much time criticizing things that are what they are because of technical and physical limitations. Don't you think that people who punched Fortran onto cards would have loved to each have a personal computer to type the programs easily? Punch cards were a thing because a company or a school could only afford one computer, which was a mess of 10M transistors soldered together by hand. Then machine code? FFS, it is optimized for the CPU silicon, which is a physical thing. How many thousands of scientists work on better hardware architectures? So stupid of them not to have silicon that takes images as input. /s Same thing with C: it is a portable freaking assembler and it is very good at it. Then you finally have higher-level languages (which are written in C, surprise!) and they all have been trying interactive and visual things like forever! Graphical desktops, debuggers, graphical libraries, Jupyter notebooks. Some of them are good ideas, others are weird and fade away, but it's not like people aren't trying while still being attached to a physical world of silicon. So what is his point?
I know that in his opinion live programming languages are appealing, but they aren't always practical. These types of languages have a great deal of overhead and aren't suitable for certain applications. The best example of this is operating systems. In this talk he bashes on Rust a little, but the simple truth is that it was never made for this purpose. I know people want the "One Programming Language that rules them All!" so they don't have to learn multiple languages, but reality isn't so kind. Certain languages are simply better at some tasks than others.
Agreed. "Hard in a dumb way" is a concept that deserves broader awareness. Dealing with new problems created by the tool meant to solve the original problem is common. What ends up happening is that people either take false pride in making those symptoms the focus of their work, rather than the cause. Or an odd sense of envy leads them to force others to suffer through outdated and avoidable symptoms even when there's a better tool at hand.
I see this often but it usually falls apart when you approach higher levels of complexity. There are many graphical programming languages; you could even call Photoshop a programming language. The problem is there are tons of experiments but none of them really create anything "new". They spend their time trying to copy functionality from C. Stop copying C in your GUI language.
Yeah, this is my experience too. Graphical programming looks great only with simple, small problems. It is much harder to use and a waste of time when you need to solve real-world, complex problems.
The issue with this kind of presentation is exactly that: it convinces management that the new shiny language is the solution to all the company's problems, but the sad reality is that complex problems are complex in any language, and learning the new shiny language takes longer than solving them. Creating tools in your language that solve your problems is the current solution.
@@nifftbatuff676 Automate for Android comes to mind. Fantastic app, I use it for a bunch of stuff I can't be bothered to write Java for and browse through Google's API docs. But large programs are an absolute nightmare when everything is drag and drop.
Agree with this. You need the right tool for the job but a specialized graphical tool is really only good for solving problems that can be modeled graphically. I have wasted many hours with new tools that are supposed to bring about a new paradigm in how we program and in the end we always end up abandoning them because they never quite fit the problem at hand. The seemingly small gap between the cool demo example and what you actually need to accomplish ends up becoming an impassable chasm. In the end, tools are built by biased people who are thinking in terms of how to solve problems A, B and C but I'm stuck trying to solve problems X, Y, and Z or else a whole new class of problems, #, % and ^ that no one has ever considered before.
Besides spreadsheets, I think the other very popular implementation of live programs that can change (and be inspected) while running is the relational database management system. They are robust, easy to use and experiment with, very often hold most of the business logic, and are easy to change, self-documenting, self-explaining (EXPLAIN and DESCRIBE etc.), highly optimized (partly automatically, and you can give many hints to improve the optimization -> comparable to typing in programming), and secure. Indeed, the possible constraints are much better than in usual programming languages (with type systems and similar) in terms of expressibility, performance, durability, versioning, self-explanation, transactional behaviour and atomicity. They also enforce grants at different levels of detail, while in a classic programming model a programmer can very often see and change everything or nothing, but not much in between (like yes: you can do experiments on the running system, but it's guaranteed that you can't change or break anything if you only have rights to select from other namespaces and to create views and similar in your own namespace, with no rights for altering things, and your queries get deprioritized automatically against the production workload when performance is in doubt). This liveness of the system might be one part of the story of why an incredible amount of business logic lives inside such databases and not in classical programming logic.
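As a tiny illustration of that kind of liveness, even from a general-purpose language you can ask a running database engine to explain itself. A minimal sketch using Python's built-in sqlite3 module (table and query invented for the example):

    import sqlite3

    con = sqlite3.connect(":memory:")  # a live, in-memory database
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    con.executemany("INSERT INTO orders (total) VALUES (?)", [(10.0,), (25.5,), (99.9,)])

    # Ask the running engine how it would execute a query, without running it "for real"
    for row in con.execute("EXPLAIN QUERY PLAN SELECT id FROM orders WHERE total > 20"):
        print(row)

The same session can then add indexes, create views, and keep experimenting against live data, which is the point being made above.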
you're not entirely wrong, but i think the reason why people use it is because the abstractions are easier to grok at first glance, and the limitations of the languages and tools mean that you can't get completely lost. i still don't agree about the "liveness" of the system being key necessarily though. the whole "compile" vs "run" notion is just an abstraction; i'm not gonna get into the whole "compiled" sql statements topic, but what i will say is that you're going from "i'm waiting hours for code to compile" to "i'm waiting hours for this statement to run", and i don't really see much benefit there. the approachability benefits come from decent tooling imo (like having a small sample of data and seeing how it flows through your query), which programming tools can also implement.
Interesting points there. I cut my teeth in a naive environment where all backend code was in the RDBMS server. It was very productive and carried a lot of the efficiency and elegance you note. But it was also cowboyish, error-prone and felt more like jazz improv than crafting a great symphony. When I then went and studied proper software engineering I ate up every sanity-restoring technique greedily.
I've watched to 11:46 at this point... and I'm getting a smell. I'm not saying he's not correct overall, but in his first two examples (assembly and C) he's writing a reusable function that, given an array, creates a new array, stores into the new array the values of the input array incremented by 1, and then returns the new array. In his last three examples (Lisp, Haskell and APL) he's hard-coded the array as a literal and the results of the function aren't being returned into a variable for further use. He's NOT doing the same thing. He's purposefully left out 'boilerplate' or 'ceremony' code or whatever you call it to make the difference seem more dramatic than it really is.
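To make the comparison concrete, here is roughly what an apples-to-apples version would look like, sketched in Python purely for illustration (not a transcription of the slides):

    # Reusable version, comparable to the assembly/C examples:
    # takes any array, returns a new array with every element incremented
    def add_one(xs):
        return [x + 1 for x in xs]

    result = add_one([1, 2, 3, 4])

    # "Demo" version, comparable to the hard-coded literals in the later slides
    [x + 1 for x in [1, 2, 3, 4]]

The second form is shorter mostly because it skips the reusable-function ceremony, which is exactly the point being made above.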
ld (hl),a ; inc a ; ld hl,a ; inc (hl), something like that in a loop is what those other examples seem like, basically running through the memory and incrementing
I find your view of things very interesting. I observe there is a lot of activity again in the "visual programming" space. However, while I do agree at least to some extent with the sentiment, I find that textual models are still gonna persist. I would love to offer a notebook interface to "business" people, so that they can simulate the system before bothering me (it would sure cut down on the feedback loop). But for the most part I think 70-80% of any codebase I've worked with is "data shenanigans", and while I do like for the textual data there to be visually adequate (formatted to offer a better view of the problem), I do not find it enticing to expose it. Another problem I find is that UIs are, and likely always will be, a very fuzzy, not well defined problem. There is a reason why people resort to VS Code - it is a text editor. So you also have this counter-movement in devtools (counter to highly interactive/visual programming), returning to more primitive tools, as they often offer more stable foundations.
I agree about a better notebook-like system modeling tool for business people. As a developer, whenever I have to do any spreadsheet work, I'm also struck by how immediate & fluid it is compared to "batch" programming ... but ... also clunky and inflexible to lay out or organize. I'd love to see a boxes-and-wires interface where boxes could be everything from single values to mini-spreadsheets, and other boxes could be UI controls or script/logic or graphical outputs, etc. Now that I think about it, I'm surprised Jack didn't mention Tableau, which provides a lot of the immediate & responsive interaction he wants to see in future IDEs.
The PL/I Checkout Compiler, under VM/CMS was my first use of a tool set that provided a powerful interactive programming (debugging) environment. The ability to alter code at runtime was a treat, generally only approximated by even today's debuggers. Progress seems to be a matter of small steps forward, interspersed with stumbling and rolling backward down the hill quite often.
My stock has risen! Licklider taught me to teach and Sussman taught me to program, some four decades ago, and now they're both mentioned in this excellent talk. Speaking as someone who helped port Scheme (written in MacLisp) to a DEC timesharing system for the first Scheme course at MIT, I don't know why your LISP examples aren't written in Scheme. Harrumph
There are a lot of gems in this talk and I like the really "zoomed out" perspective. But talking about all the "traditional" programming languages we use, I couldn't agree less with this statement: 27:24 "And I think it's not the best use of your time to prove theorems about your code that you're going to throw away anyway." Even though you might throw away the code, writing it obviously serves a purpose (otherwise you wouldn't write it). Usually the purpose is that you learn things about the problem you're trying to solve while writing and executing it, so you can then write better code that actually solves your problem after throwing away the first attempt. If this throwaway code doesn't actually do what you were trying to express, it is useless though. Or worse: you start debugging it, solving problems that are not related to your actual problem but only to the code that you're going to throw away anyway. "Proving theorems" that can be checked by a decently strong type system just makes it easier to write throwaway code that actually helps you solve your problem instead of misleading you with easily overlooked bugs.
I dislike that Tweet towards the beginning about how programmers feel so good about learning something hard that they will oppose things that make it easier, for several reasons. Firstly, it could be used to automatically dismiss criticism of something new as the ravings of a malding old timer. Secondly, it paints experienced programmers as these ivory-tower smug know-it-alls. Thirdly, it implies that behavior is unique to programmers. Do old-time programmers sometimes look down from their ivory towers and scoff at their lessers? Absolutely, and I am no fan of that either. But the Tweet taken at face value could lead to someone with a new idea (or something they believe is a new idea) being arrogant.
The bit with the increasingly smaller ways to write an incremented array ignores the fact that the more you remove the semantics which more obtuse languages have, the less clear it is what the program is _actually doing_ besides the high-level cliff notes. This can lead to extremely painful debug sessions, where the code you write is completely sound at a high level, but the syntactic sugar is obfuscating a deeper problem. Lower-level languages have more semantics than they really need, but the upshot is that they allow more transparency. They're often difficult to debug average issues with, but it's significantly easier to debug esoteric issues if you slow down and go line by line. Not to mention it makes very specific optimizations easier as well.
A lot of the ideas in this video have been tried and didn't stick around, not because of adherence to tradition, but because they simply were not as effective. Visual programming in particular. It has the same problem as high-level languages in that it's easy to capture the essence of the program, but not the details. Ideally you would have a visual representation side by side with the text-based semantics.
tbh, C or even asm is still obfuscating stuff from you. i would say it's more a matter of knowing the hardware you're running on and knowing the quirks of the language and the compiler. (which would naturally take years.) blaming the language is not entirely correct imo.
(re-posting, looks like my comment got flagged for having a link in it) Excellent talk! One thing, Zig *will* have live code reloading, there's already been proof of concept working on Linux. It's just not at the top of the priority list, given the compiler should be correct first!
I usually do this to better understand computing. I don't even work as a developer of any sort, so I'm just doing this as a hobby, and it's fun to have these challenges.
Woah, this was a refreshing talk! Thank you Jack Rusher, whoever you are, for sharing thoughts I never knew I had as well, let alone agreed with.
I don't think image-based computing is enough of a win to justify switching costs in most cases. The feedback loop is very fast developing on "dead programs" - it's not like we push to CI/CD every time we want to see a change reflected in the program. Then there are the downsides of images, like not having version control. Instead of "othering" mainstream programmers as ignorant, build something so incredibly better that no one can ignore it. But that's a lot harder than giving talks about how everyone is doing it wrong.
That's the problem... How often do you fill your codebase up with GOTO followed by a literal instruction number? The answer should be never... but when Dijkstra published “GOTO Considered Harmful” it took a literal generation of people to die off (and new people not being taught it, de facto) for it to become normal to follow. But structured programming via if/else/for/switch, and running through a compiler, also isn't the end of history. But we keep teaching like it is. And generations of developers will need to move on (one way or the other), before other techniques gain widespread adoption. It's “Don’t reinvent the wheel”; don't question or rethink the monolith that you have. Well, why not? Is a giant slab of granite that's been having corners fractally chipped away into smaller corners, and then polished and stood on its side really what we should be putting on modern cars, or airplane landing gear? Would we have bicycles if each tire was 36 inches tall and 300lbs? Would we have pulleys and gearing? Water wheels wouldn't have done a whole lot of milling... Maybe the concept of the wheel is valuable, but the implementation should be questioned regularly... Lest we perform open-heart surgery with pointy rocks, and give the patient some willow bark to chew on, through the process.
But you could version control an ascii encoded image or at least a text encoding of the image environment/code, correct? I haven't heard many people talking about that AFAIK. Image based programming should be an exploratory opportunity to mold the code to our will as we get a better picture (figuratively and literally) of the system we want to make before pushing forward towards "batch mode" programming for your final code output. Maybe there ought to be a batch calculation mode for huge data crunching before integrating that hard coded final answer onto the "live" portion of the code. In fact, Godbolt does present a great opportunity for exploratory batch programming if you're working with small bundles of C/C++ code and you want A/B comparisons of different algorithm implementations.
@@SeanJMay Images have been around for at least 40 years. I think a more realistic assumption, rather than ignorance or cultural resistance is that whatever benefits they offer are not compelling for the mainstream to adopt. But rather than debating whether they are better or not, you could be building the future with them right now!
@@SimGunther Yep that's possible, Squeak keeps a text representation of the code (not just bytecode) and tracks every change to the image so errors can be undone etc.
At no point did I say you should use image-based systems. In fact, I showed a number of environments that use source files that can be checked into revision control to achieve benefits similar to those systems. :)
I do not know anything about Smalltalk, so what kind of applications were you producing? Who were the customers? What computers were running those applications? What years? Why did Smalltalk not become as popular for corporate/business applications as C, C++, Java, and C#?
I have used the punch machine myself. I wrote a simple toy Pascal compiler in PL/I on an IBM 370 for a college course (the "dragon" book) assignment. Compile jobs were processed once a day at the college computer center. Now we have come a long way and are living in an age where AI can write computer code. What wonderful days to live in.
I feel like there’s a few arguments being made here, two of which are: program state visualization is good and less code is better. I agree with the first, debugging of compiled languages has a *lot* of room for improvement. If you think the most terse syntax is always best, please suggest your favourite golfing language during your next meeting :) Programmers today are wildly diverse in their goals and there’s no hierarchy on which all languages exist. An off-world programmer will need the ability to change a deployed program, one researcher might be looking for the language that consumes the least energy for work-done, an avionics programmer wants the language and libraries that are the cheapest and fastest to check for correctness. If you feel that all the features discussed in the presentation should all exist in one language maybe you don’t hate Stroustrup’s work as much as you think.
To be fair, he doesn't want less code _in general,_ just less code _on things unrelated to your problem._ Hence all the descriptions of physical punch cards, which take so much effort to get into position, and all that effort has nothing to do with your programming problem. "If you feel that all the features discussed in the presentation should all exist in one language" He isn't demanding that one language should exist for all programmers, he's saying that good developer user experience and visualizers should exist for all. Because every programmer, no matter their goals, needs to read code and understand how it works.
I don't get the point. People have tried all the ideas presented here in various languages. If you don't understand the reasons behind standards, or why a mediocre standard that's actually standard is often more important to have than a "superior" one that doesn't develop consensus, you're missing a dominant part of the picture. For example, the reason TTYs are 80 columns wide is essentially because typewriters were. Typewriters weren't 80 columns wide because of computer memory limitations, they were that wide because of human factors -- that's a compromise width where long paragraphs are reasonably dense, and you also don't have too much trouble following where a broken line continues when you scan from right to left. Positioning that decision as just legacy is missing some rather important points that are central to the talk, which purports to be about human factors. I could start a similar discussion about why people do still use batch processing and slow build systems. There are a few good points in here, and if what you want is comedy snark I guess it's okay. But most of the questions raised have been well answered, and for people who have tried interactive programming and been forced to reject it because the tools just don't work for their problems, this talk is going to sound naive beyond belief. The presenter seems particularly ignorant of research into edit-and-continue, or workflows for people who work on systems larger than toys. The human factors and pragmatic considerations for a team of 10 working for 2 years are vastly different than someone working alone on data science problems for a couple months at a time. The one thing I'll give the presenter is that everyone should give the notebook paradigm a try for interactive programming.
For SQL Server, we actually have a bunch of extensions to WinDBG in order to introspect dumps (and while debugging). So you can absolutely have interactivity and introspection with programs even if you are working with native code.
Recently, looking at fast.ai, I can see the notebook live method has huge benefits compared to my line-by-line PyCharm script. Fascinating coverage of first-principles programming. I will buy the Kandinsky Point and Line to Plane, data rabbits and Racket. I'm going to print and display the chemical elements for the tea room wall. There are so many stupid methods for doing simple things, but the complexity gives some people a warm feeling that propagates the fool's errand. Great talk.
Yeah, I'm really unconvinced by most of that talk, although some ideas are worth drawing from. Computers *are* batch processors. Every program will have to cold-start at least once in a while. That's the very reason we still write programs. That's even the reason they're called programs: it's a schedule, like a TV program. If all you care about is the result for some particular data, then sure: do things by hand, or use a calculator, or a spreadsheet, or a notebook. But rest assured that what you're doing is not programming if you don't care about having a program that can be re-run from scratch. And unfortunately, outside of some trivial cases, we can't just fix a program and apply the changes to a running program without restarting it from scratch. But having some kind of save-state + run new lines of code could help with the debugging. Also, any kind of visual representation of programs and data won't scale beyond trivial cases. Visual programming is nothing new, yet it hasn't taken over the world. Why? Because the visual representation becomes cluttered for anything beyond the equivalent of a handful of lines. The tree matching example is nice and helpful, but very specific, and I doubt it could find wide use in practice. Most of the time, any visual representation deduced from the code would be unhelpful. That's because the basic algorithm is hidden among the handling of a ton of edge cases and exception handling. And an automated drawing tool wouldn't know which code path to highlight. I do agree that types *do* get in the way of fast prototyping, when you discover what you wanna write as you write it. And that's why I love Python. Fortunately, not all programs are like that. Many programs just automate boring stuff. And even those programs that do something new usually have a good chunk of them that is some brain-dead handling of I/O and pre/post-processing. Those parts of the code could (and probably should) be statically typed to help make sure they're not misused in an obvious way.
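For what it's worth, a crude version of that save-state idea is already possible in a language like Python; a rough sketch (file name and contents invented):

    import pickle

    # After an expensive run, snapshot the interesting state...
    state = {"model": [0.1, 0.2, 0.3], "step": 4200}
    with open("snapshot.pkl", "wb") as f:
        pickle.dump(state, f)

    # ...later, in a fresh REPL, restore it and keep poking at it without a cold start
    with open("snapshot.pkl", "rb") as f:
        state = pickle.load(f)
    print(state["step"])

It is nowhere near a real image-based system, but it covers the "save state, then run new lines" debugging case mentioned above.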
Every time I work in R I feel like I'm back in the era of magtapes and getting your printout from the operator at the computer center. I reflexively look down to check my pocket for a pocket protector. ;-)
@@SgtMacska Haskell definitely has its uses in low-level applications though. In relation to security, it's a lot easier to prove Haskell code and its compiler mathematically correct (which is a requirement for some security standards), and therefore that the runtime is secure, than to prove the same for another language. In general, Haskell's clear separation of pure parts is very good for security, as that's a large part of the codebase where you have no side effects.
Performance-critical applications should be written in something like C or Rust (but not C++, f**k C++), when you know what you need to do beforehand and optimization and fitness of the code to the hardware are the main concern, not modelling, getting insights, or experiments. The talk was mostly about development environments, and it doesn't make much sense for a kernel to be wrapped up in this notebook-like environment, because by definition the kernel runs on bare metal. But even there, OS devs can benefit from modeling OS kernel routines in a more interactive environment using something like a VM before going to the hardware directly. Well, they are already using VMs; developing a bare-metal program from scratch without a VM is an insane idea. What I'm talking about is not a traditional VM but a VM-like development tool that trades the VM's strictness for interactivity and debuggability. Of course code produced in such an environment should be modified before going to production, if not rewritten entirely, but we're kinda doing that already, by first writing a working program and only then optimizing it.
My one real criticism of this talk is that there _is_ in fact value in being consistent over time. Change what needs to be changed, and the core thesis here (TL;DR always look for better ways to do things independently of tradition) is basically right, but sometimes arbitrary things that are historically contingent aren't bad things. The 80 column thing is a good example to me. It's true that we _can_ make longer lines now and sometimes that seems to have benefits, but the consistency of sticking to a fixed fairly narrow column width means highly matured toolsets work well with these, whether that's things like font sizes and monitor resolutions, indentation practices (esp. deeply nested stuff), or even just the human factor of being accustomed to it (which perpetuates since each new coder gets accustomed to what their predecessors were accustomed to by default) making it if not more comfortable, at least easier to mentally process. Maybe there is some idealized line width (or even a language design that doesn't rely on line widths for readability) that someone could cook up. And maybe, if that happens, there would be some gain from changing the 80 column tradition. But until then, there is value in sticking to that convention precisely _because_ it is conventional. Don't fix what ain't broke -- but definitely _do_ fix what _is_.
Rather, let me clarify by addressing specifically the "visual cortex" thought. It's absolutely true that we should explore graphics and pictures and how they can be useful - but it's not by any means obvious to me that it's actually worth dismissing 80 columns for being antiquated until and unless graphical systems actually supplant conventional linear representations.
For those wondering, most of those things you get in C#. You can "skip" the hard things about types with the dynamic keyword, see that it works, and then write the code the proper way. You can create notebooks with C# code (and F# for that matter). You get hot reload for both UI apps and full-on services. You also get one of THE BEST features ever, edit & continue. You can hit a breakpoint, change the code and continue. What's more, you can drag the current-line arrow back before the if statement you just changed, and see right away if the change fixed the issue. Then you can attach the debugger to a living program AND DO THE SAME THING. Think about that for a minute :D All of that runs reasonably fast in the end product and is very fast to develop (GC for memory, hot reload). You should really try dotnet 7, which is the latest version and, as always, is much faster than previous versions.
Compile and run is awesome. You can pry it from my cold, dead hands. Nothing's better as a programmer than finding out that something's wrong _now_ instead of 5 hours into a long simulation when it finally reaches that point in the code, or worse, in production, because an easily fixable mistake only happens in a weird set of conditions that weren't tested for. Statically typed languages help me tremendously, and if you require me to abandon that, I'm afraid I can't go with you.
literally nothing about a static language will prevent runtime bugs in the manner you've described. if what you said was accurate, C would never have memory bugs. right? doesn't sound like you even watched the talk in its entirety.
@@jp2kk2 You're missing the forest for the trees here. Their point wasn't specifically about memory errors, which Rust is specifically designed to avoid, but about run-time errors in general. The compile/run cycle is a pain in the ass for dealing with run-time errors, and there's no way you're ever going to fully avoid them. Why don't we have more tools and means of visualizing our code in real time when it could be so valuable?
Being a C# developer, I am not a huge fan of Python's principles for programming. But I really do see the value that Python provides within the ML world. Imagine having a list of single or pairs of numbers and you want to get the absolute difference when it's a pair. Python (reciting from my memories): x = list.diff().abs().dropna() C# using LINQ: x = list.Where(p => p.Count == 2).Select(p => Math.Abs(p[1] - p[0])); Python is so much "cleaner" at communicating what you are doing. And then you add just another 3 lines of code to calculate a density function, plot it, and query the 95% quantile, all within the notebook. That's really cool.
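If memory serves, the fuller pandas version of that Python line looks something like this (column names invented, and this is only a sketch):

    import pandas as pd

    # Each row is either a single number or a pair; singles leave the second column empty
    df = pd.DataFrame({"a": [1.0, 4.0, 7.0], "b": [3.0, None, 2.0]})

    # Absolute difference where a pair exists; incomplete rows fall out as NaN
    x = (df["b"] - df["a"]).abs().dropna()

Still pretty close to reading the problem statement out loud, which is the appeal being described.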
Now if only Python allowed you to put those on different lines without resorting to Bash-era backslashes, wouldn't that be nice? 🙃(probably my 2nd or 3rd biggest gripe with Python ever)
Python is a terrible language for ML. Good languages for ML are functional. Just look at the arguments (i.e. hyper-parameters) to any ML algorithm: all of them are either booleans, doubles, or functions. Python is used because physicists are smart people but terrible programmers who never learned any other languages. The ML research community (computer scientists) kept functional languages alive for 30 years pretty much by themselves. They didn't do that for fun; they did it because functional languages are the best for AI/ML programs. Python is only popular in data science (it isn't popular in the ML research community) because universities are cheap with IT support and because physicists think they can master other fields without any of the necessary training, background, or practice. Python is a sysadmin's language designed to replace Perl. It is good at that. But since sysadmins are the only type of IT support universities provide to their researchers, guess which language they could get advice/help/support for?
honestly LINQ is really nice if you're coming from the world of SQL. i'm not saying C# is perfect (i don't even use C# these days) but LINQ has never been a point of complaint for me. plus if you think LINQ is bad, check out Java...
@@LC-hd5dc Oh I absolutely love LINQ, but sometimes the separation of "what to do" and "on what to do" makes things complicated, and unless you want to create 50 extensions, it'll be more verbose but still less clear what a LINQ expression does.
I started my programming on punch cards, and because we were using the local university computer we had to use the manual card punches LOL. The electronic one was reserved for the university students. That was in the late 1970s. The students who were actually taking a computer studies course were allowed to use the teletypes for one session a week. When I came back to computing during my Masters in Theoretical Quantum Chemistry I was using an online editor and 300 baud terminals. The 1200 baud terminal was reserved for the professors LOL. That was in 1984. I taught mathematics in Britain, and when I moved to Germany I taught English, and one of my last students, who was taking private lessons to pass an English exam to get into an English-speaking university, was shocked when I explained that I was 31 years old when I got my first Internet-enabled computer in 1993, and that computer had a hard disk of 40 MB and 1 MB of RAM. My mobile phone is now multiple times faster and more capable than my then pride and joy. Since retiring I have been exploring various forms of Lisp on a Linux emulator on my Android phone, and the feedback is so much better and quicker.
He's wronger than he is right. I'd love to be convinced, but I think most of these prescriptions would bring marginal improvement or go backwards. The better a visual abstraction you use, the more specific it is to a certain problem and the more confusing for others. The more power you give a human operator to interactively respond on a production system, the more likely they are to rely on such an untenable practice. The one thing I'd like out of all this is the ability to step into prod and set a breakpoint with some conditions which doesn't halt the program but records some state so I can step through it. EDIT: Reached the end of the video and thought his Clerk project and the idea of notebooks being part of production code is fairly nice and much more limited than the earlier hyperbole.
A really strong start, we do a lot of dumb stuff for historical reasons, but the second half seems to totally ignore performance and honestly anything outside his own web realm. The reason programs start and run to completion is because that's what CPUs do. You can abstract that away, but now you're just using user level code and making it an undebuggable unmodifiable language feature. Sure functional languages look neat, but where are your allocations? How are they placed in memory? Are you going to be getting everything from cache or cold RAM?
He speaks English so well😂. Really like his way of introducing the history of all those punch cards and TTYs. Plus, the idea of interactive programming is quite useful in my opinion.
Sometimes notebook/live coders create programs only they can run, because there is a certain dance you have to do with the machine, which acts like a magical incantation that produces output. The reason for Haskell style types is that the compiler will help you find the missing piece that fits the hole. Due to my low working memory I love machines that think for me. With unicode, most languages already support glyphs which are weirder than cosplaying tty.
Absolutely awesome presentation of the view from the "I've got plenty of horsepower to meet my needs" seats, which is pretty much everybody these days.
Other than to show off one's own cleverness, is there any reason to flash a contextless table of numbers at an unsuspecting audience, then berate them for not knowing that it represents an xy plot, and then tell them that only visualizations matter? Is there any reason to whine about the complexity of RISC assembly? You like "1+ 1 2 3 4"? Well, someone had to write that, probably in assembler. Developers made things difficult for themselves back in the day? Or did they live through extreme technological limitations? This is the kind of talk that I hate. It presents just enough valid and valuable information to not be considered full nonsense, but couches it in what I would term wilful disinformation. Sure, those of us who are old enough to have learned Fortran and assembly in high school on an IBM 1130 know better, but there is just enough garbage here to convince kids just coming out of high school that there's no need to learn the basics, they can just make pretty pictures. They've already got enough problems with YouTube videos telling them they don't need no education, they should learn to code, become coders (whatever the hell that means), and they'll get their dream job. Funny thing about programming: it's just like typesetting. No matter the technology or era, the basics never change. Visualizations have their place, but so do columns of data. I truly do appreciate the insight into why VI works the way it does, and I did start out using it on VT100 clones. But you know, the past was a different world, and the developers who lived and worked through it, and the technology they used and built, gave us the foundation for all the pretty pictures everyone wants to see today. They should be celebrated rather than mocked.
👋🏻 I'm one of those guys who built a bunch of the stuff we still use! What I'm saying here is that we now have much better machines that give us the possibility to improve dev UX greatly. :)
I agree with you 100%. It's easy to bash the "old way" of doing things when you don't respect the hardware limitations. Does he think people used punch cards because they wanted to? It was simply the reality of the hardware not being capable enough at the time. All of the languages he mentions are still coded in, or rely on, code that at the deepest level in the processor is machine code. Of course, I would love a language that contains the debug features rather than having the debug features as an afterthought. It's great to have the ability to see code functioning without having to recompile each time. It's a common technique, when trying to get someone to swap their tired old trusty tool for another whiz-bang tool, to claim the new one will do everything and to make the old tool look like it came out of the caveman era... so, yea, I am not buying the whole thing. But I do value the overall concept that we can find better ways to do programming. Nevertheless, the code still has to work on the hardware. Hence the limitations.
@@marcfruchtman9473 yes! the more languages try to hide the hardware away, the more that the language and its users suffer for it. i'm not saying everything should be written in asm, but there _must_ be a balance between mental load and abstraction, not too much of either, otherwise the language becomes painful in some way.
When you realize computer science has become a culture, with even an archaeology department...
in 50 years, people will look back at our time and laugh like how we laugh at fortran
professions are cultures
@@DogeMultiverse No, totally disagree. We're still digging ourselves into a hole. We first need to get out of it. Watch: "The Mess We're In" by Joe Armstrong
I remember reading this story about a company that was trying to send a program to a customer in France. Trying, because every time they did, it would fail to run customers' hardware. Finally they sent someone with a case that contained the program to sort things out. When he went through customs he dutifully declared the program as an imported product, whereupon the customs official pulled a few cards out as a required "sample" of the imported product. Oh joy.
You can't really know if the food tastes good until you have some. That's what went through the customs officer's mind, I think.
I had a program to run remotely on lots of servers and something in the shell and terminal setup was eating a few of my characters. I added multiple lines of #### for a NOP slide to overcome that.
This is awesome. They could put throwaway code onto a few cards, like some superfluous OOP or Rust checkout checker border patrol or whatever they call it, and the rest of the program could still run in France.
My blood started to boil just reading this.
Actually kinda funny parallel to Docker and the whole "ship the whole machine that the code works on" meme
This guy speaks so fast.. basically about 5 presentations in the time for 1, but somehow, he is completely understandable and he keeps the attention of the audience!
Watched it 2x… 😛
For those wondering, the title is likely a reference to Bret Victor's 2013 "Stop Drawing Dead Fish" talk.
I was almost through this entire lecture when I realized that all these issues sound like "when you're a hammer, everything looks like a nail." We were trained by a thousand editors and programming languages to approach problems in a particular way instead of asking, "What is the best tool to approach the type of problem I'm working on?" Thanks for showing some really good tools and challenging us to make tools that are equally good for working with certain types of problems and data sets.
But it also trigger my silver bullet detector.
While I agree C++ is a bloody mess, you can still write reliable real time programs in it.
Of course, you can't use dynamic memory allocation (apart from the heap for function call) and you have to be careful about which standard libraries you use.
And C++ is a pain syntactically.
I wonder how python works in real time systems with digital and analog inputs?
"The best tool for the job" largely depends solely on what the most senior programmer in the company is familiar with. It rarely has anything to do with tech and more to do with politics. These guys have usually been with the company since the beginning and the executives know him and trust him, so he has carte blanche to do as he pleases, so if he thinks the best tool for the job is Cobol or Delphi then that's exactly what will be used as long as it delivers software that makes money for the company.
Sorry to burst your tech utopia bubble but politics and profits are way more important than the "tools"... if management agrees that the latest and greatest tech needs to be used to write good software then thats what will happen, if they agree that the legacy code is working fine and doesnt neeed to be written in the latest tools then sorry for the 20 year old junior intern but you will need to learn the ancient tech to work there and it will look terrible on your CV but that's just how it is.
@@57thorns
>And C++ is a pain syntactically.
I love C++'s syntax, personally. It just feels natural and easy to understand.
I'm a big fan of "idea the right tool for the job," I hate when people try to force solutions into a system to reduce the total systems/languages in use. my current company does that, does everything in javascript when other frameworks or languages would be better.
You can do what he talks about in the video really quickly by just asking ChatGPT.
I think the biggest problem with all visual examples is that they work great for data-science or theoretical algorithms, but far less for your run-of-the-mill "corporate programming" such as (web)services. When building services, almost all of the programming is about creating a model of the real world, and not so much about visualizing and transforming data. All those examples of graphs, tables, flows etc. work really well for data-science (hence things like Jupyter are so popular there), but they don't generalize to domain modeling very well. I would absolutely love to have some sort of interactive and visual environment to build and maintain domain models, but I've yet to come across anything like that.
I feel like Dark Lang is pretty close to what you're describing, and it seems really cool, but I'm not quite ready to have so little ownership of the tech stack
Then it may please you that _informatics started with such tools,_ like the Sketchpad from Ivan Sutherland (but it's better to learn about it from Alan Kay because the original demos don't really explain the difference between "before" and "after") or the NLS from Douglas Engelbart (look up the Mother of All Demos, pay some attention to the date or the hint at the end that ARPANet "will start next year"...) Unfortunately, Engelbart's Augmenting Human Intellect Report is a very hard read, the whole field lost the point and the result is what we have today.
And not for lack of trying. I've watched or read pretty much this talk at least five times in the last 30 years.
Results like: we have the ultimate communication infrastructure, but people feel no pain when they
- limit themselves to a single bit, a "Like", and think that any number of likes can ever be worth a single statement.
- repeat the same statements picked up here and there without processing them and pretend that it is the same as a dialog.
- rip off and descope the "Stop drawing dead fish" lecture (Bret Victor, 2013) in 2022. It's not about coding and punch cards but our very relationship with information systems (in machines, libraries, human communities and within our own brain).
_"Why do my eyes hurt? You have never used them before."_ (Matrix, 1999)
Domain modelling is a bunch of graphs: CQRS, DDD and so on. It's all just processes and workflows.
My first programming class used punched cards running FORTRAN on a Sperry/Rand UNIVAC computer (IBM 360 clone). As a consultant over the subsequent decades I would carry a little history kit to show the newbies - some punched cards, a coding pad (80 columns!), 9 track tape, 8" floppies, and a little bag of coal as a sample of what we had to keep shoveling into back of the computer to keep up a good head of steam. As my friend called it - "The age of iron programmers and wooden computers."
You had coal? We had to scavenge for firewood.
@Eleanor Bartle not in computer labs.
My high school had Apple ][s and UCSD Pascal but the teacher didn’t want to learn a new language so we had to do Fortran on punched cards, instead. The cards would go to a university about 30 minutes away but the results took a week to come back.
A week. wow!
😱
And then you learn that there was a FORTRAN available for the ][s UCSD system and weep.
I once wrote a punched card Pascal program (for a uni course before terminals became available for those) by first developing in UCSD, then going to the card punch with the resultant listing. (I'm not sure, it might have been the 7 billionth implementation of Life.)
@@KaiHenningsen Also, people often hate on Fortran because they had to use the '78 version and its practices. Modern Fortran is OK in my opinion.
@@TheAntoine191 I think deeming it "OK" is valid for those who still must maintain programs in it, but there are still too many leftover - or even new - oddities that prevent it from being used in the ways that C is still useful. Some of these being: if you want an array of pointers to some data type, you have to use a structure; the lack of a true way to define/typedef custom data types; the intense duplication and verbosity required when declaring methods on classes; the syntax for declaring subroutine/function arguments; and the lack of a literal syntax for nested data structures (like assigning a value to an array that exists as a field inside of a structure, all at once). However, other old, largely forgotten languages like Ada, Modula-2/3 and modern variants of Pascal (Free Pascal and Delphi), certainly do have many redeeming qualities and are still very usable to this day, sometimes more so than mainstream/popular solutions even, Ada being the biggest tragedy out of the ones mentioned, in my opinion.
@11:06 rank polymorphism, I misspoke in the heat of the moment.
I would love this, but give me a language and IDE that properly complete symbols for me, are context-aware, are _interactive programming_ before I even write it.
- That's why I like types. Kotlin, C# ... They are helpful sooner. They catch nearly all typos. In fact, I always tab-complete, so I never have to worry about typos.
- I tried Elixir because the erlang model is so great, and I had dumb mistakes right away (typos, wrong symbol, etc), all costing lots of time to go back to. Only found through running tests, instead of before I even made them.
- An environment that lets me make mistakes is worse than one where I notice them ~live. Worse is only type checking (and no help) at compile time. Even worse is only getting errors at runtime, which sadly, due to many reasons, is where I would end up when trying Clojure. A lot of things are a problem to do in the REPL; say I need to inspect the argument to some callback. In Kotlin, I at least see the full type spec, and the IDE is helpful. In Clojure, I need to mock-trigger the callback, hope it roughly matches production, hope I can "suspend" inside the callback and hand-craft a reply, and that's even worse: how do I know what reply it wants? Reading docs is tedious. Filling out a clear type "template" provided by the IDE is really nice and simple in comparison.
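To make that "type template" point concrete, here is roughly the kind of signature-as-documentation being described, sketched with Python type hints purely for illustration (all names invented):

    from typing import Callable, Dict

    Reply = Dict[str, str]  # the shape the callback is expected to return

    def on_message(handler: Callable[[str], Reply]) -> None:
        # The annotation itself tells you (and the IDE) what reply the callback owes
        print(handler("ping"))

    def my_handler(msg: str) -> Reply:
        return {"status": "ok", "echo": msg}

    on_message(my_handler)

The checker and the IDE can fill in or verify the template before anything runs, which is the contrast being drawn with poking at a live REPL to discover the expected shape.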
It isn’t every day I see a conference talk that reminds me why I want to work on being a better programmer. Thank you.
agreed
One of my more unmistakable descents into IT Madness:
At Conrail, I had to write out my COBOL programs on 14-inch green-and-white coding sheets, and send them over to the 029 experts in the Punchcard Department.
Next day, when they'd dropped the code into my Shared Storage, it would contain so many errors that I had to spend an hour fixing it...
So I took to typing my code directly into Shared Storage, using my handy-dandy SPF Editor...
and was REPRIMANDED for wasting my Valuable Professional Computer-Programmer Time.
SPF Editor! Now, _THAT_ brings back memories.
As a sort of self-taught programmer, I now understand the purpose of notebooks. Thank you for that.
Can you explain for me please
@@pleonexia4772 You load the large dataset once and edit/rerun the code on it over and over instead of reloading the dataset every time you want to make a change to the code.
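A rough sketch of that pattern as it looks in a Python notebook (file and column names are hypothetical):

    # Cell 1: run once per session; the slow load then stays in memory
    import pandas as pd
    df = pd.read_csv("big_dataset.csv")  # hypothetical file that takes minutes to load

    # Cell 2: edit and re-run as often as you like, with no reload
    df.groupby("category")["value"].mean()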
@@DanH-lx1jj and then you still rerun everything if you ran cells in the wrong order at some point
Make sure you also get familiar with breakpoint debugging and stepping through running code. Absolutely essential for a self-taught programmer in the "popular" languages.
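For anyone who hasn't tried it, a minimal sketch using Python's built-in breakpoint():

    def settle(balance, payments):
        for p in payments:
            breakpoint()  # drops into pdb; inspect balance and p, step with 'n', continue with 'c'
            balance -= p
        return balance

    settle(100, [30, 45, 10])

Running this pauses inside the loop on each iteration, which is a small taste of the interactivity the talk is asking for.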
@@pleonexia4772 look up Don Knuth and literate programming. Pretty common in Emacs circles to write executable code in blocks in org-mode (a kind of "markdown"), a precursor of these notebooks.
It's all spot on. Optimally, we would spend all of our time solving the actual problem at hand, instead of spending most of it fighting the details that emerge from our choice of tools/solutions.
From 18:00 to 19:35 is such a good sequence haha, I finally understand VI keybindings
It was a revelation...
Yeah, it makes VI look logical. When I first saw VI, I could never understand how people accomplished anything, but my boss [i.e.: my uncle] kept pressuring me to use it.
@@eugenetswong But the fact that a subculture of people is using, for decades, ~ IBM-compatible keyboards, with editor software that's totally mismatched to that, is kinda hilarious.
@@tinkerwithstuff it really is, as I started learning computers when I was 8yo on DOS6.22. edit_com just felt natural for the IBM PC keyboard.
When I came to the Unix world, their stupid editors always felt "wrong" and anachronistic.
Why can't I have "edit_com"? Every sane editor I ever used on PCs with Windows or OS/2 Warp was always like that. (And yes, I installed OS/2 Warp when I was 10yo on my PC.)
Linux/Unix always felt like going to the past, to a museum.
That can't be true, why would anyone ever want to use "vi/vim"?
Emacs at least made sense: you call everything with "command", which is `ctrl`, like every modern keyboard shortcut ever in any GUI program like qbasic or edit_com or msword.
Then I found "nano"; well, that solves the problem.
But the more I studied Unix/C, the more I felt like I was at a museum. Like, why? Why must I program my supercomputer x86 from 2007 like a freaking PDP11?
Don't get me started on how brain-damaged writing shell scripts is. I HATE IT. Why can't you "unixy/linuxy" guys just use Perl or Python?
And the peak of my Unix journey was "autotools". FSCK IT!
No, I'd had enough. Even CMake is better than that, even ".bat" and "nmake". I'll never, ever, ever use it; just reading the docs gives me headaches. Why, why do you need 3 abstraction levels of text generation? It's absurd; it's literally easier to write the commands manually (in `nano`) and ctrl-c ctrl-v them to get the freaking binary.
And when I'm choosing libraries for "C++", I choose NOT to use any that only provide a build script for autotools.
Let's also ignore how all code that carries the "GNU" label is basically horribly written, from a 2010 perspective, and I've read a lot, A LOT of C/C++ code. It's just amateur code, not professional, by modern standards. It baffles me that people think they are good.
If it's from a GNU project, the code is basically a "bodge". An example is "screen": not only is the code really bad, the user interface of the tool is really, really bad, like a circular saw plugged into an angle grinder that hangs from the ceiling by its cable; no wonder you keep losing your arms.
And those horrible, horrible things are worshiped as if they were the holy grail of the `Church of C`, or should I say the `Church of PDP11`. I understand the historical importance of such things, but they are super anachronistic; it's like driving a Ford Model T day-to-day. It's not good; it was good for its time, but I prefer my modern 2019 Peugeot.
I wanted to do computing, not archeology of old computing systems. That's what Unix always felt like.
I like knowing it, and experimenting with it, but I don't want to use it in my day-to-day job. But is there any other option?
The one thing I don't get is his hate on "fixed width" tho.
Whenever I program in a new environment that uses proportional fonts, I switch to something with fixed width, because without it, numbers don't line up any more. A 1 takes less screen space than a 2 without fixed width, and the code looks ugly. Even worse if you depend on whitespace, like in Python...
I have to say some things about this talk really irked me. Like the implication that APL has superior syntax because for this very specific use case it happens to be quite readable and more terse than the alternatives
Most choices are a compromise one way or the other. Compiled languages might be "dead programs" but that's the cost you pay for function inlining, aggressive code optimization, clever register allocation, known static stack layout and so on. That's why compiled languages are fast and static and not slow and dynamic. It's all a trade off
In fact just yesterday I had an idea for code hotreloading in Rust. One limitation that immediately came to mind is that every control flow that crosses the module border will have to use dynamic dispatch, mostly preventing any meaningful optimization between the two
Yeah, this exact exchange is what I was thinking about while listening to him. Compiling isn't a bad thing, it's an optimization. I use Python for rapid prototyping, for instance, but when I'm done playing and ready to do some work, I write my final version in C++, because it's fast. Yes, I've spent days compiling libraries before, but once they were compiled, I didn't have to worry about them, didn't have to wait for my computer to chug and choke on the complex human-readable parsing. Computers are not humans; don't feed them human-readable text.
This whole mentality is an offshoot of the "just throw more hardware at it." camp, one I find regrettable.
@@Nesetalis The problem is that most languages don't have both an optimized and unoptimized (introspectable) version. I want to be able to do both without changing language. I expect he does as well.
@@jrdougan Then use Haskell 😈
(admittedly, GHCi is nowhere near LISP levels of interactivity. But, it's better than nothing)
@@jrdougan I don't think that would be enough to him. It seems like he wants introspection on production. I don't see how this is possible without making some major tradeoffs like globally turning off optimizations or annotating things that can be introspected.
In fact it seems like he even wants to make the code modifiable at runtime (not necessarily the production code though).
@@gamekiller0123 I mean, why not? Basically, we're already doing it, just in a slow way. In bigger projects you usually don't just deploy and overwrite your previous version: you deploy it, let it run through the staging/production pipeline, and first make it available in addition to the existing code via an internal route for the programmers and the integration testing pipeline. Then you canary it, making it available to a small share of users and monitoring it; if nothing fails, you make it available to a significant share of users (routing them to the new version while still keeping the old one). Then, if monitoring shows nothing wrong, you make it the default, stop serving the previous version, and finally make another deployment some time later to get rid of the deprecated functionality.
So, what happens in effect is that we are changing the runtime without really switching it off (if we regard the running distributed environment as one unit of execution). But the whole process is slow (we are talking about hours to see the first changes and days till everything is finished -> very punch-card-like) and hard to debug and monitor (even with tools like distributed tracing or Kafka or w/e).
There wouldn't be anything wrong or scary if the programming model just allowed us to make these changes directly in the runtime (probably still keeping different versions) instead of doing it at the microservice level with the help of container runtimes, routing services and complicated tools for introspection. Doing what the language should do for us ends up requiring knowledge of Docker, Kubernetes, API gateways, Prometheus, DataDog, Kafka, a CI/CD pipeline, and many things I might have missed off the top of my head. In the end, most companies now have a high demand for DevOps engineers to optimize this process (-> punch card operators are back), as the complexity is too high to really expect the programmers to handle it while they are trying to solve a completely different problem (the business case).
The history of the vi arrow keys and the home/~ connection blew my mind! Now it's time to go down the Unix history rabbit hole.
This talk is so engaging it made me spontaneously clap along with the audience while watching it at home.
I do agree that having runtime reflection is a great thing so that we can look at the environment / state over time.
But i hard disagree with most of the other points in this talk.
1. comparing C / C++ / Rust / Zig with Lisp / Clojure etc.. is just plain wrong.
Anyone can see that these languages are targeted at different use cases. They are manually memory-managed, low-level languages for precise control and peak performance, to extract everything out of the hardware. Literally just a step above assembly.
2. This talk conveniently skips over things like garbage collection (and performance in general), except for a reference to a tweet about devs being okay with stop-the-world compile times but not stop-the-world garbage collection. Games or apps sensitive to latency (real-time music/video editing, trading, etc.) just cannot afford that garbage collection pause no matter what. But devs can very much afford that compile time.
3. Saying Rust and other ML-family languages don't improve software is also debatable. Rust's type system turns runtime errors into compile-time errors, making the software more reliable. Rust is being trialled in the Linux kernel because it provides a proper safe type system that C doesn't.
Most of the talk is about debugging, viewing runtime state and live interactive coding, which is more about the tooling surrounding the language than the language itself. We definitely need better tooling, and many projects shown in the video are good examples of what might be possible in the future. For anyone interested, I recommend watching the talk about the Dion format dion.systems/blog_0001_hms2020.html which also talks about editing syntax trees in a custom IDE instead of a text language with syntax.
Rust has been getting decent tooling to improve developer experience. github.com/jakobhellermann/bevy-inspector-egui for example shows all the game state live AND allows you to modify it. There's github.com/tokio-rs/console for async server apps to provide a look into the runtime state. You can always add a scripting language to your application (like Lua) and query any state you want. There are other initiatives like LunarG's GfxReconstruct, which will dump all the Vulkan state so that the developer can reproduce the GUI/graphics state exactly on their machine by receiving the Vulkan dump from the user. People are working on lots of cool ideas to help with debugging state machines.
Although, i think a custom rust specific IDE will go a long long way.
Not a rust guy, but rust is a great example of how he missed the point of static typing. It's feedback at compile time. Run time errors are errors caught by the end user if you are not careful.
All that "types slow devs" sounds like dynamically typed languages are better. Maybe they are... until your fancy no-explicit-types JS or Python or whatever app crashes in the middle of logic because, for example, you forgot to parse string into number. Languages with static types (even as limited as C) just won't allow you to run such nonsense at all. Types are helpful for reliability, TypeScript, Python typing, etc. confirm this. Better slow down 1 developer team than have 1000 customers with "oh no, I made an oopsie" crashes.
Thank you. Way too many people who don't actually work on real systems completely ignore performance and maintainability and focus way too much on writing code 'quickly'.
Ad 1. And how does being a low-level, manually memory-managed language built for peak performance stop you from having nice things like runtime-modifiable code and introspection into a live system? Those are orthogonal concepts and they aren't mutually exclusive. The C++ approach is 'pay only for what you use', but there doesn't seem to be much on offer when you actually would like to pay for those niceties.
Ad 2. It's not that devs can afford the compile time, it's that they have to in some languages. E.g. you can run Haskell or OCaml in an interactive shell while developing, but compile to get better performance for release. JIT compilers exist for various languages, so it's not like you cannot have a runtime-modifiable system that performs well. C# has a garbage collector, but you can use manual memory management to some extent when you really need to (high perf or interoperability with C/C++). It's an attitude problem: the designers of the language(s) decided it's not of enough value. The point of this talk as I see it is to highlight the value of the things presented and get language designers to think about such use cases.
Ad 3. This is only an improvement in an environment with forced compile/run cycles. You type something, launch the compiler (or your IDE launches it in the background), wait between 0.5s and 60 minutes for it to compile, and you get an error about a wrong type. You fix it, compile again, run it, and spend between a second and a minute verifying that it works as expected (i.e. ruling out problems that weren't caught by the type system).
Now compare it to: you type something while your program is running, you see clearly incorrect results on screen and on top of that you get an error. You modify the code while the system is still running and you see correct results on screen.
IMO the second workflow is much better and more efficient.
Also, look at TypeScript or Python - you can rapidly prototype your code omitting the types or add type annotations for additional safety.
TLDR: compiled/statically typed vs interpreted/dynamically typed - you could have both and achieve high efficiency in development as well as high performance in runtime, there's no need to limit yourself.
Those high level tools look so fragile, they'd never make back the time invested into them.
I never even stopped to think about it, but now I have a name for it: introspection. Before studying the hard theory behind regular expressions, I never actually understood them and just resorted to copying one from Stack Overflow. After learning the theory, I still don't write them punch-card style; instead I like using websites where you can test them in place, see explanations and so on. Now I don't feel bad for wanting to attach a Java debugger to a live server haha
Indeed, that's the same point that game devs John Carmack and Jon Blow make. The debugger is the most useful environment there is.
Also note that regex is amusingly not the same thing as formal language theory's regular language. After I learned that I started to forgive myself for having a hard time with them. en.m.wikipedia.org/wiki/Regular_expression#Patterns_for_non-regular_languages
Yeah I open up Regexr website every time I need to write a regex. Would be great if IDEs at least tried to help you with the visualization.
Debugging in a live environment is very problematic. Just imagine the order process of a web shop: you debug it in execution, mess things up accidentally, and because you stopped the process it isn't executed and other orders stop coming through as well. There is a much better way: write tests. I sometimes don't even try out my changes manually. It's tested, and if I had broken something the chances are high that some test would find it. Some testing frameworks even have watchers that execute the tests every time you save your file, so you immediately see if your changes work. If you have proper tests, there isn't much in production that can cause a failure. So instead of debugging a live server I would rather set up the development process in a way that finds bugs before they reach live. That at least works really well for me.
@@Duconi Nobody intentionally writes bugs. Prevention is good but not perfect.
Don't you still need a minimally disruptive way to fix the live environment?
@@brunodantasm It depends on which kind of regex you are dealing with. Regexes from SQL or grep are real regexes. The ones in many scripting languages that use the perl syntax are fake regexes and can be six orders of magnitude slower on hard inputs
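As a small illustration of what that slowdown looks like in a backtracking engine (Python here; the pattern is a textbook pathological case, not from any real codebase, and timings vary by machine):
import re, time

pattern = re.compile(r'(a+)+$')        # nested quantifiers: the classic catastrophic-backtracking shape
text = 'a' * 25 + 'b'                  # almost matches, so the engine tries an exponential number of splits
start = time.perf_counter()
pattern.match(text)                    # returns None, but only after a very long search
print(f"{time.perf_counter() - start:.2f}s elapsed")   # seconds here, versus microseconds for a DFA-style matcher like grep's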
Now I wish there were a part 2 of this talk that goes into more detail about modern language options that tackle these issues. A lot of the options mentioned seem near impossible to set up in a dev environment because the tooling is so outdated that I'd have to spend more time getting the environment to work than even thinking about programming in it.
It especially seems like there are no options whatsoever when it comes to hard real-time applications like audio synthesis.
Yeah, it's a peek at the future, if people decide to pick it up. Hope it comes to fruition, because bringing a coder closer to their code will only make it easier to see what actually goes on, past the abstraction of language syntax, semantics and language-specific quirks.
@Curls Check out this talk: th-cam.com/video/yY1FSsUV-8c/w-d-xo.html
I still haven't used it myself but you might be interested in Sonic Pi
Supercollider is an advanced audio synthesis tool. Faust is a nice DSL for audio.
Yeah SuperCollider, TidalCycles, Max MSP and PureData are great examples of this
What I really enjoy about Dart is that, even though it's punch-card compatible, thanks to hot reload I usually need to compile the program just a couple of times a day when I pick up some new thing. Most of the time code can be reloaded in real time with an incredibly short feedback loop. I still wish there were more features to help visualize the structure and relationships of code, but it's already so much better than most of the tools in the mobile ecosystem.
I've been chasing live system programming for years. Dart provides a lot of what I am looking for, as well as Python with hot reloading (see a project called Reloadium).
One of my ideas for my own system (that has yet to be written) is a little similar to the last example in this video. There are nodes which represent your program and there are "sparks" of execution so you can see data flow through the system.
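On the Python hot reloading point: even plain CPython gives you a crude version of that loop out of the box (the module name below is hypothetical, and unlike Reloadium this won't fix up objects that already hold references to the old code):
import importlib
import handlers   # hypothetical module being edited while the process keeps running

# ...edit handlers.py on disk, then pull the new definitions into the live process:
importlib.reload(handlers)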
It's very hard to carve a statue with a can opener. Selecting the right tool is key to success. But then most people also have an employee mindset, they are not toolmakers. It's good to see what other methodology is out there in order to set the right expectations in the users of programming environments and languages.
The idea of visual programming was not ignored, it has been tried over and over and failed in many ways. The keyboard remains the best input device available and programming languages are structured around that, program input and algorithm expression. The visual cortex can process much more but the human mind cannot express ideas faster than it can speak them or type them.
What we need are not non-text languages, we need better code visualization tools that take existing text code and annotate it in an easy-to-understand visual format. The entire compiled-artifact diatribe becomes irrelevant if the programming environment has an Edit & Continue feature that recompiles a minimal amount of text code, relinks and reloads the parts affected, so you can continue debugging from the same state, or from some saved intermediary state before the bug manifested.
The Edit & Continue bit was exactly what came to mind for me when he mentioned that as well. A cool example of a large program that needs to not fail while doing this is the Linux kernel when live patching is used!
APL actually has a shortcut for making a list like 1 2 3 4, such that you can do the example program in only 4 characters : 1+ι4 (that's the greek iota letter) instead of 1+1 2 3 4
I know a lot of people love APL, but it seems too terse to really be readable to me
@@thoperSought APL is part of the "fun thought experiment but the next guy will just want to shoot himself while reading your code" languages. No sane person would use it for large software (or at least I hope so).
@@thoperSought It is easy, all you need is a keyboard with 200 buttons
@@thoperSought What if your reading speed is reduced by 80% but the amount of code is only 10% of the alternative?
@@thoperSought The “expert-oriented” terseness of APL/J/K is scary at first, but it soon pays off, because the core languages are so tiny that you can become an expert surprisingly quickly. There are only ~5 syntax rules and ~30 symbols to learn, depending on how you count. Beyond that, basically all of the problem-solving skills are transferable to other languages, especially to APL alternatives like numpy/R/Julia/Excel.
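For what it's worth, the transfer to numpy really is that direct; assuming numpy is installed, the 4-character APL program above is roughly:
import numpy as np

print(1 + np.arange(1, 5))   # [2 3 4 5] -- the same whole-array style as 1+ι4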
1:30 The 80-column "Hollerith" punch card design is an advancement over the original same-number-of-ROWS (12) design with what I think were just 27 columns (circular holes rather than narrow rectangles), designed by Hollerith himself for tabulating the 1890 U.S. census, decades before there were "computers".
And before that the predecessors of punchcards were used to "program" weaving looms.
It's not just the code itself that can have a lot of "this isn't part of the actual problem" problems. All of the "technical bureaucracy" (certificates, hosting, provisioning, deploying, releasing, building, source control, branches, pull requests, code reviews, unit/integration tests) contributes in a big way to stuff not part of the actual problem. In addition, "corporate bureaucracy" (development process, useless roles, incompetence, corruption) is a killer. At the end of the day, maybe 5% of your mental effort goes to solve the real problem, and the end result is ruined by the other 95%. Solving a problem with 5 lines of code versus 1000 lines just gets lost in all the other noise.
Imagine a craftsman complaining that one needs to know metalwork to craft woodworking tools. Or a soldier moaning that all those logistics officers are not contributing because they don't fight. You'd just laugh at them.
Creating tools has always been an investment, spending effort on one task to make another task easier. Teamwork has always required coordination. IT is no exception.
If you become able to multiply your workforce by 50 and spend 10% of that on the "actual problem", you have quintupled your progress. If you don't want to coordinate a team, your only other choice is to work solo. And while it sounds intriguing not to deal with 49 other lunatics and their code that conflicts with everything, including your sanity, it will really slow you down, more than team coordination ever could.
I think your argument applies to just reducing LoC, but better abstractions can also eliminate certain types of mistakes. For example, a hash function builder reduces the chance that some hash function is written incorrectly and produces collisions.
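One everyday Python flavour of that idea (the class is invented for illustration): deriving the hash instead of hand-writing it removes the whole "__hash__ disagrees with __eq__" class of bug.
from dataclasses import dataclass

@dataclass(frozen=True)
class GridPoint:
    x: int
    y: int

# __eq__ and __hash__ are generated from the same field list, so they cannot drift apart
assert GridPoint(1, 2) == GridPoint(1, 2)
assert hash(GridPoint(1, 2)) == hash(GridPoint(1, 2))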
docker's imperative config and uninspectable (possibly even malware-ridden?) root containers to me is already part of that legacy mentality, people just slap it in because everyone else is doing it, not because it gets the job done the best. imperative config and orchestration is the way to go to eliminate most of the issues you mentioned in "technical bureaucracy" as you call it.
"corporate bureaucracy" is just capitalist problems. and incompetence has nothing to do with this discussion. neither of these will be solved with better programming tools.
Have you ever led a team where you were free to remove that technical bureaucracy? I am, and I haven't removed it. For each of the items you list I asked how we could shrink its footprint, but removing it entirely would have gone badly.
Certificates, hosting: Be maximally unoriginal in cloud provider setup.
Source control: Yes. Have you tried scp on text files instead?
Branches: trunk only, except for short-lived ones just to hold pull requests.
Pull requests, code review: So much more powerful than merely quality assurance. But yes, very expensive so always good to figure out when and where to skip.
@@LC-hd5dc I guess you meant to say declarative config is the solution?
Two things:
1) let the compiler blow up on the dev rather than the program on the user (especially if you seek the lowest runtime overhead, or you ARE making the runtime)
2) you can start getting this future, today, with current languages, using Jupyter notebooks and alike (e.g. literate Haskell)
Yeah, it might be interesting if we could develop a language that runs live during development (for interactivity, visualization, etc.) but can compile for deployment. Because there are instances when interactivity just isn't necessary and the required abstraction and overhead is nothing but dead weight.
I realized my habit of printing out variables and what information is being calculated in what the speaker calls "dead languages" is exactly the point he's making. There needs to be easier ways to observe the data and processes we write as it runs.
On the other hand printing out values is a lot more productive than that nonsense single-step debugging. Give me a printout of two runs of the program and a diff tool any time over stepping through it for hours trying to remember what the debugger displayed 1000 steps ago in the last program execution.
Every popular language without static types eventually gets static type support, but worse than if it got it in the start. Have you tried debugging 50TLOC+ python codebases without type annotations? It's infuriating. Type systems are a must. They don't need to be rigid or obtuse, but there has to be some mechanism for the programmer to know at a glance what to expect.
Also "build buggy approximations first" is objectively wrong. Everybody knows that generally managers don't allocate time for bugfixes and refactoring. If you teach all programmers to write buggy approximations, you're gonna have to live with code that is 70% buggy approximations. Maybe he's talking about TDD like that, but it comes off wrong.
Also I don't understand why he says debuggability is mutually exclusive with correctness - it's not... Yes, interactive code is cool, but correct, readable interactive code where all type-driven possibilities are evident at a glance is 10x cooler.
Also Rust has a REPL mode. A lot of compiled languages do. Isn't that exactly what he wants?
Also also what does he mean by debugging code in production? I really wish he'd elaborate on that.
That's an issue with managers, and not coding methodology. Not that I agree much with what he says in this talk, but heard some horror stories of managers.
And I suppose debugging in production means attaching a debugger to the currently working server or client on the customer's machine?
Debugging code in production is where you buy a system that promises it, because when a program crashes it just falls back to the interpreter prompt so you can look at all your variables and code. Then you write an entire point-of-sale system in said system and deploy it to 100 stores, only to discover that you can't dial into the stores to connect to the crashed system because they have just one phone line and they need that for the credit card machines.
Getting the night's production jobs loaded (via punch cards) as quick as possible was aided by the operators removing the rubber bands and lining up the "decks" on the counter. That is, until the night when the HALON system was accidentally triggered, sending the cards everywhere. It took quite a while to find cards stranded under equipment. Fortunately the strips on the sides of the cards helped. But it was a long, long night putting everything back together.
Suddenly I think... Was there a method for making backup cards? Sure, read the cards and punch them. But did anybody do this?
What a great talk, thanks Jack. I agree with most of what you said. I just don't know what to do about it. I think our industry as a whole is stuck in a local maximum, and I don't know how to get out of it.
it’s up to us to create the solution.
I loved this talk but I don't know why the author sounds as though typing is somehow a waste of time or insignificant. Most web devs use TypeScript or Babel because otherwise you wouldn't catch a lot of errors while writing the program.
Type checking has nothing to do with making the programming experience interactive, and in fact would aid it.
The fact of the matter is that all of our hardware infrastructure expects the user to program in either ASM or C. Runtime environments are expensive and not available at the bare metal level without a ton of headaches. Lua is promising but it's written in C. I agree that modern hardware introduces many problems that don't have anything to do with solving the business problems that make us money. Maybe more people should become computer engineers and devise an ISA that allows for visual and runtime feedback natively.
In the multimedia programming world there are Pure Data and Max/MSP, which are very similar to his last examples and very commonly used by artists. This talk helped me understand why I keep coming back to those for projects where I have to iterate on ideas very quickly.
Unfortunately, those two are a lot more stateful than the average non-visual languages, because every function has been turned into some kind of object class that, if it has more than 1 argument, every non-first argument is an instance variable that has to be set before sending the 1st argument. And if ever you want to set the 1st argument without running the function, or running the operation without setting the 1st argument, you have to use special cases like "set $1" and "bang", IF they happen to be supported by that given class. Then to manage all of this, you have to sprinkle a lot of [t b a] and [route stuff] objects and connect them with lines that quickly get hard to follow. The DSP subsystem (~) is the exception to this, but that's only because it has a fixed data rate, and then when you try to control that subsystem at runtime you have to use non-DSP objects I described above.
Terrific talk, laughed, and then I cried, then I was hopeful again.
I won't turn to carpentry just yet.
Thanks Jack.
woah cool to see u here lol. seems like some core tenets of the philosophy that underpins your work are well represented here
@@seismicdna I think we share a lot of similar ideas, I was fortunate to stay with Jack in Berlin a few years back, and meet Szymon Kaliski too. I was sad to hear that Strange Loop was stopping after this year, I've been dreaming of attending.
@@DevineLuLinvega There will be one more next year. You should give a talk!
I'll need to watch again to digest further. Working with a data team as their engineer is both a blessing and a curse.
I've seen some of the benefits of the interactivity that Jack talks about. Particularly with data pipelines sometimes the easiest way to debug it is to pull open the notebook and run it until it breaks and inspect. It's also easy for analysts with little programming experience to write things and get started and explore.
It's a curse because it does make it so easy that I'm often tasked with fixing and maintaining a heap of poorly designed programs written by many times the people than myself, with little to no consistency.
Many of the perks that Jack mentions are useful for scientists/analysts for whom programming is merely a means to the end of getting their analysis done. Not having to worry about types is nice if you just want it to work. As an engineer, working with typed systems means I _don't_ have to keep in mental "working memory", whenever I jump in to make a change down the line, what nuances of my interface I have dynamically programmed.
Like I said, will have to watch again to really understand.
@@JackRusher I'd love to! I'll try to get in touch with the event's team.
Smalltalk was one of the best live coding environments. You could change source of active stack frames. The issue was delivering the program to “production” on one of 100s of servers.
The issue is how to do product development with a team of developers on the same code base for testing and releases.
Would the team be working with a CI server?
Was it an issue of it being unclear how to address the server in question?
I’m also curious how you feel it compares to today’s approach of using containers/images
At the high school I attended in the 1970s we used punch cards typed in a KEYpunch machine (not "card punch"), and we fed them into the card reader and took the lineprinter (much faster than a teletype, although that was also an option for program output - program LISTING was always via lineprinter) printouts ourselves, so not all setups were equally primitive. Also, the reader was able to read either actual punches or pencil marks, and we developed "code cards" to allow us to make code with pencil marks (called "mark sense") so we weren't limited to the bottleneck of one or two keypunch machines for everyone to use, and I myself wrote the program to generate punched cards from marked cards, used at the school for several years after I graduated.
Dynamic, interpreted languages are better than statically typed, compiled ones?
Now that is a hot take.
Not a good take, but a hot one.
They have the potential to be much better in some important aspects like debuggability and prototyping. But most scripting languages did not move very far from static languages in these aspects, which does not make very much sense. Why sacrifice performance and stability for practically nothing? That's why dynamic interpreted languages are often perceived as inferior to static ones. It's because most of them initially were either a replacement for shell scripting or developed to solve a very specific task (like JavaScript) and then accidentally grew bigger and became more significant. It's no wonder that the most advanced languages in this respect are Lisps, because they were designed as an AI research tool from the start.
1960's Lisp I called, it wants its compiler back.
For understanding, debugging, and visualizing your program in real time? Yes, absolutely.
Really can't disagree more with the "visual paradigm superiority" part, as well as the backward compatibility stance of this talk. The opposite of backward compatibility is complete chaos, and retaining it for a long time is totally worth it. I'm a long-time vi and unix user, but I came from a Windows background and initially a lot of things didn't make sense to me. I'm in digital art nowadays, and after learning and embracing the simplicity of vim and the bash shell I can do things naturally: working with all sorts of files, writing scripts for almost all my purposes - like converting images, media files, custom backup scripts, 3d modeling and animation and many more. In Windows and macOS you can use a nice GUI, but it comes at the huge cost of being burdensome to use, resisting scripting (try automating a process that requires clicking a button in some program that doesn't support a command line interface) and so on and so forth. Our technology exists today thanks to "dead" programs that cared enough to support a wider variety of interfaces.
A fancier medium, like a nice web page with all sorts of graphics, can take this too far and turn into a presentation that tries to convey the idea through pictures but lacks the precision of a concise text description. Someone said "Writing is nature's way of telling us how lousy our thinking is". If that's not convincing enough: one of the most successful companies, Amazon, intentionally discourages the presentational style of conveying information about new ideas or technologies in favor of writing them up in a short and concise manner - if you're interested, read the article "How Amazonians share their ideas". So if you're new to programming, take this talk with a grain of salt. Clarity of thought is indispensable when you work on a complicated design, and I'd argue it's hardly achievable if you can't produce or consume good old written content.
Can't agree on "spec is always incorrect" argument. While I agree that spec is impractical for complete program it could actually be useful for some of its parts. For example, a consensus protocol "paxos" could be described in quite strict terms, proven and finally its implementation to some extent could be decoupled from the main program. Programming is about combining multiple parts into a whole and some parts (cryptography, distributed protocols ensuring livability and robustness of the system) may be a great fit for actually writing the spec.
Also can't agree on "programming is about debugging" - it couldn't be farther from real world programs running on your mobile devices or in big datacenters. Logging, metrics is what actually matters to give you introspection on what your program and your users are doing. Also ability to quickly recover - e.g. issue a patch. I'd change this stance to "programming is about testing" when it comes to professional programming as big distributed programming could be too hard to debug and reproducing a particular debug sequence could be both hard and impractical.
Thoroughly agree with your point on "spec is always correct": in the video the example of an array goes to array[i] => i + i, which is a clearly defined spec. It might not be the best real-world example, but it at least proves a counterexample exists. Not sure if you could elaborate on "logging and metrics are what actually matter"? To my mind this is equivalent to debugging; be it core dumps or just a red/green light, debugging is core to development. (Yes, I have had times where I only had an "it worked" or "it didn't work" to go on because of my company's insistence on working with AWS and outsourcing the access to someone else, which would take me a week for approval (who knows why). It is painful.) From my experience, metrics are second to getting it to work. The client doesn't care how long it takes as long as it isn't more than a couple of hours. But that may well be my limited experience talking; I have only worked in a handful of small to medium sized domains. It is worth taking into account that not every dev job is dealing with Google/Netflix levels of traffic - some are maybe 100 people a day. (Not to say your point isn't valid in your domain, but the speaker's point isn't necessarily invalid in all domains, as much as I disagree with many of his other points.)
I have used an IBM 029 key-punch. When I was in high-school (about 1980) we used bubble cards, but the near-by university had key-punches so we would go there to type in long programs. We still had to send the card decks to the school board computer center (overnight), because we didn't have an account at the university.
Lecturer: Talks about debugging in fancy visualization
Me: Cries in performance
Sooo... I work in the Energy industry, we just retired our last VAX in the last 18 months...though we still have a bunch of virtualized VAX for historic documentation. We also just replaced a real time system that had one of the very first mice ever made (it was actually a Trackball and it was MASSIVE).
Food for thought, though he glosses over why things like edit/compile/link cycles still exist. There are costs to things, and sometimes those costs aren't worth the benefit.
Yes! That's exactly what I've been saying, but when I began criticizing my uni for teaching Pascal for a whole year, I almost got cancelled for "Not respecting the history of programming" and "Not understanding that you have to start from the basics".
haha I also started with Pascal, it's not that bad tho, it's really nothing like fortran and assembler, but it is not very visual I'll admit.
Reminds me of my high school, where we were about to be taught Pascal, but the whole class decided "No. We want to learn C." And the teacher was like "But I don't know C." Another student said "I know C." and he started to teach us, which was awesome. To be fair, I had trouble understanding pointers, and only after I learned programming in assembler (a different class, for programming microcontrollers) did it click in my head and I finally understood.
This is a wonderful talk and I think it underlines a lot of the weird things that non-programmers start finding in programming languages. I was originally drawn to Lisp while I was self-learning because it was so different and because it does have some superpowers compared to other languages. It seems so wrong that later languages did not incorporate a lot of the stuff that was innovative in Lisp, even to this day.
Erlang, Elixir, CLisp, Scheme, Clojure, Clojurescript are all wonderful and make my life a lot easier as a self taught dev.
Elixir Livebook is wild
I have to admit, the idea of messing with runtime as sysadmin and security guy sounds nightmarish. Great tools in the Dev env, but in production it seems like a system that limits checks and requires increased trust of the devs.
Mind you, I'm in the sysadmin camp that says IaC's and CaC's greatest benefit is that you move AWAY from click-here-to-do-this administration and towards more formally tested and explicit administration.
finally some sense in these comments lol.
i'm curious, what other options would you suggest for runtime introspection? usually what i've seen is slapping in logging statements everywhere, but i have to assume there's a better way
Logging, metrics, and tracing are the only things I can think of, but it would be nice if you could clone a running container stick it in a mock environment and step through the process.
Incredible talk: I noticed the homage to Bret Victor's "Stop Drawing Dead Fish"!
💯
The only question I have is: "How do you mix heavy optimizations of Rust/C++ with powerful debugging and on-flight editing of Smalltalk?"
If you have an answer, I'm willing to switch.
From my experience JIT compiled code is always slower than AOT compiled. (And "lol just get a more powerful PC" or "stop running server workloads on a 10 y.o. laptop" are not valid arguments)
If somebody has an example of performance-dependent software written in Smalltalk/LISP-like languages, like ray-tracing or video-encoding, I'd like to take a look and compare it to more conventional solutions.
Also, even if JIT comes close to native compilation (at least as long as the latter does not make use of profiling and other advanced optimizations) in either responsiveness or throughput, you typically pay for it in higher RAM usage, which is unfortunately the most limited resource in shared computing in multiple ways. Contemporary Java comes to mind here; even though on-the-fly editing is obviously not a thing there, I'm already grateful for a REPL.
how about this - JIT while you're working on the code, and then AOT when you want a prod release?
i definitely don't agree with his suggestion that we want JIT in production.
As of Visual Studio 2022, you can use Hot Reload to change C++ applications while they're running. I'm actually quite surprised he didn't bring this up.
One solution for combining the heavy optimizations of Rust/C++ with the capabilities of Smalltalk is to use twin software (or simulation).
It works fairly well; recent Smalltalk distributions have used such an approach for more than two decades now.
They code their VM in Smalltalk (OpenSmalltalk-VM/Pharo) and generate C code from it.
There's also RPython that does similar things.
This approach is loved by some, hated by others.
Is this an example you consider to be a performance-dependent software?
@@pierremisse1046 I guess I'll try Pharo after learning some Smalltalk. But from reading about it a little, it still sounds like it'll bring some runtime overhead that might be difficult for the compiler to optimize. But I'll give it a go. If transpiled C will be faster than native JS, I'd consider it a win for Pharo.
I agree with the main concept of the talk - like, I'm always fighting with people over this stuff. That said, I'm a videogame programmer, and usually work in Unity, so not much choice (even if I didn't, most games use C++; Unity is C#). The thing is, in game development we have tools to implement and do many of the things he talks about. We can change variables at runtime, we can create different tools and graphs and stuff to see what's happening at runtime, visualize stuff, etc. Of course it's not exactly the same as the examples in the talk, and these things are implemented due to the nature of how a videogame works rather than for a better programming experience. Just wanted to point out a curious case of how game engines get a bit closer to this idea for different reasons.
Most of his examples are about tools, not programming languages themselves. He presents the issues as programming language issues, but in reality most of them are a lack of tooling around the languages.
Game engine editors (not game engines) are made exactly to address most of these issues. I agree with him that language ecosystems lack some basic tools, but these are also completely program specific. For games you need a 4-float type to store colors; should the language know about this and have a way to visualize the colors in its own editor, even though the majority of developers might be using the same language to code CLI/daemon programs? Does keeping the state of a program make sense when you're shipping a game to players? It totally makes sense when you're developing, for fast iteration and debugging, but when you need to release the game and publish it, you need to compile, disable hot reloading, disable debug asserts, etc., since the client (the player) won't need any of this and all of it adds a performance cost.
@@naumazeredo6448 It's because a lot of programming language communities (at the encouragement of their developers) think of these things as language issues, because they have yet to witness the beauty of a programming language getting out of a better tool's way and sitting on the sidelines for a play or two. If there is a job to be done in software development, it's assumed to have something to do with a programming language, and specifically MY programming language.
Check out GOAL, the lisp that they made Jak and Daxter with and your mind will be blown.
This was an outstanding talk; interesting but with good humor. I think I need to go take a peek at Clerk.
Those graphical representations may help some people, but they just seem like more work to interpret as they are like a new language in themselves. They should be used only when they are the better alternative to comprehension for the average dev.
yeah as far as i can tell, most of them were just showing nesting levels...
ultimately they seem more like teaching tools than daily programming tools.
Wouldn't that be because most devs use 'traditional' code representation? In a world where programming is canonically done in brightly-colored balloons connected by lines, trying to put it in a single sequential file might be the thing that's "hard to interpret". I think there's something to be gained here using visual & spatial & interactive programming, although I have not yet seen a version that sparks joy.
Maybe functions as code in a bubble, and jump points (function call, return, goto) as a line between bubbles? It would visualize program flow without giving up the details you need to actually program. IDK, but it's an interesting problem.
@@rv8891 The problem with graphical representations is that they are bad at abstraction and that they are hard to process by tools. Code is all about abstraction and tools to help you work with it.
This absolutely blows my mind. I've been daydreaming on my ideal programming language for a while now and it basically boiled down to interactive visuals in the way leif made them, combined with a notebook view of your program like clerk. I'm so excited to see other people have made these things already :D
I don't think those are properties of the programming language. Visualization and interactive visualization are features of a code editor or integrated development environment. Development tools for a lot of existing programming languages could do that if they just implemented those features. Those features would also be more useful for some languages than others. The features would be more difficult to implement for some than others too.
The video makes it sound like the language and its development tools are completely tied together. If you're choosing a language to learn or use in a project, you might as well group the language and its tools together. If you're tempted to invent a new programming language because you want to use lots of visualization, the distinction is important. You can always make new tools and new features for an old language without changing the old language. Inventing a new language that no one uses doesn't help anyone else. Inventing tools for popular existing languages will much more likely cause others to benefit from your creation.
@@IARRCSim yeah, like sure all the ASM boilerplate is annoying, but people could write tools to automate that boilerplate as you're typing and fold it away for visual convenience. as an example. i'm sure someone's already done it and i just haven't really looked myself.
This talk is fun to watch and the speaker is good, but I don't really agree with the whole argument. He spends so much time criticizing things that are what they are because of technical and physical limitations. Don't you think that people who punched Fortran onto cards would have loved to each have a personal computer to type the programs easily? Punch cards were a thing because a company or a school could only afford one computer, which was a mess of 10M transistors soldered together by hand. Then machine code? FFS, it is optimized for the CPU silicon, which is a physical thing. How many thousands of scientists work on better hardware architectures? So stupid of them not to have silicon that takes images as input. /s Same thing with C: it is a portable freaking assembler and it is very good at it. Then you finally have higher level languages (which are written in C, surprise!) and they have all been trying interactive and visual things like forever! Graphical desktops, debuggers, graphical libraries, Jupyter notebooks. Some of them are good ideas, others are weird and fade away, but it's not like people are not trying while still being attached to a physical world of silicon. So what is his point?
I know that in his opinion live programming languages are appealing, but they aren't always practical. These types of languages have a great deal of overhead and aren't suitable for certain applications. The best example of this is operating systems. In this talk he bashes on Rust a little, but the simple truth is that it was never made for this purpose. I know people want the "One Programming Language that rules them All!" so they don't have to learn multiple languages, but reality isn't so kind. Certain languages are simply better at some tasks than others.
I was on the edge of my seat yelling "yes!" more than I wanted to admit, until now. Inspiring presentation on multiple levels, Jack, thank you.
Agreed.
"Hard in a dumb way" is a concept that deserves broader awareness.
Dealing with new problems created by the tool meant to solve the original problem is common.
What ends up happening is that people either take false pride in making those symptoms the focus of their work, rather than the cause.
Or an odd sense of envy leads them to force others to suffer through outdated and avoidable symptoms even when there's a better tool at hand.
I see this often, but it usually falls apart when you approach higher levels of complexity. There are many graphical programming languages - you could even call Photoshop a programming language. The problem is there are tons of experiments but none of them really create anything "new". They spend their time trying to copy functionality from C. Stop copying C in your GUI language.
Hmm, sounds like it's better to design this kind of programming language together with a UI/UX designer.
Yeah, this is my experience too. Graphical programming looks great only with small, simple problems. It is incredibly harder to use and a waste of time when you need to solve real-world complex problems.
The issue with this kind of presentation is exactly that: it convinces management that the new shiny language is the solution to all the company's problems, but the sad reality is that complex problems are complex in any language, and learning the new shiny language takes longer than solving them. Creating tools in your language that solve your problems is the current solution.
@@nifftbatuff676 Automate for Android comes to mind. Fantastic app, I use it for a bunch of stuff I can't be bothered to write Java for and browse through Google's API docs. But large programs are an absolute nightmare when everything is drag and drop.
Agree with this. You need the right tool for the job but a specialized graphical tool is really only good for solving problems that can be modeled graphically. I have wasted many hours with new tools that are supposed to bring about a new paradigm in how we program and in the end we always end up abandoning them because they never quite fit the problem at hand. The seemingly small gap between the cool demo example and what you actually need to accomplish ends up becoming an impassable chasm. In the end, tools are built by biased people who are thinking in terms of how to solve problems A, B and C but I'm stuck trying to solve problems X, Y, and Z or else a whole new class of problems, #, % and ^ that no one has ever considered before.
Besides spreadsheets, I think the other very popular implementation of live programs that can change (and be inspected) while running is the relational database management system. They are robust, easy to use and experiment with, very often hold most of the business logic, are easy to change, self-documenting, self-explaining (EXPLAIN and DESCRIBE etc.), highly optimized (partly automatically, and you can give many hints to improve the optimization -> comparable to typing in programming), and secure. Indeed, the possible constraints are much better than in usual programming languages (with type systems and similar) in terms of expressiveness, performance, durability, versioning, self-explanation, transactional behaviour and atomicity. They also allow grants at different levels of detail, while in classic programming a programmer can very often see and change everything or nothing, but not much in between (here it's more like: yes, you can do experiments on the running system, but it's guaranteed that you can't change or break anything if you only have rights to select from other namespaces and to create views and similar things in your own namespace, with no rights for altering anything, and you get automatically deprioritized relative to the production system when performance is in doubt).
This liveness of the system might be one part of the story why an incredible amount of business logic is inside such databases and not in classical programming logic.
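A tiny sqlite3 sketch of that self-explaining quality (the table and query are invented for illustration): you can ask the live engine how it intends to run a statement before you rely on it.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
con.execute("CREATE INDEX idx_customer ON orders (customer)")
# The running system explains its own plan:
for row in con.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'alice'"):
    print(row)   # the detail column should mention idx_customer rather than a full table scan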
you're not entirely wrong, but i think the reason why people use it is because the abstractions are easier to grok at first glance. and the limitations of the languages and tools mean that you can't get completely lost.
i still don't agree about the "liveness" of the system being key necessarily though. the whole "compile" vs "run" notion is just an abstraction; i'm not gonna get into the whole "compiled" sql statements topic, but what i will say is that you're going from "i'm waiting hours for code to compile" vs "i'm waiting hours for this statement to run" i don't really see much benefit there. the approachability benefits come from decent tooling imo (like having a small sample of data and seeing how it flows through your query), which programming tools can also implement.
Interesting points there. I cut my teeth in a naive environment where all backend code was in the RDBMS server. It was very productive and carried a lot of the efficiency and elegance you note. But it was also cowboyish and error-prone, and felt more like jazz improv than crafting a great symphony. When I then went and studied proper software engineering, I ate up every sanity-restoring technique greedily.
The algorithm sent me here. What a fascinating take on compiled/batch vs interactive programming.
I've watched up to 11:46 at this point... and I'm getting a smell. I'm not saying he's not correct overall, but in his first two examples (assembly and C) he's writing a re-usable function that, given an array, creates a new array, stores into the new array the values of the input array incremented by 1, and then returns the new array. In his last three examples (Lisp, Haskell and APL) he's hard-coded the array as a literal and the result of the function isn't being returned into a variable for further use. He's NOT doing the same thing. He's purposefully left out 'boiler plate' or 'ceremony' code or whatever you call it to make the difference seem more dramatic than it really is.
ld (hl),a ; inc a ; ld hl,a ; inc (hl), something like that in a loop is what those other examples seem like, basically running through the memory and incrementing
The more general case is even shorter in Haskell:
f = map (+1)
I find your view of things very interesting. I observe there is a lot of activity again in the "visual programming" space. However, while I do agree at least to some extent with the sentiment, I find that textual models are still going to persist.
I would love to offer a notebook interface to "business" people, so that they can simulate the system before bothering me (it would surely cut down on the feedback loop). But for the most part I think 70-80% of any codebase I've worked with is "data shenanigans", and while I do like the textual data there to be visually adequate (formatted to offer a better view of the problem), I do not find it enticing to expose it.
Another problem I find is that UIs are, and will likely always be, a very fuzzy, not-well-defined problem. There is a reason why people resort to VScode - it is a text editor. So you also have this counter-movement in devtools (counter to highly interactive/visual programming), returning to more primitive tools as they often offer more stable foundations.
I agree about a better notebook-like system modeling tool for business people.
As a developer, whenever I have to do any spreadsheet work, I'm also struck by how immediate & fluid it is compared to "batch" programming ... but ... also clunky and inflexible to lay out or organize. I'd love to see a boxes-and-wires interface where boxes could be everything from single values to mini-spreadsheets and other boxes could be UI controls or script/logic or graphical outputs, etc.
Now that I think about, I'm surprised Jack didn't mention Tableau which provides a lot of the immediate & responsive interaction he wants to see in future IDEs.
The PL/I Checkout Compiler, under VM/CMS was my first use of a tool set that provided a powerful interactive programming (debugging) environment. The ability to alter code at runtime was a treat, generally only approximated by even today's debuggers. Progress seems to be a matter of small steps forward, interspersed with stumbling and rolling backward down the hill quite often.
Building buggy approximations is my specialty.
My stock has risen! Licklider taught me to teach and Sussman taught me to program, some four decades ago, and now they're both mentioned in this excellent talk.
Speaking as someone who helped port Scheme (written in MacLisp) to a DEC timesharing system for the first Scheme course at MIT, I don't know why your LISP examples aren't written in Scheme. Harrumph
😹 I prefer Scheme myself! I used SBCL in these examples because it is a very strong programming environment (compiler, tooling, &c).
That's a line printer, not a Teletype. (And yes, I too wrote Fortran on punch cards.)
There are a lot of gems in this talk and I like the really "zoomed out" perspective. But talking about all the "traditional" programming languages we use, I couldn't agree less with this statement:
27:24 "And I think it's not the best use of your time to proof theorems about your code that you're going to throw away anyway."
Even though you might throw away the code, writing it obviously serves a purpose (otherwise you wouldn't write it). Usually the purpose is that you learn things about the problem you're trying to solve while writing and executing it, so you can then write better code that actually solves your problem after throwing away the first code. If this throwaway-code doesn't actually do what you were trying to express with the code you wrote, it is useless though. Or worse: You start debugging it, solving problems that are not related to your actual problem but just to the code that you're going to throw away anyway. "Proving theorems" that can be checked by a decently strong type system just makes it easier to write throwaway-code that actually helps you solve your problem instead of misleading you due to easily overlooked bugs in it.
I dislike that Tweet towards the beginning about how programmers will feel good about learning something hard that they will oppose things that make it easier. For several reasons. Firstly, it could be used to automatically dismiss criticism of something new as the ravings of a malding old timer. Secondly, it paints experienced programmers as these ivory tower smug know-it-alls. Thirdly, it implies that behavior is unique to programmers. Do old time programmers sometimes look down from their ivory towers and scoff at their lessers? Absolutely, and I am no fan of that either. But the Tweet at face value could lead to someone with a new idea (or something they believe is a new idea) being arrogant.
The bit with the increasingly smaller ways to write an incremented array ignores the fact that the more you remove the semantics that more obtuse languages have, the less clear it is what the program is _actually doing_ beyond the high-level cliff notes. This can lead to extremely painful debug sessions, where the code you write is completely sound at a high level, but the syntactic sugar is obfuscating a deeper problem. Lower-level languages have more semantics than they really need, but the upshot is that they allow more transparency. They are often more difficult to debug for average issues, but it's significantly easier to debug esoteric issues if you slow down and go line by line. Not to mention they make very specific optimizations easier as well.
A lot of the ideas in this video have been tried and didn't stick around not because of adherence to tradition, but because they simply were not as effective. Visual programming in particular. It has the same problem as high level languages in that it's easy to capture the essence of the program, but not the details. Ideally you would have both a visual representation side by side with the text-based semantics.
tbh, C or even asm is still obfuscating stuff from you. i would say it's more a matter of knowing the hardware you're running on and knowing the quirks of the language and the compiler. (which would naturally take years.) blaming the language is not entirely correct imo.
Not long ago I started using emojis 🙂🔈⚠️⛔ in terminal output.
Reading logs has never felt better.
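If anyone wants to try the same trick, a minimal Python sketch (the level names and messages are just examples):
import logging

logging.basicConfig(format="%(levelname)s %(message)s")
logging.addLevelName(logging.WARNING, "⚠️")
logging.addLevelName(logging.ERROR, "⛔")
logging.warning("disk usage at 91 percent")   # -> ⚠️ disk usage at 91 percent
logging.error("nightly backup failed")        # -> ⛔ nightly backup failed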
(re-posting, looks like my comment got flagged for having a link in it)
Excellent talk! One thing: Zig *will* have live code reloading, there's already been a proof of concept working on Linux. It's just not at the top of the priority list, given the compiler should be correct first!
I usually do this to better understand computing. I don't even work as a developer of any sort, so I'm just doing this as a hobby and it's fun to have these challenges
Woah, this was a refreshing talk! Thank you Jack Rusher, whoever you are, for sharing thoughts which I never knew I had as well - let alone that I agree with
I don't think image based computing is enough of a win to justify the switching costs in most cases. The feedback loop is very fast developing on "dead programs" - it's not like we push to CI/CD every time we want to see a change reflected in the program. Then there are the downsides of images, like not having version control. Instead of "othering" mainstream programmers as ignorant, build something so incredibly better that no one can ignore it. But that's a lot harder than giving talks about how everyone is doing it wrong.
That's the problem... How often do you fill your codebase up with GOTO followed by a literal instruction number?
The answer should be never... but when Dijkstra published “GOTO Considered Harmful” it took a literal generation of people to die off (and new people not being taught it, de facto) for it to become normal to follow. But structured programming via if/else/for/switch, and running through a compiler, also isn't the end of history. But we keep teaching like it is. And generations of developers will need to move on (one way or the other), before other techniques gain widespread adoption.
It's “Don’t reinvent the wheel”; don't question or rethink the monolith that you have.
Well, why not? Is a giant slab of granite that's been having corners fractally chipped away into smaller corners, and then polished and stood on its side really what we should be putting on modern cars, or airplane landing gear?
Would we have bicycles if each tire was 36 inches tall and 300lbs? Would we have pulleys and gearing? Water wheels wouldn't have done a whole lot of milling...
Maybe the concept of the wheel is valuable, but the implementation should be questioned regularly...
Lest we perform open-heart surgery with pointy rocks, and give the patient some willow bark to chew on, through the process.
But you could version control an ASCII-encoded image, or at least a text encoding of the image environment/code, correct? I haven't heard many people talking about that.
Image based programming should be an exploratory opportunity to mold the code to our will as we get a better picture (figuratively and literally) of the system we want to make before pushing forward towards "batch mode" programming for your final code output. Maybe there ought to be a batch calculation mode for huge data crunching before integrating that hard coded final answer onto the "live" portion of the code. In fact, Godbolt does present a great opportunity for exploratory batch programming if you're working with small bundles of C/C++ code and you want A/B comparisons of different algorithm implementations.
@@SeanJMay Images have been around for at least 40 years. I think a more realistic assumption, rather than ignorance or cultural resistance, is that whatever benefits they offer are not compelling enough for the mainstream to adopt. But rather than debating whether they are better or not, you could be building the future with them right now!
@@SimGunther Yep that's possible, Squeak keeps a text representation of the code (not just bytecode) and tracks every change to the image so errors can be undone etc.
At no point did I say you should use image-based systems. In fact, I showed a number of environments that use source files that can be checked into revision control to achieve benefits similar to those systems. :)
I still have warm memories of being able to do commercial Smalltalk development. Nothing else has ever felt quite the same.
I do not know anything about Smalltalk, so what kind of applications were you producing? Who were the customers? What computers were running those applications? What years? Why did Smalltalk not become as popular for corporate/business applications as C, C++, Java, and C#?
Smalltalk still exists; Squeak, Pharo, and VisualWorks are just a few examples!
I have used the card punch machine myself. I wrote a simple toy Pascal compiler in PL/I on an IBM 370 for a college course assignment (the "dragon book"). Compile jobs were processed once a day at the college computer center. Now we have come a long way and are living in an age when AI can write computer code. What wonderful days to live in.
I feel like there’s a few arguments being made here, two of which are: program state visualization is good and less code is better. I agree with the first, debugging of compiled languages has a *lot* of room for improvement. If you think the most terse syntax is always best, please suggest your favourite golfing language during your next meeting :)
Programmers today are wildly diverse in their goals and there’s no hierarchy on which all languages exist. An off-world programmer will need the ability to change a deployed program, one researcher might be looking for the language that consumes the least energy for work-done, an avionics programmer wants the language and libraries that are the cheapest and fastest to check for correctness. If you feel that all the features discussed in the presentation should all exist in one language maybe you don’t hate Stroustrup’s work as much as you think.
To be fair, he doesn't want less code _in general,_ just less code _on things unrelated to your problem._ Hence all the descriptions of physical punch cards, which take so much effort to get into position, and all that effort has nothing to do with your programming problem.
"If you feel that all the features discussed in the presentation should all exist in one language"
He isn't demanding that one language should exist for all programmers, he's saying that good developer user experience and visualizers should exist for all. Because every programmer, no matter their goals, needs to read code and understand how it works.
I think he conflates type safety with bad dynamic ergonomics. We can do better, and have both decent types and decent ergonomics
I don't get the point. People have tried all the ideas presented here in various languages. If you don't understand the reasons behind standards, or why a mediocre standard that's actually standard is often more important to have than a "superior" one that doesn't develop consensus, you're missing a dominant part of the picture.
For example, the reason TTYs are 80 columns wide is essentially because typewriters were. Typewriters weren't 80 columns wide because of computer memory limitations, they were that wide because of human factors -- that's a compromise width where long paragraphs are reasonably dense, and you also don't have too much trouble following where a broken line continues when you scan from right to left. Positioning that decision as just legacy is missing some rather important points that are central to the talk, which purports to be about human factors.
I could start a similar discussion about why people do still use batch processing and slow build systems. There are a few good points in here, and if what you want is comedy snark I guess it's okay. But most of the questions raised have been well answered, and for people who have tried interactive programming and been forced to reject it because the tools just don't work for their problems, this talk is going to sound naive beyond belief.
The presenter seems particularly ignorant of research into edit-and-continue, or workflows for people who work on systems larger than toys. The human factors and pragmatic considerations for a team of 10 working for 2 years are vastly different than someone working alone on data science problems for a couple months at a time.
The one thing I'll give the presenter is that everyone should give the notebook paradigm a try for interactive programming.
For SQL Server, we actually have a bunch of extensions to WinDBG in order to introspect dumps (and while debugging). So you can absolutely have interactivity and introspection with programs even if you are working with native code.
by this definition, we've long had it for C and C++ because we have debuggers.
Recently, looking at fast.ai, I can see the live notebook method has huge benefits compared to my line-by-line PyCharm script. Fascinating coverage of first-principles programming. I will buy the Kandinsky Point and Line to Plane, data rabbits, and Racket. I'm going to print and display the chemical elements for the tea room wall. There are so many stupid methods for doing simple things, but the complexity gives some people a warm feeling that propagates the fool's errand. Great talk.
Yeah, I'm really unconvinced by most of that talk. Although some ideas are worth drawing from.
Computers *are* batch processors. Every program will have to cold-start at least once in a while. That's the very reason we still write programs. That's even the reason they're called programs: it's a schedule, like a TV program.
If all you care about is the result for some particular data, then sure: do things by hand, or use a calculator, or a spreadsheet, or a notebook. But rest assured that what you're doing is not programming if you don't care about having a program that can be re-run from scratch.
And unfortunately, outside of some trivial cases, we can't just fix a program and apply the changes to a running program without restarting it from scratch. But having some kind of save-state + run new lines of code could help with the debugging.
Also, any kind of visual representation of programs and data won't scale beyond trivial cases. Visual programming is nothing new, yet it hasn't taken over the world. Why? Because the visual representation becomes cluttered for anything beyond the equivalent of a handful of lines.
The tree matching example is nice and helpful, but very specific, and I doubt it could find wide use in practice. Most of the time, any visual representation deduced from the code would be unhelpful. That's because the basic algorithm is hidden among the handling of a ton of edge cases and exception handling. And an automated drawing tool wouldn't know which code path to highlight.
I do agree that types *do* get in the way of fast prototyping, when you discover what you wanna write as you write it. And that's why I love Python. Fortunately, not all programs are like that. Many programs just automate boring stuff. And even those programs that do something new usually have a good chunk of them that is some brain-dead handling of I/O and pre/post-processing. Those parts of the code could (and probably should) be statically typed, to help make sure they're not misused in an obvious way.
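To make that concrete, here's a tiny Python sketch of the split I mean (the input format and function names are just made-up examples): annotate the boring I/O edge so mypy or an IDE catches obvious misuse, and leave the exploratory core untyped while you're still figuring it out.

from pathlib import Path
import json

# Boring, well-understood boundary: annotate it so obvious misuse is caught early.
def load_samples(path: Path) -> list[float]:
    # Hypothetical input format: a JSON array of numbers.
    raw = json.loads(path.read_text())
    return [float(x) for x in raw]

def save_report(path: Path, lines: list[str]) -> None:
    path.write_text("\n".join(lines))

# Exploratory core: still discovering what this should even compute,
# so no annotations yet -- shapes and types are allowed to shift while prototyping.
def analyze(samples):
    total = sum(samples)
    return {"n": len(samples), "mean": total / len(samples) if samples else 0.0}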
R is also a one-liner: 1 + c(1, 2, 3, 4). Or even better: 1 + 1:4.
Every time I work in R I feel like I'm back in the era of magtapes and getting your printout from the operator at the computer center. I reflexively look down to check my pocket for a pocket protector. ;-)
it is also an array language at heart after all
Entertaining but deliberately simplistic. How desirable would Linux be if its kernel were written in Haskell?
Eliminating an entire class of exploitable bugs? That would be amazing
@@SgtMacska also probably eliminating multiple classes of machines due to performance
@@SgtMacska Haskell definitely has its uses in low-level applications though. In relation to security, it's a lot easier to prove that Haskell code and its compiler are mathematically correct (which is a requirement for some security standards), and therefore that the runtime is secure, than to prove the same for another language. In general, Haskell's clear separation of pure parts is very good for security, since that's a large part of the codebase where you have no side effects.
Performance-critical applications should be written in something like C or Rust (but not C++, f**k C++) - when you know beforehand what you need to do, and optimization and fitness of the code to the hardware are the main concern, not modelling, getting insights, or experimenting. The talk was mostly about development environments, and it doesn't make much sense for a kernel to be wrapped up in a notebook-like environment, because by definition a kernel runs on bare metal. But even there, OS devs can benefit from modeling kernel routines in a more interactive environment, using something like a VM, before going to the hardware directly. Well, they already use VMs; developing a bare-metal program from scratch without a VM would be an insane idea. What I'm talking about is not a traditional VM but a VM-like development tool that trades the VM's strictness for interactivity and debuggability. Of course, code produced in such an environment would need to be modified before going to production, if not rewritten entirely, but we kind of do that already, by first writing a working program and only then optimizing it.
we should eschew the kettle, for how desirable is it to make chilli in a kettle?
Flutter does a semi-good job with the "make changes as you go" thing
My one real criticism of this talk is that there _is_ in fact value in being consistent over time. Change what needs to be changed, and the core thesis here (TL;DR always look for better ways to do things independently of tradition) is basically right, but sometimes arbitrary things that are historically contingent aren't bad things.
The 80 column thing is a good example to me. It's true that we _can_ make longer lines now, and sometimes that seems to have benefits, but the consistency of sticking to a fixed, fairly narrow column width means highly matured toolsets work well with it - whether that's font sizes and monitor resolutions, indentation practices (esp. deeply nested stuff), or just the human factor of being accustomed to it (which perpetuates, since each new coder gets accustomed by default to what their predecessors were accustomed to), making it if not more comfortable, at least easier to mentally process.
Maybe there is some idealized line width (or even a language design that doesn't rely on line widths for readability) that someone could cook up. And maybe, if that happens, there would be some gain from changing the 80 column tradition. But until then, there is value in sticking to that convention precisely _because_ it is conventional.
Don't fix what ain't broke -- but definitely _do_ fix what _is_.
Rather, let me clarify by addressing specifically the "visual cortex" thought. It's absolutely true that we should explore graphics and pictures and how they can be useful - but it's not by any means obvious to me that it's actually worth dismissing 80 columns for being antiquated until and unless graphical systems actually supplant conventional linear representations.
for those wondering, you get most of those things in C#. You can "skip" the hard parts about types with the dynamic keyword, see that it works, and then write the code the proper way. You can create notebooks with C# code (and F# for that matter). You get hot reload for both UI apps and full-on services. You also get one of THE BEST features ever, edit & continue.
You can hit a breakpoint, change the code, and continue. What's more, you can drag the current-line arrow back to before the if statement you just changed and see right away whether the change fixed the issue.
Then you can attach the debugger to living program AND DO THE SAME THING. Think about that for a minute :D
All of that runs reasonably fast in the end product and is very fast to develop with (GC for memory, hot reload). You should really try .NET 7, which is the latest version and, as always, much faster than previous versions.
Compile and run is awesome. You can pry it from my cold, dead hands. Nothing's better as a programmer than finding out that something's wrong _now_ instead of 5 hours into a long simulation when it finally reaches that point in the code - or worse, in production, because an easily fixable mistake only happens under a weird set of conditions that weren't tested for. Statically typed languages help me tremendously, and if you require me to abandon that, I'm afraid I can't go with you.
literally nothing about a static language will prevent runtime bugs in the manner you've described. if what you said was accurate, C would never have memory bugs. right?
doesn't sound like you even watched the talk in its entirety.
Well, the author did say he doesn't care much for correctness.
@@LC-hd5dc rust compiler does prevent almost all memory errors
@@jp2kk2 You're missing the forest for the trees here. Their point wasn't specifically about memory errors, which Rust is specifically designed to avoid, but about run-time errors in general. The compile/run cycle is a pain in the ass for dealing with run-time errors, and there's no way you're ever going to fully avoid them. Why don't we have more tools and means of visualizing our code in real time when it could be so valuable?
Being a C# developer, I am not a huge fan of Python's principles for programming. But I really do see the value that Python provides within the ML world.
Imagine having a list of single or pairs of numbers and you want to get the absolute difference when it's a pair.
Python (reciting from my memories): x = list.diff().abs().dropna()
C# using LINQ: x = list.Where(p => p.Count == 2).Select(p => Math.Abs(p[1] - p[0]));
Python is so much "cleaner" at communicating what you are doing. And then you add just another 3 lines of code to calculate a density function, plot it, and query the 95% quantile, all within the notebook. That's really cool.
Now if only Python allowed you to put those on different lines without resorting to Bash-era backslashes, wouldn't that be nice? 🙃(probably my 2nd or 3rd biggest gripe with Python ever)
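For anyone curious, here's a minimal pandas sketch of what that chain is doing - the column names are made up, and NaN marks the "single" rows; note that wrapping the expression in parentheses lets the chain span several lines without any backslashes:

import pandas as pd

# Hypothetical input: each row is a single number or a pair;
# NaN in "second" marks the single-number rows.
pairs = pd.DataFrame({
    "first":  [3.0, 7.0, 2.0, 10.0],
    "second": [5.0, None, 9.0, None],
})

# Absolute difference, kept only where an actual pair exists.
x = (
    (pairs["second"] - pairs["first"])
    .abs()
    .dropna()
)

print(x.tolist())  # [2.0, 7.0]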
Python is a terrible language for ML. Good languages for ML are functional. Just look at the arguments (ie hyper-parameters) to any ML algorithm, all of them are either booleans, doubles, or functions. Python is used because Physicists are smart people but terrible programmers and never learned any other languages. The ML research community (computer scientists) kept functional languages alive for 30 years pretty much by themselves. They didn't do that for fun, they did it because functional languages are the best for AI/ML programs.
Python is only popular in data science (it isn't popular in the ML research community) because universities are cheap with IT support and because physicists think they can master other fields without any of the necessary training, background, or practice. Python is a sysadmins language designed to replace Perl. It is good at that. But since sysadmins are the only type of IT support universities provide to their researchers, guess which language they could get advice/help/support for?
honestly LINQ is really nice if you're coming from the world of SQL. i'm not saying C# is perfect (i don't even use C# these days) but LINQ has never been a point of complaint for me. plus if you think LINQ is bad, check out Java...
@@LC-hd5dc This list.diff().abs().dropna() can be done in c# with 5 minutes of coding. And you can reuse this .cs in every program and subroutine.
@@LC-hd5dc Oh I absolutely love LINQ, but sometimes the separation of "what to do" and "on what to do" makes things complicated, and unless you want to create 50 extensions, it'll be more verbose but still less clear what a LINQ expression does.
I started my programming on punch cards, and because we were using the local university computer we had to use the manual card punches LOL. The electronic one was reserved for the university students. That was in the late 1970s. The students who were actually taking a computer studies course were allowed to use the teletypes for one session a week.
When I came back to computing during my Masters in Theoretical Quantum Chemistry, I was using an online editor and 300 baud terminals. The 1200 baud terminal was reserved for the professors LOL. That was in 1984.
I taught mathematics in Britain, and when I moved to Germany I taught English. One of my last students, who was taking private lessons to pass an English exam to get into an English-speaking university, was shocked when I explained that I was 31 years old when I got my first Internet-enabled computer in 1993, and that it had a 40 MB hard disk and 1 MB of RAM. My mobile phone is now many times faster and more capable than my then pride and joy.
Since retiring I have been exploring various forms of Lisp on a Linux emulator on my Android phone, and the feedback is so much better and quicker.
He's more wrong than he is right. I'd love to be convinced, but I think most of these prescriptions would bring marginal improvement or go backwards. The better a visual abstraction is, the more specific it is to a certain problem and the more confusing it is for others. The more power you give a human operator to interactively respond on a production system, the more likely they are to rely on such an untenable practice. The one thing I'd like out of all this is the ability to step into prod and set a conditional breakpoint that doesn't halt the program but records some state, so I can step through it afterwards.
EDIT: Reached the end of the video and thought his Clerk project, and the idea of notebooks being part of production code, are fairly nice and much more limited than the earlier hyperbole.
Man, I enjoyed this a lot more than I expected to.
A really strong start - we do a lot of dumb stuff for historical reasons - but the second half seems to totally ignore performance and, honestly, anything outside his own web realm. The reason programs start and run to completion is that that's what CPUs do. You can abstract that away, but now you're just running user-level code and turning it into an undebuggable, unmodifiable language feature. Sure, functional languages look neat, but where are your allocations? How are they placed in memory? Are you going to be getting everything from cache or from cold RAM?
Reminds me of the Gary Bernhardt talk from Strange Loop 2012.
He speaks English so well😂. Really like his way of introducing the history of all those punch cards and TTYs. Plus, the idea of interactive programming is quite useful in my opinion.
Clerk is then basically the same as Mathematica, just in lisp and with fewer functions
You nailed it.
The absence of references to Mathematica in this presentation really stings.
Amazing talk! Thank you for sharing it with the world. Lisp and Erlang are amazing innovations which are about to go obsolete, and that makes me sad
Sometimes notebook/live coders create programs only they can run, because there is a certain dance you have to do with the machine, which acts like a magical incantation that produces output.
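A contrived but concrete Python example of that dance (the "cells" are just comments here): run the second cell twice in a live session and you get output that a fresh top-to-bottom run of the same file will never reproduce.

# --- "Cell 1": set up some state ---
results = []

# --- "Cell 2": compute and record ---
results.append(sum(range(10)))
print(results)
# Fresh top-to-bottom run prints: [45]
# Re-running only "Cell 2" in a live session prints: [45, 45], then [45, 45, 45], ...
# The file on disk and the output on screen quietly drift apart.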
The reason for Haskell style types is that the compiler will help you find the missing piece that fits the hole. Due to my low working memory I love machines that think for me.
With Unicode, most languages already support glyphs far weirder than anything you get from cosplaying a TTY.
Absolutely awesome presentation of the view from the "I've got plenty of horsepower to meet my needs" seats, which is pretty much everybody these days.
This was such a fantastic talk, both in the subjects covered and in the very charismatic presentation. Thanks for sharing your insights Jack!
Use interactive programming to write the interpreter for the programming language.
Other than to show off one's own cleverness, is there any reason to flash a contextless table of numbers at an unsuspecting audience, berate them for not knowing that it represents an xy plot, and then tell them that only visualizations matter? Is there any reason to whine about the complexity of RISC assembly? You like "1+ 1 2 3 4"? Well, someone had to write that, probably in assembler. Did developers make things difficult for themselves back in the day? Or did they live through extreme technological limitations?
This is the kind of talk that I hate. It presents just enough valid and valuable information not to be considered complete nonsense, but couches it in what I would term willful disinformation. Sure, those of us who are old enough to have learned Fortran and assembly in high school on an IBM 1130 know better, but there is just enough garbage here to convince kids coming out of high school that there's no need to learn the basics, that they can just make pretty pictures. They've already got enough problems with YouTube videos telling them they don't need no education, that they should learn to code and become "coders" (whatever the hell that means) and they'll get their dream job.
Funny thing about programming: it's just like typesetting. No matter the technology or era, the basics never change.
Visualizations have their place, but so do columns of data.
I truly do appreciate the insight into why vi works the way it does, and I did start out using it on VT100 clones. But you know, the past was a different world, and the developers who lived and worked through it, and the technology they used and built, gave us the foundation for all the pretty pictures everyone wants to see today. They should be celebrated rather than mocked.
👋🏻 I'm one of those guys who built a bunch of the stuff we still use! What I'm saying here is that we now have much better machines that give us the possibility to improve dev UX greatly. :)
I agree with you 100%. It's easy to bash the "old way" of doing things when you don't respect the hardware limitations. Does he think people used punch cards because they wanted to? It was simply the reality of the hardware not being capable enough at the time. All of the languages he mentions are still coded in, or rely on, code that at the deepest level in the processor is machine code. Of course, I would love a language that builds in debug features rather than bolting them on as an afterthought. It's great to be able to see code functioning without having to recompile each time. It's a common technique, when trying to get someone to swap their trusty old tool for another whiz-bang tool, to claim the new one will do everything and to make the old tool look like it came out of the caveman era... so, yeah, I am not buying the whole thing. But I do value the overall concept that we can find better ways to do programming. Nevertheless, the code still has to work on the hardware. Hence the limitations.
@@marcfruchtman9473 yes! the more languages try to hide the hardware away, the more that the language and its users suffer for it. i'm not saying everything should be written in asm, but there _must_ be a balance between mental load and abstraction, not too much of either, otherwise the language becomes painful in some way.