Instead of the Wikipedia article on distributed systems, you could have referenced actor systems. That's where message passing happens between encapsulated entities within one program, and it's often used as a *solution* to the complexities of true distributed systems. See Actix, Bastion, Akka, and of course Erlang/Elixir. The Pony language, too.
Watch out y'all, paradigms are like tools: different tools work well for different jobs. Don't fall into the trap of trying to solve every type of problem with one paradigm. There are always trade-offs. The modularity and easy refactoring you gain with functions, you pay for with cascading abstractions: every function that takes another function as a dependency makes your program harder to read, so you have to be very careful about your abstractions.
Paradigms are like blunt objects. Everything boils down to microcode that has almost nothing to do with the assembly code created from source compilation. If we had control over the microcode and were able to debug it easily, that could be game changing. Features are where you need to look for inspiration to make a minimal set of scalpels and saws that let you do things you want to do without disrupting your workflow. Finding those features will allow you to have the most powerful language out there because it just so happens to be the most minimal.
That depends on how you use the language. I'd argue that your functions can always be readable and independent, and I don't really see why that shouldn't be possible. You seem to be projecting your own experience onto all of us. True?
@@shalokshalom I'm sharing my practical experience on a big app I worked on: it was very hard to keep track of cascading abstractions that were 4-5 levels deep. That made it easy to swap out parts of the app, but at the expense of making it hard to actually understand what the code is doing. I don't have anything against functional programming; it's great in some cases, but it doesn't make all your problems go away. Sometimes it introduces other ones, and it's up to you whether you want to make that trade-off. I've also heard other people say "I don't see how that can happen in theory", and I'm here to tell you it happened to me and it wasn't fun (no pun intended). Up to you if you want to believe me, of course; I'm just sharing my opinion.
I still see Alan Kay's perspective on OO design as exactly what the Actor model offers: a set of distributed objects (physical location is transparent) that maintain and encapsulate their internal state and communicate by passing messages (ideally in a non-blocking way).
Bjarne did not say that C's syntax (curly braces, parentheses, semicolons, etc.) was an experiment that failed. What he said was that the "C declarative syntax" was an experiment that failed, a view I can sympathize with. Much of the rest of C's surface syntax for conditionals, loops, functions, etc. lives on in C++, Java, JavaScript, the new kid on the block Rust, and many others, where it works very well. Far from a failed experiment.
C has very few declarative (as in declarative vs. imperative) features. WTH are all three of you (Bjarne, Richard, OP) talking about? I think you mean three different paradigms, but I can't follow.
@@andreashabermas7964 Quite so, C has very few declarative (as in declarative vs. imperative) features; C sits in the imperative/procedural paradigm. For that reason, when I read "C declarative syntax" I think only of C declarations: the statements we use to create (declare) data, for example "int i = 2". Of course "int i = 2" is simple and clear enough, but declarations in C can get pretty horrible to disentangle when they involve pointers, pointers to functions, etc. Such declarations get even weirder in the C++ world with its dozen different ways of initializing things. The failed experiment made worse! :)
@Dirk Knight Don't get me wrong, I love C. It's one of the smallest, simplest high-level languages that compiles to native code worthy of the name. It allows one to do most of what would otherwise have to be done in assembler, but portably. A C compiler can be written by one person in a not unreasonable amount of time, and C compilers can run on very small machines. As such, I'm prepared to accept all the problems that can arise through use of pointers, endianness, etc. The syntax of C is mostly great, which is why so many other languages today look like C: Java, C#, JavaScript, Rust, etc. However, C's syntax for declarations can be very tortuous. So much so that people have written programs to decode C declarations into something human-readable. See: cdecl.org/ Hence I tend to agree that the "C declarative syntax" was an experiment that failed.
Interesting that the guy said they wrote the whole program first, and then rewrote it in OOP. One of the biggest problems with OOP is that you have to go full philosophical about what constitutes and separates objects. When you get it wrong, you have to shift things around mid-development, and then you end up with very esoteric objects (handlers). If you already have the entire layout of the program, the objects are way easier to identify.
CPUs are not functional. They are procedural. If you need to make abstractions, write some declarative code with some good documentation. Otherwise, write well structured code where functions cut across data structures (not the other way around like in OOP).
Slight correction at 52:05: OO was very much a product of academia: medium.com/javascript-scene/the-forgotten-history-of-oop-88d71b9b2d9f
@@jonohiggs I hoped for the same thing, but now I'm just gonna use FP/DoD/procedural/declarative/reactive code by default and move to imperative/OO code when the default mode is inconvenient.
Excellent talk and historical perspective. Richard is very skilled at presenting a lot of information in a short amount of time without being overly dense. I have one small nitpick, though. 50:34 You _might_ want to discuss Erlang in the next iteration of this talk. It was developed during this time as a functional programming language and *deployed* into production Ericsson phone switches. The derivative language Elixir has spawned more interest in the underlying Erlang during this FP wave we’re riding. If you enjoy piping data through functions in Elm, you’ll love designing systems in Elixir and Erlang.
I guess he won't. Erlang and Elixir both rely on actors, which are basically the same thing Alan Kay described as OO. Moreover, Erlang and Elixir have no static type system. He's more of a Haskell guy than an Erlang guy.
I hadn't thought about this point until you mentioned it, but yes, you are right. Actually, I got most of my motivation to explore functional programming in greater detail from both Elm and Elixir. More than Elm, Elixir propelled my ability to think in FP; I was then able to look back on Elm with even more appreciation and understanding. It's true that Elixir, and therefore Erlang, are heavily oriented toward distributed systems, but that doesn't really detract from their functional approach.
Agreed; it's a clear and thoughtful historical overview that deftly manages to avoid getting into too much detail. However, while watching it, I had several "yeah, but..." reactions. So, here goes...

As Joe Armstrong contended, Erlang is really quite an OO-capable (i.e., hybrid) language. However, the "objects" are lightweight processes (aka actors), and they are used at a level of modularity _above_ the FP level. So the programmer can employ them à la carte and pay the cost in added complexity only at that point. FWIW, most of the async stuff happens at this level, though Elixir adds stream-based pipelines via macros.

The question of garbage collection is a bit nuanced. If the program has hard performance constraints, GC can be a non-starter. However, in many cases the program simply has to run "smoothly". Because the Erlang VM does GC separately for each process, "stop the world" behavior is generally absent.

One powerful aspect of the FP style is that, because global state is generally avoided, functions can be examined and considered in isolation. You just have gozintas (arguments), gozoutas (return values), and logic (code) to worry about. Also, because shared mutable state is avoided, many concurrency-related issues (e.g., colliding updates) go away.
Interesting talk. I agree that functional programming may be the way of the future, but the power of familiarity should not be underestimated. I've tried to transition from Java to Kotlin (which supports FP), but when push comes to shove and I'm in a hurry to produce working code, I still find it much more productive to use Java. I think familiarity explains the meteoric rise of C++ in the '90s too (C++ is C with stuff added, and still compatible with C). As someone who earns a living from software development, it's not always about which language is best from a technical standpoint; it's usually about what you can be the most productive in.
Exactly. We're paid to deliver solutions to business problems, and the company president is exceedingly unlikely to dig into our code to see how we did it. He just wants his problems solved in a fashion he can rely on.
@@jplflyer and an FP dev can do your job 10 times faster with 100x fewer bugs, but it requires knowledge. Using C++ instead of Haskell for a general-purpose app is like building a skyscraper out of toothpicks because you can't be bothered to learn how to actually build skyscrapers but still want to call yourself an engineer.
I hate the purity tests. Use a multi-paradigm language (e.g. Common Lisp) and avoid tortured problem reformulations. Things fall apart when programmers have to be in service of a paradigm rather than a problem.
I love CL, it's a top-tier language. However, there's a problem these languages cannot solve: certain paradigms and features are not backportable. If you want a lazy language, or a statically typed language, or a language that tracks effects or uses a different kind of memory management, you're out of luck. Or at the very least you can't use the libraries, because they were created under a different set of assumptions.
The next programming paradigm is going to be "task-based": tasks are little units of work that can contain "waiting" (for other tasks, signals, events, I/O) without doing a system call that blocks the current thread. In other words, tasks are like user-space threads, but without the overhead of context switching and syscalls.
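For what it's worth, Python's asyncio already looks a lot like this: cooperative tasks that can "wait" without blocking an OS thread. A minimal sketch (the task names and delays are made up):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # "Waiting" here suspends only this task, not an OS thread;
    # the event loop runs other ready tasks in the meantime.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list:
    # Spawn two user-space tasks; no kernel-level context switch needed
    # to hop between them, just the event loop resuming coroutines.
    t1 = asyncio.create_task(fetch("a", 0.01))
    t2 = asyncio.create_task(fetch("b", 0.01))
    return [await t1, await t2]

results = asyncio.run(main())
print(results)  # ['a done', 'b done']
```

Both tasks sleep concurrently, so the whole thing takes roughly one delay rather than two, which is the payoff of waiting without blocking.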
100% believe most languages are already functional and will continue to add more FP features. What I do NOT think will happen is ending up at PURELY functional, because that brings in far more accidental complexity than it actually solves, and effect systems in purely functional languages are not really a fully settled matter.
You have no idea how confusing OOP was for me, even in high school. Teachers and YouTube videos were presenting OOP as the solution to every problem. OOP has created more frustration than solutions. I'm glad I'm not alone in noticing the importance of modularity over encapsulation.
What's so confusing about OOP for you? Here's how I view OOP: you have a data structure with functions attached to manipulate or query that data. Then you take a step further and realize that you can make objects out of other objects: nest one data structure inside another, and so on. And then you take another step further and realize that you don't care about a data structure's internals; you'd like some common way to "talk" to the data. So instead of tightly coupling one data structure to one implementation, you loosely couple many data structures to many interfaces. And now you don't care whether you're passed a LinkedList or an ArrayList: both are lists, both can be iterated over, both can be searched, and so on.
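A tiny Python sketch of that progression (the names, like `Playlist`, are made up for illustration): first data plus attached functions, then a function that depends only on an iterable "interface" rather than on one concrete structure:

```python
class Playlist:
    """A data structure (a list of track names) with functions attached."""
    def __init__(self, tracks):
        self._tracks = list(tracks)   # internals hidden behind methods

    def add(self, track):
        self._tracks.append(track)

    def contains(self, track):
        return track in self._tracks

# Loose coupling: this function only needs "something iterable",
# not one concrete data structure -- like coding to List
# rather than to ArrayList or LinkedList.
def count_long_names(tracks):
    return sum(1 for t in tracks if len(t) > 5)

p = Playlist(["intro", "interlude"])
p.add("outro")
assert p.contains("outro")
print(count_long_names(["intro", "interlude", "outro"]))  # 1
```

`count_long_names` works equally well on a `Playlist`'s internal list, a tuple, or a generator, which is the "many data structures, many interfaces" point.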
@@randomname3566 objects are IMHO the hardest concept to grasp in the whole of programming (maybe equal to pointers). It's because they seem so obvious that learning about them is not rewarding and therefore hard to remember. That's why I always had problems with objects: I could never appreciate them because they bored me, and that made me really uncreative when I had to use them. But apart from that it's easy: you make an object that has some variables as properties, and functions that modify those properties and sometimes return something to the outside world.
To the interesting list @33:08 I would add two things:
1. Polymorphism. But it turns out that both static and dynamic polymorphism are possible in non-OOP languages. Rust offers static polymorphism through generics and dynamic polymorphism through trait objects. And Rust's non-OOP polymorphism is arguably better in that you can safely make somebody else's type implement YOUR interface.
2. object.method() syntax, which improves IDE completion and allows for more consistent naming. But it turns out that you can have that in a non-OOP language, too. Rust has it, for example.
This was interesting, but... I have to disagree with some of the most basic premises. Maybe that disagreement is based on a lack of supporting conversation that addresses my thoughts, and I could come around.

Let's start with the idea that OO didn't solve complexity. Okay, that's fair. FP doesn't, either. Both are tools that don't *solve* complexity, but they both make it possible to address more complicated problems. We are now solving significantly more complicated problems than we were back in 1980. OO combined with modularity has made it possible to not even think about huge parts of what we do. Back in 1980, the state of the art in display was using an ncurses library on a VT-101 terminal. How the world has changed. To say OO "didn't solve the complexity problem" is basically moving the goalposts by about 6 football fields.

I also am not on board with some of the other "conclusions" some people have come to regarding things like inheritance. Inheritance is a tool. Composition is also a tool. The problem is when you use the wrong tool for the problem space.

There was a lot of interesting perspective in this talk. I'm not sure I agree with some of the broad arguments, but still, interesting.
I totally agree with you: what OO allows is to zoom in and out of your problems, and focus on the level of granularity your cognitive ability allows you to handle.
I've always felt the problem with OO was the "Oriented" part. Objects, and the features that come along with them (encapsulation, inheritance, etc), are great things that are very useful in a lot of cases. It's very convenient to take a piece of your code and essentially turn it into its own self-contained program. There are a lot of benefits to that. However, only a fool would organize their code with the whole goal being to divide it into as many little programs as possible. That has pretty thoroughly been proven to be bad design. There's a reason nobody uses Smalltalk today. It was a great idea that resulted in a lot of useful tools, but as a style it's bad. Having objects is good. Focusing on objects is bad.
@@jeffwells641 I never programmed Smalltalk, so I can't comment on that. Early in my OO life, while I wasn't using the term module, at times I used objects like I think we're supposed to use modules. Oh, I used them as objects, too, but I found great value in organizing my code into objects. It did a great job bundling things together and put a namespace on them -- long before C++ had namespaces. Objects are great when we really are dealing with objects. I do a lot of SQL database work, and I prefer to model my objects 1-to-1 with my tables. I find it works great. But real world programmers (like I suspect you are, and I am) shouldn't try to be purists, IMHO, and I think that's kind of what you said, too. We should take the pragmatic approach to solving problems, to doing our jobs, and that means the right tool for the job. Objects aren't our only tool. In other words -- I agree with you.
@Dirk Knight Were you doing network programming in the '60s? I wasn't, but I was born in 1962. Were you working with windows and complex user interfaces? How much multi-threading? Were you incorporating 400 tools written all around the globe in multiple languages? Dude, by the early 1980s the standard for computing was the DEC VAX, with a clock speed of a whopping 1 MHz. Are we really not doing anything at all on our multi-core machines with clock speeds 4,000 times faster than that?
It's an interesting way of turning the thinking around, but there is an egregious error in representing that C with Classes had a full OO implementation. Several OO-specific features were added over the first few years of C++ which IMO definitely contributed to its success:
* 1982: virtual functions and operator overloading (ignoring the non-OO-specific added features)
* 1989: multiple inheritance, abstract classes, static member functions, protected members
I don't think one can make a clear case that OO was not the cause of C++'s popularity, as the OO features in C with Classes and C++ differ too much.
How is operator overloading an object-oriented feature? Many object oriented languages don't have it (Java), and Haskell can achieve something similar with typeclasses. Besides, operator overloading can be accomplished in C++ using free functions...
I agree. And just from looking at Wikipedia, I don't see many non-OOP features that he claims C++ had that C with Classes didn't have. It seems to me that most of the changes added in C++ that weren't in C with Classes were additional OOP features, but he doesn't talk about this, so it's a very flaky argument.
Fantastic! I want more. I want the experience of a developer from each language summarized into videos: what works and what doesn't, when something works and when it fails, what caused a rewrite, where performance was lost. I couldn't leave the room without pausing the video because I didn't want to miss anything. And because of this video I have been pushed past a stuck point, and I now have a few ideas that I will be working on in the months ahead. Thank you!
It is absurd how many software people use non sequiturs in their branding. "Data-oriented design": that sounds fine, design oriented towards minimizing data. "It means designing software around the hardware so it runs faster": that would be either cache-oriented or hardware-oriented.
It's also the worst idea possible and completely goes against computer science in general. The only people who would need such an idea are people writing software for an extremely specific piece of hardware who need it to be the fastest thing humanly possible. That's not 99.999% of people, including OS designers.
From today's point in time, the C style looks like a failed experiment. But we have to remember that, at the time, the most popular languages were Ada, Pascal, Delphi, BASIC, Fortran, COBOL, and Prolog. C was still experimental in the '80s, and that status only started to vanish with Visual C++, for which machines were still too slow. But something happened at that time that gave C a second chance: Linux.
9:20 - We even have "`Considered Harmful` considered harmful", so there's that. One historical note: according to ESR's Jargon File, while the article itself was definitely written by Dijkstra, the title was apparently supplied by Niklaus Wirth.

25:35 - I think there's the crux of my minor rumbling disagreement with "6/10 languages are C++ or C++ descendants", whereas I'd be totally on board with "6/10 languages are C or C descendants" (and I'd probably argue the number is higher). I really don't consider Java to be a descendant of C++ but a descendant of C and Smalltalk (all the C++-ish bits come from Smalltalk). Also, while Objective-C is different, it's still a little similar because it started life as a C pre-processor. Funnily enough, it has much the same ancestry as Java but loved Smalltalk a little more than C (which is fair... Smalltalk was pretty interesting). To give Stroustrup his due, he was looking at the "C experiment" from a different perspective than basically everyone else: C didn't solve the "program organisation" problem, therefore it failed. A side note: it's funny that Swift is listed as being C-like (not inappropriately, I'll add) because one tag line for Swift was "Objective-C without the C".

36:16 - But a huge part of the reason distributed systems are so frightening is the tools that are available to us. Joe Armstrong wasn't afraid of distributed systems programming because his toolset was designed for building reliable distributed systems. I also think Alan Kay's world has, in some ways, come to pass, since so much software these days is microservices all the way down. The key feature that took much of the industry decades to appreciate was isolation (not without reason... serious performance dragons lie here). If I am forced to assume that the other "object" is on a different computer, then there's nothing I can do to alter or inspect its details; I HAVE to use its provided interface. In C++ (or Java or C# or...) it's difficult to enforce that outside of running it in a different process.

44:25 - Erlang, a language built explicitly for building distributed systems, is a functional language. Joe Armstrong (one of the creators) had as part of his thesis that share-nothing immutability was a necessary precondition for a reliable distributed system. I don't think there's as much daylight between "build it like a distributed system" and "functional programming" as you think there is. Having said that, I agree that our implementations of OO and FP are diametrically opposed to one another, but to me that says more that our implementations of OO fail at "build it like a distributed system".

49:20 - In fairness, I think this is as much a sign of the hardware at the time as anything. The LGP-30, with its 4096-word (about 15 KB in modern parlance) drum memory, was state of the art at the time. When swapping out parts of the operating system to get access to enough memory is a reasonable strategy, GCs are a waste of time, because you're going to need to manually massage your data into place anyway. Garbage collection became a reasonable general strategy when the amount of memory that could reasonably be expected to be available went through the roof (by contrast, a modern server machine with *15 GB* would be considered fairly small). This is still somewhat the case in the embedded and (in some cases) mobile spaces. These days, of course, the common complaint about GC is latency spikes (see blog.discord.com/why-discord-is-switching-from-go-to-rust-a190bbca2b1f ).
Saying "functional style is about avoiding mutation and side effects" is like saying flying is about avoiding touching the ground while moving: technically true, but it doesn't really help with understanding. What could help is this: you need to decompose the task into functions that (1) experience the outer world only through their input parameters and (2) influence the outer world only through their output values. It's good because, when writing such a function, you always know where to start (the input) and where you need to get to (the output). Otherwise, this is a very valuable video emphasising many important thoughts that are not mentioned in other FP-advocate talks. Thanks for creating and sharing!
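A minimal Python illustration of rules (1) and (2) (the function names are hypothetical):

```python
# Impure: experiences and influences the outer world
# through state that is not in its parameters or return value.
total = 0
def add_to_total(x):
    global total
    total += x        # hidden input AND hidden output: the global
    return total

# Pure: everything it "experiences" arrives as arguments,
# everything it "does" leaves as the return value.
def add(running_total, x):
    return running_total + x

print(add(10, 5))  # 15
print(add(10, 5))  # 15 -- same inputs, same output, every time
```

With the pure version you know exactly where to start (the two arguments) and where to end (the returned sum); with the impure one you also have to reason about who else touched `total` and when.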
Richard, you really need to take a deep dive into Elixir and OTP, because together they serve as a prime example of how to do objects and message passing the way Alan Kay really intended.

Of course, you are correct that distributed systems are more complex, but sometimes they are also necessary. However, many OTP programs don't actually run on distributed machines; rather, the OTP library and the BEAM virtual machine allow the programmer to explicitly introduce concurrency into their code in a safe, reliable way via extremely lightweight processes, each with its own message queue (like how Clojure often handles side effects). Concurrency is how you achieve efficiency and high throughput in a network application. OTP and the BEAM also allow you to distribute those processes when necessary.

Your talk was good, but I think your "side effects considered harmful" quip completely misses the point that we run software purely for the side effects that doing so produces. Managing side effects is useful for reducing the complexity of writing the software. Elm's managed side effects are amazing, but the Elixir/OTP way of managing side effects is also amazing, and arguably more powerful because it can be distributed reliably.

You should read "Functional Web Development with Elixir, OTP and Phoenix" by Lance Halvorsen. That book is an excellent demonstration of managing side effects and state via functional programming in a distributed web app. Another great book is Sasa Juric's "Elixir in Action" (published by Manning). I think after a deep dive into Elixir/OTP (especially via Lance's book), you won't be so quick to dismiss Alan Kay's original vision of OOP as a bad idea. I do totally agree with you, though, that C++ kind of hijacked the term OOP to mean something other than what Alan Kay originally intended.
Elixir has been on NoRedInk's radar since 2016 dev.to/rtfeldman/comment/23a Ultimately Haskell won out: "From Rails to Elm and Haskell" th-cam.com/video/5CYeZ2kEiOI/w-d-xo.html Presumably due to static typing.
Much of the problem with OOP has to do with which THREAD a method runs on. Erlang got it right, and C++ and its descendants did not. In C++, you try to isolate state in a struct, but concurrent callers are a mosh pit of modifications to its internal state: the parameters to a method are not immutable, and the methods concurrently mutate the struct. In Erlang, the boundary of an object is basically a queue. Each "object" serially ingests immutable arguments from its queue, and each object runs concurrently. The arguments are the messages. It's actually a reasonable "distributed system", because the messages are immutable... like packets in flight.
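For anyone who hasn't used Erlang, here's a rough Python sketch of that "the object's boundary is a queue" idea, using a thread plus a Queue to stand in for an Erlang process (all names are made up, and real Erlang processes are far cheaper than OS threads):

```python
import threading
import queue

class Counter:
    """An 'object' whose boundary is a queue: exactly one thread
    serially ingests messages, so callers never touch its state."""
    def __init__(self):
        self._inbox = queue.Queue()
        self._value = 0
        self._done = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self._inbox.get()   # messages arrive one at a time
            if msg == "stop":
                self._done.set()
                return
            self._value += msg        # only this thread mutates state

    def send(self, msg):
        self._inbox.put(msg)          # like an immutable packet in flight

    def result(self):
        self.send("stop")
        self._done.wait()
        return self._value

c = Counter()
for _ in range(100):
    c.send(1)                         # many concurrent senders would be fine
print(c.result())                     # 100
```

Because all mutation happens in one place, serially, there is no mosh pit: a hundred callers sending at once still can't corrupt `_value`.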
Ah, Erlang! I remember when I had the idea of using an Erlang-based CMS to build sites. The request tuple, which was passed around anywhere you needed to process a request, was a screen and a half long, since you don't have any mechanism to pass data down a call stack other than as a parameter. You need a new parameter? Either change all the functions in the call stack to thread the new parameter from where you read it to where you need it, or just ship one giant tuple around. I was just happy I didn't have to mess with monad transformers and lenses.
I think it's important to separate code structure from runtime structure. Organizing code like a distributed system makes it unnecessarily complex, but it is actually very beneficial to design the runtime with isolated processes that only communicate via message passing. This is essentially how Erlang/Elixir achieve the level of fault tolerance and concurrency they're famous for.
Well, regarding Scala, F#, and OCaml and their use of OO/FP features: we need to remember that both Scala and F# live in ecosystems where the large majority of libraries have OO interfaces (Java, C#), so I wouldn't be surprised if they encounter the need to use OO features more often than OCaml programmers do.
You say we have a culture of not "re-inventing the wheel" in programming? That is ALL anyone does these days is re-invent the wheel! How many gillion javascript libraries and frameworks are there that do the same thing in a different way?
Very engaging presentation; I'm looking forward to a sequel. From my personal observations, it looks like we are currently experiencing a paradigm shift towards functional reactive programming (FRP). What's missing in functional programming is the ability to efficiently organize (and dynamically reorganize) asynchronous processing structures. In FRP this issue has been addressed by dynamically binding functionals and observables into directed acyclic graphs, which eventually allows meta-programming in terms of dynamic graph optimization.
1. C++ is not the successor of C; that word implies that C has been obsoleted or has "ended". C++ is a rough superset of C, which you may or may not want to use. There are plenty of people writing and running C code (possibly more than C++).
2. *Most* of the languages on that slide have a "goto" statement. The prevalence of "goto" wasn't because no one had previously thought of using code blocks in high-level languages; jumps are in almost every language because the jump is a fundamental instruction in almost every processor architecture. You talk about goto with such disdain, but there are many valid reasons to use goto in modern code, such as breaking out of several nested "for" or "while" loops.
In the '70s there was Pascal, originally developed as a teaching language, which had functions, blocks, etc. The upgrade to Pascal was Modula-2, by the same guy (N. Wirth), which included threads/tasks. Then in the early '80s there was Ada, which had modules, namespaces, and all the goodness we now take for granted (yeah, it died due to licensing costs to the DoD; I used it for a few years and really liked it). Pascal was an excellent language and still lives on today, as do COBOL, Fortran, and others. What I'm disappointed with these days is the loss of domain-specific languages like Prolog for building rule engines. We now have tens to a hundred general-purpose languages, when in the '80s and '90s we had a lot of great languages for specific purposes. Forth I loved: fast, compact, great for embedded systems (PostScript is based on Forth). APL for doing complex business math, 4GLs for doing the DB layer and UI. I like those domain-specific languages; they usually solved a problem very quickly. Anyway, my 2c worth.
Look into Haskell. One of its features is the ability to develop a DSL for a problem domain. I created a DSL for a high-performance scheduling algorithm, and it worked great.
Pure functional programming brings a lot of complexity for no reason. Something that can be described as a simple repetition/loop now needs to be described in terms of other, smaller blocks that simulate the same thing. This can hurt performance because of unnecessary copies between functions, increase the complexity of algorithms through chaining, etc.
It’s not for no reason. The reason is that you’ve decided to make the trade off of using pure functions at the expense of procedural shortcuts. In my experience, almost all loops can be concisely expressed as map/reduce iteration. It’s not hard to understand, and the compiler is able to take the abstraction and produce native instructions just as fast as a procedural loop.
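For example, here's the same computation written both ways in Python (the numbers are arbitrary): a procedural loop with mutation, and an equivalent map/filter/reduce pipeline with none.

```python
from functools import reduce

prices = [3, 10, 7]

# Procedural loop: accumulate by mutating a variable.
total = 0
for p in prices:
    if p > 5:
        total += p * 2

# Same computation as a filter -> map -> reduce pipeline, no mutation:
total_fp = reduce(lambda acc, x: acc + x,                 # fold the sum
                  map(lambda p: p * 2,                    # double each
                      filter(lambda p: p > 5, prices)),   # keep > 5
                  0)

assert total == total_fp == 34
```

Both express "double every price over 5 and sum them"; whether the pipeline form is clearer is exactly the trade-off being argued about here.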
@@lordsharshabeel You're saying the reason for pure functions is using pure functions; I don't think I understand that. I also don't think you can express loops "concisely": you just end up fighting common human sense and how computers work. Computer hardware performs operations in a stateful, imperative way; you can't write assembly code in a purely functional way! Compilers usually cannot optimize this type of programming, which makes it unsuitable for performance-critical software. For example, check the performance issues of C++ pipelines, which result in unnecessary copies. Recursion results in excessive memory usage. Working with any type of I/O or event system is a nightmare. Etc., etc.
@@nivo6379 On a more philosophical note: a truly pure function does not have side-effects. Composing pure functions yields another pure function. This means any pure functional program cannot have side-effects and therefore does nothing. *Everything* a program that does anything involves side effects and thus cannot, by definition, be created in a pure functional way 😉
@@totalermist, I intuited the same, initially. The distinction is "side effect" versus "managed effect". It means not abstracting a logging call or database write out into the belly of your program, but rather working to keep those sorts of things as close to the surface as possible. It means not mixing such an action with the transforms required to prepare the data for the effect. It means ensuring that any such effect is as modular, examinable, and replaceable as can be.
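The "managed effect" idea above — keep effects at the surface, never mixed into the transforms — is often called "functional core, imperative shell". A minimal Python sketch (function names are hypothetical, chosen for illustration):

```python
# Pure core: only transforms data, performs no I/O, trivially testable
def format_report(records):
    lines = [f"{name}: {score}" for name, score in sorted(records)]
    return "\n".join(lines)

# Thin imperative shell: the only place an effect (printing) happens.
# It is modular, examinable, and replaceable, as the comment suggests.
def print_report(records):
    print(format_report(records))

# The pure part can be checked without touching any effect
assert format_report([("bob", 2), ("alice", 1)]) == "alice: 1\nbob: 2"
```

The effect is not eliminated; it is pushed to the edge so the bulk of the program stays pure.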
@@nivo6379, you say "you can't write assembly code in a purely functional way", and I say: 1. you can, and 2. it would be the same garble of conditional jumps that imperative looping compiles down to. The biggest reason we moved past writing in assembly is interoperability. The biggest reason we moved past thin abstractions like C is ergonomics. Textbook OOP is fantastically ergonomic; real-world OOP is a hideous mess of gotchas and ossifications, because we have collectively rejected the self-restrictions needed to leverage the modularity of OOP. One answer is to go back to thin abstractions, as with Go, or to carefully constrained but deliciously composable abstractions, as with Rust. We use distributed systems at my job (Kubernetes) as a means of enforcing restrictions on ourselves as to what can and cannot be directly coupled, because we failed OOP. The alternative recommended by Feldman is to enforce the restrictions necessary to achieve the modularity needed to avoid being strangled by our code. One can do that in an OOP paradigm. FP does it wonderfully, as an inherent property of its structure.
Actually, I disagree with at least one thing: "be more like distributed systems" is, I think, great advice, from the ideas around microservices to Erlang's runtime system. Great presentation still!
Love the format of the video. Screen space is used for the talk, not the talker, or the room he's in, or some permanent header about the meeting. Screen space used for what we need to see. +1
This is a good prediction (and I hope the OOP acolytes have a painless epiphany: "OOP doesn't fit my data model because it's perfect; my training and mind fit the problem space into an OOP shape because that is what I'm used to"). "OOP was an interesting experiment. It's well past time for something better." Is it an Elm advert? Yup, it's an Elm advert ;-)
There are a lot of straw men in this argument. For example, he talks about how Alan Kay saw OO as composing systems from smaller things that themselves looked like systems, connected by a network, and then says "that sounds like distributed programming". He then argues that distributed programming is notoriously hard, so it's a bad way to structure programs, without adequately demonstrating that this was Kay's original viewpoint (clue: it wasn't).
Once functional programming is firmly entrenched, it's going to take Brian Will doing a "Functional Programming is an Embarrassment" video to point out to us where we all went wrong.
I think a lot of people associate pure functional programming with pain because of the category-theory terminology in Haskell, Monad Transformers, and having to change a huge piece of code to thread variables through the call stack because you didn't want to use monads. Newer languages like Koka and Eff have algebraic effects, which are much easier to understand and use than Monad Transformers, and which by default can tell the difference between total functions, functions that can throw exceptions, etc. Also, pure functional programming is just a special case of relational and declarative programming, and the "single return value" restriction of functional languages is worked around by using monads, which embed other languages into the functional one.
@@aoeu256 I think it just creates work to transform an algorithm that fundamentally runs on an imperative, state-transforming instruction set with memory into a functional model. Converting things to a mess of recursive functions with state transformation threaded through the call stack is, in the words of a colleague, 'an unnatural act'. My algorithm development work (entropy analysis and statistical data processing, mostly) relies on no libraries; it's just bare-metal algorithms to do whatever needs doing, usually with speed as a goal. Some things map naturally to functional structures, but others simply do not. In contrast, digital logic design is fundamentally functional, and 3D model expression is too. I took to functional HDLs like a duck to water; functional HDLs map perfectly to the problem space. OpenSCAD, a functional 3D-model description language, similarly maps well to its problem space. Functional programming of computer algorithms maps much less well. Man cannot live on recursive factorials alone.
@@davidjohnston4240 This is complete nonsense, and hilarious. All of these algorithms come from computer science, a field that existed before computers or the von Neumann architecture ever did, and a field that will exist after that architecture as well. What you're doing is translating math into imperative nonsense to talk to a computer, instead of writing math and having the computer translate it for you, which is the point of their existence. They're meant to help the engineer, not the other way around. Your statement is illogical, because by your logic you should just be coding with electrons, since even the opcodes in assembly aren't what actually runs on the CPU. Also, you have no idea what you're talking about with recursion. Nobody even writes recursive functions directly; those are explicit iterators, like four layers of abstraction below anything sane. This hatred of FP always comes from people who barely understand what they're talking about, and they love to talk about low-level details that don't actually matter to anybody.
Pure functions are important because they have equality relations among themselves which can be used in the program itself allowing program transformation without confusion, therefore controlling complexity.
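That "equality relation" is referential transparency: a pure call can always be replaced by its value, which is what licenses mechanical program transformations like caching. A tiny Python sketch of the idea (the function is a made-up example):

```python
from functools import lru_cache

def square(x):          # pure: the result depends only on x
    return x * x

# Referential transparency: any call can be replaced by its value
# without changing the program's meaning...
assert square(3) + square(3) == 9 + 9

# ...which is why a transformation like memoization is always safe here
cached_square = lru_cache(maxsize=None)(square)
assert cached_square(3) == square(3)
```

With a function that mutated shared state, neither substitution nor caching would be safe — that is the complexity-control point.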
Functional programming is great, but this specific argument for it seems flawed from the start. There's a problem with the premise that "OOP is flawed because programming distributed systems is complex, and OOP is like distributed systems", which is that everything that makes distributed systems difficult (concurrent programming, lossy communication channels, security, discovery, etc.) is not inherently present in an OOP system. Those difficulties arise in the measure that a system is distributed, not in the measure that a system is object-oriented. I'm guessing Alan Kay was using the distributed systems analogy in terms of the similarity in mental models of computation, not in terms of how they're implemented in practice. In both distributed systems and in OOP you have to divide your domain model into separate entities that each have their own responsibilities and whose implementations will be a "black box" to others. Then, you make the entities communicate to each other via a messaging API. None of the difficulties in distributed systems programming are present in this similarity... they arise when you go further and start putting those entities on separate processes or machines, which has nothing to do with OOP. On the other hand, if your single-threaded program grows and requires you to pull a component out into its own separate process (say for security or scalability reasons), it being already written in an object-oriented way should make it much easier to do so, since its interface and the way it's encapsulated is already defined. So the similarity between distributed systems and OOP ends up being a strength in the end...
@Dirk Knight my point is that OOP does not solve distributed systems issues (aside from domain modeling like I said at the end, which granted is a very easy part of a distributed system), and that distributed systems issues are not present in OOP. They’re orthogonal concerns.
@16:25 I think what Dan meant when he said "we had the OPPORTUNITY..." is that rebuilding the application again was something easily done with OOP and not easily done in applications written in previous paradigms, making "experimental" a good thing, not a bad thing.
I suppose it's always interesting to hear a different spin on things, but unfortunately the logic doesn't come together here. Arguments used to bash OO don't necessarily support Functional, for example. There also seem to be a lot of little factual errors that help smooth the story. C with classes seems to not have had virtual methods, which is perhaps the number one defining thing of OO. The main point at the end is that we want to have more immutability and to control effects, which I whole-heartedly agree with. That's not the same as functional programming. Some functional programming languages have mutable global data. So the paradigm-shift might better be called immutability-oriented or effects-oriented, then I think we will still have little mini-computers managing the state we do need.
@@jeffwells641 Note that the original OOP language, Smalltalk, had blocks (sort of like lambdas), could handle messages without the methods being implemented by forwarding the message to other objects, and everything was an object, including if and loops; you even edited your application while it was still running. What C++, Java, and C# call OOP is a mere shadow of what can be, and is mostly over-engineered class-based spaghetti code due to those languages not having closures early on.
Wholeheartedly agree with this. We got composition over inheritance early on in the talk, which wasn't explored, and then lots of (Small)talk, glossing over what the major OO languages actually provide and why they have been so successful. We didn't even hear of Rust, which puts the issue of state and mutability front and centre. These purity tests can be tiresome.
Instead of saying the best language has these features, I think we have come to a point where we should say: "If you are working in this domain and your task is to develop this or that, with these people in this existing environment, then the most interesting languages, development environments and archiving systems are these and for these reasons"
Functional style is great but sometimes you need state for performance and sometimes you need a distributed system. I'm betting on structured programming where needed with functional style where appropriate. (IMO, OO and modular aren't really separate paradigms from structured, just minor tweaks on it).
Well, one of the better-known languages that is purpose-built for distributed systems, Erlang (and its syntax revamp, Elixir), is a functional programming language.
"Functional programming makes our programs less like distributed systems"? WRONG. Here's an example of a functional-style 'pipeline' in C++ ('|>' is a pipeline operator that passes the output of one function into another, like a flow chart): auto e = str |> views::reverse() |> ranges::find_if(isalpha).base(); If one is prepared not to take things too literally, this is very, very similar to what Alan Kay was talking about when he said it's a "bit like having thousands of computers all hooked together by a very fast network", because each function is like a highly modular, self-contained 'computer' that communicates with the others over a 'high-speed network' (the pipeline). Conceptually, Alan Kay was very close to the modern 'Reactive/Functional' style. Apart from this niggle, I really enjoyed the talk, BTW.
I don't know if this is what he meant, but this is my take on it: purity (referential transparency) lessens the unreliability of either distributed systems or effectful function calls. Most distributed systems require defensive programming, while pure functional programming doesn't. That said, the "pure" qualifier is important, and Kay's "high-speed network" could be seen as a metaphor for infallible/pure function calls.
The example you are showing is just a syntactic sugar for function composition which can be mechanically rewritten into multiple function calls with intermediate variables holding the function outputs. This looks nothing like a distributed systems graph. It’s completely opposite: statically sound types and linear execution graph. None of these functions holds a mutable state either contrary to hiding it within an object. To me the original OOP description is very close to what actor systems are.
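The mechanical rewrite described above can be shown in Python, using a task similar to the C++ snippet (find the last alphabetic character of a string; the variable names are invented for the sketch):

```python
text = "abc123"

# Pipeline style: stages composed in one expression
result_piped = next(filter(str.isalpha, reversed(text)))

# Mechanically rewritten with intermediate variables: same meaning,
# same linear execution order -- nothing like a distributed graph
step1 = reversed(text)                # analogous to views::reverse()
step2 = filter(str.isalpha, step1)    # analogous to find_if(isalpha)
result_steps = next(step2)

assert result_piped == result_steps == "c"
```

Either spelling is statically analysable, single-threaded, and free of hidden mutable state, which is exactly the contrast with the actor-style reading of Kay's description.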
Agreed... Although I don't mind hearing a bit about the past to get into a certain arguing position, this talk kind of let me down in that there is a ton of history and then when it should have really built up to a real point, it just kind of fizzles out to a few slides about functional programming.
Not only that. When doing such a historical review, it is very easy to make it biased, omitting some things and emphasising others. Functional programming is nothing new; it was in Fortran and Algol from the earliest days, if only in limited form. Here is a prediction: if FP is not kept under control, the amount of energy expended to solve a given problem will grow problematic. Just as with OOP now: the return we get from Moore's Law is nearly sucked up by the complexity added to OSes and programs by using OOP in many situations.
I used Motorola's assembler for the 68000 quite a bit back in the early 80s. It had features for block-structured code, like IF and WHILE. And you could make macros that pushed arguments onto the stack and called some function using the JSR instruction. Of course the code was 68000-specific; IMHO portability was the main driver of the adoption of high-level programming languages.
A few facts "against" functional style as the next paradigm shift:
- Among machine-learning frameworks, PyTorch won over TF and Theano because it didn't assume immutable "code" structures and allowed interactive debugging and dynamic adjustment. JAX is trying to bring the functional style back, but it is still nowhere near as popular as PyTorch.
- Event-driven programming, which I think is one of the key concepts of a distributed system, proved to be extremely convenient: Qt and ROS are both amazing frameworks that are easy to learn, easy to use, and help structure programs.
- The distributed-network abstraction allows scaling to networks of heterogeneous computational resources, i.e. clusters.
How many languages have first-class side effects, semicolons, and statements by algebraic effects or monads though? How many languages use lenses instead of references to modify part of a subtree of a complex "object graph". How many languages can you have concurrency without race conditions? Anyway look at F*, Idris, and/or Koka to see the nice things that functional programming can do.
@@aoeu256 Languages without race conditions? Whenever you have a multi-user piece of software that has to write to a DB, you'll have race conditions, regardless of the language it was written in. As for the rest, it's not as if it were impossible to write monads or lenses in other languages; you are simply not forced to. Is that the point? Well, I might agree to some extent. I usually find FP-styled code easier to test, yet I would consider FP overkill when all I have to do is print a table by looping over an array. The fact that TCO is not popular among JS engines doesn't help.
It seems to me (as a novice programmer) that the objective of OO was to prevent side effects, and one of the biggest features of FP is that it prevents side effects.
What? "COBOL did not have language support for blocks"? Really? 7:47. It is funny to read this. A COBOL program is structured into 4 DIVISIONs, the divisions into SECTIONs, and the sections into paragraphs. And the #1 loop command is called PERFORM and can, well, perform thru sections or paragraphs of code.
This is what happens when people try to give "history" lessons but haven't checked enough original sources, and instead of presenting objective historical facts choose to tell a manipulated, partly fictional, biased version of history that somehow supports their agenda. Those who have read "1984" will remember that Smith worked in the Ministry of Truth, Records Department. Language designers in the sixties were all well aware of functions, types (to the extent understood at the time), partial application, etc. Still, they chose to do things in a structured, procedural, lexically scoped way. You may want to think hard about why that is before abandoning sequential programming. It has been said that any sufficiently advanced program will end up containing an implementation of something like Lisp; well, as seen in things like IO monads, the opposite may very likely also be true: any sufficiently useful function-oriented "program" has to include a sequential execution model, mutable memory and objects, an idea of time complexity, and I/O.
@@lhpl And decades ago a COBOL program could respond to events, using async functions. There was a section for them, named DECLARATIVE SECTION. :) in UPPERCASE at the time :)
As an old assembly language programmer, the assertion that there were no procedures or functions in assembler programs is just flat out ignorant. The FORTRAN II manual published in 1958 has calls and returns: that is, functions. FORTRAN did not force spaghetti code.
I wrote a huge program in Modula-2 to build a bitmap painting program for kids, called Flying Colors. It was a huge hit, but when I had to port it to Windows from Macintosh, I found that Windows didn't have some of the bitmap manipulations that the Apple OS had in its QuickDraw system, so I was forced to use assembler (in this case Microsoft Assembler) to write about 1% of the program. In MASM you have macros available that give you IF/THEN/ELSE, WHILE loops, etc., and in fact writing in assembler is quite easy and fairly high level. What makes assembler hard is that Intel had only 4 general-purpose registers, 2 of which are used by the multiply instruction, so you spend way too much time fussing with loading and saving registers. So MASM is not that productive overall. But you can do any paradigm you want in assembler, and with such a powerful macro processor you can do a lot of stuff. I think the real gain in a new language is that for some problem domains you can make it super convenient. I worked on a language called Beads (beadslang.com), which is designed for making web apps and mobile, and it is really easy to fit things into small spaces because it has a clever layout model that makes it easy to support a wide variety of output machines. No other language is as convenient for that problem set. That's the point of tools that are optimized for tasks. The question is: what is Elm's problem-set area where it shines?
@@edwarddejong8025 But you do realize that a compiler merely masks the fact that there are only a handful of hardware registers on the CPU. The x86 is very good at addressing memory indirectly. So I never found it to be a limitation. When you write a lot of assembler you create a library of functions that makes your life a lot easier. I never heard of beadslang. I'll check it out.
@@donjindra The convenience of having a compiler do all the register allocations for you is the reason hardly anyone programs in assembler nowadays. Further, people are using languages like Python and Javascript. where arrays are open-ended in language, which saves you the effort of managing memory explicitly. Overall the trend is to push more of the load of writing code into the computer's workload.
@@edwarddejong8025 I never said compilers and interpreters were not great tools. I'm merely suggesting that someone who programs in assembler over many years builds up a library of code that helps tremendously with tedious details. I mostly program in C and Python these days. But when I programmed in assembler I learned to be very productive over time. I could knock out complex programs easily.
@@donjindra There was a man who built a database called Panorama entirely in assembler, and it was lightning fast. Assembly is surprisingly productive, i agree, but nowadays it is much easier to work in a higher level language. I built my super high level language to build web apps and mobile, (www.beadslang.com) because i wanted something much more convenient for making graphical interactive products that are platform/os neutral. Always a tradeoff between speed/size/convenience/readability, and i consider readability/maintainability to be paramount, because labor costs are way more than computers costs. Computers are lightning fast, and dirt cheap. Just tonite I sold a perfectly working XP machine on craigslist for $20.
The reason you think OOP was not a solution to complexity is, imo, because there is no solution to complexity. No methodology will ever "solve" or eliminate complexity, because complexity will always expand to fill all available space. To put it another way, OOP very much DID solve complexity -- in the 90s. Now, the complexity programmers experience rests on the substrate of "solved" complexity such as existing windowing & messaging systems. When we have a very complex situation and we manage to simplify that situation, we then proceed to re-introduce more and newer complexities. Modern Windows is exponentially more complex than anything IBM or DEC had in the 70s, precisely because C++ simplified the patterns that existed in the 70s, and allowed newer systems to become more complex. Which is what will always happen. Programming seems as complicated today as it did in the 70s because it simply IS. And it probably always will be.
I led a functional programming rewrite of an application at a billion dollar company... I still believe in functional programming but the developer backlash was horrendous.
Which functional language? Have you looked at Koka? Its algebraic effect system is simple and unlike Monads they can be combined easily, and it lets you use side-effects easily without having to learn the Monads.
Actor-based programming could be put into correspondence with the general idea of message passing (object orientation, in that sense) and can also be put together in a distributed fashion (easily!?).
Is functional programming really a paradigm shift and does the style alone reduce complexity or rather the mental overhead needed to deal with complexity? To me it seems one major driver of complexity is the need to have interoperable modules each with their own set of dependencies that all have different update cycles.
It resolves one type of complexity -- multi-threaded access to the same data. That's a useful problem to solve. However, I think it throws out the baby with the bathwater, and I haven't jumped on board.
In my experience most of the accidental complexity comes from dependencies. Whenever there is shared mutable state, it introduces dependencies between the parts whose functionality depends on that state. Side-effects cause dependencies too. Functional programming can alleviate both of those.
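The dependency problem described above can be made concrete with a small Python sketch (the counter example is invented for illustration):

```python
# Coupled through shared mutable state: report() silently depends on
# whether and how often bump() was called -- a hidden dependency
counter = {"n": 0}

def bump():
    counter["n"] += 1

def report():
    return f"count = {counter['n']}"

# Pure alternative: the dependency is explicit in the signatures,
# so call order and data flow are visible at every call site
def bump_pure(n):
    return n + 1

def report_pure(n):
    return f"count = {n}"

assert report_pure(bump_pure(0)) == "count = 1"
```

In the pure version, nothing outside the argument list can influence a result, which is exactly how FP removes the implicit coupling that shared state introduces.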
Graph programming. Why do people dance around it? When C++ added future/promise, I was disappointed to find out it was trigger-once. Just form asynchronous nodes that form graphs. You can have functions at the edges and the nodes. Make a language that natively supports this metaphor. Great for UI, great for distributed computing, great for pretty much everything.
36:00 Distributed systems can be very easy and simple. WhatsApp was completely distributed, and the company maintained it with 35 (!) engineers when they were sold to Facebook for $19 billion. They had 450 million users at the time, and one year later they doubled that number with 50 people. Distributed programming is NOT HARD, and Alan Kay himself said that Erlang is the only true object-oriented language outside of Smalltalk. I guess this is what you mean by the 'simulation' part; it would have been helpful to mention that.
WhatsApp is Erlang, and Erlang is functional processes (separate, autonomous objects) talking to each other, which makes sense. Versus the Java style, which is overlapping, shared objects all the way down (down to a Boolean, in many cases).
Intriguing. To me, "thousands of computers connected by a fast network" is a _better_ description of functional programming, than it is of most OOP as practiced. Consider that a message sent over network is immutable. It is passed into the computational unit (the function) and whatever happens to it after that is irrelevant to the sender. Contrast with the way OOP is often used : Mutable objects passed as messages, which may then be modified by the receiver (which is the equivalent of getting a return message you weren't expecting).
"Contrast with the way OOP is often used : Mutable objects passed as messages." The solution: don't pass mutable objects if you don't want your objects to be mutated, which is more or less the concept built into Rust.
6:20 The code syntax and the execution order are two different things! I really like the implied conditional syntax: the conditions (negative, zero, positive) are implied by position. Sparse! Love it. But I'm not a fan of code that's not broken into separate functions. A language could conceivably combine an implied syntax with functions. Hypothetical example: IF (BOATS - 1) PROC1, PROC2, PROC3
Organising code is very dependent on the end usage. For general-purpose programming, low-level, close-to-the-metal code is not required, because it is more difficult to learn and implement. Programming languages should be as close to natural languages as possible, even at the expense of performance, because nowadays we have more computing power than we need. Of course, for special-purpose applications like games, automation, and real-time stuff, this does not hold.
My programs are UI-centric... and I really cannot imagine how to use functional languages to make UI. OOP is perfect for this, data-oriented design is somehow possible but not natural for UI... But functional programming? No way.
Is that because OpenGL and all the other basic graphics libraries/drivers are written in a procedural/OOP way, so when you write a program in FP style, at some point you want to use those libraries and you are stuck? I recommend checking out Conal Elliott's work on FRP. There are also some talks about why UI today is still done wrong. Even working with the DOM in JS is procedural, and that is not because you cannot do it in FP style, but because it was initially done that way. Today's libraries like React and Vue embrace a more FP style, and yet they are UI libraries. Check out the Elm language, which is specialized for web UI and is a pure functional language. There have been some good attempts to create more basic FP libraries for graphics, but I think the industry doesn't want to invest in this now, as we already have huge UI libraries that are very good and that received a lot of investment. The fact that we do not have FP-style UI libraries now doesn't mean they cannot be built; they can, but the industry doesn't want to invest in them now.
@@PisarencoGh Nope. It is because UI is all about state. While FP is all about pretending everything is stateless. Therefore it feels unnatural for UI.
@@vladimirkraus1438 Do you have any proof of that, or is it just your opinion? Because I can say the same thing about OOP and UI, and am I right just because I said it? What do graphics have to do with state, and where can FP not handle it? FP just separates values from transformers and does not hide the state; in FP the state is explicit. So where does it fail to meet UI requirements? What is the difference between object.setWidth(width) and newObject = setWidth(width, object)?
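The setWidth contrast above can be made concrete in Python with a frozen dataclass (the Box type is a made-up example):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Box:
    width: int
    height: int

old = Box(width=10, height=5)

# FP-style update: instead of mutating in place (object.setWidth(width)),
# return a new value with the change applied (newObject = setWidth(...))
new = replace(old, width=20)

assert old.width == 10          # the original is untouched
assert new == Box(20, 5)        # the "update" is a fresh value
```

The state is still there; it just lives in values you pass around rather than inside an object that changes underneath you.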
Classes != Objects. I think Kay would consider modules more OOP than classes. An alternate take: because we have no good mainstream way of describing distributed systems that doesn't blow up in your face, but loads of ways of modeling time-independent math concepts, there is a host of solutions we never get to build, because it's just too hard. It seems to me that we're good at implementing solutions in the mathematical domain, but bad at those in the human domain. The human domain is event-based, and time is a central concept.
Feldman: "if you can avoid a distributed system, you should". Microservices architects: "Hmm... are you sure?" In fact, multi-core CPUs and all those machine-learning Nvidia tensor cores and crypto-mining rigs share many of the same potential concurrency problems as distributed systems, not even speaking of things like Hadoop or Bitcoin. C'mon, the Internet is a highly distributed system! And yes, we need to create boundaries to deal with distributed systems, and that's what OOP is good at: abstracting away all those HTTP requests, external services, database tables accessed by many clients, etc. Any program that has a UI is part of a distributed system consisting of that program and the user.
I think that OOP will stick with us forever. It is still a concept that makes things easy to understand and maintain (if you don't overuse it). But I wish we could use functional concepts as well. I am a C++ programmer, and there is nothing more satisfying than implementing and running pure functions instead of calling a handful of non-const methods (member functions that implicitly change the state of the object). Not to mention that pure functions are easier to test. Still, I don't hype functional programming, because I can't wrap my head around how you would implement a game or a complex user interface the functional way. Does anyone have some good examples?
@S B I disagree with that, tbh. In my opinion it is good to have paradigms, like a vocabulary. Paradigms are guidelines that can help you organize your own code, and as long as you write your code alone, just do whatever works best for you. But when I want or have to work with other people on a project, we need to agree on certain core principles, code design and, for C++, even naming conventions. With the use and teaching of paradigms you have a well-known set of "how to write code" conventions for a given language or project. I am happy being a C++ programmer. It showed me how much a language can change and improve. C++ also allows you to use many paradigms, or to just screw them all and make spaghetti, and it still works. I learned C++ before C++11, and C++11 blew my mind (how they standardized features of other languages or 3rd-party libraries). There are, and probably always will be, things not to love about a language. C++ would be far from my first choice of programming language for a beginner. But C++ excels at gluing together various programming languages (and paradigms), with the downsides you mentioned. E.g. the C ecosystem has its own error handling, and since C++ is still not sure what error handling it actually wants, combining exceptions with C error return values can produce some really nasty code. I have been looking for something better than the current state of C++, like a pure C++11 (and higher) that iterates into a better language. So far Rust ticks many boxes and Zig looks promising (I haven't had time to play with it). But that doesn't mean I would throw all of C++ overboard just because there is a new thing in town.
How many programmers are in the “Why not both?” camp? I really prefer languages that support OO and FP style and usage, or I want multi-language environments. I want to decide which to use under different circumstances. I hope there is never a monolithic “shift” to another dominant paradigm. It’s a colossal waste of time to argue for one universal solution to all problems. Instead, let’s use that time to learn more styles and features from all major paradigms, so we can be better general problem solvers.
No matter what the paradigm, greater abstraction creates greater mental strain to understand what is happening and where things are coming from. The skill of the programmer is to organize and name things such that the program is easily understood. OO has great tools for this, it is amazingly easy to model tables and rows of a database as objects, it makes sense conceptually to view them as mutable, things like this. Functional programming also has great tools, it is great to be able to great specific functionality from general functionality using currying. At the same time, it is easy for things to get out of hand and hard to read. Haskell, for all it's clever avoidance of brackets and it's complex operators that have right or left association and many different priorities... Well, if you're creating something abstract like readFromProcess, that code gets very hard to read and understand, it gets very deeply nested and there aren't a lot of great names for the things you're doing. do notation is itself a concession that it's easier to read some things if they look imperative, if you pretend that what you are doing is not so deeply nested. The other direction, our lisp languages, are notoriously nested. FP is really interesting but downplaying the ease of other paradigms and ignoring the frequently taxing readability of FP is silly. FP is not popular because the world needs a lot of programmers and you have to be a bit adventurous, a bit out to prove your intelligence... Where writing code in a staight forward sequence of instructions using words is pretty easy for most people to understand. Half the time in a fp language you're reading things backwards, and the sometimes forwards, and then sometimes switching back and forth many times in one line. I know you can get used to it but you gotta onboard people and it's expensive. Lots of people in FP are gonna be people who started it as a hobby because they love computers but that's not all programmers. 
Some just wanna clock in, get some stuff done, and clock out; they have no academic interest and they're not embarrassed that they don't want software to look like math. All that said, it's a neat history, but the conclusions drawn are poorly supported, and good FP programmers admit it's not a silver bullet: FP has places where it has drawbacks that are better addressed by different languages.
I don't need OO to do tables. Python does arrays and dictionaries right and with that I have better functionality than any database will ever offer. The entire idea of functional programming is flawed. A function doesn't produce side effects while every useful program is nothing but side effects. That's a 100% disconnect between the way academics think about programming and the way software engineers need to think about it.
So if there's enough abstractions, we'll be able to say "get me a database thing that's secure or whatever", right? Abstractions are like asking a genie for some kind of wish: you BETTER be careful and particular about what you wish for or else the wish will not be what you expected...
Haskell is a pure functional language, but its designers soon found out that pure functions couldn't communicate with the real world, so they introduced monads. Trying to understand exactly what monads are is a big problem in learning Haskell. We are told that they come from a branch of mathematics called Category Theory. I don't have the time or inclination to study another area of mathematics, and I'm not sure it would really help in FP. From what I have heard, I am not alone in this. Monads allow Haskell to act like a procedural language, but do it with very complicated syntax.
To do side effects in a pure function you have to explicitly return a new version of the world with your side effect added on top, and this is too explicit, so they used monads as "first-class semicolons/side-effects". But monads can do more than just add state: they can be used to create eDSLs with different semantics, and since all statements are first-class you can create your own for loop, your own assignment statement, your own exception system, etc... However, combining side effects was pretty hard. Newer denotational languages like Koka let you combine side effects automatically, and are much, much easier to use than monads. koka-lang.github.io/koka/doc/book.html#why
In 2011, I took over an R&D project that was based on a 2500 LOC Haskell application prototype, developed at AT&T Labs. I did not know Haskell at the time, so my intro was really a submersion. It took about 6 months to figure out the existing code, then start improving it. That project lasted nearly four years, and at the end, I had about 13K LOC. At the end I still could not really tell you what a monad was, but that was okay. Ignore Category Theory, because a monad is really a simple interface that promotes an FP design pattern. Monads were not introduced to "allow Haskell to become a procedural language". What Haskell did was implement the IO monad (input/output), and any state change that relied on the outside world had to be performed in the IO monad. The result is that you can isolate pure functional code from code that has side effects. From a programmer's perspective, this is great. You know when you are working on pure functional code and when you aren't. In Java, etc., by contrast, you never know if you are subject to side effects. The benefit is that you can reason about, modify, etc. the pure functional code to your heart's desire, then let the compiler sort it out for you.

True story: my application was a high performance scheduling algorithm, basically a huge tree search. One Friday morning, I realized that my algorithm had a flaw that basically allowed identical branches to be searched multiple times, which was a huge waste of time. I went to lunch, and thought about it. I had two options: give up, or basically rewrite about 1000 LOC to eliminate the problem. Friday afternoon, I started a 30 hour coding session that ended Monday morning. It took Monday morning to eliminate the compile time errors from all the code mods. Once it compiled, I went to lunch. Came back, and I started testing. The problem was solved, with no crashes or runtime errors during the tests.
By the same logic that C with classes has failed because clearly it doesn't bring enough to the table itself.. hasn't functional programming already clearly failed far more so?
Loved the talk ... though it stayed focused more on history (including validating the idea that paradigms do change) than on why functional programming (or anything else) is the NEXT paradigm. One thing I was hoping to hear about was how the trends in storage costs, memory costs, processor efficiency, network ubiquity, and possibly even blockchain may impact the world of constraints that programming languages will have to operate in. I suspect the influence of blockchain will not really show its full impact in language design for another 10 years or more, but some of the other trends may impact sooner.
Assembly had procedures. Period. Push State &Address, Jmp elswhyr (do stuff), Pull Address, State, Jmp Address unless you were a bad programmer. (or JSR in more "modern" assembly)
@Dirk Knight I was merely making the observation at around 2:40 that assembly does not have procedures. That assertion is only true if you are either a very bad or ignorant programmer.
@Dirk Knight ISTM that we are largely agreeing, just coming at it from distinct perspectives. My contention would be only that higher levels of abstraction are irrelevant if you don't have a decent grasp of what that abstraction is doing for you as a result of a decent grasp of the base layer. i.e. You (the video author) have no kudos to offer me a paradigm shift in programming, if you do not understand basic programming paradigms.
30:36 Bjarne Stroustrup's C with Classes was not completely what we consider as OO; for instance, it didn't have virtual functions and the run-time support for them. So saying that OO features alone didn't make it successful is an incorrect conclusion. Also, 25:24 the part of the C syntax that he didn't like was the prefix/postfix operators (++, --) and the omission of the 'int' type for functions, so nothing about the brackets and the parameters. I encourage anyone to read "A History of C++" by Bjarne Stroustrup to learn more about it. BTW, the end of the link to Ingalls' interview, hidden by the speaker, is "Ao9W93OxQ7U".
31:51 I'm curious, why do you claim that composition is not related to objects? To me it's one of the earliest features. What about polymorphism? By disregarding those OO features, you're saying that it's the same to have them within objects or only outside objects; this simply isn't true. I'm not even mentioning inheritance vs composition, which is largely controversial (and misinterpreted in the talk, as it often is). Later you talk about Kay's "message passing" interpretation - it's the public methods of OOP (encapsulation + classes, or as you put it, methods) and polymorphism (messaging different objects the same way). Both are very much what OOP is about.
39:25 Mike Acton works with Unity, the engine which has a ton of performance problems, partly because it's misadapted to current hardware, and because the user part is C# ... which is GCed (and generally bytecode). So I don't think what he says about data-oriented design has a chance of being relevant or accurate. 🤣 Today the problem is still mostly about complexity, so I don't see that coming as a paradigm shift; today's compilers (to native code) do an awesome job.
@Dirk Knight the problem does not lie in the distributed system per se, but in the tools you use to debug them. The Internet is a good counterexample, I think. Centralization has far worse curses; BitTorrent is pure magic.
@Dirk Knight a human brain is an amazingly distributed system and quite compelling. We're currently in precarious times of understanding. My point is that centralized systems are easy, but not better. Even centralized systems rely on some sort of CDN in order to perform better. More work is needed. As W. Gibson said: "The future is already here - it's just not evenly distributed."
@Dirk Knight a lot of people can read a German sentence and not understand a word either, because they are not fluent in it. I think humor is more a language or skill than an emergent property of the brain. What I really wanted to emphasize was that the body, ecosystems, or whatever biological system seems to work amazingly distributed and leaves us in awe in comparison with our current tech. I see the future being more like P2P rather than one giant AWS (which, granted, is quite distributed internally nowadays). I would say the right view to me is sharing and collaboration rather than divide and conquer.
9:45 "C does have both goto and program blocks" -- see here for an interesting discussion from Linus Torvalds and others on the linux kernel mailing list back in 2003 on the subject of gotos in the kernel: koblents.com/Ches/Links/Month-Mar-2013/20-Using-Goto-in-Linux-Kernel-Code/
Not the first or last time that Linus had "differences of opinion" with academics about what quality looks like. Having said that, if your context isn't developing a kernel on an 80 character terminal then I think Dijkstra still wins. Also I bet someone committing a random jump into a different procedural context in the kernel is going to get bollocked old style by a certain benevolent dictator even in this newer, gentler era of his reign, so I see this as more of an objection to dogmatic adherence than to the core principle.
Pattern matched functions and local named closures sound much more reasonable than gotos, but if you're forced to use gotos, make sure they're only limited to the function's scope and (most importantly) no backward time travel
A first from this talk (at least to me): even while mentioning the classic article from E. Dijkstra, "Go To Statement Considered Harmful", and even talking about the beginning of structured programming, the author does not mention Pascal, the language created by Prof. Wirth as the answer to FORTRAN and GOTOs and built upon ALGOL.
Instead of the Wikipedia for Distributed System, you could have referenced Actor Systems.
This is where message passing happens between encapsulated entities within one program, and is often used as a *solution* to the complexities of true distributed systems. See Actix, bastion, Akka, and of course Erlang/Elixir. Pony language, too.
Watch out y'all, paradigms are like tools: different tools work well for different jobs. Don't fall into the trap of trying to solve all types of problems with one paradigm. There are always trade-offs. The modularity and easy refactoring you gain with functions you pay for with cascading abstractions: every function that takes another function as a dependency makes your program harder to read, and you have to be very careful about your abstractions.
Paradigms are like blunt objects. Everything boils down to microcode that has almost nothing to do with the assembly code created from source compilation. If we had control over the microcode and were able to debug it easily, that could be game changing.
Features are where you need to look for inspiration to make a minimal set of scalpels and saws that let you do things you want to do without disrupting your workflow. Finding those features will allow you to have the most powerful language out there because it just so happens to be the most minimal.
Satire?
@@floriansalihovic3697 not really I'm not dissing functional programming, I've just found that it's not the perfect solution for every type of problem
That depends on how you use the language. I argue that your functions can always be readable and independent, and I don't really see the reason why it should not be possible; you seem to project your own experience onto all of us. True?
@@shalokshalom I'm sharing my practical experience on a big app I worked on, it was very hard to keep track of cascading abstractions that were 4-5 orders deep.
It made it easy to swap out parts of it, but at the expense of making it hard to actually understand what the code is doing.
I don't have anything against functional programming, it's great in some cases, but it doesn't make all your problems go away. Sometimes it introduces other ones and it's up to you if you want to make that trade-off. I've also heard other people say "I don't see how it can happen in theory" and I'm here to tell you it happened to me and it wasn't fun (no pun intended). Up to you if you want to believe me, of course, I'm just sharing my opinion.
He said C *declarator* syntax was a failure and indeed a lot of languages, like Java and C++, which directly borrowed from C/C++ abandoned it.
I don't understand. You still say int x; in Java
I still see Alan Kay's perspective on OO design as being exactly what the Actor model offers: a set of distributed (physical location is transparent) objects that maintain and encapsulate their internal state and communicate by passing messages (ideally in a non-blocking way).
Bjarne did not say that C syntax (curly braces, parentheses, semicolons, etc.) was an experiment that failed.
What he said was that the "C declarative syntax" was an experiment that failed.
A view I can sympathize with.
Much of the rest of C surface syntax for conditionals, loops, functions, etc, lives on in C++, Java, Javascript, the new kid on the block Rust, and many others. Where it works very well.
Far from a failed experiment.
C has very few declarative, as in declarative vs imperative, features. WTH are all three of you (Bjarne, Richard, OP) talking about? I think you mean three different paradigms, but I cannot follow.
@@andreashabermas7964
Quite so, C has very few declarative, as in declarative vs imperative, features. C is in the imperative/procedural language paradigm.
For this reason when I read "C declarative syntax" I think only of C declarations. Those statements we use to create (declare) data, for example "int i = 2".
Of course "int i = 2" is simple and clear enough but declarations in C can get pretty horrible to disentangle when they involve pointers, pointers to functions etc.
Of course such declarations get even more weird in the C++ world with its dozen different ways of initializing things. The failed experiment made worse! :)
I can understand how it looked like a failed experiment in 1994
@@andreashabermas7964 This mess --> www.geeksforgeeks.org/complicated-declarations-in-c/
@Dirk Knight Don't get me wrong. I love C.
It's one of the smallest, simplest, high level languages that compiles to native code worthy of the name.
It allows one to do most of what one would otherwise have to do in assembler. But portably.
A C compiler can be written by one person in a not unreasonable amount of time.
C compilers can run on very small machines.
As such I'm prepared to accept all the problems that can arise through use of pointers, endianness, etc, etc.
The syntax of C is mostly great. Which is why so many other languages today look like C. Java, C#, Javascript, Rust etc, etc...
However, C's syntax for declarations can be very tortuous. So much so that people have written programs to decode C declarations into something human readable.
See: cdecl.org/
Hence I tend to agree that ""C declarative syntax" was an experiment that failed.
Interesting that the guy said that they wrote the whole program, and then rewrote it in OOP. One of the biggest problems with OOP is that you have to go full philosophical on what constitutes and separates objects. When you get it wrong, you'll have to shift things in place during development, and then you get very esoteric objects (handlers). If you already have the entire layout of the program, these are way easier to identify.
An important observation!
"Next paradigm shift" (functional programming) starts at 41:51
Could've avoided this video if it was titled "Functional programming: the next paradigm shift"
CPUs are not functional. They are procedural. If you need to make abstractions, write some declarative code with some good documentation. Otherwise, write well structured code where functions cut across data structures (not the other way around like in OOP).
Slight correction at 52:05, OO was very much a product of academia: medium.com/javascript-scene/the-forgotten-history-of-oop-88d71b9b2d9f
I was hoping for something interesting not another functional evangelist
@@jonohiggs I hoped for the same thing, but now I'm just gonna use FP/DoD/procedural/declarative/reactive code by default and move to imperative/OO code when the default mode is inconvenient.
Excellent talk and historical perspective. Richard is very skilled at presenting a lot of information in a short amount of time without being overly dense. I have one small nitpick, though.
50:34 You _might_ want to discuss Erlang in the next iteration of this talk. It was developed during this time as a functional programming language and *deployed* into production Ericsson phone switches. The derivative language Elixir has spawned more interest in the underlying Erlang during this FP wave we’re riding. If you enjoy piping data through functions in Elm, you’ll love designing systems in Elixir and Erlang.
I guess he won't. Erlang and Elixir both rely on Actors, which are basically the same thing A. Kay described as OO. Moreover, Erlang and Elixir have no static type system. He's more of a Haskell guy than an Erlang guy.
I hadn't thought about this point until after you mentioned it, but yes, you are right. Actually - I got most of my motivation to explore Functional Programming in greater detail as a result of both Elm and Elixir. More than elm - elixir propelled my ability to think in FP. I was then able to look back on Elm with even more appreciation and understanding. It's true that Elixir and therefore Erlang are heavily distributed systems - but that doesn't really detract from their functional approach.
Elixir is coming out with a set theoretic type system.
Agreed; it's a clear and thoughtful historical overview that deftly manages to avoid getting into too much detail. However, while watching it, I had several "yeah, but..." reactions. So, here goes...
As Joe Armstrong contended, Erlang is really quite an OO-capable (ie, Hybrid) language. However, the "objects" are lightweight processes (aka Actors) and they are used at a level of modularity _above_ the FP level. So, the programmer can employ them "ala carte" and pay the cost in added complexity only at that point. FWIW, most of the async stuff happens at this level, though Elixir adds stream-based pipelines via macros. Finally, because shared mutable state is avoided, many concurrency-related issues (e.g., colliding updates) go away.
The question of garbage collection is a bit nuanced. If the program has hard performance constraints, GC can be a non-starter. However, in many cases the program simply has to run "smoothly". Because the Erlang VM does GC separately for each process, "stop the world" behavior is generally absent.
One powerful aspect of the FP style is that, because global state is generally avoided, functions can be examined and considered in isolation. You just have gozintas (arguments), gozoutas (return values), and logic (code) to worry about... Also, because shared mutable state is avoided, many concurrency-related issues go away.
Interesting talk. I agree that functional programming may be the way of the future, but the power of familiarity should not be underestimated. I've tried to transition from Java to Kotlin (which supports FP), but when push comes to shove and I'm in a hurry to produce some working code, I still find it much more productive to use Java. I think familiarity explains the meteoric rise of C++ in the 90's too (C++ is C with stuff added, and still compatible with C). As someone who earns a living from software development, it's not always about what is the best language from a technical standpoint, but usually about what you can be the most productive in.
Did you try taking a look at SICP and Lisp? Clojure runs on the JVM; you may find it interesting
Exactly. We're paid to deliver solutions to business problems, and the company president is exceedingly unlikely to dig into our code to see how we did it. He just wants his problems solved in a fashion he can rely on.
You're an engineer, your job is to elevate yourself to the medium, not the other way around.
@@jplflyerand an fp dev can do your job 10 times faster with 100x less bugs but requires knowledge. Using c++ instead of haskell for a general purpose app is like building a skyscraper with toothpicks because you can't be bothered to learn how to actually build skyscrapers but still want to call yourself an engineer
@@AndreiGeorgescu-j9p Ah, the arrogance of youth.
I hate the purity tests. Use a multi-paradigm language (e.g. Common Lisp) and avoid tortured problem reformulations. Things fall apart when programmers have to be in service of a paradigm rather than a problem.
I love CL, it's a top tier language. However, there's a problem that these languages cannot solve: certain paradigms and features are not backportable. If you want a lazy language, or a statically typed language, or a language that tracks effects or uses a different kind of memory management, you're out of luck. Or at the very least you can't use libraries, because they were created under a different set of assumptions.
I'm a novice in computer science, I think I just got a free lesson about programming paradigms, Thanks a lot for the lecture/presentation!
The next programming paradigm is going to be "task based" - where tasks are little tasks that can contain "waiting" (for other tasks, signals, events, I/O) without doing a system call that blocks the current thread. Aka, tasks are like "user space threads" but without the overhead of context switching and sys calls.
so basically event driven code?
100% believe most languages are already functional and will continue to add more FP features. What I do NOT think will happen is languages ending up PURELY functional, because that brings in far more accidental complexity than it actually solves, and also effect systems in purely FP languages are not really a fully settled matter.
You have no idea how confusing OOP was for me even in high school. Teachers and YouTube videos were presenting OOP as a solution to every problem. OOP has created more frustration than solutions.
I am glad I am not alone in having noticed the importance of modularity over encapsulation.
What's so confusing in OOP for you?
Here's how I view OOP:
You have a data structure with functions attached to manipulate or query that data.
Then you can take a step further and realize that you can make objects out of other objects, nest one data structure into another, etc.
And then take another step further and realize that you don't care about data structure internals and you'd like to have some common way to "talk" to the data, so instead of tightly coupling one data structure to one implementation, you loosely couple many data structures to many interfaces.
And now you don't care if you pass around LinkedList or ArrayList, both are lists, both can be iterated over, both can be searched and etc.
@@randomname3566 objects are imho the hardest concept to grasp in the whole of programming (maybe equal to pointers). It's because it's so obvious that learning about it is not rewarding and therefore - hard to remember. That's why I always had problems with objects. I could never appreciate them because they bored me and this made me really non-creative when I had to use them.
But apart from that it's easy. You make an object that has some variables as properties, and functions that modify those properties and sometimes return something to the outside world.
To the interesting list @33:08 I would add two things:
1. Polymorphism. But it turns out that both static and dynamic polymorphism are possible in non-OOP languages. Rust offers static polymorphism through generics and dynamic polymorphism through trait objects. And Rust's non-OOP polymorphism is arguably better in that you can safely make somebody else's type implement YOUR interface.
2. object.method() syntax, which improves IDE completion and allows for more consistent naming. But it turns out that you can have that in a non-OOP language, too. Rust has it, for example.
This was interesting, but... I have to disagree with some of the most basic premises. Maybe that disagreement is based on lack of supporting conversation that addresses my thoughts, and I could come around.
Let's start with the idea that OO didn't solve complexity. Okay, that's fair. FP doesn't, either. Both are tools that don't *solve* complexity, but they also both make it possible to address more complicated problems. We are now solving significantly more complicated problems than we were back in 1980. OO combined with modularity have made it possible to not even think about huge parts of what we do. Back in 1980, the state of the art in display was using an ncurses library on a VT-101 terminal. How the world has changed.
To say OO "didn't solve the complexity problem" is basically moving the goalposts by about 6 football fields.
I also am not on board with some of the other "conclusions" some people have come to regarding things like inheritance. Inheritance is a tool. Composition is also a tool. The problem is when you use the wrong tool for the problem space.
There was a lot of interesting perspective in this talk. I'm not sure I agree with some of the broad arguments, but still, interesting.
I totally agree with you, what OO allows, is to zoom and dezoom your problems, and focus on the level of the granularity your cognitive ability allows you to handle
@@senhajirhazihamza7718 That's an excellent way to put it. And not only your cognitive ability, but your immediate needs. But yes, exactly.
I've always felt the problem with OO was the "Oriented" part. Objects, and the features that come along with them (encapsulation, inheritance, etc), are great things that are very useful in a lot of cases. It's very convenient to take a piece of your code and essentially turn it into its own self-contained program. There are a lot of benefits to that. However, only a fool would organize their code with the whole goal being to divide it into as many little programs as possible. That has pretty thoroughly been proven to be bad design. There's a reason nobody uses Smalltalk today. It was a great idea that resulted in a lot of useful tools, but as a style it's bad. Having objects is good. Focusing on objects is bad.
@@jeffwells641 I never programmed Smalltalk, so I can't comment on that. Early in my OO life, while I wasn't using the term module, at times I used objects like I think we're supposed to use modules. Oh, I used them as objects, too, but I found great value in organizing my code into objects. It did a great job bundling things together and put a namespace on them -- long before C++ had namespaces. Objects are great when we really are dealing with objects. I do a lot of SQL database work, and I prefer to model my objects 1-to-1 with my tables. I find it works great.
But real world programmers (like I suspect you are, and I am) shouldn't try to be purists, IMHO, and I think that's kind of what you said, too. We should take the pragmatic approach to solving problems, to doing our jobs, and that means the right tool for the job. Objects aren't our only tool.
In other words -- I agree with you.
@Dirk Knight Were you doing network programming in the 60s? I wasn't, but I was born in 1962. Were you working with windows, and complex user interfaces? How much multi-threading? Were you incorporating 400 tools written all around the globe in multiple languages? Dude, by the early 1980s, the standard for computing was the DEC VAX with a clock speed of a whopping 1 MHz. Are we really not doing anything at all on our multi-core machines with clock speeds 4,000 times faster than that?
It's an interesting way of turning the thinking around, but there is an egregious error in representing that C with Classes had a full OO implementation.
There were several OO specific features added over the first few years in C++ which IMO definitely contributed to its success:
* 1982: virtual functions and operator overloading (ignoring the non-OO specific added features)
* 1989: multiple inheritance, abstract classes, static member functions, protected members
I don't think one can make a clear case that OO was not the cause of C++'s popularity, as the OO features in C with Classes and C++ differ too much.
How is operator overloading an object-oriented feature? Many object oriented languages don't have it (Java), and Haskell can achieve something similar with typeclasses.
Besides, operator overloading can be accomplished in C++ using free functions...
I agree. And just from looking at Wikipedia, I don't see many non-OOP features that he's claiming that C++ had that C with classes didn't have. It seems to me that most of the changes added to C++ that weren't in C with classes were additional OOP features, but he doesn't talk about this, so it's a very flakey argument.
Fantastic!
I want more. I want the experience of a developer from each language summarized into videos. What works and what doesn't work. When does something work and when does it fail. What caused a rewrite. When was performance lost. I couldn't leave the room without pausing the video because I didn't want to miss anything. And because of this video I have been pushed passed a stuck point and I now have a few ideas that I will be working on for the months ahead.
Thank You!
it is absurd how many software people use non sequiturs in their branding.
"data oriented design" that sounds fine, design oriented towards minimizing data.
"it means design software around hardware so it runs faster"
that would be either cache oriented or hardware oriented.
I prefer information-oriented-processing design.
it is oriented to the data architecture of the hardware. But you're right.
It's also the worst idea possible and completely goes against computer science in general. The only people who would need such an idea are people writing software for an extremely specific piece of hardware who need it to be the fastest thing humanly possible. That's not 99.999% of people, including OS designers.
From today's point in time, the C style looked like a failing experiment. But we have to remember that, at that time, the most popular languages were Ada, Pascal, Delphi, BASIC, Fortran, COBOL and Prolog.
C was still experimental in the '80s, and that status started to vanish with Visual C++, for which machines were still too slow.
But something happened at that time that gave a second chance to C: Linux
9:20 - We even have "`Considered Harmful` considered harmful" so there's that. One historical note: According to ESR's jargon file, while the article itself was definitely written by Dijkstra, the title was apparently supplied by Niklaus Wirth.
25:35 - I think there's the crux of my minor rumbling disagreement with "6/10 languages are C++ or C++ descendants", whereas I'd be totally on board with "6/10 languages are C or C descendants" (and I'd probably argue the number is higher). I really don't consider Java to be a descendant of C++ but a descendant of C and Smalltalk (all the C++-ish bits come from Smalltalk). Also, while Objective-C is different, it's still a little similar because it started life as a C pre-processor. Funnily enough it's sort of got the same ancestry as Java but loved Smalltalk a little more than C (which is fair... Smalltalk was pretty interesting). To give Stroustrup his due, he was looking at the "C Experiment" from a different perspective than basically everyone else. C didn't solve the "program organisation" problem, so therefore it failed.
Also a side note: it's funny that Swift is listed as being C-like (not inappropriately, I'll add) because one tagline for Swift was "Objective-C without the C".
36:16 - But a huge part of the reason why distributed systems are so frightening is the tools that are available to us. Joe Armstrong wasn't afraid of distributed systems programming because his toolset was designed for building reliable distributed systems. I also think Alan Kay's world has, in some ways, come to pass, since so much software these days is microservices all the way down. The key feature that took much of the industry decades to appreciate was isolation (not without reason... serious performance dragons lie here). If I am forced to assume that that other "object" is on a different computer, then there's nothing I can do to alter or inspect its details; I HAVE to use its provided interface. In C++ (or Java or C# or...) it's difficult to enforce that outside of running it in a different process.
44:25 - Erlang, a language built explicitly for building distributed systems, is a functional language. Joe Armstrong (one of the creators) had as part of his thesis that share-nothing immutability was a necessary precondition for a reliable distributed system. I don't think there's as much daylight between "build it like a distributed system" and "functional programming" as you think there is. Having said that, I agree that our implementations of OO and FP are diametrically opposed to one another, but to me that says more that our implementations of OO fail at "build it like a distributed system".
49:20 - In fairness, I think this is as much a sign of the hardware at the time as anything. The LGP-30, with its 4096-word (about 15 KB in modern parlance) drum memory, was state of the art at the time. When swapping out parts of the operating system to get access to enough memory is a reasonable strategy, GCs are a waste of time because you're going to need to manually massage your data into place anyway. Garbage collection became a reasonable general strategy when the amount of memory that could reasonably be expected to be available went through the roof (by contrast, a modern server machine with *15 GB* would be considered fairly small). This is still somewhat the case in the embedded and (in some cases) mobile spaces. These days, of course, the common complaint about GC is latency spikes (see blog.discord.com/why-discord-is-switching-from-go-to-rust-a190bbca2b1f ).
"Objective-C without the C". lol, oops
Saying "functional style is to avoid mutation and side effects" is like saying flying is to avoid touching the ground while moving. It's technically true but doesn't really help with the understanding. What could help is that you need to decompose the task into functions that (1) experience the outer world only through their input parameters and (2) influence the outer world only through their output values. It's good because, when writing such a function, you always know where to start (input) and where you need to get to (output). Otherwise, this is a very valuable video emphasising many important thoughts that are not mentioned in other FP-advocate talks. Thanks for creating and sharing!
Except what you said is false and isn't what a pure function is. Closures exist
Richard, you really need to take a deep dive into Elixir and OTP, because together they serve as a prime example of how to do objects and message passing the way Alan Kay really intended. Of course, you are correct that distributed systems are more complex, but sometimes they are also necessary. However, many OTP programs don't actually run on distributed machines, rather the OTP library and the BEAM virtual machine allow the programmer to explicitly introduce concurrency into their code in a safe, reliable way via extremely lightweight processes each with their own message queue (like how Clojure often handles side effects). Concurrency is how you achieve efficiency and high throughput in a network application. OTP and the BEAM also allow you to distribute those processes when necessary.
Your talk was good, but I think your "side effects considered harmful" quip completely misses the point that we run software purely for the side effects that doing so produces. Managing side effects is useful for reducing the complexity of writing the software. Elm's managed side effects are amazing, but the Elixir/OTP way of managing side effects is also amazing and arguably more powerful, because it can be distributed reliably.
You should read "Functional Web Development with Elixir, OTP and Phoenix" by Lance Halvorsen. That book is an excellent demonstration of managing side effects and state via functional programming in a distributed web app. Another great book is Sasa Juric's "Elixir in Action" (published by Manning).
I think after a deep dive into Elixir/OTP (especially via Lance's book), you won't be so quick to dismiss Alan Kay's original vision of OOP as a bad idea. I do totally agree with you though that C++ kind of hijacked the term OOP to mean something else than what Alan Kay originally intended.
Elixir has been on NoRedInk's radar since 2016
dev.to/rtfeldman/comment/23a
Ultimately Haskell won out: "From Rails to Elm and Haskell"
th-cam.com/video/5CYeZ2kEiOI/w-d-xo.html
Presumably due to static typing.
You brought up a great point.
Much of the problem with OOP has to do with which THREAD the method runs on. Erlang got it right, and C++ and its descendants did not. In C++, you try to isolate state into a struct, but concurrent callers are a mosh pit of modifying its internal state. The parameters to a method are not immutable, and the methods are concurrently mutating the struct. In Erlang, the boundary of an object is basically a queue. Each "object" is serially ingesting immutable arguments from a queue, and each object is running concurrently. The arguments are the messages. It's actually a reasonable "distributed system", because the messages are immutable... like packets in flight.
Ah, Erlang! I remember when I had the idea of using an Erlang-made CMS to build sites. The request tuple, which was passed around anywhere you needed to process a request, was a screen and a half long, since you don't have any mechanism to pass data along a call stack other than passing it as a parameter. You need a new parameter? Either change all the functions in the call stack to pass the new parameter from where you read it to where you need it, or just ship one giant tuple around. I was just happy I didn't have to mess with monad transformers and lenses.
I think it's important to separate code structure from runtime structure.
Organizing code like a distributed system makes it unnecessarily complex, but it is actually very beneficial to design the "runtime" with isolated processes that only communicate via message passing.
This is essentially how erlang/elixir achieve the level of fault tolerance and concurrency that they're famous for.
Well, regarding Scala, F#, and OCaml and their use of OO / FP features, we need to remember that both Scala and F# live in ecosystems where the large majority of the libraries have OO interfaces (Java, C#), so I wouldn't be surprised if they encounter the need to use OO features more often than OCaml programmers do.
You say we have a culture of not "re-inventing the wheel" in programming? Re-inventing the wheel is ALL anyone does these days! How many gazillion JavaScript libraries and frameworks are there that do the same thing in a different way?
Very engaging presentation; I'm looking forward to a sequel. From my personal observations, it looks like we are currently experiencing a paradigm shift towards functional reactive programming (FRP). What's missing in functional programming is the ability to efficiently organize (and dynamically reorganize) asynchronous processing structures. In FRP this issue has been addressed by dynamically binding functionals and observables to directed acyclic graphs, which eventually allows meta-programming in terms of dynamic graph optimization.
1. C++ is not the successor of C; that word implies that C has been obsoleted or has "ended". C++ is a rough superset of C, which you may or may not want to use. There are plenty of people writing C and running C code (possibly more so than C++).
2. *Most* of the languages on that slide have a "goto" statement. The prevalence of "goto" wasn't that no one had previously thought of using code blocks in high-level languages; jumps are in almost every language because it's a fundamental instruction in almost every processor architecture. You talk about goto with such disdain, but there are many valid reasons to use goto in modern code, such as breaking out of several nested "for" or "while" loops.
In the 70s there was Pascal, originally developed as a teaching language, which had functions, blocks, etc. Then the upgrade to Pascal was Modula-2, by the same guy (N. Wirth), which included threads/tasks. Then in the early 80s there was Ada, which had modules, namespaces, and all the goodness we now take for granted (yeah, it died due to licensing costs to the DoD; I used it for a few years and really liked it). Pascal was an excellent language and still lives on today, as do COBOL, Fortran, and others. What I'm disappointed with these days is the loss of domain-specific languages like Prolog for building rule engines. We now have tens to a hundred general-purpose languages, when in the 80s, 90s, etc. we had a lot of great languages for specific purposes. Forth I loved: fast, compact, great for embedded systems (PostScript is based on Forth). APL for doing complex business math, 4GLs for doing the DB layer and UI. I like those domain-specific languages; they usually solved a problem very quickly. Anyway, my 2c worth.
100% agreed. Pascal is still a beautiful language in terms of implementation; even C# borrows some syntax from it.
Look into Haskell. One of its features is the ability to develop a DSL for a problem domain. I created a DSL for a high-performance scheduling algorithm, and it worked great.
Pure functional programming brings a lot of complexity for no reason. Something that can be described as a simple "repetition/loop" now needs to be described in terms of some other smaller blocks to simulate the same thing. This causes the performance to suffer because of unnecessary copies between the functions, increasing the complexity of algorithms by chaining, etc.
It’s not for no reason. The reason is that you’ve decided to make the trade off of using pure functions at the expense of procedural shortcuts.
In my experience, almost all loops can be concisely expressed as map/reduce iteration. It’s not hard to understand, and the compiler is able to take the abstraction and produce native instructions just as fast as a procedural loop.
@@lordsharshabeel You are saying the reason for pure functions is using pure functions. I don't think I understand that.
I also don't think you can express loops "concisely". You will just end up fighting with "common human sense" and "how computers work". Computer hardware performs the operations in a stateful imperative way. You can't write assembly code in a purely functional way!
Compilers usually cannot optimize this type of programming, which makes it unsuitable for performance-critical software. For example, check the performance issues of C++ pipelines, which result in unnecessary copies. Recursion results in excessive memory usage. Working with any type of I/O or event system is a nightmare. Etc., etc.
@@nivo6379 On a more philosophical note: a truly pure function does not have side-effects. Composing pure functions yields another pure function. This means any pure functional program cannot have side-effects and therefore does nothing.
*Every* program that does anything involves side effects and thus cannot, by definition, be created in a purely functional way 😉
@@totalermist, I intuited the same, initially. The distinction is "side effect" versus "managed effect". It means not abstracting a logging call or database write out into the belly of your program, but rather working to keep those sorts of things as close to the surface as possible. It means not mixing such an action with the transforms required to prepare the data for the effect. It means ensuring that any such effect is as modular, examinable, and replaceable as can be.
@@nivo6379, "you can't write assembly code [sic] in a purely functional way", and I say: 1. you can, and 2. it would be the same garble of conditional jumps that imperative looping compiles down to. The biggest reason why we moved past writing in assembly is interoperability. The biggest reason why we moved past thin abstractions like C is ergonomics. Textbook OOP is fantastically ergonomic. Real-world OOP is a hideous mess of gotchas and ossifications, because we collectively have rejected the self-restrictions needed to leverage the modularity of OOP. One answer is to go back to thin abstractions, as with Go, or to carefully constrained but deliciously composable abstractions like Rust. We use distributed systems at my job, Kubernetes, as a means of enforcing restrictions on ourselves as to what can and cannot be directly coupled, because we failed OOP. The alternative recommended by Feldman is to enforce the necessary restrictions to achieve the modularity needed to avoid being strangled by our code. One can do that in an OOP paradigm. FP does that wonderfully, as an inherent property of its structure.
Actually, I disagree with one thing at least: "be more like distributed systems", I think, is great advice, from the ideas around microservices to Erlang's runtime system. Great presentation still!
Outstanding talk! I've been a software developer since the mid-1970s (high school) and knew a lot of what you talked about, but not all.
Love the format of the video. Screen space is used for the talk, not the talker, or the room he's in, or some permanent header about the meeting.
Screen space used for what we need to see.
+1
Very well crafted presentation. Thank you!
I don't understand. Pasta code is the best sort of code. Especially with Nana's sauce.
The amount of ads is infuriating. Normally I don't bother watching on mobile with 2-3 ads. But I'll come back with a browser and ublock on this one.
You can use brave browser on mobile to bypass the ads
Ironically, what this talk could use more of... is objectivity.
This is a good prediction (and I hope that the OOP acolytes have a painless epiphany of "OOP doesn't fit my data model because it's perfect; my training and mind fit the problem space into an OOP shape because that's what I'm used to").
"OOP was an interesting experiment. It's well past time for something better." It's an Elm advert? Yup, it's an Elm advert ;-)
There are a lot of straw men in this argument. For example, he talks about how Alan Kay looked at OO as composing systems from smaller things that looked like systems themselves, connected by a network, and then says "that sounds like distributed programming". He then argues that distributed programming is notoriously hard, so it's a bad way to structure programs, without adequately demonstrating that this was Kay's original viewpoint (clue: it wasn't).
Once functional programming is firmly entrenched, it's going to take Brian Will doing a "Functional Programming is an Embarrassment" video to point out to us where we all went wrong.
I think a lot of people associate pure-functional programming with pain because of the category theory terminology in Haskell, Monad Transformers, and having to change a huge piece of code to thread variables through the call stack because you didn't want to use Monads. Newer languages like Koka and Eff have algebraic effects, which are much easier to understand and use than Monad Transformers, and by default they can also tell the difference between total functions, functions that can throw exceptions, etc. Also, pure functional programming is just a special case of relational and declarative programming, and the "single return value" restriction of functional languages is worked around by using Monads, which essentially embed other languages into the functional one.
@@aoeu256 I think it just creates work to transform an algorithm that is fundamentally running on an imperative, state-transforming instruction set with memory into a functional model. Converting things to a mess of recursive functions with state transformation threaded through the call stack is, in the words of a colleague, 'an unnatural act'. My algorithm development work (entropy analysis and statistical data processing, mostly) relies on no libraries; it's just bare-metal algorithms to do whatever needs doing, usually with speed as a goal. Some things map naturally to functional structures, but others simply do not. In contrast, digital logic design is fundamentally functional. 3D model expression is too. I took to functional HDLs like a duck to water: functional HDLs map perfectly to the problem space. OpenSCAD, a functional 3D model description language, similarly maps well to the problem space. Functional programming of computer algorithms maps much less well. Man cannot live on recursive factorials alone.
@@davidjohnston4240 This is complete nonsense, and hilarious. All of these algorithms come from computer science, a field that existed before computers or the von Neumann architecture ever did, and a field that will exist after that architecture as well. What you're doing is translating math into imperative nonsense to talk to a computer, instead of writing math and having the computer translate it for you, which is the point of their existence. They're meant to help the engineer, not the other way around.
Your statement is illogical and hilarious, because by your logic you should just be coding with electrons, since even the opcodes in assembly aren't actually what's run on the CPU.
Also, you have no idea what you're talking about with recursion. Nobody even writes recursive functions directly; those are explicit iterators, like 4 layers of abstraction below anything sane.
This hate of FP always comes from people who barely understand what they're talking about. And they often have your extremely odd way of speaking, and love to talk about low-level details that don't actually matter to anybody.
Pure functions are important because they have equality relations among themselves which can be used in the program itself allowing program transformation without confusion, therefore controlling complexity.
Functional programming is great, but this specific argument for it seems flawed from the start.
There's a problem with the premise that "OOP is flawed because programming distributed systems is complex, and OOP is like distributed systems": everything that makes distributed systems difficult (concurrent programming, lossy communication channels, security, discovery, etc.) is not inherently present in an OOP system. Those difficulties arise to the extent that a system is distributed, not to the extent that it is object-oriented.
I'm guessing Alan Kay was using the distributed systems analogy in terms of the similarity in mental models of computation, not in terms of how they're implemented in practice. In both distributed systems and in OOP you have to divide your domain model into separate entities that each have their own responsibilities and whose implementations will be a "black box" to others. Then, you make the entities communicate to each other via a messaging API. None of the difficulties in distributed systems programming are present in this similarity... they arise when you go further and start putting those entities on separate processes or machines, which has nothing to do with OOP. On the other hand, if your single-threaded program grows and requires you to pull a component out into its own separate process (say for security or scalability reasons), it being already written in an object-oriented way should make it much easier to do so, since its interface and the way it's encapsulated is already defined. So the similarity between distributed systems and OOP ends up being a strength in the end...
@Dirk Knight my point is that OOP does not solve distributed systems issues (aside from domain modeling like I said at the end, which granted is a very easy part of a distributed system), and that distributed systems issues are not present in OOP. They’re orthogonal concerns.
@16:25 I think what Dan meant when he said "we had the OPPORTUNITY..." is that rebuilding the application again was something easily done with OOP and not easily done in applications written in previous paradigms, making its experimental nature a good thing, not a bad thing.
I suppose it's always interesting to hear a different spin on things, but unfortunately the logic doesn't come together here. Arguments used to bash OO don't necessarily support Functional, for example. There also seem to be a lot of little factual errors that help smooth the story. C with classes seems to not have had virtual methods, which is perhaps the number one defining thing of OO.
The main point at the end is that we want to have more immutability and to control effects, which I whole-heartedly agree with. That's not the same as functional programming. Some functional programming languages have mutable global data. So the paradigm-shift might better be called immutability-oriented or effects-oriented, then I think we will still have little mini-computers managing the state we do need.
I've never heard of virtual methods being a defining feature of OO. WTF are you on about?
@@jeffwells641 Maybe you just learned something, then. See en.wikipedia.org/wiki/Dynamic_dispatch
@@jeffwells641 Note that the original OOP language, Smalltalk, had blocks (sort of like lambdas), could handle messages without the methods being implemented by forwarding the message to other objects, and everything was an object, including if and loops, and you edited your application while it was still running. What C++, Java, and C# call OOP is a mere shadow of what can be, and is mostly over-engineered class-based spaghetti code due to not having closures early on.
Wholeheartedly agree with this. We had "composition over inheritance" early on in the talk, which wasn't explored, and then lots of (Small)talk, glossing over what the major OO languages actually provide and why they have been so successful. We didn't even hear of Rust, which puts the issue of state and mutability front and centre.
These purity tests can be tiresome.
Instead of saying the best language has these features, I think we have come to a point where we should say: "If you are working in this domain and your task is to develop this or that, with these people in this existing environment, then the most interesting languages, development environments and archiving systems are these and for these reasons"
Functional style is great but sometimes you need state for performance and sometimes you need a distributed system. I'm betting on structured programming where needed with functional style where appropriate. (IMO, OO and modular aren't really separate paradigms from structured, just minor tweaks on it).
Well, one of the better-known languages that is purpose-built for distributed systems, Erlang (and its syntax revamp, Elixir), is a functional programming language.
"Functional programming makes our programs less like distributed systems"
WRONG. Here's an example of a functional-style 'pipeline' in C++ ('|>' is a proposed pipeline operator that passes the output of one function into the next, like a flow chart):
auto e = str |> views::reverse() |> ranges::find_if(isalpha).base();
If one is prepared not to take things too literally, this is very, very similar to what Alan Kay was talking about when he said it's a "bit like having thousands of computers all hooked together by a very fast network", because each function is like a highly modular, self-contained 'computer' that communicates with the others over a 'high-speed network' (the pipeline). Conceptually, Alan Kay was very close to the modern 'Reactive/Functional' style.
Apart from this niggle, I really enjoyed this talk BTW.
I don't know if this is what he meant, but this is my take on it: purity (referential transparency) lessens the unreliability of both distributed systems and effectful function calls. Most distributed systems require defensive programming, while pure functional programming doesn't. That said, the "pure" qualifier is important, and Kay's 'high-speed network' could be seen as a metaphor for "infallible"/pure functions.
The example you are showing is just syntactic sugar for function composition, which can be mechanically rewritten into multiple function calls with intermediate variables holding the function outputs. This looks nothing like a distributed-systems graph; it's the complete opposite: statically sound types and a linear execution graph. None of these functions holds mutable state either, contrary to hiding it within an object. To me, the original OOP description is very close to what actor systems are.
* Thinks this will be a video about the future *
* Sits through 45 minutes about the past *
Because our best future was already made in the past
See Jonathan Blow's talk on the collapse of civilization in 2019
To know where you are going, it often helps to know where you have been. "Those that don't learn from history are doomed to reinvent LISP" ;-)
@@recklessroges lisp is cringe
Agreed... Although I don't mind hearing a bit about the past to get into a certain arguing position, this talk kind of let me down in that there is a ton of history and then when it should have really built up to a real point, it just kind of fizzles out to a few slides about functional programming.
Not only that. When doing such a historical review, it is very easy to make it biased: to omit some things and emphasise others.
Functional programming is nothing new, and it was in Fortran and Algol from the earliest time, if only in some limited form.
Here is a prediction: if FP is not kept under control, the amount of energy expended to solve a given problem will grow problematic. Just as with OOP now, the return we get from Moore's Law is nearly sucked up by the complexity added to OSes and programs by using OOP in many situations.
I used Motorola's assembler for the 68000 quite a bit back in the early 80s. It had features for block-structured code, like IF and WHILE. And you could make macros that pushed arguments onto the stack and called some function using the JSR instruction. Of course the code was 68000-specific; IMHO portability was the main driver of the adoption of high-level programming languages.
Richard, thank you for this, excellent job untangling the history and ordering the concepts!
A few facts "against" functional style as the next paradigm shift:
- Among machine learning frameworks, PyTorch won over TensorFlow and Theano because it didn't assume immutable "code" structures and allowed interactive debugging and dynamic adjustment. JAX is trying to bring the functional style back, but it is still nowhere near as popular as PyTorch.
- Event driven programming, which I think is one of the key concepts of a distributed system, proved to be extremely convenient: Qt and ROS are both amazing frameworks that are easy to learn, use and structure programs.
- Distributed network abstraction allows scaling to networks of heterogeneous computational resources, i.e. clusters.
I just can't see FP as the next paradigm shift. I mean, it is already a thing now; any popular programming language has FP features.
How many languages make side effects, semicolons, and statements first-class via algebraic effects or monads, though? How many languages use lenses instead of references to modify part of a subtree of a complex "object graph"? How many languages give you concurrency without race conditions? Anyway, look at F*, Idris, and/or Koka to see the nice things that pure functional programming can do.
@@aoeu256 Languages without race conditions? Whenever you have a multi-user piece of software that has to write to a DB, you'll have race conditions, regardless of the language it was written in.
As for the rest, it's not like it is impossible to write monads or lenses in other languages. You are simply not forced to.
Is this the point? Well, I might agree to some extent. I usually find FP-styled code easier to test, yet I would consider FP overkill when all I have to do is print a table by looping over an array. The fact that TCO is not popular among JS engines doesn't help.
Yet another fantastic talk from Richard. Well structured, engaging, informative and spot on!
It seems to me (as a novice programmer) that the objective of OO was to prevent side effects, and one of the biggest features of FP is that it prevents side effects.
What? "COBOL did not have language support for blocks"? Really? 7:47. It is funny to hear this. A COBOL program is structured into 4 DIVISIONs, the divisions into SECTIONs, and the sections into paragraphs. And the #1 loop command is called PERFORM and can, well, perform through sections or paragraphs of code.
This is what happens when people try to do "history" lessons but didn't check sufficient original sources, and instead of presenting objective historical facts choose to tell a manipulated, partly fictional, biased version of history that somehow supports their agenda. Those who have read "1984" will remember that Smith worked in the Ministry of Truth, Records Department.
Language designers in the sixties were all well aware of functions, types (to the extent understood at the time), partial application, etc. Still, they chose to do things in a structured, procedural, lexically scoped way. You may want to think hard about why that is before abandoning sequential programming. It has been said that any sufficiently advanced program will end up containing an implementation of something like Lisp; well, as seen with things like IO monads, the opposite may very likely also be true: any sufficiently useful function-oriented "program" has to include a sequential execution model, mutable memory and objects, an idea of time complexity, and I/O.
@@lhpl And decades ago a COBOL program could respond to events, using async functions. There was a section for them, named DECLARATIVE SECTION. :) in UPPERCASE at the time :)
As an old assembly language programmer, the assertion that there were no procedures or functions in assembler programs is just flat out ignorant. The FORTRAN II manual published in 1958 has calls and returns: that is, functions. FORTRAN did not force spaghetti code.
I wrote a huge program in Modula-2 to build a bitmap painting program for kids, called Flying Colors. It was a huge hit, but when I had to port it to Windows from Macintosh, I found that Windows didn't have some of the bitmap manipulations that Apple's OS had in its QuickDraw system, so I was forced to use assembler, in this case Microsoft Assembler, to write about 1% of the program. In MASM, you have macros available that give you IF/THEN/ELSE, WHILE loops, etc., and in fact writing in assembler is quite easy and fairly high-level. What makes assembler hard is that Intel had only 4 general-purpose registers, 2 of which are used by the multiply instruction, so you spend way too much time fussing with loading and saving registers. So MASM is not that productive overall. But you can do any paradigm you want in assembler, and with such a powerful macro processor, you can do a lot of stuff. I think the real gain in a new language is that for some problem domains, you can make it super convenient. I worked on a language called Beads (beadslang.com) which is designed for making web and mobile apps, and it is really easy to fit things into small spaces because it has a clever layout model that makes it easy to support a wide variety of output machines. No other language is as convenient for that problem set. That's the point of tools that are optimized for tasks. The question is, what is Elm's problem-set area where it shines?
@@edwarddejong8025 But you do realize that a compiler merely masks the fact that there are only a handful of hardware registers on the CPU. The x86 is very good at addressing memory indirectly. So I never found it to be a limitation. When you write a lot of assembler you create a library of functions that makes your life a lot easier. I never heard of beadslang. I'll check it out.
@@donjindra The convenience of having a compiler do all the register allocation for you is the reason hardly anyone programs in assembler nowadays. Further, people are using languages like Python and JavaScript, where arrays are open-ended in the language, which saves you the effort of managing memory explicitly. Overall the trend is to push more of the work of writing code onto the computer.
@@edwarddejong8025 I never said compilers and interpreters were not great tools. I'm merely suggesting that someone who programs in assembler over many years builds up a library of code that helps tremendously with tedious details. I mostly program in C and Python these days. But when I programmed in assembler I learned to be very productive over time. I could knock out complex programs easily.
@@donjindra There was a man who built a database called Panorama entirely in assembler, and it was lightning fast. Assembly is surprisingly productive, I agree, but nowadays it is much easier to work in a higher-level language. I built my super-high-level language for web and mobile apps (www.beadslang.com) because I wanted something much more convenient for making graphical interactive products that are platform/OS neutral. There's always a tradeoff between speed, size, convenience, and readability, and I consider readability/maintainability paramount, because labor costs way more than computers do. Computers are lightning fast and dirt cheap. Just tonight I sold a perfectly working XP machine on Craigslist for $20.
The reason you think OOP was not a solution to complexity is, imo, because there is no solution to complexity. No methodology will ever "solve" or eliminate complexity, because complexity will always expand to fill all available space. To put it another way, OOP very much DID solve complexity -- in the 90s. Now, the complexity programmers experience rests on the substrate of "solved" complexity such as existing windowing & messaging systems. When we have a very complex situation and we manage to simplify that situation, we then proceed to re-introduce more and newer complexities. Modern Windows is exponentially more complex than anything IBM or DEC had in the 70s, precisely because C++ simplified the patterns that existed in the 70s, and allowed newer systems to become more complex. Which is what will always happen.
Programming seems as complicated today as it did in the 70s because it simply IS. And it probably always will be.
Don't normalize garbage collection.
I led a functional programming rewrite of an application at a billion dollar company... I still believe in functional programming but the developer backlash was horrendous.
Which functional language? Have you looked at Koka? Its algebraic effect system is simple and unlike Monads they can be combined easily, and it lets you use side-effects easily without having to learn the Monads.
I feel you.
@@aoeu256 Koka certainly looks interesting.. I try to keep up with new languages but hadn't heard of it. Thanks for the heads up.
Actor-based programming could be put into correspondence with the general idea of message passing (object orientation, in that sense) and can also be put together in a distributed fashion (easily!?).
Is functional programming really a paradigm shift and does the style alone reduce complexity or rather the mental overhead needed to deal with complexity? To me it seems one major driver of complexity is the need to have interoperable modules each with their own set of dependencies that all have different update cycles.
It resolves one type of complexity -- multi-threaded access to the same data. That's a useful problem to solve. However, I think it throws out the baby with the bathwater, and I haven't jumped on board.
In my experience most of the accidental complexity comes from dependencies. Whenever there is shared mutable state, it introduces dependencies between the parts whose functionality depends on that state. Side-effects cause dependencies too. Functional programming can alleviate both of those.
@@digitalspecter dependancy and nesting yup.
I would love to think that the paradigm shift after functional programming is logic programming
What’s that
@@asdfkjhlk34 en.m.wikipedia.org/wiki/Prolog and friends
I thought it was specification oriented programming kind of like ATS2
how to do a + b in prolog?
@@SimGunther Specifications seem to overlap almost completely with relational/constraint programming though.
Such a great lecture. Thank you so much.
Graph programming. Why do people dance around it? When C++ added future/promise, I was disappointed to find out it was trigger-once. Just form asynchronous nodes that form graphs. You can have functions at the edges and the nodes. Make a language that natively supports this metaphor. Great for UI, great for distributed computing, great for pretty much everything.
Sounds a lot like Erlang / Elixir to me...
36:00
Distributed systems can be very easy and simple: WhatsApp was completely distributed, and the company was maintained by 35 (!) engineers when they sold it to Facebook for $19 billion.
They had 450 million users at the time, and one year later they doubled that number with 50 people.
Distributed programming is NOT HARD, and Alan Kay himself said that Erlang is the only true object-oriented language outside of Smalltalk.
I guess this is what you mean with the 'simulation' part, while it could have been helpful, to mention that.
WhatsApp is Erlang and Erlang is functional processes (separated, autonomous objects) talking to each other and that makes sense. Versus the Java-style which is overlapping, shared objects all the way down (like down to a Boolean in many cases).
Intriguing. To me, "thousands of computers connected by a fast network" is a _better_ description of functional programming, than it is of most OOP as practiced.
Consider that a message sent over network is immutable. It is passed into the computational unit (the function) and whatever happens to it after that is irrelevant to the sender.
Contrast with the way OOP is often used : Mutable objects passed as messages, which may then be modified by the receiver (which is the equivalent of getting a return message you weren't expecting).
"Contrast with the way OOP is often used : Mutable objects passed as messages,"
The solution: Don't pass mutable objects if you don't want your objects to be mutated, which is more or less the concept present in rust.
@@randomname3566 Agreed.
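The immutable-message point above can be sketched in a few lines. This is a minimal illustration, not anyone's actual code; `Resize` and `handle` are made-up names, and a frozen dataclass stands in for a serialized network message:

```python
from dataclasses import dataclass

# A frozen dataclass behaves like a message sent over a network:
# the receiver gets a value, not a handle into the sender's state.
@dataclass(frozen=True)
class Resize:
    width: int
    height: int

def handle(msg: Resize) -> str:
    # Any attempt to mutate `msg` here raises FrozenInstanceError,
    # so the sender can never see a surprise modification.
    return f"resized to {msg.width}x{msg.height}"

msg = Resize(800, 600)
print(handle(msg))  # resized to 800x600
```

Whatever the receiver does with its copy is irrelevant to the sender, which is exactly the property the comment attributes to messages on the wire.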
That Fortran if statement is hilarious. Makes me understand why people apparently made such a big deal about "structured programming" back in the day.
Great...my entire degree was based around OOP.
Great job Richard. Your best talk yet!
6:20 The code syntax and execution order are two different things!
I really like the implied conditional syntax! The comparisons against zero are implied by position. Sparse! Love it.
But not a fan of code that's not broken into separate functions.
A language could conceivably combine an implied syntax with functions. Hypothetical example:
IF (BOATS - 1)
PROC1, PROC2, PROC3
Or you could use a LISP derivative / pure functional language and write that 'if' as a utility function.
Organising code is very dependent on the end usage. For general-purpose programming, low-level, close-to-the-metal code is not required, because it is more difficult to learn and implement. Programming languages should be as close to natural languages as possible, even at the expense of performance, because nowadays we have more computing power than we need. Of course for special-purpose applications like games or automation and real-time stuff this doesn't hold.
My programs are UI-centric... and I really cannot imagine how to use functional languages to make UI. OOP is perfect for this, data-oriented design is somehow possible but not natural for UI... But functional programming? No way.
See React JS... or anything JS
This is because the OpenGL and all other basic graphic libraries|drivers are written in a procedural/oop way? And when you write a program in FP style, at one moment you want to use those libraries and you are stuck? I recommend to check out the Conal Elliott's work on FRP. Also there some talks why UI today is still wrong.
Even working with the DOM in JS is procedural, and that's not because you cannot do it in FP style, but because it was initially designed that way. Today's libraries like React and Vue embrace a more FP style, and yet they are UI libraries. Check out the Elm language, which specializes in web UI and is a pure functional language. There have been some good attempts to create more basic FP libraries for graphics, but I think the industry doesn't want to invest in this now, since we already have huge, very good UI libraries that a lot has been invested in. The fact that we do not have UI libs in FP style today doesn't mean they cannot be done; they can, but the industry doesn't want to invest in them now.
@@PisarencoGh Nope. It is because UI is all about state. While FP is all about pretending everything is stateless. Therefore it feels unnatural for UI.
@@troymann5115 JS is not a functional language or is it?
@@vladimirkraus1438 do you have any proof of that, or is that just your opinion? Because I can say the same thing about OOP and UI, and am I already right just because I said it?
What do graphics have to do with state? And where can FP not handle that? FP just separates values from transformers and does not hide the state; in FP the state is public. So where does it fall short of UI requirements? What is the difference between object.setWidth(width) and newObject = setWidth(width, object)?
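The two styles in that last question can be put side by side. A minimal sketch in Python; `Box` and `set_width` are invented names for illustration, not any real library's API:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Box:
    width: int
    height: int

# FP style: the "state change" is explicit in the dataflow.
# The old value still exists; the new one is a separate value.
def set_width(width: int, box: Box) -> Box:
    return replace(box, width=width)

old = Box(width=100, height=50)
new = set_width(200, old)
assert old.width == 100   # unchanged: no other holder of `old` is surprised
assert new.width == 200
```

The difference is not what you can express but who else observes the change: with `object.setWidth(width)` every holder of that reference sees the new width; with `new = set_width(width, obj)` only code that is explicitly handed `new` does.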
35:00 microservices are fully encapsulated objects communicating with each other by messages over a very fast network.
Pattern matching and context-sensitive grammars are the next big paradigm.
Classes != Objects. I think Kay would consider modules more OOP than classes
An alternate take: because we have no good mainstream way of describing distributed systems that doesn't blow up in your face, but loads of ways of modeling time-independent math concepts, there is a host of solutions we never get to build, because it's just too hard. It seems to me that we're good at implementing solutions in the mathematical domain, but bad at those in the human domain. The human domain is event-based, and time is a central concept.
Everyone forgets to mention that Simula was a library for ALGOL before being a language.
Feldman: "if you can avoid a distributed system, you should". Microservices architects: "Hmm... are you sure?" In fact multi-core CPUs, Nvidia tensor cores used for machine learning, and cryptomining rigs share many of the same potential concurrency problems as distributed systems, not even speaking of things like Hadoop or Bitcoin. C'mon, the Internet is a highly distributed system! And yes, we need to create boundaries to deal with distributed systems, and that's what OOP is good at: abstracting away all those HTTP requests, external services, database tables accessed by many clients, etc. Any program that has a UI is part of a distributed system consisting of that program and the user.
I think that OOP will stick with us for ever. It still is a concept that makes things easy to understand and maintain (if you don't overuse it). But I wish we could use functional concepts as well.
I am a C++ programmer, and there is nothing more satisfying than implementing and running pure functions instead of calling a handful of non-const methods (member functions that implicitly change the state of the object).
Not to mention that pure functions are easier to test.
Though I don't hype functional programming because I can't wrap my head around how you would implement a game or a complex user interface the functional way.
Does anyone have some good examples?
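The point about pure functions being easier to test can be sketched in a few lines (Python rather than C++ just for brevity; both `Accumulator` and `add` are made-up names for illustration):

```python
# Stateful style: testing `Accumulator.add` means constructing the
# object, calling the mutating method, then asserting on hidden state.
class Accumulator:
    def __init__(self) -> None:
        self.total = 0

    def add(self, x: int) -> None:  # "non-const": mutates self
        self.total += x

# Pure style: the whole behavior is in the arguments and return value.
def add(total: int, x: int) -> int:
    return total + x

# Tests of the pure version are one line per case, need no setup or
# teardown, and can run in any order because nothing is shared.
assert add(0, 5) == 5
assert add(5, -2) == 3
```

This doesn't settle the game/UI question, but it is why pure functions tend to dominate the easily-tested core of a codebase even when the edges stay stateful.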
@S B I disagree with that tbh. In my opinion it is good to have paradigms like a vocabulary. Paradigms are guidelines that can help to organize your own code and as long as you write your own code alone, just do what ever works best for you.
But when I want/have to work with other people on a project you need to agree on certain core principles, code design and, for C++, even naming conventions. But with the use and teaching of paradigms you have a well-known set of "how to write code" for a given language or project.
I am happy being a C++ programmer. It showed me how much a language can change and improve. C++ also allows you to use so many paradigms, or to screw them all and make spaghetti, and it still works. I learned C++ before C++11, and C++11 blew my mind (how they standardized features of other languages or 3rd-party libraries).
There are and probably always will be things to not love about a language. C++ would be far from my first choice of a programming language for beginner. But C++ excels at gluing together various programming languges (and paradigms), with downsides as you mentioned.
E.g. the C ecosystem has their own error handling. As C++ is still not sure what error handling they actually want, it can cause some really nasty code to combine exceptions with C error return values.
I have been looking for something better than the current state of C++, like a pure C++11 (and higher) that iterates into a better language. So far Rust ticks many boxes and Zig looks promising (haven't had time to play with it). But that doesn't mean I would throw all of C++ overboard just because there is a new thing in town.
You might want to look at Elixir's Phoenix framework and/or ClojureScript.
How many programmers are in the “Why not both?” camp? I really prefer languages that support OO and FP style and usage, or I want multi-language environments. I want to decide which to use under different circumstances. I hope there is never a monolithic “shift” to another dominant paradigm. It’s a colossal waste of time to argue for one universal solution to all problems. Instead, let’s use that time to learn more styles and features from all major paradigms, so we can be better general problem solvers.
100% what i thought
Schönfinkel's combinator calculus, from the 1920s, was the first functional programming language, but it didn't have an electronic interpreter.
No matter what the paradigm, greater abstraction creates greater mental strain to understand what is happening and where things are coming from. The skill of the programmer is to organize and name things such that the program is easily understood. OO has great tools for this: it is amazingly easy to model tables and rows of a database as objects, it makes sense conceptually to view them as mutable, things like this. Functional programming also has great tools: it is great to be able to create specific functionality from general functionality using currying. At the same time, it is easy for things to get out of hand and hard to read. Haskell, for all its clever avoidance of brackets and its complex operators with right or left associativity and many different priorities... well, if you're creating something abstract like readFromProcess, that code gets very hard to read and understand; it gets very deeply nested and there aren't a lot of great names for the things you're doing. do notation is itself a concession that it's easier to read some things if they look imperative, if you pretend that what you are doing is not so deeply nested. In the other direction, our Lisp languages are notoriously nested.
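The currying point above, sketched with Python's `functools.partial` (Python has no auto-currying, so `partial` stands in for it; `scale`, `double`, and `halve` are made-up names):

```python
from functools import partial

# General functionality: multiply by an arbitrary factor.
def scale(factor: float, x: float) -> float:
    return factor * x

# Specific functionality, created by fixing the first argument.
double = partial(scale, 2.0)
halve = partial(scale, 0.5)

print(double(21.0))  # 42.0
print(halve(10.0))   # 5.0
```

The same move is what Haskell gives you for free with partial application; whether `double` is a helpful name or one more layer of indirection is exactly the readability trade-off the comment describes.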
FP is really interesting, but downplaying the ease of other paradigms and ignoring the frequently taxing readability of FP is silly. FP is not popular because the world needs a lot of programmers, and with FP you have to be a bit adventurous, a bit out to prove your intelligence... whereas writing code as a straightforward sequence of instructions using words is pretty easy for most people to understand. Half the time in an FP language you're reading things backwards, then sometimes forwards, and then sometimes switching back and forth many times in one line. I know you can get used to it, but you have to onboard people and it's expensive. Lots of people in FP are gonna be people who started it as a hobby because they love computers, but that's not all programmers. Some just wanna clock in, get some stuff done, and clock out; they have no academic interest and they're not embarrassed that they don't want software to look like math.
All that said, it's a neat history but the conclusions drawn are poorly supported and good fp programmers admit it's not a silver bullet, fp has places where it has drawbacks that are better addressed by different languages.
I don't need OO to do tables. Python does arrays and dictionaries right and with that I have better functionality than any database will ever offer. The entire idea of functional programming is flawed. A function doesn't produce side effects while every useful program is nothing but side effects. That's a 100% disconnect between the way academics think about programming and the way software engineers need to think about it.
So if there's enough abstractions, we'll be able to say "get me a database thing that's secure or whatever", right?
Abstractions are like asking a genie for some kind of wish: you BETTER be careful and particular about what you wish for or else the wish will not be what you expected...
Haskell is a pure functional language, but its designers soon found out that pure functions couldn't communicate with the real world, so they introduced monads. Trying to understand exactly what monads are is a big problem in learning Haskell. We are told that they come from a branch of mathematics called Category Theory. I don't have the time or inclination to study another area of mathematics, and I'm not sure it would really help in FP. From what I have heard, I am not alone in this. Monads let Haskell act like a procedural language, but with very complicated syntax.
To do side effects in a pure function you have to explicitly return a new version of the world with your side effect added on top, and this is too explicit, so they used monads as "first-class semicolons/side-effects". But monads can do more than just add state: they can be used to create eDSLs with different semantics, and all statements are first-class, so you can create your own for loop, your own assignment statement, your own exception system, etc. However, combining side effects was pretty hard. Newer languages like Koka let you combine side effects automatically and are much, much easier to use than monads. koka-lang.github.io/koka/doc/book.html#why
In 2011, I took over an R&D project that was based on a 2500 LOC Haskell application prototype, developed at AT&T Labs. I did not know Haskell at the time, so my intro was really an immersion. It took about 6 months to figure out the existing code, then start improving it. That project lasted nearly four years, and at the end I had about 13K LOC. At the end I still could not really tell you what a Monad was, but that was okay.
Ignore Category Theory, because a Monad is really a simple interface that promotes an FP design pattern. Monads were not introduced to "allow Haskell to become a procedural language". What Haskell did was implement the IO Monad (input/output), and any state change that relied on the outside world had to be performed in the IO Monad. The result is that you can isolate pure functional code from code that has side effects. From a programmer's perspective, this is great. You know when you are working on pure functional code and when you aren't. Unlike Java, etc., you never know if you are subject to side effects.
The benefit is that you can reason about, modify, etc. the pure functional code to your heart's desire, then let the compiler sort it out for you.
True story. My application was a high-performance scheduling algorithm, basically a huge tree search. One Friday morning, I realized that my algorithm had a flaw that allowed identical branches to be searched multiple times, which was a huge waste of time. I went to lunch and thought about it. I had two options: give up, or basically rewrite about 1000 LOC to eliminate the problem. Friday afternoon, I started a 30-hour coding session that ended Monday morning. It took Monday morning to eliminate the compile-time errors from all the code mods. Once it compiled, I went to lunch. Came back, and I started testing. The problem was solved, with no crashes or runtime errors during the tests.
Great talk! I am so lucky to have this as recommended
C has both in one structure: switch... That's one step up, goto for fall through patterns only
By the same logic that C with classes has failed because clearly it doesn't bring enough to the table itself.. hasn't functional programming already clearly failed far more so?
Good point, but did garbage collection fail? It wasn't popular until decades later.
@@Zero_Contradictions Maybe that's why classes eventually succeeded
Great talk, thanks for sharing.
Loved the talk... though it stayed mainly focused on history (including validating the idea that paradigms do change) rather than on why functional programming (or anything else) is the NEXT paradigm. One thing I was hoping to hear about was how the trends in storage costs, memory costs, processor efficiency, network ubiquity, and possibly even blockchain may shape the world of constraints that programming languages will have to operate in. I suspect blockchain will not really show its full impact on language design for another 10 years or more, but some of the other trends may have an impact sooner.
Saw it twice. So much to learn in this.
Assembly had procedures. Period.
Push State &Address, Jmp elswhyr (do stuff), Pull Address, State, Jmp Address unless you were a bad programmer.
(or JSR in more "modern" assembly)
@Dirk Knight I was merely responding to the assertion at around 2:40 that assembly does not have procedures.
That assertion is only true if you are either a very bad or ignorant programmer.
@Dirk Knight ISTM that we are largely agreeing, just coming at it from distinct perspectives.
My contention would be only that higher levels of abstraction are irrelevant if you don't have a decent grasp of what that abstraction is doing for you as a result of a decent grasp of the base layer.
i.e. You (the video author) have no kudos to offer me a paradigm shift in programming, if you do not understand basic programming paradigms.
any solution to complexity will simplify things thus motivate people to make even more complex things
30:36 Bjarne Stroustrup's C with classes was not completely what we consider OO; for instance, it didn't have virtual functions or the run-time support for them. So saying that OO features alone didn't make it successful is an incorrect conclusion. Also, 25:24 the part of the C syntax that he didn't like was the prefix/postfix operators (++, --) and the omission of the 'int' type for functions, so nothing about the brackets and the parameters. I encourage anyone to read "A History of C++" by Bjarne Stroustrup for more about this. BTW, the end of the link to Ingalls' interview, hidden by the speaker, is "Ao9W93OxQ7U".
31:51 I'm curious, why do you claim that composition is not related to objects? To me it's one of the earliest features. What about polymorphism? By disregarding those OO features, you're saying that it's the same to have them within objects or only outside objects; this simply isn't true. I'm not even mentioning inheritance vs composition, which is largely controversial (and misinterpreted in the talk, as it often is). Later you talk about Kay's "message passing" interpretation - it's the public methods of OOP (encapsulation + classes, or as you put it, methods) and polymorphism (messaging different objects the same way). Both are very much what OOP is about.
39:25 Mike Acton works with Unity, an engine with a ton of performance problems, partly because it's maladapted to current hardware and because the user-facing part is C#, which is GCed (and generally bytecode). So I don't think what he says about data-oriented design has a chance of being relevant or accurate. 🤣 Today the problem is still mostly about complexity, so I don't see that coming as a paradigm shift; today's compilers (to native code) do an awesome job.
Basic was like Fortran: unstructured. But for simple programs it was beautiful as a learning language for beginners.
36:29 Not really sure about the claims about Distributed Systems, because they're hard, it doesn't mean they're not desirable.
@Dirk Knight the problem does not lie in the distributed system per se, but in the tools you use to debug them. The Internet is a good counterexample, I think. Centralization has far worse curses; BitTorrent is pure magic.
@Dirk Knight a human brain is an amazingly distributed system and quite compelling. We're currently in precarious times of understanding. My point is that centralization system are easy but not better. Even centralized systems rely on some sort of CDN in order to perform better. More work is needed. As W. Gibson said: "The future is already here - it's just not evenly distributed."
@Dirk Knight a lot of people can read a German sentence and not understand a word either, because they are not fluent in it. I think humor is more a language or skill than an emergent property of the brain. What I really wanted to emphasize was that the body, ecosystems, and biological systems in general work in an amazingly distributed way and leave us in awe in comparison with our current tech. I see the future being more like P2P rather than one giant AWS (even though AWS is quite distributed nowadays). I would say the right view to me is sharing and collaboration rather than divide and conquer.
9:45 "C does have both goto and program blocks" -- see here for an interesting discussion from Linus Torvalds and others on the Linux kernel mailing list back in 2003 on the subject of gotos in the kernel: koblents.com/Ches/Links/Month-Mar-2013/20-Using-Goto-in-Linux-Kernel-Code/
Not the first or last time that Linus had "differences of opinion" with academics about what quality looks like. Having said that, if your context isn't developing a kernel on an 80-character terminal, then I think Dijkstra still wins. Also, I bet someone committing a random jump into a different procedural context in the kernel is going to get bollocked old-style by a certain benevolent dictator, even in this newer, gentler era of his reign, so I see this as more of an objection to dogmatic adherence than to the core principle.
Pattern matched functions and local named closures sound much more reasonable than gotos, but if you're forced to use gotos, make sure they're only limited to the function's scope and (most importantly) no backward time travel
Obviously a lot of thought and effort into this presentation. Thank you very much
A first from this talk (at least to me): even while mentioning E. W. Dijkstra's classic article "Go To Statement Considered Harmful", and even while discussing the beginning of structured programming, the author does not mention Pascal, the language created by Prof. Wirth, built upon ALGOL, as the answer to FORTRAN and GOTOs.
Pascal was created by Wirth not Dijkstra
@@thaddeusolczyk5909 I always mix the two up. My bad :( Thanks for pointing it out
So -- we developers have 2 choices: Go nuts or go crazy
One minute into the video: "Functional, Lisp".
Nailed it.
"But I don't wanna do Lisp" - some chicken making a Racket ;-)
@@recklessroges I find that Elixir gives me most of what I want from Lisp without making my eyes cross.
This talk has so many mis-statements.... It was really hard to watch.
There were parts I disagreed with, but Faldegast is right.