This is the best explanation of functional programming and monads.
I feel like this talk plays a serious game of "no true Scotsman" with functional programming. Obviously Haskell sits at the maximally pure end. But Erlang/Elixir are also functional programming in the sense that they have no classes and use immutable data structures; while they allow IO within the context of an otherwise pure function, they do not allow mutation of any shared state, which enables massive concurrency on the BEAM VM. Further along the spectrum you have Rust, which is very much an imperative language and yet clearly borrows a lot of functional ideas: immutability by default, an algebraic type system, and base-language support for the Option and Result monads. Drawing a line between "pure functional" and "imperative" languages is a false dichotomy; the functional and imperative paradigms are just two different ways of framing the same logical behavior, and often the right answer is somewhere between the two.
I think I finally started to understand the term monad when you described one as a function with a context. This is the first time anyone has really been able to explain it to me.
Your point about monadic libraries not supporting exceptions is not correct. Monadic libraries for concurrency can and do capture and propagate exceptions. It's true that some control structures (while and for) don't work well with monads, and that monadic error traces (whether in response to exceptions or otherwise) are harder to handle. That said, this strikes me more as an implementation issue, and one on which, in OCaml at least, good progress is being made.
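To see the exception point concretely, here is a minimal Java sketch using CompletableFuture as a stand-in for the monadic concurrency libraries under discussion (the failing middle stage is contrived for illustration). The exception thrown mid-chain is captured by the future and propagated to the recovery stage rather than lost:

```java
import java.util.concurrent.CompletableFuture;

public class MonadicExceptions {
    static String explode(String s) {
        throw new IllegalStateException("boom");
    }

    public static void main(String[] args) {
        // The exception thrown in the middle stage is captured by the
        // future and propagated down the chain (wrapped in a
        // CompletionException) until exceptionally() recovers from it.
        String result = CompletableFuture
                .supplyAsync(() -> "42")
                .thenApply(MonadicExceptions::explode)
                .exceptionally(t -> "recovered: "
                        + (t.getCause() != null ? t.getCause() : t).getMessage())
                .join();
        System.out.println(result); // prints "recovered: boom"
    }
}
```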
The idea that monadic libraries can only compute pure functions is also not correct, in my experience. The primary strength of OCaml's Async library, in my view, is that it makes it easier to write concurrent imperative code by giving you clear guarantees as to when interleavings may occur. It gives you something that is easier to reason about (much like an old-style select-based C program), and performs similarly to one as well, while still giving you the thread-like ability to do one damn thing after another.
The virality of monadic code is real, but it's a double-edged sword. The big benefit is that you're explicit about where the monad is and where it isn't. There's lots of imperative code that doesn't block and doesn't use IO, and that code is easier to reason about and work with. Making that separation explicit is one of the big wins of this approach.
+Yaron Minsky Continuations can give you the same explicitness (if you choose to have it, or do without it if you'd rather not) with none of the disadvantages of monads.
+Ron Pressler +Yaron Minsky An important point in this discussion is that the ergonomic issues of monadic-style effects are greatly relieved in OCaml and Haskell by special syntactic support in the language; in Haskell's case, 'do notation' was introduced at the same time as monadic IO, and in OCaml there are preprocessor tools that provide similar notational convenience.
When you are stuck manually composing higher-order functions to build monadic computations, especially when there's a lot of syntactic ceremony around anonymous functions, the result looks a lot less like a clean extension of the language and a lot more like a clumsy *model* of some desirable effects.
Because imperative languages already have a notion of sequential composition of effects baked into their default control stack and syntax, first-class delimited composable continuations provide a *much nicer* method of adding new effects to a language (including cooperative coroutine scheduling!) than a monadic structure without language support. You can essentially *extend* the language's built-in monad with additional effects and keep the nice properties of the native syntax.
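For a concrete Java illustration of the contrast (fetchUser and fetchScore below are hypothetical stand-ins for real IO): Java has no do-notation for CompletableFuture, so "do this and then do that" has to be assembled by hand out of higher-order functions fed into thenCompose, the future's bind:

```java
import java.util.concurrent.CompletableFuture;

public class ManualComposition {
    // Hypothetical async steps, standing in for real IO.
    static CompletableFuture<String> fetchUser(int id) {
        return CompletableFuture.completedFuture("user-" + id);
    }
    static CompletableFuture<Integer> fetchScore(String user) {
        return CompletableFuture.completedFuture(user.length());
    }

    public static void main(String[] args) {
        // Sequential composition spelled as nested higher-order plumbing
        // instead of the semicolons an imperative language already has.
        CompletableFuture<Integer> doubled =
                fetchUser(42)
                        .thenCompose(user -> fetchScore(user))
                        .thenApply(score -> score * 2);
        System.out.println(doubled.join()); // prints 14 ("user-42".length() * 2)
    }
}
```

With do-notation or first-class continuations, the same sequence would read as three ordinary statements.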
Great talk. The idea that the runtime scheduler handles IO scheduling for you is absolutely fantastic. I've always wanted to keep the imperative, blocking programming style and get the same performance as non-blocking code.
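As it happens, this is roughly what later shipped in Java 21 as virtual threads; a minimal sketch of the style, with Thread.sleep standing in for blocking IO:

```java
public class BlockingStyle {
    public static void main(String[] args) throws InterruptedException {
        // One virtual thread per task: the code blocks in plain imperative
        // style, while the runtime scheduler parks the continuation and
        // frees the carrier thread for other work while it waits.
        Thread t = Thread.ofVirtual().start(() -> {
            try {
                Thread.sleep(100); // stand-in for blocking IO
                System.out.println("done, without a callback in sight");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.join();
    }
}
```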
But my primary disagreement with this argument is the suggestion that threads are a good abstraction. Threads in the presence of shared memory are incredibly hard to reason about, and indeed, I think you basically have to give up one or the other. Monads go one way (give up threads) and Erlang goes the other (give up sharing). One of the big wins of the monadic approach is performance: sharing is in many cases simply necessary for decent performance.
+Yaron Minsky Threads and non-atomic/transactional shared-memory concurrency are two completely orthogonal issues. I say that specifically in the talk. Another approach to data sharing, other than Erlang's message passing, is Clojure's (or Haskell's) atomic/transactional memory (BTW, Erlang makes heavy use of atomic shared state, too, with ETS and, of course, the actor registry itself). Sharing, again, has nothing to do with threads. Threads can share or not.

In imperative languages, monads are just threads without a stack context (for debugging/profiling), without native control flow/exceptions, and without compatibility with non-monadic libraries (which, I believe, are the majority, even in OCaml). Monads with shared memory are at least as hard to reason about as threads with shared memory, and are just as orthogonal to shared memory as threads are. They have no advantage whatsoever.
I'd say this: as a general rule, if each stage in the monadic sequence can only be run with the same parallelism as the previous and following stages, then continuations/fibers are always superior. If, OTOH, each stage in the computation can be performed with different parallelism (like with Java streams), then the monadic approach is useful.
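One way to read that rule in Java terms (interpreting "different parallelism" loosely, as stages the runtime may parallelize across the data):

```java
import java.util.List;
import java.util.stream.IntStream;

public class StageParallelism {
    public static void main(String[] args) {
        // A monadic-style pipeline whose stages the runtime applies with
        // data parallelism across elements; a single fiber walking the
        // same steps sequentially would get none of this for free.
        List<Integer> bigSquares = IntStream.rangeClosed(1, 1_000)
                .parallel()
                .map(n -> n * n)           // stage 1
                .filter(n -> n > 500_000)  // stage 2
                .boxed()
                .toList();
        System.out.println(bigSquares.size()); // 293 squares exceed 500,000
    }
}
```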
At 33:33 your filter has a bug: it should produce x, not m(x).
This blog post complements the talk and describes scoped continuations in greater detail: blog.paralleluniverse.co/2015/08/07/scoped-continuations/
It's pretty interesting that you did not talk about async/await, which gives us back the imperative coding style with futures, effectively solving the problem. It has been adopted by many languages, like Python, C#, and JavaScript, and can be implemented with generators. Generators are the generalization that gives you the expressiveness you need here (not fibers). Where are they ineffective here?
+Benjamin Gruenbaum I do mention async/await briefly. They are simply delimited continuations that can only suspend at the topmost stack level (and are therefore called stackless coroutines/continuations). Clojure's core.async's go blocks are the same in that regard.
+Ron Pressler Right, but then what problem do scoped continuations solve for us that async/await does not? They give us our imperative synchronous flows back. Why would I need the extra power of scoped continuations here?
+Benjamin Gruenbaum It's not that they don't solve the problem, but they don't solve it as well as they could. There are two problems (which are really one) with async/await. One, they still pollute the call stack: async and non-async methods look different. Two, they introduce yet another syntax that basically says: do this and then do that. The language already has such a syntax: the simple semicolon. You basically end up with two different continuation syntaxes: one if the continuation is a thread, and one if it's a sort-of-thread.
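Java has no async/await, but its futures show the same split; a small sketch of the "two syntaxes" point (nextAsync and next are hypothetical):

```java
import java.util.concurrent.CompletableFuture;

public class TwoSyntaxes {
    // Monadic version: the future type is viral in the signature, and
    // "and then" is spelled thenApply instead of a semicolon.
    static CompletableFuture<Integer> nextAsync(CompletableFuture<Integer> n) {
        return n.thenApply(x -> x + 1);
    }

    // Thread/fiber version: "and then" is just the next statement, and
    // the signature says nothing about how the caller schedules it.
    static int next(int x) {
        return x + 1;
    }

    public static void main(String[] args) {
        System.out.println(nextAsync(CompletableFuture.completedFuture(41)).join());
        System.out.println(next(41)); // both print 42
    }
}
```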
The fundamental confusion at the base of the functional programming paradigm is that its adherents don't understand what a function is mathematically. A function is NOT a computation. A function is a SET of ordered pairs, a type of relation among other types of relations. A set doesn't have a runtime, so saying that real functions have no side effects is meaningless nonsense. A program is not a function. We can model a function with a program, and we can model certain programs with functions, but the two are not the same. This confusion, I think, stems from how functions are introduced to students: as some kind of abstract machine that takes an input x and spits out an output y. A function is not a machine. Nor does it help that program procedures are commonly referred to as "functions". A program may model a function if all we look at is the input and the output. What files it logs to or what emails it sends during execution is irrelevant; it still models the function.
interesting
Java 9 didn't get continuations :(
Who's laughing now, ten major versions later? ;)