I've actually found a use for the backwards state monad once. And I didn't even know about it at the time.
For an information theory class we were tasked with implementing some compression algorithms. Since we were free to choose the language - and I was very much into Haskell at the time - I picked that.
I wanted the implementation to be single-pass. I don't remember the full details, but there were cases where the algorithm needed to look up things in its future output, and since that sort of thing is possible with lazy data structures, I started looking into how to do it with the State type. Ultimately I arrived at Control.Monad.Tardis, which provides "the time-traveling monad". It carries two states: one wired up normally and one flowing backwards.
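For anyone curious what "a state flowing backwards" can look like, here is a minimal sketch of just the backwards half (the real Control.Monad.Tardis combines it with a normal forward state). The names `RState`, `step`, and `suffixSums` are made up for illustration. The trick is that `<*>` feeds the incoming state to the *later* action first, and the earlier action sees whatever the later one sends back; lazy `let` bindings make this legal.

```haskell
-- Sketch of a backwards state monad: state flows from the END of the
-- computation toward the beginning. All names here are illustrative,
-- not the actual Tardis API.
newtype RState s a = RState { runRState :: s -> (a, s) }

instance Functor (RState s) where
  fmap f (RState m) = RState $ \s -> let (a, s') = m s in (f a, s')

instance Applicative (RState s) where
  pure a = RState $ \s -> (a, s)
  RState mf <*> RState mx = RState $ \s ->
    -- the incoming state s goes to the LATER action mx;
    -- mf sees the state that mx sends "back in time"
    let (f, s'') = mf s'
        (x, s')  = mx s
    in (f x, s'')

-- Pair each element with the sum of the elements AFTER it, in a single
-- left-to-right pass: each step reads the "future" state and passes an
-- updated running sum to the past.
suffixSums :: [Int] -> [(Int, Int)]
suffixSums xs = fst (runRState (traverse step xs) 0)
  where
    step x = RState $ \future -> ((x, future), x + future)

main :: IO ()
main = print (suffixSums [1, 2, 3])  -- [(1,5),(2,3),(3,0)]
```

Here no element's annotation depends on its own output, so plain laziness suffices; for genuinely circular dependencies you would reach for MonadFix or the full Tardis.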
Terrific presentation.
It took me a week to watch the whole video 🤣, thanks for all this and excellent talk.
This excellent history omits Eugenio Moggi's "Abstract View of Programming Languages" notes, which he wrote in Spring 1989 while visiting John Mitchell at Stanford. These notes include a complete and detailed discussion of monad transformers, which were not understood by the rest of the community until January 1995, even though Phil Wadler cited Moggi's notes in his "Comprehending Monads" paper (1990) and "Essence of FP" paper (1992). Another important reference is Jon Beck's paper on distributive laws from the 1966/67 Seminar on Triples [triples are monads!] and Categorical Homology Theory, which is available as a TAC reprint. Monads were the new hotness in 1967. 🙂
I loved the "row type" comment. Many years ago I tried to write my own effect-aware language compiler with row types, but got blocked by complexity. Still, this did teach me that formulating coeffects was "easier" than working with effects — something I still believe.
thank you! very helpful history lesson!
Excellent talk
Imagine giving a presentation about Haskell and you get a question from Simon 🤣
Seems like you've mixed up the applicative functor (aka the closed functor) and the state monad.
summary: monads were too much, maybe Lisp was onto something.
Lispers: the right answer has always been Lisp
I didn’t understand everything, but what I did understand does not agree with your summary.
We start in Church
You getting married?