24:42 How exactly is this atrocious nested if-statement chain calling "isPresent()" superior to comparisons to null? No, I do not think this is beautiful. Nullable types ARE your "Option".
@@jmfcad Nah he said it's perfect right there. Just because he then goes into promise-chaining hell at the end and calls that more perfect has nothing to do with this.
Yeah, so this is called sarcasm 😅 I have seen this sort of code in a lot of codebases, many years after Optional was released. Which is why I'm making fun of it.
24:41 Ew, that's not beautiful at all! That's 10 function calls for a simple algorithm that could've been written in a single function! How is that more readable and easier to follow? The boxing of values is wasteful and unnecessary. Debugging stuff like this would be a nightmare, as the call stack would be massive. If you're talking performance, you should pray that the compiler is good enough to optimize that trash away because there's so much overhead there. Yikes, man.
Yes, this is a toy example used for educational purposes. Of course it is over-engineered, but the goal is to explain a concept, not to write performant code.
The presenter of course was joking here. Of course these nested if-statements aren't beautiful, they are ugly, that's why he simplifies it even further using monads.
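To make the contrast concrete, here is a hedged Java sketch of the two styles being debated; `parse` and the divide-by-n logic are made-up stand-ins, not the talk's actual code:

```java
import java.util.Optional;

public class NestedVsChained {
    // Hypothetical partial function: parsing can fail.
    static Optional<Integer> parse(String s) {
        try { return Optional.of(Integer.parseInt(s)); }
        catch (NumberFormatException e) { return Optional.empty(); }
    }

    // The "ugly" version: manually unwrapping with isPresent()/get().
    static Optional<Integer> nested(String s) {
        Optional<Integer> n = parse(s);
        if (n.isPresent()) {
            if (n.get() != 0) {
                return Optional.of(100 / n.get());
            }
        }
        return Optional.empty();
    }

    // The monadic version: the same logic as one flatMap chain.
    static Optional<Integer> chained(String s) {
        return parse(s).flatMap(n ->
            n == 0 ? Optional.<Integer>empty() : Optional.of(100 / n));
    }

    public static void main(String[] args) {
        System.out.println(nested("4"));   // Optional[25]
        System.out.println(chained("4"));  // Optional[25]
        System.out.println(nested("0"));   // Optional.empty
        System.out.println(chained("0"));  // Optional.empty
    }
}
```

Both versions compute the same thing; the chained one just moves the emptiness-checking into flatMap.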
It is the year 2022, and functional programming adepts still can't figure out what the domain of a function is. NO, a function should NOT accept every possible value. Btw, in Erlang the problem is solved by the 'when' guard.
What I find laughable about these presentations is that all of the examples of why you should use something are generally design principles. All of the things mentioned at the beginning can be used in pretty much any language you can think of. Bad coders write bad code, good coders generally do not.
Finally a guy who really knows his shit when getting into Monads.
And also brings it across nicely.
The best explanation for Monads out there...
To me, it's the same meaningless stuff I've heard many times. I still don't get the point of using mappings (pure functions) everywhere. *Why is state Satan?* I mean, to the degree that far-fetched acrobatics like this are worth it...
And why use references from the most abstract area of mathematics possible, i.e. from "category theory" hovering up there in the blue sky, when they say the goal is to make things simpler for the programmer...
I don't even agree with his definitions; the "functional" sect seems to try to take credit for everything high-level these days. Take "for each" and similar constructs that existed long before Haskell, or F#, or whatever (but didn't exist in Lisp).
@@herrbonk3635 No one is claiming that foreach is a functional concept, but the whole idea of using functions as combinators (as in Rx, LINQ, streaming APIs) is solely an FP concept. It doesn't fit the OO model at all, yet it is emerging in such languages anyway. State isn't considered bad in pure FP; it is just isolated from stateless code, like any other side effect. FP doesn't simply borrow category theory concepts for lulz; we do it for the same reason physicists, engineers etc. rely on their corresponding branches of maths. Category theory is a framework for reasoning, like arithmetic for counting, or calculus and linear algebra for AI. And it's a fact that many high-level features are first implemented in FP languages, since you can reason about and prove certain things about them, which is extremely difficult in other languages.
@@ŊŊŊ-d1q No idea what "Rx, LINQ, streaming APIs" are. As an engineer in technical physics, I haven't really been in the programming industry since the late 80s. But functional is *much* older than OO, both in terms of languages *called* functional (i.e. LISP et al.) and Algol-family languages, which often make use of pure functions as well as more general procedures or subroutines.
OO was developed for simulation in the 1960s (Simula), but didn't see widespread usage until 30 years later (C++, Java, et al.).
And you didn't really answer my actual questions. HOW is all this insane complexity, and wild abstractions over the heads of 99.99% of all programmers, "helping" people write good programs...
And *why* are there never any concrete practical examples of working programs... just these ultra-tiny snippets of hyper-abstract examples... Programming should be practical, not trying (very hard...) to be a branch of esoteric mathematics...
Takeaway:
1. A function that throws exceptions is partial, not total.
2. A functor is a container (box) that supports map (which won't change the structure).
3. Monad = functor + flatten = flatMap; the box type needs to stay the same across the chain.
flatmap
@@joachimdietl6737 thanks
And, a type is a set of values.
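The functor/monad takeaway above can be sketched with Java's Optional; this is a minimal illustration, not code from the talk:

```java
import java.util.Optional;

public class MapVsFlatMap {
    public static void main(String[] args) {
        Optional<String> box = Optional.of("42");

        // Functor: map transforms the contents but keeps the box structure.
        Optional<Integer> mapped = box.map(Integer::parseInt);
        System.out.println(mapped); // Optional[42]

        // Mapping with a function that itself returns a box nests the boxes...
        Optional<Optional<Integer>> nested = box.map(s -> Optional.of(s.length()));
        System.out.println(nested); // Optional[Optional[2]]

        // ...while flatMap = map + flatten keeps a single layer of the same box type.
        Optional<Integer> flat = box.flatMap(s -> Optional.of(s.length()));
        System.out.println(flat); // Optional[2]
    }
}
```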
I have seen people talk about monads and functors as if they were rocket science! If they cannot explain something, I would say they themselves are not clear on it.
Finally this guy... he knows how to explain in a way people can understand.. no words.. awesome talk👍
Thanks! That is probably the most useful explanation of Monads & Functors I have seen yet.
I'm so mad! How could one invent an operator that conveys a clear direction of operation (>=>) and make it run its second argument before the first? It's the opposite of beautiful that the code "f >=> g >=> h" first applies h, then g, followed by f.
That's the natural direction functions are written in when they're nested.
If you saw "f(g(h(x)))", you would immediately know that it means to first apply h, then g, then f.
So, "f >=> g >=> h" preserves that structure.
@@calunsagrenejr I know where it comes from, but I claim that prefix notation for functions works well only when not nesting functions, so making a new notation based on the most unintuitive use of math notation is not going to be intuitive. Postfix notation is clear and intuitive though: you have some things, and then you change them in some way.
f(g(h(x))) vs (((x)h)g)f
In programming the first gives birth to functional notation: f (g (h x)). The second results in a much nicer concatenative notation: x h g f
@@aleksandersabak In response to you saying "making a new notation...", it isn't new. What is described in the video is called a "Kleisli category", which is decades old - around the 1960s. The fish notation itself seems to be younger though - at most the 2010s, but I don't know the paper/document it was first introduced in.
As for programming, I think you can't say in absolute terms that postfix is better than prefix, or vice versa. That's like saying decimal is better than binary - they're equivalent systems that can achieve the same things, and switching from one to the other won't break any mathematical laws (though there would be quite an adjustment period for humans).
To me at least, calling a function with the argument first is highly jarring. I'd rather see "f(x)" than "x f". The popular programming languages of today favor the former.
Also, this may be inconsequential, but in your example, if you wanted to compose "h g f" after "g", you would need some extra notation to check that "g h g f" is a function definition, while "x h g f" is a function application. Function composition is the point of all of this, so I think that should be a key consideration to the notation too.
Finally, there is a "left fish" operator in Haskell (<=<), so it sounds like you might favor that one because of its directionality.
@@calunsagrenejr what you're saying is absolutely right and I'm just heavily biased towards postfix. I didn't make it very clear which parts of my comments are opinions.
Also, it's not inconsequential that if I wanted to compose "h g f" after "g" in languages that make heavy use of postfix notation (by which I mean concatenative languages) I would just write "g h g f" and it would make as much sense as "x h g f", because in those languages values are functions that take state and produce state with a value appended to it, so they can be composed as normal. Every function is defined as a function composition that is then applied to the initial state, producing an output state. State is usually represented as a stack, so for example "1" in Forth or Factor is a function that puts an integer 1 on the stack.
@@aleksandersabak I see, thanks for clarifying.
For myself, to clarify what I meant by "inconsequential", it was just that it may be trivial to this conversation, rather than trivial in general. I think we would very quickly agree that differentiating between "g h g f" and "x h g f" would be easy on a technical level - I would just prefer to have explicit syntax that differentiates function definition from function application, but this is just an opinion of mine. I raised it just because I felt postfix would be inconvenient for me to read if that differentiation wasn't explicit in the syntax.
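For readers following the composition debate above, Kleisli composition can be sketched in Java via flatMap. Note that with the definition below (which matches Haskell's >=>), the *left* argument runs first; how the talk's fish is oriented may differ. All names here are illustrative:

```java
import java.util.Optional;
import java.util.function.Function;

public class Kleisli {
    // Kleisli composition: feed x to f, then flatMap g over the result.
    // With this definition, f runs before g (as with Haskell's f >=> g).
    static <A, B, C> Function<A, Optional<C>> compose(
            Function<A, Optional<B>> f, Function<B, Optional<C>> g) {
        return x -> f.apply(x).flatMap(g);
    }

    public static void main(String[] args) {
        Function<String, Optional<Integer>> parse = s -> {
            try { return Optional.of(Integer.parseInt(s)); }
            catch (NumberFormatException e) { return Optional.empty(); }
        };
        Function<Integer, Optional<Integer>> reciprocal =
            n -> n == 0 ? Optional.empty() : Optional.of(100 / n);

        Function<String, Optional<Integer>> pipeline = compose(parse, reciprocal);
        System.out.println(pipeline.apply("4"));    // Optional[25]
        System.out.println(pipeline.apply("0"));    // Optional.empty
        System.out.println(pipeline.apply("oops")); // Optional.empty
    }
}
```

The point of the operator is that failure (the empty Optional) short-circuits the rest of the pipeline, whichever direction the notation reads.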
Fantastic presentation! I finally got an excellent understanding of the concepts which I can apply, and later easily dive into math details.
It took me about a year and still do not understand the concept of functor and monad. And I realized all I need is 43 mins and some time to reach here...
alright, I already know this talk isn't in Haskell, but here's what I'm hoping his explanation boils down to:
($) :: (a -> b) -> a -> b
(<$>) :: Functor f => (a -> b) -> f a -> f b
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
(=<<) :: Monad m => (a -> m b) -> m a -> m b
EDIT
okay, after watching, that was actually pretty good.
he didn't cover anything like ($), which is okay. he also didn't cover (<*>), which is okay, too.
Functor ≈ map captures the essence of it; (<$>) is map
Monad ≈ flatmap also captures the essence of it in a way, but kind of only for lists? I think this is actually harder to understand than thinking of it as *bind* unwrapping the value, and passing *that* to the next function. how is a function "b -> Option c" supposed to take an "Option b" as input?? you *have* to unwrap it *first.* does Java *really* use the name "flatmap" for bind??
also, (>=>) isn't even bind. if flatmap ≈ monadic map, then:
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
(.) :: (b -> c) -> (a -> b) -> a -> c
(=<<) :: Monad m => (a -> m b) -> m a -> m b
Yes it's definitely not meant to be a haskell talk nor be %100 correct. I think what you describe is a great follow up to this talk. My observation is that showing this kind of detail too early scare people off learning about functional programming. So I tried to remove as much technical/mathematical detail from the talk.
@@Phoenix736
wow! hi!
great talk, btw!
that makes sense about too much detail; it's a big problem in any kind of learning process.
how do you feel about fronting function _application_ (i.e., ($) ≈ (<$>) ≈ (=<<))?
in other words, I was down in the business logic of my program trying to do a series of transformations on some data that would have already been produced by that point in the program.
this *seems* fundamentally similar to the example you gave with ... .someMethod(...).someOtherMethod(...)
I have no idea how an audience (esp. the kind of audience you're giving this talk to) would get that, though.
That was a great explanation. Link to slides?
slideshare
At minute 34 you could have rewritten the flatMap without the lambda; just use a method reference, since it does the same as the lambda, and it was more beautiful like that:
flatMap(parse)
.flatMap(divide)
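For what it's worth, here is a self-contained sketch of the lambda vs. method-reference equivalence; `parse`/`divide` here are assumed helpers with plausible shapes, not the talk's exact ones:

```java
import java.util.Optional;

public class MethodRefs {
    // Hypothetical helpers with the shapes the talk's example suggests.
    static Optional<Integer> parse(String s) {
        try { return Optional.of(Integer.parseInt(s)); }
        catch (NumberFormatException e) { return Optional.empty(); }
    }
    static Optional<Integer> divide(int n) {
        return n == 0 ? Optional.empty() : Optional.of(100 / n);
    }

    public static void main(String[] args) {
        // Lambda form and method-reference form are interchangeable here.
        Optional<Integer> withLambda = Optional.of("5")
                .flatMap(s -> parse(s))
                .flatMap(n -> divide(n));
        Optional<Integer> withRefs = Optional.of("5")
                .flatMap(MethodRefs::parse)
                .flatMap(MethodRefs::divide);
        System.out.println(withLambda); // Optional[20]
        System.out.println(withRefs);   // Optional[20]
    }
}
```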
Very nice talk. Similar in contents to other monad talks but much clearer to me.
best explanation for monad
What happens if you do:
• Optional.of(Optional.of("OK"))
• Optional.of(Optional.empty())
• Either.left(Either.right("OK"))
• Either.right(Either.left(new Exception("...")))
I assume your question was asking what would happen if flatMap were applied:
Example 1
Expands to Optional<Optional<String>>, which, if flatMap is called with an id function (a -> a), results in Optional<String>, i.e. Optional("OK"); the outer Optional wrapper was removed.
An id function is a function that does nothing; it returns the same input value it was given, unchanged.
Example 2
Also expands to Optional<Optional<?>>, the ? being whatever type you provide to Java; i.e. it would be the same as Optional.empty().
Example 3
Is more complex, because Either is a type with two generic types: one for left and one for right. Assuming the inner types are both String, the type would expand to Either<Either<?, String>, ?>. Applying the default flatMap would not be possible, because the default flatMap is built as a mapping over the right type only... you would need flatMapL, a flatMap for mapping over the left type, which, if used with an id function (a -> a), would return Either.right("OK").
Example 4
Expands to type Either<?, Either<Exception, ?>>. Again the ?s indicate that the compiler would not be able to infer the types; however, if flatMap is called with an id function (a -> a), the result will be Either.left(new Exception("...")), since the default flatMap will be called, which is the same as flatMapR (a flatMap over the right type).
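Examples 1 and 2 can be run against Java's real Optional (Either is not in the JDK, so it is omitted here); a minimal sketch:

```java
import java.util.Optional;

public class Flatten {
    public static void main(String[] args) {
        // Example 1: flatMap with the identity function strips one layer.
        Optional<Optional<String>> nested = Optional.of(Optional.of("OK"));
        Optional<String> flat = nested.flatMap(x -> x);
        System.out.println(flat); // Optional[OK]

        // Example 2: an inner empty flattens to an outer empty.
        Optional<Optional<String>> nestedEmpty = Optional.of(Optional.<String>empty());
        System.out.println(nestedEmpty.flatMap(x -> x)); // Optional.empty
    }
}
```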
That was so helpful! I just started using soft and was so overwheld!
What is the Portal (game) joke with GLaDOS (Genetic Lifeform and Disk Operating System) at the end? OK, "The cake is a lie.", then the fish (the operator), then the question mark? Didn't get that.
The best way to explain this, for a programmer at least, is to start from the problem instead of the solution (if that's a solution for something that is).
I love his dry, quirky sense of humor.
😅🙃
Mistake #1, at 6:20: he chose to use Haskell notation. Where he went wrong: he thinks that because the notation is something he can explain in 1.5 seconds, it shouldn't be a problem. What he doesn't realize is that asking the audience to understand notation from a programming language they don't know will, by itself, alienate 99% of the viewers. And this is true independent of whether the notation is trivially simple.
This mistake makes people who don't already know Haskell feel like they are not going to be able to follow along, and are going to be left behind *as compared to someone who already understands Haskell*. He would have been better off making up his own notation just for this talk and taking a minute to explain it. This seems counterintuitive, but it makes the audience feel like they're not being left behind. They feel like everyone else in the audience also had to learn this new thing, which means they are not left behind.
12:50 = possibly add in an orange box for "Exception"
This was an absolutely amazing talk. Well done, and thank you! 👏🏻🙌🏻
what a great presentation!
Does anyone know what presenter he is using. I like that it puts a circle on the location pointed to and grays out the rest
Very well explained
A fish? I would call it a cake shovel: a utensil to keep your fingers and the tablecloth clean of jam and cream while you manage and serve your cake.
Explained very well!!! Good!
Very enlightening talk!
This is a fantastic talk!
Thanks Mike!
You cannot imagine how much a Java dev's brain hurts to understand and begin to use/like these things. Thank you
Is a "file read" a side-effect?
The way I see it, it's kinda just like another bit of "input data" to the function, so I view it more as a parameter than an effect.
Any thoughts?
If the file content changed, you have side effects.
@@jakobgross8184 - Or it's a new input - different perspective. But, I agree. It's all just a viewpoint in a way, but I think all input should be passed in. That's the only easy way to track it anyways.
Yes, it's a side effect.
@@noahwilliams8996 - yeah, I guess in that sense all IO is a side effect...
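One way to act on "all input should be passed in", sketched here under assumptions (countWords is a made-up example): keep the file read at the edge of the program and make the core function pure.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PureCore {
    // Pure core: same input always gives the same output; trivial to unit-test.
    static int countWords(String content) {
        return content.isBlank() ? 0 : content.trim().split("\\s+").length;
    }

    // Impure shell: the file read (the effect) happens once, at the edge,
    // and its result is passed in as plain data.
    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("demo", ".txt");
        Files.writeString(p, "monads are just flatMappable boxes really");
        System.out.println(countWords(Files.readString(p))); // 6
    }
}
```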
Fewer assumptions == better code 👍
great
Yep those are words alright
The async sayHello is missing a ;
I have a super stupid question: is pure functional programming even Turing complete? If it is, we should not have any need for monads. If it's not, why bother?
Even SUBLEQ is Turing complete, but it's not pure. Or is it? As far as I understand, the original Turing machine is "side effect only".
Turing complete just means it can do everything that a Turing machine can do; a simple test is to implement a Turing machine in it.
Lambda calculus was devised even before the Turing machine and is Turing complete.
And yup, most reputable purely FP languages are Turing complete, cuz otherwise, like you said, why bother.
As for why we need monads? We don't. It's a pattern that makes our lives easier. Don't like it? Don't use it.
@@keokawasaki7833 it's not about my personal feelings. I like monads as an idea, as a game of the mind. But I need to make a decision for the team. What I really want to find out is: will the use of monads speed up the development process or slow it down? Time is money. And I've seen a lot of bad code with monads in my current projects. So, should we train people to use monads the proper way, or is this just the nature of monads? So I started searching for this "proper way", but right now I doubt that this proper way even exists. Thanks to your comment I can now refer to lambda calculus as "prehistoric technology" :)
@@firstnamelastname2298 your search seems futile to me, cuz most discussions of monads care more about the "beauty of it" than about project management.
But the best advice I can think of is to treat the monad as a design pattern. And we already know how to deal with those: train people on the pattern; remember that overuse turns a pattern into an anti-pattern; tell the team to be consistent.
I have personally never used monads in a team project, so I am more than interested to see the bad usage of monads that you talk of.
@@keokawasaki7833 in short: devs tend to use monads (especially optionals) to silently drop an unexpected result. They do the same with exceptions, but there it's much easier to search for "catch {}".
We have a lot of legacy code and a mix of every possible methodology. Theoretically, it would be nice to have every function completely covered by unit tests, and monads can help with that. Unfortunately, I need them not merely to help, but to force.
Something like this happens quite often:
Optional<Double> a = getA(); // typo fixed: the original line ended with ':'
Optional<Double> b = getB();
return a.orElse(0.0) + b.orElse(0.0); // silently falls back to a default if either is empty
At the end of the day we can end up with something like "the function works, but only when some piece of hardware is broken and something falls back to a default value".
I understand that monads should not be used like that, and that they're designed precisely to avoid silent fallbacks to default values, but people use them for the exact opposite.
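A minimal sketch of that misuse (getA/getB are hypothetical sources, not anyone's real code): defaulting with orElse hides the failure, while flatMap lets the absence propagate so the caller has to deal with it.

```java
import java.util.Optional;

public class OptionalMisuse {
    // Hypothetical sources; getB simulates the "unexpected result" case.
    static Optional<Double> getA() { return Optional.of(2.0); }
    static Optional<Double> getB() { return Optional.empty(); }

    // Misuse: the missing value is silently replaced by 0.0, so the failure disappears.
    static double silentDefault() {
        return getA().orElse(0.0) + getB().orElse(0.0);
    }

    // Intended use: the sum exists only if both values exist; absence propagates.
    static Optional<Double> propagate() {
        return getA().flatMap(a -> getB().map(b -> a + b));
    }

    public static void main(String[] args) {
        System.out.println(silentDefault()); // 2.0 -- looks like a valid answer, but B was missing
        System.out.println(propagate());     // Optional.empty -- the absence is visible
    }
}
```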
Why not use Haskell to teach monads???
Because a lot of people find Haskell scary, and there are 999 talks on how to do monads with Haskell. The goal of this talk was to explain monads to those who think they're not for them.
Before I learned calculus I thought it was magic. Now I can't imagine how people don't understand calculus.
finally, someone with the gonads to talk about monads
So this is what mathematicians did when they created imaginary numbers? √-1 is undefined in the reals, but let's build something on top of it anyway and call it the complex numbers.
24:42 How exactly is this atrocious nested if-statement chain calling "isPresent()" superior to comparisons to null? No, I do not think this is beautiful.
Nullable types ARE your "Option".
You should maybe watch the rest of the talk :D
@@jmfcad Nah he said it's perfect right there.
Just because he then goes into promise-chaining hell at the end and calls that more perfect has nothing to do with this.
Yeah, so this is called sarcasm 😅 I have seen this sort of code in a lot of codebases, many years after Optional was released. Which is why I'm making fun of it.
This contained more nonsense than I'd expect from the title
Indeed
Uhm...... this is the worst explanation of Monad and Functor ever...
Oh, it's not a Haskell talk? Huh.
ikr, i was totally thrown off by Java. i haven't watched the video yet, and the beginning is making me question it lol
The best explanation I've ever seen.
24:41 Ew, that's not beautiful at all! That's 10 function calls for a simple algorithm that could've been written in a single function! How is that more readable and easier to follow? The boxing of values is wasteful and unnecessary. Debugging stuff like this would be a nightmare, as the call stack would be massive. If you're talking performance, you should pray that the compiler is good enough to optimize that trash away because there's so much overhead there. Yikes, man.
Yes this is a toy example used for educational purposes. Of course this is over engineered but the goal is to explain a concept, not write performant code
The presenter of course was joking here. Of course these nested if-statements aren't beautiful, they are ugly, that's why he simplifies it even further using monads.
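For contrast, here's a sketch of that simplification (findUser/findEmail are hypothetical lookups, not the talk's actual code): flatMap collapses the nested isPresent()/get() checks into one chain where each step only runs if the previous one produced a value.

```java
import java.util.Optional;

public class FlatMapChain {
    // Hypothetical lookups that may fail.
    static Optional<String> findUser(int id) {
        return id == 1 ? Optional.of("alice") : Optional.empty();
    }
    static Optional<String> findEmail(String user) {
        return user.equals("alice") ? Optional.of("alice@example.com") : Optional.empty();
    }

    // The nested style being mocked: every step checks isPresent() by hand.
    static Optional<String> nested(int id) {
        Optional<String> user = findUser(id);
        if (user.isPresent()) {
            Optional<String> email = findEmail(user.get());
            if (email.isPresent()) {
                return Optional.of(email.get().toUpperCase());
            }
        }
        return Optional.empty();
    }

    // The monadic style: same behavior, one flat chain.
    static Optional<String> chained(int id) {
        return findUser(id)
                .flatMap(FlatMapChain::findEmail)
                .map(String::toUpperCase);
    }

    public static void main(String[] args) {
        System.out.println(chained(1)); // Optional[ALICE@EXAMPLE.COM]
        System.out.println(chained(2)); // Optional.empty
    }
}
```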
It's the year 2022, and functional programming adepts still can't figure out what the domain of a function is.
NO, a function should NOT accept every possible value. Btw, in Erlang the problem is solved by the 'when' guard.
What I find laughable about these presentations is that all of the examples of why you should use something are generally design principles.
All of the things mentioned at the beginning can be used in pretty much any language you can think of. Bad coders write bad code, good coders generally do not.
In the method myProgram, result.get() returns a Double, not an Optional
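Right, and that's easy to check in isolation (this snippet is standalone, not the talk's myProgram): get() unwraps the box, yielding the contained Double rather than another Optional.

```java
import java.util.Optional;

public class GetUnwraps {
    public static void main(String[] args) {
        Optional<Double> result = Optional.of(3.14);
        Double d = result.get(); // get() unwraps: the result is a Double, not an Optional<Double>
        System.out.println(d);   // 3.14
    }
}
```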