Yay! Mic sounds much better now. Thank you!
Took me a bit of tweaking the settings but now it seems to work! ;)
Thank you very much. I spent so much time searching the internet for this specific part of my Functional Programming module; it's really not easy to find these details. I'm glad I found this video =)
Thanks, this explained seq and WHNF extremely well in one fell swoop. Thunks make more sense now too.
I love this channel. It's awesome! Thanks for this tutorial.
Really great ❤
But how does the Haskell compiler know if two expressions are the same?
I have the same question. Can anyone elaborate?
It seems so wasteful to memoize all the expressions that come up in the program just in case they come up again. That would mean the overhead of comparing every new expression with the ones already memoized, which seems to negate all the advantages.
I misunderstood the idea. Of course nothing like that happens.
What does happen is that if, say, a let variable or any "named" expression is used and needs to be evaluated, the evaluation happens only once. After that the result can be used directly wherever the same name is used.
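A minimal sketch of that sharing, using trace from Debug.Trace to make the single evaluation visible (the message is my own illustration):

import Debug.Trace (trace)

main :: IO ()
main = do
  let x = trace "x evaluated!" (1 + 1) :: Int
  print (x + x)  -- "x evaluated!" is printed once; the second use reuses the result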
In the first example, shouldn't "155 + f 94 10" be reduced to "155 + (10 + square 94)", then to "155 + (10 + (94 * 94))" as the only way, not "(155 + 10) + (94 * 94)"? So that's not an instance of the Church-Rosser theorem, since Haskell cannot possibly know addition is associative? Instead, I think an example of the Church-Rosser theorem would be "(1 + 1) + (2 + 2)" -> "2 + (2 + 2)" -> "2 + 4" -> "6" vs. "(1 + 1) + (2 + 2)" -> "(1 + 1) + 4" -> "2 + 4" -> "6".
You're right, but some developers would take "cannot possibly know" as a challenge. I've seen someone implement an operator that short-circuited on both of its arguments. (Prelude.+) is declared infixl, so there's a defined order even when the terms are written in the same expression. Futhark's reduce_comm, by contrast, is permitted to use an arbitrary order.
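For reference, a minimal sketch of the definitions the first example seems to assume (square and f are reconstructed from the reductions quoted above, so treat them as an assumption):

square :: Int -> Int
square x = x * x

f :: Int -> Int -> Int
f x y = y + square x

main :: IO ()
main = print (155 + f 94 10)  -- 155 + (10 + (94 * 94)) = 9001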
Hi, I just checked this in GHCi: let x = 1 + 1; let y = x + 5. After forcing y to be evaluated by typing y in GHCi, :sprint y is still _. Edit: I hadn't annotated the type as Int. Now it works as expected.
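A sketch of the two GHCi sessions (output is illustrative and may vary by GHC version; without an annotation y stays polymorphic, so each use re-evaluates it and the thunk is never updated in place):

ghci> let y = (1 + 1) + 5          -- y :: Num a => a, polymorphic
ghci> y
7
ghci> :sprint y
y = _                              -- re-evaluated per use, never updated
ghci> let y' = (1 + 1) + 5 :: Int
ghci> y'
7
ghci> :sprint y'
y' = 7                             -- monomorphic thunk updated in place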
I do not understand your reasoning at 9:00. [True && False] is also in WHNF, because its outermost constructor is (:). Yes, :sprint will return _ : _, because nothing has been evaluated so far, but :sprint for the tuple (which is also in WHNF) will not return the tuple itself either, but rather _, for the same reason. Or what am I missing here?
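A GHCi sketch of the list case being discussed (output is illustrative, not verbatim):

ghci> let xs = [True && False]
ghci> xs `seq` ()
()
ghci> :sprint xs
xs = [_]   -- WHNF: the (:) cell exists, but the head True && False is still a thunk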
I do not understand why we need to put a function in lambda form in order to have weak head normal form. Can't GHC understand by itself that the expression isn't complete?
For example, I do this function:
blah = (+) 3
:t blah
blah :: Num a => a -> a
Why mandate the use of a more complex syntax?
The important thing to remember is that a function like
blah = (+) 3
is syntactic sugar for
blah = \x -> (+) 3 x
When reasoning about normal forms, syntactic sugar only gets in the way of the definitions, which is why we have to rewrite it (since normal forms are defined with respect to beta-reductions in lambda calculus).
GHC doesn't do any rewriting, since it only looks at its graph and does its inferences there.
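A small sketch of the point, with the types pinned to Int for simplicity (forcing either definition with seq only evaluates to the partial application/lambda, i.e. WHNF; the body is never entered):

blahSugar :: Int -> Int
blahSugar = (+) 3              -- the sugared partial application

blahLambda :: Int -> Int
blahLambda = \x -> (+) 3 x     -- the desugared lambda form

main :: IO ()
main = do
  blahSugar `seq` putStrLn "forced to WHNF, nothing applied yet"
  print (blahSugar 4)   -- 7
  print (blahLambda 4)  -- 7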
But why does weak head normal form exist? Why draw the line at data constructors instead of something else? What's special about those? :o
(I love your videos btw; it's my TV now :> )
Very good question!
WHNF is the basis for Haskell's lazy evaluation, since we need some evaluation strategy that can do finite pattern matching WITHOUT evaluating the full data type (think of a match case like "x:y:xs" on a list). Haskell's evaluation rules are derived from lambda calculus which, as it turns out, has a non-eager evaluation strategy ( en.wikipedia.org/wiki/Lambda_calculus_definition#Weak_head_normal_form ) which IS the definition of WHNF! The concept of WHNF for lambda functions is simply extended to data constructors, since they follow a similar order and are essential for our definitions of recursive functions.
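A small sketch of that: matching x:y:_ forces only the first two (:) cells, so it works even on an infinite list:

firstTwo :: [a] -> (a, a)
firstTwo (x:y:_) = (x, y)
firstTwo _       = error "need at least two elements"

main :: IO ()
main = print (firstTwo [1 ..])  -- prints (1,2); the list is never fully evaluated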
@philipphagenlocher Ohhhh I get it :>
WHNF is already there because that's just what pattern matching (on data value constructors) requires! And so 'seq' just exposes explicitly something that's already there :>
Thanks! :D
Just for fun: a = [1..] has a similar mechanism to a = 1:a, where
take 1 a = [1]
take 2 a = [1,1]
...
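A GHCi sketch of both (only the demanded prefix of either list is ever built):

ghci> let a = 1 : a          -- self-referential: an endless list of ones
ghci> take 3 a
[1,1,1]
ghci> let b = [1 ..]         -- enumFrom 1, also produced on demand
ghci> take 3 b
[1,2,3]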
How does memory management work in Haskell?
That'd be a short video. "Today we're gonna talk about memory. So, Haskell has a garbage collector. Thanks for watching!"
We will take a look at memory in a later episode on profiling. The discussion on thunks and normal forms is all pretty much leading up to that!
@philipphagenlocher That parallel and concurrent programming book (Simon Marlow) covers profiling briefly but well enough, and I can confirm there's some weird thunk-iness and garbage collection visible when heap profiling with ThreadScope and the following program flags (you can look at the profiling data via PostScript graphs later): -rtsopts -prof
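For anyone following along, a sketch of how those flags fit together (Main.hs is a placeholder name):

ghc -prof -fprof-auto -rtsopts Main.hs
./Main +RTS -p -hc -RTS      # time profile in Main.prof, heap profile in Main.hp
hp2ps -c Main.hp             # render the heap profile to PostScript (Main.ps)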
@SimGunther That's a great book! It has been one of my major sources for these videos after a viewer pointed me to it.