💡 Get my FREE 7-step guide to help you consistently design great software: arjancodes.com/designguide.
This is a nice, concise overview of some core, simple, practical functional concepts. I just want to make one note about your section on function composition. Notice at 16:33, Copilot suggests an implementation of the compose function for exactly 2 functions, which applies the functions in the correct order (based upon the mathematical definition of function composition). Specifically, the composition of functions f and g, which would be called as compose(f, g), first applies g (the right hand function) to an input (x), and then applies f to the result (i.e., compose(f, g)(x) == f(g(x))). However, the compose function that you instead implemented, taking any number of functions of type Composable, applies the functions in order from left to right, not right to left, because reduce traverses the list of functions from left to right. Therefore, your resulting compose function is not truly composing functions in the mathematical sense, and is generally called by other names in other languages (e.g., foldl or pipe) or even in existing functional Python libraries. For your compose function to behave similarly to the original Copilot suggestion, you would need to traverse the list of functions from right to left. This is also evidenced by the fact that in your original code leading up to your introduction of function composition, you had the expression (at 15:10) sort_fn(add_10(multiply_by_2(data))), but then (at 18:22) had to reverse the order of the functions in compose(multiply_by_2, add_10, quick_sort) to get the same result because your compose function applies the function in the reverse ("wrong") order.
I had the same problem. Use the compose function from the toolz third-party library, then you'll get what you want.
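To make the ordering difference concrete, a minimal reduce-based sketch (my own function names, not the video's code):
```
from functools import reduce
from typing import Callable

ComposableFunction = Callable[[int], int]

def compose(*functions: ComposableFunction) -> ComposableFunction:
    # mathematical composition: compose(f, g)(x) == f(g(x)), applied right to left
    return lambda x: reduce(lambda acc, f: f(acc), reversed(functions), x)

def pipe(*functions: ComposableFunction) -> ComposableFunction:
    # left-to-right application: pipe(f, g)(x) == g(f(x))
    return lambda x: reduce(lambda acc, f: f(acc), functions, x)

def add_10(x: int) -> int:
    return x + 10

def multiply_by_2(x: int) -> int:
    return x * 2

assert compose(add_10, multiply_by_2)(3) == add_10(multiply_by_2(3))  # 16
assert pipe(add_10, multiply_by_2)(3) == multiply_by_2(add_10(3))     # 26
```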
Worth noting that copy() only returns a shallow copy. So it's a new list but if the original elements were mutable (yours were ints, so that's fine) then modifying them in the new list will also modify them in the old list. This can be a big gotcha in parallel code.
Deepcopy from the copy module is _almost_ always what you need when you need an expensive copy of non fundamental types.
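A small illustration of the gotcha (made-up nested data, just to show the difference):
```
import copy

original = [[1, 2], [3, 4]]          # nested, so the elements are mutable

shallow = original.copy()            # new outer list, same inner lists
shallow[0].append(99)
print(original[0])                   # [1, 2, 99] -- the original changed too!

deep = copy.deepcopy(original)       # recursively copies the inner lists as well
deep[0].append(42)
print(original[0])                   # still [1, 2, 99] -- untouched this time
```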
As always, it's another masterpiece. Thanks, Arjan.
I believe the input ‘data’ at 20:09 could still be a list instead of an iterator; only the return needs to be an iterator.
In addition, we can still achieve the lazy evaluation at 20:24 by simply using the result of the map function, without casting it to a list.
Dear Arjan, please add a one-second pause after printing code, because it's hard to stop the video to read the code properly.
Recently tried out Gleam, it was super fun! Python is my main language, so it's really cool to see these ideas in Python. Great timing, Arjan!
Glad you enjoyed it!
In the bubble sort, the inner loop starts where the outer loop starts, which means there is a subset of elements at the beginning that never gets sorted.
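For reference, a textbook sketch where the inner loop always sweeps the unsorted prefix from the start (not a transcript of the video's code):
```
def bubble_sort(data: list[int]) -> list[int]:
    items = data.copy()              # keep the function pure
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):   # always start at 0; the last i items are already in place
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:              # early exit when nothing moved
            break
    return items
```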
Bold of you to assume I'm going on any dates.
I have high expectations 😉
Arjan himself liked your comment on TH-cam, who could reject you after that?
Two days ago he liked me too, now I'm married with children. Think about it.
I’m really scared now to find out what happens if I remove that like.
@@ArjanCodes Don't.
This is really spot on
I was struggling with function composition for awhile
Glad you enjoyed it!
Really enjoyed this video. The advanced concepts are appreciated.
Glad you enjoyed it!
Another benefit of pure functions: They are easy to cache, as you know that putting the same values inside will give you the same result.
This can help if you have long calculations on certain values.
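That's exactly what functools gives you out of the box; a minimal sketch (the Fibonacci example is just illustrative):
```
from functools import lru_cache

@lru_cache(maxsize=None)             # safe because the function is pure
def fibonacci(n: int) -> int:
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(80))                 # instant thanks to memoization
```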
I use some of these techniques. Matching structures, composition and lazy code are the new ones I am about to discover thanks to this video. Thx for this great work 🙏
Happy to help!
Another great video. As a newbie, these really help to understand the important things. Thank you!
Glad to hear you liked it 😊
Another great video from ArjanCodes. I love the pace, the tone and the way you explain concepts. Very clear and to the point.
Many thanks!
Glad you enjoyed it, Jose!
0:28 "As a side effect..." I saw what you did there.
Nice recursion example... quicksort is so much more likely to be used than factorial.
The vast majority of programmers aren't going to write quicksort either. In imperative languages like Python, you should likely default to using iteration whenever possible and only use recursion for obviously recursive problems (e.g. graph traversal). Even then, one must be careful about blowing up the stack, so it's important that you also know how to convert a recursive function into one that uses an explicit stack.
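A sketch of that conversion for a simple tree traversal (the Node class here is made up for the example):
```
from dataclasses import dataclass, field

@dataclass
class Node:
    value: int
    children: list["Node"] = field(default_factory=list)

def collect_recursive(node: Node) -> list[int]:
    values = [node.value]
    for child in node.children:
        values.extend(collect_recursive(child))   # may hit the recursion limit on deep trees
    return values

def collect_iterative(node: Node) -> list[int]:
    values: list[int] = []
    stack = [node]                                 # explicit stack instead of the call stack
    while stack:
        current = stack.pop()
        values.append(current.value)
        stack.extend(reversed(current.children))   # reversed keeps the same visiting order
    return values

tree = Node(1, [Node(2, [Node(4)]), Node(3)])
assert collect_recursive(tree) == collect_iterative(tree) == [1, 2, 4, 3]
```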
I've been doing a lot of the stuff in this video in Rust, it's nice to see a Python implementation of the same ideas which I can start to use in my day job.
One interesting "side effect" of this video, is it showing as typing, which should be "optional", intrinsically limits Python capabilities.
Theres is nothing in this code that would limit it working with `int`s. Any orderable type would work, _tut_ for the typing.
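For what it's worth, the hints don't have to pin things to int; a sketch with a comparable-bound TypeVar (the protocol name is mine, not from the video):
```
from typing import Any, Protocol, TypeVar

class SupportsLessThan(Protocol):
    def __lt__(self, other: Any) -> bool: ...

T = TypeVar("T", bound=SupportsLessThan)

def quick_sort(data: list[T]) -> list[T]:
    if len(data) <= 1:
        return data
    pivot, *rest = data
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if not x < pivot]
    return quick_sort(left) + [pivot] + quick_sort(right)

print(quick_sort([3, 1, 2]))                       # [1, 2, 3]
print(quick_sort(["banana", "apple", "cherry"]))   # any orderable type works
```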
I find elimination of state to be mostly an obstacle in concurrent programming because atomic operations make threads inherently agnostic of what's going on and whether they should do a/b/c depending on updated conditions, and it also makes it difficult for them to broadcast their own state as it will require an intermediate who would ultimately end up doing exactly what that thread could be doing autonomously (and with less context). It's not impossible to safely allot partitions of a shared workload or state object by using range/slice-types for instance while retaining communicability.
Using 'reduce' to apply functions sequentially seems complicated. Instead, using a monad class seems like a better way to handle exceptions and readability. Using the '__or__' method as a 'bind' method, we can use
result = Monad(10) | add_ten | mul_two | div_zero
like this. I'd love to hear your thoughts. I'd like to add that it doesn't fit the content of this video because you need to define the Monad class 😁
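A minimal sketch of such a class, just to make the idea concrete (deliberately simplified error handling):
```
from typing import Callable, Optional

class Monad:
    """Tiny Maybe-style wrapper: once a step fails, later steps are skipped."""

    def __init__(self, value, error: Optional[Exception] = None) -> None:
        self.value = value
        self.error = error

    def __or__(self, func: Callable) -> "Monad":
        if self.error is not None:            # short-circuit after the first failure
            return self
        try:
            return Monad(func(self.value))
        except Exception as exc:
            return Monad(None, exc)

add_ten = lambda x: x + 10
mul_two = lambda x: x * 2
div_zero = lambda x: x / 0

result = Monad(10) | add_ten | mul_two | div_zero
print(result.value, result.error)             # None division by zero
```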
Great video! I have a question, can't you just do this instead:
```
def multiply_by_x(data, x: int):
    return map(lambda y: x * y, data)

def add_x(data, x: int):
    return map(lambda y: x + y, data)
```
As map returns an iterator, similarly to yield from.
So the same as your initial functions but do not cast it to list to iterate over the iterator, just return the iterator.
I was expecting him to simply return map, like you're suggesting. However, if you think about it, what he's doing is the same. At 20:00 he modifies the type hint for data to be an iterator. For loops use iterators without casting them to lists. The function is taking the data iterator as an input argument and returning a generator (iterator) by yielding from the for loop. So, there's no casting to lists and is true lazy evaluation.
@@askii3 Yes, I know it's the same, it's just unnecessarily making the code a bit harder to read.
@@Wolar94 I was mainly focused on your comment that he's casting to lists. When I first read that, I thought you were saying his newer functions using yield are casting to lists (which they are not). But now I see you were referring to the older functions that do cast to lists.
Thank you for this amazing video! I learn a new good coding practice every time I watch one of these. I was wondering if you could do a video on the toolz and cytoolz libraries, which actually facilitate the application of most of the functional concepts that you demonstrated in this video, in addition to other goodies which deal with iterators and dictionaries. For example, they actually have a function called compose_left, which allows one to create a function from all the function arguments provided to it. It also allows for piping data through a series of functions, and is heavily pro-lazy evaluation. Cytoolz is the Cython-based equivalent of toolz, having the exact same functions, but much faster. Would love to know what you think of it, and what might be the best ways to apply the functions from this library to enable better coding in Python.
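For anyone curious what that looks like, roughly (assuming toolz is installed; see the toolz docs for the exact signatures):
```
from toolz import compose_left, pipe

def add_10(x: int) -> int:
    return x + 10

def multiply_by_2(x: int) -> int:
    return x * 2

# compose_left applies functions left to right, like the video's reduce-based compose
process = compose_left(add_10, multiply_by_2)
print(process(3))                      # (3 + 10) * 2 == 26

# pipe threads a value through the functions immediately
print(pipe(3, add_10, multiply_by_2))  # also 26
```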
How have you managed to explain quicksort in 10 seconds better than my professor in a whole godamn semester. God.
Your point 3 needs an asterisk with the solution. The copy method creates a shallow copy, which means that if your list contains anything that's passed by reference, it will be the same in the new list. If those mutable elements are changed in the new list, they will also be changed in the old list. For true immutability, the deepcopy functionality should be used.
Good point!
I just understood what yield does, thank you Arjan
Glad it was helpful!
wow, structural pattern matching, I didn't expect that. Python has so many FP features now. Multiple inheritance and the ability to call a method by passing it an instance are quite interesting for implementing FP patterns as well.
I'm quite frustrated by the absence of a const keyword though. I tried using Final but it disables type inference, so you must specify the type, which is a pain when the vast majority of your variables are actually constants.
Great video! It can be challenging to move all the print statements to the main function since you need to print intermediate results sometimes
Great tips for one of the greatest Python styles there is!
Glad you enjoyed it!
Thx Arjan! Partial is very useful! It's new for me😊
You’re welcome! 😇
For function composition, there's a cool package called pipe. Makes Python feel very functional oriented.
Interesting, thanks for sharing that!
What do you think of compose from the funcy-package? I've used it instead of writing my own compose-function.
Is your Copilot suggesting lines you've already typed in a practice run before filming? Or does it not remember context in that way, and these are "fresh" suggestions?
Always great and understandable videos. Thanks 🥲
I would argue at 20:09 the function parameter could be Iterable, and the output could still be an iterator. This way anything that can be iterated over, lists or iterators (or anything implementing __iter__), can be passed, and I don't need to cast my inputs with iter(...). I try to make my functions inclusive with Iterable inputs and Iterator outputs, unless there is a specific reason that doesn't work.
Great suggestion, that is a definite improvement over the type annotation I used.
Can you show how you would deal with exceptions when you use function composition? Assume that the functions you want to compose could encounter division by zero or a TypeError or ValueError because you are processing data that is not (completely) under your control. For example when processing CSV files you may expect a certain shape from the first two lines, but you would have to validate the whole file to know that the number of entries is the same in every row. Add to that the fact that you may be calling 3rd party code in the functions you compose.
How would you do the error/exception handling?
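One pragmatic pattern is to wrap each step so a failure short-circuits and reports which function blew up; a rough sketch (all names here are made up):
```
from typing import Callable, Union

Step = Callable[[object], object]

def try_compose(*functions: Step) -> Callable[[object], Union[object, Exception]]:
    def run(data: object) -> Union[object, Exception]:
        result = data
        for func in functions:
            try:
                result = func(result)
            except (ZeroDivisionError, TypeError, ValueError) as exc:
                # stop the pipeline and report which step failed
                return RuntimeError(f"{getattr(func, '__name__', repr(func))} failed: {exc}")
        return result
    return run

pipeline = try_compose(int, lambda x: 10 / x)
print(pipeline("5"))   # 2.0
print(pipeline("0"))   # RuntimeError("<lambda> failed: division by zero")
```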
I rarely use recursion. Mostly to traverse a shallow tree-like structure. Python has a recursion depth limit.
Great vid as always. There's also *itertools.groupby*, which seems pretty powerful. With a Sqlite3 backend (it really doesn't matter) I found something like:
```
SELECT * FROM table WHERE x AND y ORDER BY a, b;
```
Especially with passing a method into the retrieval for how to create named tuples out of the list... you can get really ""complex"" behavior, with super performant code, that's also super readable.
I've used this in prompt generation for our tool. Another thing I'm wondering about is separation vs interface vs api. For code that is being rapidly developed/iterated upon... (separation seems most important... I think?)
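A tiny sketch of the groupby idea with made-up rows (sorting first, since groupby only groups consecutive items):
```
from itertools import groupby
from operator import itemgetter

# made-up rows, as if they came back from a sqlite3 query
rows = [
    ("fruit", "apple"),
    ("fruit", "banana"),
    ("veg", "carrot"),
]

rows.sort(key=itemgetter(0))
for category, items in groupby(rows, key=itemgetter(0)):
    print(category, [name for _, name in items])
# fruit ['apple', 'banana']
# veg ['carrot']
```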
19:58 Both of these functions should take an Iterable as an argument, not an Iterator. The interpreter will automatically obtain an iterator when you use a for loop. The iterator protocol requires the __next__ method, which is not implemented for a list, for example, yet the functions obviously can work with a list. We should always use the broadest types possible for function arguments and the most exact type for the result. This is known as the robustness principle: "Be liberal in what you accept, and conservative in what you return."
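A minimal sketch of that signature (illustrative names):
```
from collections.abc import Iterable, Iterator

def multiply_by_2(data: Iterable[int]) -> Iterator[int]:
    for item in data:          # works for lists, tuples, generators, ...
        yield item * 2

print(list(multiply_by_2([1, 2, 3])))             # a list works
print(list(multiply_by_2(x for x in range(3))))   # so does a generator
```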
Nice video! The subtitles are a little bit out of sync though. Please fix that. Thanks.
We’ll look into it, thanks!
As always Arjan, many thanks!
You’re welcome Tim!
The code at the beginning where @ArjanCodes does bubble_sort() is not correct, I believe? Having the swapped flag does not return the correct result?
I expect very high-level content from you (following your videos), so let us know if this was an oversight?
I loved the refactoring of the github copilot code. I have to do that too! 😂
Yes, that happens a bit too often, haha.
List comprehensions are more readable than map, readability counts.
Not the same goals. Map returns an iterator, so it is only run when iterating, while list comprehensions create an actual list.
@@kaosce wait what? He converted the result of the map function to a list, and you can always write a generator expression if you don't want the complete list and just the generator...
@@kaosce Moreover, map, filter, and lambdas were there even before list comprehensions, but in the end, readability counts.
The big difference for me is that you have to think of a name for the intermediate elements in a comprehension, e.g. [int(x) for x in numbers] or whatever, and sometimes that can really help. Other times, people's names are so vague or so misleading that they would have been better off with map.
Readability counts much more in real use cases. List comprehensions CAN get unreadable though, but as long as you keep them simple they are very easy to understand.
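The three spellings side by side, with a trivial made-up example:
```
numbers = ["1", "2", "3"]

as_map = map(int, numbers)               # lazy, no intermediate name needed
as_listcomp = [int(x) for x in numbers]  # eager, but you name the element
as_genexp = (int(x) for x in numbers)    # lazy like map, comprehension syntax

print(list(as_map), as_listcomp, list(as_genexp))  # [1, 2, 3] three times
```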
Niceeeeeeeee! Another super useful video!
Enjoy! :)
Best dating advice ever
I made a composable in my project recently, but it did not have type hints! I had the apply be a lambda, which is ugly IMHO. Then you solved it... but then another lambda appeared! I would want the return to be `partial(reduce, apply, functions)`! I will incorporate the type definition though!
Oohhh I like that!
Corrected in edit. Glad you liked it😁
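For anyone curious, a minimal sketch of the partial-based return spelled out (my own code, applying functions left to right like the video's version):
```
from functools import partial, reduce
from typing import Callable

ComposableFunction = Callable[[int], int]

def apply(value: int, func: ComposableFunction) -> int:
    return func(value)

def compose(*functions: ComposableFunction) -> Callable[[int], int]:
    # partial(reduce, apply, functions)(x) == reduce(apply, functions, x)
    return partial(reduce, apply, functions)

def add_10(x: int) -> int:
    return x + 10

def multiply_by_2(x: int) -> int:
    return x * 2

print(compose(add_10, multiply_by_2)(3))   # (3 + 10) * 2 == 26
```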
I highly recommend F# instead of Python
I still struggle to see the value of a compose function over a wrapper function that simply calls a series of functions sequentially. To me at least it feels much more readable and equally flexible. And if you rely on having so many arguments that it no longer remains readable, it feels nicer to simply use a class instead. I do understand that the premise of the video is to utilise purely functional principles, but I really can't see a good use case for something like partial functions unless there are some underlying performance benefits.
Even in languages like Go I mostly see the sequential approach against a struct of data, which is also more readable imo. But then again I'm no expert in either language, so that could just be lack of experience / use cases where this would make sense to apply.
I think removing the middle from quicksort introduced a bug. The left will now contain the pivot. Also if the value of the pivot is repeated, those repetitions will not show up
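A sketch of a duplicate-safe three-way split, in case it helps (not the video's exact code):
```
def quick_sort(data: list[int]) -> list[int]:
    if len(data) <= 1:
        return data
    pivot = data[0]
    less = [x for x in data if x < pivot]
    equal = [x for x in data if x == pivot]      # keep every copy of the pivot
    greater = [x for x in data if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)

print(quick_sort([3, 1, 3, 2, 3]))   # [1, 2, 3, 3, 3] -- duplicates preserved
```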
That be -- TTY relay only -- in skrrpts!
You mentioned that pure functions are idempotent. This is not the case. A function f is idempotent (pronounced as eye-dem-póh-tent) when f(f(x)) = f(x), that is applying one more time does not have an effect. A pure function is _univalent_ : for every input value there is just _one_ output value.
I was looking for this exact comment lmao. Beat me to it!
Love the mechanical keyboard 😂
Thanks
Thank you for your support!
reduce does beta reduction i believe
❤👏
A pure function is not necessarily idempotent; these are two totally unrelated concepts, not sure why you said that. 9:02
Please disable copilot 😂
I’m too lazy, haha.
@@ArjanCodes that’s how programming was invented:)
I really like copilot! It's good seeing it work and seeing it make mistakes
Pure functions are *not* idempotent. `sin(x)` is pure -- `sin(sin(x)) != sin(x)` -- `sin(x)` is *not* idempotent. `pure` and `idempotent` are both categories of functions but they are not correlated in any way.
Instructions unclear. Now I'm married.
At least you now have “closure”.
Monad is insulted not being mentioned here
Very hard to follow
17:12 oh gods, they brought all that [T] cryptoshit from Java and C++ to my beloved Python. 😢
I like to use functions, however I don't use these kinds of concepts 😮
The usage of AI while programming can lead to terribly wrong code. I know, you like it because of the speed, but the moment you trust it blindly, you have lost the game.
Please stop using type hints if it starts accounting for half the video's runtime.
The topics are interesting, too bad that Arjan almost always uses complicated code to explain them.
Sorry, but this is just too fast for me, too many concepts in extremely quick succession. I don't think that you used the term idempotent correctly, if it means the same as the mathematical definition.
The idea that the output should always be the same, if the input is the same, is just part of the mathematical definition of a function. Idempotent means that applying a function to itself renders the same result: f(f(x)) = f(x)
The definition in programming relates to state. It is indeed misused, but it's also not wrong. I guess he meant to say "referentially transparent". If f(x) changes some piece of state (which a pure function does not), and calling f(x) twice has the same effect as calling it once, then it's said to be idempotent. But this notation obfuscates the fact that mutable state is a free variable; it's really more a binary function where the first argument is passed implicitly, so you can write it f(s, x) and f(f(s, x), x), or s `f` x `f` x, and you realise it's indeed related to idempotence. Another example is the union of sets: A | B is the same thing as A | B | B.
@@ApprendreSansNecessite I hear more commonly the word "deterministic" in programming to describe a function or process which, given the same inputs, will always produce the same output. I think this is what Arjan meant.
@@simonhaines7301 I have heard it as well. I don't like it personally because I think it's too vague. If the value of x is incremented consistently every line, everything is still deterministic, but it's not referentially transparent: you can't substitute a name or an expression with its value and consistently get the same value anywhere in the code, which is what we expect in FP. "Referentially transparent" means to me "it's like in maths, you don't change the meaning of symbols".
Immutability like this is nonsense. Just more hoops to jump through to write code. The point of a variable is that it is variable.
No offense, but compose seems like the kind of thing sr devs would write to distance themselves from pleb devs. Respectfully, pleb dev.
it all arises from mathematics. if x -> y and y -> z then x -> y -> z could be seen as the single function x -> z. we then want to give the single function x -> z a name and just use that, forgetting about the intermediate steps.
so z = f(g(x)) but what is the name of the single function that does x -> z??
let h = compose(f, g). then z = h(x). the function is called h. now that we have turned this process into a single function, then everything we know and think about functions applies to h, because it is just a function, both theoretically and literally in the code. we can forget about f and g.
it honestly is a powerful form of abstraction, in the same way that something like class composition is a powerful form of abstraction in OOP.
@@simonhaines7301 interesting, thanks for reply!
I laugh at tutorial videos and people who watch them and think they are being productive while wasting their time. When I was in a Python network development internship, I never found a video helping me; it was all going to documentation sites or asking people in forums. It doesn't matter cause, like TechLead says, tech is almost dead nowadays, it's just a skill for immigrants trying to immigrate to developed countries.
small error at 9:15 -- that's not what idempotent means. en.wikipedia.org/wiki/Idempotence
first...ah. not first. (sigh). i'll be first one day. congrats @ramimashalfontenla1312
Haha, keep at it! 💪