As someone who writes in a functional language to make a living, the most important thing to remember about "bugs" is that the worst ones don't announce themselves; the program runs and produces output with no errors, but the output is wrong because the programmer has made a wrong assumption or misunderstood something about the input data.
You guys are spoiled. I had memory corruption bugs that took me weeks(!) to find in C++. At least logic bugs in functional languages are quite easy to pinpoint.
@@artxiom If the code and programmers work well together, then I don't think errors take that long to fix. It isn't like Functional Programming is something godsent.
Functional programming is perfect in theory, but average/mediocre in practice. Functional programming does not optimize that well, and neither do its closures which live on the heap. Immutability comes at a cost as well, particularly in non-functional languages (e.g. JS). Complex functional code can quickly become spaghetti-like and hard to read/follow/reason about, and a similar thing happens with the error messages of such code. At least in larger projects (React js, nix, etc.), side-effects which violate "purity" are almost always a necessity. And lastly, conventional languages are more familiar to most people. Like anything else, it's all pros and cons and depends on the project at-hand whether a functional language is the right choice/tool for the job.
Total BS clickbait claim, pure functional doesn't exist in practice because as soon as you interface with the operating system there is I/O and side effects. "functional style" is what is used in practice and does not have any of these guarantees.
i think one of the largest benefits of pure functional programming is that the referential transparency allows you to use equational reasoning on your code to _easily_ see which programs are equivalent, which makes refactoring almost trivial. it’s a lot less stressful to know that you can move things around however much you want without worrying about anything changing behind your back
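a tiny TypeScript sketch of what i mean (fullPrice is a made-up pure function, nothing from the video): because the call depends only on its arguments, you can swap any call for its value, or factor out duplicates, and be sure nothing changes behind your back.
```
// Pure: the output depends only on the inputs, no hidden state is read or written.
const fullPrice = (net: number, taxRate: number): number => net * (1 + taxRate);

// Equational reasoning: these two expressions are interchangeable anywhere,
// because fullPrice(100, 0.2) always denotes the same value.
const a = fullPrice(100, 0.2) + fullPrice(100, 0.2);
const b = 2 * fullPrice(100, 0.2); // a === b, guaranteed by purity
```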
In contrast, when you want to refactor things, you might wind up having to change a whole bunch of things that are unrelated to the actual change, especially if your domain isn't something you could consider functional. Imagine a highly complex set of calculations about some business you're supporting, and now Marketing wants you to (say) change how long customer responses take way down at the bottom of the stack depending on how much money the customer spent last month. Now you have to pass that data through every level down to the bottom where you need it, and in every place you call that bottom function of deciding how long something will take. The alternative is to pass around an entire blob of everything about the customer, just in case you need some of it for a specific calculation somewhere, at which point you still have to find all references to it as well as handle returning new values back up the stack when you change it.
Doing a functional approach to an OO problem makes refactoring as hard as simulating an OO approach in the middle of functional code.
@@darrennew8211 it is sufficient to have a single data type that you pass around, you don't need to populate it with everything you might possibly need, since adding more things to it only requires changing the place it's constructed and the type definition. if you really need it to be globally changed immediately from deep in a branch (it's probably a design issue but) you can always just put in a mutable reference for that specific value (and thus breaking purity).
yes, it's an annoying refactor to make, but it's mechanical and you can just follow what the compiler tells you and be confident that it's still working as expected
this is not an issue i encounter regularly though, what is the context where you have run into it when writing FP code?
if you use the standard pattern of Elm code for example, what you're mentioning is a complete non-issue, since there all state goes through a global function and you can just add the extra state you need at the appropriate location and it will be propagated along as needed by the code you have already written
@@asdfghyter I think adding all widely-distributed state to a global structure for an entire multi-million-loc program would probably be problematic without better support. I've never had the pleasure of working on a megaline program in a purely functional language, but my brain has problems dealing with pure functional once it gets above about five thousand lines. It sounds like you're basically saying "pass a pointer to the entire memory of the program around, and you can fix the problems with functional programming." :-)
@@darrennew8211 the idea of elm-style loop is that you have a tree structure with state that is relevant to different parts of the program, then when you enter a part of the program, you extract that part. you can also have some data that is relevant to all of the program. in order to change components of the state that is not yours, you return a message that tells the main loop to update that data. another solution that introduces some more impurity is to use erlang-style message passing between components. that way each component is still internally pure, but they can send messages to each other and get mutable state that way. Anyways, the thing you are describing sounds to me like it would be a problem even in OO languages, since you still need some way to have a reference to some way to find the data you need. How would you solve it in an OO-way? It's probably possible to solve it in a similar way for FP programs.
@@beowulf_of_wall_st My point is that if you have a routine that you suddenly need to have access to something to calculate a result that you never needed access to before, you might have to find all the places you call that thing and pass the new required information down the stack. Doubly-sucks if you suddenly need something like access to a database in a function you call from lots of places. That was always my problem with FP - you wind up with one seemingly simple change ("account for X in the calculation for Y") that requires changing dozens of places in the code. Using FP where OOP or procedural (or ECS or whatever) is more appropriate leads to those kinds of problems, IME. I'll grant that I haven't had a whole lot of experience in that, nor have I worked with people who have had a whole lot of experience in that.
Functional programming is a powerful paradigm in the programming world, where strict rules are applied in order to reduce bugs to a point where, in many common cases, they are almost impossible to write. It doesn't come without some difficulties though, and so this video aims to explain very simply, the concepts that underline the two main functional paradigms.
No actual mention of how, supposedly, "bugs are near impossible", because of course that's not true, and not even a characteristic of the functional paradigm.
You may have forgotten to watch the video? He mentions immutability and the absence of side effects, the two most common causes of any bug. Though he could have added absence of null, referential transparency and equivalence too.
Honestly, I understand what the title implies just by understanding the more declarative nature of functional programming. I think you're reading into the title too much friend.
3blue1brown meets fireship. Amazing animation that clearly communicates your message. I wish you made courses, ESPECIALLY low-level courses. You would be an awesome teacher
Both 3blue1brown and fireship are excellent teachers and presenters. Fireship is like a human Google AI summary of relevant subjects. How that guy is able to digest so many subjects is amazing.
Sorry, I can't agree. 3blue1brown's animations graphically illustrate complex relationships and make seemingly invisible patterns comprehensible. (Even if Grant does use those "Pi" characters; they are stand-ins for *us*. They are confused or perplexed by or questioning things that *we* should be confused by....and that Grant is about to explain.)
I didn't get any of that from Coderized's "coding style" video. I don't know much about this topic and I didn't come away having learned anything. This is more a *review* video for those that already *know* these styles and differences, and might need a refresher, or want to see them brought together in one place. So maybe it succeeds at that.
I would prefer some actual code examples shown side-by-side that illustrate how each approach (or language) handles an important feature, and then the examples dissected to show the pros and cons of each approach. (Other commenters made this same point, and there *are* videos out there that do just that.) Instead (and this is all too common) the animations here are little more than "eye-candy" to fill the screen during the narration. At times you could close your eyes and not miss anything, because there is no real information on the screen. At about 1:50 he does show something technical, and I had my hopes up. But then it's soon back to cutesy and meaningless animations that contribute nothing as he confusingly discussed closures. (And the proof of that is that he later went back and re-edited the video with additional material to try and fill in the gaps.)
With Grant's videos (3Blue1Brown), I almost always want to know more about the topic, and am inspired to seek further. Often I am so captivated that I wind up binge watching more of his videos. Here, with Coderized, I feel cheated that the video is over and I didn't learn anything. I *am* inspired to learn more about this topic, just not via this channel.
Here's the best part about programming paradigms: unless you're in a really bad enterprise situation where you're forced to use only one, you can mix and match them. Functional programming is pretty useful, but there are times when side effects are needed, such as in game development, digital signal processing, etc. D supports multiple programming paradigms, and thus that's my main preferred programming language.
You sometimes lose a lot of the benefits of some paradigms by doing that, though. If you can return closures from methods inside objects that refer to instance variables, you lose some of the encapsulation. If you have a functional language where you can return a writable pointer to data, you lose the referential transparency. Stuff like that. Sort of like how an unsafe language lets you screw up *anyone's* code, not just yours.
@@concernedcitizen3254 It's not about being fashionable. It's about understanding that the current state of the art is far from its pinnacle and we're slouching iteratively towards Bethlehem with each paradigm shift. We've been coding for maybe 100 years...where will we be in 100 more? 1000? 10,000?
There are ways to solve the issues of needing to create side effects for game development without sacrificing the benefits of immutability and declarative programming. Take ECS, where every time the world is analyzed/queried in a system, it's like it takes a "snapshot" of its current state, with everything about it only being readable/immutable. If you want to modify an entity's component, you can mutate the world by replacing that component with a modified, still immutable one, but it still will not affect the "snapshot" until the next time you query. Because systems always run in a set order, the mutations one system makes will never interrupt the task of another system in unpredictable ways.
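Here's a rough sketch of that snapshot-and-commands idea (hypothetical TypeScript, not any particular ECS library): systems only read a frozen view of the world and return commands, and the mutations are applied between systems.
```
// Hypothetical minimal ECS-style loop: systems read an immutable snapshot
// and return "commands"; mutations are applied only after the system has run.
type Entity = { id: number; x: number; vx: number };
type World = ReadonlyArray<Readonly<Entity>>;
type Command = { id: number; patch: Partial<Entity> };

// A system: pure with respect to the snapshot it was given.
const moveSystem = (world: World): Command[] =>
  world.map(e => ({ id: e.id, patch: { x: e.x + e.vx } }));

// Applying the commands builds the next snapshot; the old one is untouched.
const apply = (world: World, cmds: Command[]): World =>
  world.map(e => {
    const cmd = cmds.find(c => c.id === e.id);
    return cmd ? { ...e, ...cmd.patch } : e;
  });

let world: World = [{ id: 1, x: 0, vx: 2 }];
world = apply(world, moveSystem(world)); // [{ id: 1, x: 2, vx: 2 }]
```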
Well, every paradigm can have good or bad code; you can never have a *bugless* paradigm or bugless language (the bug is created by the user, after all). The title is just meant to be sensational
Short version: Because if the entire code is stateless and immutable, then the entire code can be represented by mathematical functions, which can then be used to compare against the specs and even to automatically prove properties.
Long version: Say you have to implement a traffic light, and need to prove that under no circumstance will two perpendicular lights both signal green. In imperative programming, QA and testing are the state-of-the-art approach. But those are probabilistic: unless you have a test case for each and every possible input, for each and every possible program state, tests are only evidence but never a full proof. Even if you just have two 64-bit input variables, that means having 2^128 (~3.4e38) test cases for a full proof. This problem is known as "state space explosion". In order to reduce it you need to involve semantics definitions of the programming language, and because of side effects those can get pretty nasty pretty fast. In functional programming you can just prove each function individually, since there are no side effects. So all you need to do is show that the traffic-light output function is not capable of showing green lights for two perpendicular lights for a range of inputs. Then you show the arguments are indeed in that range, by showing the previous function's output is in that range. Rinse and repeat until you're left with constants and user input.
It's just so counter-intuitive to think functional. In the famous cooking recipe analogy to an algorithm, it would be like describing cooked spaghetti as a series of transformations on raw noodles and water:
Functional: cook_spaghetti( spaghetti, add_salt( boil(water) ) )
Imperative:
1. take water
2. boil water
3. add salt
4. add spaghetti
5. cook spaghetti
IMO the imperative list of commands is just so quick to write and cheap to develop that it's hard to justify the higher development cost. Unless you're sending stuff to space or writing safety-critical software, tests will do just fine and will be much cheaper. However there are some uses outside that space: when you're dealing with lots of concurrent events, handling everything in imperative event handlers is a nightmare to maintain. Observables a la Reactive UI have been a godsend in that regard.
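Going back to the traffic light, here's a small TypeScript sketch (the phase names and the lightsFor function are invented for illustration): because the output function is pure and its input space is tiny, the safety property can be checked exhaustively instead of being sampled by tests.
```
// Hypothetical pure output function for two perpendicular lights.
type Phase = "NS_GREEN" | "EW_GREEN" | "ALL_RED";
type Lights = { northSouth: "green" | "red"; eastWest: "green" | "red" };

const lightsFor = (phase: Phase): Lights => ({
  northSouth: phase === "NS_GREEN" ? "green" : "red",
  eastWest: phase === "EW_GREEN" ? "green" : "red",
});

// The safety property, checked over the *entire* input space (3 cases),
// which is only possible because lightsFor depends on nothing but `phase`.
const phases: Phase[] = ["NS_GREEN", "EW_GREEN", "ALL_RED"];
const safe = phases.every(p => {
  const l = lightsFor(p);
  return !(l.northSouth === "green" && l.eastWest === "green");
});
console.log(safe); // true
```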
@@powertomato It's not really about proving that the programs are correct. No one got the time to write formal proofs. It's just that stateless programs are in general easier to reason about. You don't have to worry about which order you call X() and Y(). If you can call both, then both are safe to use. When programming with states there's a risk that calling Y() before X() causes trouble, because the programmer didn't consider this possibility. Suddenly you need to reason about which order things are done. But this is just one type of bug. Stateless programs are still subject to other types of bugs.
5:17 - Fun fact, before JavaScript classes were introduced, people just used functions as classes or object constructors. The function would return a plain object, onto which you would assign functions which you wanted to be the object's methods. Any data or "methods" which you wanted to simulate as private properties and methods would not be returned on the plain object but simply created inside the scope of the function
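Something like this (a rough TypeScript-flavoured sketch, the account names are made up): the returned object exposes methods that close over `balance`, which never appears on the object itself.
```
// Pre-`class` style: a factory function whose closure holds the "private" state.
function makeAccount(initial: number) {
  let balance = initial; // not a property, only reachable through the closures below

  return {
    deposit(amount: number) { balance += amount; },
    getBalance() { return balance; },
  };
}

const acc = makeAccount(100);
acc.deposit(50);
console.log(acc.getBalance()); // 150
// There is no acc.balance property: the state lives only in the closure.
```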
although this was/is a valid pattern, it may be important to note that js has always been based on the prototype model, classes are mostly sugar around the same proto model. e.g. rather than write a function that statically returns new closures (costing memory because scope), you'd write a prototype object then construct that instead. search 'prototype mdn' for more info
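for contrast, a tiny sketch of the prototype style (illustrative TypeScript, not from the video): the behaviour lives once on a shared prototype object and instances only carry their own data, rather than a fresh set of closures per instance.
```
// Prototype style: one shared behaviour object; instances link to it.
const accountProto = {
  deposit(this: { balance: number }, amount: number) {
    this.balance += amount;
  },
};

const acc = Object.create(accountProto) as { balance: number } & typeof accountProto;
acc.balance = 100;
acc.deposit(50);
console.log(acc.balance); // 150
// Behaviour is shared instead of duplicated per instance, but note there is
// no privacy here, unlike the closure-based version above.
```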
@@darrennew8211 Creating objects in closure isn't prototype OOP. Prototype OOP is attaching another actual object to search for properties in if they don't exist in initial object.
@@BlazingMagpie That's prototype inheritance. But sure, it can get kind of fuzzy around the edges, given the number of different ways to do encapsulation, instantiation, and inheritance.
@@melissaprice1424 I could not say that it's totally useless, since it introduced me to the different paradigms in one of the visuals as a beginner, but I have to admit that most of the stuff goes in one ear and out the other
I remember using a strongly-typed pure functional language… it was a nightmare to get anything past the compiler! But when you did, it never had bugs. Instead of coding quickly then spending ages bug fixing, you would spend ages coding and hardly any time bug fixing. And, as a bonus, it was sometimes possible to mathematically prove your code was correct.
I just love how resilient FP code is to bugs. I'm trying to use it in the core parts of my code because you know you can really trust it to just work. I also love its strong modular nature. It makes it so easy to change the code's behavior.
I love these videos! The slow pace, beautiful animations and how you compare these really complex subjects to our everyday objects makes it so much more clear to me!
I think this is very interesting. Cause while making an adder you try your best to derive logic from binary inputs such as 0b1 for example then 0b10. You want to combine those inputs in an and gate to get 0b11 for individual bits. So then you start creating an equation to explain what each logic gate could be doing. And you can use that example as 0b1+0b1==0b10 or 0b1 and 0b1 == 0b10. With xor gates you simply have the even numbers and 0 and you can think of it as a Table or index in which the bits have to equate to. Which is why you create if and then functions. Making it a more declarative type of mathematical language. Just it uses symbols for the if and then functions. Which means you could also do ⟹ [{|0b1|}] to represent then true. Meaning if the function is true then it outputs 0b1. It also uses lim(n) as a function since setting the limit would be important for 8 bit input counters or adders. Think of it as similar to boolean algebra but with if and then statements and a limit function to have a limit bit integer. It's also structured so that the bit's are represented clearly and so that only with the prefix 0b it can indicate as binary. Anything with |n| would mean the exact value of n. n multiplied by 1 being defined as 1 or 0. or {|n(1)|} := {|1|,|0|}. Since it would be multiplied by a factor of 1. But if multiplied by a factor of 2 and so on then its {(|0b10| ∨ |2|), (|0b1| ∨ |1|), (|0b0| ∨ |0|)} Which would be (n+n'...). Why n+n' Well simply because that signifies the addition between the two binary numbers. And you use if functions to basically check if n equals a certain number whether its decimal or binary and then if the number equals each individual number of a table then the result should be true. This would work for both xor and xnor. But if you want to do xor then just do n%2=={}
Just started training at a company that primarily uses Scala, and it's actually been a pretty interesting experience getting a handle on a new style of programming.
@ashaw596 If you're using pure functions, the compiler detects a lot of the stuff for you that would otherwise get caught at runtime. So from my, albeit short, experience, yeah, it seems as advertised.
@@AndreiGeorgescu-j9p Thankfully the company provided us with the latest edition of the book. Having to go through all of it and provide output in 3 weeks was quite the challenge, but we pulled through. We're getting engrossed with the frameworks that support it now.
I started off with C++. When I went to Python and then to my now main language of JavaScript, I was so confused that a lot of focus was placed on functions and not classes. Like yeah you can still use classes, but tutorials don't spend much time on them. While my C++ class was literally called C++ and Data Structures (which are based on classes). Now I just have my main function that calls a whole bunch of other functions to do things that run forever. 😊 All my variables are fleeting and only things stored in a database matter. The different styles of programming are fascinating.
I have been coding in JavaScript mostly and recently got into C# and TypeScript. Coming from JavaScript (specifically React) I was totally on board with functional coding. Then moving to TS and C# I found out about the wonder that is classes, and OO programming. Now I'm finding myself mixing the two, and this video was an awesome look at something I thought I already knew a lot about. I didn't even know there was such a thing as "pure functions". Awesome quality, and I definitely learned something.
I finally understand what a closure is, I swear I've banged my head up against FP so many times because I couldn't get my head around closures. Thank you for your short, simple and clear explanation.
In my experience pure functions are not significantly more bug free. Most bugs I experience are:
1. incorrect interpretation of requirements
2. careless coding
3. issues with data being passed from one part of the program to another or one system to another
4. UI related unexpected behavior
5. a misunderstanding of the language/framework being used
In the code, whether the function is 'pure' or takes in mutable arguments has very little effect on how many bugs may occur in that function, and pure functions may in some instances be much harder to work with and understand, thus creating more bugs, which is why most of us do not particularly try to use pure functions even when it is possible to do so. Functional programming can have an edge when you do full unit testing and need highly reliable software where each function is analyzed for correctness, but that type of code is actually quite rare in the real world.
It tends to work best when your problem domain is functional. Something like a SQL interpreter (where the results are committed at the end) or a compiler or a sorting algorithm or something like that. If you're doing something intrinsically mutable (game engine) it's much less feasible and helpful, because as you say most of the errors made by people experienced enough to be using functional programming are logic errors.
@@aaronfleisher4694 I disagree that logic bugs are always careless coding. Sometimes they're just mistakes. Sometimes they're just situations your users tried that you didn't think of and break your code.
@@AndreiGeorgescu-j9p I disagree with your list of common bugs. Those are "common bugs that are pretty trivial to find the cause for." IME, any time the code base gets bigger than one person can keep in their head, the most common errors are 1) logic errors, 2) people who didn't know the requirements (whether coders or customers), 3) misunderstanding how libraries or frameworks you rely on work. 90% of what you say is right, but it's specific to a certain kind of program and a certain size of program.
@@AndreiGeorgescu-j9p I think you are generalizing too much. The software development space is large and the problems in one type of development are vastly different from those in another type of development. The issues in an OS project vs a website are vastly different. You mention null reference exceptions for example. When I code in Rust, I never ever get those because you can't. In JavaScript or Java it is certainly possible, but in my experience relatively rare in Java, and when it occurs in JavaScript it tends to be due to unexpected data entering the function, not because of whether or not the parameters are mutable. I stand by my point that in my own line of work purely functional programming would not only be impossible but where it is possible it would make very little difference to the number of bugs I deal with because the vast majority of them have to do with issues of requirements, carelessness, communications between systems etc. We generally don't do unit testing for this reason because end to end testing is far more effective at picking up the bugs that actually occur. But in other types of development the experience will be very different. As mentioned I also have some work in Rust and the types of bugs are very different but still almost always intersystem or requirements related. Most of my Rust work however involves either new development or updating the code to handle updated dependencies. With Rust, my experience is that once it compiles it usually works and any changes will be due to new requirements not bugs.
@@AndreiGeorgescu-j9p Yes. It's entirely possible for something to be common but trivial. You know what's even more common and even more trivial, in exactly the same way? Misspelling a keyword, or leaving out a closing brace. Null reference exceptions are no longer a billion dollar mistake now that we have memory protection on computers. I disagree that the reduced cognitive load of FP would necessarily reduce the number of logical errors. A logical error is when you write code that you think is correct but it isn't. Making it easier to write correct code won't make you think the solution is better. I think people who are used to writing smaller simpler programs have a different idea of the distribution of bugs compared to someone writing large complex interacting long-lived systems.
Wow, great video! I left a comment on your other video and it's impressive to see how fast you iterated on it. Thanks so much for the work you do. My feedback on this is that since functional is a difficult topic that handles a lot of abstract ideas (with some abstractions on top of the abstractions), even if the explanation is logical and concise it doesn't necessarily reduce the difficulty in understanding it. One still has to keep a lot of these abstract concepts in their head. This was amplified since the video touched on many various topics (memoization, declarative/imperative, side effect, closure, etc.) in a relatively short video. The animations you used were beautiful and I loved to look at them. But in a lot of ways they didn't actually help me understand the topic any better, even in cases where they were used to represent abstract ideas. Contrast this with 3B1B (I know that's a high bar haha) where most of the animations he uses actually enhances the viewer's understanding of the topic at hand (and then is a little cute in the down time). Take 5:48 for example. While the diamond represents the abstraction of an expensive operation, the content doesn't actually visually represent that efficiency (i.e. processing an expensive operation multiple times) which forces you to hold that in your head kind of abstractly as you process the rest of the video. Another example is in the filter/sort/map around the 4:00 mark. The animations are beautiful and they do make sense. But more so if you already know what they do! The subtlety, speed, and singular nature of those animations make it less useful to someone who isn't already familiar with the topic at hand. Again this is all because I love the content that you make and I will continue to watch regardless haha. So it's not critical, just my two cents. Thank you for making content!
I think this is good feedback. I liked the visuals and tone but it unfortunately didn't help me take away any insight that I can apply to my own work. Maybe that's unfair as it might not have been the intention of the video. But I really would have liked to see the same concepts covered through a more problem-based learning approach. "Here's a scenario where non-functional code is problematic, and here's how you can apply these functional concepts to write better code". Great work though, just can be pushed further imo :)
It's actually rather easy to make bugs in even a pure functional program. There is a paradigm where bugs are nearly impossible, though. It's the dependently typed paradigm, which is a subset of the pure functional paradigm. In a nutshell, a dependently typed language allows a function to return a value of a different type depending on the input *value* (not type, that's just generics), and the type is enforced at compile time. This is very useful because you can now encode proofs and contracts that ensure the program works as intended. Essentially a statically typed contract-oriented programming that you can reason about manually
I absolutely love how you expressed a con of functional programming as "Tricky for Imperative Programmers" ... this is perfect. I have been doing functional programming professionally for over a decade now. So many times I come across comparison videos that get this part confused and end up only saying that functional programming is "tricky" or "hard to read" ... but that is just their imperative bias coming through unchecked. We aren't born with the intuitive understanding of how a for loop works, we have to learn it ... and everyone learning programming learns it very early ... but we can't take for granted that learning still occurred. When people come across a map function your first instinct is that it's more confusing and harder to read than a simple for loop ... but this isn't a fundamental fact, it's just that you've been using a for loop for so much longer you just aren't used to using and seeing the map function. Once you are used to higher order functions like map, you may start to find them incredibly elegant (I do) and then start to prefer them. So thank you for getting this part right where so many other videos have failed!
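A trivial TypeScript illustration of that familiarity gap (nothing from the video, just an example): both snippets do the same thing, and whether the second reads as "more elegant" or "more confusing" is mostly a matter of which one you've stared at for longer.
```
const prices = [10, 20, 30];

// Imperative: spell out the loop and the intermediate array.
const withTaxLoop: number[] = [];
for (let i = 0; i < prices.length; i++) {
  withTaxLoop.push(prices[i] * 1.2);
}

// Higher-order function: say what the result is, not how to build it.
const withTaxMap = prices.map(p => p * 1.2);
```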
bruh let's be real. most imperative programmers arent going to have that hard of a time figuring out how map works. functional code can get really function-nested or transformation-nested and thats what trips people up. if you want to design an optimised functional style model of something, you have to plan ahead of time the right transformations your input data should go through to give you the desired output and mastering that isnt just a matter of lack of familiarity with functional paradigm. it's genuinely objectively harder for normies to wrap their head around.
@@samuraijosh1595 That's because most people do a terrible job at domain modelling in their current paradigm too. Less of a functional programming problem, and more of a poor design problem.
The hardest part about learning FP is unlearning the imperative paradigm. People confuse simplicity and familiarity all the time, and it significantly hinders progress.
I loved the Room 101 joke. On another topic. We have a bad habit in computing to use the same word for different concepts. Often we use the same word for a static entity and a dynamic entity. And I think this can be very confusing to learners. Some example are “variable”, “scope”, and “function”. I’m thoroughly guilty of this myself, although I’m trying to do it less. When I try to teach this topic, I find it useful to distinguish between function definitions and function values. Closures are just function values. The idea that a single function definition can give rise to different function values at different times takes some time for programmers to understand when they first encounter it. Anyway, great video; please keep it up.
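To illustrate the definition-vs-value distinction with a small TypeScript sketch (makeCounter is an invented example): one static function definition, but each call produces a different function value, a closure over its own `count`.
```
// One function *definition*...
function makeCounter() {
  let count = 0;
  return () => ++count; // ...yielding a new function *value* (a closure) per call.
}

const c1 = makeCounter();
const c2 = makeCounter();
c1(); c1();              // c1's captured count is now 2
console.log(c1(), c2()); // 3 1  (two distinct values from one definition)
```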
7:11 The infamous Monad was recognized in Homological Algebra after the original term "standard construction" was used for it, but before it was officially a "monad" in category theory back in 1967. The etymology of the original term "triad" gives a crucial hint for what the monad is. First, since every function should take in one and only one parameter and return one and only one value, we need a rule set that is surjective to map B -> C. Enter the context! This world context C has access to the value B and carries rules, such as lazy invocations of the runtime to write/read file content, if one wants to manipulate the context passed around from one function to another. Secondly, we must have an injective function mapping A -> B. Luckily, any pure function will do the trick, but if you wanted to, you could use a function that is monadic as well. The reason that division is monadic in nature is that division by zero cannot have a defined result; thus we say this case returns Nothing while everything else creates Just the divided value.
We commonly say that the IO monad is a more specific form of the state monad because it has more specific rules about the kind of state we're manipulating and how it changes. It's not like we can represent Tree Monads as IO Monads, since it'd be silly to have tree rules used to manipulate file sockets. But if we wanted to use the value in the Tree Monad in an IO Monad, one must implement a way to extract a value from the Tree Monad. This may not be guaranteed in every monad, like the Maybe Monad, but it's something to think about when designing your own monads.
That's why Monads are called "triples" or "triads" in category theory: the name stands for the three components of this specific structure: the input, the output value, and the world context carrying the rules to manipulate itself and the output value (thus why monads are endofunctors). The more you know 🌈
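If it helps, here's a rough TypeScript rendering of that division example (a hand-rolled Maybe for illustration, not any standard library): the "no defined result" case is carried in the return type instead of being thrown as a side effect, and chaining short-circuits on Nothing.
```
// Hand-rolled Maybe: either a wrapped value or nothing at all.
type Maybe<T> = { kind: "just"; value: T } | { kind: "nothing" };
const just = <T>(value: T): Maybe<T> => ({ kind: "just", value });
const nothing: Maybe<never> = { kind: "nothing" };

// Division by zero has no defined result, so it returns Nothing rather than throwing.
const safeDivide = (a: number, b: number): Maybe<number> =>
  b === 0 ? nothing : just(a / b);

// Chaining ("bind"): the Nothing case short-circuits the rest of the computation.
const bind = <A, B>(m: Maybe<A>, f: (a: A) => Maybe<B>): Maybe<B> =>
  m.kind === "just" ? f(m.value) : nothing;

const result = bind(safeDivide(10, 2), x => safeDivide(x, 0)); // { kind: "nothing" }
```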
@@clamhammer2463 While useful as a way to look at a 3D rotation in a 4D perspective, you were better off just learning 3D rotors as a Marc ten Bosch article explains. On the other hand, if you're in a code base already inundated in quaternions, you're good to go there. Might as well just learn linear types and effect systems if you only care to have effects from a straight forward perspective.
@@SimGunther this was mostly a joke but I was required to learn it in order to encode the pitch/yaw/roll of a space shuttle to easily do the math with an easier data structure. After looking into monads and learning what they are and what they are for, I'll realistically never reach in that pocket of my toolbag again.
@@clamhammer2463 Glad to hear about that experience with quaternions! Also, that impracticality and unwieldy nature of Monads beyond the super mathy world is exactly why I mentioned linear types and effect systems, if that's the itch you need to scratch in a functional language.
The "imperative" vs "declarative" divide is a bit nuanced. Let's say my problem is to transform a bunch of data. Here, functional programming is pretty great, and very *declarative* about the problem. But let's say my problem is low-level kernel programming. Well, most functional languages would really struggle to declaratively describe the underlying metal. In fact, in this domain, assembly might be the most declarative language. Database operations? SQL is just English at that point. But let's make the problem higher level. Let's make the problem "build me a customer dashboard". At this point, nothing is declarative. Until we have AI that can take words in and put full solutions out, real-world problems require a solution that needs to articulate implementation details that end users will not see; what we might call "imperative". The problem isn't arrays, or registers, or monoids, or functors.... it's a customer dashboard. This declarative/imperative classification is really domain-dependent, and it falls apart in 99% of real-world applications. Hillary Clinton is my dad.
I can generalize that for you. Software engineering is engineering. Engineering has only three rules: 1) Deliver! 2) Deliver on time! 3) Deliver on budget! That's it. Absolutely nobody gives a frell what's on the inside, as long as it works on the outside.
There is a simpler description for the semantics of lexical closures: when you evaluate an `(a) -> (b) -> a + b` function over the value `v` you substitute `v` for `a` and end up with `(b) -> v + b`, so there's no need for the term "closure", as such substitution is the natural way to apply a function in FP
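Written out in TypeScript, just to make the substitution reading concrete:
```
const add = (a: number) => (b: number) => a + b;

// Applying add to 5 conceptually substitutes 5 for `a`,
// leaving the function value (b) => 5 + b.
const add5 = add(5);
console.log(add5(3)); // 8
```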
@@ekstrapolatoraproksymujacy412 I use pure functional programming when I have to build huge pipelines for huge distributed datasets, and I need the process to be absolutely immutable and deterministic.
I appreciate this very much! It's a great way of thinking about the paradigm and how we should think while using it, but it does lead to some misconceptions about things like its performance and storage aspects. For the languages themselves, there's some funky stuff that actually goes on underneath the abstraction layer we think on. The craziest example I have is how some languages don't completely create new versions of data, but structure references in a way that enables very very fast equality checking for structures as nested as you want. So, some underlying data structures stuff ends up being faster than its non-functional brethren, despite the intuition (coming from a procedural mindset) that it should be slower.
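As a rough illustration of the trick (a hand-rolled TypeScript sketch, not any specific library's internals): an "update" copies only the path that changed, so untouched branches are literally the same object and can be compared with a single reference check instead of a deep walk.
```
type State = { user: { name: string }; settings: { theme: string } };

const state: State = { user: { name: "Ada" }, settings: { theme: "dark" } };
// Immutable update: only the changed path is recreated.
const next: State = { ...state, user: { ...state.user, name: "Grace" } };

// The untouched branch is shared, so "did settings change?" is one pointer compare.
console.log(next.settings === state.settings); // true
console.log(next.user === state.user);         // false
```
Real persistent data structures (tries, HAMTs, etc.) push this sharing much further, but the principle is the same.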
can you give an actual example of this instead of making a vague point that tries to assuage the bad reputation functional style has in terms of its performance?
@@samuraijosh1595 Gosh, sure! It's going to be a struggle. A handful of functional programming talks do bring this up and mention it in passing, but it's not the easiest to google search transcripts of YouTube videos. I'll see what I can dig up from my memory and I hope I can satisfy your question. Typing out a response on my phone is tricky. One would be Richard Feldman's talk "outperforming imperative with pure functional" describing some work he's doing on a newer language. A handful of Clojure talks mention this when introducing the language as a core for how they perform equality checks. There's one by Mohit Thatte called "What Lies Beneath - A Deep Dive Into Clojure's Data Structures". And, I could have sworn there's a talk by David Nolan where he's doing a beginner's talk for CLJS where he mentions the data structures in passing. I'm sure there are other examples, but, simply, (some) functional language designers do not simply slap immutable structure primitives into their languages without thought or the recognition that there are performance hurdles. There are tricks that go on under the hood that are intentionally hidden or at least abstracted from thought while you utilize it with the general immutable paradigm!
@@samuraijosh1595 I have worked with Scala in a pure functional way, and it is the most performant thing that I found to handle huge distributed datasets. However, I saw juniors having many problems adapting to it, because it uses some constraints/axioms that people are not used to, so they use the language in a non pure functional way, they get a non performant process, and they conclude that the language is the problem itself. This is why I think it is the best way of programming, but it will remain a niche, because you need a high level of logical-mathematical thinking to drop your axioms and start reasoning from new ones. I think FP is like superdeterminism in physics: once you obey its axioms, you will see how well it represents reality and you won't want to go back to old axioms, but the process is emotionally hard because it says that free will doesn't exist, randomness doesn't exist, and you can not have FTL interactions. Most people have previous core axioms that contradict 1 of these 3 axioms. I think something similar happens with FP. When someone shows you an axiomatic model that makes better predictions than your axiomatic model, that creates cognitive dissonance in your mind. The first reaction that people have to cognitive dissonance is negation of the source that caused it.
there's a saying in the haskell community: "You should be able to figure out what a function does from just reading the function signature." half of the work is just the function signatures. a lot of the work is upfront so bugs in functional code are called "bad function signatures" and "type errors", both compile time concepts that never make it to production. -mostly because there are like 2 haskell users in production.- refactoring is guaranteed to be correct in the end, but you may have to churn through a whole chain of other function signatures if you forget to make something a monad, for example. i am *not* speaking from experience, just what i've heard around.
Sounds about right. Tho unless you go into theorem-proving territory, you will often have multiple possible functions for any given type. (still, most of them will be obviously incorrect, so it's definitely harder to mess up)
@@mskiptr i've heard another quote about someone who was preaching how the language Ada can encode such strong constraints that bugs were practically impossible. someone then asked if Ada could catch the following bug, which i believe is such a blunt reminder of logical errors and even typos:
```
-- adds two numbers
procedure Add (A, B: in Integer; C: out Integer) is
begin
   C := A - B; -- definitely adds two numbers, don't mind the fact that it says A minus B, hehe
end Add;
```
@@raffimolero64 That's certainly a major thing to account for. Even if you prove your code correct, if your specification has bugs, the code will not be what you expect. But we can remedy this at least to some extent. Your function in Idris would go something like this:
```
add : (a, b : Nat) -> Nat
add a b = a - b
```
But then, once you go to use it, you might need the fact that `add x y = add y x` to call certain functions. Because of the bug this isn't true here, so you won't be able to construct a proof for that proposition. But for the regular `(+)` you can!
```
plusCommutative : (left, right : Nat) -> left + right = right + left
plusCommutative 0 right = Calc $
  |~ right
  ~~ right + 0 ..
```
As with any reasonable programmer I'm happy to use functional styles in my code where they make sense, but to say that bugs are near impossible or even significantly reduced by functional programming is a fundamental misunderstanding of software development. Most bugs are caused by factors like misunderstanding APIs, logical errors in algorithms, not knowing what combinations of state may actually be present in a program, or multithreaded race conditions. Functional programming can help with these somewhat, particularly in collection manipulations where readability is much better, but at the end of the day all of these errors are fundamentally just products of having to solve complicated problems in complicated systems with finite resources and time and that just extends far past where a coding style has any meaningful influence.
What the heck! Those neat animations combined with those succinct explanations really helped me understand concepts I've had trouble understanding.
The animations are beautiful as others said, but this is probably not the best way to explain all the concepts and the benefits they offer. I am very experienced in functional programming but these descriptions look a bit rushed, too theoretical and confusing. Just my two cents.
I'm teaching myself to code, and I've watched video after video about functional programming. And this is the first one of them I've watched that I understand, so I mean, there's that.
Never thought I would get cozy consuming functional programming related content. Beautiful, purely beautiful. So purely beautiful it would never need a monad or other filthy thing. Congratulations on the video and the effort put in the animations.
It actually harms the video because it brings up an irrelevant concept that most viewers won't know about and thereby get confused by why it's brought up in the first place.
Technical videos are usually someone typing examples with a screen recorder. I don't hate that, but this was a really unique learning experience too. Thanks
Great content. If all programming courses were as focused on the main concepts as in these videos, without getting lost in details and syntax (ceremonies, as Venkat Subramaniam would call them), becoming a programmer would be a breeze. Instant subscription for me.
Good video, but OO is neither Imperative nor Declarative. Procedural is the opposite of OO, and Imperative is the opposite of Declarative. So a language can be both pure OO and pure FP at the same time. For this to happen functions and all other values need to be objects, objects need to be immutable, and there can be no side-effects. Inheritance doesn't break the FP purity either. I know of no such language to date though.
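A rough TypeScript sketch of what that combination could look like (purely illustrative, not an existing language): every "mutator" is a pure method that returns a new immutable object instead of changing `this`.
```
// An "object" that is also a value: methods never mutate, they return new instances.
class Money {
  constructor(readonly amount: number, readonly currency: string) {}

  add(other: Money): Money {
    // No side effects and no mutation of `this`, just a new object as the result.
    return new Money(this.amount + other.amount, this.currency);
  }
}

const a = new Money(10, "EUR");
const b = a.add(new Money(5, "EUR")); // a is untouched; b is a fresh value
console.log(a.amount, b.amount);      // 10 15
```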
This is what I love to do. Sometimes when I’m bored and have nothing else to do I download some github stuff and just improve it as much as I can. This can be really simple stuff or really hard stuff (which tbh I mostly give up on)
Functional has its uses, but a pure functional program doesn't do anything, as everything that matters to humans needs side effects. Coding languages support or do not support functional paradigms. C# is focused on OOP, whereas Rust is more focused on other styles.
@@bluedark7724 not true, when I had to build huge pipelines that handle huge distributed datasets, the best way I found was to use Scala in a pure functional way. I admit that it is a niche, but there are important scenarios where you need a huge process that you are sure is absolutely immutable and deterministic.
I think u have put a great effort in creating the animations. It is a great work and the explanations using these graphics are also very clear. Thanks ❤
I want to say that many things that I took for granted by years were confirmed with the animations. I now understand better what's happening behind the scenes in programming. Thank you!!!
Yeah 😅 - I uploaded this a few days ago but took it down shortly after to fix some issues and add in some clearer explanations. It has quite a few changes since the last one!
3:32 I'm glad you mentioned "a closure is a poor man's object". It's often overlooked in these types of vids that instance methods are closures over the instance variables. Objects are just a convenient way to group multiple closures together which all operate on the same scope. And like higher order functions, objects can be returned from functions. I think a good parallel is that functional vs object programming is similar to the distinction between deterministic vs non-deterministic finite automata.
I've been learning CS for 2 weeks now, which means that I probably didn't understand much of this video. But I sensed that it was clear and important. Saved it for later. Animation is amazing!
The video was great. But it only convinced me to keep as far from functional programming as possible. Maybe it's good for students to broaden their perspective but I can't imagine any large real life system architected in this paradigm.
Thinking functional programming doesn't lead to bugs is something a naive programmer with zero experience would claim. Programming languages like Rust may claim safe memory management, but that doesn't prevent you from making logical errors, for example doing the wrong mathematical operation or writing incorrect data to a file. And especially when you are working with other libraries (including std), you are working on trust that this library doesn't produce bugs. But I encourage you to simply go to the source repository of your library of choice and you will find an issue list which is filled to the brim.
1. Rust doesn't "claim" memory safety. 2. Some types of errors can indeed be solved by different (types of) languages. For example, data races are probably pretty rare in functional languages. 3. Std bugs are "relatively" rare and are mostly related to platform specific behaviour. For Rust, 0.7% of the issues on GitHub affect std. But I do agree with the fact that a lot of bugs are logic related. Most important bugs are not though. (looking at you, 70% of Chromium's high severity security bugs)
I don't think anyone thinks that. It just leads to significantly less bugs because it completely removes several types of bugs (so does Rust) - and for everyone who writes real world software this is a huge advantage.
I can't believe you have only 3 videos on your channel ! I was like "uh interesting but not sure I'm knowledgeable enough to fully understand anything, let's check if he has some kind of beginner content" Really looking forward to more to learn the world of coding :) Great job.
Ok, even though bug-free software is impossible you got me curious with the title of this video. I’m glad you added the “nearly” caveat though lol… However, I had to hit the pause button just a few minutes in at 3:18 because you’re already violating tenets of safe, clean & bug-limiting software engineering. If someone is reading this comment and you happen to be a new developer just starting out your career, please please please remember this: always strive for code that is easily & quickly understood, is cleanly written regardless of the paradigm, has helpful & appropriate comments, is unit tested, validates all external input, and gracefully handles errors. Even if you never write anything like safety-critical code that is a lot more restrictive & stringent, following those straightforward items I just listed will take you very far. Don’t be clever - be clear.
I really love your videos!! They are done with such care. Your animations are funny, yet the examples they represent are clear and you explain concepts slowly, without rushing. Thanks for all the effort you put into them! ⭐😁
Well animated, but not very informative, somehow... I guess it's targeted at complete beginners, so perhaps it would have been informative to me in the past. -- Would like a refresher on Monads though once you get around to it - but feel like at least explaining how side effects are handled in pure programming would have been sensible.
My personal favourite is OOP. The flexibilty, the encapsulation, it all fits together so perfectly. Each to their own, obviously, but in my mind, OOP just works.
well, until you add concurrency. then oop would work fine, if every object was in its own light-weight thread and communicated through message passing. unfortunately i'm not aware of an oo language that works this way.
I work with a guy that loves functional programming, and writes his apps in this way. I review his code often, and I can tell you that his code has as many bugs as anyone else's in my team.
but but but muh functional magic thingirinos!!!111!1!!1!! wait, you mean to tell me that using a different programming paradigm doesnt actually magically prevent programmers from making logic mistakes in their code?? wow, who would have thought! Like, i love functional programming, but i have to agree with you, this video tries to make it look like magic and its not a good thing to sell false expectations to newcomers to programming, especially when this type of video is usually aimed at the kind of people who are most susceptible to believing the title without giving it any second thought.
I'm really enjoying the pacing and visual representation of the functional programming concept in this presentation. Cheers to you for this approach in explaining it. I'll have this as a resource to use in my dev team's ever evolving philosophical discussions on writing gud kode. 😄
Very good introduction to the paradigm. I once came across an article about the essence of functional programming. It focuses on the principle of "composition".
Essentially what functional programming tries to accomplish is enforce the use of monovariadic (single parameter) functions, so that they become simple mappings from A->B, which is also in line with the philosophy of evaluation > execution and referential transparency: the ability to replace such a function with a value, and the code would still work, which supports memoization once more. And by being mappings from an input to an output, you can isolate them, test them and debug them. This is the reason why in my opinion "bugs are near impossible". Currying through closures is a way to create at once a chain of monovariadic functions that can be evaluated at the end of the chain - Haskell implements currying as its primary function definition by design. This ensures high compatibility.
And more importantly it is easier to reason in terms of simple A->B mappings, which leads to category theory, where the big evil "Monad" comes from, which is just the big brother of the monoid (Array.concat) and the cousin of the functor (map(a => b)). Essentially just wrapping an entire context around a value, providing a way to wrap any new (naked) value into this context, and allowing you to chain multiple functions together so that their contexts can interact (or merge) while returning the value "unwrapped". Again: Composition! This is incredibly useful to implement side-effects in a pure way. Instead of a central state that can be accessed by everything, a monad just carries that state with it and exposes just what is needed when evaluated, encapsulation done right. It allows for incredibly cool design patterns, which unfortunately do not mix well with other paradigms. Simple explanation: as soon as you introduce variable state, you lose the ability of absolute composability. You tie your logic to the order of execution.
I think it depends on what target we write code for: Humans (Functional might seem unintuitive, but once understood can unleash higher levels of expression. Focusing on clarity and sound reasoning), Computers (Procedural, that is just how the processor works, least overhead). OOP is something wedged in between the two (allures you with its intuitive concepts of objects, but down the line just becomes a jumbled up mess). I really hope they find a way to make functional programming more performant...
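Going back to the composition point, a small TypeScript sketch of the single-parameter-functions idea (all names invented): each stage is a plain A -> B mapping, so the stages snap together mechanically and each one can be tested in isolation.
```
// Composition: feed the output of one single-argument mapping into the next.
const compose = <A, B, C>(f: (a: A) => B, g: (b: B) => C) => (a: A): C => g(f(a));

// Currying: a two-argument idea expressed as a chain of one-argument functions.
const multiply = (factor: number) => (x: number) => x * factor;
const addTax = multiply(1.2);                          // number -> number
const toLabel = (x: number) => `${x.toFixed(2)} EUR`;  // number -> string

const priceLabel = compose(addTax, toLabel);           // number -> string
console.log(priceLabel(10)); // "12.00 EUR"
```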
This might be the most didactic and beautiful video about programming that I ever saw. I am a med student but I want so much to learn coding! Thank you for this wonderful work!❤
@@artxiom it isn't a match
@@friedrichmyers what are you talking about, most devs spend half their time fixing their code
Bugs are nearly impossible? You underestimate my power.
The full saying is "never underestimate the power of stupidity"
I think the proper response is, "Impossible, Here, hold my beer."
@@hari4406 OR “A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.”
― Douglas Adams, Mostly Harmless (source GoodReads)
@@hari4406 ahaha
The animation is amazing
So true
IKR? The Doodley of programming. Even the chill voice is similar
Which software are you using?
Transitions look awesome 😍
Damn, I love the Buster Sword with materia 😂
Simply amazing
"where bugs are near impossible"
Challenge accepted.
i think one of the largest benefits of pure functional programming is that the referential transparency allows you to use equational reasoning on your code to _easily_ see which programs are equivalent, which makes refactoring almost trivial. it’s a lot less stressful to know that you can move things around however much you want without worrying about anything changing behind your back
In contrast, when you want to refactor things, you might wind up having to change a whole bunch of things that are unrelated to the actual change, especially if your domain isn't something you could consider functional. Imagine a highly complex set of calculations about some business you're supporting, and now Marketing wants you to (say) change how long customer responses take way down at the bottom of the stack depending on how much money the customer spent last month. Now you have to pass that data through every level down to the bottom where you need it, and in every place you call that bottom function of deciding how long something will take. The alternative is to pass around an entire blob of everything about the customer, just in case you need some of it for a specific calculation somewhere, at which point you still have to find all references to it as well as handle returning new values back up the stack when you change it.
Doing a functional approach to an OO problem makes refactoring as hard as doing a simulating an OO approach in the middle of functional code.
@@darrennew8211 it is sufficient to have a single data type that you pass around, you don’t need to populate it with everything you might possibly need, since adding more things to it only requires changing the place its constructed and the type definition. if you really need it to be globally changed immediately from deep in a branch (it’s probably a design issue but) you can always just put in a mutable reference for that specific value (and thus breaking purity).
yes, it’s an annoying refactor to make, but it’s mechanical and you can just follow what the compiler tells you and be confident that it’s still working as expected
this is not an issue i encounter regularly though, what is the context where you have ran into it when writing FP code?
if you use the standard pattern of Elm code for example, what you’re mentioning is a complete non-issue, since there all state goes through a global function and you can just add the extra state you need at the appropriate location and it will be propagated along as needed by the code you have already written
@@asdfghyter I think adding all widely-distributed state to a global structure for an entire multi-million-loc program would probably be problematic without better support. I've never had the pleasure of working on a megaline program in a purely functional language, but my brain has problems dealing with pure functional once it gets above about five thousand lines.
It sounds like you're basically saying "pass a pointer to the entire memory of the program around, and you can fix the problems with functional programming." :-)
@@darrennew8211 the idea of elm-style loop is that you have a tree structure with state that is relevant to different parts of the program, then when you enter a part of the program, you extract that part. you can also have some data that is relevant to all of the program. in order to change components of the state that is not yours, you return a message that tells the main loop to update that data.
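A rough TypeScript sketch of that Elm-style loop (the `Model`/`Msg` shapes are invented for illustration): all state lives in one value, components only describe changes as messages, and a single update function produces the next state, so new state gets threaded through in exactly one place.
```
type Model = { count: number; label: string };
type Msg = { kind: "increment" } | { kind: "rename"; to: string };

// The only place state ever changes: a pure function from (Msg, Model) to Model.
const update = (msg: Msg, model: Model): Model =>
  msg.kind === "increment"
    ? { ...model, count: model.count + 1 }
    : { ...model, label: msg.to };

// A tiny "main loop": fold the incoming messages over the initial model.
const msgs: Msg[] = [{ kind: "increment" }, { kind: "rename", to: "done" }];
const finalModel = msgs.reduce((m, msg) => update(msg, m), { count: 0, label: "start" });
console.log(finalModel); // { count: 1, label: "done" }
```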
another solution that introduces some more impurity is to use erlang-style message passing between components. that way each component is still internally pure, but they can send messages to each other and get mutable state that way.
Anyways, the thing you are describing sounds to me like it would be a problem even in OO languages, since you still need some way to have a reference to some way to find the data you need. How would you solve it in an OO-way? It's probably possible to solve it in a similar way for FP programs.
@@beowulf_of_wall_st My point is that if you have a routine that you suddenly need to have access to something to calculate a result that you never needed access to before, you might have to find all the places you call that thing and pass the new required information down the stack. Doubly-sucks if you suddenly need something like access to a database in a function you call from lots of places.
That was always my problem with FP - you wind up with one seemingly simple change ("account for X in the calculation for Y") that requires changing dozens of places in the code.
Using FP where OOP or procedural (or ECS or whatever) is more appropriate leads to those kinds of problems, IME. I'll grant that I haven't had a whole lot of experience in that, nor have I worked with people who have had a whole lot of experience in that.
Functional programming is a powerful paradigm in the programming world, where strict rules are applied in order to reduce bugs to a point where, in many common cases, they are almost impossible to write.
It doesn't come without some difficulties though, and so this video aims to explain, very simply, the concepts that underlie the two main functional paradigms.
Yeah, it stopped playing midway for me, haha!
Thanks for everything you're doing, I'm learning a lot from your work!
Pin this comment xD
I was hoping that you would explain monads in this version. Was disappointed. Well... maybe it's too hard to explain them in one short segment.
it's completely worth watching the video over and over cus it's that good
@@ProBarokis Monads would need their own video I think
No actual mention of how, supposedly, "bugs are near impossible", because of course that's not true, and not even a characteristic of the functional paradigm.
It's definitely for the youtube algorithm/to get more attention. Apparently the original video title was "what the func()"
@@prof_glue 🤣🤣🤣 I'm adding that to my vocab bucket... what the func()
I accidentally clicked the video again because there's a new title and thumbnail
You may have forgotten to watch the video? He mentions immutability and the absence of side effects, which address two of the most common causes of bugs. Though he could have added the absence of null, referential transparency and equivalence too.
Honestly, I understand what the title implies just by understanding the more declarative nature of functional programming. I think you're reading into the title too much friend.
3Blue1Brown meets Fireship. Amazing animation that clearly communicates your message. I wish you made courses, ESPECIALLY low-level courses. You would be an awesome teacher
Also reminds me a bit of scruffy
Wow this describes my perception of this video so well. Amazing content!
Both 3Blue1Brown and Fireship are excellent teachers and presenters. Fireship is like a human Google AI summary of relevant subjects. How that guy is able to digest so many subjects is amazing.
But in the fireship that I know there is much more sarcasm
Sorry, I can't agree.
3blue1brown's animations graphically illustrate complex relationships and make seemingly invisible patterns comprehensible. (Even if Grant does use those "Pi" characters; they are stand-ins for *us* . They are confused or perplexed by or questioning things that *we* should be confused by....and that Grant is about to explain.)
I didn't get any of that from Coderized's "coding style" video. I don't know much about this topic and I didn't come away having learned anything. This is more a *review* video for those that already *know* these styles and differences, and might need a refresher, or see them brought together in one place. So maybe it succeeds at that.
I would prefer some actual code examples shown side-by-side that illustrate how each approach (or language) handles an important feature, and then the examples dissected to show the pros and cons of each approach. (Other commenters made this same point, and there *are* videos out there that do just that.)
Instead (and this is all too common) the animations here are little more than "eye-candy" to fill the screen during the narration. At times you could close your eyes and not miss anything, because there is no real information on the screen.
At about 1:50 he does show something technical, and I had my hopes up. But then it's soon back to cutesy and meaningless animations that contribute nothing as he confusingly discusses closures. (And the proof of that is that he later went back and re-edited the video with additional material to try and fill in the gaps.)
With Grant's videos (3Blue1brown), I almost always want to know more about the topic, and am inspired to seek further. Often I am so captivated that I wind up binge watching more of his videos. Here, with Coderized, I feel cheated that the video is over and I didn't learn anything. I *am* inspired to learn more about this topic, just not via this channel.
Here's the best part about programming paradigms: Unless you're in a really bad enterprise situation where you're forced to use only one, then you can mix and match them. Functional programming is pretty useful, but there are times, when the side effects are needed, such as in game development, digital signal processing, etc. D supports multiple programming paradigms, and thus that's my main preferred programming language.
I 💜 multi-paradigm languages. Always nice to be able to use the right tool for the job
Yes -- the same reason I like Common Lisp, give or take a few questionable macros that seem to be popular.
You sometimes lose a lot of the benefits of some paradigms by doing that, though. If you can return closures from methods inside objects that refer to instance variables, you lose some of the encapsulation. If you have a functional language where you can return a writable pointer to data, you lose the referential transparency. Stuff like that. Sort of like how an unsafe language lets you screw up *anyone's* code, not just yours.
@@concernedcitizen3254 It's not about being fashionable. It's about understanding that the current state of the art is far from its pinnacle and we're slouching iteratively towards Bethlehem with each paradigm shift. We've been coding for maybe 100 years...where will we be in 100 more? 1000? 10,000?
There are ways to solve the issues of needing to create side effects for game development without sacrificing the benefits of immutability and declarative programming. Take ECS, where every time the world is analyzed/queried in a system, it's like it takes a "snapshot" of its current state, with everything about it only being readable/immutable. If you want to modify an entity's component, you can mutate the world by replacing that component with a modified, still immutable one, but it still will not affect the "snapshot" until the next time you query. Because systems always run in a set order, the mutations one system makes will never interrupt the task of another system in unpredictable ways
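A minimal TypeScript sketch of that snapshot idea (a toy model, not any particular ECS library): each system reads a frozen view of the world and returns a new one, so nothing it does can disturb another system part-way through a query.
```
type Entity = { id: number; hp: number };
type World = ReadonlyArray<Entity>;

// A "system" is a pure function from one snapshot of the world to the next.
const damageSystem = (world: World): World =>
  world.map((e) => ({ ...e, hp: e.hp - 1 }));

const frame0: World = [{ id: 1, hp: 10 }, { id: 2, hp: 3 }];
const frame1 = damageSystem(frame0);

console.log(frame0[0].hp, frame1[0].hp); // 10 9  -- the old snapshot is untouched
```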
The animation is so clear, I love how it enhances your explanation. Thank you for this! Hoping to see more
Ok, I still miss the part where he tells us why he thinks that functional makes bugs near impossible
Well, every paradigm can have good or bad code; you can never have a *bugless* paradigm or bugless language (the bug is created by the user, afterall). The title is just meant to be sensational
As long as you code in dark mode, it wont attract any bugs.
Short version:
Because if the entire code is stateless and immutable, then the entire code can be represented by mathematical functions which then can be used to compare against the specs and to even automatically prove properties.
Long version:
Say you have to implement a traffic light, and need to prove that at under no circumstance, two perpendicular lights will both signal a green light.
In imperative programming QA and testing are the state of the art approach. But those are probabilistic: unless you have a testcase for each and every possible input, for each and every possible program state, tests are only evidence but never a full proof. Even if you just have two 64bit input variables, that means having 2^128 (~3.4e38) testcases for a full proof. This problem is known as "state space explosion". In order to reduce it you need to involve semantics definitions of the programming language and because of side effects those can get pretty nasty pretty fast.
In functional programming you can just prove each function individually, since there are no side effects. So all you need to do is show that the traffic-light output function is not capable of showing green lights for two perpendicular lights for a range of inputs. Then you show the arguments are indeed in that range, by showing the previous function's output is in that range. Rinse and repeat until you're left with constants and user input.
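As a concrete sketch of that "prove each function individually" idea (the encoding and names below are invented, not from the video): with a pure transition function over a tiny state type, the safety property can be checked over every state instead of being sampled by tests.
```
type Light = "red" | "green";
type Junction = { northSouth: Light; eastWest: Light };

// Pure transition function: advance the intersection by one tick.
const step = (j: Junction): Junction =>
  j.northSouth === "green"
    ? { northSouth: "red", eastWest: "green" }
    : { northSouth: "green", eastWest: "red" };

// Safety property: perpendicular lights are never both green.
const safe = (j: Junction): boolean =>
  !(j.northSouth === "green" && j.eastWest === "green");

// Because `step` is pure and the state space is small, the check is
// exhaustive rather than probabilistic.
const lights: Light[] = ["red", "green"];
const allStates: Junction[] = lights.flatMap((ns) =>
  lights.map((ew) => ({ northSouth: ns, eastWest: ew }))
);
console.log(allStates.every((j) => safe(step(j)))); // true
```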
It's just so counter-intuitive to think functional.
In the famous cooking recipe analogy to an algorithm, it would be like describing cooked spaghetti as a series of transformations on raw noodles and water:
Functional:
cook_spaghetti(
    spaghetti,
    add_salt(
        boil(water)
    )
)
Imperative:
1. take water
2. boil water
3. add salt
4. add spaghetti
5. cook spaghetti
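The same analogy written as runnable TypeScript (a toy sketch with invented names): the functional version is one expression built from pure transformations, the imperative version is a sequence of mutations on a single pot.
```
type Pot = { contents: string[] };

// Functional: each step returns a new pot, the dish is one nested expression.
const boil = (p: Pot): Pot => ({ contents: [...p.contents, "boiled"] });
const addSalt = (p: Pot): Pot => ({ contents: [...p.contents, "salt"] });
const cookSpaghetti = (p: Pot): Pot => ({ contents: [...p.contents, "spaghetti"] });

const dish = cookSpaghetti(addSalt(boil({ contents: ["water"] })));

// Imperative: the same steps as commands mutating one pot in order.
const pot: Pot = { contents: ["water"] };
pot.contents.push("boiled");
pot.contents.push("salt");
pot.contents.push("spaghetti");

console.log(dish.contents, pot.contents); // same ingredients either way
```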
IMO the imperative list of commands is just so quick to write and cheap to develop that it's hard to justify higher development cost. Unless you're sending stuff to space or write safety critical software, tests will do just fine and will be much cheaper. However there are some uses outside that space: when you're dealing with lots of concurrent events, handling everything in imperative event handlers is a nightmare to maintain. Observables ala Reactive UI have been a godsend in that regard.
@@powertomato It’s not really about proving that the programs are correct. No one got the time to write formal proofs.
It’s just that stateless programs are in general easier to reason about. You don’t have to worry about which order you call X() and Y(). If you can call both, then both are safe to use.
When programming with states there’s a risk calling Y() before X() cause trouble, because the programmer didn’t consider this possibility. Suddenly you need to reason about which order things are done.
But this is just one type of bug. Stateless programs are still subject to other types of bugs.
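A small TypeScript illustration of that ordering hazard (the function names are made up): in the stateful version the result silently depends on whether the setup call happened first, while in the stateless version the dependency is part of the signature, so there is no call order to get wrong.
```
// Stateful: priceAfterDiscount() silently depends on loadConfig() having run.
let discount = 0;
function loadConfig(): void { discount = 0.5; }
function priceAfterDiscount(price: number): number {
  return price * (1 - discount);
}

console.log(priceAfterDiscount(100)); // 100 -- wrong if loadConfig() wasn't called yet
loadConfig();
console.log(priceAfterDiscount(100)); // 50

// Stateless: the rate is an explicit argument, so no hidden ordering exists.
const applyDiscount = (price: number, rate: number): number => price * (1 - rate);
console.log(applyDiscount(100, 0.5)); // 50
```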
@@oscarfriberg7661 "But this is just one type of bug. Stateless programs are still subject to other types of bugs." For example?
5:17 - Fun fact, before JavaScript classes were introduced, people just used functions as classes or object constructors. The function would return a plain object, onto which you would assign functions which you wanted to be the object's methods. Any data or "methods" which you wanted to simulate as private properties and methods would not be returned on the plain object but simply created inside the scope of the function
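Roughly the pre-`class` pattern being described, in TypeScript (the account example is invented for illustration): the returned object carries the "methods", while the balance stays private because it only exists in the factory function's closure.
```
function makeAccount(opening: number) {
  let balance = opening; // "private" state: never exposed on the returned object

  return {
    deposit(amount: number): void { balance += amount; },
    getBalance(): number { return balance; },
  };
}

const account = makeAccount(100);
account.deposit(50);
console.log(account.getBalance()); // 150
// There is no `account.balance` to reach into: the closure is the encapsulation.
```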
although this was/is a valid pattern, it may be important to note that js has always been based on the prototype model, classes are mostly sugar around the same proto model. e.g. rather than write a function that statically returns new closures (costing memory because scope), you'd write a prototype object then construct that instead. search 'prototype mdn' for more info
That's called "Prototype OOP".
@@darrennew8211 Creating objects in closure isn't prototype OOP. Prototype OOP is attaching another actual object to search for properties in if they don't exist in initial object.
@@BlazingMagpie That's prototype inheritance. But sure, it can get kind of fuzzy around the edges, given the number of different ways to do encapsulation, instantiation, and inheritance.
i kinda still do this
Ah yes, time for monthly self-gratifying medium-length video about a subject I already know about
Perfectly clearly explained if you are talking to an audience who already knows the information. Totally useless to a beginner.
@@melissaprice1424 it's best for people like me... I have written a little Haskell and OCaml but never learned it in much depth
@@melissaprice1424 I could not say that it's totally useless, since it introduced me to the different paradigms in one of the visuals as a beginner, but I have to admit that most of the stuff goes in one ear and out the other
I remember using a strongly-typed pure functional language… it was a nightmare to get anything past the compiler! But when you did, it never had bugs. Instead of coding quickly then spending ages bug fixing, you would spend ages coding and hardly any time bug fixing. And, as a bonus, it was sometimes possible to mathematically prove your code was correct.
I'd describe _my_ personal coding paradigm as _disfunctional._
I just love how resilient FP code is to bugs. I'm trying to use it in the core parts of my code because you know you can really trust it to just work. I also love its strong modular nature. It makes it so easy to change the code's behavior.
I love these videos! The slow pace, beautiful animations and how you compare these really complex subjects to our everyday objects makes it so much more clear to me!
I never thought I would say this about a programming video, but this video is just... beautiful.
Appreciated 💜
I think this is very interesting. Cause while making an adder you try your best to derive logic from binary inputs such as 0b1 for example then 0b10. You want to combine those inputs in an and gate to get 0b11 for individual bits. So then you start creating an equation to explain what each logic gate could be doing. And you can use that example as 0b1+0b1==0b10 or 0b1 and 0b1 == 0b10. With xor gates you simply have the even numbers and 0 and you can think of it as a Table or index in which the bits have to equate to. Which is why you create if and then functions. Making it a more declarative type of mathematical language. Just it uses symbols for the if and then functions. Which means you could also do ⟹ [{|0b1|}] to represent then true. Meaning if the function is true then it outputs 0b1. It also uses lim(n) as a function since setting the limit would be important for 8 bit input counters or adders. Think of it as similar to boolean algebra but with if and then statements and a limit function to have a limit bit integer. It's also structured so that the bit's are represented clearly and so that only with the prefix 0b it can indicate as binary. Anything with |n| would mean the exact value of n. n multiplied by 1 being defined as 1 or 0. or {|n(1)|} := {|1|,|0|}. Since it would be multiplied by a factor of 1. But if multiplied by a factor of 2 and so on then its {(|0b10| ∨ |2|), (|0b1| ∨ |1|), (|0b0| ∨ |0|)} Which would be (n+n'...). Why n+n' Well simply because that signifies the addition between the two binary numbers. And you use if functions to basically check if n equals a certain number whether its decimal or binary and then if the number equals each individual number of a table then the result should be true. This would work for both xor and xnor. But if you want to do xor then just do n%2=={}
Just started training at a company that primarily uses Scala, and it's actually been a pretty interesting experience getting a handle at a new style of programming.
Do you think there are actually less bugs?
@ashaw596 If you're using pure functions, the compiler detects a lot of the stuff for you that would otherwise get caught at runtime. So from my, albeit short, experience, yeah, it seems as advertised.
@@AndreiGeorgescu-j9p Thankfully the company provided us with the latest edition of the book. Having to go through all of it and provide output in 3 weeks was quite the challenge, but we pulled through. We're getting engrossed with the frameworks that support it now.
I started off with C++. When I went to Python and then to my now main language of JavaScript, I was so confused that a lot of focus was placed on functions and not classes.
Like yeah you can still use classes, but tutorials don't spend much time on it. While my C++ class was literally called C++ and datastructures (which are based on classes).
Now I just have my main function that calls a whole bunch of other functions to do things that run forever. 😊 All my variables are fleeting and only things stored in a database matter. The different styles of programming are fascinating.
I have been coding in javascript mostly and recently got into C# and typescript. Coming from javascript (specifically react) I was totally on board with functional coding. Then moving to TS and C# I found out about the wonder that is classes, and OO programming. Now I'm finding myself mixing the two, and this video was an awesome look at something I thought I already knew a lot about. I didn't even know there was such a thing as "pure functions". Awesome quality, and I definitely learned something.
Thanks man! Glad you enjoyed it. Yeah, to embrace a mix of the different paradigms is probably the best way to go! :)
I finally understand what a closure is, I swear I've banged my head up against FP so many times because I couldn't get my head around closures. Thank you for your short, simple and clear explanation.
In my experience pure functions are not significantly more bug free. Most bugs I experience are:
1. incorrect interpretation of requirements
2. careless coding
3. issues with data being passed from one part of the program to another or one system to another.
4. UI related unexpected behavior.
5. a misunderstanding of the language/framework being used.
In the code, whether the function is 'pure' or takes in mutable arguments has very little effect on how many bugs may occur in that function, and pure functions may in some instances be much harder to work with and understand, thus creating more bugs, which is why most of us do not particularly try to use pure functions even when it is possible to do so.
Functional programming can have an edge when you do full unit testing and need highly reliable software where each function is analyzed for correctness but that type of code is actually quite rare in the real world.
It tends to work best when your problem domain is functional. Something like a SQL interpreter (where the results are committed at the end) or a compiler or a sorting algorithm or something like that. If you're doing something intrinsically mutable (a game engine) it's much less feasible and helpful, because as you say most of the errors made by people experienced enough to be using functional programming are logic errors.
@@aaronfleisher4694 I disagree that logic bugs are always careless coding. Sometimes they're just mistakes. Sometimes they're just situations your users tried that you didn't think of and break your code.
@@AndreiGeorgescu-j9p I disagree with your list of common bugs. Those are "common bugs that are pretty trivial to find the cause for." IME, any time the code base gets bigger than one person can keep in their head, the most common errors are 1) logic errors, 2) people who didn't know the requirements (whether coders or customers), 3) misunderstanding how libraries or frameworks you rely on work.
90% of what you say is right, but it's specific to a certain kind of program and a certain size of program.
@@AndreiGeorgescu-j9p I think you are generalizing too much. The software development space is large and the problems in one type of development are vastly different from those in another type of development. The issues in an OS project vs a website are vastly different. You mention null reference exceptions for example. When I code in Rust, I never ever get those because you can't. In JavaScript or Java it is certainly possible but in my experience relatively rare in Java, and when it occurs in JavaScript it tends to be due to unexpected data entering the function, not because of whether or not the parameters are mutable. I stand by my point that in my own line of work purely functional programming would not only be impossible but where it is possible it would make very little difference to the number of bugs I deal with, because the vast majority of them have to do with issues of requirements, carelessness, communications between systems etc. We generally don't do unit testing for this reason because end-to-end testing is far more effective at picking up the bugs that actually occur. But in other types of development the experience will be very different. As mentioned I also have some work in Rust and the types of bugs are very different but still almost always intersystem or requirements related. Most of my Rust work however involves either new development or updating the code to handle updated dependencies. With Rust, my experience is that once it compiles it usually works and any changes will be due to new requirements not bugs.
@@AndreiGeorgescu-j9p Yes. It's entirely possible for something to be common but trivial. You know what's even more common and even more trivial, in exactly the same way? Misspelling a keyword, or leaving out a closing brace. Null reference exceptions are no longer a billion dollar mistake now that we have memory protection on computers.
I disagree that the reduced cognitive load of FP would necessarily reduce the number of logical errors. A logical error is when you write code that you think is correct but it isn't. Making it easier to write correct code won't make you think the solution is better. I think people who are used to writing smaller simpler programs have a different idea of the distribution of bugs compared to someone writing large complex interacting long-lived systems.
Wow, great video! I left a comment on your other video and it's impressive to see how fast you iterated on it. Thanks so much for the work you do.
My feedback on this is that since functional is a difficult topic that handles a lot of abstract ideas (with some abstractions on top of the abstractions), even if the explanation is logical and concise it doesn't necessarily reduce the difficulty in understanding it. One still has to keep a lot of these abstract concepts in their head. This was amplified since the video touched on many various topics (memoization, declarative/imperative, side effect, closure, etc.) in a relatively short video.
The animations you used were beautiful and I loved to look at them. But in a lot of ways they didn't actually help me understand the topic any better, even in cases where they were used to represent abstract ideas. Contrast this with 3B1B (I know that's a high bar haha) where most of the animations he uses actually enhances the viewer's understanding of the topic at hand (and then is a little cute in the down time).
Take 5:48 for example. While the diamond represents the abstraction of an expensive operation, the content doesn't actually visually represent that cost (i.e. processing an expensive operation multiple times), which forces you to hold that in your head kind of abstractly as you process the rest of the video.
Another example is in the filter/sort/map around the 4:00 mark. The animations are beautiful and they do make sense. But more so if you already know what they do! The subtlety, speed, and singular nature of those animations make it less useful to someone who isn't already familiar with the topic at hand.
Again this is all because I love the content that you make and I will continue to watch regardless haha. So it's not critical, just my two cents. Thank you for making content!
I think this is good feedback. I liked the visuals and tone but it unfortunately didn't help me take away any insight that I can apply to my own work.
Maybe that's unfair as it might not have been the intention of the video. But I really would have liked to see the same concepts covered through a more problem-based learning approach. "Here's a scenario where non-functional code is problematic, and here's how you can apply these functional concepts to write better code".
Great work though, just can be pushed further imo :)
It's actually rather easy to make bugs even in a pure functional program. There is a paradigm where bugs are nearly impossible, though. It's the dependently typed paradigm, which is a subset of the pure functional paradigm.
In a nutshell, a dependently typed language allows a function to return a value of different type depending on the input *value* (not type, that's just generics), and the type is enforced at compile time. This is very useful because you can now encode proof and contract that ensures that the program works as intended. Essentially a statically typed contract oriented programming that you can reason about manually
You're talking to the wind. These people can barely tie their own shoes.
I absolutely love how you expressed a con of functional programming as "Tricky for Imperative Programmers" ... this is perfect. I have been doing functional programming professionally for over a decade now. So many times I come across comparison videos that get this part confused and end up only saying that functional programming is "tricky" or "hard to read" ... but that is just their imperative bias coming through unchecked. We aren't born with the intuitive understanding of how a for loop works, we have to learn it ... and everyone learning programming learns it very early ... but we can't take for granted that learning still occurred. When people come across a map function your first instinct is that it's more confusing and harder to read than a simple for loop ... but this isn't a fundamental fact, it's just that you've been using a for loop for so much longer you just aren't used to using and seeing the map function. Once you are used to higher order functions like map, you may start to find them incredibly elegant (I do) and then start to prefer them. So thank you for getting this part right where so many other videos have failed!
Thanks! This was really nice to read
bruh let's be real. most imperative programmers arent going to have that hard of a time figuring out how map works. functional code can get really function-nested or transformation-nested and thats what trips people up. if you want to design an optimised functional style model of something, you have to plan ahead for the right transformations your input data should go through to give you the desired output and mastering that isnt just a matter of lack of familiarity with the functional paradigm. it's genuinely objectively harder for normies to wrap their head around.
@@samuraijosh1595 That's because most people do a terrible job at domain modelling in their current paradigm too. Less of a functional programming problem, and more of a poor design problem.
The hardest part about learning FP is unlearning the imperative paradigm. People confuse simplicity and familiarity all the time, and it significantly hinders progress.
An excellent FORTRAN programmer is in every programming language an excellent ... FORTRAN programmer.
I loved the Room 101 joke. On another topic. We have a bad habit in computing to use the same word for different concepts. Often we use the same word for a static entity and a dynamic entity. And I think this can be very confusing to learners. Some example are “variable”, “scope”, and “function”. I’m thoroughly guilty of this myself, although I’m trying to do it less. When I try to teach this topic, I find it useful to distinguish between function definitions and function values. Closures are just function values. The idea that a single function definition can give rise to different function values at different times takes some time for programmers to understand when they first encounter it. Anyway, great video; please keep it up.
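A small TypeScript example of that distinction (invented names): one function definition, evaluated twice, yields two different function values, each closing over its own copy of the surrounding scope.
```
// One function *definition*...
function makeCounter() {
  let count = 0;
  return () => ++count;
}

// ...gives rise to distinct function *values* at runtime.
const a = makeCounter();
const b = makeCounter();
console.log(a(), a(), b()); // 1 2 1 -- `a` and `b` each carry their own `count`
```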
7:11 The infamous Monad was recognized in Homological Algebra after the original term "standard construction" was used for it, but before it was officially a "monad" in category theory back in 1967. The etymology of the original term "triad" gives a crucial hint for what the monad is.
First, since every function should take in one and only one parameter and return one and only one value, we need a rule set that is surjective to map B -> C.
Enter the context!
This world context C has access to the value B and carries rules, such as lazy invocations of the runtime to write/read file content, if one wants to manipulate the context passed around from one function to another.
Secondly, we must have an injective function mapping A -> B. Luckily, any pure function will do the trick, but if you wanted to, you could use a function that is monadic as well.
The reason that division is monadic in nature is that division by zero cannot have a defined result; thus we say this case returns Nothing, while everything else creates Just the divided value.
We commonly say that the IO monad is a more specific form of the state monad because it has more specific rules about the kind of state we're manipulating and how it changes. It's not like we can represent Tree Monads as IO Monads since it'd be silly to have tree rules used to manipulate file sockets. But if we wanted to use the value in the Tree Monad in an IO Monad, one must implement a way to extract a value from the Tree Monad. This may not be guaranteed in every monad like the Maybe Monad, but it's something to think about when designing your own monads.
That's why Monads are called "triples" or "triads" in category theory. The name stands for the three components of this specific structure: the input, the output value, and the world containing the rules to manipulate itself (which is why monads are endofunctors). The more you know 🌈
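For the division example above, here is a hand-rolled Maybe in TypeScript (my own minimal encoding, not a library API): division by zero yields Nothing, anything else yields Just the result, and `bind` chains such functions while short-circuiting on Nothing.
```
type Maybe<T> = { tag: "just"; value: T } | { tag: "nothing" };

const just = <T>(value: T): Maybe<T> => ({ tag: "just", value });
const nothing = <T>(): Maybe<T> => ({ tag: "nothing" });

// Division is partial: dividing by zero has no defined result.
const safeDiv = (a: number, b: number): Maybe<number> =>
  b === 0 ? nothing() : just(a / b);

// bind (a.k.a. flatMap) chains Maybe-returning functions together.
const bind = <A, B>(m: Maybe<A>, f: (a: A) => Maybe<B>): Maybe<B> =>
  m.tag === "just" ? f(m.value) : { tag: "nothing" };

console.log(bind(safeDiv(10, 2), (x) => safeDiv(x, 5))); // { tag: 'just', value: 1 }
console.log(bind(safeDiv(10, 0), (x) => safeDiv(x, 5))); // { tag: 'nothing' }
```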
I learned a lot from this, thanks!
So, I took the time to learn quaternions. Doesn't that exempt me from learning monads?
@@clamhammer2463 While useful as a way to look at a 3D rotation in a 4D perspective, you were better off just learning 3D rotors as a Marc ten Bosch article explains. On the other hand, if you're in a code base already inundated in quaternions, you're good to go there.
Might as well just learn linear types and effect systems if you only care to have effects from a straight forward perspective.
@@SimGunther this was mostly a joke but I was required to learn it in order to encode the pitch/yaw/roll of a spaceshuttle to easily do the math with an easier data structure.
After looking into monads and learning what they are and what they are for, I'll realistically never reach in that pocket of my toolbag again.
@@clamhammer2463 Glad to hear about that experience with quarternions!
Also, that impracticality and unwieldy nature of Monads beyond the super mathy world is exactly why I mentioned linear types and effect systems if that's the itch you need to scratch in a functional language.
I can't even imagine how much work you put into this video... It's so beautiful and smart it's a form of art at this point.
your video titled "Never install locally" forever changed the way I work. Thank you for making these. And also, I like your voice very much :))
Really nice to hear, thank you!
The "imperative" vs "declarative" divide is a bit nuanced.
Let's say my problem is to transform a bunch of data. Here, functional programming is pretty great, and very *declarative* about the problem.
But let's say my problem is low-level kernel programming. Well, most functional languages would really struggle to declaratively describe the underlying metal. In fact, in this domain, assembly might be the most declarative language.
Database operations? SQL is just English at that point.
But let's make the problem higher level. Let's make the problem "build me a customer dashboard".
At this point, nothing is declarative. Until we have AI that can take words in and put full solutions out, real-world problems require a solution that needs to articulate implementation details that end users will not see; what we might call "imperative".
The problem isn't arrays, or registers, or monoids, or functors.... it's a customer dashboard.
This declarative/imperative classification is really domain-dependent, and it falls apart in 99% of real-world applications.
Hillary Clinton is my dad.
I can generalize that for you. Software engineering is engineering. Engineering has only three rules:
1) Deliver!
2) Deliver on time!
3) Deliver on budget!
That's it. Absolutely nobody gives a frell what's on the inside, as long as it works on the outside.
There is a simpler description for the semantics of lexical closures: when you evaluate an `(a) -> (b) -> a + b` function over the value `v` you substitute `a` for `v` and end up with `(b) -> v + b`, so there's no need for the term "closure" as such substitution is the natural way to apply a function in FP
A closure is just an instance of an anonymous class with a single method wrapped in syntactic sugar
good thing we didn’t touch on how it makes bugs (near) impossible
Thank you for the dedication in keeping the videos with high quality!
I have no idea what this functional programming is. Some real examples from programming languages would be really helpful.
That's the point, no one really uses functional programming in the real world
@@ekstrapolatoraproksymujacy412 you would be surprised where functional languages are used
@@ekstrapolatoraproksymujacy412 I use Scala in a pure functional way (for low level Spark)
@@ekstrapolatoraproksymujacy412 I use pure functional programming when I have to build huge pipelines for huge distributed datasets, and I need the process to be absolutely immutable and deterministic.
I appreciate this very much! It’s a great way of thinking about the paradigm and how we should think while using it, but it does lead to some misconceptions about things like its performance and storage aspects. For the languages themselves, there’s some funky stuff that actually goes on underneath the abstraction layer we think on.
The craziest example I have is how some languages don't completely create new versions of data, but structure references in a way that enables very very fast equality checking for structures as nested as you want. So, some underlying data structures end up being faster than their non-functional brethren, despite the intuition (coming from a procedural mindset) suggesting that they should be slower.
can you give an actual example of this instead of making a vague point that tries to assuage the bad reputation functional style has in terms of its performance?
@@samuraijosh1595 Gosh, sure! It’s going to be a struggle. A handful of functional programming talks do bring this up and mention it in passing, but it’s not the easiest to google search transcripts of YouTube videos. I’ll see what I can dig up from my memory and I hope I can satisfy your question. Typing out a response on my phone is tricky.
One would be Richard Feldman’s talk “outperforming imperative with pure functional” describing some work he’s doing on a newer language. A handful of Clojure talks mention this when introducing the language as a core for how they perform equality checks. There’s one by Mohit Thatte called “What Lies Beneath - A Deep Dive Into Clojure’s Data Structures”. And, I could have sworn there’s a talk by David Nolan where he’s doing a beginner’s talk for CLJS where he mentions the data structures in passing.
I’m sure there’s other examples, but, simply, (some) functional language designers do not simply slap immutable structure primitives into their languages without thought or the recognition that there are performance hurdles. There’s tricks that go on under the hood that are intentionally hidden or at least abstracted from thought while you utilize it with the general immutable paradigm!
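A miniature flavour of that structural-sharing trick in TypeScript (hand-rolled, not how any particular language implements it): an "update" copies only the spine of the structure, so untouched branches are literally the same object and can be compared by reference instead of deeply.
```
const user = { profile: { name: "Ada" }, settings: { theme: "dark" } };

// "Update" by copying only the path that changed.
const renamed = { ...user, profile: { ...user.profile, name: "Grace" } };

console.log(renamed.settings === user.settings); // true  -- shared, not copied
console.log(renamed.profile === user.profile);   // false -- only this branch is new
```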
@@samuraijosh1595 I have worked with Scala in a pure functional way, and it is the most performant thing that I found to handle huge distributed datasets.
However, I saw juniors having many problems adapting to it, because it uses some constraints/axioms that people are not used to, so they use the language in a non-pure-functional way, they get a non-performant process, and they conclude that the language itself is the problem.
This is why I think it is the best way of programming, but it will remain a niche, because you need a high level of logical-mathematical thinking to drop your axioms and start reasoning from new ones.
I think FP is like superdeterminism in physics: once you obey its axioms, you will see how well it represents reality and you won't want to go back to the old axioms, but the process is emotionally hard because it says that free will doesn't exist, randomness doesn't exist, and you cannot have FTL interactions. Most people have previous core axioms that contradict one of these three axioms.
I think something similar happens with FP. When someone shows you an axiomatic model that makes better predictions than your axiomatic model, that creates cognitive dissonance in your mind. The first reaction people have to cognitive dissonance is negation of the source that caused it.
there's a saying in the haskell community: "You should be able to figure out what a function does from just reading the function signature."
half of the work is just the function signatures. a lot of the work is upfront so bugs in functional code are called "bad function signatures" and "type errors", both compile time concepts that never make it to production. -mostly because there are like 2 haskell users in production.-
refactoring is guaranteed to be correct in the end, but you may have to churn through a whole chain of other function signatures if you forget to make something a monad, for example.
i am *not* speaking from experience, just what i've heard around.
Great comment!
Sounds about right.
Tho unless you go into theorem-proving territory, you will often have multiple possible functions for any given type.
(still, most of them will be obviously incorrect, so it's definitely harder to mess up)
@@mskiptr i've heard another quote about someone who was preaching how the language Ada can encode such strong constraints that bugs were practically impossible.
someone then asked if Ada could catch the following bug, which i believe is such a blunt reminder of logical errors and even typos:
-- adds two numbers
procedure Add (A, B: in Integer; C: out Integer) is
begin
C := A - B; -- definitely adds two numbers, don't mind the fact that it says A minus B, hehe
end Add;
@@raffimolero64 That's certainly a major thing to account for. Even if you prove your code correct, if your specification has bugs, the code will not be what you expect.
But we can remedy this at least to some extent. Your function in Idris would go something like this:
```
add : (a, b : Nat) -> Nat
add a b = a - b
```
But then, once you go to use it, you might need the fact that `add x y = add y x` to call certain functions. Because of the bug this isn't true here, so you won't be able to construct a proof for that proposition. But for the regular `(+)` you can!
```
plusCommutative : (left, right : Nat) -> left + right = right + left
plusCommutative 0 right = Calc $
|~ right
~~ right + 0 ..
1:51 - MY EYES!!
As with any reasonable programmer I'm happy to use functional styles in my code where they make sense, but to say that bugs are near impossible or even significantly reduced by functional programming is a fundamental misunderstanding of software development. Most bugs are caused by factors like misunderstanding APIs, logical errors in algorithms, not knowing what combinations of state may actually be present in a program, or multithreaded race conditions. Functional programming can help with these somewhat, particularly in collection manipulations where readability is much better, but at the end of the day all of these errors are fundamentally just products of having to solve complicated problems in complicated systems with finite resources and time and that just extends far past where a coding style has any meaningful influence.
What the heck! Those neat animations combined with those succinct explanations really helped me understand concepts I've had trouble understanding.
How is this animation so good!?
Damn good job.
The animations are beautiful as others said, but this is probably not the best way to explain all the concepts and the benefits they offer. I am very experienced in functional programming but these descriptions look a bit rushed, too theoretical and confusing. Just my two cents.
I totally agree.
I'm teaching myself to code, and I've watched video after video about functional programming. And this is the first one of them I've watched that I understand, so I mean, there's that.
@@BigOlKnothead Hi im baconheadhair
@@baconheadhair6938 okay? im Knothead. What's good?
Nooo what💀 this is clicking before he even says the answer wdym?? Idk chemistry was easier than literature for me though
Such incredible production level on your videos, dude. The quality, awesome work 🤜🤛
Beautifully explained functional programming basics through animation. Code samples and more on monads would be fun too.
Never thought I would get cozy consuming functional programming related content
Beautiful, purely beautiful. So purely beautiful it would never need a monad or other filthy thing.
Congratulations on the video and the effort put in the animations.
Thank you!
That monad joke was amazing 😂. Great video and great animation, looking forward to more videos from you in the future!
nah, it's tired. monad is just an interface of 3 functions. the real torture is haskell syntax.
It actually harms the video because it brings up an irrelevant concept that most viewers won't know about and thereby get confused by why it's brought up in the first place.
Your mograph is cracked. Very high production!! Subscribed
I'm B L O W N by the animation, it is like watching art in its purest form while learning a ton. 👏
Care to share your secret... would love to make such illustrated videos for my students in class
Great animations, would love to know how too 😊
I believe he is using Adobe After Effects
Technical videos are usually someone typing examples with a screen recorder, I don't hate that but this was really unique learning experience too. Thanks
Great content. If all programming courses were as focused on the main concepts as in these videos, without getting lost in details and syntax (ceremonies, as Venkat Subramaniam would call them), becoming a programmer would be a breeze. Instant subscription for me.
This is how I envision learning in about 30 years to be: qualitatively visualised, compact, and simply beautifully executed.
Good video, but OO is neither imperative nor declarative. Procedural is the opposite of OO, and imperative is the opposite of declarative. So a language can be both pure OO and pure FP at the same time. For this to happen, functions and all other values need to be objects, objects need to be immutable, and there can be no side effects. Inheritance doesn't break the FP purity either. I know of no such language to date though.
Scala?
This is what I love to do. Sometimes when I’m bored and have nothing else to do I download some github stuff and just improve it as much as I can. This can be really simple stuff or really hard stuff (which tbh I mostly give up on)
Finally, unbiased review.
I'm tired of people believing that I *must* use functional languages.
No one is saying you *must*. But you definitely *should*.
No one with a job is telling you to use functional languages
Functional has its uses, but a pure functional program doesn't do anything, as everything that matters to humans needs side effects. Coding languages support or do not support functional paradigms. C# is focused on OOP, whereas Rust is more focused on other styles.
@@bluedark7724 Not true; when I had to build huge pipelines that handle huge distributed datasets, the best way I found was to use Scala in a pure functional way.
I admit that it is a niche, but there are important scenarios where you need a huge process that you are sure is absolutely immutable and deterministic.
Top animation... I've just been learning functional programming for a month, and this information is gold
I think u have put a great effort in creating the animations. It is a great work and the explanations using these graphics are also very clear. Thanks ❤
I want to say that many things that I took for granted by years were confirmed with the animations. I now understand better what's happening behind the scenes in programming. Thank you!!!
mmm.. video... 24/10
does anyone else feel deja vu?
Yeah 😅 - I uploaded this a few days ago but took it down shortly after to fix some issues and add in some clearer explanations. It has quite a few changes since the last one!
3:32 I’m glad you mentioned “a closure is a poor man’s object”. It’s often overlooked in these types of vids that instance methods are closures over the instance variables. Objects are just a convenient way to group multiple closures together which all operate on the same scope. And like higher-order functions, objects can be returned from functions. I think a good parallel is that functional vs object programming is similar to the distinction between deterministic vs non-deterministic finite automata.
Which software do you use to create these amazing animations?
Idk how you're not at 1 mil subs yet but you deserve it
Your animations are excellent - which software do you use to make them?
Those animations smooth af. Glad i found this channel
Appreciate the high quality work and the efforts that you have put in making this video ❤
I've been learning CS for 2 weeks now, which means that I probably didn't understand much of this video. But I sensed that it was clear and important. Saved it for later. Animation is amazing!
For some reason i remember watching this same video few days ago and yet it was uploaded 30 mins ago
He re-uploaded it
I took the original down shortly after to fix some issues and add in some clearer examples - hopefully it's better explained now.
The video was great. But it only convinced me to keep as far away from functional programming as possible. Maybe it's good for students to broaden their perspective, but I can't imagine any large real-life system architected in this paradigm.
Haven't finished the vid yet, but looking at the animation and the effort you put into the vid, you earned my sub ;)
Thinking functional programming doesn't lead to bugs is something a naive programmer with zero experience would claim. Programming languages like Rust may claim safe memory management, but that doesn't prevent you from making logical errors, for example doing the wrong mathematical operation or writing incorrect data to a file. And especially when you are working with other libraries (including std), you are working on trust that the library doesn't produce bugs. But I encourage you to simply go to the source repository of your library of choice and you will find an issue list that is filled to the brim.
1. Rust doesn't "claim" memory safety.
2. Some types of errors can indeed be solved by different (types of) languages. For example, data races are probably pretty rare in functional languages.
3. Std bugs are "relatively" rare and are mostly related to platform specific behaviour. For rust 0.7% of the issues on github affect std.
But I do agree with the fact that a lot of bugs are logic related. Most important bugs are not, though. (Looking at you, 70% of Chromium's high-severity security bugs.)
I don't think anyone thinks that. It just leads to significantly less bugs because it completely removes several types of bugs (so does Rust) - and for everyone who writes real world software this is a huge advantage.
Can't make bugs when you never get anything done.
What did you use to animate this? It's so beautiful and clear!
Your voice and animations are mesmerizing, so much so that I can forget to listen to the content lol. Either way, earned my sub
Absolutely amazing animation. Well done.
I can't believe you have only 3 videos on your channel !
I was like "uh interesting but not sure I'm knowledgeable enough to fully understand anything, let's check if he has some kind of beginner content"
Really looking forward to more to learn the world of coding :) Great job.
03:37 "or woman's" - Bro didn't want to get cancelled
Btw, nice animations and amazing explanation!
You have really outdone yourself with the animations on this one
Love the animation, great content. Expecting a lot more of computational theory from you.
Thanks for explaining how closures can be like objects.
I love the animations, do you use after effects?
Yup
Ok, even though bug-free software is impossible you got me curious with the title of this video. I’m glad you added the “nearly” caveat though lol… However, I had to hit the pause button just a few minutes in at 3:18 because you’re already violating tenets of safe, clean & bug-limiting software engineering.
If someone is reading this comment and you happen to be a new developer just starting out your career, please please please remember this: always strive for code that is easily & quickly understood, is cleanly written regardless of the paradigm, has helpful & appropriate comments, is unit tested, validates all external input, and gracefully handles errors.
Even if you never write anything like safety-critical code that is a lot more restrictive & stringent, following those straightforward items I just listed will take you very far. Don’t be clever - be clear.
Nice Video, amazing visualization
this sounds like a nightmare to use for game development
It's generally used in system critical applications, like banking software
Can't have bugs if your code doesn't do anything :)
it was a breath of fresh air watching a video with a peaceful voice and no music in the background
Perhaps I missed something or I'm really stupid but where's the part where you teach how to make bugs nearly impossible?
It's not said explicitly, but it's generally how purely functional languages are considered because they are so strict
I really love your videos!! They are done with such care. Your animations are funny, yet the examples they represent are clear and you explain concepts slowly, without rushing. Thanks for all the effort you put into them! ⭐😁
Your videos almost look like digital art. Great job.
This was a really good watch, thanks!
Well animated, but not very informative, somehow... I guess it's targeted at complete beginners, so perhaps it would have been informative to me in the past. -- Would like a refresher on Monads though once you get around to it - but feel like at least explaining how side effects are handled in pure programming would have been sensible.
Good video, I liked how you explained everything and the animations :D
My personal favourite is OOP. The flexibility, the encapsulation, it all fits together so perfectly. Each to their own, obviously, but in my mind, OOP just works.
well, until you add concurrency. then oop would work fine, if every object was in its own light-weight thread and communicated through message passing. unfortunately i'm not aware of an oo language that works this way.
The visuals and the explanations are AMAZING
very beautifully made video. as a beginning programmer, I really appreciate the way you explain things
Thanks!
Thank you!
I work with a guy that loves functional programming, and writes his apps in this way. I review his code often, and I can tell you that his code has as many bugs as anyone else's in my team.
but but but muh functional magic thingirinos!!!111!1!!1!!
wait, you mean to tell me that using a different programming paradigm doesnt actually magically prevent programmers from making logic mistakes in their code?? wow, who would have thought!
Like, i love functional programming, but i have to agree with you, this video tries to make it look like magic and it's not a good thing to sell false expectations to newcomers to programming, especially when this type of video is usually aimed at the kind of people who are most susceptible to believing the title without giving it any second thought.
Bro is a real architect, designing this cool animation and narration
I'm really enjoying the pacing and visual representation of the functional programming concept in this presentation. Cheers to you for this approach in explaining it. I’ll have this as a resource to use in my dev team’s ever-evolving philosophical discussions on writing gud kode. 😄
Very good introduction to the paradigm. I once came across an article about the essence of functional programming. It centres on the principle of "composition".
Essentially, what functional programming tries to accomplish is to enforce the use of monovariadic (single-parameter) functions, so that they become simple mappings from A -> B. This is also in line with the philosophy of evaluation over execution and referential transparency: the ability to replace such a function with a value and have the code still work, which supports memoization once more. And by being mappings from an input to an output, you can isolate them, test them and debug them. This is the reason why, in my opinion, "bugs are near impossible".
Currying through closures is a way to create at once a chain of monovariadic functions that can be evaluated at the end of the chain - Haskell implements currying as its primary function definition by design. This ensures high composability. And more importantly, it is easier to reason in terms of simple A -> B mappings, which leads to category theory, where the big evil "Monad" comes from, which is just the big brother of the monoid (Array.concat) and the cousin of the functor (map(a => b)). Essentially it is just wrapping an entire context around a value, providing a way to wrap any new (naked) value into this context, and allowing multiple functions to be chained together so that their contexts can interact (or merge) while returning the value "unwrapped". Again: Composition!
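A tiny TypeScript sketch of currying through closures (the names are invented): each application fixes one argument and returns another one-argument function, so the whole chain is built from simple A -> B mappings.
```
const add = (a: number) => (b: number): number => a + b;

const addTen = add(10);               // a closure remembering a = 10
console.log(addTen(5));               // 15
console.log([1, 2, 3].map(add(100))); // [ 101, 102, 103 ]
```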
This is incredibly useful for implementing side effects in a pure way. Instead of a central state that can be accessed by everything, a monad just carries that state with it and exposes just what is needed when evaluated: encapsulation done right. It allows for incredibly cool design patterns, which unfortunately do not mix well with other paradigms. Simple explanation: as soon as you introduce variable state, you lose the ability of absolute composability. You tie your logic to the order of execution.
I think it depends on what target we write code for: humans (functional might seem unintuitive, but once understood it can unleash higher levels of expression, focusing on clarity and sound reasoning) or computers (procedural, since that is just how the processor works, with the least overhead). OOP is something wedged in between the two (it allures you with its intuitive concept of objects, but down the line it just becomes a jumbled-up mess). I really hope they find a way to make functional programming more performant...
This might be the most didactic and beautiful video about programming that I have ever seen. I am a med student but I want so much to learn coding! Thank you for this wonderful work!❤