Forget about the commas in his `for` statement; I like the way he connects recursion with working out a simple math equation. I'd never thought about it quite exactly like that. Cognitive load lifted.
"What can't you do in any language by enhancing it or more generally adding libraries?" I think you're asking the wrong question, and missing the point. Languages are not just a means to an end. Consider languages like Coffeescript, whose *sole purpose* is to be translated to another existing language *just* so we can write in a style that is "nicer" to humans. The whole purpose of programming languages is to provide a bridge between human expression and computer instruction, not to "do" more.
Whether sum [] is an error or not, the compiler won't prevent it occurring, because there is no syntax for defining that precondition (see Jim Cullen's responses). So, if you use sum in a situation where the input is the result of another function, and it is an empty list, then you will get the error Laurence mentions in the video @5:20: "non-exhaustive patterns". A better solution would be to define the empty-list pattern and return 0, or, if Haskell can, to assert a violation of contract.
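For the record, a total definition with the empty-list case covered might look like this (mySum is my own name, chosen to avoid clashing with the Prelude's sum):

```haskell
-- A total version of the video's sum: the empty-list case is
-- handled, so "non-exhaustive patterns" can never occur.
mySum :: [Int] -> Int
mySum []       = 0
mySum (n : ns) = n + mySum ns
```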
That depends greatly on the kind of programming language you want to write. If you want to do it properly, you need at least three things: 1. A formal specification of the language syntax. 2. A specification (not necessarily formal) of the language semantics. 3. A way to execute a program written in the language syntax (1) according to the language semantics (2). There are tools which can automatically generate a parser from a syntax specification. The rest is harder.
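As a toy sketch of those three ingredients (all names invented for the example): the data type is a formal syntax (1), the comments state the semantics (2), and the evaluator executes programs according to that semantics (3).

```haskell
-- (1) Syntax: an abstract syntax tree for a tiny expression language.
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

-- (2) + (3) Semantics, executed directly by an evaluator:
-- Lit n means n; Add means integer addition; Mul means multiplication.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b
```

A real implementation would add a parser in front of the AST, which is the part those generator tools handle.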
Or you could just use basic number theory and Python, in which case it would be:

def pyramid_sum(n):
    return (n + n**2) // 2

Or, if you didn't know that, you could do:

def pyramid_sum(n):
    total = 0
    i = 1
    while i <= n:
        total += i
        i += 1
    return total
For his example of summing 1 to 10 the empty list case was not required, I agree. For writing a generic sum function for educational purposes it's a mistake to omit the empty list case. TIL infix functions bind weaker than regular functions. You're quite right that "n + sum ns" is equivalent to "n + (sum ns)". I'd still recommend using the latter style though, "print sum [1..10]" will fail to compile while "print (sum [1..10])" works as intended.
for some reason in my university we started with Scheme, so a functional language. Quite disturbing for those of us who started with Java or C, but it's definitely an interesting and refreshing approach, despite the bracket nightmare :P
The answer to all of your questions is 'monads'. You can implement side-effects such as state and I/O in Haskell through their usage, but you - for the most part - throw away assurances that your code is mathematically sound. Haskell is a Turing complete language, meaning that anything that can be written using a computer can be represented, albeit probably with more effort. For a particular example, look up the game Frag, a 3D FPS Doom tribute written entirely in Haskell! :)
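To sketch what that looks like in practice (imperativeSum is a made-up name): mutable state lives behind an IORef, and the side effects are sequenced, and made visible in the type, by the IO monad.

```haskell
import Data.IORef

-- Sum a list with a mutable accumulator, imperative style,
-- but with the side effects tracked by the IO type.
imperativeSum :: [Int] -> IO Int
imperativeSum xs = do
  total <- newIORef 0
  mapM_ (\n -> modifyIORef total (+ n)) xs
  readIORef total
```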
Join the functional traaaaaaaaaain. Pure functional languages like Haskell have no side effects, so the compiler can make assumptions about your code that let it gut and rearrange stuff to make it faster. The first version of Haskell came out in the early 90s, and they've been improving the compiler ever since. Performance differences are negligible, and the productivity increase from safe, testable code is a huge difference.
Correct. As shown in the video, sum xs = foldl (+) 0 xs. As for what foldl does, it would be best if you read the Wikipedia entry on folds, or if you prefer a more mathematical approach, google "catamorphism". To put it simply, if you imagine the list [1,2,3] as f(1, f(2, f(3, []))) then a fold will replace f with + and [] with 0. (Strictly speaking, that picture describes foldr; foldl associates the other way, but for an associative operator like + the result is the same.) If that made little sense, then try to remember the definition of a list: f would be the non-empty-case list constructor.
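A minimal sketch of the two folds side by side (sumL and sumR are made-up names); for (+) they agree, and the foldr version is the one that literally replaces the list constructors:

```haskell
sumL, sumR :: [Int] -> Int
sumL = foldl (+) 0   -- groups as ((0 + 1) + 2) + 3
sumR = foldr (+) 0   -- groups as 1 + (2 + (3 + 0)): (:) becomes (+), [] becomes 0
```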
Just a bit nitpicking: technically you're adding the numbers from 0 to 10 (instead of 1 to 10) in the Java example, which means 11 iterations instead of 10. Good video anyway
If you're complaining about him using {} for a one-line loop, there are actually a lot of downsides to omitting the braces -- probably the most important is that it's less friendly to source control and diffs/merges. If you're talking about using total = total + i instead of +=, he's explaining it to people who aren't already familiar with programming, and += is a bit of an odd construct to people who haven't done much programming.
This. As a computer scientist, programming is simply a tool to set up my experiments, evaluate my ideas etc. Because of this, there can easily go by a month in which I don't write a single line of code. It's easy to forget the small syntactical differences between some languages when you're using multiple ones through your work and haven't written anything in so long :)
I may be wrong and I am not trying to attack Haskell, nor do I know much about it, but would I be correct in guessing that it is generally slower or more memory intensive given that it seems to be more heavily recursive?
What I hear in this video: imperative / Java = shit, functional / Haskell = brilliant. Without any explanation or real comparison of both approaches. This guy can't even program a loop correctly, syntactically or logically (he's looping 11 times). A good programmer first learns how to code correctly in both imperative and functional languages, and then he can choose which one to use to solve his problem.
Technically, before anything was being added to the variable "total", the "i" used as a sort of counter had 1 added to it at the beginning of the loop. So "total", which started at 0 as it should, had 1 added in the first round of the loop. He had to start with "i = 0" so the loop would start by adding 1, not 2.
It tends to be more memory intensive actually but your point about debugging is kinda correct. "Kinda" because it helps you avoid writing bugs, finding them depends on your tools and language. Code written in a functional style tends to be more robust, testable and reusable/composable.
It's fairly simple actually. P=NP is a computing problem. P and NP stand for Polynomial and Nondeterministic Polynomial, and the question is: can you solve an NP problem (like Minesweeper) in polynomial time (in a number of steps represented by a polynomial)? In other words, can you solve a Minesweeper grid of any size without the number of steps growing exponentially?
1) You could write a recursive sum function in Java that does the same thing. It's not about the languages' approaches, it's about the programmers approaches. 2) That's why I said I found the video frustrating because there was too much emphasis on "This is how you do it in X versus Y", could have done the same thing with pseudo code. 3) It's the exact same problem. Find the sum of 1 to 10. Is a for loop going to work with non-consecutive integers?
I always think in imperative problem-solving methods, even with languages that have some functional approach like Python; I often do it the "long way round". I guess having developed in assembly for so long, your brain gets hard-wired in such a way that it can't be disengaged that easily anymore. Or I just have very poor neuroplasticity ;)
I'm referring to him saying "we added one to it", but I think "i" gets incremented after the first assignment, not before. Some people here seem to find the comments on syntax annoying (Brady made a mistake too), but this is a logical mistake and I think it's important to get it 100% right with code examples.
No. But the dictionary is a start. Did I mention a dictionary alone, or that in addition to a dictionary there is a bountiful source of information available for those who wish to look? So if a word is difficult to understand, go out and find out about it. I'm not complaining about a video that goes out of its way to simplify large, complex systems and ideas into small bite-size bits. I'm grateful for it. :)
Actually, the java example would sum up to only 45. The for loop increments the i after iteration, so the series would be: 0+1+2+3+4+5+6+7+8+9. The total would equal 55 if you did ++i in the loop declaration ;]
lazos true, this misconception is very common, so I thought I'd shed some light, because I got burned on this question in a job interview :D I wouldn't want anyone to share that feeling :DDD
The loop was written with commas instead of semicolons. Granted, it's a little hard to spot in that weird font, but just compare it to the assignment in the line above.
Imperative is telling the computer what to do and functional is telling it what things are. Going from an imperative language to functional can feel really weird and backwards. But when you get into it it's kind of beautiful. If you have an algorithm to do something it can be more intuitive to write imperative code but if you approach it as "what is this thing? how do I define it?" then functional makes sense. And then I dropped out of school and haven't really programmed anything in years.
Even if you added features to an old language to mold it into something "for Java" (the VM), all libraries with platform-specific logic would be incompatible, and all the new "Java" features would be unusable with the old language's compiler. So your "new version" would essentially be a new language. And this is in fact exactly what happened: Java is based on C syntax. And they need to be separate, because the C compiler still has its place.
A good demonstration of one of the unfortunate reasons Haskell scares people away: we're having a discussion about mutation, when *bam*, suddenly we're talking about types and pattern matching. The Haskell community is pedagogically insane.
I'm confused. You do realize he used pattern matching to implement the sum function, right? He didn't just mention it. It was the integral part of the solution. Just as he used the for loop to implement the iterative one. He mentioned the for loop. Did he explain it?
The assumption of this video is that most of the audience is already aware of basic imperative programming, so I don't see why he should need to explain loops. It might have helped some people, but realistically it seems moot. Aside from pattern matching, he also uses a compiler, which consists of a lexer/tokenizer, a parser, and at least two intermediate languages in the case of Haskell; he uses a left-to-right text layout; he uses the decimal number system... he uses a ton of other things that are rather obscure in detail but clear in context. It's a chronic problem that a lot of CS instructors have: an inability to understand what's intuitively clear to the audience and what's not. Basic Haskell pattern matching is very intuitive. It doesn't need mentioning.
C/C++ can be considered low level to some degree because their primitives (data types) map closely onto the assembly layout; in fact in C/C++ you can use the _asm keyword to write assembly inline (you can, for example, create a device driver more easily). Another plus is the possibility of using intrinsics to harness specific CPU features like MMX/SSE and the like.
In the Haskell example there is a syntax error. It should be (n:ns) instead of (n,ns). (n:ns) matches a list (with n as its head and ns as its tail), while (n,ns) makes a pair, and since sum works with lists it would produce an error (a comma in parentheses makes pairs, triplets and so on, while a comma in brackets separates elements of a list).
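A tiny illustration of the cons pattern (headOrZero is a made-up name): (:) both builds lists and, in a pattern, takes them apart again.

```haskell
-- (:) is the list constructor: 1 : [2,3] is [1,2,3].
-- The pattern (n : _) takes a list apart the same way;
-- (n, _) would instead match a pair, not a list.
headOrZero :: [Int] -> Int
headOrZero (n : _) = n
headOrZero []      = 0
```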
Just a thought... The entire time I was in tech during the tech boom, I spent ALL my programming time fixing programs written by PhDs. I have an Associate of Science as an instructor in technology. Not in CS. They thought 68K-family chips were great. I taught them different. Same program on 80[1234]86 vs 68000/68010/68020/68030: 8 times faster. Yes, the 68K family programs beautifully, but it's crap at clocks per byte moved per instruction. They would read an entire multi-GB data file one byte at a time... ugh... SHOOT ME!
This is the THIRD video introducing functional programming where the host tries to introduce it as magic where you just put an expression and it's done, and then goes "but of course that's not all there is to it!".
This annoys me because you could literally have introduced Java in this with "sum(1, 10)" and then defining the method for sum. Like.. am I missing something or is this approach people like to use to introduce functional programming incredibly silly?
Aaah! You used commas in the Java for loop instead of semicolons!
May also be worth stating that imperative languages such as Java in this case also afford you the freedom to recursively define the sum function - you don't have to use a loop.
Nitpicker alert: his Java example would not compile, he is using commas to separate the parts of the for loop instead of semicolons.
What he is trying to say is that in imperative languages you give instructions on how to do a process while in declarative languages you define the relations between those things.
Come on, what programmer doesn't mix up a symbol, or forget a semicolon, or forgets the name of a method sometimes? Those trivial errors usually happen when a programmer is thinking in a higher level of abstraction, and even more when there is an IDE underlining the syntax errors during typing. Only computers are required to run flawlessly, and even they have to be fault-tolerant sometimes. So why shouldn't we?
Name of this film changed from 'styles' to 'paradigms' to better reflect the content. >Sean
I could write the same recursive function in Java, so what's the real difference?
C++ templates are functional. Which makes them quite hard for most C++ programmers to use since the rest of C++ is (mostly) imperative.
10 years later and this explanation is still rock solid, thank you. :)
Brady -- I know these videos keep getting a lot of criticism, but I think that's a testament to how engaging this channel is. You've really created an amazing set of videos, here and at numberphile, sixty symbols etc.
It would be nice if the next explanation would be about how Java's (and others') recursion requires a finite call stack of return pointers while Haskell (and most functional languages) has guaranteed tail-call elimination and what that implies.
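For illustration, here's roughly what a tail-call-friendly sum looks like (sumAcc is a made-up name; a production version would also force the accumulator, e.g. via foldl'):

```haskell
-- The recursive call is the very last thing that happens, so a
-- compiler with tail-call elimination can reuse the current stack
-- frame instead of growing the stack on every step.
sumAcc :: Int -> [Int] -> Int
sumAcc acc []       = acc
sumAcc acc (n : ns) = sumAcc (acc + n) ns
```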
Very nice explanation. A very important part of functional programming is being able to store pointers to higher order procedures and being able to return these as a value.
Lets see the machine code then!
look! it’s that crypto nerd
An in depth explanation of functional vs imperative languages is probably beyond the scope of this channel. I think loops vs recursion is a decent place to begin understanding the differences between the two types of languages.
How times change, nowadays in Java you'd write something like IntStream.rangeClosed(1,10).reduce(0,Integer::sum);, which is suspiciously functional...ish style.
Languages rarely fall into purely one category, and of course as we know, real programmers can use any language and any paradigm to write in Fortran. :)
brodaclop This looks more like Javascript. Java must have more objects. And of course, factories, proxies, abstractions and all of that useless complicated stuff.
In the for loop, if the variable i starts at zero, there will actually be one extra iteration at the beginning where 0 is added to 0. It would be much better to initialize i at 1 for this purpose.
It's worth noting that summing the integers from 1 to N can be done with an equation: N(N + 1)/2
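As a quick sanity check in code (gaussSum is a made-up name), the closed form agrees with summing the list directly:

```haskell
-- Gauss's closed form for 1 + 2 + ... + n.
gaussSum :: Int -> Int
gaussSum n = n * (n + 1) `div` 2
```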
I understand why people who haven't studied computer science will think this might be difficult to comprehend. At the same time, you can't expect someone to be able to explain the gist of a computing paradigm in a 10-minute video. For such a small introduction I think he actually got the gist of imperative/functional programming across pretty well, but there is so much more to this topic that you need to research yourself.
"[C/C++/C#] are explicitly MS target languages"
"'high level' C/C++/C# binary will never fit in registry and or memory."
These comments from you are two completely different and unrelated statements, and they are both nonsense.
helper :: Int -> Int -> Int
helper total 0 = total
helper total 1 = total
helper total n = helper (total * n) (n - 1)

factorial :: Int -> Int
factorial n = helper 1 n
If you work out this version of factorial 10 by hand, you'll notice that you don't end up writing out longer and longer expressions, unlike with Laurence's version. In fact, the compiler will also notice this, and give you more efficient code as a result. This is the key to writing efficient functional programs.
It should be said that basically every modern programming language in use today uses a mixture of Imperative and Functional design. Working together is far more effective.
Kaitlyn Amanda Same as how basically all modern languages intimately mix OOP and imperative programming. I think people will continue to find that functional programming paradigms are really not that different from ones already used in imperative programming, and functional style will become just another tool in the programmer's toolbox.
I mean heck, if you teach someone to use immutable types, recursion, and service provider interfaces, they're already halfway to being a functional programmer without realizing it.
Stuntddude That's the ultimate goal. Functional programming applies best in situations where efficiency is key while Imperative programming is better for being more flexible. Most programmers know how to do both of them, but it's experts who know when to use what.
VoltzLiveYT I think you may be taking the idea of mixing functional and imperative too literally. Nobody is saying you should write some parts of your program in functional code and some parts in imperative code (mixing code), but you should know when to use functional concepts and paradigms alongside other concepts and paradigms (mixing styles). If you've ever deliberately designed an immutable type or written a service provider interface in an imperative language, you've already taken part in this.
+Kaitlyn Amanda
> Functional programming applies best in situations where efficiency is key
You're full of it. Functional programming applies when _correctness_ is key, not efficiency. Copying an entire list, traversing to the end, then appending an element is much slower than just pushing a value to a vector.
Chris Gregory Appending to a linked list is done at the beginning for a reason, linked lists are not the only compound data type in functional programming (that would be truly stupid), and if you have to copy an entire list just to append a value to it, it's because your code is bad and needs to be refactored.
Remember that these videos serve an educational purpose. The brackets and the total = total + i; are there to be clearer for a wider audience.
Hello, annotations are going away. You might want to pin the "The delimiters within the 'for' loop should be semi-colons, not commas - apologies" note.
factorial :: Int -> Int
factorial 0 = 1
factorial 1 = 1
factorial n = n * factorial (n - 1)
I was torn between explaining summation or factorial for this video, but the latter is a personal favourite. :3
Using a library function for the Haskell part when you defined everything for the Java part is cheating. And if I were to define it in Haskell, I would not use lists and would just do:
sum 0 = 0
sum n = n + sum (n-1)
and just call sum 10.
The main purpose of this video was to show differences between imperative and functional styles of programming.
The implementations used in the video therefore were chosen because they are easy to understand and show characteristics of the different styles.
So in this case, the naive way of summation satisfies the requirements for an educational video, while the more efficient and portable solution doesn't.
Yikes that indent on the for loop open bracket was cringe
int total = sum(1, 10); // Java
I'm so hyper-aware of the high-rise terminal (upspeak) these days, and this drove me nuts... I wish I didn't get so annoyed by these things, as this was a good video!
If you ever have to work in a programming team, or if you have to modify code written by others, you'll gain an appreciation for a consistent style, even if you think your personal style would be an improvement.
Pretty sure you got your definition of sum wrong since it skips a special case, namely the empty sum.
The reason this is important is because a lot of useful/important algorithms are in NP as far as we know. I used the factoring problem for a reason: products of primes are at the core of RSA encryption. If you can find P and Q easily and quickly, you could potentially crack every single file ever encrypted using RSA. If you could prove that the factoring problem is in P, or that every NP problem is also in P, you have the potential to break -- and fix -- a lot of things.
What he is NOT saying, though, is that the "imperative" version is much faster than a recursive version. For high-load simulations and complex calculations, you won't see people using recursive functions just to save a few lines of code, because they're much slower.
Haskell is my favorite programming language! It's a shame so few people know about it; thank you Sean Riley and Mr. Day for showcasing this language on Computerphile!
I would like to appeal to everyone who already knows how to program in some imperative language such as Java, C#, Python etc. to learn a functional language such as Haskell. It will IMPROVE you as a programmer in these imperative languages, as you will be able to look at a problem in a different way than before.
I would LOVE more videos about programming paradigms!
All of your comments are correct but as the guy in the video said, Haskell defines sum in a different way.
At any rate, a Haskell list is a singly linked list defined as being something that is either (the empty list) or (an element and a list) and these are the 2 cases you should always check when pattern matching on it.
By the way, the sum of [] is 0 in Haskell.
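To make that concrete, here's a minimal sketch covering both list constructors (sum' is a made-up name so it doesn't clash with the Prelude's sum):

```haskell
-- covers both cases of a Haskell list:
-- the empty list, and an element consed onto a list
sum' :: [Int] -> Int
sum' []     = 0            -- the empty sum is 0
sum' (n:ns) = n + sum' ns  -- head plus the sum of the tail

main :: IO ()
main = print (sum' [1..10])  -- prints 55
```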
You clearly were talking about speed:
"Even Facebook realised that C is really slow and migrated to JIT compilers."
Which is also wrong: they were using PHP, not C -- and again, JIT is an optimization for *interpretation* (like PHP), *not* an improvement over pre-compilation (like C).
Another hint that you don't know what you're talking about is that you group "C/C++/C#" as if they're the same, which is as ignorant as grouping Java and Javascript; they have little in common besides the name.
The concept is actually fairly simple.
Basically, there are some problems that are hard for computers to solve, but easy for computers to verify if they've found the correct solution. A good example is factoring a number that's the product of 2 primes. Assume P and Q are prime, and that R = P * Q. If you were to write a program to find the factors of R, and P and Q are large, then it will take the computer a long time, because it has to check one value at a time.
Great video! Laurence mentions that these are both "high level" languages. I'd love to see a video about how a "low level" programming language would tackle problems differently from these "high level" programming languages.
Cheers!
That is functional style. Although you will face a problem not mentioned in the video: a stack overflow for big n. Functional languages are inherently connected to tail recursion, where the compiler turns your recursive calls into something iterative to prevent creating a new stack frame for every recursion. The point is: this only works if the recursive call is the very last instruction in your code path. The sum in the video would have the same problem. That's why it's defined differently.
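As a sketch of that accumulator style (names are mine; the bang patterns keep the accumulator strict so it doesn't build up thunks instead of stack frames):

```haskell
{-# LANGUAGE BangPatterns #-}

-- tail-recursive sum: the recursive call to go is the
-- last thing evaluated, so the compiler can reuse the
-- stack frame instead of allocating a new one per step
sumTo :: Int -> Int
sumTo = go 0
  where
    go !acc 0 = acc
    go !acc n = go (acc + n) (n - 1)

main :: IO ()
main = print (sumTo 10)  -- prints 55
```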
Forget about the commas in his `for` statement; I like the way he connects recursion with working out a simple math equation. I'd never thought about it quite exactly like that. Cognitive load lifted.
the first part of the for loop (int i = 0) is the initializer - it is done first,
then the condition in the middle (something like i <= 10 here) is checked before each iteration, and the last part is the increment, which runs after each pass through the body.
"What can't you do in any language by enhancing it or more generally adding libraries?"
I think you're asking the wrong question, and missing the point. Languages are not just a means to an end. Consider languages like Coffeescript, whose *sole purpose* is to be translated to another existing language *just* so we can write in a style that is "nicer" to humans.
The whole purpose of programming languages is to provide a bridge between human expression and computer instruction, not to "do" more.
Whether Sum [] is an error or not, the compiler won't prevent it occurring - because there is no syntax for defining that pre-condition (see Jim Cullen's responses). So, if you use Sum in a situation where the input is the result of another function, and it is an empty list, then you will get the error Laurence mentions in the video @5:20: "non-exhaustive patterns". A better solution would be to define the empty list pattern and return 0, or, if Haskell can, to assert a violation of contract.
The Fibonacci sequence in Haskell:
fib n = fibs !! n where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
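For anyone who wants to try it, that one-liner runs as-is once you give it a type signature and a main:

```haskell
-- lazy, self-referential list of Fibonacci numbers;
-- each element is the sum of the two before it
fib :: Int -> Integer
fib n = fibs !! n
  where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (map fib [0..10])  -- prints [0,1,1,2,3,5,8,13,21,34,55]
```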
That depends greatly on the kind of programming language you want to write.
If you want to do it properly, you need at least three things:
1. A formal specification of the language syntax.
2. A specification (not necessarily formal) of the language semantics.
3. A way to execute a program written in the language syntax (1) according to the language semantics (2).
There are tools which can automatically generate a parser from a syntax specification. The rest is harder.
Or you could just use basic number theory and Python, in which case it would be:
def pyramid_sum(n):
    total = (n + n**2) // 2
    return total
Or, if you didn't know that, you could do:
def pyramid_sum(n):
    total = 0
    i = 0
    while i < n:
        i += 1
        total += i
    return total
For his example of summing 1 to 10 the empty list case was not required, I agree. For writing a generic sum function for educational purposes it's a mistake to omit the empty list case.
TIL infix functions bind weaker than regular functions. You're quite right that "n + sum ns" is equivalent to "n + (sum ns)". I'd still recommend using the latter style though, "print sum [1..10]" will fail to compile while "print (sum [1..10])" works as intended.
For some reason in my university we started with Scheme, so a functional language. Quite disturbing for those of us who started with Java or C, but it's definitely an interesting and refreshing approach, despite the bracket nightmare :P
The answer to all of your questions is 'monads'. You can implement side-effects such as state and I/O in Haskell through their usage, but you - for the most part - throw away assurances that your code is mathematically sound.
Haskell is a Turing complete language, meaning that anything that can be computed can be expressed in it, albeit probably with more effort. For a particular example, look up the game Frag, a 3D FPS Doom tribute written entirely in Haskell! :)
Join the functional traaaaaaaaaaain. Pure functional languages like Haskell have no side effects, which lets the compiler make certain assumptions about your code and gut and rearrange stuff to make it faster. The first version of Haskell came out in the early 90s, and they've been improving the compiler ever since. Performance differences are negligible, and the productivity increase from safe, testable code is a huge difference.
Correct. As shown in the video, sum xs = foldl (+) 0 xs. As for what foldl does, it would be best if you read the Wikipedia entry on folds, or if you prefer a more mathematical approach, google catamorphism. To put it simply, if you imagine the list [1,2,3] as f(1, f(2, f(3, []))) then a fold will replace f with + and [] with 0. (Strictly, that picture is foldr; foldl associates from the left instead, but since + is associative the result is the same.) If that made little sense, try to remember the definition of a list: f would be the non-empty-case list constructor.
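A small sketch of both expansions, using only Prelude functions:

```haskell
-- foldr (+) 0 [1,2,3] expands to 1 + (2 + (3 + 0))
-- foldl (+) 0 [1,2,3] expands to ((0 + 1) + 2) + 3
-- (+) is associative, so both give the same result
main :: IO ()
main = do
  print (foldr (+) 0 [1,2,3 :: Int])  -- prints 6
  print (foldl (+) 0 [1,2,3 :: Int])  -- prints 6
```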
Just a bit nitpicking: technically you're adding the numbers from 0 to 10 (instead of 1 to 10) in the Java example, which means 11 iterations instead of 10. Good video anyway
I share these with many people hope some of them actually watch your videos.
The Holy FP Trinity (Functors, Monoids, and Monads) and their variants is the main reason why everyone is so scared of Haskell
If you're complaining about using {} for a one-line loop, there are actually a lot of downsides to that -- probably the most important is that it's less friendly to source-control and diffs/merges. If you're talking about using total = total + 1 instead of +=, he's explaining it to people who aren't already familiar with programming, and += is a bit of an odd construct to people who haven't done much programming.
such an awesome channel, keep it up please!
This. As a computer scientist, programming is simply a tool to set up my experiments, evaluate my ideas etc. Because of this, there can easily go by a month in which I don't write a single line of code. It's easy to forget the small syntactical differences between some languages when you're using multiple ones through your work and haven't written anything in so long :)
With the Java for loop you have ',' instead of ';'
You could, and it would have the same answer. But usually when you use a FOR loop you start from 0 and go while i < n, since array indices start at 0.
Haskell is, by use (July '15), the 41st most used language, with a popularity of 0.273%.
I may be wrong and I am not trying to attack Haskell, nor do I know much about it, but would I be correct in guessing that it is generally slower or more memory intensive given that it seems to be more heavily recursive?
This takes me back to 1978 and using basic, pascal and fortran. You did not say why one would choose one style of language over another!
What I hear in this video: imperative / Java = shit, functional / Haskell = brilliant. Without any explanation or real comparison of both approaches. This guy cannot even program a loop correctly, syntactically and logically (he's looping 11 times).
A good programmer first learns how to code correctly in both imperative and functional languages, and then he can choose which one to use to solve his problem.
Technically, before anything was being added to the variable "total", the "i" used as a sort of counter had 1 added to it at the beginning of the loop. So "total", which started at 0 as it should, had 1 added in the first round of the loop. He had to start with "i = 0" so the loop would start by adding 1, not 2.
What's the point? In Java you can use a tracking variable or a recursive function to do this. Why would I want to use a language like Haskell then?
I feel like this video is better at explaining loops vs. recursion than functional vs. imperative.
just FYI for future visitors who don't know, Java has had functional support since 2014.
It tends to be more memory intensive actually but your point about debugging is kinda correct. "Kinda" because it helps you avoid writing bugs, finding them depends on your tools and language. Code written in a functional style tends to be more robust, testable and reusable/composable.
when will the video on the mentioned "dark art" be done?
It's fairly simple actually. P=NP is a problem about computation. P stands for polynomial time and NP for nondeterministic polynomial time, and the question is: can you solve an NP problem (like Minesweeper) in polynomial time (in a number of steps bounded by a polynomial)? I.e., can you solve any Minesweeper grid size with the same algorithm without the number of steps growing exponentially?
Or "i
1) You could write a recursive sum function in Java that does the same thing. It's not about the languages' approaches, it's about the programmers' approaches.
2) That's why I said I found the video frustrating because there was too much emphasis on "This is how you do it in X versus Y", could have done the same thing with pseudo code.
3) It's the exact same problem. Find the sum of 1 to 10. Is a for loop going to work with non-consecutive integers?
I always think in imperative problem solving methods; even with languages that have some functional approach like Python, I often do it the "long way round". I guess having developed in assemblers for so long, your brain gets hard wired in such a way that it can't be disengaged that easily anymore. Or I just have very poor neuroplasticine behaviour ;)
I'm referring to him saying "we added one to it", but I think "i" gets incremented after the first assignment, not before. Some people here seem to find the comments on syntax annoying (Brady made a mistake too), but this is a logical mistake and I think it's important to get it 100% right with code examples.
WHAT?
I've been programming for years, and I learned something from this video.
sum(1, x) = (x*x)/2 + x/2
Wow, that was really fun! So weird and different. A lot of the idiosyncrasies of Erlang and Elixir make more sense to me now.
Upspeak! This presenter speaks in upspeak.
No. But the dictionary is a start. Did I mention a dictionary alone, or that in addition to a dictionary there is a bountiful source of information available for those who wish to look? So if a word is difficult to understand, go out and find out about it. I'm not complaining about a video that goes out of its way to simplify large, complex systems and ideas into small bite-size bits. I'm grateful for it. :)
My eyes! Ahhhh it hurts so much!
Wow. I ask once for programming theory, and BAM, next video is exactly that. This is why i like these ______phile channels.
I prefer the O(1) version of the sum 1 to N function: (1 + N)*(N/2)
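If you write that formula as code, multiplying before dividing avoids the integer-division pitfall for odd N (a quick sketch, name is mine):

```haskell
-- O(1) closed form for 1 + 2 + ... + n;
-- n * (n + 1) is always even, so `div` 2 is exact
sumClosed :: Int -> Int
sumClosed n = n * (n + 1) `div` 2

main :: IO ()
main = print (sumClosed 10)  -- prints 55
```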
This is a nice little introduction into functional programming
Haskell is awesome! Currently learning it.
In that case, you are right, 'i' will be incremented after executing the loop. So we are now summing 0 to 10 instead of 1 to 10.
Actually, the java example would sum up to only 45. The for loop increments the i after iteration, so the series would be: 0+1+2+3+4+5+6+7+8+9.
The total would equal 55 if you did ++i in the loop declaration ;]
***** I stand corrected :D I was under the false impression that modern compilers took pre-increments into account...
loops
True, this misconception is very common, so I thought I'd shed some light, because I blew this question in a job interview :D I wouldn't want anyone to share that feeling :DDD
The loop was written with commas instead of semicolons. Granted, it's a little hard to spot in that weird font, but just compare it to the assignment in the line above.
It's always hilarious when people post code snippets in comments, because most of them look mediocre at best.
Imperative is telling the computer what to do and functional is telling it what things are.
Going from an imperative language to functional can feel really weird and backwards. But when you get into it it's kind of beautiful. If you have an algorithm to do something it can be more intuitive to write imperative code but if you approach it as "what is this thing? how do I define it?" then functional makes sense. And then I dropped out of school and haven't really programmed anything in years.
So is Haskell the same as a simple recursive solution? Is Haskell a language?
Even if you added features to an old language to mold it into something "for Java" (the VM), all libraries with platform-specific logic would be incompatible, and all the new "Java" features would be unusable with the old language's compiler. So your "new version" would essentially be a new language. And this is in fact exactly what happened: Java is based on C syntax. And they need to be separate, because the C compiler still has its place.
A good demonstration of one of the unfortunate reasons Haskell scares people away: we're having a discussion about mutation, when *bam*, suddenly we're talking about types and pattern matching.
The Haskell community is pedagogically insane.
I'm confused. You do realize he used pattern matching to implement the sum function, right? He didn't just mention it. It was the integral part of the solution.
Just as he used the for loop to implement the iterative one. He mentioned the for loop. Did he explain it?
The assumption of this video is that most of the audience is already aware of basic imperative programming; so I don't see why he should need to explain loops. It might have helped some people, but realistically it seems moot.
Aside from pattern matching, he also uses a compiler, which is comprised of a lexer/tokenizer, a parser, and at least two intermediate languages in the case of Haskell; he uses a left-to-right text layout; he uses the decimal number system... he uses a ton of other things that are rather obscure in detail but clear in context.
It's a chronic problem that a lot of CS instructors have: an inability to understand what's intuitively clear to the audience and what's not. Basic Haskell pattern matching is very intuitive. It doesn't need mentioning.
C/C++ can be considered low level to some degree because the abstraction over its primitives (data types) is closely related to the assembly layout; in fact, in C/C++ you can use the _asm keyword to write assembly (you can, for example, create a device driver more easily). Another plus is the possibility of using intrinsics to harness specific CPU features like MMX/SSE and the like.
pro java coder doesn't even know syntax of a for loop
In Haskell example there is a syntax error. It should be (n:ns) instead of (n,ns).
(n:ns) makes a list (with n as its head and ns as its tail), while (n,ns) makes a pair, and since sum works with lists it would produce an error (comma in parenthesis makes pairs, triplets and so on, while comma in brackets separates elements of a list).
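A quick sketch of the difference (illustrative names, not from the video):

```haskell
-- (:) conses an element onto a list; (,) builds a pair
listExample :: [Int]
listExample = 1 : [2, 3]   -- the list [1,2,3]

pairExample :: (Int, [Int])
pairExample = (1, [2, 3])  -- a pair, not a list

-- the pattern (n:ns) splits a list into head and tail
firstOrZero :: [Int] -> Int
firstOrZero (n:_) = n
firstOrZero []    = 0

main :: IO ()
main = print (firstOrZero listExample)  -- prints 1
```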
Just a thought... The entire time I was in tech during the tech boom, I spent ALL my programming time fixing programs written by PhD's. I have a Associate of Science in Instructor in technology. Not in CS. They thought 68K family chips were great. I taught them different. Same Program in 80[''1234]86 vs 68000,68010,68020,68030. 8 times faster. Yes the 68K family programs beautifully, crap at clocks per byte moved/instruction. They would read an entire multiGB data file in 1 byte at a time... ugh... SHOOT ME!