@Marcus - The universe is not object oriented, our language is. It makes it easy to model real world problems in the programming language but there is nothing inherently natural about it. As he points out in the lecture it's often just verb first or noun first notation, which differs from language to language.
Similar to the way a program does exactly what your code tells it to do, the video does exactly what the title says it will do. The content of the video supports the title, which itself can be re-worded as "why FP is *not* the norm" or conversely, "why OOP *is* the norm." What is it about OOP languages that makes them popular? And on the other hand, what is it about FP languages that makes them unpopular? That's what the video is about.
There is a good reason why he never mentioned the history before ALGOL. Imperative and later OOP were a solution to the shortcomings of FP in the context of business software.
A simple answer: because most programs (games, editors, browsers, etc.) are basically *state machines*, not simple "computations". They therefore fit the old imperative model better, without all the complexity of handling state in functional languages. Pure functions certainly have their place, but as local structural elements in that mainly imperative code, not as a dogma. (Regardless of whether you use any OO or not.)
Exactly. Imperative languages give explicit control over state, whereas functional languages give only implicit control. Since a program is far more than just a mathematical formula, state is king.
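To make the "state is king" point concrete, here's a toy sketch (all names made up) of the kind of program being described: a turnstile that is really a state machine, mutating explicit state in response to events arriving over time.

```python
# A toy turnstile modeled as an explicit state machine: the whole point
# of the program is to mutate `state` in response to events over time.
def run_turnstile(events):
    state = "locked"           # explicit, mutable program state
    outputs = []
    for event in events:       # events arrive "over time"
        if state == "locked" and event == "coin":
            state = "unlocked"
        elif state == "unlocked" and event == "push":
            state = "locked"
        outputs.append(state)
    return outputs

print(run_turnstile(["coin", "push", "push"]))  # ['unlocked', 'locked', 'locked']
```

The imperative version is a plain loop over a mutable variable; a pure-functional version would have to thread the state through every call explicitly.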
@@CrimsonTide001 There's no real benefit to having explicit rather than implicit control of state for any of these applications; all it gives you is more rope to hang yourself with in terms of bugs, especially when you're dealing with shared state. The real reason why imperative programming is the norm is that historically compilers haven't been readily available, affordable and sophisticated enough to give you functional programming and still get good enough performance. But that is no longer the case.
@@salvatoreshiggerino6810 No, it's because programs are fundamentally about manipulating state, and imperative gives you better tools to do just that. And in no way would I ever want to deal with shared state in a functional language in any real application. The mess of pseudo-functional but not really pure functional data structures I'd have to use would be mind-numbing. FP has only one thing going for it: job security.
@@CrimsonTide001 Programs are fundamentally about transforming an input to an output, the manipulation of state is just an irrelevant implementation detail. Imperative programming has only one thing going for it, a legacy workforce.
@@salvatoreshiggerino6810 Not at all. Simulations (games, movies, scientific research), desktop apps (word processors, Excel, email, internet, CAD/CAM, drawing/3D, any media authoring tools), media applications, etc... the vast majority of programs written are not about data transformation, but rather about data manipulation over time. The 'over time' part is of utmost importance. It is what separates computer science from mathematics. Any program that accepts input from a human is inherently state based. Whether that's playing a game, or writing a document, or surfing the web. As the input comes in, the state of the program has to change to represent the new view of the data. Only the simplest of programs, ones with zero user interaction, map well to FP.

The way FP gets around this is using state-based data structures, which is silly because they like to pretend there is no state, then admit that 'yeah I guess there is', then mess around with suboptimal and stupidly unwieldy constructs to try to shoehorn state into a system that wants to pretend it doesn't exist /facepalm. FP programmers are just in denial. It's not hard to grasp the concepts despite how hard they go out of their way to obfuscate the simplest of tasks (seriously, look up any explanation/definition of monads, one of the simplest of constructs, and yet it's impossible for it not to be described/explained in the most ridiculous and hard-to-grok terms imaginable). They're just intentionally making the whole thing difficult, then run around claiming they're 'true programmers' for doing everything the hard way. Sure, running a marathon with one arm tied behind your back is possible, but unnecessarily difficult.

State management is the single most important aspect of programs, which is why imperative has always won over functional. It has nothing to do with tools, or legacy workforce, or difficulty of understanding, or any of the other nonsense. It's because FP sucks.
@@archmad Precisely, and Python proves that a language doesn't even have to be decent to be hugely popular. It could have been a much, much better programming language if the developers actually knew what they were doing when they designed it. Semantic whitespace is one of the dumbest ideas ever, not to mention the fact that the versions of Python I've looked at require hackery if you want to print text without ending with a line feed and/or carriage return as appropriate to the OS. Every other language that I've messed with either made you add a linefeed to the end, or had a way of printing both partial and complete lines. The whole business of having to do the entire line at once is rather dumb and a bit of a pain sometimes.
Is it? A pun exploits different meanings of words, which you've done brilliantly, but it doesn't seem to turn toward humor in this case. I'm not sure. I've argued myself from one point of view to the other and back again several times. It's a pun.
@@PatrickPoet well, the problem of a huge number of arguments can be a turn off. That was the main point. By the way I come across the problem in Prolog too.
Implicits, records, and "lifted functions" can hide the arguments though. There are three types of lifted functions/functors: Applicatives, MonoidApplicatives (Monads), and Arrows, so yeah it can be hard to pick... Oh yeah, there are Comonads, ArrowChoice, MonadFix, ArrowLoop as well...
Python also had a killer app in recent years: ML and AI in general. It worked out for them to jump on this train early and become the de facto standard language for this usage.
Python was still very popular way before ML got hyped though. I don't think "ML and AI in general" pairs with Python the way, say, iOS pairs with Swift.
@@NothingMoreThanMyAss NumPy and SciPy are also great killer apps for Python, which made it popular as a replacement for Matlab in scientific computing. These were part of what made ML in Python popular.
00:00:27 Richard Feldman: Why are things the way they are? It's complicated.
00:00:53 Richard Feldman: Outline
00:00:59 Richard Feldman: Part 1. Language
00:01:01 Richard Feldman: What languages are the norm today. Top 10. No functional programming language.
00:01:42 Richard Feldman: How did they get popular.
00:02:05 Richard Feldman: 1. Killer apps. VisiCalc to Apple II / Rails to Ruby / Wordpress & Drupal to PHP ...
00:06:21 Richard Feldman: 2. Platform Exclusivity. ObjC/Swift to iPhone sales / JS to web & internet users / C# to Windows & VS
00:10:21 Richard Feldman: 3. Quick Upgrade. CoffeeScript & TypeScript to JS / Kotlin to Java
00:13:27 Richard Feldman: 4. Epic Marketing. $500M Java marketing campaign in 2003
00:16:15 Richard Feldman: 5. Slow and Steady. The Python story
00:17:53 Richard Feldman: Other Popularity Factors. Syntax / Job Market / Community
00:18:46 Richard Feldman: Why are the most popular languages OO, except C.
00:19:15 Richard Feldman: Part 2. Paradigm
00:19:39 Richard Feldman: Uniquely OO features. Encapsulation = Modularity
00:35:35 Richard Feldman: They are OO because modularity is a good idea and they originally got it from OO by chance.
00:35:47 Richard Feldman: Part 3. Style
00:35:50 Richard Feldman: FP Style: avoid mutation and side effects
00:36:31 Richard Feldman: Why isn't FP style the norm? No sufficiently large 'killer apps' / No exclusivity on large platforms / Can't be a quick upgrade if substantially different / No epic marketing budgets / Slow & steady growth takes decades
00:41:02 Richard Feldman: OO languages are the norm not because of uniquely OO features.
00:41:32 Richard Feldman: FP style just needs time to become the norm.
00:41:50 Richard Feldman: Questions.
00:42:03 Question 1: How do you see Lisp fitting in? Is it an FP language or not?
00:42:21 Answer: Classifying a language as functional or not is kind of arbitrary, definitely very fuzzy, and ultimately not as important as talking about the style and a language's support for that style, like being able to avoid mutation and side effects and still have a good experience.
00:44:03 Question 2: How does performance factor into this?
00:44:26 Answer: Performance is not the key factor in the popularity of a language.
You can use OO and FP at different granularity. Use OO modeling to find the right places in your application to put boundaries. Use FP techniques within those boundaries.
Well said. This is what I have always thought. I hate that 'geek' debate of FP vs OOP. It is stupid. It is like saying you should do a lot of addition instead of a little multiplication.
What you are talking about makes no sense. The OOP idea of using methods that produce side effects to change the internal state of the object is completely against pure functions from FP. I can only guess that what you mean by OO modeling is data structures, which are already present in FP languages.
You missed the main point of why the top 10 is how it is - all languages there are **hybrid**. They are not obsessed with style purity but rather pragmatically add what users want. OO was just a popular style at that time, so languages got some features supporting that style, some more, like Java and C++, some much less, like JS or Go. With languages it is like with sales - you sell what users want, sometimes with the help of paid or coincidental marketing; you do it consistently over a longer period of time, and voila - the language is popular! No magic in there. I suspect that the relative failure of pure FP languages' proliferation is exactly because of their purity. Arguably the most successful FP language is Scala - a multi-paradigm language by their own definition. I would predict that FP languages' impact will remain only indirect - by inspiring the mainstream hybrid languages. When they absorb enough FP features, and you can program in them in FP style with ease, there will hardly be a rational reason to switch to any pure FP language for any project.
Good point! He didn't mention this by calling them hybrid, but I think he meant it when he talked about familiarity and how features were inherited by new languages from old popular ones.
This is so right on the spot. I learned LISP long before any object-oriented language. I will say this "pure function" mentality is quite self-defeating. State handling in pure functional paradigms is a clunker and really unsuitable for a lot of situations. The purists are what's preventing functional from being used more.
True. After I discovered FP and learned a bit of Haskell, I was thinking the same thing. We need to be functionally imperative. Iterators, higher-order functions, and algebraic datatypes are good, but impractical and ancient math theories are just overkill and don't suit the times well.
Speaking as a Ruby/Rails developer, while I love the FP style support the language allows, I find sticking to a "pure" FP style would make my work a lot more difficult. There are times when I actually do want to have the capability to mutate state, and many, MANY times where "mutate object/collection in-place" saved me a TON of compute time and memory.
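For what it's worth, the in-place-mutation point is easy to demonstrate. A minimal sketch in Python (the comment above is about Ruby, but the idea is identical):

```python
# In-place mutation reuses the existing list object; the "pure" version
# allocates a brand-new list every time.
data = [3, 1, 2]

copy = sorted(data)   # FP-style: original untouched, new list allocated
data.sort()           # imperative: mutates in place, no extra allocation

print(data is copy)   # False - two distinct objects
print(data == copy)   # True  - same contents
```

On large collections inside a hot loop, that extra allocation and copy per call is exactly the compute time and memory cost being described.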
This talk reminds me of the dangers of Maslow's hammer and why programmers should seriously consider having at least a basic understanding of a few programming paradigms and different programming languages, especially conflicting ones. So "Why Isn't Functional Programming the Norm?" Because having multiple and/or multi-tools at your disposal and knowing how and when to use each (like we have now with programming) is the real best way to do things in life in general.
That's one reason but not the main reason. It will, however, take at least a beginner's class in computer science to understand that what you write into your program has little to nothing to do with what your CPU actually does. The compiler transformations between the high-level code and the executable have side effects that experienced programmers understand, fear and use. Only amateurs ignore them.
@@lepidoptera9337 You understand that you are basically saying "the main reason is because people don't understand their work environment, and thus, are not making good use of the tools at their disposal", right?
@@Essoje That's the reason for a lot of bad stuff in the world. Having said that, try to go to your boss and tell him "Boss, the thousand hours of refactoring that we just put into the code base are basically all just an exercise in creative comment writing. The compiler removes the objects anyway and makes spaghetti code.". While technically correct, it will certainly not get you that raise that you were hoping for. Your boss might even know all of that, but he has a boss who usually does not. And therein lies the problem. I ran into a similar problem at work once. One of my companies was making test and measurement equipment and I asked why they are not making power supplies, since Hewlett-Packard at the time was making like 400% profit margins on theirs. The answer was "Nobody is going to buy brand X. If you are a manager on a factory floor and you replace HP power supplies with ours to save a couple hundred bucks and one of our products fails, then you need to look for a new job because you just caused thousands, if not millions of dollars of losses. If the HP supply fails then you can simply say "We always use those and they were always very reliable.", whether that's true or not.". Such is the world. Having said that, it still pays to know what a compiler does with your code, even if you are the only one at work who does or who cares about the consequences.
@@lepidoptera9337 The main reason for refactoring code imo is to make the code less unmaintainable after several months/years of unrestrained scope creep. Everything starts off with a definition and suitable structure, and then it snowballs from there because the original definition/design was inadequate. Maintainability has nothing to do with what your compiler does.
@@k98killer That's what I said... it's code documentation. It's entirely for the benefit of the programmers. Neither the compiler, nor the CPU nor the users care the least bit.
Alright so first impressions, this guy did a FANTASTIC job preparing for this speech, because he literally had himself mentally prepared for a lackluster response right from the get-go and knew exactly how he wanted to play it out. Just literally kept rolling. I mean that's genuinely inspiring and makes me want to listen even further.
He was well prepared, but he bluntly avoided the elephant in the room: FP is hard, Python and Java are easy. This is the main driving force behind most of the popular languages (add to this that JavaScript is the language of the web).
@@unperrier5998 Yeah, while it was a well made talk, I would title it "FP enthusiast struggles to cope with reality". He also makes a good case for why support of FP in OO languages is useful and the way to go. There are many partial problems that can be solved best with FP. But in reality, most large software doesn't run for a determinate amount of time and then give a single result; it is an interactive system. And by definition, such systems have to have side effects. Their entire purpose is to have side effects. So strict FP-only languages have no mainstream future because of this, IMO. He makes a good case for why inheritance as-is does not have a future, and Rust for example limits implementation inheritance since it is not a good pattern.
@@unperrier5998This is basically what I'm always saying to myself when I hear people lament OOP. When I was going through the wringer and first learning to program, finding OOP helped me wrap my head around what was going on.
I don't think FP would be considered as hard if history had been different in the way he proposes. I've seen plenty of people learning programming for the first time and having difficulty wrapping their head around what a 'class' is, myself included. Look at it another way: the reason we still use OO languages is because that's what most educations teach. What if those educations taught FP instead?
With a coworker who is obsessed with FP and Clojure, it really feels that way. Everything not FP is evil to him. I just stopped arguing that maybe Clojure and FP are not the perfect solution for every problem, because nothing comes from it.
@@Teekeks Same experience. Everything that's not FP is evil and trash and must be rewritten in an FP language; ironically he almost never does the rewriting.
The reason OO became the norm is that if you want to program procedurally in an OO language, you can. The reverse is not the case. In OO, you don't have to new up an object to make things work. You can use a static class as if it's a namespace, and go completely procedural. Under the hood, a static class is pretty much just a pointer to other pointers. If you wanted to store state and deal in objects in a procedural language, you're going to fight the language. There is a time for programming procedurally, and a time for programming concretely. I prefer having the full toolset at my disposal. It is odd to me that functional programmers are so evangelical about functional being "better". It's like saying a screwdriver is better than a wrench. Sometimes you need one, sometimes you need the other. If I want to build a fast http handler, I'll code procedurally. If I want to build a large scale app with layers on layers, and a FIFO queue batch handling 100s of complex commands that might take 10 seconds to parse, I'd like to keep it organized in my head as much like the domain as possible, and write my code like English. And English tends to utilize nouns judiciously.
Why are you saying that a static class is just a pointer to other pointers? Not saying you're wrong, but it does seem you're going off the rails to make some point.
@@atlantic_love My point was that you can make a static class that contains functional methods, kept in an organized container of the static class, and whether you made a bunch of first class functions, or put them in a static class, they all end up pointing to some executable in the executable memory block. A method is not a pointer at the language level, but under the hood there's some memory address to the method in the executable code memory allocation that a class points to for that method. A static class is just putting those pointers in one accessible place.
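As a rough illustration of the "static class as namespace" idea, a hypothetical Python sketch (Python's `@staticmethod` plays the role of the static members; the names are made up):

```python
# A "static class" used purely as a namespace: no instance is ever
# created, the class just groups plain functions under one name.
class MathUtils:
    @staticmethod
    def double(x):
        return 2 * x

    @staticmethod
    def halve(x):
        return x / 2

# Fully procedural call style: no object is constructed, no object
# state is involved; the class is just an organized container.
print(MathUtils.double(21))  # 42
```

At runtime the class is essentially a lookup table mapping names to function objects, which is the "pointer to other pointers" framing above.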
The reverse is possible. OO tricks you into thinking functions and data live in the same memory area. They don't. The compiler will literally convert it into a procedural program. In C you have handles, which serve literally the same purpose as fields in an object. In C++ your struct is literally the same size as a class with fields, give or take a few bytes for object header information.
Also, FP is the opposite of perfect. Its main issue is community. Try learning Haskell: after they throw a bunch of quasi-intellectual words at you that could easily be replaced with plain ones everyone understands, you will know why most people don't give a fuck. I didn't sign up to learn something just to be told that I need a PhD in math theory, or to be lectured on why to avoid multi-paradigm languages and how great purity is while they unironically use unsafePerformIO.
Simon WoodburyForget If you believe French is pure you’ll be disappointed learning it, the reason it is so hard to learn is because it is full of inconsistencies and grammatical exceptions
@Lemo would you rather it be the ruble or yuan instead? People love shitting on world powers and geopolitics until they realize a power vacuum would just lead to another global power to take its place and thus becomes "subjected" to its influence instead. Kinda the whole reason why the US "keep interfering and influencing" don't ya think? If not the US, who'd you rather it be?
Well I mean it's true isn't it? Not to say that FP is perfect, but it has fewer problems than say OOP. The main reason people don't like it is because it's unfamiliar. And because FP language communities tend to be more arrogant and uninviting.
I haven't done any research on this but I think Python has a "killer app" type thing going for it too. It started to take off when Data Science / Machine Learning started to take off, and it has by far some of the best tools for the job (the killer apps, so to speak). Numpy, Scipy, Pandas, Pytorch, openCV, and Matplotlib.
Yeah, as a new programmer, I agree this drew me to Python. Another killer app so to speak was Jupyter notebooks, it was a very intuitive way to write, run, and test code.
Yeah, Python has several "killer apps" - Django, Jupyter, libraries for ML stuff (written in C but locked to Python because they return mostly Python data structures). The guy is wrong in saying that Python is popular because of its design, since there isn't much design to speak of; it evolved pretty haphazardly.
He answers his own question at 43 seconds into the video: "There's not any one nice, tight, simple answer." Carpenters don't do everything with hammers, or a screwdriver. They have a toolbox, and they select the right tool for the job. The same goes for computer languages - people who become emotionally involved with a single programming methodology are doing themselves a disservice, and when they stand up and advocate that practice they're doing everyone a disservice. There really is no valid argument to discount procedural / imperative programming. Computers are tools that follow instructions. Do this, do that, then do this other thing, etc. They're imperative / procedural by their very nature. I won't claim that style to be the nirvana approach either, but it's an essential item of the toolbox. I guess human nature could account for a person who's really only mastered one tool wanting to make that the end-all be-all "nirvana tool." It just doesn't wash, though.
Exactly this. Most well written programs today will not be afraid to use sum types, as they're great, you will see iterators because they make your life easier, but they also will contain OOP where you need a state because using FP for that sounds like a nightmare.
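To illustrate the sum-type point, a minimal hand-rolled tagged union in Python (all names are made up; languages with real sum types also check the dispatch for exhaustiveness, which this sketch can't):

```python
from dataclasses import dataclass

# A tagged union ("sum type"): a shape is *either* a Circle or a Rect,
# never both, and each variant carries its own data.
@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    w: float
    h: float

def area(shape):
    # Dispatch over the variants by hand; a real sum type would make
    # forgetting a case a compile-time error.
    if isinstance(shape, Circle):
        return 3.14159 * shape.radius ** 2
    if isinstance(shape, Rect):
        return shape.w * shape.h
    raise TypeError(f"unknown shape: {shape!r}")

print(area(Rect(3, 4)))  # 12
```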
Yes. For me, the language I use is mostly the language the system I support is written in. It is not an emotional choice. Heck, it is not even an intellectual choice. If I start a new system, what language will I program it in? Right now, it is Java. Again, not emotional at all; Java just happens to have the most supporting libraries that I can make use of without writing them myself. I was tempted a few times to use Go or Rust or C for the really time-sensitive portions of one of my systems. But once I removed the bottlenecks, I did not feel like I needed to go through the hassle of having multiple languages in one system.

I did not consider C++, although I used it for quite a few jobs in my career. I find it easier to control a C program than a "naive" C++ program. Some environments don't support C++ libraries because of memory constraints (yes, there are still devices and systems like that). And since most of them also don't have memory protection, a C program is actually easier to make reliable and robust than a C++ program. And I got sick of having to restart the device when the system crashed due to memory corruption. Just personal experience.

I used to use Perl for throw-away programming, as I consider it a better shell for scripting. I started using Perl when I had to remember too many programs to write a simple shell script. I don't use Perl as much for that purpose any more, as CPAN is getting more and more outdated. So I use Ruby instead for my build scripts, for example. For simple system scripts, Perl is still the best, as long as I don't need too many libraries.

I don't use Python if I have a choice. Now that is an emotional choice, because I got one too many indentations wrong back in the '90s. After that, well, I still used it if it was warranted or when it was the language used (see the theme?) for the system I had to support. Did I hate using it? Not really. I would avoid it, that is all. Do I use functional languages?
Well, I am still using (pure) Lisp, having written more than one Lisp interpreter for my own use. My extension language inside my systems is Lisp. I used SASL and Miranda back in the day. Loved them, but too expensive to get a copy of my own. Late '80s, I believe. Heck, one of my first loves was SNOBOL4. I just loved the pattern matching stuff. That is the reason XSLT is such a nice language for transforming XML. It just works tons better than any other alternative for that purpose. Swift is interesting. I like many aspects of it. And some aspects I can't stand. But that is the same for most languages I have to deal with. In short, languages are just tools. Yeah.
@@RicardoGonzalez-bx6zd I don't think that explains the actual question. If the toolbox paradigm were correct, then there should be a functional language within the top ten somewhere. Maybe not the top three or five, but one should be somewhere near there. It's like asking why a literal toolbox doesn't have a wrench. "It's because I have all these other tools in my toolbox", that still doesn't explain why there's no wrench.
@@squirlmy If you view FP as a commonplace tool that's as universally applicable as a wrench, sure. But the thing is, most popular languages have just enough of the FP stuff in them these days that going whole-hog functional is rarely the best choice. So instead of a wrench, it's more like, say, a seven-axis CNC mill. Most of the time I'll be using hammers and saws and wrenches and routers and drills and such, thank you very much. The mill sits in the corner. But occasionally I REALLY need what that mill can do and doing it without the mill, while technically possible, is too difficult and the mill gets fired up. (Whole-hog) FP is like the mill, not like a wrench. (A language with some FP capabilities is the wrench.)
Every once in a great while, I watch a video explaining what a monad is. Now having worked in the field 12 years, I can confidently say, I still have no effing clue what a monad is.
We could also do without monads if we used real English words with easy-to-comprehend descriptions of said words that don't take post-PhD math studies to start grasping. We are programmers, not mathematicians. Imagine if welders had to have a PhD in physics and chemistry to fucking weld two pieces of metal...
From reading wikipedia, my dumb brain wants to distill monad to this: "A monad wraps real world side effecty code in a function that returns a value so the rest of your pristine functional code can pretend the real side effecty world does not exist." Is that approximately right or did I miss something?
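That's roughly the common intuition, yes. A toy illustration in Python (this is a made-up `Maybe`-style wrapper for the "might fail" effect, not any particular library's API; real monads generalize the same chaining pattern to other effects):

```python
# A toy "Maybe"-style wrapper: a step that can fail returns a wrapped
# value, and `bind` chains steps while the happy-path code gets to
# pretend failure doesn't exist.
class Maybe:
    def __init__(self, value, ok=True):
        self.value, self.ok = value, ok

    def bind(self, f):
        # Only run the next step if the previous one succeeded;
        # otherwise the failure just flows through untouched.
        return f(self.value) if self.ok else self

def parse_int(s):
    try:
        return Maybe(int(s))
    except ValueError:
        return Maybe(None, ok=False)

result = parse_int("21").bind(lambda n: Maybe(n * 2))
print(result.ok, result.value)   # True 42

bad = parse_int("nope").bind(lambda n: Maybe(n * 2))
print(bad.ok)                    # False
```

The `.bind(...)` chain is the whole trick: each step is written as if its input always exists, and the wrapper handles the "real world said no" case once, in one place.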
We aren't moving from OOP to Hybrid to FP, we are just moving to Hybrid, because OOP is a useful tool. You mentioned C with Classes didn't take off until they added even more features and made C++, but that doesn't mean it would still have taken off without classes. It could have required both classes and the additional features to take off, like all of the other useful hybrids.
I love how functional programmers don't realise we have been doing functional programming as part of good programming practices in multi-paradigm languages forever. They are going to shit a brick when they realise FP isn't the be-all and end-all. There are a myriad of other paradigms which are equally useful, and even more useful in a lot of situations, and that includes OOP.
@@thanasispappas62 I wouldn't even know how to do that. I programmed in dataflow (reactive), OOP, functional, and procedural styles all as a matter of course on an iOS app without even thinking about it. Not to mention all the architectural and design patterns that were in place to hook it all together. I think every single OOP application I have worked on has had some functional aspects, especially in languages people would think are heavily OOP, like C#. I mean, find a C# developer that hasn't used LINQ, monads or lambdas.
@Felipe Gomes So, the first day of learning OOP? Maybe learn a bit more and you will realise that polymorphism is not only not bad, it also has nothing exclusively to do with OOP.
It's because we were taught that OO is the future and the way to go. I think the reason functional programming isn't the norm is that C/C++ has been known for being low level and fast since its inception. The majority of programmers need performance, so they just automatically use C. Since the majority of programmers know C, languages that work similarly to C/C++ have gained popularity faster. Rust is gaining popularity fast because it offers the performance of C while being more logical and strongly typed than C.
@@coldsteel2105 Humans also think in imperative terms (most of them do anyway). You could easily teach a non-programmer Python, but good luck with Haskell.
@@Nik6644 I can tell that you've never tried Haskell, if you think it's hard to learn. It's easier to learn and read than Python. It's true that any programming language, including Haskell, can be written in a way that's complicated and hard to read.
@@coldsteel2105 You can't tell shit about what other people have done. Your opinion isn't the same as other people's experiences. I'm sorry people don't use your language, but pretending you're a mind reader, or that your experiences mean others are the same, doesn't make you a good programmer.
@@coldsteel2105 I don't think at all that that's the reason why. Most of the people programming C are directly in the hardware industry. Since C is such a simple and efficient language, their code can run on rather small chips that are cheap to make. There are only very few chips even capable of running languages like Java that don't also skyrocket the price. Within general app, website or software development, C is rarely used for anything since it's way too low level and people's devices are good enough to warrant the overhead when compared to development costs. Nobody except game developers programs things meant for actual computers in C or C++. A bit of evidence for this is the popularity of C#: that language offers a lot out of the box when it comes to app development, and is actually used a lot in the industry. FP has its place for sure, but mainly in hardware development these days. And the truth is, most developers aren't in the hardware industry.
I think there are real weaknesses to languages with strong FP support: They model problems very well and make it easy to write correct programs, but they don't model machines very well. That's because machines have mutable memory, caches, and registers, and utilizing those things efficiently is how to write an efficient modern program. This is why I think languages that have first-class support for mutating variables and memory will always be important. I think ultimately performance is the hardest thing to get right in any programmer's day-to-day working. It's much easier to test for correctness than it is to test for performance. And especially with the advent of ML systems where more computation leads on its own to more correctness it's more important than ever to accurately and directly model the computation than it is to accurately and directly model the problem you're solving.
This comment deserves a lot more likes. This is the fundamental reason no one wants to go pure functional: you can't go back when you need to map the machine closely. From what most people see, hybrid is ideal and solves real world problems best.
Performance is not that much of an issue these days, by the way; programs generated by the GHC Haskell compiler are as fast as Java code on some benchmarks, because of the optimizations it does.
Sorry, but I really, really, really doubt there are people here in this comment section that can write better code than modern compilers. And I think I haven't cared about language performance for at least a year and a half at my job, in which I use an imperative language. IO, network and data structures are all more relevant, especially to performance. Not performance that would improve by changing programming paradigms, at least. Performance-critical sections can be isolated and improved upon in any language. Even Haskell has a great FFI for the scenarios you describe. Keep in mind the language most used for these "ML systems" is freaking Python, which even JS can beat these days. Unless you work with real-time systems, video games, systems programming, embedded systems or mission-critical software (like airplane control systems), I highly doubt you're writing micro-optimizations every day.
I'm not sure a lot of people even consider that the biggest part of why OOP is popular is the fact that human beings don't think "functionally" in their day-to-day lives. They think "objectively". Object-oriented programming, I would argue, is more intuitive conceptually. We've evolved to think about objects and their states. We have entire sections of our brain specifically designed for it; our visual cortex is one big object-recognition processor. I think us programmers get way too far into the weeds instead of trying to take a step back and thinking about human nature, and the way our brains actually work.
"We've evolved to think about objects and their states." There is no WE. I had a terrible time with RDBMs (relational database managers) working with sets rather than records individually. I pluck records from a database and then process them sequentially. It is what I do.
We ALL natively use inheritance though. You've learned to drive a "car". Because you can drive a car, you can drive a "sedan" OR a "coupe", since they both inherit car. And since you can drive a coupe, you can drive a "corvette" and a "mustang". And all without special action, preparation, or training required to do that. I can throw you the keys and away you go.
@@jadedandbitter "I can throw you the keys and away you go." I get the idea. Items specific to a model (such as where is the headlight switch) inherit "car" and then extend it by adding methods unique to model. As you indicate, multiple levels of inheritance can exist, and it does indeed solve some problems while creating new problems. I doubt we all use inheritance natively. It is easy to assume that all people are like me, but I know this is not the case. What I cannot *feel* is just how different people can be. What you describe is a hierarchical world view and the traditional way of describing this is, if I remember right, nominalism; the existence of an ideal "chair" is the usual example. The ideal might not even exist. From that ideal are instantiated many kinds of chairs; and some things can serve as chairs that were never intended to be in class "chair". But other ways of looking at things don't involve classification. That's perhaps rare, but I've seen documentaries where a person's ability to identify an object depends on its orientation. Hold a hammer with the handle vertical, the subject can identify it as "hammer". Present the very same object but held horizontally, and the subject has no idea what it is. This person must memorize each object in each orientation and does not extrapolate similarities from one object to another. In the case of computer programs, VAST differences exist in *what is wanted* or to be accomplished. OOP treats data instances as an "object" but perhaps with some distinctions like what color is the object. DATA is the focal point of such systems; you ask the data to do things to itself, and the object knows how to do things. Older programming languages are focused on operations and data just happens to be what is operated on. I don't ASK a record to do anything! That's incomprehensible; it is data; it just sits there. This is what caused me so much difficulty with SQL and RDBMs, I am very strongly process oriented. 
A datum arrives at one end, gets processed, and maybe something spits out the other end of the process. It was hopelessly confusing to retrieve a thousand records with a single command: how do I deal with each individual record? The idea is that you don't. If you want to, well, that can be quite a challenge.
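The car analogy in this subthread maps directly onto class inheritance. A minimal Python sketch (the class names here are illustrative, not from any real codebase):

```python
# Hypothetical sketch of the "you can drive anything that is a car" analogy.
class Car:
    def drive(self):
        return "driving"

class Coupe(Car):
    pass  # a Coupe is a Car, so it inherits drive() unchanged

class Corvette(Coupe):
    pass  # and a Corvette is a Coupe

# No special action, preparation, or training required:
# anything down the hierarchy can be driven.
assert Corvette().drive() == "driving"
```

The "throw you the keys" point is exactly that `Corvette` never had to re-implement `drive`; it got the behavior for free from `Car`.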
I think that some of Python's success could be due to adoption in scientific communities, boosted by the 'killer app' of notebooks. The functional programming style may end up getting a boost from applications implementing microservices, where many services use an event bus and immutable data-change objects to synchronize, and from frontend state management that avoids mutation and side-effects, such as React+Redux.
I think the success of Python is the ease of use, no compile step, and very powerful modules like numpy, pandas, scipy etc. This made it super easy for researchers and scientists to jump in and get results without learning software development for 3 years.
@@abebuckingham8198 Functional programming in Python is pretty bad IMHO: stack limit, no tail-call optimization, impure, variables are actually variable (mutable), and the syntax pushes you towards for/while instead of map/fold. Functions in Haskell are actually mathematical in nature (pure, all variables are constants), and being able to do algebra on types/values without any gotchas is a huge cognitive deload.
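The for/while-vs-map/fold contrast above can be made concrete. A small sketch in Python, using `functools.reduce` as the fold (the sum-of-squares task is just an example):

```python
from functools import reduce

nums = [1, 2, 3, 4]

# Imperative style that Python's syntax nudges you toward:
total = 0
for n in nums:
    total += n * n

# Functional style: map each element to its square, then fold the results.
total_fp = reduce(lambda acc, sq: acc + sq, map(lambda n: n * n, nums), 0)

assert total == total_fp == 30
```

Both compute the same value; the difference is that the functional version has no mutable accumulator in user code.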
Don't forget that scientists are not programmers. Python is popular in machine learning because it's like writing English. FP, on the other hand, is hard.
When I learnt CS at university, I was taught as if OOP was to coding what the major and minor scales are to music. I was honestly led to believe that that was the only way to do it, save for maybe a small cult somewhere that still uses some other methods. It was presented as THE way we code. The end of history. I remember the first time I wrote functional code it felt like I was engaging in some forbidden heretical ritual. I remember feeling somewhat surprised that god had not struck me down with lightning for daring not to use a class.
That's really interesting, because my experience is drastically different. In my CS studies, we were shown that OOP is definitely not the only way to go about programming. In the first semester we had Introduction to Programming in C, which made me really like procedural programming. In the second semester we had a mandatory class called 'Programming Methodology', in which we learned the basics of functional programming, and later built an interpreter in a functional language.
@@ShannonBarber78 University of Cape Town. And correct I did not. I got a BSc Mathematics. I took some cs and computer engineering courses as part of the credits.
Major and minor scales are a perfect analogy because that doesn't even scratch the surface: Lydian, Locrian etc modes, Middle-Eastern music with quarter tones, just intonation and further down to the lower levels of music theory iceberg
Honestly I don’t even understand OOP. It seems so insane. Why do I have to fuck around with classes and crap when I can just make functions to complete specific computational tasks. Of course I am not a programmer by trade and most of my use for programming is making tools in matlab or vb script. I’m sure it makes more sense in scenarios where you need a “worker”? Still…
Nice talk! However, one aspect that I think you've missed is that programming desktop GUIs was a "killer app" for OOP in the late 80s/early 90s. Inheritance works relatively well with a fixed set of operations (the widget events) that can be extended to many widgets (cf. Wadler's "expression problem"). Of course it didn't work as well for other domains (e.g. for data structures the parametric polymorphism of SML was much better suited), but the OOP adoption motivated by GUIs blinded people to the alternatives for the next 20 years.
This x1000. Objects were a good solution for GUIs, given how state was managed in any of the systems of the time. It wasn't necessarily optimal for other problems, but because everyone was already programming for the GUI anyway ...
GUI work is one of those things you never realise how difficult it is until you've had to do quite a bit of it. I could reasonably make the case that the GUI functionality of a modern program could be as much as 20 times more work to implement than the actual 'functionality' the program is intended to accomplish. GUI code is SO tedious to write...
So unless you buy into the contemporary "everything is a webpage" bullshit, when exactly did actually hooking up a GUI become any less important whatsoever, for any software that's not just running headlessly on a server back-end somewhere...?
@M. de k. Behind the scenes, there's no such thing as purely functional anything... Something has to generate side effects for things to happen... You don't consider the VM, interpreter or compiler - why should you care about the inner workings of a library when you should only be using its API?
In OO, when they say prefer composition over inheritance, they didn't mean no inheritance. That was quite a jump in the thought process of this presentation, I think. Inheritance still plays a big role in OOP even if you adhere to the composition-over-inheritance practice.
I think the problem comes when you try to use inheritance for code reuse, needlessly or for non immutable type relationships. I would be hard pressed to even remember the last time I used inheritance more than a few times on a project (ignoring abstract classes)
So yeah, there are very few reasons to favour inheritance over composition, and most of the time when you see inheritance, or it causing problems, you can fix it with a simple strategy pattern/polymorphism.
I think this talk omits one very important point about what makes or breaks a language: the end result. In other words, when using a particular language to build a reasonably-sized system, does the language work? Does it perform reasonably well, or is it too slow and requires too many performance tweaks? Does the language have safeguards to catch errors at compile time to avoid production errors (e.g. static typing)? Does it have a decent ecosystem of tools, documentation, and libraries to make me more productive? And finally, does it have a large enough community to share ideas and to keep improving it? When factoring in these considerations, Java scores very high. It’s not just “marketing” as this speaker seems to suggest. This is why Java is still running very strong 25+ years later. Put simply, the language has to “work” to become successful. Otherwise, we end up with Ruby or Scala and people move on.
It's all C++ in my world of Chinese microcontrollers. The annoying thing is learning too many languages, so you mistake one language for another. You just need to know one good language well, and then you don't think about it anymore and it's as if you write in English, where your brain is 99% on the way you design what you are building. All the other software people write is C++ as well, so you can just copy libraries and whatnot and not worry.
@@Andrew-rc3vh Okay, you made me chuckle there, why would you copy a library? It is a _library_, you just use it. It seems to me that you are not getting the most out of your language.
@@MorningNapalm I often download libraries from GitHub, especially the ones used to drive specific chips. Another place is simply forums, often with code boxes on them. C is the language 99.9% use in the things I do. If it is not C, sometimes it is Lua, sometimes the odd Python file too.
Actually, Python had its "Rails moment" around 2012 with the increased popularity of deep learning research and applications. It also coincided with widespread use in systems management and automation scripting.
And now it has a strong community around it with jupyter notebook / lab, numpy and pandas. It has become the default for many applications in science / statistics / data science
@@KaplaBen Don't forget automation, with a lot of easy-to-use packages that can handle most communication protocols in the industrial ecosystem, like CAN, Modbus, MQTT for the IIoT... The community provides almost anything to exchange data with almost every machine, motor, whatever. Just XCP is missing; at least I couldn't find anything for my needs.
@@Luxalpa I do love Unity, but I think Microsoft had way more to do with the popularity of C#/.NET. Besides Windows desktop application programming, there was also ASP.NET which was very popular among enterprise web developers. MSFT saw themselves as being in a very real battle with Sun (later Oracle) for the enterprise market. You'll note that if C# & Java are combined (even with say 30% overlap), that they beat out even JS.
I'd say one of the main reasons for those languages being in the top 10 is that they are so easy to learn once you already know one of them. It's like learning another Romance language after you know one, e.g. learning Italian after you know Spanish. Functional programming, on the other hand, is like going from Spanish to Hungarian.
Learned BASIC in middle school & some Pascal in HS. Taught myself QBasic in the 90s. OOP is like learning that proverbial Hungarian for me. Fortunately, PHP works well as a procedural / functional language. It's only when I have to work with other people's code that I have to deal with the OOP paradigm.
@@Nik6644 Speak for yourself. I learned Lisp first after really struggling to learn imperative languages. Lisp just felt natural and intuitive to me, I would spend countless hours as a kid just writing lisp on scraps of paper and whatnot even when I was on the bus or having dinner, then there was Haskell. I had less intense but similar experiences with Erlang and a few other functional ones. No imperative language has ever made me feel this way, and I wouldn't willingly touch any of that stuff in my downtime.
Very little, yes. I was hoping he'd at least explain FP so I'd know how they compare. From what I've read, FP just sounds like cramming all the data into a module and sub-setting functions into various other modules. Just about any language can do that... except maybe Eiffel.
@@rontarrant And that tends to be the problem. OOP programmers use FP all the time in general-purpose languages. It isn't something that really needs dedicated languages unless you are working in some domain where you want to force the behavior for some reason. The thing the big top-10 languages share is not OOP but general-purposeness. Even if they are domain-restricted like Javascript, they are about as far from one-trick ponies as you can get.
C'mon Ron. You're going to denigrate FP while claiming not to even know what its definition is? Look it up! Your statement sounds like jealousy. "I ain't need no learnin' fron sum one with a paper sayin' P-H-D"! lol. Functional programming attempts to bind everything in the style of pure mathematical functions. It is declarative: its main focus is on “what to solve”, in contrast to an imperative style where the main focus is “how to solve”. It uses expressions instead of statements. An expression is evaluated to produce a value, whereas a statement is executed to assign variables. I was impressed when, in the 90s, I saw someone debugging a LISP program while it was running. Try to do that in other languages.
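The expression-vs-statement distinction described above can be shown in a few lines of Python (the function names are made up for illustration):

```python
# Statement style: execute steps that assign a variable.
def classify_statements(n):
    if n % 2 == 0:
        result = "even"
    else:
        result = "odd"
    return result

# Expression style: evaluate a single expression to produce the value.
def classify_expression(n):
    return "even" if n % 2 == 0 else "odd"

assert classify_statements(7) == classify_expression(7) == "odd"
```

The first version mutates `result`; the second has nothing to mutate, which is the "what to solve, not how to solve" point.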
To be honest, this question seems a bit silly to me, like the old RISC vs CISC debates. Guess what, CISC won, and even the "RISC" architectures provide CISC instructions. But you know what, RISC won, as those instructions are broken down to microcode that is much more RISC like. Turns out, both won. The same is true with functional programming vs OO, both won. All the popular languages have OO and functional features, it's just pragmatic to give programmers lots of tools in the toolbox so that they can solve a variety of problems. And this has been the trend since even the '90s when the first C++ standard added the STL, a set of algorithms heavily inspired by functional programming style, but done in a way that works well with the C++ language. The first book I read that really taught me functional programming was Modern C++ Design. When I tried out Haskell I realized that I already understood the core principles based on my C++ experience, and now it was a matter of thinking purely in terms of functional programming. Nowadays the discussion isn't if a language should be one or the other, but to what degree. Should variables be immutable by default? Do we opt into purity or opt out? etc.
Take Intel's golden-handcuff x86 CPU out of the equation and RISC won. Or better yet, considering that most CPUs in this day and age are ARM processors, RISC won.
@@carlosgarza31 Except ARM isn't strictly RISC these days, just like Intel isn't strictly CISC. That is way too idealistic and in the end pragmatism won out. The old concerns became obsolete once the instruction sets started being implemented in terms of an underlying microarchitecture. MIPS based CPUs are probably the purest RISC processors out there today.
Interesting metaphor/analogy. Btw, Apple might be moving to Arm. Motorola 68xxx (CISC) -> PowerPC (RISC) -> Intel (CISC) -> Arm (RISC). Btw initially Apple was using Pascal, for the first Mac :). Btw ObjectiveC was more like SmallTalk, whereas Swift feels more like Java/C#/C++ type OOP.
I think immutable for "variables" (consts) is kinda easy - const is the more frequent use case, and more safe, so should be default. Now mutable/immutable objects/structs is another question. I wouldn't mind a language in which immutable is default, as long as there is a way to opt-out. I think having immutable as default could help many devs realize the benefits, by just making them "stumble" into the defaultness and ponder (hmmm, this must have some benefits, since it is default :) ).
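Python is mutable by default, but the opt-in immutability discussed above can be sketched with a frozen dataclass (the `Point` type here is purely illustrative):

```python
from dataclasses import dataclass, FrozenInstanceError

# frozen=True makes instances immutable; leaving it off is the "opt-out".
@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
try:
    p.x = 5  # mutation is rejected at runtime
except FrozenInstanceError:
    pass  # the "stumble into the default and ponder" moment
```

In a language where frozen were the default, you would instead mark the mutable case explicitly, which is the design the comment is proposing.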
How much assembler did you do? I did it from the 80s to the 90s, so I saw the start of the debate, and it was clear to me that RISC was the best. The reason CISC ruled is Intel and its killer app (and killer hardware!): Windows on the PC. Intel said "I can do one-cycle instructions too", but that wasn't the point. You cannot mix the two opposite paradigms. It was easier to see in those days: you have very limited space on a chip, so what do you prefer to carry inside? * CISC favors a complex, variable-length instruction set with lots of control circuitry, which leaves little space to store data. The worst, from Intel: all operations worked only on the main register, "A", the accumulator. * RISC was: a very small, very basic instruction set, lots of registers to store data, and any operation works on any register. You don't waste so many cycles moving data to and from memory. REALLY fast. * Complex instructions were less than 1% of a program; many of them were never used at all. * RISC forced you to do complex things outside the chip: that took more instructions, but less data shuffling. What about programming? This is important: RISC was possible because of compilers. Programmers don't need to implement the complex operations in every program; compilers do.
This guy is a good public speaker indeed. I'm not a computer scientist or a programmer but I sometimes do some scripting although my preferred language, R, didn't make it to his list. Nonetheless, I like how the presenter guides us through the history of how these languages were developed and gives us the context in which the decisions that affect us when using these languages today were taken.
Python has a low barrier of entry and seems to be the non-programmers’ and hobbyist programmers’ language of choice. With its large set of libraries, it allows them to focus on solving their problems in math, statistics and AI instead of requiring them to become proficient at software development.
Unfortunately the price is that it is extremely difficult to do great software architecture in Python, without a superhuman effort and dollops of discipline.
Python is far too slow in the way it runs for general use. I could write sloppy C code which could still far outpace Python. Python will probably reside mostly in academia. It's easy to learn and quick to write. When you need fully optimized code, C or assembly will likely be the best option for many years to come. I write code for microcontrollers so I'm obviously biased. If you want one answer quickly, choose python. If you need many answers quickly, choose C or ASM.
Python is also huge in the second and third world, where education is not as great and Java and C# are not popular. A lot of this is because they don't speak much English there, and tutorials for the most popular languages here are in English. I've had personal experience with this, trying to help people in former Soviet countries learn the languages I know. In the end, Python was much easier for them just because most universities there don't even have a programming major, just basic IT work.
@@billybbob18 I think for small-medium scale systems, things like business CRUD applications, Python or a similar language can make a lot of sense. A small scale CRUD application might only take $500 a month to host, but developers in the USA cost like $10k a month.
Nah, Python has a REPUTATION for having a low barrier to entry. There is a big difference. A language with a low barrier to entry would be something like Go. Python is actually a lot more complicated, less predictable, and with more non-obvious behavior.
8:20 - No one knows why flash died? I can tell you EXACTLY why flash died. It died the day chrome made flash disabled by default. I'm in the ad industry and when chrome did that, the entire industry switched from flash to js overnight.
Flash died because it was designed as a temporary workaround that would die once JS was finally standardized. Since the early days JS was the future of the Web, but its adoption was slowed down by a stupid war between the IE and Netscape teams, who decided to systematically implement each method slightly differently and with a different name. So any JS code from that time consisted of 2 copies of the same code, one for IE and the other for Netscape. While that stupid battle was ongoing, people who wanted truly interoperable and easily maintainable code resorted to a few workarounds to replace JS in areas where it was lacking or impractical. Flash was one of them, along with Java applets and a couple of other third-party modules. Now that everyone has finally agreed to work together on a single standard JS implementation, all those workarounds are no longer needed.
@Svein Are Karlsen The programming language is not responsible for bad coding practices by some amateurish plugin developers or web site creators who don't care about memory usage because the code executes on the client PC instead of their lovely server. Neither for the ad-based economy of many web sites that are cluttered with banners and unskippable commercials. First tip to reduce browser memory usage is to install an ad-blocker. You'll benefit on memory usage, loading time and readability on most web sites. After that if a single browser tab uses several gigs of ram, find another less memory hungry web site that has the same info with less strain on your computer. If all your tabs are affected deactivate all plugins and reactivate them one at a time to identify the crappy ones and look for alternatives. I have about 20 tabs constantly open for several weeks, some with video and various memory intensive dynamic content, plus a dozen plugins that add overhead in each tab. And even with all the memory leaks that built up over time the total memory footprint of my browser is less than 2GB.
Flash not running on iPhones or any of Apple's products was also a significant factor as to why it became abandoned. And of course, the reason for this lack of compatibility was primarily motivated by corporate rivalry.
Flash had a specific niche for creating vector-based graphic animations with extremely small file sizes. The spec standard for banners was 40k, and we used to fit pretty complicated stuff into that size, with animated cartoon characters and such. The new JS banner standard spec started at, and still is, 200k, and the animations are extremely simple compared to what was happening in the Flash days, because still, after all these years, nothing even comes close to Flash's ability to quickly create advanced animations in very small files. The fact Flash didn't run on mobile was fine. Mobile is always a separate build anyway: simpler, smaller-dimension ads, or quite often just static images. The niche of easily making fun, cartoony content for the web died when Flash did.
@@cakep4271 Perhaps not being accessible on mobile was fine at your company, but at the company I worked at it was reason enough to switch every future project to JavaScript.
In my experience (mostly with PHP), most code mixes FP and OOP without problems. Some structures make more sense as clearly distinguishable objects; some others don't need to. So asking if FP should be the norm is like asking if HTML should be the norm instead of CSS. At the end, whatever suits you. Most of the time it makes more sense to add a class to the DOM and let CSS draw things as they are supposed to be. But sometimes you are faster, or it makes more sense, to simply style the DOM node without relying on CSS. So... my guess is FP+OOP is the future.
I know for a fact that in the 80's my Dad was studying so-called structural programming at Moscow State University. No mutation, pure functions, all that stuff. It WAS the norm. At least there and then.
I wonder what's next... Functional programming zealots knocking on my door, holding a brochure asking me whether I have a personal relationship with pure functions yet?
The biggest difference between circle.grow(3) and grow(circle, 3) is IDE hints. You don't need to remember the names of functions in the first syntax; you can just put a dot, and the IDE will show what you can do with the object.
The biggest difference is that in the second you're passing an object, and the function doesn't have to have a reference to that object to do something with it. Whereas if I had the following: grower.addThree(circle) — this would break OOP's rules of encapsulation, because I'd be passing an object, not information about the object.
True, as of today. However, one could imagine, e.g., writing `circle` and pressing something like alt+enter for "show me the operations I can do on this object", which would be even better, because it would include cases in which it is not the first (or `this`) arg. CLOS multiple-dispatch polymorphism comes to mind.
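In Python the two notations in this subthread are literally the same call: `c.grow(3)` desugars to `Circle.grow(c, 3)`. A hypothetical sketch (the `Circle`/`grow` names are invented for illustration):

```python
class Circle:
    def __init__(self, radius):
        self.radius = radius

    # noun-first: circle.grow(3)
    def grow(self, amount):
        return Circle(self.radius + amount)

# verb-first: grow(circle, 3)
def grow(circle, amount):
    return Circle(circle.radius + amount)

c = Circle(2)
assert c.grow(3).radius == grow(c, 3).radius == 5
assert Circle.grow(c, 3).radius == 5  # method syntax is just sugar
```

Which is to say the dot is mostly a discoverability and dispatch convention, not a different computation.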
Good question. And even better answers. After programming mostly in C# for the last 15-20 years, I am currently learning F#. I also believe that functional programming will become more important and prevalent in the next years.
I think the speaker missed one major reason why the FP style isn't the norm: because, at times, it can get really weird and difficult to understand, especially for beginner programmers. At times. But not always. The OO style has the advantage of accessibility, and honestly, that's very important to a lot of language creators/maintainers. Python is such a popular language because of its ease and accessibility. I also think the reason languages like Java adopted the FP style was generally to give programmers an avenue to write shorter, more compact code. But all in all, it is growing in popularity, and I certainly love it, that's for sure. I really do appreciate it.
I agree with your first paragraph. For me, I understood what FP was for a long time. I had read many many books and articles on FP. I had finished tutorials etc... But, I didn’t use it because I didn’t understand how to jump beyond the text books and tutorials. It was only when I started doing some things to fix a problem I had that I suddenly realised after a couple months that I was doing FP. Since then it has clicked with me and I use the ideas a lot now.
"Hybrid language" basically means any OOP language that markets itself as a "multi-paradigm language", meaning that it has basically zero facilities for functional programming.
OOP features sometimes conflict with FP's. For example, I cannot imagine how interfaces (like in Java) can coexist with type classes (in Haskell, or traits in Rust). And how can both algebraic data types and inheritance work nicely together? So I don't think hybrid is always the best idea.
Engineering has always been about resolving what can be done vs. what can be done economically. This is true even in software engineering, and it isn't complicated. Much of what we do today in software engineering is a direct result of the free software and open source movement, and it is the main reason why programming has exploded over the last twenty years. But first let me talk about Visicalc, because I think the analysis of why it became popular is completely wrong. First of all, I have no idea where he is getting the $10,000 price tag for the hardware. An Apple II computer with monitor was just under $1,300, and two floppy drives at $350 each bring it up to under $2,000. So if you, as a business, get 5 of these, then yes, $10,000 is achievable. But as a business, where you have a serious need for this kind of work, you either shell out $2,000 per machine, or you get a minicomputer system or timeshare on a mainframe. IBM's minicomputers at that time could cost anywhere between $10,000 and $100,000, not including the maintenance contracts. So if you look at getting 5 microcomputers, which you own, for $10,000 vs. a low-end minicomputer for $10,000 plus additional costs, as a business it is a no-brainer. Additionally, the staff you would need for running the software and maintaining the equipment was far smaller for microcomputers than for minicomputers. It really comes down to the economics, not the "killer-app" theory. So now let's look at the programming languages themselves. In fact, let us look at the top three languages: Javascript, Python and Java. They have two things in common: they have been around for better than twenty years (remarkably, Python was created in 1989) and they are free. While Richard Feldman explains that Javascript is dominant because it holds sway over the internet and internet applications, that is strictly not true.
Javascript spread beyond the browser with the advent of platforms like node.js, plus many applications that embed Javascript as their scripting extension, thanks in large part to Javascript interpreters starting with SpiderMonkey from the Mozilla Project. Java went from a horribly slow and buggy virtual machine and programming language to a fast and stable environment for serious enterprise programming. People who have embraced Python have developed hundreds of libraries, and it has been the go-to environment for big data/machine learning/predictive analysis applications, which can be written quickly and easily. Much of what it can do can also be done with the R language/environment and Octave, but without the overhead. C# is technically free: Microsoft has made it free with the Community version of Visual Studio, and there is its open source counterpart Mono. It has also matured since it was created 20 years ago, and when I did use it for some of my projects, I found it very easy to use and was able to get projects done quickly. One of those projects was a small web server which I was able to have up and running in a couple of weeks. C/C++ used to cost around $100 - $1000 for a compiler; projects like GCC and Clang have brought that down to zero. My first C compiler in 1990 cost just under $100. My first Modula-2 compiler in 1988 cost $150. In 1996, the C/C++ compilers came as part of the package of MSDN development tools, so it is hard to say how much the compiler cost by itself. So if you look at any language now, they are free. You pay for the development environment and tools to go along with the language. You also have to have the experience to go along with it, which means you have to find people who have wanted to invest their time in those languages, because companies do not want to pay for it. C/C++, Java, Javascript, C# and Objective C have similar structures and syntax.
Python got its wings in academia, where they are not as concerned about ROI (return on investment) as commercial businesses. What holds functional programming back is a compelling reason to use it. Richard Feldman has his roots (pardon the pun) in Elm, so I went to the elm.org website. Its title says: "Elm - A delightful language for reliable webapps". I don't use a language because it is "delightful". I use it because it solves a problem. I have used BASIC, FORTRAN, C/C++, Modula-2, Java, Javascript, XML/XSLT, Prolog, LISP, Ada, PCL5, Postscript, Ant, Bash, and others, all with the intent of solving problems. I have had no compelling reason to use any functional programming language. Maybe one exists, I just haven't seen any. The second problem I have with it is how the functional programming community acts like they are victims. "Woe is us. Nobody likes us. Why doesn't everyone think like us?" That is what this 46-minute video is, an expression of victimhood, when they should instead give examples of why their paradigm is better than others in particular cases. If you can't do that without slamming other tools, no one is going to take you seriously. If I say that you ought to use a hoe exclusively over a shovel for gardening, you would (justifiably) laugh at me. Why are you so concerned that other people don't think like you? It's silly and it makes you look childish. It looks like you are treating your choice of language paradigms as a religion rather than as engineering tools. So stop blaming others for your problems. Stop whining and crying about how you are not understood. If you really have something (and I am not convinced that you have), then present your advantages rather than try to undercut everyone else.
I read your comments... how interesting and compelling they are, and I wish I could get more of them... well done. Most people in this field do not express themselves as you do... my opinion.
Tldr. Nah, just kidding. You make a good point: project budgets, ROI, training, knowledge transfer, etc. FP (as I saw personally with Haskell and ML) has a way harder learning curve than others (like C, Java, Javascript), so there is a market problem. I do believe FP is (somehow) going to be in the top 10 someday though
That's why I stopped thinking with languages entirely. I just go like “I want to make a math function” and punch some bits into an executable file and it just works
Interesting talk, though as some mentioned, Python did get a boost from the ML/AI hype train the last 7 years or so. I've always thought that functional languages and the functional style have never (and maybe will never) become the dominant ones because the world and people and computers don't operate that way. I think most programmers have heard the quote "To iterate is human, to recurse is divine". Well, that's just another way of saying people don't naturally think recursively; they iterate, they get a list of directions of steps: do this, then do that, and each step changes the state of the world. Similarly, computers are basically really complicated state machines. A program, by definition, changes the state of the machine; even an empty program that doesn't do anything useful and just immediately exits is still changing state under the hood. And while a functional style might give performance benefits in rare situations involving big data operations across multiple servers and things like that, in general most applications are much faster when written in a traditional imperative/procedural style. The classic obvious example is gaming or anything graphical, because generating a whole new game state and frame buffer every 16 milliseconds rather than editing in place is prohibitively expensive. Another point that's been made before is that a huge amount of programming deals directly with the hardware, the bits and bytes of configuration and initialization, low level drivers, embedded microcontrollers, none of which are feasible to do in a functional style. Even if someone wrote an OS in a functional language in a mostly functional style, there's no way to go all the way, including the bootloader, firmware etc. Edit: I forgot 2 more factors that I think are important. One, it is much easier to reason about performance when dealing with an imperative C-like language than a LISP/Scheme-like language.
It is easier to reasonably guess what the assembly would look like (and even possible to do inline assembly in C/C++ etc.). Even if you can write equally performant functional code, the generated assembly is not something that you could easily guess or map back to the source. Two, it is much easier to edit an iterative, C-like block-structured language. We edit code in lines, and semantically the code executes line by line. We can easily insert or remove lines, even non-trivial chunks that add/remove/change significant behavior. To do the same thing in a Lisp-style language might change the entire structure of the program/function, or more likely several functions. Our textual editing and debugging tools map far better onto a line/block oriented language than a Lisp-y functional language. Whether you actually use a debugger or print statements combined with careful thought, it is far easier to do with C than Lisp. Granted, there are functional languages that are more block-like, but it's a semantic problem too. The functional style of first-class functions created and passed around willy-nilly is harder to step through even if the textual representation is more traditional.
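The iterate-vs-recurse contrast above can be sketched in a few lines of JavaScript (a toy example, not from the talk): both versions compute the same sum, but the loop updates explicit state step by step, while the recursive version describes the result in terms of a smaller problem.

```javascript
// Summing an array two ways. The imperative loop mutates an
// accumulator on each step; the recursive version has no mutation
// and instead expresses "head plus the sum of the tail".
function sumIterative(xs) {
  let total = 0;            // explicit state, updated each iteration
  for (const x of xs) {
    total += x;
  }
  return total;
}

function sumRecursive(xs) {
  if (xs.length === 0) return 0;               // base case
  return xs[0] + sumRecursive(xs.slice(1));    // head + sum of tail
}

console.log(sumIterative([1, 2, 3, 4])); // 10
console.log(sumRecursive([1, 2, 3, 4])); // 10
```

The loop also maps almost line-for-line to the assembly a compiler would emit, which is the editing/debugging point made above.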
@@jackmurphy8696 Probably not. Serious reverse engineering is hard enough already that iterative vs recursive in assembly isn't going to make much difference to them, certainly not compared to far better obfuscation techniques. Just use the normal obfuscation techniques combined with the movfuscator if you really want to drive a reverse engineer to suicide
@@RobertWinkler25 I figured it wouldn't stop any of the good ones out there, but it would certainly stop me. That stuff is really hard. Thanks for this info.
@@BrunodeSouzaLino The reason for starting from scratch was that between 1-4, maintaining backwards compatibility was starting to hold the language back, which isn't unique (see Python 2 -> 3). That said, if you haven't played around with Perl 6 (which is being renamed Raku to better distinguish it from Perl 5), you should take a look at it. Speed is finally mostly on par with other scripting languages and it handles FP very nicely (not surprising, given a lot of the improvements to Haskell came from P6's initial implementations that were done in Haskell, so developers were very used to the FP style)
@@MSStuckwisch That's mostly what happens when you favor features and don't think hard about the future. That's why C++ succeeded. They really carefully wait a very long time before adding features and keep backward compatibility a TOP priority. They never had to break C++ in order to go to the next version. Therefore C++ is superior. That's why people say: C++ MASTER RACE. Understand?
I remember those days. That was around the time I invented AJAX without realizing it (I was trying to get around browser incompatibilities). Perl is a fantastic language. Even Lisp snobs won't criticize it.
Functional languages all have the same basic flaw: that's not how computers work. Sure, you can force a computer to implement the desired operations, but it is less efficient (under the best circumstances) than procedural languages that more closely align with what the hardware is actually doing. This rules out functional languages in any circumstance where performance might be critical. A good example of this is in embedded systems, where excess compute power is a waste of money and thus affects profitability. If I'm going to sell 1M units of something, then it's worth an awful lot of developer time to use a $1 cheaper microprocessor, and it turns out that in most embedded use cases, it's actually much easier to write procedural code than it is to write functional code. Functional language developers also fundamentally misunderstand what computers are for. Computers are not primarily used for calculating results of functions. Rather, computers are about controlling the real world, storing information and sorting data. Calculating results strictly from current inputs is a very small piece of that. In other words, the "side effects" that functional languages poo-poo so much are the only real reason that computers even exist. Pure functions are the aberration, not the norm. For that reason alone, functional languages will never become commonplace.
You got it. You also mention the one and only side effect your employer is actually interested in: to make money. FP folks are a lot of things, but they are not engineers. They do not care for the three goals of engineering: To get it done. To get it done on time. To get it done on budget.
Wow, that's a rather strange argument: "if this other form of Java had been marketed, it would have been the major paradigm". Don't you think a lot of thought would have gone into deciding which language to market in the first place? He is also vastly discounting potential reasons like 'ease of use' and 'productivity', both of which are pretty hard to measure from language to language, but perhaps the dominant languages just 'evolved' to be dominant for exactly those reasons
_> Don't you think a lot of thought would have gone into deciding which language to market in the first place?_ Why would you think that? Microsoft had BASIC. Sun had Java. Java was originally called Oak and was meant to be a better C++. Sun wanted something that runs on a VM, something hardware-agnostic, which C and C++ are decidedly not. And something with a more marketable name. Apple had a very marketable name. What kind of name is "Oak"? So they took the language they had, changed the name, put it on a VM, and aggressively marketed it, not always strictly honestly. The language can be trademarked, but any Turing-complete language is just one of infinitely many ways to express the same things. The algorithm doesn't care what language it is written in: it is still the same algorithm. The CPU doesn't care what language the code was compiled from: it is processing the same instructions.
True to a point. However, most of the frameworks that we've adopted in the last decade or so have just complicated things imo. I feel like we're at a point where we're changing for the sake of change... Has anyone else felt this, or is it just me?
I do program in C# with an FP style. It's interesting that my colleagues pick up on some of the patterns I use as, "oh that's a really good way of making systems safe and easy to test" but don't immediately identify the patterns they are starting to use as FP.
I would love to see well-used functional programming. The codebase right now is full of functions that change one state into another, and to edit it you have to jump from line to line. It's complete spaghetti
Agree, and I would say more important than functional programming is functional thinking. Code can be structured in a functional way and give the same benefits as programming in functional languages.
C++ is actually evolving SUPER FAST right now. Soon we have Modules, Concepts, Meta-classes. and much fucking more. It just sucks everything up that is good, without losing back compatibility. For me it's the best language that currently exists. Hope C++ also sucks up things from Rust. :) Because rust has some great ideas.
• "Avoid mutation and side effects": anyone writing C++ with consts and no globals is doing this by default. I find compiler support for this in C++ to be just as good as in Scala. • "1st class functions": C++ requires some typing ceremony around lambdas; Boost makes this slightly less eye-gougey. • "support for the style": modern C++ culture is all about being functional, enough so that I'd argue that it compensates for the ceremony cost.
javascript isn't really OO - it is better described as prototypal in that it does away with classes and methods (although you can emulate them, and recently they have added syntactic sugar for them) - functions are first-class citizens in javascript, so you can *almost* argue that javascript is also a functional language, although it falls short for purists. You might call it a PROTOTYPAL-FUNCTIONAL language as it straddles both and, in doing so, is a unique and effective way to program. Add to that its C-style syntax and you have a legible and easily understood syntax in a unique language that allows you the best of both worlds, where sometimes functional style and composition are favored to solve problems, or where encapsulation (which it does quite well with closures) and strict hierarchies (that are unlikely to change) are needed. It wasn't just that javascript had dominance in web clients that propelled it - it was the freedom it gave us.
@@karlschipul9753 JS has always had "true OOP" classes, it's just that the inheritance model is prototypal. class is merely syntax sugar, with only a couple differences.
What even is the point of prototypal inheritance tho? The only thing I've seen it useful for is shims, and that's more because of the lack of a standard library for JS and less because it was a genius idea to implement
The difference between Prototypical and Classical OOP is moot to normal programmers, it is only important to language implementers and low-level hackers. The only practical difference is in Prototypical you can grab the "prototype" object and manipulate it at runtime and have that change affect all pre-existing objects.
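That runtime difference can be shown in a tiny JavaScript sketch (the names are invented for illustration): patching the prototype after an object already exists changes that object's behavior too, because method lookup walks the prototype chain on every call.

```javascript
// A constructor with a method on its prototype.
function Greeter(name) {
  this.name = name;
}
Greeter.prototype.greet = function () {
  return "Hello, " + this.name;
};

const g = new Greeter("Ada");
console.log(g.greet()); // "Hello, Ada"

// Monkey-patch the prototype AFTER g was constructed:
Greeter.prototype.greet = function () {
  return "Hi, " + this.name + "!";
};

// The pre-existing object picks up the change, because `greet`
// is looked up on the prototype at call time, not copied at
// construction time.
console.log(g.greet()); // "Hi, Ada!"
```

In a classical-OOP language the method table is typically fixed when the class is compiled, which is why this trick mostly matters to shim authors and language implementers.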
Composition over inheritance is advice you give to people learning OO, because when they start getting into OO they have a tendency to create subclasses for every field they need to add, when in fact they would be better served by having more members in some object rather than a specific subclass for a given structure. The "aha" moment comes when you realize that objects aren't there to encapsulate data/structure, but behaviour. And this is where we get into the most important feature of OO, which is conveniently left out of this talk: polymorphism. And if you want to understand just how powerful this can be, go look up the "template method" design pattern (in fact, a lot of design patterns are really difficult to implement without OO features). And more importantly, as mentioned, functional programming requires no special language features. There's a reason for that. Paradigms are hierarchical: Functional -> Imperative -> OO. You can do functional in OO languages; you can't do OO in languages with no OO support (or rather, you can hack something like it, but it's not worth it). OO languages have MORE features. That's why they're popular. And applications aren't paradigm-pure. Most real world applications don't use just one paradigm, they use all of them as needed. Calling a static function to make some calculations? Functional programming. Calling a function to change a state (say, saving something to a database)? Imperative programming. Using a subclass to extend the behaviour of your web service interceptor? Object oriented programming.
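For readers who don't want to look it up, here is a minimal JavaScript sketch of the template method pattern just mentioned (class and method names are invented for illustration): the base class fixes the overall algorithm, and polymorphism lets a subclass supply only the step that varies.

```javascript
// The base class defines the skeleton of the algorithm; render()
// is the "template method" and is never overridden.
class Report {
  render() {
    return [this.header(), this.body(), this.footer()].join("\n");
  }
  header() { return "=== report ==="; }
  body()   { throw new Error("subclass must implement body()"); }
  footer() { return "=== end ==="; }
}

// The subclass overrides only the varying step; the overall
// structure (header, body, footer, in that order) is inherited.
class SalesReport extends Report {
  body() { return "sales: 42 units"; }
}

console.log(new SalesReport().render());
// === report ===
// sales: 42 units
// === end ===
```

The dynamic dispatch from `render()` to the subclass's `body()` is exactly the polymorphism the comment is pointing at.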
@@vast634 At least according to Wikipedia, the development of C# started in 1999 because the developers of the .NET framework felt that they needed a new programming language. For some reason (maybe because Sun had sued them), at that point they decided to develop a new language instead of continuing J++.
I used COBOL for 30 years. Then I needed to write some smaller, simpler stuff. I tried VB3, then 4 and 5, and I was writing simple, useful apps in minutes for customers who were delighted. I was destroying a team of writers with quick, simple applications that weren't brilliantly written or superbly disciplined code. They just worked, and gave me a living for another 10 years. Some COBOL I wrote in 1984 is still running, and some VB apps are still running decades later. Not because they are great, but because they work and do a job in a simple and accurate way. Why there are no great visual languages now, I do not know.
The problem with functional programming is that it's a paradigm that requires the ability to formulate intensional rules. Most people are not very good at this sort of analytical modelling, and they prefer a more explicit, procedural approach. Procedural (imperative) code, while potentially verbose, is easier to decipher. If you go back to set theory, most students would find it easier to describe sets by a non-compact system of attributes, as opposed to the optimal, overarching construction rule. Functional programming buffs get all excited by how elegantly they can model their problem with lambda calculus, and we're all in admiration... but it requires a specially-wired brain and in the end it's not necessarily more practical. The proof is in the pudding: it costs nothing today to have a go at functional programming, but it's not very popular. And it's not because the job market requires Javascript or C# - the job market requires the most efficient tool for the money it spends. You may be super efficient in Haskell, but that is just you. And the fact that the majority of engineers are more comfortable with an imperative paradigm doesn't make them lesser engineers.
The whole point of functional is that it is easier to understand and read, because it is declarative and modularized. A pure function will always be easier to understand and test than a mutable object. Objects require setup and teardown to test, and since they are mutable they are harder to predict. Not only that, but changing the order of operations on object method calls can lead to unexpected results, because associativity is not respected. Overall it is stupid to say that objects make things easier to read. They add boilerplate and interdependencies, and make your code base tightly coupled and a nightmare to read through. And once you are done with your complex UML diagram (the only way for a person to mentally grasp the complex class hierarchies), if your user requirements change, then you are fucked, because OO systems are fragile and not robust. Not to mention all the damn design patterns you must learn to compensate for the shitty OOP features. Sure they solve the problem, but they are over-engineered and you have to study for years to do it right. Prime example: the Strategy pattern. That shit is literally a higher-order function. Which one is easier? FP. FP is superior. The only issue is pure FP. Programs need side effects. So the best style uses heavy FP plus some procedural logic for side effects. OOP is trash as fuck.
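The Strategy-pattern point can be made concrete in a few lines of JavaScript (the pricing example is invented): with first-class functions, the whole pattern collapses to passing a function as an argument, and because everything here is pure, testing needs no setup or teardown.

```javascript
// checkout() is parameterized by a pricing "strategy", which is
// just a plain function from subtotal to final price - no Strategy
// interface, no concrete strategy classes, no wiring.
function checkout(items, pricingStrategy) {
  const subtotal = items.reduce((sum, item) => sum + item.price, 0);
  return pricingStrategy(subtotal);
}

// Two interchangeable strategies:
const regular = (total) => total;
const memberDiscount = (total) => total - 10; // flat $10 off

const cart = [{ price: 30 }, { price: 70 }];
console.log(checkout(cart, regular));        // 100
console.log(checkout(cart, memberDiscount)); // 90
```

In class-based OOP the same design typically needs a `PricingStrategy` interface plus one class per strategy; the higher-order-function version is the comment's "which one is easier?" argument in miniature.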
I had difficulty reading Lisp code when I first approached the language - Common Lisp and Emacs Lisp. In English we are taught to read from left to right, and then down. Reading Lisp requires us to find the middle, or innermost, function and then read the code in an outward fashion - i.e. read up, down, and outwards from the center until we reach the outermost containing or "top-level" functions. It's disorienting to the eyes to have to dance around the page in this manner. The silver lining is that you CAN get used to it, and it becomes natural after a while, but you have to C-/ or at least adjust your left-to-right-and-down approach to reading text
I noticed that ironically enough, this talk is just like functional programming: it's creative, insightful and seems to be very precise but it doesn't address any practical problem and it has reactionary logic, all about "what isn't" rather than "what is" or "what can be".
The most popular Python projects are actually: ML: TensorFlow, Keras, Scikit-learn Web: Flask, Django Utilities: Ansible, Requests, Scrapy, ... (among many others) I would say Python has no single killer app, but a rather healthy amount of great projects in many areas.
@@crides0 For the same reasons that Visicalc/Excel are killer apps: They allow people to create things that are easy to share with other people, and those other people can then tinker with, without needing a lot of prior expertise with the app.
@@ernstraedecker6174tkinter got me into gui design. This is pythons greatest power. The default library out of the box gets you so far. You never really have to leave the language.
Don’t laugh about the Java smart card: Oracle works on getting Java onto embedded devices again with Graal. Python is simply very readable and useful in a way which captured people, and people built good tools for it. And it has the Zen of Python: just start python and import this (literally "import this"). That’s a focus it always kept. It kept its APIs usable. Though the original idea of "programmers need less freedom than lisp" turned out to be wrong, since Python now provides a lot of metaprogramming tools. Objects and methods are most essential for IDE auto-completion - basically developer UX via "just put a dot after some symbol to see what you can do with it". I feel the lack of that every time I program with Scheme.
You're right about auto completion. And this is something I value highly; some people don't seem to get what a savings it affords. All the same, I'm also a convert to FP thinking and Haskell-style composability. What's needed here is a way to do the equivalent of "completion" using the argument signatures of the desired function. Much like Hoogle, but preferably accessible via a few clicks from your favorite IDE...
What is the point of running a JRE on an embedded device with limited computing power? The whole idea of Java is that it is cross-platform; does that mean my software can now run on both my fridge and my laptop?
@@douwehuysmans5959 The point is to be able to re-use your Java/other-language/libraries experience. And yes, but it will only be good for your fridge if you start development with the fridge. The point is similar to the point of using node.js serverside. A java-shop can then more easily get into building embedded devices.
@@EighthDayPerlman If you are a fan of Haskell and have been using Java I'd advise you to take a look at Frege. Frege is an implementation of Haskell for the JVM. Basically it would allow for using FP style code with all the features of Haskell while maintaining compatibility to the "impure" java code.
@@douwehuysmans5959 Wrong the whole idea of Java was to create a language that uneducated and unmotivated cheap indian coders could use. Together with UML it was the dream that no money needs to be wasted on programmers and CEOs could make even more.
Not just that, but all hardware instruction set architectures are procedural. So when translating from how people think to how computers work, functional languages are a complete non-sequitur.
Maybe they have more similarity to *your* thought process. I think pure FP would appear more natural to someone with a stronger mathematical background.
@@altus3278 Good point. Nevertheless, since the core digital computer concept (the von Neumann architecture) is procedural and was also created by an extraordinary mathematician, pure FP might not necessarily appeal as an obvious first choice, even to one of the great mathematicians. However, someone like John McCarthy is a great example (the inventor of Lisp).
I mean there is no denying that a language like C is closer to the hardware than any FP language can ever be. And that's because C is a very close representation of how hardware actually works. I remember Linus Torvalds once saying he loves C because when he reads C, he can deduce what actually happens on the hardware. You'll always need a translation layer that converts any FP language into a procedural one, simply because a computer is a state machine, and FP languages are not a state machine. Thus it's just easier to write a program in a procedural language like C and understand what will actually happen on the hardware (if you care about that).
I don't get this. I don't at all think procedural languages are closer to thought processes, but maybe it's different from person to person. I can much more easily keep track of objects, how they relate to each other and what they can do in my head than a strict order of operations. The latter is what the computer does, and my brain is not a computer.
To be fair that's the subject. It's not "functional programming is awesome", it's "why isn't it the norm". Most of that explanation is why OOP is the norm instead, which is a history lesson in popularity
I see functional programming as when your program is limited to a single line, a function call. And that function may call other functions, but is still limited to a single line, and that line is the return statement which includes the value returned by the function. There is of course no assignment statement as no values may be modified and there can be no side effects like input or output. In other words, no functional program can be of any value except to the theoretical mathematician.
Why isn't FP the norm? Because it's inefficient for most things. The goal of a programmer shouldn't merely be to minimize the time they take writing code, but also to minimize the time their code takes to run. I'm writing this in 2022 and nearly all software still has problems with the second part of that. Modern computers with 16 cores running at 4ghz with 32gb of ram shouldn't feel so slow to use and they definitely shouldn't max out their RAM usage. The first computer I used at home was garbage, 533mhz Celeron, 64mb of ram with a 10gb hard drive, upgraded to 256mb of ram and 30gb of storage, and you know what, it ran fairly fast. The only software it couldn't run well were most games, and it seems like that's still a big problem. The key issue with regards to software is that things need to mutate to be useful. Working around that requires writing horribly inefficient code. As far as I can tell, our biggest problem within the industry is that no one can agree on syntax. It's the main reason why developers seem to accept or reject a language. All the code written today could have been written in C. Whether it would look good is another matter, but it could be done and with intelligent designers could be done well. What we really need is not yet another toy language/library/(proof of concept), but rather an attempt to efficiently solve the problems of each given arena and no more. A single language within each arena that aims to be what it needs to be and only that. No kitchen sink, rather singular focus.
Kind of disagree. Optimizing things a bit with respect to memory consumption I can sort of agree with, but trying to optimize for CPU efficiency shouldn't be a big focus today. The use of libraries to handle boilerplate things means your average programmer shouldn't need to think as much about it as they did way back when. As for games, the issue now and then is largely the same. Our CPUs are getting more powerful, but the explosive gain in performance has come about mainly due to multithreading. If you are rendering or displaying something in real time, you need to maintain control in your main loop, and this is limited by single-core performance. Hence it's an exception to the general rule where optimizing for CPU efficiency really matters. Kind of off topic, but for games one of the biggest drawbacks of the modern architecture is the CPU/GPU boundary. Sure, new methods have come around to alleviate things a bit, but the CPU still needs to be involved on some level when loading data into GPU memory. If they could allow the latter to read directly from disk and completely bypass the CPU, it would help a lot with memory consumption in games. As for the kitchen sink vs. a language in each arena, I sort of disagree. Sure, it would be nice to have a universal FP approach, but the truth is adding FP-style syntax to do data manipulation in an otherwise object oriented language is not a mistake. It can allow you to manipulate an object as if it were a basic data type using said FP syntax. Why would you not want that for the areas where this syntax is superior? The alternative of jumping into a completely different programming paradigm and converting the data structures you are working on to match its expectations is a waste of resources.
As for efficiency the compiler is handling all optimization either way and I'm sure it in some cases can be beneficial for the compiler to see objects be handled in this manner in terms of the output it produces as modern compilers in general are better at translating code to assembly than nearly all programmers out there.
@@aBoogivogi Interesting perspective, but I simply can't agree with the majority of it. The limits of processor technology were being hit two years ago when I wrote this and the situation is much the same, if not worse, today. There's simply too much data that needs to be processed for us to be lazy and just throw more hardware at the problem when there is no more hardware to throw. We need to come back around to writing better software. As for the functional paradigm, I don't see a single instance of it being the better solution than other paradigms, which include more than just FP and OOP. One thing that I think far too many people that are proponents of FP seem to forget is that what makes computing interesting is that state changes, and by far the best way of modeling that change in state is literally every other paradigm that's not FP. Avoiding the FP paradigm is both memory and processor efficient as well as easier for most humans to understand. The one thing I completely agree with is that compilers are far better than most programmers at optimizing code, and I'm not even restricting that to modern compilers, because 20 years ago they were better than most programmers of today.
Pure functional programming is awkward for managing ongoing, persistent functionality, such as running an app, or managing an interactive graphics canvas. The Model-View-Controller architecture (the dominant pattern) requires declarative commands to generate the View. That said, in a well-crafted app, much or most of the Controller logic is implemented in pure functional programming. I think it's a bit narrowminded to put the popular languages outside the "functional programming" paradigm, when these languages all support that style of coding wherever useful.
I could be misinterpreting this comment; if so, then sorry. But my understanding is that you're arguing that functional programming can exist in the more modern languages and that MVC is similar to functional programming? If so, I'd have to disagree personally. There might be slight overlap in these methods, but we're talking about two very different places in programming. While you could in theory program in a functional programming style in the more modern languages, in virtually any case I've seen where a functional programmer picks up a language like Java or other OOP languages, it doesn't end well. Yes, I agree that functional programming has its uses, but I wouldn't necessarily argue that a controller method in MVC would be, in any way, the same as FP; at the end of the day you're splitting up responsibilities and tasks into subtasks which are then divided over clearly defined classes / objects.
@@martijnp The controller part of an MVC architecture cannot strictly follow a FP paradigm because it straddles an ongoing process (the User's intentions --> the Model). The Model is by definition stateful, and by extension, the User interacts through the Controller to change the Model's state in a desired direction. FP is only relevant to stateless calculations, which play an important SUPPORT role in giving MVCs their complexity. In computation where no state has to be remembered, FP is the most elegant paradigm. We're seeing more and more "plug and play" cloud-based services that operate as FP nodes... I used one recently to solve a 4th-order polynomial equation. The "web component" that does just the number-crunching is an edifice of FP. However, in order for a human to utilize it, there has to be a browser UI (input devices and graphics output) that has to be able to remember things from minute to minute, i.e., UIs are by definition stateful.
Not sure about other areas, but in frontend web-development Flux is becoming a more favorable pattern than a good-old MVC, and it is totally based on FP principles: no side effects, no mutations. It is a more scalable approach as it ensures a unidirectional flow of state and separates actions from state selectors (which btw resembles a highly praised CQRS approach). So I don't know why everyone's saying that FP isn't a norm - in frontend apps it kind of is, although of course it's mixed with OO-style in certain aspects where convenient.
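The unidirectional Flux flow described above can be sketched in a few lines. This is illustrative Python rather than a real frontend framework (Flux/Redux stores live in JavaScript); the `State` and `reducer` names are just conventions borrowed from Redux:

```python
from dataclasses import dataclass, replace

# A Flux/Redux-style store in miniature: state is immutable, and the
# only way to change it is to dispatch an action through a pure reducer.

@dataclass(frozen=True)
class State:
    count: int = 0

def reducer(state: State, action: str) -> State:
    # Pure function: same (state, action) in, same new State out.
    if action == "increment":
        return replace(state, count=state.count + 1)
    if action == "reset":
        return State()
    return state  # unknown actions leave state untouched

# Unidirectional flow: actions -> reducer -> new state.
state = State()
for action in ["increment", "increment", "reset", "increment"]:
    state = reducer(state, action)
```

Because the reducer never mutates anything, every past state can be kept around cheaply, which is what enables features like time-travel debugging in Flux-style tooling.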
Take a look at how the Phoenix web framework or LiveView (a stateful websocket connection between client and server) handles things with the MVC architecture in purely functional and immutable ways. The "conn" connection gets passed around and things get added/removed (again, as a new immutable value). Elixir's (or Erlang's) way of working with this makes it perfect for millions of web sockets :)
@@andreiclinciudev Unless we're working with 2 different definitions of "function" (send input x, receive immediate result f(x) where there is no remembered state), adding and removing are not functions, they change a memory state.
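The distinction in this thread (immediate f(x) versus remembered state) is easy to demonstrate. A minimal sketch, with hypothetical function names:

```python
# A pure function computes f(x) from x alone; a stateful procedure
# also reads or writes state that persists between calls.

def square(x):            # pure: no remembered state
    return x * x

_calls = 0                # hidden state shared between calls
def counted_square(x):    # impure: result depends on call history
    global _calls
    _calls += 1
    return (x * x, _calls)

assert square(3) == square(3) == 9             # always the same answer
assert counted_square(3) != counted_square(3)  # differs call to call
```

Under this definition, an operation that "adds" or "removes" something is only function-like if it returns a new value and leaves its input untouched, which is exactly what the Elixir/Phoenix style above does.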
"Performance is a secondary concern..." That is arguably true today, but that was very much not the case in the early 80s when I was first learning programming. Performance is one of the main reasons why C was so popular at that point; you could write in a high-level language but get performance that was very close to assembler (referring to PC programming here). Regarding OOP, when it was first coming out, many programmers believed that it would provide a better way to program. This was aided by the emergence of GUIs and the seemingly-natural fit of UI elements to objects in the programming language. OOP also provides a framework for reasoning about program structure. This way of thinking about the program is significantly different between OOP and functional. I believe that difference is one of the main reasons for the slow adoption of FP. It's a significant effort to change the way you think about programming from OOP to FP.
Bending? The liberties taken with history seem to go beyond bending, and may be better described as fucking. For a video about functional programming, it's awfully dysfunctional. And fp in assembly? Which machine instruction does not change cpu or memory state? Halt? Good luck writing fp in assembly then...
@@lhpl I suspect that most FP languages are written in C or C++, at least until they become powerful enough to be written in themselves, but at that point there is a good chance that they are no longer pure.
That graph clearly shows where the popularity of Python came from. It was at least partially due to Perl (and some other languages) programmers switching over, and adopting Python. It's a fairly simple transition. Perl still does what it was designed to do extremely well, but Python adds a set of very useful and convenient features. Especially for the kinds of people who were using Perl for things like bioinformatics.
And Ruby was better, regardless of Rails. But Python won. The reasons for Python's conquest deserve a whole lecture of their own. Much to learn there about endorsement, by whom and why, etc.
@@wereNeverToBeSeenAgain - Had you started with QB or VB, your path may have been very different, though Python would have been a natural progression of that path
I code functional when the task wants it. I code oop when the task wants it. So I code in C/C++ and both approaches work. A little function here, and class or two there - inheritance if it makes sense ~ a tweak of polymorphism, and then back to a hang-it-all-out in the open global function. It's my party and I'll decorate it how I like :)
@@palpytine sure. If all I do is return void. But IMHO modification of data through address reference can still be considered functional. Eh, semantics. Sloppy code that works is for single-effort goals and lazy programmers too. I'm not suggesting this is a team practice approach. A like-minded approach requires tight code for predictability and reliable expectations. An extension of Scotty's "right tool for the job" on the Enterprise. Unless chewing gum fixes the warp core at the last moment. Klingon bird of prey notwithstanding, I'll take on Romulans any day of the week. Cheers.
@@AtomkeySinclair What you describe is the very opposite of functional programming - which is all about immutable data and pure referentially transparent functions that always return the same output for the same input. If you're making changes via global pointers and returning void then you're doing anti-functional programming.
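To make the disagreement concrete, here is a minimal sketch (illustrative function names) of mutation through a reference versus the functional alternative of returning a new value:

```python
# Python lists play the role of data reached through a pointer.

def append_in_place(xs, x):   # imperative: caller's data changes
    xs.append(x)              # side effect; returns None

def appended(xs, x):          # functional: input untouched, new value out
    return xs + [x]

a = [1, 2]
append_in_place(a, 3)
assert a == [1, 2, 3]         # the original was mutated

b = [1, 2]
c = appended(b, 3)
assert b == [1, 2] and c == [1, 2, 3]  # the original is preserved
```

Only the second style is referentially transparent: `appended([1, 2], 3)` can be replaced by its result anywhere without changing the program's behavior.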
Interesting talk, but as others have mentioned... 1. UIs - FP languages by their nature return values and thus are not naturally suited to UI refreshes and updates. This applies to both thick and web GUIs. We can build abstractions over the top... but eventually we need to update UI elements, and therefore side effects. 2. Performance - We programmed in Miranda and Haskell at uni... performance was woeful. It's only because we've gotten a gazillion cores at high clock speeds that we have the option to parallelize some (not all) operations. 3. OO - It's not just inheritance & modularity that are 1st-class constructs. We have types, interfaces, and understandable polymorphism. 4. Debugging & Rapid Development - VS and other IDEs where we have auto-complete, can step through code and inspect variables is a major factor for adoption.
1.) Haskell allows you to create embedded domain-specific languages like Lisp, but with Monads this lets them be statically well-typed. Creating UIs and games requires you to use Functional Reactive Programming eDSLs like Reactive-Banana, which is sort of what the language Elm is based on. 2.) FP languages (OCaml & Haskell & Steel Bank Common Lisp) are actually really fast, faster than Ruby, PHP, Python, Perl and JavaScript, which are all popular. 3.) You have modularity and polymorphism in functional languages too. 4.) Haskell (and other languages) have a more powerful form of auto-completion called holes/Hoogle which lets you autocomplete any piece of code. Functional languages all have good REPLs, which can be integrated with the debugger, and you can see the intermediate values of everything if you write your code in a functional way.
1. Check out FRP streams, for example Scala's fs2. Side effects are unavoidable in any meaningful program (the only program with no side effects is immediately returning) so abstracting around them keeps referential transparency. 2. Check out what GRIN is doing 3. The reason I program in Scala is the power I have being able to use OOP principles _and_ FP constructs like typeclasses 4. This I understand and many people work on solving this problem. Unfortunately the concept of errors in FP is kind of weird (exceptions vs option vs either vs try type)
@@aoeu256 To reply to the message you sent me about garbage collectors, persistent data structures, etc.: first of all, you completely misunderstood my remark about C++, and I never talked about garbage collectors. Garbage collectors are fantastic and they are needed for many high-level languages. However, the argument that "C++ classes didn't make the language popular" is garbage, SINCE classes were badly implemented. Do your own research about multiple inheritance in C++ and the history of the "virtual" keyword before calling me ignorant. Furthermore, persistent data structures have been around since the 80s (if not before), linear logic too (1989, first language implementing it), and way before that in the database realm (70s) and object-oriented ownership systems (80s-90s). Lenses are simply getters for Haskell, since Haskell poorly manages complex type definitions. Monads are only used by purely functional languages since THEY NEED THEM to do side effects, and object-oriented already had that way before Haskell with the "nullable type". Finally, the notion of a monad is an old mathematical construct, SO IT IS NOT NEW AT ALL. I don't know why you talk about FPGAs, garbage collectors and reactive programming (which is essentially events, a.k.a. old); this is irrelevant to the conversation and the points I made. My take on this is that object-oriented has a purpose in software development and cannot easily be replaced. Functional programming is great, but it has pros/cons like OOP. Go read some research papers and you will see how old your brand-"new" stuff is.
@@aoeu256 Moreover, Haskell doesn't have polymorphism as CLOS or Eiffel have. Haskell has typeclasses, which are only used for STATIC OVERLOADING; they're really limited, since you cannot redefine a typeclass function inside another typeclass that "inherits" it, because it is not an "is a" relationship but a "has a" relationship.
@@Vsioul You can do all the dynamic stuff in a still type-safe and proven-at-compile-time way using existential types. But if you are used to Haskell's type system, you need existential types only in rare cases, compared to when you are used to thinking in an object-oriented type system.
What every speaker seems to miss when talking about C is embedded systems. Every small electronic device that isn't running an OS is programmed in either assembly or C, be it a toaster, a sensor in a car, or a washing machine.
I long for the day when science is used to promote programming languages instead of cult-like biased propaganda. I am surprised there are no psychology studies into how humans program. It is always opinionated engineers with no cross-disciplinary work on the human brain who assert with confidence what is best.
I mean, good luck proving that a language is scientifically better than another one. It all comes down to personal preference and bias. > there are no psychology studies into how humans program There definitely are.
One of my favorites is a study comparing reading speed of identifiers using underlines (or presumably dashes) vs mixed case. Mob rule trumps science, alas. www.cs.kent.edu/~jmaletic/papers/ICPC2010-CamelCaseUnderScoreClouds.pdf
Instead of Killer Apps you should rather talk about Killer FEATURES. Java had, and still has, plenty of them:
- Android - the chosen language there; the most recent killer feature
- ability to run on multiple HW and OS - not a big winner for desktops, due to the MS monopoly, but quite a different situation on the server side
- good IDEs - NetBeans, Eclipse, JDeveloper, IDEA - and tooling from the beginning
- a decent basic run-time library and loads of open-source libraries, well organized thanks to Maven
- strong support for concurrent programming, with multi-platform guarantees, enabling multiple styles - from primitives to Actor-style frameworks
- external monitoring via JMX - very important in enterprise, multi-node systems
- Applets - there were times when they were the only way to do certain things; JS was not powerful enough and Flash came later and only for certain areas (media, games..)
- VM tuning, very high VM configurability
- GC - not the first, but I would say the first successful one
- good build tools like Ant, Maven and now Gradle
- many "firsts" in their own areas - JIT compilation, Reflection, fast compilation, runtime bytecode manipulation, good support for AOP, etc., etc.
The popularity section of this talk ignores "ecosystem". Python, C#, and Java are popular because if you need a library for something, it probably already exists.
I would argue that one of, if not the, biggest factors in OO's success is its associativity with the real world. The vast majority of people don't naturally think in a functional way. We see the world as a collection of things, which have various attributes and actions they can perform. Our minds are object oriented. Designing a language to coincide with our natural thought processes made the world of programming far more accessible to a much larger population. OO, in a way, is a large reason why programming itself has become so widespread and accessible, so it makes sense to me that the top languages would be OO-centric.
The world we see has things(data) and we can do something to that thing(functions). To couple them or not is the difference between OO and other programming styles, like FP or procedural.
OOP's objects aren't that similar to real-world objects though... OOP is very hierarchical, while the world is "relational", although JavaScript's prototypes are pretty good at simulating the world. I'd say that Prolog (miniKanren/Screamer in Lisp) is closest to English in that you can take an English sentence, rearrange it, and dump it in code.
Finally the comment that I wanted to read! This is the exact reason why OOP exists: because in real life, everything is an object, has properties and methods, and inherits from other objects.
@@DorganDash I completely agree. The difference is not in coupling them, it is which comes first, conceptually: object or action. Thinking "I have a thing, what is it doing?" is, for most people, much easier to grasp than "something is being done, what thing is it being done to?" Don't get me wrong, I am not arguing that OO is better, just that it is more accessible. To be honest, I loathe pure OO. Pure functional is certainly better in many ways, and can represent certain real-world concepts that OO can't touch. The best analog for the real world is a hybrid that allows for simulation of object-first and/or action-first concepts, depending on what is needed.
@@williamross6477 I agree, but my point is that both are valid and easy ways to model real-world concepts. The difference is in the execution, and the tight coupling of data and behavior in OOP is one of the key elements that differentiates it from other styles in general (like procedural), not only from FP. I think imperative vs functional is a better comparison than OOP vs functional. Which one is better is a more complicated topic; they solve different problems and are suited for different needs. Telling a box to store things (the OOP way) and applying a fly action to a bird (the other way) are both weird ways to model the world we live in.
The idea that we are moving from object-oriented (OO) to pure functional programming (FP) is not convincing at all. Some people point out that it is possible to do polymorphism with functional programming, but it is not as convenient. That's an obvious feature that benefits a lot from being simple to express. And in fact, modularity kind of points towards objects, because we are talking of a surface, and if you have a surface, then you enclose something, and so you get an object. So I rather believe that the mix of OO and FP will remain. If something replaces both, that won't be either OO nor FP.
yeah OOP certainly is not going anywhere. But arguably, FP is on the rise again. Btw can you elaborate on your point that polymorphism is not as convenient in FP?
I expect that in the next few years, a coherent paradigm incorporating both OO and FP will be identified, with its own name and manifesto, and the apparent tension between the two will be put to rest.
Exactly. I think the future will be languages that are hybrid. It will never go fully functional. Functional programming has its advantages, but also very, very big disadvantages.
I think the best approach is to have a little bit of both FP and OOP, though with concepts now in C++ I think FP will become a little more common. Personally, I consider free functions general-purpose, for general tasks, while member functions are specific to the object. Take for example a function "reserve": if it's a member of a container, it's pretty clear what it does; if it's a free function, then its purpose may change based on the parameter type, which muddies its meaning to me, or you make its name super long or abbreviated to hell. I think the fact that the IDE helps you browse what the object can do, e.g. what member functions it has, helps with the object's usage.
10:00 Minor nitpick: before making .NET and C#, Microsoft tried to build tools around a version of Java they called J++, but ended up getting sued. That's why they ended up making C#.
For the last 70 years computing has been dominated by the Von Neumann architecture. It's to be expected that programming languages would fit this model, as 99% of them do.
I started programming with APL on an IBM 370 in 1974. In graduate school, in 1983-1987, I programmed automated theorem provers in LISP. Working in the same area in my first job after graduate school, I programmed in Symbolics Common Loops, and then ML, the INRIA variant.
Object oriented programming and functional programming are no contradiction. There are object oriented functional programming languages, e.g. OCaml and O'Haskell. OOP is a type system feature and FP is a control flow feature.
Even more so when there's F#, I guess? (It's just OCaml but with the ability to use C# libraries.) But personally I don't quite buy this kind of language, because languages like C# have developed to a state where they can do FP quite well, and thus I'd consider the use of F# in most cases more about eliminating brackets and dots than anything else, because real-life situations aren't really that "mathy".
C got more popular than Pascal in the early 90s, and it also edged out other competing languages like Fortran and Basic and Ada at about this critical time too. Other existing languages from the time had large speed disadvantages, so they couldn't become the dominant desktop application language in the way C did.
because C is really, really, really good if you know what you're doing. But some people were unhappy with the repetition, so they hopped onto object orientation.
@@eusebiusthunked5259 Multiple different reasons: - Forth's stack based system is impressive, but most programmers prefer infix-based maths (a = b + c etc). - Mumps is oriented around databases and doesn't seem to have branched out towards general purpose computing (which would have required adding static typing). - Lisp had a similar disadvantage to Java (with the JVM) in requiring an engine and garbage collection, and having a hard time dealing with mixed Assembly language and low level hardware such as interrupts and timers. You could do that stuff with Pascal and C, and on DOS doing this was necessary to do a lot of stuff.
Probably the main thing holding back FP is the fact that MANY tasks are just easier to think about in an imperative way, and I think the ultimate admission of this fact is Lisp (my exposure is through emacs eLisp) which contains such functions as "set" and "progn". "set" violates the FP immutability paradigm, as it changes the value of a variable. "progn" is effectively a way to force imperative execution of sequential function calls. Emacs eLisp functions are littered with side effects because it is used to drive an editor (and everything else in the Emacs operating system). FP is useful sure, but outside of mathematically complex computation, the paradigm very much suffers the square peg -> round hole problem.
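The `set`/`progn` point can be illustrated by writing the same computation both ways. A minimal Python sketch (function names are just for illustration):

```python
from functools import reduce

# Elisp's `set` and `progn` in Python clothing: the imperative version
# reassigns a variable step by step in sequence, while the functional
# version expresses the same computation as a fold with no visible
# mutation or statement sequencing.

def total_imperative(xs):
    acc = 0              # like (set 'acc 0), then updated in a progn-like sequence
    for x in xs:
        acc = acc + x    # rebinding, i.e. mutation of the binding
    return acc

def total_functional(xs):
    # a single expression: no variable is ever rebound in our code
    return reduce(lambda acc, x: acc + x, xs, 0)

assert total_imperative([1, 2, 3]) == total_functional([1, 2, 3]) == 6
```

Both are fine for a sum; the comment's point is that for tasks like driving an editor, the step-by-step version is often the one that is easier to think about.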
This guy is a really interesting lecturer. He took one of the most dry and boring topics and made it very engaging. No one is inherently interested in the history of programming languages, but the way he laid out the story and the happenstances of chance and pure collateral benefits/damages made the whole talk way more enthralling than the topic had any right to be.
Excellent talk. One small addition: Microsoft did build a fantastic IDE for Java, called Visual J++, but it was their own take on Java and soon fell out of favor, though it lives on in some odd nooks and crannies as Visual J#.
You can write completely functional JavaScript, untyped, à la Lisp. You can also write procedural and "object oriented" (JavaScript's object orientation deviates quite a lot from other examples); it all depends on the style you decide on.
Haskell, which I would consider the pure functional programming language, is just now gaining a lot of popularity (and tool improvement) because of Cardano (ADA) - at least I restarted learning Haskell. The functional paradigm might be a lot harder to learn in the beginning (in comparison to OO, you need to think harder), but its benefits have been adopted by all new programming languages!
It is not harder. You were just introduced to it later than the imperative ones. The other way around would be painful too. Even jumping from procedural to OO was hard at the time.
@@JanilGarciaJr If you already knew one fully functional programming language (e.g. Haskell) very well, which modern imperative programming language with garbage collection do you think would be hard to learn? It appears to me that writing FP-style code in imperative programming languages wouldn't be that hard, because FP style is mostly about not using mutation and side effects; you're not required to use those in imperative languages either, except that some library code you need may require such an interface. Sure, jumping from Haskell to C would be hard, because C doesn't support anything where you don't manually handle memory allocation and pointers yourself. For me, the hard part is performance: both at runtime (CPU and RAM usage) and in developer productivity (for example, with pure FP you may not be able to use some widely known algorithm that requires mutation, and you have to invent an alternative algorithm).
@@JanilGarciaJr I guess it depends on one's path, but at least in university, learning functional was only about one year after learning imperative, but what we learned was Miranda, which I still find a hell of a lot simpler to read and write than Haskell.
As someone who was programming through this period, the amount of hype for OO cannot be overstated. New languages adopted OO features, there were attempts to make OO databases, OO operating systems, and more. It really was a juggernaut. Even LISP was adopting OO principles (the Common Lisp Object System). I can see that looking back without that direct experience, it may seem different (especially as the primary sources are not online, as the Web hadn't yet taken off).
As a system developer, I ask myself this question all the time. After all, most of the code I work with is either dysfunctional or non-functional.
That shit's funny
ROFL, underrated comment!
@Marcus - is it really tho
@Marcus - we are in the matrix you dummy
@Marcus - The universe is not object oriented, our language is. It makes it easy to model real world problems in the programming language but there is nothing inherently natural about it. As he points out in the lecture it's often just verb first or noun first notation, which differs from language to language.
Why Isn't Functional Programming the Norm? Because when someone wants to talk about FP they spend all their time speaking about OOP.
That's deep tho.
Similar to the way a program does exactly what your code tells it to do, the video does exactly what the title says it will do. The content of the video supports the title which itself can be re-worded as "why FP is *not* the norm" or conversely, "why OOP *is* the norm." What is it about OOP languages that make them popular? And on the other hand, what is it about FP lang's that make them unpopular? That's what the video is about.
I think they're nested too deeply in their OOP hierarchies to be talking about FP
@@Kn0wOneNos3 Great way of putting it.
There is a good reason why he never mentioned the history before ALGOL.
Imperative and later OOP is a solution to the mistakes of FP in the context of business solutions.
A simple answer: Because most programs (games, editors, browsers, etc.) are basically *state machines* (not simple "computations"). They therefore fit the old imperative model better, and without all the complexities of handling state in functional languages. Pure functions surely have their place, but as local structural elements in that mainly imperative code, not as a dogma. (Regardless of whether you use any OO or not.)
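The state-machine framing above can be sketched concretely. Interestingly, even a state machine can use a pure transition function, which is roughly how FP languages model this (illustrative Python with hypothetical event names, not a real editor):

```python
# A toy editor as an explicit state machine: the program's whole job is
# to evolve state in response to events. The transition itself can still
# be a pure function that returns a new state.

def step(state, event):
    kind, payload = event
    if kind == "type":
        return {**state, "text": state["text"] + payload}
    if kind == "backspace":
        return {**state, "text": state["text"][:-1]}
    return state  # unknown events leave the state unchanged

state = {"text": ""}
for event in [("type", "hi"), ("type", "!"), ("backspace", None)]:
    state = step(state, event)

assert state["text"] == "hi"
```

The disagreement in this thread is largely about where the single mutable cell (`state = step(...)`) should live, not about whether state exists.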
Exactly. Imperative languages give explicit control over state, whereas functional languages give only implicit control. Since a program is far more than just a mathematical formula, state is king.
@@CrimsonTide001 There's no real benefit to having explicit rather than implicit control of state for any of these applications, all it gives you is more rope to hang yourself in terms of bugs, especially when you're dealing with shared state.
The real reason why imperative programming is the norm is that historically compilers haven't been readily available, affordable and sophisticated enough to give you the functional programming and still get good enough performance. But that is no longer the case.
@@salvatoreshiggerino6810 No, it's because programs are fundamentally about manipulating state, and imperative gives you better tools to do just that. And in no way would I ever want to deal with shared state in a functional language in any real application. The mess of pseudo-functional but not really pure functional data structures I'd have to use would be mind-numbing.
FP has only one thing going for it, job security.
@@CrimsonTide001 Programs are fundamentally about transforming an input to an output, the manipulation of state is just an irrelevant implementation detail.
Imperative programming has only one thing going for it, a legacy workforce.
@@salvatoreshiggerino6810 Not at all. Simulations (games, movies, scientific research), desktop apps (word processors, Excel, email, internet, CAD/CAM, drawing/3D, any media authoring tools), media applications, etc... the vast majority of programs written are not about data transformation, but rather about data manipulation over time. The 'over time' part is of utmost importance. It is what separates computer science from mathematics.
Any program that accepts input from a human is inherently state-based, whether that's playing a game, writing a document, or surfing the web. As the input comes in, the state of the program has to change to represent the new view of the data. Only the simplest of programs, ones with zero user interaction, map well to FP. The way FP gets around this is by using state-based data structures, which is silly, because they like to pretend there is no state, then admit that 'yeah, I guess there is', then mess around with suboptimal and stupidly unwieldy constructs to try to shoehorn state into a system that wants to pretend it doesn't exist /facepalm.
FP programmers are just in denial. It's not hard to grasp the concepts, despite how hard they go out of their way to obfuscate the simplest of tasks (seriously, look up any explanation/definition of monads, one of the simplest of constructs, and yet it's impossible for it not to be described/explained in the most ridiculous and hard-to-grok terms imaginable). They're just intentionally making the whole thing difficult, then running around claiming they're 'true programmers' for doing everything the hard way. Sure, running a marathon with one arm tied behind your back is possible, but unnecessarily difficult.
State management is the single most important aspect of programs, which is why imperative has always won over functional. It has nothing to do with tools, or legacy workforce, or difficulty of understanding, or any of the other nonsense. It's because FP sucks.
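For what it's worth, the claim upthread that a monad is one of the simplest constructs can be checked directly. Here is a minimal Maybe-style `bind` in Python (illustrative only; the parser is deliberately naive and not a full monad implementation):

```python
# A Maybe monad in a few lines: `bind` chains computations that may
# fail, short-circuiting on None. No category theory required.

def bind(value, f):
    """Apply f to value unless value is None (the failure case)."""
    return None if value is None else f(value)

def parse_int(s):
    # naive parser for illustration: None signals failure
    return int(s) if s.lstrip("-").isdigit() else None

def reciprocal(n):
    return None if n == 0 else 1 / n

# Chained pipeline: any failure propagates automatically.
assert bind(bind("4", parse_int), reciprocal) == 0.25
assert bind(bind("oops", parse_int), reciprocal) is None
assert bind(bind("0", parse_int), reciprocal) is None
```

The whole trick is that `bind` centralizes the "did the previous step fail?" check so that callers never have to write it; the rest of the monad machinery in languages like Haskell is types and syntax around this idea.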
I wish he spent even a fraction of time on explaining how FP is good or how it helps and is worth using
He can't do something that can't be done. He is not Jesus.
@@lepidoptera9337 I mean that implies it's bad, which it's not.
@gsdcbill Don't worry about it, I misread your comment.
You missed his point. OOP languages went through the roof not because they are OOP. You can see the same thing with FP: it will go up not because it is FP.
@@archmad Precisely, and Python proves that a language doesn't even have to be decent to be hugely popular. It could have been a much, much better programming language if the developers had actually known what they were doing when they designed it. Semantic whitespace is one of the dumbest ideas ever, not to mention that the versions of Python I've looked at require hackery if you want to print text without a trailing line feed and/or carriage return as appropriate to the OS. Every other language I've messed with either made you add a line feed at the end, or had a way of printing both partial and complete lines. The whole business of having to do the entire line at once is rather dumb and a bit of a pain sometimes.
Because there are lots of arguments.
That's a pun, by the way.
Is it? A pun exploits different meanings of words, which you've done brilliantly, but it doesn't seem to turn toward humor in this case. I'm not sure. I've argued myself from one point of view to the other and back again several times. It's a pun.
@@PatrickPoet well, the problem of a huge number of arguments can be a turn off.
That was the main point.
By the way I come across the problem in Prolog too.
This made my day
Implicits, records, and "lifted functions" can hide the arguments though. There are three types of lifted functions/functors : Applicatives, MonoidApplicatives (Monads), and Arrows so yeah it can be hard to pick... Oh yeah there are Comonads, ArrowChoice, MonadFix, ArrowLoop, as well...
@@PatrickPoet what a rollercoaster ride this comment was.
Python also had a killer app in recent years: ML, and AI in general. It worked out for them to jump on this train early and become the de facto standard language for this usage.
@urbaniv That's just the interface to users; under the hood all these Python deep learning packages are much lower-level languages.
@@Falangaz exactly right... Great killer app ;-)
Don't forget Blender.
Python was still very popular way before ML got hyped though. I don't think "ML and AI in general" is to Python what, say, iOS is to Swift.
@@NothingMoreThanMyAss NumPy and SciPy are also great killer apps for Python which made it popular as a replacement for Matlab in scientific computing. These were part of the what made ML in python popular.
00:00:27 Richard Feldman: Why are things the way they are? It's complicated.
00:00:53 Richard Feldman: Outline
00:00:59 Richard Feldman: Part 1. Language
00:01:01 Richard Feldman: What languages are norm today. Top 10. No functional programming language.
00:01:42 Richard Feldman: How did they get popular.
00:02:05 Richard Feldman: 1. Killer apps. VisiCalc to Apple II / Rails to Ruby / WordPress & Drupal to PHP ...
00:06:21 Richard Feldman: 2. Platform Exclusivity. ObjC/Swift to iPhone sales / JS to web & internet users / C# to Windows & VS
00:10:21 Richard Feldman: 3. Quick Upgrade. CoffeeScript & TypeScript to JS / Kotlin to Java
00:13:27 Richard Feldman: 4. Epic Marketing. $500M Java marketing campaign in 2003
00:16:15 Richard Feldman: 5. Slow and Steady. Python story
00:17:53 Richard Feldman: Other Popularity Factors. Syntax/JobMarket/Community
00:18:46 Richard Feldman: Why are the most popular languages OO except C.
00:19:15 Richard Feldman: Part 2. Paradigm
00:19:39 Richard Feldman: Uniquely OO features. Encapsulation = Modularity
00:35:35 Richard Feldman: They are OO because modularity is a good idea and they originally got it from OO by chance.
00:35:47 Richard Feldman: Part 3. Style
00:35:50 Richard Feldman: FP Style : avoid mutation and side effects
00:36:31 Richard Feldman: Why isn't FP style the norm? No sufficiently large 'killer apps' / No exclusivity on large platforms / Can't be a quick upgrade if substantially different / No epic marketing budgets / Slow & steady growth takes decades
00:41:02 Richard Feldman: OO languages are the norm not because of uniquely OO features.
00:41:32 Richard Feldman: FP style just need time to become the norm.
00:41:50 Richard Feldman: Questions.
00:42:03 Question 1: How do you see Lisp fitting in: is it a FP language or not?
00:42:21 Answer: Classifying a language as functional or not is kind of arbitrary, definitely very fuzzy, and ultimately not as important as talking about the style and languages' support for that style, like being able to avoid mutation and side effects and still have a good experience.
00:44:03 Question 2: How does performance factor into this?
00:44:26 Answer: Performance is not the key factor in the popularity of a language.
Thank you so much
Godly
You put a lot of work into that timestamp index! That will be helpful to many people.
I can't help but read his surname as Feynman every time I look at his name
everyone is only interested in the timestamps. Nobody sees the irony!
You can use OO and FP at different granularity. Use OO modeling to find the right places in your application to put boundaries. Use FP techniques within those boundaries.
Well said. This is what I have always thought. I hate that 'geek' debate of FP vs OOP; it is stupid. It is like saying you should do a lot of addition instead of a little multiplication.
@@Sladeofdark that’s a good analogy
What you are talking about makes no sense. The OOP idea of using methods that produce side effects to change an object's internal state is completely at odds with pure functions in FP. I can only guess that what you mean by OO modeling is data structures, which are already present in FP languages.
@@D4no00 that doesn't mean you're forced to mutate state with every method you write
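That split (OO boundaries, FP techniques inside them) can be sketched in Python; the `Invoice` class and its fields are made-up names for illustration, not anything from the talk:

```python
from dataclasses import dataclass, replace

# Hypothetical sketch: the Invoice class is the OO boundary;
# its methods are pure functions (no mutation, no side effects).

@dataclass(frozen=True)
class Invoice:
    subtotal: float
    tax_rate: float

    def total(self) -> float:
        # Pure: derived entirely from the object's immutable fields.
        return round(self.subtotal * (1 + self.tax_rate), 2)

    def with_discount(self, fraction: float) -> "Invoice":
        # Pure: returns a new Invoice instead of mutating this one.
        return replace(self, subtotal=self.subtotal * (1 - fraction))

inv = Invoice(subtotal=100.0, tax_rate=0.2)
print(inv.with_discount(0.1).total())  # 108.0
print(inv.total())                     # 120.0 (original untouched)
```

The object gives you a place to put the boundary; the frozen dataclass makes the FP discipline inside it mechanical rather than a matter of willpower.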
You missed the main point of why the top 10 is the way it is: all the languages there are **hybrid**. They are not obsessed with stylistic purity but rather pragmatically added what users wanted.
OO was just a popular style at that time, so languages got some features supporting that style, some more, like Java and C++, some much less, like JS or Go.
With languages it is like with sales: you sell what users want, sometimes with the help of paid or coincidental marketing; you do it consistently over a long period, and voila, the language is popular. No magic there. I suspect that the relative failure of pure FP languages to proliferate is exactly because of their purity. Arguably the most successful FP language is Scala, a multi-paradigm language by its own definition. I would predict that FP languages' impact will remain only indirect, by inspiring the mainstream hybrid languages. Once those absorb enough FP features and you can program in them in FP style with ease, there will hardly be a rational reason to switch to a pure FP language for any project.
Good point! He didn't mention this by calling them hybrid, but I think he meant it when he talked about familiarity and how features were inherited by new languages from old popular ones.
This is so right on the spot. I learned LISP long before any object-oriented language. I will say this "pure function" mentality is quite self-defeating: state handling in pure functional paradigms is clunky and really unsuitable for a lot of situations. The purists are what's preventing functional from being used more.
"Practicality beats purity."
-The Zen of Python
True. After I discovered FP and learned a bit of Haskell, I was thinking the same thing. We need to be functionally imperative. Iterators, higher-order functions and algebraic datatypes are good, but impractical, ancient math theories are overkill and don't suit the times well.
Speaking as a Ruby/Rails developer, while I love the FP style support the language allows, I find sticking to a "pure" FP style would make my work a lot more difficult. There are times when I actually do want to have the capability to mutate state, and many, MANY times where "mutate object/collection in-place" saved me a TON of compute time and memory.
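The trade-off this comment describes (in Ruby terms) looks the same in Python; a small sketch of the pure-copy vs in-place versions of the same operation:

```python
# FP style vs imperative style for the same transformation.

def double_all_pure(xs):
    # Pure: allocates and returns a fresh list; the original is untouched.
    return [x * 2 for x in xs]

def double_all_in_place(xs):
    # Imperative: overwrites the existing list, no new allocation.
    for i in range(len(xs)):
        xs[i] *= 2
    return xs

data = [1, 2, 3]
fresh = double_all_pure(data)   # data is still [1, 2, 3]
double_all_in_place(data)       # data is now [2, 4, 6]
print(fresh, data)
```

For large collections inside a hot loop, the in-place version is what saves the memory and compute the comment mentions; the pure version is what keeps the rest of the code easy to reason about.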
The title should be “rough overview of the history of programming languages”
Done.
@@Clownacy Bro said done like he is the uploader of this video
@@wayne30047 Pure clownacy if you ask me.
I'm with you. This is a fun presentation but actually clickbait. But then I ask myself... what did I expect?
This talk reminds me of the dangers of Maslow's hammer, and why programmers should seriously consider having at least a basic understanding of a few programming paradigms and different programming languages, especially conflicting ones.
So "Why Isn't Functional Programming the Norm?"
Because having multiple and/or multi-tools at your disposal and knowing how and when to use each (like we have now with programming) is the real best way to do things in life in general.
That's one reason, but not the main reason. It will, however, take at least a beginners' class in computer science to understand that what you write into your program has little to nothing to do with what your CPU actually does. The compiler transformations between the high-level code and the executable have side effects that experienced programmers understand, fear and use. Only amateurs ignore them.
@@lepidoptera9337 You understand that you are basically saying "the main reason is because people don't understand their work environment, and thus, are not making good use of the tools at their disposal", right?
@@Essoje That's the reason for a lot of bad stuff in the world. Having said that, try to go to your boss and tell him "Boss, the thousand hours of refactoring that we just put into the code base are basically all just an exercise in creative comment writing. The compiler removes the objects anyway and makes spaghetti code.". While technically correct, it will certainly not get you that raise that you were hoping for. Your boss might even know all of that, but he has a boss who usually does not. And therein lies the problem.
I ran into a similar problem at work once. One of my companies was making test and measurement equipment and I asked why they are not making power supplies, since Hewlett-Packard at the time was making like 400% profit margins on theirs. The answer was "Nobody is going to buy brand X. If you are a manager on a factory floor and you replace HP power supplies with ours to save a couple hundred bucks and one of our products fails, then you need to look for a new job because you just caused thousands, if not millions of dollars of losses. If the HP supply fails then you can simply say "We always use those and they were always very reliable.", whether that's true or not.". Such is the world.
Having said that, it still pays to know what a compiler does with your code, even if you are the only one at work who does or who cares about the consequences.
@@lepidoptera9337 The main reason for refactoring code imo is to make the code less unmaintainable after several months/years of unrestrained scope creep. Everything starts off with a definition and suitable structure, and then it snowballs from there because the original definition/design was inadequate. Maintainability has nothing to do with what your compiler does.
@@k98killer That's what I said... it's code documentation. It's entirely for the benefit of the programmers. Neither the compiler, nor the CPU nor the users care the least bit.
Alright so first impressions, this guy did a FANTASTIC job preparing for this speech, because he literally had himself mentally prepared for a lackluster response right from the get-go and knew exactly how he wanted to play it out. Just literally kept rolling. I mean that's genuinely inspiring and makes me want to listen even further.
he was well prepared, but he bluntly avoided the elephant in the room: FP is hard; Python and Java are easy.
This is the main driving force behind most of the popular languages (add to this that Javascript is the language of the web)
@@unperrier5998 Go isn't on that list, although it's probably the simplest capable language.
@@unperrier5998 Yeah, while it was a well-made talk, I would title it "FP enthusiast struggles to cope with reality". He also makes a good case for why support for FP in OO languages is useful and the way to go. There are many partial problems that are best solved with FP. But in reality, most large software doesn't run for a determinate amount of time and then give a single result; it consists of interactive systems. And by definition, such systems have to have side effects; their entire purpose is to have side effects. So strict FP-only languages have no mainstream future because of this, IMO.
He makes a good case of why inheritance as is does not have a future and Rust for example limits implementation inheritance since it is not a good pattern.
@@unperrier5998This is basically what I'm always saying to myself when I hear people lament OOP. When I was going through the wringer and first learning to program, finding OOP helped me wrap my head around what was going on.
I don't think FP would be considered hard if history had gone differently, in the way he proposes. I've seen plenty of people learning programming for the first time and having difficulty wrapping their heads around what a 'class' is, myself included. Look at it another way: the reason we still use OO languages is that that's what most schools teach. What if they taught FP instead?
This is like one of those religious events where they talk about signs of the times and stuff
Only there's probably more consensual sex being had at the religious events, oddly enough.
It also includes taking near facts and twisting them to form a wrong conclusion.
With a coworker who is obsessed with FP and Clojure, it really feels that way. Everything not FP is evil to him. I just stopped arguing that maybe Clojure and FP are not the perfect solution for every problem, because nothing comes of it.
@@Teekeks facts
@@Teekeks Same experience: everything that's not FP is evil and trash and must be rewritten in an FP language; ironically, he almost never does the rewriting himself.
The reason OO became the norm is that if you want to program procedurally in an OO language, you can; the reverse is not the case. In OO, you don't have to new up an object to make things work. You can use a static class as if it's a namespace and go completely procedural. Under the hood, a static class is pretty much just a pointer to other pointers. If you wanted to store state and deal in objects in a procedural language, you'd be fighting the language.

There is a time for programming procedurally and a time for programming concretely. I prefer having the full toolset at my disposal. It is odd to me that functional programmers are so evangelical about functional being "better". It's like saying a screwdriver is better than a wrench: sometimes you need one, sometimes the other. If I want to build a fast HTTP handler, I'll code procedurally. If I want to build a large-scale app with layers on layers and a FIFO queue batch-handling hundreds of complex commands that might take 10 seconds to parse, I'd like to keep it organized in my head as much like the domain as possible, and write my code like English. And English tends to utilize nouns judiciously.
Why are you saying that a static class is just a pointer to other pointers? Not saying you're wrong, but it does seem you're going off the rails to make some point.
@@atlantic_love My point was that you can make a static class that contains functional methods, kept organized inside the static class, and whether you make a bunch of first-class functions or put them in a static class, they all end up pointing to some executable code in memory. A method is not a pointer at the language level, but under the hood there's a memory address for the method in the executable code that the class points to. A static class just puts those pointers in one accessible place.
@@ryanmcgowan3061 I suspect you're confusing procedural methods in a static class, with functional functions, which are quite a different animal.
The reverse is possible. OO tricks you into thinking functions and data live in the same memory area. They don't. The compiler will literally convert it into a procedural program.
In C you have handles, which serve literally the same purpose as fields in an object. In C++ your struct is literally the same size as a class with fields, give or take a few bytes for object header information.
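The "static class as a namespace" idea from this thread, sketched in Python (which has modules rather than C#-style static classes, so a class of `@staticmethod`s is the closest analogue; `MathUtils` is a made-up name):

```python
# A class used purely as a namespace: never instantiated, holds no state.
# Each method is a plain function reached through the class name,
# much like calling into a C# static class.

class MathUtils:
    @staticmethod
    def clamp(x, lo, hi):
        # Restrict x to the closed interval [lo, hi].
        return max(lo, min(hi, x))

    @staticmethod
    def lerp(a, b, t):
        # Linear interpolation between a and b at parameter t.
        return a + (b - a) * t

# Fully procedural call style: no object is ever "newed up".
print(MathUtils.clamp(15, 0, 10))     # 10
print(MathUtils.lerp(0.0, 8.0, 0.5))  # 4.0
```

In practice a plain Python module does this job even more directly; the class form just makes the comparison with static classes visible.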
🥱SO many "I"s in this ramble.
Who are you, again?😏
I did miss him looking at FP to get an answer. It was too much of a 'FP is perfect already, the others are to blame'-speech.
Yep, it was like asking "why am I not rich?" then focusing on how other people are rich.
Also, FP is the opposite of perfect. Its main issue is the community.
Try learning Haskell: after they throw a bunch of quasi-intellectual words at you that could easily be done without to explain the exact same thing, you will know why most people don't give a fuck.
I didn't set out to learn something just to be told that I need a PhD in math theory, to be lectured on why to avoid multi-paradigm languages, and to be told how great purity is by people unironically using unsafePerformIO.
Simon WoodburyForget If you believe French is pure, you'll be disappointed learning it; the reason it is so hard to learn is that it is full of inconsistencies and grammatical exceptions
@Lemo would you rather it be the ruble or yuan instead? People love shitting on world powers and geopolitics until they realize a power vacuum would just lead to another global power to take its place and thus becomes "subjected" to its influence instead. Kinda the whole reason why the US "keep interfering and influencing" don't ya think? If not the US, who'd you rather it be?
Well I mean it's true isn't it? Not to say that FP is perfect, but it has fewer problems than say OOP. The main reason people don't like it is because it's unfamiliar. And because FP language communities tend to be more arrogant and uninviting.
I haven't done any research on this but I think Python has a "killer app" type thing going for it too. It started to take off when Data Science / Machine Learning started to take off, and it has by far some of the best tools for the job (the killer apps, so to speak). Numpy, Scipy, Pandas, Pytorch, openCV, and Matplotlib.
Google is behind pushing python shit. No wonder.
Yeah, as a new programmer, I agree this drew me to Python. Another killer app so to speak was Jupyter notebooks, it was a very intuitive way to write, run, and test code.
@@AR0ACE Python is a piece of ***t. Don't use it.
Yeah, Python has several "killer apps": Django, Jupyter, the ML libraries (written in C but locked to Python because they mostly return Python data structures). The guy is wrong to say Python is popular because of its design, since there isn't much design to speak of; it evolved pretty haphazardly.
I would consider this ecosystem, not killer app.
He answers his own question at 43 seconds into the video: "There's not any one nice, tight, simple answer." Carpenters don't do everything with a hammer or a screwdriver. They have a toolbox, and they select the right tool for the job. The same goes for computer languages: people who become emotionally involved with a single programming methodology are doing themselves a disservice, and when they stand up and advocate that practice they're doing everyone a disservice.
There really is no valid argument to discount procedural / imperative programming. Computers are tools that follow instructions. Do this, do that, then do this other thing, etc. They're imperative / procedural by their very nature. I won't claim that style to be the nirvana approach either, but it's an essential item of the toolbox.
I guess human nature could account for a person who's really only mastered one tool to want to make that the end-all be-all "nirvana tool." It just doesn't wash, though.
Exactly this.
Most well-written programs today will not be afraid to use sum types, as they're great, and you will see iterators because they make your life easier, but they will also contain OOP where you need state, because using FP for that sounds like a nightmare.
Yes. For me, the language I use is mostly the language the system I support is written in. It is not an emotional choice. Heck, it is not even an intellectual choice.
If I start a new system, what language will I program it in? Right now, it is Java. Again, not emotional at all; Java just happens to have the most supporting libraries that I can use without writing them myself. I was tempted a few times to use Go or Rust or C for the really time-sensitive portions of one of my systems, but once I removed the bottlenecks, I did not feel like going through the hassle of having multiple languages in one system. I did not consider C++, although I used it for quite a few jobs in my career. I find it easier to control a C program than a "naive" C++ program. Some environments don't support C++ libraries because of memory constraints (yes, there are still devices and systems like that). And since most of them also don't have memory protection, a C program is actually easier to make reliable and robust than a C++ program. And I got sick of having to restart the device when the system crashed due to memory corruption. Just personal experience.
I used to use Perl for throw-away programming, as I consider it a better shell for scripting. I started using Perl when I had to remember too many programs to write a simple shell script. I don't use Perl as much for that purpose any more, as CPAN is getting more and more outdated, so I use Ruby instead for my build scripts, for example. For simple system scripts, Perl is still the best, as long as I don't need too many libraries.
I don't use Python if I have a choice. Now that is an emotional choice, because I got one indentation too many wrong back in the '90s. After that, well, I still used it when it was warranted or when it was the language used (see the theme?) for the system I had to support. Did I hate using it? Not really. I would avoid it, that is all.
Do I use functional languages? Well, I am still using (pure) Lisp, having written more than one Lisp interpreter for my own use. The extension language inside my systems is Lisp. I used SASL and Miranda back in the day. Loved them, but they were too expensive to get a copy of my own. Late '80s, I believe.
Heck, one of my first loves was SNOBOL4. I just loved the pattern matching stuff. That is the reason XSLT is such a nice language for transforming XML; it just works tons better than any alternative for that purpose.
Swift is interesting. I like many aspects of it. And some aspects I can't stand. But that is the same for most languages I have to deal with.
In short, languages are just tools. Yeah.
Yes.
@@RicardoGonzalez-bx6zd I don't think that explains the actual question. If the toolbox paradigm were correct, then there should be a functional language within the top ten somewhere. Maybe not the top three or five, but one should be somewhere near there. It's like asking why a literal toolbox doesn't have a wrench. "It's because I have all these other tools in my toolbox", that still doesn't explain why there's no wrench.
@@squirlmy If you view FP as a commonplace tool that's as universally applicable as a wrench, sure. But the thing is, most popular languages have just enough of the FP stuff in them these days that going whole-hog functional is rarely the best choice. So instead of a wrench, it's more like, say, a seven-axis CNC mill.
Most of the time I'll be using hammers and saws and wrenches and routers and drills and such, thank you very much. The mill sits in the corner. But occasionally I REALLY need what that mill can do and doing it without the mill, while technically possible, is too difficult and the mill gets fired up.
(Whole-hog) FP is like the mill, not like a wrench. (A language with some FP capabilities is the wrench.)
Every once in a great while, I watch a video explaining what a monad is. Now having worked in the field 12 years, I can confidently say, I still have no effing clue what a monad is.
25 years here, and I can state with a great deal of certainty that a monad is something you don't need if you simply do OO.
We could also do away with monads if we used real English words with easy-to-comprehend descriptions that don't take post-PhD math studies to start grasping.
We are programmers, not mathematicians. Imagine if welders had to have a PhD in physics and chemistry to fucking weld two pieces of metal...
@@purpleice2343 Um... Yes, we ARE mathematicians. Programming is just a branch of mathematics.
From reading Wikipedia, my dumb brain wants to distill "monad" to this:
"A monad wraps real world side effecty code in a function that returns a value so the rest of your pristine functional code can pretend the real side effecty world does not exist."
Is that approximately right or did I miss something?
@@derekgeorgeandrews It's pretty accurate, but you have to admit that using words like "monad" makes you look cool to your fellow functional hipsters.
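That "wrap the side-effecty part so the pure code can keep pretending" intuition can be sketched in Python with a toy `Result` type (the names are made up; real monad libraries differ in detail):

```python
# Toy "Result" monad: wraps a value-or-error so pure code can chain
# steps without ever handling the messy failure path directly.

class Result:
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error

    def bind(self, fn):
        # Short-circuit: once an error occurs, later steps are skipped.
        if self.error is not None:
            return self
        try:
            return Result(value=fn(self.value))
        except Exception as e:
            return Result(error=str(e))

ok = Result(value="42").bind(int).bind(lambda n: n + 1)
print(ok.value)   # 43

bad = Result(value="oops").bind(int).bind(lambda n: n + 1)
print(bad.error)  # the int() failure, captured instead of raised
```

The pure functions (`int`, the lambda) never see the failure case; `bind` is the plumbing that threads it through, which is roughly what the Wikipedia distillation above is getting at.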
We aren't moving from OOP to Hybrid to FP, we are just moving to Hybrid because OOP is a useful tool. You mentioned C with classes didn't take off until after they added even more features and made C++, but that doesn't mean if it didn't have classes it would still take off. It could have required both classes and additional features to take off like all of the other useful hybrids.
Are we really moving to hybrid, though? I feel like the vast majority of people still use the OOP paradigm entirely
I love how functional programmers don't realise we have been doing functional programming as part of good programming practice in multi-paradigm languages forever.
They are going to shit a brick when they realise FP isn't the be-all and end-all. There is a myriad of other paradigms which are equally useful, and even more useful in a lot of situations, and that includes OOP.
@@thanasispappas62 I don't know a single good professional engineer with any experience who programs in a single paradigm
@@thanasispappas62 I wouldn't even know how to do that. I programmed in dataflow (reactive), OOP, functional and procedural styles all as a matter of course on an iOS app without even thinking about it, not to mention all the architectural and design patterns that were in place to hook it all together.
I think every single OOP application I have worked on has had some functional aspects, especially in languages people would think are heavily OOP, like C#. I mean, find a C# developer that hasn't used Linq, monads or lambdas.
@Felipe Gomes So, the first day of learning OOP? Maybe learn a bit more and you will realise that polymorphism is not only not bad, but also not exclusive to OOP
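The Linq-style functional constructs mentioned in this thread have direct counterparts in most "OO" languages; a Python sketch (the `orders` data is made up for illustration):

```python
# Lambdas, higher-order functions and comprehensions inside everyday
# imperative/OO code, roughly equivalent to a C# Linq pipeline:
#   orders.Where(o => o.Total > 50).Select(o => o.Customer).Distinct()

orders = [
    {"customer": "Ada", "total": 120.0},
    {"customer": "Bob", "total": 40.0},
    {"customer": "Ada", "total": 60.0},
]

big_spenders = sorted({o["customer"] for o in orders if o["total"] > 50})
print(big_spenders)  # ['Ada']

# The same filter/projection with map and filter as plain functions:
totals = list(map(lambda o: o["total"],
                  filter(lambda o: o["total"] > 50, orders)))
print(totals)  # [120.0, 60.0]
```

None of this requires an "FP language"; it is the hybrid style the comments above describe.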
I'm surprised this focused so much on the OO nature of the most popular languages rather than the fact that they're imperative.
It's because we were taught that OO is the future and the way to go. I think the reason functional programming isn't the norm is that C/C++ was known for being low-level and fast since its inception. The majority of programmers need performance, so they just automatically use C. Since the majority of programmers know C, languages that work similarly to C/C++ have gained popularity faster. Rust is gaining popularity fast because it offers the performance of C while being more logical and strongly typed than C.
@@coldsteel2105 humans also think in imperative terms (most of them do, anyway). You could easily teach a non-programmer Python, but good luck with Haskell
@@Nik6644 I can tell that you've never tried Haskell, if you think it's hard to learn. It's easier to learn and read than Python. It's true that any programming language, including Haskell, can be written in a way that's complicated and hard to read.
@@coldsteel2105 You can't tell shit about what other people have done. Your opinion isn't the same as other people's experiences. I'm sorry people don't use your language, but pretending you're a mind reader, or that your experiences mean others' are the same, doesn't make you a good programmer.
@@coldsteel2105 I don't think at all that that's the reason why. Most of the people programming C are directly in the hardware industry. Since C is such a simple and efficient language, their code can run on rather small chips that are cheap to make. There are only very few chips even capable of running languages like Java that don't also skyrocket the price. Within general app, website or software development, C is rarely used for anything, since it's way too low-level and people's devices are good enough to warrant the overhead when compared to development costs. Nobody except game developers programs things meant for actual computers in C or C++. A bit of evidence for this is the popularity of C#: that language offers a lot out of the box when it comes to app development, and it is actually used a lot in the industry.
FP has its place for sure, but mainly in hardware development these days. And the truth is, most developers aren't in the hardware industry.
I think there are real weaknesses to languages with strong FP support: They model problems very well and make it easy to write correct programs, but they don't model machines very well. That's because machines have mutable memory, caches, and registers, and utilizing those things efficiently is how to write an efficient modern program. This is why I think languages that have first-class support for mutating variables and memory will always be important.
I think ultimately performance is the hardest thing to get right in any programmer's day-to-day working. It's much easier to test for correctness than it is to test for performance. And especially with the advent of ML systems where more computation leads on its own to more correctness it's more important than ever to accurately and directly model the computation than it is to accurately and directly model the problem you're solving.
The unpayable cost of abstraction.
This comment deserves a lot more likes. This is the fundamental reason no one wants to go pure functional: you can't go back when you need to map the machine closely. From what most people see, hybrid is ideal and solves real world problems best.
Performance is not that much of an issue these days, by the way; programs generated by the GHC Haskell compiler are as fast as Java code on some benchmarks, because of the optimizations it does.
@@v_iancu "Performance is not that much of an issue these days"? Ok, that's nonsense.
Sorry, but I really, really, really doubt there are people here in this comment section who can write better code than modern compilers.
And I think I haven't cared about language performance for at least a year and a half at my job, in which I use an imperative language. IO, network and data structures are all more relevant, especially to performance; at least, it's not performance that would improve by changing programming paradigms.
Performance-critical sections can be isolated and improved upon in any language. Even Haskell has great FFI for the scenarios you describe. Keep in mind the language most used for these "ML systems" is freaking Python, which even JS can beat these days.
Unless you work on real-time systems, video games, systems programming, embedded systems or mission-critical software (like airplane control systems), I highly doubt you're writing micro-optimizations every day.
I'm not sure a lot of people even consider that the biggest part of why OOP is popular is the fact that human beings don't think "functionally" in their day to day lives.
They think "objectively".
Object-oriented programming I would argue is more intuitive, conceptually.
We've evolved to think about objects and their states. We have entire sections of our brain specifically designed for it. Our visual cortex is one big object recognition processor.
I think us programmers get way, way too far into the weeds instead of taking a step back and thinking about human nature and the way our brains actually work.
You have to train yourself to think functionally when object-orientation is kind of built-in from the jump.
"human beings don't think "functionally" in their day to day lives."
Some do, some don't. I have a very procedural oriented mind. A then B then C.
"We've evolved to think about objects and their states."
There is no WE. I had a terrible time with RDBMs (relational database managers) working with sets rather than records individually. I pluck records from a database and then process them sequentially. It is what I do.
We ALL natively use inheritance though. You've learned to drive a "car". Because you can drive a car, you can drive a "sedan" OR a "coupe", since they both inherit car. And since you can drive a coupe, you can drive a "corvette" and a "mustang". And all without special action, preparation, or training required to do that. I can throw you the keys and away you go.
@@jadedandbitter "I can throw you the keys and away you go."
I get the idea. Items specific to a model (such as where is the headlight switch) inherit "car" and then extend it by adding methods unique to model. As you indicate, multiple levels of inheritance can exist, and it does indeed solve some problems while creating new problems.
I doubt we all use inheritance natively. It is easy to assume that all people are like me, but I know this is not the case. What I cannot *feel* is just how different people can be. What you describe is a hierarchical world view, and the traditional way of describing this is, if I remember right, Platonic idealism; the existence of an ideal "chair" is the usual example. The ideal might not even exist. From that ideal are instantiated many kinds of chairs; and some things can serve as chairs that were never intended to be in the class "chair".
But other ways of looking at things don't involve classification. That's perhaps rare, but I've seen documentaries where a person's ability to identify an object depends on its orientation. Hold a hammer with the handle vertical, the subject can identify it as "hammer". Present the very same object but held horizontally, and the subject has no idea what it is. This person must memorize each object in each orientation and does not extrapolate similarities from one object to another.
In the case of computer programs, VAST differences exist in *what is wanted* or to be accomplished. OOP treats data instances as an "object" but perhaps with some distinctions like what color is the object. DATA is the focal point of such systems; you ask the data to do things to itself, and the object knows how to do things.
Older programming languages are focused on operations and data just happens to be what is operated on. I don't ASK a record to do anything! That's incomprehensible; it is data; it just sits there.
This is what caused me so much difficulty with SQL and RDBMs, I am very strongly process oriented. A datum arrives at one end, gets processed, and maybe something spits out the other end of the process. It was hopelessly confusing to retrieve a thousand records with a single command, how do I deal with each individual record? The idea is that you don't. If you want to, well that can be quite a challenge.
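The car/sedan/corvette example from earlier in this thread, as a minimal Python class hierarchy (the behaviour strings are made up for illustration):

```python
# Inheritance as the thread describes it: anything that can "drive a Car"
# can drive any subclass, and subclasses only add model-specific details.

class Car:
    def drive(self):
        return "steer, accelerate, brake"

class Coupe(Car):
    pass  # inherits drive() unchanged

class Corvette(Coupe):
    def headlight_switch(self):
        # Model-specific extension, like knowing where the switch is.
        return "left of the steering column"

def take_the_keys(car: Car):
    # Works for any Car without special per-model training.
    return car.drive()

print(take_the_keys(Corvette()))  # "steer, accelerate, brake"
```

The `take_the_keys` function is the "throw you the keys" part: it is written once against `Car` and never needs to know which model it received.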
I think that some of Python's success could be due to adoption in scientific communities, boosted by the 'killer app' of notebooks. The functional programming style may end up getting a boost from applications implementing microservices, where many services use an event bus and immutable data-change objects to synchronize, and from frontend state management that avoids mutation and side effects, such as React+Redux.
As a math guy the biggest selling point of Python was their robust support for functional programming. I'm sure I'm not alone in that.
I think the success of Python is due to its ease of use, no compile step, and very powerful modules like numpy, pandas, scipy etc. This made it super easy for researchers and scientists to jump in and get results without learning software development for 3 years.
@@abebuckingham8198 Functional programming in Python is pretty bad IMHO: stack limit, no tail-call optimization, impure, mutable variables, and the syntax pushes you towards for/while instead of map/fold. Functions in Haskell are actually mathematical in nature (pure, all variables are constants); being able to do algebra on types/values without any gotchas is a huge cognitive deload.
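The stack-limit and missing-TCO point is easy to demonstrate; a small sketch:

```python
import sys

# Python performs no tail-call optimization, so a "functional" recursive
# loop blows the stack where an imperative loop does not, even though
# the recursive call is in tail position.

def count_down_recursive(n):
    return 0 if n == 0 else count_down_recursive(n - 1)

def count_down_loop(n):
    while n > 0:
        n -= 1
    return n

try:
    count_down_recursive(sys.getrecursionlimit() + 100)
except RecursionError:
    print("recursion limit hit")

print(count_down_loop(1_000_000))  # 0, no problem
```

In Haskell or Scheme the recursive version would be compiled to a loop; in Python the idiomatic answer is the loop (or a fold like `functools.reduce`) from the start.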
Killer app for python is different for different communities and that is the true selling point of python
don't forget that scientists are not programmers. Python is popular in machine learning because it's like writing English.
FP on the other hand is hard.
When I learnt CS at university, I was taught as though OOP was to coding what the major and minor scales are to music. I was honestly led to believe that that was the only way to do it, save for maybe a small cult somewhere that still uses some other methods. It was presented as THE way we code. The end of history. I remember the first time I wrote functional code it felt like I was engaging in some forbidden heretical ritual. I remember feeling somewhat surprised that god had not struck me down with lightning for daring not to use a class.
That's really interesting, because my experience is drastically different. In my CS studies we were shown that OOP is definitely not the only way to go about programming. In the first semester we had Introduction to Programming in C, which made me really like procedural programming. In the second semester we had a mandatory class called 'Programming Methodology', in which we learned the basics of functional programming, and later built an interpreter in a functional language.
@@ShannonBarber78 University of Cape Town. And correct I did not. I got a BSc Mathematics. I took some cs and computer engineering courses as part of the credits.
Major and minor scales are a perfect analogy because that doesn't even scratch the surface: Lydian, Locrian etc modes, Middle-Eastern music with quarter tones, just intonation and further down to the lower levels of music theory iceberg
Honestly I don’t even understand OOP. It seems so insane. Why do I have to fuck around with classes and crap when I can just make functions to complete specific computational tasks.
Of course I am not a programmer by trade and most of my use for programming is making tools in matlab or vb script.
I’m sure it makes more sense in scenarios where you need a “worker”? Still…
functional programming and OOP are NOT opposites. They work very well together.
Why Isn't Functional Programming the Norm? because someone missed a ')' back in the 70's and only found it 30 years later
Good one lol
The plain ']' super-bracket was invented a very long time ago, so you could easily close all the insane numbers of outstanding parentheses
That's below the belt mate!
it was also in a 5000+ line file and about 35 developers have touched it over the years leaving it a house of cards no one wants to touch
LOL
Nice talk! However, one aspect that I think you've missed is that programming desktop GUIs was a "killer app" for OOP in the late 80s/early 90s. Inheritance works relatively well with a fixed set of operations (the widget events) that can be extended to many widgets (cf. Wadler's "Expression Problem"). Of course it didn't work as well for other domains (e.g. for data structures the parametric polymorphism of SML was much better suited), but the OOP adoption motivated by GUIs crowded out the alternatives for the next 20 years.
But now react, which is a pretty popular GUI framework, is becoming more functional with hooks replacing classes.
This x1000. Objects were a good solution for GUIs, given how state was managed in any of the systems of the time. It wasn't necessarily optimal for other problems, but because everyone was already programming for the GUI anyway ...
GUI work is one of those things you never realise how difficult it is until you've had to do quite a bit of it.
I could reasonably make the case that the GUI functionality of a modern program could be as much as 20 times more work to implement than the actual 'functionality' the program is intended to accomplish.
GUI code is SO tedious to write...
So unless you buy into the contemporary "everything is a webpage" bullshit, when exactly did actually hooking up a GUI become any less important whatsoever, for any software that's not just running headlessly on a server back-end somewhere...?
@M. de k. Behind the scenes, there's no such thing as purely functional anything... Something has to generate side effects for things to happen... You don't consider the VM, interpreter or compiler - why should you care about the inner workings of a library when you should only be using its API?
In OO, when they say prefer composition over inheritance , they didn’t mean no inheritance. That was quite a jump in the thought process of this presentation, I think. Inheritance still plays a big role in OOP even if you adhere to the composition over inheritance practice.
I think the problem comes when you try to use inheritance for code reuse, needlessly or for non immutable type relationships.
I would be hard pressed to even remember the last time I used inheritance more than a few times on a project (ignoring abstract classes)
So yeah, there are very few reasons to favour inheritance over composition, and most of the time when you see inheritance, or see it causing problems, you can fix it with a simple strategy pattern / polymorphism.
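The composition-over-inheritance / strategy idea can be sketched in a few lines of Python (the `Duck`/`Bird` names are made up for illustration):

```python
# Inheritance: behavior is baked into a subclass hierarchy.
class Duck:
    def speak(self):
        return "quack"

class RobotDuck(Duck):
    def speak(self):
        return "beep"

# Composition + strategy: the same behavior is injected as a value,
# so no subclass is needed to vary it.
class Bird:
    def __init__(self, speak_strategy):
        self._speak = speak_strategy

    def speak(self):
        return self._speak()

quack = lambda: "quack"
beep = lambda: "beep"

assert RobotDuck().speak() == Bird(beep).speak() == "beep"
assert Duck().speak() == Bird(quack).speak() == "quack"
```

The strategy version needs no new class per behavior, which is why it tends to scale better than deep hierarchies.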
@Felipe Gomes I'm pretty sure subtyping is an important element of object oriented design.
@Felipe Gomes only if you are shit at programming. I think we might have found the problem.
@@sacredgeometry Then what does the "I" in SOLID refer to?
I think this talk omits one very important point about what makes or breaks a language: the end result. In other words, when using a particular language to build a reasonably-sized system, does the language work? Does it perform reasonably well, or is it too slow and does it require too many performance tweaks? Does the language have safeguards to catch errors at compile time to avoid production errors (e.g. static typing)? Does it have a decent ecosystem of tools, documentation, and libraries to make me more productive? And finally, does it have a large enough community to share ideas and to keep improving it? When factoring in these considerations, Java scores very high. It's not just "marketing" as this speaker seems to suggest. This is why Java is still running very strong 25+ years later.
Put simply, the language has to “work” to become successful. Otherwise, we end up with Ruby or Scala and people move on.
It's all C++ in my world of Chinese microcontrollers. The annoying thing is learning too many languages, so you mistake one language for another. You just need to know one good language well, and then you don't think about it anymore and it's as if you write in English, where your brain is 99% on the way you design what you are building. All the other software people write is C++ as well, so you can just copy libraries and whatnot and not worry.
@@Andrew-rc3vh Okay, you made me chuckle there, why would you copy a library? It is a _library_, you just use it. It seems to me that you are not getting the most out of your language.
@@MorningNapalm I often download libraries from GitHub, especially the ones used to drive specific chips. Another place is simply forums, often with code boxes on them. C is the language 99.9% use in the things I do. If it's not C, sometimes it's Lua, sometimes the odd Python file too.
Actually, Python had its "Rails moments" around 2012 with increased popularity of deep learning research and applications. It also coincided with widespread use in systems management and automation scripting.
And now it has a strong community around it with jupyter notebook / lab, numpy and pandas. It has become the default for many applications in science / statistics / data science
@@KaplaBen Dont forget automation with a lot of easy to use packages that can handle most communication protocolls in the industrial ecosystem. Like CAN, Modbus, MQtt for the IIoT, ....
The community provides almost anything to exchange data with almost every machine, motor whatever.
Just XCP is missing. At least i couldnt find anything for my needs.
Similar thing could be said about C# which massively grew in popularity due to Unity3D.
@@Luxalpa I do love Unity, but I think Microsoft had way more to do with the popularity of C#/.NET. Besides Windows desktop application programming, there was also ASP.NET which was very popular among enterprise web developers. MSFT saw themselves as being in a very real battle with Sun (later Oracle) for the enterprise market. You'll note that if C# & Java are combined (even with say 30% overlap), that they beat out even JS.
So you're saying Python had its "Rails moment" as a front-end language for high-performance languages like C and C++?
The creator of elm-css recommending a CSS replacement as a killer app. The Elm community setting an example as always. Thanks Richard.
I've always used FP with OOP, I think both can live together and complement each other.
Do you use immutabilty or do you use other principles of FP?
He's channeling his nervousness into high speed communication instead of tugging on his elbow and stuttering which I appreciate
That's why I watched it at 0.75x
Finally a video I don't have to listen to at 1.5x
@@Sk0lzky I watched at 2x anyway. I'm a busy man :)
that's called being excited lmao
I'd say one of the main reasons for those languages being in the top 10 is that they are so easy to learn once you already know one of them. It's like learning another Romance language after you know one, e.g. learning Italian after you know Spanish. Functional programming, on the other hand, is like going from Spanish to Hungarian.
True. I learned C++, and I've learned all the other languages on the fly as I used them.
Learned BASIC in middle school & some Pascal in HS. Taught myself QBasic in the 90s. OOP is like learning that proverbial Hungarian for me. Fortunately, PHP works well as a procedural / functional language. It's only when I have to work with other people's code that I have to deal with the OOP paradigm.
@@corryunedited8154 Procedural is not functional don't get them confused
I learned Java and Haskell at the same time. Java was just so much more intuitive. People just think imperatively
@@Nik6644 Speak for yourself. I learned Lisp first after really struggling to learn imperative languages. Lisp just felt natural and intuitive to me, I would spend countless hours as a kid just writing lisp on scraps of paper and whatnot even when I was on the bus or having dinner, then there was Haskell. I had less intense but similar experiences with Erlang and a few other functional ones. No imperative language has ever made me feel this way, and I wouldn't willingly touch any of that stuff in my downtime.
This talk should be called "The History of OOP" because it has nothing to do with functional programming.
Very little, yes. I was hoping he'd at least explain FP so I'd know how they compare. From what I've read, FP just sounds like cramming all the data into a module and sub-setting functions into various other modules. Just about any language can do that... except maybe Eiffel.
But he was very sketchy on procedural and OO programming history.
I watched it out of nostalgia
@@rontarrant And that tends to be the problem. OOP programers use FP all the time in general purpose languages. It isn't something that really needs dedicated languages unless you are working in some domain where you want to force the behavior for some reason. The big top 10 languages, the thing they share is not OOP, but general purposeness. Even if they are domain restricted like Javascript, they are about as far from one trick ponies as you can get.
C'mon Ron. You're going to denigrate FP while claiming not to even know its definition? Look it up! Your statement sounds like jealousy. "I ain't need no learnin' fron sum one with a paper sayin'' P-H-D"! lol Functional programming attempts to bind everything in a pure mathematical-function style. It is declarative: its main focus is on "what to solve", in contrast to an imperative style, where the main focus is "how to solve". It uses expressions instead of statements: an expression is evaluated to produce a value, whereas a statement is executed to assign variables. I was impressed when, in the 90s, I saw someone debugging a LISP program while it was running. Try to do that in other languages.
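The expressions-versus-statements distinction can be shown even in Python (a toy `classify` example invented for illustration, not something from the talk):

```python
# Statement style: execute steps that assign a variable.
def classify_stmt(n):
    if n < 0:
        result = "negative"
    elif n == 0:
        result = "zero"
    else:
        result = "positive"
    return result

# Expression style: evaluate a single expression to a value.
# In an FP language the whole function body is an expression like this.
def classify_expr(n):
    return "negative" if n < 0 else ("zero" if n == 0 else "positive")

assert classify_stmt(-5) == classify_expr(-5) == "negative"
assert classify_stmt(0) == classify_expr(0) == "zero"
assert classify_stmt(3) == classify_expr(3) == "positive"
```

Both produce the same value; the difference is that the expression form has no intermediate assignments to reason about.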
I want to get an Apple II now LOL
To be honest, this question seems a bit silly to me, like the old RISC vs CISC debates. Guess what, CISC won, and even the "RISC" architectures provide CISC instructions. But you know what, RISC won, as those instructions are broken down to microcode that is much more RISC like. Turns out, both won.
The same is true with functional programming vs OO, both won. All the popular languages have OO and functional features, it's just pragmatic to give programmers lots of tools in the toolbox so that they can solve a variety of problems. And this has been the trend since even the '90s when the first C++ standard added the STL, a set of algorithms heavily inspired by functional programming style, but done in a way that works well with the C++ language. The first book I read that really taught me functional programming was Modern C++ Design. When I tried out Haskell I realized that I already understood the core principles based on my C++ experience, and now it was a matter of thinking purely in terms of functional programming.
Nowadays the discussion isn't if a language should be one or the other, but to what degree. Should variables be immutable by default? Do we opt into purity or opt out? etc.
Take intels golden handcuff x86 cpu out of the equation and RISC won. Or better yet considering that most CPUs in this day and age are ARM processessors then RISC won.
@@carlosgarza31 Except ARM isn't strictly RISC these days, just like Intel isn't strictly CISC. That is way too idealistic and in the end pragmatism won out. The old concerns became obsolete once the instruction sets started being implemented in terms of an underlying microarchitecture. MIPS based CPUs are probably the purest RISC processors out there today.
Interesting metaphor/analogy. Btw, Apple might be moving to Arm. Motorola 68xxx (CISC) -> PowerPC (RISC) -> Intel (CISC) -> Arm (RISC). Btw initially Apple was using Pascal, for the first Mac :). Btw ObjectiveC was more like SmallTalk, whereas Swift feels more like Java/C#/C++ type OOP.
I think immutable for "variables" (consts) is kinda easy - const is the more frequent use case, and more safe, so should be default. Now mutable/immutable objects/structs is another question. I wouldn't mind a language in which immutable is default, as long as there is a way to opt-out. I think having immutable as default could help many devs realize the benefits, by just making them "stumble" into the defaultness and ponder (hmmm, this must have some benefits, since it is default :) ).
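Opt-in immutability of the kind described looks like this in Python (the `Point` name is made up); an immutable-by-default language would flip it, so the frozen behavior is what you get without asking:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # opt in to immutability
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)

# Mutation is rejected at runtime.
try:
    p.x = 5.0
    mutated = True
except Exception:  # dataclasses.FrozenInstanceError
    mutated = False
assert not mutated

# "Updates" produce a new value instead of editing in place.
p2 = replace(p, x=5.0)
assert (p.x, p2.x) == (1.0, 5.0)
```

As the comment suggests, making this the default would push people to stumble into the pattern and notice its benefits, with mutation as the explicit opt-out.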
How much assembler did you do? I did it from the 80s into the 90s.
I saw the start of the debate, it was clear for me, RISC was the best.
The reason CISC ruled is Intel and its killer app (and killer hardware!): Windows on the PC.
Intel said "I can do one-cycle instructions too", but that wasn't the point.
You cannot mix the two opposite paradigms.
It was easier to see in those days: you have very limited space on a chip, so what do you prefer to carry inside?
* CISC favors a complex, variable-length instruction set and lots of control circuitry, which leaves little space to store data. The worst, from Intel: all operations worked only on the main register, "A", the accumulator.
* RISC was: a very small, very basic instruction set, lots of registers to store data, and any operation worked on any register. You don't waste so many cycles moving data from/to memory. REALLY fast.
* Complex instructions were less than 1% of a program; many of them were never used at all.
* RISC forced you to do complex things outside the chip. Complex operations took more instructions, but less data shuffling.
What about programming? This is important. RISC was possible because of compilers. Programmers don't need to implement those complex operations in every program; compilers do.
This guy is a good public speaker indeed. I'm not a computer scientist or a programmer but I sometimes do some scripting although my preferred language, R, didn't make it to his list. Nonetheless, I like how the presenter guides us through the history of how these languages were developed and gives us the context in which the decisions that affect us when using these languages today were taken.
@@dispatch-indirect9206 How is *R* a language for pirates?
@@caty863Arrrr
Python has a low barrier of entry and seems to be the non-programmers’ and hobbyist programmers’ language of choice. With its large set of libraries, it allows them to focus on solving their problems in math, statistics and AI instead of requiring them to become proficient at software development.
Unfortunately the price is that it is extremely difficult to do great software architecture in Python, without a superhuman effort and dollops of discipline.
Python is far too slow in the way it runs for general use. I could write sloppy C code which could still far outpace Python. Python will probably reside mostly in academia. It's easy to learn and quick to write. When you need fully optimized code, C or assembly will likely be the best option for many years to come. I write code for microcontrollers so I'm obviously biased. If you want one answer quickly, choose python. If you need many answers quickly, choose C or ASM.
Python is also huge in the second and third world, where education is not as great and Java and C# are not popular. A lot of this is because they don't speak much English there, and tutorials for the most popular languages here are in English.
I've had personal experience with this, trying to help people in former Soviet countries learn the languages I know. In the end, Python was much easier for them just because most universities there don't even have a programming major, just basic IT work.
@@billybbob18 I think for small-medium scale systems, things like business CRUD applications, Python or a similar language can make a lot of sense. A small scale CRUD application might only take $500 a month to host, but developers in the USA cost like $10k a month.
Nah, Python has a REPUTATION for having a low barrier to entry. There is a big difference. A language with a low barrier to entry would be something like Go. Python is actually a lot more complicated, less predictable, and has more non-obvious behavior.
8:20 - No one knows why flash died? I can tell you EXACTLY why flash died. It died the day chrome made flash disabled by default. I'm in the ad industry and when chrome did that, the entire industry switched from flash to js overnight.
Flash died because it was designed as a temporary workaround that would die once JS was finally standardized. Since the early days, JS was the future of the Web, but its adoption was slowed down by a stupid war between the IE and Netscape teams, who decided to systematically implement each method slightly differently and with a different name. So any JS code from that time consisted of two copies of the same code, one for IE and the other for Netscape. While that stupid battle was ongoing, people who wanted truly interoperable and easily maintainable code resorted to a few workarounds to replace JS in areas where it was lacking or impractical. Flash was one of them, along with Java applets and a couple of other third-party modules.
Now that everyone has finally agreed to work together on a single standard JS implementation, all those workarounds are no longer needed.
@Svein Are Karlsen The programming language is not responsible for bad coding practices by some amateurish plugin developers or web site creators who don't care about memory usage because the code executes on the client PC instead of their lovely server. Neither for the ad-based economy of many web sites that are cluttered with banners and unskippable commercials.
First tip to reduce browser memory usage is to install an ad-blocker. You'll benefit on memory usage, loading time and readability on most web sites.
After that if a single browser tab uses several gigs of ram, find another less memory hungry web site that has the same info with less strain on your computer. If all your tabs are affected deactivate all plugins and reactivate them one at a time to identify the crappy ones and look for alternatives.
I have about 20 tabs constantly open for several weeks, some with video and various memory intensive dynamic content, plus a dozen plugins that add overhead in each tab. And even with all the memory leaks that built up over time the total memory footprint of my browser is less than 2GB.
Flash not running on iPhones or any of Apple's products was also a significant factor as to why it became abandoned. And of course, the reason for this lack of compatibility was primarily motivated by corporate rivalry.
Flash had a specific niche for creating vector-based graphic animations with extremely small file sizes. The spec standard for banners was 40k, and we used to fit pretty complicated stuff into that size, with animated cartoon characters and such. The new JS banner standard spec started at, and still is, 200k, and the animations are extremely simple compared to what was happening in the Flash days, because still, after all these years, nothing even comes close to Flash's ability to quickly create advanced animations in very small files. The fact Flash didn't run on mobile was fine. Mobile is always a separate build anyway: simpler, smaller-dimension ads, or quite often just static images. The niche of easily making fun, cartoony content for the web died when Flash did.
@@cakep4271 Perhaps not being accessible on mobile was fine at your company, but at the company I worked at it was reason enough to switch every future project to JavaScript.
I usually speed up the video when watching presentations, but at this video ... i couldn't
Same. This guy already speaks at 1.25x naturally, lol.
I had to check I didn't have it set faster
I actually found he sounds far more natural at 0.75x, but since that is choppy...
I did watch this at 1.25 partly because his pronunciation is good. What I cannot speed up are British tv shows, tho
@@nin3se QI needs to be watched at 0.5x
In my experience (mostly with) PHP, most code mixes FP and OOP without problems. Some structures make more sense as clearly distinguishable objects, some others don't need to. So asking if FP should be the norm is like asking if HTML should be the norm instead of CSS. At the end, whatever suits you. Most of the time it makes more sense for you to add a class to the DOM and let CSS draw things as they are supposed to be. But sometimes you are faster, or it makes more sense, to simply style the DOM node without relying on CSS. So... my guess is FP+OOP is the future.
I know for a fact that in the 80's my Dad was studying so-called structural programming at Moscow State University. No mutation, pure functions, all that stuff.
It WAS the norm. At least there and then.
I wonder what's next... Functional programming zealots knocking on my door, holding a brochure asking me whether I have a personal relationship with pure functions yet?
@Eucalypticus if only both paradigms were Turing complete 😂😂😂
weird is, OOP zealots did that and still doing it.
The biggest difference between
circle.grow(3)
and grow(circle, 3)
is IDE hints. You don't need to remember the names of functions in the first syntax; you can just type a dot, and the IDE will show you what you can do with the object.
The biggest difference is that in the second you're passing an object, and the function doesn't have to hold a reference to that object to do something with it, whereas if I had the following:
grower.addThree(circle)
this would break OOP's rules of encapsulation, because I'd be passing an object, not information about the object.
tell that to people working in vim without any completion ;))))
True, as of today. However, one could imagine, e.g., writing `circle` and pressing something like alt+enter for "show me the operations I can do on this object", which would be even better, because it would include cases in which it is not the first (or `this`) argument. CLOS multiple-dispatch polymorphism comes to mind.
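The `circle.grow(3)` vs `grow(circle, 3)` contrast from this thread, sketched in Python (the `Circle`/`grow` names come from the comments above; the growing-the-radius implementation is invented):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Circle:
    radius: float

    # "Noun first": circle.grow(3) -- discoverable via dot-completion.
    def grow(self, amount):
        return replace(self, radius=self.radius + amount)

# "Verb first": grow(circle, 3) -- the same operation as a free function.
def grow(circle, amount):
    return replace(circle, radius=circle.radius + amount)

c = Circle(2.0)
assert c.grow(3).radius == grow(c, 3).radius == 5.0
```

Semantically the two are identical; the difference really is notation plus tooling, which is the point made in the talk and in this thread.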
Good question. And even better answers. After programming mostly in C# for the last 15-20 years, I am currently learning F#. I also believe that functional programming will become more important and prevalent in the next years.
I think the speaker missed one major reason why FP style isn't the norm. Because, at times, it can get really weird and difficult to understand, especially for beginner programmers. At times. But not always.
OO style has the advantage of accessibility, and honestly, that's very important to a lot of language creators / maintainers. Python is such a popular language because of its ease and accessibility.
I also think the reason languages like Java adopted the FP style was to give programmers an avenue to write shorter, more compact code.
But all in all, it is growing in popularity, and I certainly love it, that's for sure. I really do appreciate it.
I agree with your first paragraph.
For me, I understood what FP was for a long time. I had read many many books and articles on FP. I had finished tutorials etc...
But, I didn’t use it because I didn’t understand how to jump beyond the text books and tutorials. It was only when I started doing some things to fix a problem I had that I suddenly realised after a couple months that I was doing FP.
Since then it has clicked with me and I use the ideas a lot now.
Why would you want an FP-only language when a hybrid language gives you the best of both worlds. If you choose an FP-only language, you lose out.
because these people are religious zealots, not practical programmers.
Why don't you mix your chocolate icecream with shit, it's just a little bit of shit, surely that's fine?
"Hybrid language" basically means any OOP language that markets itself as a "multi-paradigm language", meaning that it has basically zero facilities for functional programming.
@@vertie2090 Not at all. For example, Swift is fully an OOP as well as having all the facilities of a functional language.
OOP features sometimes conflict with FP's. For example, I cannot imagine how interfaces (like in Java) can coexist with type classes (in Haskell, or traits in Rust), or how algebraic data types and inheritance can work nicely together. So I don't think hybrid is always the best idea.
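For what it's worth, Python offers a small glimpse of the two styles side by side: `functools.singledispatch` behaves a bit like a single-parameter type class, letting you add an "instance" for a type after the fact (even for types you don't own), while a subclass interface bakes the behavior into the hierarchy. This is only a loose analogy to Haskell type classes, and the `Shape`/`area` names are invented:

```python
from functools import singledispatch

# Subtyping interface: behavior attached at class-definition time.
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

# Type-class-flavored alternative: behavior attached after the fact,
# per type, without touching the class.
@singledispatch
def area(shape):
    raise TypeError(f"no area instance for {type(shape).__name__}")

@area.register
def _(s: Square):
    return s.side ** 2

@area.register
def _(n: int):  # even a type we don't own can get an "instance"
    return n * n

assert Square(3).area() == area(Square(3)) == 9
assert area(4) == 16
```

Whether the two mechanisms coexist gracefully in one language, as the comment asks, is exactly the open design question; here they simply live next to each other without interacting.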
Engineering has always been about resolving what can be done vs. what can be done economically.
This is true even in software engineering and it isn't complicated. Much of what we do today in software engineering is a direct result of the free software and open source movement and it is the main reason why programming has exploded over the last twenty years.
But first let me talk about Visicalc because I think the analysis of why it became popular is completely wrong.
First of all, I have no idea where he is getting the $10,000 price tag for the hardware. An Apple II computer with monitor was just under $1,300, and two floppy drives at $350 each bring it up to under $2,000. So if you, as a business, get 5 of these, then yes, $10,000 is achievable. But as a business, where you have a serious need for this kind of work, you either shell out $2,000 per machine, or you get a minicomputer system or timeshare on a mainframe. IBM's minicomputers at that time could cost anywhere between $10,000 and $100,000, not including the maintenance contracts. So if you compare getting 5 microcomputers, which you own, at $10,000 versus a low-end minicomputer for $10,000 plus additional costs, as a business it is a no-brainer. Additionally, the staff you would need for running the software and maintaining the equipment was far smaller for microcomputers than for minicomputers. It really comes down to the economics, not the "killer-app" theory.
So, now let's look at the programming languages themselves. In fact, let us look at the top three languages: Javascript, Python and Java. They have two things in common: they have been around for better than twenty years (remarkably, Python was created in 1989) and they are free. While Richard Feldman explains that Javascript is dominant because it holds sway over the internet and internet applications, that is strictly not true: with the advent of platforms like node.js, plus the many applications that embed Javascript as their scripting extension (thanks in large part to Javascript interpreters starting with SpiderMonkey from the Mozilla Project), it runs far beyond the browser. Java went from a horribly slow and buggy virtual machine and programming language to a fast and stable environment for serious enterprise programming. People who have embraced Python have developed hundreds of libraries, and it has been the go-to environment for big data / machine learning / predictive analysis applications, which can be written quickly and easily. Much of what it can do can also be done with the R language/environment and Octave, but without the overhead.
C# is free as well: Microsoft has made it free with the Community version of Visual Studio, and there is its open-source counterpart, Mono. The language has also matured since it was created 20 years ago; when I used it for some of my projects, I found it very easy to use and was able to get projects done quickly. One of those projects was a small web server, which I was able to have up and running in a couple of weeks.
C/C++ compilers used to cost around $100 - $1,000. Projects like GCC and Clang have brought that down to zero. My first C compiler in 1990 cost just under $100. My first Modula-2 compiler in 1988 cost $150. In 1996, the C/C++ compilers came as part of the MSDN development tools package, so it was hard to say how much the compiler cost by itself.
So if you look at any language now, they are free. You pay for the development environment and tools to go along with the language. You also have to have the experience to go along with it, which means you have to find people who have wanted to invest their time in those languages, because companies do not want to pay for it. C/C++, Java, Javascript, C# and Objective C have similar structures and syntax. Python got its wings in academia where they are not as concerned about ROI (return on investment) as commercial businesses. What holds functional programming back is a compelling reason to use it.
Richard Feldman has his roots (pardon the pun) in Elm, so I went to the elm.org website. Its title says: "Elm - A delightful language for reliable webapps". I don't use a language because it is "delightful"; I use it because it solves a problem. I have used BASIC, FORTRAN, C/C++, Modula-2, Java, Javascript, XML/XSLT, Prolog, LISP, Ada, PCL5, Postscript, Ant, Bash, and others, all with the intent of solving problems. I have had no compelling reason to use any functional programming language. Maybe one exists; I just haven't seen it.
The second problem I have with it is how the functional programming community acts like they are victims. "Woe is us. Nobody likes us. Why doesn't everyone think like us?" That is what this 46-minute video is: an expression of victimhood, when they should instead give examples of why their paradigm is better than others in particular cases. If you can't do that without slamming other tools, no one is going to take you seriously. If I said that you ought to use a hoe exclusively over a shovel for gardening, you would (justifiably) laugh at me. Why are you so concerned that other people don't think like you? It's silly and it makes you look childish. It looks like you are treating your choice of language paradigms as a religion rather than as engineering tools.
So stop blaming others for your problems. Stop whining and crying about how you are not understood. If you really have something (and I am not convinced that you have), then present your advantages rather than try to undercut everyone else.
Wall of text
I read your comments..., how interesting and compelling they are and wish I could get more of it...well done. Most people in this field do not express themselves as you do...my opinion.
Maybe check this out as a follow-up th-cam.com/video/3n17wHe5wEw/w-d-xo.html
Tldr. Nah, just kidding. You have a good one: projects budget, ROI, training, knowledge transfer, etc. FP (as I saw personally with Haskell and ML) has a way harder learning curve than others (like C, Java, Javascript) so there is a market problem. I do believe FP is (somehow) going to be on the top 10 someday though
This is a great comment. Thanks for sharing
That's why I stooped thinking with languages entirely.
I just go like “I want to make a math function” and punch in some bits into an executable file and it just works
Interesting talk, though as some mentioned, Python did get a boost from the ML/AI hype train the last 7 years or so.
I've always thought that functional languages and the functional style have never (and maybe will never) become the dominant ones because the world and people and computers don't operate that way. I think most programmers have heard the quote "To iterate is human, to recurse is divine". Well, that's just another way of saying people don't naturally think recursively, they iterate, they get a list of directions of steps, do this, then do that and each step changes the state of the world. Similarly computers are basically really complicated state machines. A program by definition, changes the state of the machine, even an empty program that doesn't do anything useful, just immediately exit is still changing state under the hood. And while a functional style might give performance benefits in rare situations involving big data operations across multiple servers and things like that, in general most applications are much faster when written in a traditional imperative/procedural style. The classic obvious example is gaming or anything graphical because generating a whole new game state and frame buffer every 16 milliseconds rather than editing in place is prohibitively expensive. Another point that's been made before is a huge amount of programming is directly with the hardware, the bits and bytes o configuration and initiation, low level drivers, embedded microcontrollers, none of which are feasible to do in a functional style. Even if someone wrote an OS in a functional language in a mostly functional style, there's no way to go all the way, including the bootloader, firmware etc.
Edit:
I forgot 2 more factors that I think are important.
One, it is much easier to reason about performance when dealing with an imperative C-like language than a LISP/Scheme like language. It is easier to reasonably guess what the assembly would look like (and even possible to do inline assembly in C/C++ etc.). Even if you can write equally performing functional code, the generated assembly is not something that you could easily guess or map to from the source.
Two, it is much easier to edit an iterative, C-like block structured language. We edit code in lines and semantically the code executes line by line. We can easily insert or remove lines, even non-trivial chunks that add/remove/change significant behavior. To do the same thing in a Lisp style language might change the entire structure of the program/function or more likely several functions. Our textual editing and debugging tools map far better onto a line/block oriented language than a Lisp-y functional language. Whether you actually use a debugger or print statements combined with careful thought, it is far easier to do with C than Lisp. Granted there are functional languages that are more block like but it's a semantic problem too. The functional style of first class functions created and passed around willy nilly is harder to step through even if the textual representation is more traditional.
Thanks man. Now this explains why I struggled with learning FP on my first encounter. Everything is so complicated it felt like BlackMagic.
Top comment! Thank you!
So would writing all your functions in a recursive manner be a great anti reverse engineering technique?
@@jackmurphy8696 Probably not. Serious reverse engineering is hard enough already that iterative vs recursive in assembly isn't going to make much difference to them, certainly not compared to far better obfuscation techniques.
Just use the normal obfuscation techniques combined with the movfuscator if you really want to drive a reverse engineer to suicide
@@RobertWinkler25 I figured it wouldn't stop any of the good ones out there , but it would certainly stop me. That stuff is really hard. Thanks for this info.
Perl was king of the early internet; everyone used the CGI module.
As history goes, Perl is so well designed that Larry Wall decided to start from scratch for version 6.
@@BrunodeSouzaLino The reason for starting from scratch was that between 1-4, maintaining backwards compatibility was starting to hold the language back, which isn't unique (see Python 2 -> 3). That said, if you haven't played around with Perl 6 (which is being renamed Raku to better distinguish it from Perl 5), you should take a look at it. Speed is finally mostly on par with other scripting languages and it handles FP very nicely (not surprising, given a lot of the improvements to Haskell came from P6's initial implementations that were done in Haskell, so developers were very used to the FP style)
@@MSStuckwisch That's mostly when you favor features and think not good about the future. That's why C++ succeeded. They really carefully wait a very long time for adding features and keep back compatibility a TOP priority. Never had to break C++ in order to go to the next version. Therefore C++ is superior. That's why people say: C++ MASTER RACE. Understand?
@@HermanWillems C++ obviously didn't start from scratch...
I remember those days. That was around the time I invented AJAX without realizing it (I was trying to get around browser incompatibilities).
Perl is a fantastic language. Even Lisp snobs won't criticize it.
Love it: “out of slowwhere” gonna pinch that phrase
slow-ware
morthim nope, it’s slowwhere in "out of slowwhere", like "out of nowhere".
@@becomepostal /r/whoosh
I'm gonna pinch you for being such a dorkwad.
It is a funny joke although after "functional javascript" would have been better comedic timing
Functional languages all have the same basic flaw: That's not how computers work. Sure you can force a computer to implement the desired operations, but it is less efficient (under the best circumstances) than procedural languages that more closely align with what the hardware is actually doing. This rules out functional languages in any circumstance where performance might be critical. A good example of this is in embedded systems, where excess compute power is a waste of money and thus affects profitability. If I'm going to sell 1M units of something, then its worth an awful lot of developer time to use a 1$ cheaper microprocessor, and it turns out that in most embedded use cases, its actually much easier to write procedural code than it is to write functional code.
Functional language developers also fundamentally misunderstand what computers are for. Computers are not primarily used for calculating results of functions. Rather, computers are about controlling the real world, storing information and sorting data. Calculating results strictly from current inputs is a very small piece of that. In other words, the "side effects" that functional languages poo-poo so much are the only real reason that computers even exist. Pure functions are the aberration, not the norm. For that reason alone, functional languages will never become commonplace.
You got it. You also mention the one and only side effect your employer is actually interested in: to make money. FP folks are a lot of things, but they are not engineers. They do not care for the three goals of engineering: To get it done. To get it done on time. To get it done on budget.
Wow that's a rather strange argument "if this other form of Java would have been marketed it would have been the major paradigm".. Don't you think a lot of thought would have gone into deciding which language to market in the first place?
He is also vastly discrediting potential reasons like 'ease of use' and 'productivity', both of which are pretty hard to measure from language to language, but they just 'evolved' to be dominant perhaps for this reason
_> Don't you think a lot of thought would have gone into deciding which language to market in the first place?_
Why would you think that?
Microsoft had BASIC. Sun had Java. Java was orignally called Oak and was meant to be a better C++. Sun wanted something that runs on a VM, something hardware agnostic, which C and C++ are decidedly not. And something with a more marketable name. Apple had a very marketable name. What kind of name is "Oak"?
So they took the language they had, changed the name, put it on a VM, and aggressively marketed it, not always strictly honestly.
The language can be trademarked, but any Turing-complete language is just one of infinitey many ways to express the same things. The algorithm doesn't care what language it is written in: It is still the same algorithm. The CPU doesn't care what language the code was compiled from: It is processing the same instructions.
True to a point. However, most of the frameworks that we've adopted in the last decade or so have just complicated things imo. I feel like we're at a point where we're changing for the sake of change...
Has anyone else felt this, or is it just me?
I do program in C# with an FP style. It's interesting that my colleagues pick up on some of the patterns I use as, "oh that's a really good way of making systems safe and easy to test" but don't immediately identify the patterns they are starting to use as FP.
I would love to see well used funcional programming.
The codebase right now is full of functions that changes states to other states and to edit it, you have to jump from line to line.
Its a whole spagetti
Functional programming is getting very popular within C++. A large number of talks on CPP conferences are about functional programming.
Agree, and I would say more important than functional programming is functional thinking. Code can be structured in a functional way and give the same benefits as programming is functional languages.
@@kwanarchive And many people still do effectively imperative programming in functional languages, which is awkward.
C++ is actually evolving SUPER FAST right now. Soon we have Modules, Concepts, Meta-classes. and much fucking more. It just sucks everything up that is good, without losing back compatibility. For me it's the best language that currently exists. Hope C++ also sucks up things from Rust. :) Because rust has some great ideas.
• "Avoid mutation and side effects": anyone writing C++ with consts and no globals is doing this by default. I find compiler support for this in C* ++ to be just as good as in Scala.
• "1st class functions": C++ requires some typing ceremony around lambdas; Boost makes this slightly less eye-gougey.
• "support for the style": modern C++ culture is all about being functional, enough so that I'd argue that it compensates for the ceremony cost.
@@EricPrudhommeaux I mean the ceremony just involves a shitload of brackets. Pretty normal for functional languages....
I've only worked with OO for the last 30 years so I was hoping to hear why Functional should be the norm.
javascript isn't really OO - it is better described as prototypal in that it does away with classes and methods (although you can emulate them and recently they have added syntactical sugar for them) - functions are first-class citizens in javascript so you can *almost* argue that javascript is also a functional language although it falls short for purists. You might call it a PROTOTYPAL-FUNCTIONAL language as it straddles both and, in doing so, is very unique and effective way to program. Add to that it's C-style syntax and you have a legible and easily understood syntax in a unique language that allows you the best of both worlds where sometimes functional-style and composition are favored to solve problems, or when encapsulation (which it does quite well with closures) and strict hierarchies ( that are unlikely to change) are needed. It wasn't just that javascript had dominance in web-clients that propelled it - it was the freedom it gave us.
Your statement is only true up until ES5 JS. ES6 and beyond have true OOP classes.
@@karlschipul9753 JS has always had "true OOP" classes, it's just that the inheritance model is prototypal. class is merely syntax sugar, with only a couple differences.
What even is the point of prototypal inheritance tho? The only thing I've seen it useful for is shims, and that's more because of the lack of a standard library for JS and less because it was a genius idea to implement
@@karlschipul9753 No, the classes in js are just syntactic sugar for prototype inheritance.
The difference between Prototypical and Classical OOP is moot to normal programmers, it is only important to language implementers and low-level hackers.
The only practical difference is in Prototypical you can grab the "prototype" object and manipulate it at runtime and have that change affect all pre-existing objects.
composition over inheritance is an advice you give to people learning OO because when they start getting into OO they have a tendency to start creating subclasses for every field they need to add when in fact they would be better served by having more members in some object rather than a specific subclass for a given structure. The "aha" moment comes when you realize that objects aren't there to encapsulate data/strucure, but behaviour. And this is where we get into the most important feature of OO which is conviniently left out of this talk: polymorphism. And if you want to understand just how powerful this can be go look up the design pattern template method (and in fact a lot of design patterns are really difficult to implement without OO features).
and more importantly, as mentioned, functional programming requires no special language features. There's a reason for that. Paradigms are hierarchical. Functional -> Imperative -> OO
you can do functional in OO languages, you can't do OO in languages with no OO support (or rather, you can hack something like it but it's not worth it)
OO languages have MORE features. That's why they're popular. And applications aren't paradigm pure. Most real world applications don't use just one paradigm, they use all of them as needed.
Calling a static function to make some calculations? Functional programming
Calling a function to change a state (say, saving something to a database)? Imperative programming
Using a subclass to extend the behaviour of your web service interceptor? Object oriented programming
Javascript was started at Netscape. Mozilla didn't exist yet.
Jeremiah Glover It was designed and implemented in 10 days. That's why it is such a piece of shit.
Michael Pohoreski from them till now it’s seen improvement. Just like all the modern languages
@@delavago5379 Polishing a _turd_ doesn't magically turn it into a _diamond._ JavaSchit is still shit -- it just smells less.
@M T Like Flash?
@@MichaelPohoreski Actually designed in about 22.5 years, for current JS.
C# was introduced in 2000, not 1995. Windows was primarily C++ and Visual Basic (Borland had a play in this, too).
Its a derivative of Java / J++, so that's probably why he used that date.
@@vast634at least according to Wikipedia the development of C# started in 1999 because the developers of the .NET framework had the feeling that they needed a new programming language. For some reason (maybe because Sun had sued them) at that point they decided to develop a new language instead of continuing J++.
I used COBOL for 30 years. Then needed to write some smaller simple stuff. Tried vb3 then 4 and 5 I was writing simple useful apps in minutes for customers who were delighted. I was destroying a team of writers with quick simple applications that weren't brilliantly written or superb disciplined written code. They just worked gave me living for another 10 years. Some cobol I wrote in 1984 is still running and some vbapps are still running 115 years later. Not because They are great but because they work and do a job in a simple and accurate way. Why there are no great visual languages now I do not know.
The problem with functional programming is that it's a paradigm that requires the ability to formulate intensional rules. Most people are not very good at this sort of analytical modelling, and they prefer a more explicit, procedural approach. Procedural (imperative) while being potentially verbose, is easier to decipher. If you go back to set theory, most students would find it easier to describe sets by a non-compact system of attributes, as opposed to the optimal, overarching construction rule. Functional programming buffs get all excited by how elegantly they can model their problem with lambda calculus, and we're all in admiration... but it requires a specially-wired brain and in the end it's not necessarily more practical. The proof is in the pudding: It costs nothing today to have a go at functional programming, but it's not very popular. And it's not because the job market requires Javascript of C# - the job market requires the most efficient tool for the money it spends. You may be super efficient in Haskell, but that is just you. And because the majority of engineers are more comfortable with an imperative paradigm, doesn't make them lesser engineers.
The whole point of functional is that it easier to understand and read because it is declaritive and modularized.
A pure function will always be easier to understand and test than a Mutable Object. Objects require set up and tear down to test, and since they are mutable they are harder to predict. Not only that but changing the order of operations on Object Method calls can lead to unexpected results because associativity is not respected.
Overall it is stupid to say that Objects make things easier to read. They add boilerplate, interdependencies, and make your code base tightly coupled and nightmare to read through.
Not only that, but once you are done with your complex UML Diagram, the only way for a person to mentally grasp the conplex class hierarchies, if your user requirements change, then you are fucked because OO Systems are fragile and are not Robust.
Not to mention all the Damn design patterns you must learn to compensate for the shitty OOP features. Sure they solve the problem, but they are over engineered and you have to study for years to do it right.
Prime example, Strategy pattern. That shit is Literally a Higher Order Function.
Which one is easier?
FP.
FP is superior. The only issue is Pure FP. Programs need side effects.
So The best Style uses heavy FP then some procedural logic for side effects.
OOP is trash as fuck.
I had difficulty reading lisp code when I first approached the language - Common Lisp and Emacs Lisp. In english we are taught to read from left to right, and then down. Reading lisp requires us to find the middle, or innermost, function and then read the code in an outward fashion - i.e read up, down, and outwards from the center until we reach the outermost containing or "top-level" functions. It's disorienting to the eyes to have to dance around the page in this manner. The silver-lining is that you CAN get used to it and it becomes natural after awhile, but you have to C-/ or at-least adjust your left-to-right-and-down approach to reading text
I noticed that ironically enough, this talk is just like functional programming: it's creative, insightful and seems to be very precise but it doesn't address any practical problem and it has reactionary logic, all about "what isn't" rather than "what is" or "what can be".
I'd argue numpy and jupyter notebooks have become killer apps for python.
The most popular Python projects are actually:
ML: TensorFlow, Keras, Scikit-learn
Web: Flask, Django
Utilities: Ansible, Requests, Scrapy, ... (among many others)
I would say Python has no single killer app, but a rather healthy amount of great projects in many areas.
Numpy? Maybe. Jupyter? Why?!
@@crides0 For the same reasons that Visicalc/Excel are killer apps: They allow people to create things that are easy to share with other people, and those other people can then tinker with, without needing a lot of prior expertise with the app.
My Python killer app is tkinter.
@@ernstraedecker6174tkinter got me into gui design. This is pythons greatest power. The default library out of the box gets you so far. You never really have to leave the language.
Don’t laugh about the Java smart card: Oracle works on getting Java onto embedded devices again with Graal.
Python is simply very readable and useful in a way which captured people, and people built good tools for it. And it has the zen of python: Just start python and import this (literally "import this"). That’s a focus it always kept. It kept its APIs usable. Though the original idea of "programmers need less freedom than lisp" turned out to be wrong, since Python now provides a lot of metaprogramming tools.
Objects and methods are most essential for IDE auto-completion - basically developer UX via "just put a dot after some symbol to see what you can do with it". I feel the lack of that everytime I program with Scheme.
You're right about auto completion. And this is something I value highly, some people don't seem to get what a savings it affords. All the same, I'm also a convert to FP thinking and Haskell-style composability. What's needed is here is a way to do the equivalent of "completion" using the argument signatures of the desired function. Much like Hoogle, but but preferably accessible via a few clicks from your favorite IDE...
What is point of running JRE on an embedded device with limited computing power? The whole idea of Java is that it is cross platform, does that mean my software can now run on both my fridge and my laptop?
@@douwehuysmans5959 The point is to be able to re-use your Java/other-language/libraries experience.
And yes, but it will only be good for your fridge if you start development with the fridge.
The point is similar to the point of using node.js serverside. A java-shop can then more easily get into building embedded devices.
@@EighthDayPerlman If you are a fan of Haskell and have been using Java I'd advise you to take a look at Frege.
Frege is an implementation of Haskell for the JVM. Basically it would allow for using FP style code with all the features of Haskell while maintaining compatibility to the "impure" java code.
@@douwehuysmans5959 Wrong the whole idea of Java was to create a language that uneducated and unmotivated cheap indian coders could use. Together with UML it was the dream that no money needs to be wasted on programmers and CEOs could make even more.
How about that procedural languages have a greater similarity to how thought processes seem to work and are thus more natural for people to use.
Not just that, but all hardware instruction set architectures are procedural. So when translating from how people think to how computers work, functional languages are a complete non-sequitur.
Maybe they have more similarity to *your* thought process. I think pure FP would appear more natural to someone with a stronger mathematical background.
@@altus3278 Good point. Nevertheless, since the core digital computer concept (von Neumann architecture) is procedural and also created by an extraordinary mathematician, pure FP might not necessarily appeal as an obvious first choice, even if one is one of the great mathematicians. However, someone like John McCarty is a great example (Lisp inventor).
I mean there is no denying that a language like C is closer to the hardware than any FP language can ever be. And that's because C is a very close representation of how hardware actually works. I remember Linus Torvalds once saying he loves C because when he reads C, he can deduce what actually happens on the hardware. You'll always need a translation layer that converts any FP language into a procedural one, simply because a computer is a state machine, and FP languages are not a state machine. Thus it's just easier to write a program in a procedural language like C and understand what will actually happen on the hardware (if you care about that).
I don't get this. I don't at all think procedural languages are closer to thought processes but maybe it's different from person to person. I can much easier keep track of objects, how they relate to eachother and what they can do in my head than a strict order of operations. The latter is what the computer does and my brain is not a computer.
20 minutes in and I don't have a clue what functional programming is yet. Got an history lesson in popularity.
Yes exactly!
To be fair that's the subject. It's not "functional programming is awesome", it's "why isn't it the norm". Most of that explanation is why OOP is the norm instead, which is a history lesson in popularity
I see functional programming as when your program is limited to a single line, a function call. And that function may call other functions, but is still limited to a single line, and that line is the return statement which includes the value returned by the function. There is of course no assignment statement as no values may be modified and there can be no side effects like input or output. In other words, no functional program can be of any value except to the theoretical mathematician.
This talk just proves that bad data yields bad conclusions...
Why isn't FP the norm? Because it's inefficient for most things. The goal of a programmer shouldn't merely be to minimize the time they take writing code, but also to minimize the time their code takes to run. I'm writing this in 2022 and nearly all software still has problems with the second part of that. Modern computers with 16 cores running at 4ghz with 32gb of ram shouldn't feel so slow to use and they definitely shouldn't max out their RAM usage. The first computer I used at home was garbage, 533mhz Celeron, 64mb of ram with a 10gb hard drive, upgraded to 256mb of ram and 30gb of storage, and you know what, it ran fairly fast. The only software it couldn't run well were most games, and it seems like that's still a big problem. The key issue with regards to software is that things need to mutate to be useful. Working around that requires writing horribly inefficient code. As far as I can tell, our biggest problem within the industry is that no one can agree on syntax. It's the main reason why developers seem to accept or reject a language. All the code written today could have been written in C. Whether it would look good is another matter, but it could be done and with intelligent designers could be done well. What we really need is not yet another toy language/library/(proof of concept), but rather an attempt to efficiently solve the problems of each given arena and no more. A single language within each arena that aims to be what it needs to be and only that. No kitchen sink, rather singular focus.
Kind of disagree. Optimizing things a bit with respect to memory consumption I can sort of agree with, but trying to optimize for CPU efficiency shouldn't be a big focus today. The use of libraries to handle boiler plate things means your average programmer shouldn't need to think as much about it as they did way back when. As for the games the issue now and then is largely the same. Our CPU's are getting more powerful, but the explosive gain in performance has come about mainly due to multi threading. If you are rendering or displaying something in real time you need to maintain control in your main loop and this is limited by single core performance. Hence it's an exception to the general rule where optimizing for CPU efficiency really matters. Kind of off topic, but for games one of the biggest drawbacks of the modern architecture is the CPU/GPU boundary. Sure new methods have come around to alleviate things a bit, but the CPU still needs to be involved on some level when loading data to the GPU memory. If they could allow the latter to read directly from disk and completely bypass the CPU then it would help a lot with memory consumption in games.
As for the kitchen sink v.s. a language in each arena I sort of disagree. Sure it would be nice to a have a universal FP approach, but the truth is adding FP style syntax to do data manipulation in an otherwise object oriented language is not a mistake. It can allow you to manipulate an object as if it were a basic data type using said FP syntax. Why would you not want that for the areas where this syntax is superior? The alternative of jumping into a completely different programming paradigm and converting the data structures you are working on to match it's expectations is a waste of resources. As for efficiency the compiler is handling all optimization either way and I'm sure it in some cases can be beneficial for the compiler to see objects be handled in this manner in terms of the output it produces as modern compilers in general are better at translating code to assembly than nearly all programmers out there.
@@aBoogivogi Interesting perspective, but I simply can't agree with the majority of it. The limits of processor technology were being hit two years ago when I wrote this and the situation is much the same, if not worse, today. There's simply too much data that needs to be processed for us to be lazy and just throw more hardware at the problem when there is no more hardware to throw. We need to come back around to writing better software. As for the functional paradigm, I don't see a single instance of it being the better solution than other paradigms, which include more than just FP and OOP. One thing that I think far too many people that are proponents of FP seem to forget is that what makes computing interesting is that state changes, and by far the best way of modeling that change in state is literally every other paradigm that's not FP. Avoiding the FP paradigm is both memory and processor efficient as well as easier for most humans to understand. The one thing I completely agree with is that compilers are far better than most programmers at optimizing code, and I'm not even restricting that to modern compilers, because 20 years ago they were better than most programmers of today.
This has "Am I out of touch? No, it's the children who are wrong." vibes
@@closure4791 yes
This guy is asking the wrong questions. The comments section of this video is more informative than the video itself.
Pure functional programming is awkward for managing ongoing, persistent functionality, such as running an app, or managing an interactive graphics canvas. The Model-View-Controller architecture (the dominant pattern) requires declarative commands to generate the View. That said, in a well-crafted app, much or most of the Controller logic is implemented in pure functional programming. I think it's a bit narrowminded to put the popular languages outside the "functional programming" paradigm, when these languages all support that style of coding wherever useful.
I could be misintepreting this comment, if so then sorry. But my understanding is that you're arguing that functional programming can exist in the more modern languages and MVC is similar to functional programming?
If so, I'd have to disagree personally. While there might be slight overlap in these methods, but we're talking about two very different places in programming. While you could in theory program in a functional programming style in the more modern languages, in virtually any case I've seen where a functional programmer picks up a language like Java or other OOP languages, it doesn't end well. Yes, I agree that functional programming has it's uses, but I wouldn't necessarily argue that a controller method in MVC would be, in any way, the same as FP, at the end of the day you're splitting up responsibilities and tasks into subtasks which are then divided over clearly defined classes / objects.
@@martijnp The controller part of a MVC architecture cannot strictly follow a FP paradigm because it straddles an ongoing process (User's intentions --> the Model). The Model is by definition stateful, and by extension, the User interacts thru the Controller to change the Model's state in a desired direction. FP is only relevant to stateless calculations, which play an important SUPPORT role in giving MVCs their complexity. In computation where no state has to be remembered, FP is the most elegant paradigm.
We're seeing more and more "plug and play" cloud-based services that operate as FP nodes... I used one recently to solve a 4th-order polynomial equation. The "web component" that does just the number-crunching is an edifice of FP. However, in order for a human to utilize it,
there has to be a browser UI (input devices and graphics output) that can remember things from minute to minute, i.e., UIs are by definition stateful.
Not sure about other areas, but in frontend web-development Flux is becoming a more favorable pattern than a good-old MVC, and it is totally based on FP principles: no side effects, no mutations. It is a more scalable approach as it ensures a unidirectional flow of state and separates actions from state selectors (which btw resembles a highly praised CQRS approach).
So I don't know why everyone's saying that FP isn't the norm - in frontend apps it kind of is, although of course it's mixed with OO-style in certain aspects where convenient.
Take a look at how the Phoenix web framework or LiveView (a stateful websocket connection between client and server) handles things with the MVC architecture in pure functional and immutable ways.
The "conn" connection gets passed around and things get added/removed (again, as new immutable copies). Elixir's (or Erlang's) way of working with this makes it perfect for millions of web sockets :)
@@andreiclinciudev Unless we're working with 2 different definitions of "function" (send input x, receive immediate result f(x) where there is no remembered state), adding and removing are not functions, they change a memory state.
"Performance is a secondary concern..." That is arguably true today, but that was very much not the case in the early 80s when I was first learning programming. Performance is one of the main reasons why C was so popular at that point; you could write in a high-level language but get performance that was very close to assembler (referring to PC programming here). Regarding OOP, when it was first coming out, many programmers believed that it would provide a better way to program. This was aided by the emergence of GUIs and the seemingly-natural fit of UI elements to objects in the programming language. OOP also provides a framework for reasoning about program structure. This way of thinking about the program is significantly different between OOP and functional. I believe that difference is one of the main reasons for the slow adoption of FP. It's a significant effort to change the way you think about programming from OOP to FP.
This presentation is bending history more than is the norm.
Bending? The liberties taken with history seem to go beyond bending, and may be better described as fucking. For a video about functional programming, it's awfully dysfunctional. And fp in assembly? Which machine instruction does not change cpu or memory state? Halt? Good luck writing fp in assembly then...
@@lhpl I suspect that most FP languages are written in C or C++, at least until they become powerful enough to be written in themselves, but at that point there is a good chance that they are no longer pure.
That graph clearly shows where the popularity of Python came from. It was at least partially due to Perl (and some other languages) programmers switching over, and adopting Python. It's a fairly simple transition. Perl still does what it was designed to do extremely well, but Python adds a set of very useful and convenient features. Especially for the kinds of people who were using Perl for things like bioinformatics.
And Ruby was better, regardless of Rails. But Python won. The reasons for Python's conquest deserve a whole lecture of their own. Much to learn about endorsement, by whom and why, etc., etc.
The correct answer is...Get them while they're newbies, and they're yours for a very long time
He basically did - he said they spent a fortune on marketing Java.
I started in C, switched to Java, they lost me to ECMAScript languages and Python. And I don't plan on sticking to any of those.
Just like lycanthropy.
@@wereNeverToBeSeenAgain - Had you started with QB or VB, your path may have been very different, though Python would have been a natural progression of that path
Yes until they finish university and have to do real programming.
I code functional when the task wants it. I code oop when the task wants it. So I code in C/C++ and both approaches work. A little function here, and class or two there - inheritance if it makes sense ~ a tweak of polymorphism, and then back to a hang-it-all-out in the open global function. It's my party and I'll decorate it how I like :)
Same
sounds more like you're describing procedural rather than functional
@@palpytine Sure, if all I do is return void. But IMHO modification of data through an address reference can still be considered functional. Eh, semantics. Sloppy code that works is for single-effort goals and lazy programmers too. I'm not suggesting this is a team-practice approach; a like-minded approach requires tight code for predictability and reliable expectations. An extension of Scotty's "right tool for the job" on the Enterprise. Unless chewing gum fixes the warp core at the last moment. Klingon bird of prey notwithstanding, I'll take on Romulans any day of the week. Cheers.
same
@@AtomkeySinclair What you describe is the very opposite of functional programming - which is all about immutable data and pure referentially transparent functions that always return the same output for the same input. If you're making changes via global pointers and returning void then you're doing anti-functional programming.
Interesting talk, but others have mentioned....
1. UIs - FP languages by their nature return values and thus are not naturally suited to UI refreshes and updates. This applies to both thick and web GUIs. We can build abstractions over the top....but eventually we need to update UI elements; and therefore side effects.
2. Performance - We programmed in Miranda and Haskell at Uni...performance was woeful. It's only because we've gotten a gazillion cores at high clock speeds, that we have the option to parallelize some (not all) operations .
3. OO - It's not just inheritance & modularity that are 1st-class constructs. We have types, interfaces, and understandable polymorphism.
4. Debugging & Rapid Development - VS and other IDEs where we have auto-complete, can step through code and inspect variables are a major factor for adoption.
1.) Haskell allows you to create embedded domain-specific languages like Lisp does, but with monads these can be statically well-typed. Creating UIs and games requires Functional Reactive Programming eDSLs like Reactive-Banana, which is roughly what the language Elm is based on.
2.) FP languages (OCaml, Haskell, Steel Bank Common Lisp) are actually really fast - faster than Ruby, PHP, Python, Perl and JavaScript, which are all popular.
3.) You have modularity and polymorphism in functional languages too
4.) Haskell (and other languages) have a more powerful form of auto-completion called holes/Hoogle which lets you autocomplete any piece of code. Functional languages all have good REPLs which can be integrated with the debugger, and you can see the intermediate values of everything if you write your code in a functional way.
1. Check out FRP streams, for example Scala's fs2. Side effects are unavoidable in any meaningful program (the only program with no side effects is immediately returning) so abstracting around them keeps referential transparency.
2. Check out what GRIN is doing
3. The reason I program in Scala is the power I have being able to use OOP principles _and_ FP constructs like typeclasses
4. This I understand and many people work on solving this problem. Unfortunately the concept of errors in FP is kind of weird (exceptions vs option vs either vs try type)
@@aoeu256 To reply to the message you sent me about garbage collectors, persistent data structures, etc.: first of all, you completely misunderstood my remark about C++, and I never talked about garbage collectors. Garbage collectors are fantastic and they are needed for many high-level languages. However, the argument "C++'s classes didn't make the language popular" is garbage, SINCE classes were badly implemented. Do your own research about multiple inheritance in C++ and the history of the "virtual" keyword before calling me ignorant. Furthermore, persistent data structures have been around since the 80s (if not before), in linear logic (1989, the first language implementing it) and way before that in the database realm (70s) and object-oriented ownership systems (80s-90s). Lenses are simply getters for Haskell, since Haskell poorly manages complex type definitions. Monads are only used by purely functional languages since THEY NEED THEM to do side effects, and object-oriented languages already had that way before Haskell with the "nullable type". Finally, the notion of a monad is an old mathematical construct, SO IT IS NOT NEW AT ALL. I don't know why you talk about FPGAs, garbage collectors, and reactive programming (which is essentially events, a.k.a. old); this is irrelevant to the conversation and the points I made. My take on this is that object orientation has a purpose in software development and cannot easily be replaced. Functional programming is great, but it has pros/cons just like OOP. Go read some research papers and you will see how old your brand-"new" stuff is.
@@aoeu256 Moreover, Haskell doesn't have polymorphism the way CLOS or Eiffel have it. Haskell has typeclasses, which are only used for STATIC OVERLOADING; they're really limited, since you cannot redefine a typeclass function inside another typeclass that "inherits" it, because it is not a "to be" relationship but a "to have" relationship.
@@Vsioul You can do all the dynamic stuff in a still type-safe, proven-at-compile-time way using existential types. But if you are used to Haskell's type system, you need existential types only in rare cases, compared to when you are used to thinking in an object-oriented type system.
What every speaker seems to miss when talking about C is embedded systems. Every small electronic device that isn't running an OS is programmed in either assembly or C, be it a toaster, a sensor in a car, or a washing machine.
C supports encapsulation via the "static" keyword. Every procedure marked with the static keyword is only visible within the current compilation unit.
So then C doesn't support modules larger than a single compilation unit?
I long for the day when science is used to promote programming languages instead of cult-like bias propaganda.
I am surprised there are no psychology studies into how humans program. It is always opinionated engineers with no cross-disciplinary work on the human brain who assert with confidence what is best.
I mean, good luck proving that a language is scientifically better than another one. It all comes down to personal preference and bias.
> there are no psychology studies into how humans program
There definitely are.
There are studies on that, and in all honesty, if it was scientific, we'd most likely have some weird cross-platform version of ASM.
One of my favorites is a study comparing reading speed of identifiers using underlines (or presumably dashes) vs mixed case.
Mob rule trumps science, alas.
www.cs.kent.edu/~jmaletic/papers/ICPC2010-CamelCaseUnderScoreClouds.pdf
Instead of Killer Apps you should rather talk about Killer FEATURES. Java had, and still has, plenty of them:
- Android - chosen language there, the most recent killer feature
- ability to run on multiple HW and OS - not a big winner for desktops, due to the MS monopoly, but it's quite a different situation on the server side
- good IDEs - NetBeans, Eclipse, JDeveloper, IntelliJ IDEA - and tooling from the beginning
- decent basic run-time library and loads of open source libraries, and well organized, thanks to Maven
- strong support for concurrent programming, with a multi-platform guarantees and enabling multiple styles - from primitives to Actor style frameworks
- external Monitoring via JMX - very important in enterprise, multi-node systems
- Applets - there were times when they were the only way to do certain things; JS was not powerful enough, and Flash came later and only for certain areas (media, games..)
- VM Tuning, very high VM configurability
- GC - they were not the first one, but I would say the first successful ones
- good build tools like Ant, Maven and now Gradle - many scoring "firsts" in their own areas
- JIT compilation, Reflection, fast compilation, runtime bytecode manipulation, good support for AOP, etc, etc,..
The popularity section of this talk ignores "ecosystem". Python, C#, and Java are popular because if you need a library for something, it probably already exists.
I would argue that one of, if not the biggest, factors in OO's success is its association with the real world. The vast majority of people don't naturally think in a functional way. We see the world as a collection of things, which have various attributes and actions they can perform. Our minds are object oriented. Designing a language to coincide with our natural thought processes made the world of programming far more accessible to a much larger population. OO, in a way, is a large reason why programming itself has become so widespread and accessible, so it makes sense to me that the top languages would be OO-centric.
The world we see has things(data) and we can do something to that thing(functions).
To couple them or not is the difference between OO and other programming styles, like FP or procedural.
OOP's objects aren't that similar to real-world objects though... OOP is very hierarchical, while the world is "relational", although JavaScript's prototypes are pretty good at simulating the world. I'd say that Prolog (miniKanren/Screamer in Lisp) is closest to English, in that you can take an English sentence, rearrange it and dump it in code.
Finally, the comment that I wanted to read! This is the exact reason why OOP exists: in real life, everything is an object, has properties and methods, and inherits from other objects.
@@DorganDash I completely agree. The difference is not in coupling them, it is which comes first, conceptually: object or action. Thinking "I have a thing, what is it doing?" is, for most people, much easier to grasp than "something is being done, what thing is it being done to?"
Don't get me wrong, I am not arguing that OO is better, just that it is more accessible. To be honest, I loathe pure OO. Pure functional is certainly better in many ways, and can represent certain real world concepts that OO can't touch. The best analog for the real world is a hybrid that allows for simulation of object-first and/or action-first concepts, depending on what is needed.
@@williamross6477 I agree, but my point is that both are valid and easy ways to model real world concepts. The difference is in the execution, and the tight coupling of data and behavior in OOP is one of the key elements that differentiates it from other styles in general (like procedural), not only from FP. I think imperative vs functional is a better comparison than OOP vs functional. Which one is better is a more complicated topic; they solve different problems and are suited for different needs. Telling a box to store things (the OOP way) and applying a fly action to a bird (the other ways) are both weird ways to model the world we live in.
Great talk! Richard is a natural born presenter and his slide deck is like a movie plot. Love it!
The idea that we are moving from object-oriented (OO) to pure functional programming (FP) is not convincing at all. Some people point out that it is possible to do polymorphism with functional programming, but it is not as convenient. That's an obvious feature that benefits a lot from being simple to express. And in fact, modularity kind of points towards objects, because we are talking of a surface, and if you have a surface, then you enclose something, and so you get an object. So I rather believe that the mix of OO and FP will remain. If something replaces both, that won't be either OO nor FP.
I totally agree. I've been mixing both in places where I can benefit from them. I find C++ to be the most flexible in these terms.
yeah OOP certainly is not going anywhere. But arguably, FP is on the rise again. Btw can you elaborate on your point that polymorphism is not as convenient in FP?
I expect that in the next few years, a coherent paradigm incorporating both OO and FP will be identified, with its own name and manifesto, and the apparent tension between the two will be put to rest.
Exactly. I think the future will be languages that are hybrid. It will never go fully functional. Functional programming has its advantages, but also very, very big disadvantages.
Well, I can't speak for others. But I am waiting for the day Haskell will rule them all.
I think the best approach is to have a little bit of both FP and OOP, though with concepts now in c++ I think FP will become a little bit more common.
Personally I consider free functions as general-purpose and best suited for general tasks, while member functions are specific to the object. Take for example a function "reserve": if it's a member of a container, it's pretty clear what it does; if it's a free function, then its purpose may change based on the parameter type, which muddies its meaning to me - or you make its name super long or abbreviated to hell.
I think the fact that the IDE helps you browse what the object can do, eg. What member functions it has, helps with the object's usage.
10:00 Minor nitpick: before making .NET and C#, Microsoft tried to build tools around a version of Java they called J++, but ended up getting sued. That's why they ended up making C#.
Oh... I remember JScript. How is it evolving? And Xenix (long) before that!! I programmed on Wang + MS Xenix.
Blessing in disguise, considering C# is now far superior to Java.
@@drewmandan Really?
@@richardikin yes really
@@richardikin C# is infinitely better than Java.
For the last 70 years computing has been dominated by the Von Neumann architecture. It's to be expected that programming languages would fit this model, as 99% of them do.
It's not the norm because it is stateless. The real world is stateful.
I started programming with APL on an IBM 370 in 1974. In graduate school, in 1983-1987, I programmed automated theorem provers in LISP. Working in the same area in my first job after graduate school, I programmed in Symbolics Common Loops, and then ML, the INRIA variant.
@synfiguring Yes, it's so cool that nobody has ever understood a LISP program not written by himself more than ten minutes ago. ;-)
Object oriented programming and functional programming are no contradiction. There are object oriented functional programming languages, e.g. OCaml and O'Haskell. OOP is a type system feature and FP is a control flow feature.
Nice words.
On 19:30 he mentions it.
Even more so when there's F#, I guess? (It's just OCaml, but with the ability to use C# libraries.) But personally I don't quite buy this kind of language, because languages like C# have developed to a state where they can do FP quite well, and thus I'd consider the use of F# in most cases to be more about eliminating brackets and dots than anything else, because real-life situations aren't really that "mathy".
Alternative title to this talk: Why is it only the C family of programming languages that is in the top 10?
C got more popular than Pascal in the early 90s, and it also edged out other competing languages like Fortran and Basic and Ada at about this critical time too. Other existing languages from the time had large speed disadvantages, so they couldn't become the dominant desktop application language in the way C did.
because C is really, really, really good if you know what you're doing. but some people were unhappy with the repetition, so they hopped onto object oriented.
And Python.
@@boptillyouflop I think the point is why procedural, C/Modula-style stack-based syntax, instead of Haskell, ML, Lisp, Forth, Mumps
@@eusebiusthunked5259 Multiple different reasons:
- Forth's stack based system is impressive, but most programmers prefer infix-based maths (a = b + c etc).
- Mumps is oriented around databases and doesn't seem to have branched out towards general purpose computing (which would have required adding static typing).
- Lisp had a similar disadvantage to Java (with the JVM) in requiring an engine and garbage collection, and having a hard time dealing with mixed Assembly language and low level hardware such as interrupts and timers. You could do that stuff with Pascal and C, and on DOS doing this was necessary to do a lot of stuff.
Probably the main thing holding back FP is the fact that MANY tasks are just easier to think about in an imperative way, and I think the ultimate admission of this fact is Lisp (my exposure is through emacs eLisp) which contains such functions as "set" and "progn".
"set" violates the FP immutability paradigm, as it changes the value of a variable.
"progn" is effectively a way to force imperative execution of sequential function calls.
Emacs eLisp functions are littered with side effects because it is used to drive an editor (and everything else in the Emacs operating system).
FP is useful sure, but outside of mathematically complex computation, the paradigm very much suffers the square peg -> round hole problem.
This guy is a really interesting lecturer. He took one of the most dry and boring topics and made it very engaging. No one is inherently interested in the history of programming languages, but the way he laid out the story, and the happenstances of chance and pure collateral benefits/damages, made the whole talk way more enthralling than the topic had any right to be.
Excellent talk. One small addition: Microsoft did build a fantastic IDE for Java, called it VJ++, but it was their own take on Java and soon fell out of favor; it lives on in some odd nooks and crannies as Visual J#.
You can write completely functional JavaScript, untyped, à la Lisp, though. You can also write procedural and "object oriented" code (JavaScript's object orientation deviates quite a lot from other examples); it all depends on the style you decide on.
Haskell, which I would consider the pure functional programming language, is just now gaining a lot of popularity (and tool improvement) because of Cardano (ADA) - at least I restarted learning Haskell.
The functional paradigm might be a lot harder to learn in the beginning (in comparison to OO, you need to think harder), but its benefits have been adopted by all new programming languages!
I’m learning Haskell too. It’s been great thus far 😊
Haskell is a goddam good language, hope that people take a look at it
It is not harder. You were just introduced to it later than the imperative ones. The other way around would be painful too. Even jumping from procedural to OO was hard at the time.
@@JanilGarciaJr If you already knew one fully functional programming language very well (e.g. Haskell), which modern imperative programming language with garbage collection do you think would be hard to learn? To me it appears that writing FP-style code in imperative programming languages wouldn't be that hard, because FP style is mostly about not using mutation and side effects - you're not required to do that in imperative languages either, except that some library code you would need to use may require such an interface. Sure, jumping from Haskell to C would be hard, because C doesn't support anything where you don't manually handle memory allocation and pointers yourself.
For me, the hard part is performance. Both in runtime (CPU and RAM usage) and developer productivity (for example, with pure FP you may not use some widely known algorithm that requires mutation and you have to invent a new alternative algorithm).
@@JanilGarciaJr I guess it depends on one's path, but at least in university, learning functional was only about one year after learning imperative, but what we learned was Miranda, which I still find a hell of a lot simpler to read and write than Haskell.
As someone who was programming through this period, the amount of hype for OO cannot be overstated. New languages adopted OO features, there were attempts to make OO databases, OO operating systems, and more. It really was a juggernaut. Even LISP was adopting OO principles (the Common Lisp Object System). I can see that looking back without that direct experience, it may seem different (especially as the primary sources are not online, as the Web hadn't yet taken off).