I agree with freedomgoddess. Procedural programming never left. A lot of the specialized coding for satellite/instrument control has always been done with procedural programming. Working for both DOD and then NASA, I have used and continue to use procedural programming. I also do OO programming in both C++ and Python when I think it is appropriate and will lead to more easily expandable systems or subsystems. However, I always start out thinking procedurally.
Go allows you to implement OO programs effectively and efficiently! You don't need classes and inheritance to do OO. If you have a method: a function attached to a data type, and a mechanism to put an interface in front of it, you have an object. I think what we've seen is the rise of hybrid languages providing easy access to different paradigms, rather than the procedural paradigm per se.
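A minimal sketch of that in Go (the Downloader and FtpClient names here are invented for illustration): a method is a function attached to a data type, and an interface in front of it gives you a polymorphic object.

    package main

    import "fmt"

    // Downloader is the interface placed in front of the data type.
    type Downloader interface {
        Fetch(path string) string
    }

    // FtpClient is a plain struct; no class declaration involved.
    type FtpClient struct {
        Host string
    }

    // Fetch is a function attached to FtpClient, i.e. a method.
    func (c FtpClient) Fetch(path string) string {
        return "ftp://" + c.Host + "/" + path
    }

    func main() {
        var d Downloader = FtpClient{Host: "example.com"}
        fmt.Println(d.Fetch("report.txt")) // prints ftp://example.com/report.txt
    }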
About his comment that he finds it hard to believe, or odd, that Alan Kay said it was possible to do OOP in LISP as early as the late 1960s: even before CLOS (the Common Lisp Object System, added in the 1980s), LISP had first-class functions and closures. So, as clunky as it might have been, yes, it was possible to do OOP, because you can encapsulate the environment and hide variables, or expose them with functions inside functions.
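The same closure trick carries over to any language with first-class functions. A rough Go sketch of the idea (names invented), with the hidden variable playing the role of a private field:

    package main

    import "fmt"

    // newCounter returns two "methods" that close over a hidden
    // variable; the count itself is unreachable from outside,
    // which is exactly the encapsulation being described.
    func newCounter() (increment func(), get func() int) {
        count := 0
        increment = func() { count++ }
        get = func() int { return count }
        return increment, get
    }

    func main() {
        inc, get := newCounter()
        inc()
        inc()
        fmt.Println(get()) // prints 2
    }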
I do inheritance by copying the source from some existing thing and pasting it into my new thing. Now my new thing has all the behaviour of the old thing. But it has zero dependency on the old thing. I can even delete the old thing and the new thing keeps working.
That works until you need to do a mass change. The code that's copied can morph, making mass changes harder. Copying code obviously isn't DRY. My point in disagreeing is to highlight that there is no right way. I've done exactly what you mentioned many times, and I've used inheritance and utility classes. It just depends on the situation. We as a collective need to stop looking at styles and languages as absolutes and do what makes sense, which is what's easiest and meets requirements.
@@johnlehew8192 Well, I made that post with half my tongue in my cheek. I was sort of bashing on people who inherit from something as a way of making a slightly different version of the something's code, overriding this and that, without any particular rhyme or reason. Ending up with code that has dependencies on whatever it inherited for no useful reason. Just the same, but different in many odd ways. More philosophically... Apparently I, as a human, have inherited properties from my mother and father, and their mothers and fathers etc, etc. All done through copy and pasting of DNA with some mutation thrown in. BUT still my existence does not depend on the existence of my parents or grandparents, long gone, their DNA deleted. Which is a good thing, for me at least :) Conversely, inheritance in C++ and other languages creates a web of dependencies, which at least I find difficult to deal with. Making changes to it can be as hard as those "mass changes" to copy/pasted code you mentioned. All in all I agree. Use whatever style/paradigm does the job. Don't get fixated on OOP, Functional, DRY, SOLID, whatever. I sometimes get the feeling those catch phrases are just dreamed up by self-proclaimed software engineering faith healers to sell their books, training courses and conference speaking. Promising snake oil, at a price, to magically cure all your software production problems.
I'd argue Rust is, in a loose sense, object oriented too, since it supports structures with encapsulation and methods, plus abstraction and polymorphism through traits (equivalent to interfaces in other typical OO languages). It's much more limited compared to what you'd find in languages such as C++, though.
After all the functional talks that Richard Feldman has given, I was surprised to see him give a talk with this title. The talk is not about Feldman moving from FP to PP, which would have been controversial for me; instead it was nice to hear him give a talk about all the different programming paradigms. A few nitpicks: Brendan Eich was working at Netscape and not Mozilla at the time. And Richard, you can use the function-as-property syntax from ES2015, even if you use a JS logo from the 90's ;)
I started programming in 1974. I realized very early (late 80's) that OO was more difficult to teach, and far less productive, especially for average programmers. It is also far more difficult to analyse and debug. Procedural programming needs one thing to render it truly useful: an integrated memory database. Relatively easy to do, this provides procedural programming with all of the (tentative) advantages of OO, due to classes / objects being able to retain complex data at run-time, and doesn't add any of the baggage. I've been managing / writing systems in procedural programming with an integrated memory database for 2 decades now, and I am quite sure that in terms of the programming paradigms available today it is the best trade-off. AI might result in new trends, let's see.
@@MrHopp24 I guess you could use a very basic system like that, but for complex systems you actually need a relational database, even if only with minimal functionality. You do not need sql (at all), just row level access to data, and perhaps some kind of indexing ability (often not required). Essentially, the data concept behind class and object with relations is a very good idea and almost indispensable for complex systems, but it is (in my opinion) a bad idea to couple it with the programming language. Procedural access to a minimalist memory database is all you need. I have managed and programmed highly complex systems (over a million lines of source code) over the last 20 years, and the resulting programs are simple to code, understand, and debug. No hidden nothing, everything is apparent. Where required (and this is often) the entire memory database can be dumped onto disk to store state and essential data, and reloaded at startup. I am not so sure this is easy with class and object.
@@rs8197-dms I recently had to implement and debug a rather obscure topology decoding algorithm. Being able to just dump the program state to disk at every step and analyse the resulting data flow in a spreadsheet was key to getting it right. I've worked with IBM mainframes before, and working with count-key-data (CKD) formatted storage was the most pleasant programming experience I've had in years. OOP too often leads to convoluted and deeply hierarchical data (mostly by accident) that's hard to parse and reason about. I too moved away from that many years ago.
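A minimal sketch of what such an integrated memory database could look like, in Go (the Row layout and file name are assumptions for illustration, not either poster's actual system): row-level access to an in-memory table, with the whole state dumped to disk and reloaded at startup.

    package main

    import (
        "encoding/gob"
        "fmt"
        "os"
    )

    // Row is one record; the whole "database" is a map in memory.
    type Row struct {
        ID   int
        Name string
    }

    var table = map[int]Row{} // row-level access, no SQL anywhere

    // dump writes the entire in-memory state to disk.
    func dump(path string) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        return gob.NewEncoder(f).Encode(table)
    }

    // load restores the state, e.g. at startup.
    func load(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        return gob.NewDecoder(f).Decode(&table)
    }

    func main() {
        table[1] = Row{ID: 1, Name: "example"}
        if err := dump("state.gob"); err != nil {
            fmt.Println(err)
            return
        }
        table = map[int]Row{} // simulate a restart
        if err := load("state.gob"); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(table[1].Name) // prints example
    }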
13:26 late binding vs static type checking: Weirdly enough, I agree with both and think they should coexist. I want to be able to own and customize the final app, while preserving its invariants. E.g. if I want the send button on this comment to be a weather-dependent animal, I should be able to do that, then run the app's validation/invariant checks to make sure I didn't break it before hitting save. Javascript & HTML come pretty close to that idea, but fail gloriously on invariant checks and understandability (everything is minified with 1k deps), and are very limited (to the browser). Some nix-like rollbacks would be cool for hot-swapping.
You're saying how Python was supposedly influenced by Simula, but original Python didn't have any class support that I know of, unless I'm mistaken? And I think it got added kind of as an afterthought (which is why it's kind of clunky), but for those knowing Python's history well, let me know if I'm wrong.
My pet peeve is that C, which was so popular and influential, was written by people who didn't know where to put the braces, and as a result hardly anybody, in most languages, does it right. That includes Mr. Feldman, in his examples here, despite his obvious great knowledge about programming languages. I adhered to Dijkstra's rule*, and since I am now long retired I no longer have to fret about it. *A Method of Programming
I came from the pre-procedural time. There was a reason why OOP was popular. The felt freedom in procedural languages comes with a price: it's so easy to make a mess, especially when multiple people work on the same stuff for years. It feels like work when fixing code. OOP gave structure and localized the issues. But I always tried to look past the hypes and used common sense. I guess that's what is happening now. But don't think that procedural coding is just heaven either!
What Alan Kay meant by Lisp is more like Interlisp LOOPS, MIT Flavors, the Common Lisp Object System and other Smalltalk-like Lisp-based environments. Lisp has a rich history.
I think that Kay just meant what he said - that it was possible to implement object systems in Lisp and Smalltalk themselves. CLOS was only adopted to provide a standardized way of doing OOP in Lisp. I don't know why that fairly obvious point baffles Feldman, unless he's being deliberately obtuse.
Unfortunately OOP is often taught and understood as `class Dog extends Animal`, which is the worst way to explain OOP and OOM (right up there with the other extreme, IEnterpriseAbstractFactoryProvider). Marrying functional and object-oriented as well as compositional patterns is the way to go. Use the right tool for the job. I don't like JAI, and neither ODIN. They lack expressiveness. And I come from an Assembly and C background (then C++/Java/ObjC, then Python, and currently mostly C# and a bit of TS). If I want a good alt-C, I use Zig. If I need Cpp interop, I use Nim. Rust is absolutely an OO language. It has traits and methods that accompany and operate on the data types they are declared in.
@@andrewf8366 I'd say OO is when objects themselves do things. Procedural is functions changing objects that just store data. Functional is functions returning new data based just on inputs. The best way to go IMO is a mix of all 3 - OO gives you really nice abstractions with interfaces, procedural is great for IO, functional is great for business logic - extremely easy to unit test due to pure functions. That's why I really enjoy C#, it's great at all 3 (functional is getting better :))
I'm curious about your thoughts on disliking Odin. I spent a decade with C#, and over the past year, I've been exploring all of the new C-likes. Out of all of the ones I've tried, Odin has stuck. There are definitely features that I would love from Zig / C3, but overall, Odin is robust enough. I've found it to be the easiest to translate thought to code. To be honest, I would use plain C if it wasn't for Windows. Linux made it so much simpler. Mostly due to lack of effort to learn...
OO is just fancy message passing with lots of helper stuff so you don't need to manually check what the message is all the time. It's nice in some instances but it's a bit broken in others. I think it can lead to people getting confused. But what do I know?
20:00 I don't think the industry actually moved away from messaging and late binding, only from the specific implementations, due to the pursuit of performance. The static type checking boom came only when type checking had advanced enough to check late binding (generic types, trait constraints, gradual typing) and messaging (borrow checker), so it is not actually against those ideas. Not to mention the microservices paradigm is a system-level realization of messaging and late binding, along with renewed persistent interest in Erlang's BEAM.
Thank you. IMHO OOP like Java requires you to write more code: before solving the problem you have to think about abstractions, best practices, etc... Functions are abstractions too, but thinner and more direct. Using interfaces and other indirect abstractions may work for projects or application layers that might change over time, like DB access or Auth, but not all projects are subject to those changes. Overall, it's subjective, but simplicity plays a huge role in choosing a language over another.
In addition to simplicity, the developer must understand what is happening. OOP is ideal in this regard. As for abstractions, assembler is also an abstraction, use it. Machine code is also an abstraction, you can use it, there is simply nothing thinner. I don't understand why people don't like the obvious layout of components in Java. Functionalism is all the same, only it ties your hands much more. OOP is literally optimal for everyone. But the functionalists point-blank do not notice the obvious.
@MrChelovek68 I think it depends on the developer's way of thinking. I started with procedural programming in Python, C, then PHP. It feels like those languages formatted my brain into a procedural way of thinking and writing code. So creating classes feels like translating my thoughts into another language. I can still see the benefits of OOP though.
@@icantchosemyname It's funny, I studied Pascal, then C-Sharp, and now C. But I have not seen anything more convenient and intuitive than OOP. In fact, OOP in PHP is well documented. As for the formatting, I strongly agree) My comment is just about "why OOP is used", and everything else is dreams and shadows. I am familiar with higher mathematics, but when functional programming begins, for example, my brain rejects it as something alien. I don't know why, but it's counterintuitive. Purely for my taste, OOP, and in particular languages like Java or C-Sharp, is ideal, because they do not adjust a person's thinking to a machine but, on the contrary, allow anyone to translate a thought into code. At the same time, I dearly love both C and Pascal for their complete freedom of action.
Even within C++, I find myself reaching less for OOP and more for procedural or functional solutions to my problems. So, it's not always accurate to assume that C++ == OOP
You could say the same about PHP, and probably all multiparadigm languages that allow both OOP and procedural. Given the general idea of the talk, I'd say it's fine; there are wide brushes over everything in the talk.
I think the most important factor in this equation is the philosophy of the ecosystem. You might be writing without classes at the application level, however most of the libraries you are using are most probably relying on classes anyway.
Procedural falls apart when you have large sets of structs (like nodes in a syntax tree) and you need to call functions which each type implements differently. Having overriding to build dispatch tables is way easier than function pointers and switching on type IDs.
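Both styles side by side in Go, on a toy expression tree (the node types are invented for illustration): interface dispatch, where each type carries its own implementation, versus the procedural type switch.

    package main

    import "fmt"

    // Dispatch via interface: each node type implements Eval itself.
    type Expr interface {
        Eval() int
    }

    type Num struct{ Value int }
    type Add struct{ Left, Right Expr }

    func (n Num) Eval() int { return n.Value }
    func (a Add) Eval() int { return a.Left.Eval() + a.Right.Eval() }

    // The procedural alternative: switch on the concrete type.
    func eval(e Expr) int {
        switch v := e.(type) {
        case Num:
            return v.Value
        case Add:
            return eval(v.Left) + eval(v.Right)
        default:
            panic("unknown node type")
        }
    }

    func main() {
        tree := Add{Num{1}, Add{Num{2}, Num{3}}}
        fmt.Println(tree.Eval(), eval(tree)) // prints 6 6
    }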
Procedural is a technique and has its limitations. OOP is also a technique. You just have to use the right tool for the problem. 90+% of problems can be solved procedurally. When needed, you introduce OOP or functional; why not calls to quantum in the future? The problem with OOP and its best practices is that now the push is "We have a hammer, and everything you have to do has to be done with the hammer, as this is the best practice and the only professional way." What happens when you need a small screwdriver? Well, we all know it. The stats for a very long time have been that around 90% of projects fail before the first production release, and things are getting worse.
I agree, I would rather have a large OOP code base than a large procedural code base. With microservices, code is smaller and procedural makes more sense. I think reducing code base size is driving us back to procedures.
Yeah, I think message passing, encapsulation, late binding, all of that just moved a level up with microservices due to the scale. And then services themselves don't need so much code, so a less hierarchical procedural style came back. If anything, the ideas of OOP just scaled up out of the single node rather than becoming less popular.
Even though I do object-oriented programming, I've really grown to hate "inheritance hell", when there are long chains of A inherits from B which inherits from C. Maybe this is addressed in the video (I haven't watched the whole thing yet) but I assume part of the shift away from OO is people getting fed up with inheritance hell. Personally, I think I might like a language that still has objects/classes, but no inheritance. EDIT: now that I'm halfway through, yep this is where he talks about that exact issue. Specifically, he talks about composition being preferred over inheritance. He also describes classes without inheritance as being basically nested structs, which huh I never thought of that.
Then what happens if you have some class and you want to add an extra field and some methods? Now you have to pack it into the same class and mix it all together, which is not great either. The bigger culprit tends to be bad APIs that weren't well designed or grew out of control over time.
Me personally, I've been impressed by most library devs and what they've done with their APIs. So I would be curious to see some examples of libs with "inheritance hell". My impression is that "inheritance hell" comes from "the business layer", as a result of devs always being on a time crunch, which leads to them just throwing shit together (it probably wasn't the best decision, but since they had a deadline, they just "went with it").
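On the "classes without inheritance are basically nested structs" point above: Go's struct embedding makes that idea literal. A small sketch with invented types; the embedded value's methods are promoted, but there is no hierarchy, only containment.

    package main

    import "fmt"

    // Logger is a reusable piece of behaviour.
    type Logger struct {
        Prefix string
    }

    func (l Logger) Log(msg string) {
        fmt.Println(l.Prefix + ": " + msg)
    }

    // Server composes a Logger instead of inheriting from one.
    // Logger's methods are promoted onto Server, but Server is
    // literally just a struct nested inside another struct.
    type Server struct {
        Logger
        Addr string
    }

    func main() {
        s := Server{Logger: Logger{Prefix: "srv"}, Addr: ":8080"}
        s.Log("listening on " + s.Addr) // prints srv: listening on :8080
    }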
I wonder if the rise of gRPC, microservice system architecture, worker agents and orchestration messaging systems has something to do with procedural programming being seen more. Workers are getting something like a RabbitMQ message or an RPC call to 'do your task with this data'.
I remember mentioning procedural in an interview decades ago; it ended abruptly and I was walked out the door. My, how things have changed. OOP is slow, and this confirms my saying... speed always wins.
This is the most flat-earth talk I have ever watched: simple examples and preaching instead of actual real-world enterprise-class challenges. I come from a very strong procedural programming background, and I enjoyed using it within the right domains; when I learned OOP, it addressed many of the shortcomings I had with designing, changing, maintaining and understanding procedural code. I am not saying there won't be shortcomings, codebase rot and chaos in OOP solutions, but it at least gives you a chance to write some code with engineering principles in mind. Good luck writing remotely quality, solid code for systems that have more than 2 screens. It will start deteriorating the second you need to add a second method to FTP from another endpoint, or the next time the solution requires a different protocol. It will force more method duplication, tight coupling and side effects. Worried that changing a method in a subclass will break your system? Try changing an if statement in a procedural module.
Yeah, I agree. The message is clear, and I can understand the explanations of the style differences and certain advantages, but the examples really let it down. Exactly as you said: how will this support low coupling and extensibility?
Too bad the examples were too sandbox-like, like the example with FtpDownloader, PatentJob and Config. Clearly, the procedural version raises a few concerns: How to inject dependencies? How to swap the implementations with test stubs for test isolation?
This is a misunderstanding of what object oriented programming is. Even this guy, who has read a lot about programming, misunderstands Lisp like most people do. Lisp has always been a multiparadigm programming language; it never described itself as "functional". Functional was just one of the tool belts that could be used in Lisp, but when SML was made all the FP community left Lisp, and it was not until Rich Hickey that there was a new Lisp focusing for the very first time on FP. If you want to think of a main programming paradigm for Lisp, then that would be symbolic programming, not functional and not object oriented, and CLOS is a symbolic programming approach to object orientation.
The only thing useful about OOP is encapsulation. The ability to statically guarantee that invariants are maintained by limiting access to values to only certain procedures.
No, polymorphism is the most valuable aspect of OOP, imo. Although I am seeing a lot of people in the comments saying it's the ADT for code completion (via the dot operator).
IMHO, C++ is used in big programming such as networked cloud services, while C is used where fast, time-critical efficiency matters, such as real-time programming. They are not substitutes and do not compete.
We could always go back to flowcharts and Assembly Language for concise code. You can even write self-modifying code. Who needs a typeless interpreted script that dares to call itself a programming language?
The beginner needs it! I'm strongly in favor of flowcharting, and Assembler, but only for a limited problem domain. It is simply a terrible waste of human lifespan to code in assembler for most things.
@JimLecka A beginner to a language will often make assumptions based upon their experiences with other languages that may have unintended consequences. For example a C programmer may have difficulty with Python, especially for if cases.
@@99bobcain Amongst other activities, I have taught introductory programming to completely raw beginners, at the rate of 500-1000 people per semester. The very first thing is to give them something simple to copy and type in, like "hello world". About 10% fail and drop at this point. Then show them how to change "hello world" to something else, like their name. Success at this point is their 1st positive feedback. Then gradually more concepts, some history, and learning by doing simple exercises. It is a long way down the trail to get to concepts like actual bit representations: I am happy if they get to use one (1) numeric type [best a default float] and simple character strings, with some control logic. The idea is to get them up to the point where we can introduce them to a real programming language in the next semester.
Procedural programming never went away. Of course, certain trends and fads appear in the industry, and old ones sometimes come back, but advanced and experienced programmers use the tools and paradigms that are best for a specific task, whether it is OOP, procedural programming, functional programming, or something else.. Poor programmers write poor code regardless of the programming paradigm.
I'm using OOP languages but these days I use classes mainly in two ways: 1) as algebraic data types and 2) as actions, i.e. I name the class after a verb, give it an apply() method. So it's basically just a function that I can call and pass around, with the added benefit that it can use private methods to structure the code more. Sometimes these functions with a single responsibility can get over 100 lines long, but it's nice to spread it out over 3-4 methods for some internal clarity. Other than that, I avoid most things associated with OOP, especially inheritance of behavior.
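A rough Go rendering of that second pattern (all names invented): a verb-named type with a single Apply entry point, using unexported helpers where a class would use private methods.

    package main

    import (
        "fmt"
        "strings"
    )

    // NormalizeName is an "action" type: named after a verb, with
    // one public entry point and unexported helpers for structure.
    type NormalizeName struct{}

    func (NormalizeName) Apply(raw string) string {
        return capitalize(trim(raw))
    }

    func trim(s string) string { return strings.TrimSpace(s) }

    func capitalize(s string) string {
        if s == "" {
            return s
        }
        return strings.ToUpper(s[:1]) + s[1:]
    }

    func main() {
        fmt.Println(NormalizeName{}.Apply("  ada lovelace ")) // prints Ada lovelace
    }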
I program in a way that I call “Context Flow.” I rely heavily on modularity and global data. Functions have zero, one, or two arguments (for k-v pairs), and I never pass context. I never use classes. It looks very much like turtle commands. Systems are highly stateful. Looping structures have special support code. Context is king. Parallelism is achieved with multiple processes, threads are used only for very specific purposes and communicate only by way of queues.
I've seen a lot of code written like that. It's usually not easy to modify. Which really is the most important quality a piece of code should have in my opinion.
@@JanVerny I've been programming for 40 years. I have yet to see code that is easy to modify. I have met a plethora of people who SAY they are writing code that is easy to modify. But I've never actually seen that code. When people use "tightly controlled access" to make it "easy to modify," what happens is that you need to modify ten files to get access to variable X from point A to point J. How's that for "easy to modify"..? Alternatively, people make it so that object A has access to object B has access to object C has access to... ...but now you've both: (a) already effectively made everything globally accessible, so no win there, and (b) just made it so you have to walk through ten layers of indirection to get there. No real benefit. People talk a lot about "if you program this way, you won't be able to understand the code," or "it'll be unsafe," but what I really observe is that everything comes down to whether you have an organized code base or not. If you have an organized code base, and consistent principles that you follow, then global variables are no more a problem than a global space of IP addresses. The argument against global variables reminds me of the pearl clutching of programmers who claim that variable names need to be as long and descriptive as possible, otherwise "I can't possibly understand what the variable does." My response is, "Yes you can, yes you do, shut up and stop being such a baby." It doesn't take any effort to pretend to not be able to understand what a variable "i" does in a loop iteration. But people will swear by it. The emperor has no clothes.
@@LionKimbro I don't have as many years of experience, but I have worked in codebases that were relatively easy to modify. Hell, over the years I've refactored a lot of my code, and sometimes it was a huge pain, and sometimes, even coming back to it after a year, it was very easy.
@@LionKimbro Though, to give you some credit, I do think a lot of programmers talk negatively about certain practices to disguise their inability to read code and think about a problem. Any sort of dogma always leads to poor results. That's also why I haven't said that your approach is wrong or that it always leads to bad results.
"OOP" languages, in the sense of Java etc (not in the sense of Smalltalk), sold themselves on: encapsulation polymorphism abstraction inheritance Of those, the actual reason OOP languages were adopted was just encapsulation. All the things OOP brought to the table for the other three points were net negative. Modern languages now have very good module systems which covers encapsulation.
Nowadays the feeling I have is that Java developers and frameworks tend to use less and less inheritance. What I hate about Java programming is not actually the OO/DesignPatterns abuse (which is becoming rarer nowadays) but the abuse of Annotations and reflection, which brings implicit magic to your program and totally hides control flow, so frameworks become Documentation/CopyPaste/GuessTryRepeat Oriented Programming. This is the annoying part.
Extremely well said.
@@MarioMeyrelles One thing I really didn't like was Spring. Too much magic for my taste.
Where do I put the breakpoint or print statement when using annotations and reflection? If code doesn't run procedurally line by line, then I'm just too dumb to debug it.
composition for the win
@@mortensimonsen1645 if only it was magic. It's just shit
My take: OOP is the skeuomorphism of programming. It was very useful for helping programmers conceptualize what was happening in their heads with the metaphor of objects interacting. But now, experienced programmers are realizing that code can be much simpler, do the same work, and be more aligned with the reality of the hardware with procedural programming.
No, OOP is still best for stuff you need to modify and maintain over a long period with multiple programmers. It's really more of a defence mechanism against the things that can go wrong. On the other hand, in systems programming (including cloud systems etc), building general environments rather than specific apps - anything that's not going to change much, and that's probably going to need complete rewriting when it does change - procedural is all you need. And it's better because you are closer to what's actually happening under the hood. You need OOP for apps and interfaces, but for systems and environments, procedural is better.
@@bozdowleder2303 What is it about OOP that enables this, in your opinion? What kind of OOP are you talking about? My experience is exactly contrary to that. Java/C++ etc. style OO programs have a lot of state that is hidden but gets modified, and that state affects their behaviour. This means that a method call that gives no outward hints can change the state and cause some subsequent method call to behave in unexpected ways. Another problem related to this is code reuse, where the problem is again that methods do not act independently but most of the time depend on the state of the object. In both these cases the state may not be simple data types but other objects, which tends to multiply this effect.
If you're talking about Erlang-style OOP, which is pretty close to the original meaning of object oriented... well, that's an entirely different kettle of fish, and I would agree that there are aspects that help build reliable programs.
@@bozdowleder2303 OOP is really the simplest way for anyone to understand program logic, because we think in objects.
@@MrChelovek68 In interface design oop is more intuitive. For example it's hard to imagine a procedural version of CSS. But otherwise it's more about protecting you from the bad things that can happen when the same code base is maintained and modified by multiple programmers.
@@bozdowleder2303 This is really the most important point. OOP is for the type of software where developers don't control the flow; the user does. Objects "do what they do", it's a skeuomorphic model of the world, and the user pushes the bits around. It's perfect for UIs and websites. If it's a database, the user asks questions, defines filters, and the OO software gives answers, presented as the user wants them. The quid pro quo is that there is simply no control flow to debug. The answer to "what should it do end-to-end in this case" is "meh, I don't know, let's do it and see; as long as the bits meet their API functionality, that's all I can ask".
Whereas non-user-facing software, the software has a defined flow at design-time. If you want to model the weather it takes an input file of observations, it runs some physics equations, and produces an output file. Or the embedded controller for a jet engine. I don’t want the jet engine to be one of a Class of engines, there’s just one and it better not explode. This should be procedural. Anything else is just un-debuggable madness, because even the developer doesn’t have a clue what it’s supposed to do in particular circumstances, as that has all been abstracted away.
If you want to watch somebody squirm, try getting one of those OOP functional people, give them a stack-frame, and ask them what went wrong. They can’t. They can’t even understand the question.
it never left. praise procedural.
What if you could declare procedures inside structures, and the compiler would then add an implicit first argument named e.g. "self"? That will probably never catch on.
Hallelujah! Brother.
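The joke being, of course, that this is more or less how methods already work. Go makes the "self" argument an explicit receiver, and a method expression even lets you call it as a plain function:

    package main

    import "fmt"

    type Point struct{ X, Y int }

    // A method is just a function whose first argument is the receiver.
    func (p Point) Sum() int { return p.X + p.Y }

    func main() {
        p := Point{3, 4}
        fmt.Println(p.Sum())      // method syntax
        fmt.Println(Point.Sum(p)) // same function, receiver passed explicitly
    }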
I thought a reference to Niklaus Wirth's 1976 book, Algorithms + Data Structures = Programs, would be good.
These programming videos about paradigms and history are very inspiring. And helps me be a better programmer.
I normally watch videos on 1.3x speed, but Richard Feldman saves me from having to do that! More content in less time 💌
I watched it 1.5x but definitelly felt like 1.75 :D
watched at 2x speed lol
Richard Feldman has a built-in 2X speed speaker
I always watch at 2x, I respect my own time.
0.9 was more than enough for me, so I know now that I will not attend a live presentation done by him.
Always a good time when there's a talk from Richard Feldman.
You forget about one very important aspect of Object-Oriented Programming: the ergonomics of discovering what can be done with an ADT by simply hitting the ".". That is why Object-Oriented is so popular, especially for creating programming APIs, and why strong typing is now prevalent.
in languages that you know/use .... in languages that are actually important - COBOL and RPG - these things aren't optional. And OO is a non-issue, it attempts to solve problems we don't actually have on real operating systems.
that isn't an oop thing, really. it's more of a tooling thing. go and rust tools, for example, also have that.
As someone who learnt Procedural programming with Pascal and Ansi C, OOP always seemed weird and more complicated to me. Not saying it's not useful in some cases, to me it just seems overcomplicated when Procedural Programming can do the job just fine. K.I.S.S.
It looks like you are a classmate from the Fing .. Pascal, C, green screens, modems, Analysis I, Algebra .. etc.
I also never really grasped the point of OO in reality. Harder to understand, harder to read, achieves the same thing.
Yes.
It's always about understanding where to use what rather than what is good and what is bad.
No. Inheritance breaks type inference due to variance. It's provably shit.
@i-am-the-slime No, it's just a tool. Sometimes, it is useful, though not often. It doesn't break anything if it's used appropriately.
@@toby9999 It's completely true. At the same time, the features classes provide could be delivered in a different way (think of how Go and Rust deliver these features without classes). In OOP languages, sadly, since the class is the main abstraction, you are limited to this ideology of function and state being tied to a module.
@@toby9999Guns don't kill people type of argument
Most if not all really complex systems would be impossible without OOP, because it provides encapsulation: the ability to restrict access to a bundle of data to a small number of well defined operations, and enforce invariants.
The hardware doesn’t care: OOP is there to keep programmers honest. After compilation, of course, what you have is just procedural code.
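A small Go sketch of that point (the account type and its invariant are invented for illustration): here encapsulation is per package rather than per class, and the unexported field can only be changed through operations that preserve the invariant.

    package account

    import "errors"

    // Account's balance field is unexported; code outside this
    // package cannot modify it directly, so the non-negative
    // invariant below always holds.
    type Account struct {
        balance int
    }

    // Withdraw is one of the few well-defined operations on the data.
    func (a *Account) Withdraw(amount int) error {
        if amount < 0 || amount > a.balance {
            return errors.New("withdrawal would violate the invariant")
        }
        a.balance -= amount
        return nil
    }

    func (a *Account) Deposit(amount int) error {
        if amount < 0 {
            return errors.New("negative deposit")
        }
        a.balance += amount
        return nil
    }

    func (a *Account) Balance() int { return a.balance }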
Not mentioning Pascal when talking about procedural programming is a war crime.
Pascal is great, but begin/end killed it. Also I would say Java is much closer to Turbo Pascal than to C++.
@@andreydonkrot I've never Pascalled but really like begin/end in Julia, it's very clear
@@andreydonkrot - "begin/end" just came over from Algol 60, the parent of "Algol-like" languages. I thought that was the right way to do it. C changed the notation, and it was so influential that almost every language followed suit. But because the C developers didn't know where to put the braces, now hardly anybody knows, to the detriment of clarity in programming.
Edit: Actually, it started with "B", the predecessor of "C", or perhaps with BCPL.
Edit: I just looked up some BCPL code from Martin Richards's website (the creator of BCPL). He did, indeed, introduce the braces, and he uses them in Algol style, i.e. according to Dijkstra's rule as set forth in "A Method of Programming." So he is not at fault, as I expected.
After 20 years, in 2044,
OOP is back!!
If you set the speed of the video to 0.75 you'll have normal speed.
I just tossed all the OOP books I bought at Borders in the 1990s. My relieved bookshelves thank you!
Great talk. OOP always seemed dodgy, particularly w.r.t. maintenance. Modules (Pascal had them) were the crux of modern programming. Now we have AI for code generation and better compilers fixing a lot of the old problems, and it's back to 'C'.
If you look at modern frameworks like Laravel that use tons of classes, you'll find that in virtually every case they're using a single instance for every class. Zend Framework/Laminas is even more extreme, as you don't ever instantiate a class; instead you're forced to use a "factory" to get a reference to the one singleton instance.
That is not OOP, that is procedural programming with objects. If you don't ever use more than one object for each class, you don't need objects. Your class is just a package with package variables.
The more extreme part about Laravel and Symfony is that you get methods such as HTML::escape loading something like 20 classes on startup of the program, just to call htmlspecialchars inside the method. After that, people wonder why a small login screen takes 500MB of RAM and is a 100MB+ project, instead of using 100K of RAM for a 100K project, assets and JS included.
Factories are my 27B/6…
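The "your class is just a package with package variables" point above, spelled out in Go (illustrative names): the procedural equivalent of a one-instance class is simply package state plus functions, with no factory or singleton ceremony.

    package config

    // Package-level state: the moral equivalent of a class that is
    // only ever instantiated once.
    var values = map[string]string{}

    // Set and Get are the "methods"; the package name provides the
    // namespace a class would otherwise supply.
    func Set(key, value string) { values[key] = value }

    func Get(key string) string { return values[key] }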
I think there is also a meta-point about how trendy and fashionable languages and paradigms can be. The zeal for FP feels like how OOP was, and the zeal for Rust is similar to how people spoke about Java.
problem with Java - aside from the fact that it's inherently crappy - is that it is now under the control of a psychopath billionaire. Organizations like Bank of Nova Scotia have already banned the use of Java in their organizations (as in LAST YEAR).
Inheritance is good if used to add/compose one layer of functionality. Take the standard webpage or controller object and add auth, logging, and configuration to the base class. Makes it easy to make global changes to cross cutting functionality. This can be accomplished in other ways however. The real problem with OOP is layers and it’s slow. I worked on systems that had 3 layers and they weren’t too bad. Then I worked on a system where single responsibility was taken literally and each layer had one line of code in it. That thinking led to 25 layers, massive complexity, that was impossible to step thru.
"You're all wrong" -- Gray Haired Smalltalk Programmer
"Hold my beer" White haired FORTRAN dude.
Huh!!! Structured COBOL programmer.
You're all wrong. Even assembly isn't right. Should have just stuck to raw machine code !
"Laughing at this naivety" - while dusting off my box of punched cards.
@@michaelmoorrees3585 Machine code? Pfft! You haven't really lived when you didn't build your own PBX with at least ten extensions out of relays.
That guy must have been gulping coffee for two hours.
Meth???
Richard, along with your reasons for this trend back to procedural, there may be a wider-scope effect at play here: the lifting up and out of the so-called OOP pillars (messaging) concepts to higher inter-org abstractions. There was a time when local compilation and local services and (custom) libraries were part of a smaller local geographic and administrative ecosystem. The notion of a loosely coupled set of non-local services (SOA, micro-services, whatever [WAN-ish]) has put the interface farther up and out, relying on a higher degree of inter-org definition, accepted standards, and/or trust. So it makes sense that procedural code would rise from the ashes again: the "pillars of OOP" have been subsumed by other inter-cloud interfacing standards, APIs, what not. If for no other reason, there is more use of procedural coding as the simple local glue for all the many published "WAN" interfaces.
In the '80s I was taught ADT programming - Abstract Data Type - with Pascal and Modula. When OOP showed up, it was your data definitions and their procedures stuffed into a single file.
OO. Back in the day, we called it OOPS. I wrote some code in Ada, before I left the game. My main language was IBM/370 Assembler, but I also wrote in PL/I, Cobol and Fortran. My very first language was IBM 1440 Autocoder. The 1440 was a scaled-down version of the 1401. In those days, an expert programmer was one who could squeeze the most out of 4000 6-bit characters in RAM and instruction timings in milliseconds. Today, it looks like chaotic spaghetti, but you couldn't do too much in such a small program. We used op-codes as constants, modified code on the fly. It had variable word lengths, so we could fool around with word marks. It was the baling-wire-and-canvas age of programming. There was no separation of code and data, which made all kinds of hair-raising things possible, but self-modifying code was like a precursor of neural networks and AI.
Return of procedural programming is just the result of PTSD. OOP was the hot thing, people were trying to make everything as object-oriented as possible, that led to a lot of bad ideas like UML etc. Now people who were hurt by overzealous OOP evangelists are rejecting it wholesale.
Shhhh, let's see them sink in exposed internal data and come back 😅
And rightly so. If you can't make me understand it (something like OO) in a couple of hours as something that makes sense, then it's not worth it. I still get things done in ILE RPG (on System i). I mean ACTUALLY get things done, and maintainably. All of this talk about Java, bla bla; in the mean time, in the real world... we do Mastercard Debit acquiring and issuing etc.
OOP was great for SDKs and Frameworks, and that is probably why it took off: it solved the problems of the SDK and Framework publishers (and OS API abstraction in particular). It's also great if you get the domain model right, but software engineers seem to be bad at that early in a project, and for OOP, early is exactly when you need to have nailed the hierarchy.
gah - once upon a time there was Option Explicit.
A simple form of OO can be done even in COBOL without OO language extensions. Just make one DATA DIVISION + PROCEDURE DIVISION per object, with an ENTRY for each public method …
The speaker doesn't really remember the '90s. He was a kid. What this history doesn't include is the explosion of C-with-classes-type dialects that came out in the late '80s and early '90s. There was also an explosion of Pascals with classes, and pretty much every language received a class-based object system. That's because object-oriented programming was PHENOMENALLY POPULAR, and I feel that we're being undersold on this because people's memories of that era are so poor.
You really just need to ask a programmer who remembers those days a little better
Despite this, I agree with the main point of this video: that OOP peaked a while ago and that more and more programmers want to ditch it for procedural and functional alternatives.
In my opinion, OOP was a bit of a mistake and the problem set which suits late bound encapsulated objects is actually pretty small.
_Pascals with classes_
COBOL with classes ... yes, that was/is a thing
@@hjxkyw I was a programmer when COBOL got enriched with OOP facilities, but I've never seen it used in real life.
There are descriptions of procedural programming in Patterns of Enterprise Application Architecture by Martin Fowler.
I hadn't realized it was gone at any point, tbh. It's the backbone of structured programming.
When played at 75% this sounds almost normal
Spaghetti code will pay my bills until I retire love it
The biggest problem with OOP is that people just memorize it but then forget to use it. So many times in an interview I've encountered a person grilling me on OOP concepts, but when I see their code it's next-level bullshit.
I like Richard, but a lambda proc'ing a let around a lambda is the first step of OO (an encapsulation, for better or worse): that's what AK meant, not CLOS/MOP (almost AOP). Objects are not about classes; a class is a template for objects. You can do OO with nothing but closures. The common alternative is the cloning one, AKA prototype-based (btw, Self is a very interesting alternative to Smalltalk; Traits, as object behavior with no properties, came from it).
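For anyone who hasn't seen the trick, here's a minimal Python sketch of closures-as-objects (the counter example is invented for illustration):

    def make_counter(start=0):
        # 'count' is captured by the closures below; nothing outside can
        # touch it except through the two "methods" we hand back.
        count = start

        def increment():
            nonlocal count
            count += 1
            return count

        def value():
            return count

        # The returned dict plays the role of an object's method table.
        return {"increment": increment, "value": value}

    counter = make_counter()
    counter["increment"]()
    counter["increment"]()
    assert counter["value"]() == 2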
what's the point of extreme late binding? why would I want my code to do that?
Interesting talk with some nice higher-level perspectives taking history into account. What I miss in the talk is the idea of embracing that within the same project you may very well have different areas where different styles need to be applied. In any large project you will see a mix of styles. Like: functional programming for UI, OOP for the architecture of your application and procedural programming for your GPU workload.
Modula-3?
Brian Will has already made two compelling videos right here on YouTube discussing the "OOP is bad" opinion -- WITH EXAMPLES.
I feel like the order of paradigms by niceness is logic > functional > Kay-style OO > procedural > typical OO. I believe the best part of the original OO idea is modularity and message passing (essentially event queues that are handled solely by an FSM "object", instead of being able to reach in and control the inside of something from the outside). Modern-day OO, with inheritance and a more obscure version of namespaced procedures, is the worst IMO. Procedural is very intuitive at first because we're used to sequences of instructions, like in DIY guides and recipes, so it seems natural to communicate with the hardware that way; it's basically the idea of a Turing machine.
But if you read "Can Programming Be Liberated from the von Neumann Style?" by the inventor of BNF, it becomes apparent that statements are way less useful than expressions. Expressions convey the idea of referential transparency: basically, that it should be possible to cut-and-paste the definition of something in place of its name, which means side effects need to be wrapped in monads to turn their action into a form of data. Hard-core functional like Haskell follows this by making everything descriptive rather than imperative. And programming just becomes writing down a specific vocabulary, with everything described in terms of primitive notions (just like math).
Logic programming is currently nowhere near as popular as even functional. But if more effort were put into building an ecosystem around it to do what general apps do, then it could be the best. (Check out the Verse programming language, headed by one of the creators of Haskell.) The difference between the logic and functional paradigms is basically the difference between mathematical relations (non-deterministic) and functions (deterministic), and how relations can be solved backwards instead of only run forwards. This methodology would revolve around specifying a set of constraints on a more general domain, with the output of the program being elements of the feasible set according to those constraints (which could be turned on and off for different use cases). It's well studied in math, in optimization, relational algebra, SAT, constraint logic programming, etc.
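A toy Python sketch of that constraints-over-a-domain idea, done by brute-force generate-and-test (real constraint solvers search far more cleverly; the constraints here are made up):

    from itertools import product

    # Domain: all (x, y) pairs of small non-negative integers.
    domain = product(range(10), repeat=2)

    # Constraints are just predicates; toggle them per use case.
    constraints = [
        lambda x, y: x + y == 10,   # a relation, not a one-way function
        lambda x, y: x < y,
    ]

    # The "program output" is the feasible set under the active constraints.
    feasible = [(x, y) for x, y in domain if all(c(x, y) for c in constraints)]
    print(feasible)  # [(1, 9), (2, 8), (3, 7), (4, 6)]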
I agree with freedomgoddess. Procedural programming never left. A lot of the specialized coding for satellite/instrument control has always been done with procedural programming. Working for both DOD and then NASA, I have used and continue to use procedural programming. I also do OO programming in both C++ and Python when I think it is appropriate and will lead to more easily expandable systems or subsystems. However, I always start out thinking procedurally.
Go lets you implement OO programs effectively and efficiently!
You don’t need classes and inheritance to do OO.
If you have a method: a function attached to a data type and a mechanism to put an interface in front of it, you have an object.
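That idea fits in a few lines of Python, with typing.Protocol standing in for the interface (the Writer/Console names are invented):

    from typing import Protocol

    class Writer(Protocol):            # the interface in front
        def write(self, data: str) -> int: ...

    class Console:                     # a data type...
        def write(self, data: str) -> int:   # ...with a function attached
            print(data)
            return len(data)

    def log(out: Writer, msg: str) -> None:
        out.write(msg)                 # the caller sees only the interface

    log(Console(), "hello")            # Console satisfies Writer structurally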
I think what we've seen is the rise of hybrid languages providing easy access to different paradigms, rather than the procedural paradigm per se.
About his finding it hard to believe, or odd, that Alan Kay said it was possible to do OOP in LISP as early as the late 1960s: well, even before CLOS (the Common Lisp Object System, added in the 1980s), LISP had first-class functions and closures. So, as clunky as it might have been, yes, it was possible to do OOP, as you can encapsulate the environment and hide variables, or expose them, with functions inside functions.
I do inheritance by copying the source from some existing thing and pasting it into my new thing. Now my new thing has all the behaviour of the old thing. But it has zero dependency on the old thing. I can even delete the old thing and the new thing keeps working.
That works until you need to do a mass change. The code that's copied can morph, making mass changes harder. Copying code obviously isn't DRY. My point in disagreeing is to highlight that there is no right way. I've done exactly what you mentioned many times, and I've used inheritance and utility classes. It just depends on the situation. We as a collective need to stop looking at styles and languages as absolutes and do what makes sense, which is whatever is easiest and meets requirements.
@@johnlehew8192 Well, I made that post with half my tongue in my cheek. I was sort of bashing on people who inherit from something as a way of making a slightly different version of that something's code, overriding this and that without any particular rhyme or reason, and ending up with code that has dependencies on whatever it inherited for no useful purpose. Just the same, but different in many odd ways.
More philosophical... Apparently I, as a human, have inherited properties from my mother and father, and their mothers and fathers, etc., all done through copy-and-pasting of DNA with some mutation thrown in. BUT still, my existence does not depend on the existence of my parents or grandparents, long gone, their DNA deleted. Which is a good thing, for me at least :) Conversely, inheritance in C++ and other languages creates a web of dependencies, which I at least find difficult to deal with. Making changes to it can be as hard as those "mass changes" to copy/pasted code you mentioned.
All in all I agree. Use whatever style/paradigm that does the job. Don't get fixated on OOP, Functional, DRY, SOLID, whatever. I sometimes get the feeling those catch phrases are just dreamed up by self proclaimed software engineering faith healers to sell their books, training courses and conference speaking. Promising snake oil, at a price, to magically cure all your software production problems.
Excellent talk!
I'd argue Rust is, in a loose sense, object-oriented too, since it supports structures with encapsulation, methods, and abstraction and polymorphism through traits (equivalent to interfaces in other typical OO languages). It's much more limited compared to what you'd find in languages such as C++, though.
in the end "object oriented" doesn't really mean anything on it's own, and everyone equivocates on the word
After all the functional talks that Richard Feldman has given, I was surprised to see him give a talk with this title. The talk is not about Feldman moving from FP to PP, which would have been controversial for me; instead it was nice to hear him give a talk about all the different programming paradigms. A few nitpicks: Brendan Eich was working at Netscape, not Mozilla, at the time. And Richard, you can use the function-as-property shorthand from ES2015 even if you use a JS logo from the '90s ;)
So basically original OOP was another word for microservices without the network part...
The basic ideas of microservices can be traced back to several other paradigms, like RPC, SOA, or even EJBs.
I started programming in 1974. I realized very early (late 80's) that OO was more difficult to teach, and far less productive especially for average programmers. It is also far more difficult to analyse and debug.
Procedural programming needs one thing to render it truly useful - an integrated memory database. Relatively easy to do, this provides procedural programming with all of the (tentative) advantages OO gets from classes/objects being able to retain complex data at run-time, and doesn't add any of the baggage. I've been managing / writing systems in procedural programming with an integrated memory database for 2 decades now, and I am quite sure that in terms of the programming paradigms available today it is the best trade-off.
AI might result in new trends, let's see.
Interesting... what is an integrated memory database? E.g. a hash table to store global state at runtime?
@@MrHopp24 I guess you could use a very basic system like that, but for complex systems you actually need a relational database, even if only with minimal functionality. You do not need SQL (at all), just row-level access to data, and perhaps some kind of indexing ability (often not required). Essentially, the data concept behind classes and objects with relations is a very good idea, almost indispensable for complex systems, but it is (in my opinion) a bad idea to couple it to the programming language. Procedural access to a minimalist memory database is all you need. I have managed and programmed highly complex systems (over a million lines of source code) over the last 20 years, and the resulting programs are simple to code, understand, and debug. No hidden anything; everything is apparent.
Where required (and this is often) the entire memory database can be dumped onto disk to store state and essential data, and reloaded at startup. I am not so sure this is easy with class and object.
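I can only guess at the details, but a bare-bones Python sketch of such an integrated memory database might look like this (the table layout and helper names are invented, not the poster's actual system):

    import json

    # One dict per "table"; rows are keyed by id. Row-level access, no SQL.
    db = {"jobs": {}, "configs": {}}

    def insert(table, row_id, row):
        db[table][row_id] = row

    def lookup(table, row_id):
        return db[table].get(row_id)

    def dump(path):                    # persist the full state at shutdown
        with open(path, "w") as f:
            json.dump(db, f)

    def load(path):                    # reload it at startup
        global db
        with open(path) as f:
            db = json.load(f)

    insert("jobs", "job-1", {"name": "nightly-ftp", "retries": 3})
    dump("state.json")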
@@rs8197-dms I recently had to implement and debug a rather obscure topology decoding algorithm. Being able to just dump the program state to disk at every step and analyse the resulting data flow in a spreadsheet was key to getting it right. I've worked with IBM mainframes before, and working with count key data (CKD) formatted storage was the most pleasant programming experience I've had in years. OOP too often leads to convoluted and deeply hierarchical data (mostly by accident) that's hard to parse and reason about. I too moved away from that many years ago.
you just described what elixir/erlang is, welcome to the club!
13:26 late binding vs static type checking: Weirdly enough I agree with both and think they should coexist.
I want to be able to own & customize the final app, while preserving its invariants.
E.g. If I want the send button on this comment to be a weather-dependent animal, I should be able to do that then run the app's validation/invariant checks to make sure I didn't break it before hitting save.
Javascript & HTML come pretty close to that idea, but fail gloriously on invariant checks and understandability (everything is minified, with 1k deps), and are very limited (to the browser).
Some nix-like rollbacks would be cool for hot-swapping
You're saying how Python was supposedly influenced by Simula, but original Python didn't have any class support that I know of, unless I'm mistaken? And I think it got added kind of as an afterthought (which is why it's kind of clunky), but for those knowing Python's history well, let me know if I'm wrong.
“Pillars of OOP” -> POOP?
Intentional? You be the judge!
But the real question is: Was his *_Strawman_* of OOP (like @[13:30] with Late Binding & Static Type Checking) also _intentional_ ???
My pet peeve is that C, which was so popular and influential, was written by people who didn't know where to put the braces, and as a result hardly anybody, in most languages, does it right. That includes Mr. Feldman, in his examples here, despite his obviously great knowledge of programming languages.
I adhered to Dijkstra's rule*, and since I am now long retired I no longer have to fret about it.
*A Method of Programming
It's not Richard Feldman, it's Ricardo Feldman. He uses his hands to talk 🗣️ more than I use mine to write code 🧑💻.
I came from the pre-procedural time. There was a reason why OOP became popular: the felt freedom of procedural languages comes at a price. It's so easy to make a mess, especially when multiple people work on the same stuff for years. Fixing code starts to feel like work. OOP gave structure and localized the issues. But I always tried to look past the hype and use common sense. I guess that's what's happening now. But don't think that procedural coding is just heaven either!
love a good brian will reference
What Alan Kay meant by Lisp is more like Interlisp LOOPS, MIT Flavors, the Common Lisp Object System, and other Smalltalk-like Lisp-based environments. Lisp has a rich history.
I think that Kay just meant what he said - that it was possible to implement object systems in Lisp and Smalltalk themselves. CLOS was only adopted to provide a standardized way of doing OOP in Lisp. I don't know why that fairly obvious point baffles Feldman, unless he's being deliberately obtuse.
Check out Elixir and its functional way of programming... Once you jump in, there is hardly any coming back.
Great talk. Thanks
Unfortunately OOP is often taught and understood as `class Dog extends Animal`, which is the worst way to explain OOP and OOM (right up there with the other extreme, IEnterpriseAbstractFactoryProvider)
Marrying functional and object-oriented as well as compositional patterns is the way to go. Use the right tool for the job.
I don't like Jai, and neither Odin. They lack expressiveness. And I come from an Assembly and C background (then C++/Java/ObjC, then Python, and currently mostly C# and a bit of TS).
If I want a good alt-C, I use Zig. If I need Cpp interop, I use Nim.
Rust is absolutely an OO language. It has traits and methods that accompany and operate on the data types they are declared in.
So basically OOP means to you that you can use the "." to access functions on data?
@@andrewf8366 I'd say OO is when objects themselves do things. Procedural is functions change objects that just store data. Functional is functions return new data based just on inputs.
Best way to go IMO is a mix of all 3 - OO gives you really nice abstractions with interfaces, procedural is great for IO, and functional is great for business logic - extremely easy to unit test thanks to pure functions.
That's why I really enjoy C#, it's great at all 3 (functional is getting better :))
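That split is easy to sketch in Python too (just the procedural and functional parts here; the discount rule and names are invented). The business logic is a pure function you can unit test with no setup, and the IO lives in a thin procedural shell:

    def apply_discount(total: float, loyalty_years: int) -> float:
        # Pure business logic: same inputs, same output, no side effects.
        rate = min(loyalty_years * 0.01, 0.10)
        return round(total * (1 - rate), 2)

    def main() -> None:
        # Procedural IO shell: read, compute, write.
        total = float(input("Order total: "))
        years = int(input("Loyalty years: "))
        print(f"To pay: {apply_discount(total, years)}")

    if __name__ == "__main__":
        main()   # e.g. apply_discount(100.0, 5) == 95.0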
I'm curious about your thoughts on disliking Odin. I spent a decade with C#, and over the past year, I've been exploring all of the new C-likes. Out of all of the ones I've tried, Odin has stuck. There are definitely features that I would love from Zig / C3, but overall, Odin is robust enough. I've found it to be the easiest to translate thought to code.
To be honest, I would use plain C if it wasn't for Windows. Linux made it so much simpler. Mostly due to my own lack of effort to learn...
@@cbbbbbbbbbbbb Odin lacks closures, methods, and true generics, a few things that might be considered expressive. I’m a big fan though.
OO is just fancy message passing with lots of helper stuff so you don't need to manually check what the message is all the time. It's nice in some instances but it's a bit broken in others. I think it can lead to people getting confused. But what do I know?
Late binding and static type checking are not incompatible. C++'s virtual methods do exactly that. JVM methods are late bound, too.
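A small Python illustration of that coexistence (the shapes are invented): a static checker such as mypy verifies the Shape annotations ahead of time, while which area() actually runs is bound late, per object, much like a C++ virtual call:

    from abc import ABC, abstractmethod
    import random

    class Shape(ABC):
        @abstractmethod
        def area(self) -> float: ...

    class Square(Shape):
        def __init__(self, side: float):
            self.side = side

        def area(self) -> float:
            return self.side ** 2

    class Circle(Shape):
        def __init__(self, radius: float):
            self.radius = radius

        def area(self) -> float:
            return 3.14159 * self.radius ** 2

    def describe(shape: Shape) -> str:
        # Statically: the checker knows any Shape has .area().
        # Dynamically: the concrete area() is chosen at runtime.
        return f"area = {shape.area():.2f}"

    shape: Shape = random.choice([Square(2.0), Circle(1.0)])
    print(describe(shape))             # dispatch resolved only at runtime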
20:00 I don't think the industry actually moved away from messaging and late binding, only from specific implementations, in the pursuit of performance. The static type checking boom came only when type checking had advanced enough to handle late binding (generic types, trait constraints, gradual typing) and messaging (borrow checker), so it is not actually against those ideas. Not to mention the microservices paradigm is a system-level realization of messaging and late binding, along with the renewed, persistent interest in Erlang's BEAM.
Seems like late binding would make security screening of software more difficult when doing static analysis.
Great presentation for understanding what OOP really is, and why it is losing momentum.
PHP is only optionally OO, same with JavaScript.
A year ago I was writing PHP on a large corporate site, and it was procedural.
Static type checking has nothing to do with (late or otherwise) binding... you can statically check the type against its interface.
13:30 He acts like you cannot change behavior at runtime with static type checking!!!
Procedural programming never went away. OOP was simply a syntactical cloak.
24:06 a minute of silence for what could've been 🥺 press F
F
Thank you
IMHO, OOP like Java requires you to write more code; before solving the problem you have to think about abstractions, best practices, etc...
Functions are abstractions too, but they're thinner and more direct.
Using interfaces and other indirect abstractions may work for projects or application layers that might change over time, like DB access or auth, but not all projects are subject to those changes.
Overall, it's subjective, but simplicity plays a huge role in choosing one language over another.
In addition to simplicity, the developer must understand what is happening. OOP is ideal in this regard. As for abstractions, assembler is also an abstraction, use it. Machine code is also an abstraction, you can use it; there is simply nothing thinner. I don't understand what's not to like about the obvious layout of components in Java. Functional programming is all the same, only it ties your hands much more. OOP is literally optimal for everyone. But the functionalists point-blank refuse to notice the obvious.
@MrChelovek68 I think it depends on the developer's way of thinking. I started with procedural programming in Python, C, then PHP. It feels like those languages formatted my brain into a procedural way of thinking and writing code.
So creating classes feels like translating my thoughts into another language.
I can still see the benefits of OOP, though.
@@icantchosemyname It's funny, I studied Pascal, then C#, and now C. But I have not seen anything more convenient and intuitive than OOP. In fact, OOP in PHP is well documented. As for the formatting, I strongly agree) My comment is just about "why OOP is used", and everything else is dreams and shadows. I am familiar with higher mathematics, but when functional programming begins, for example, my brain rejects it as something alien. I don't know why, but it's counterintuitive. Purely to my taste, OOP, and in particular languages like Java or C#, is ideal, because it does not adjust a person's thinking to the machine but, on the contrary, allows anyone to translate a thought into code. At the same time, I dearly love both C and Pascal for their complete freedom of action.
Is that photo of the Borders in SF near Union?
Even within C++, I find myself reaching less for OOP and more for procedural or functional solutions to my problems.
So, it's not always accurate to assume that C++ == OOP
You could say the same about PHP. And probably all multiparadigm languages that allow both OOP and procedural.
Given the general idea of the talk, I'd say it's fine, there's wide brushes over everything used in the talk.
I've always coded C++ in more of a "C with classes" style. Mostly procedural. I do use classes to create abstraction, but minimally.
I think the most important factor in this equation is the philosophy of the ecosystem. You might be writing without classes at the application level, however most of the libraries you are using are most probably relying on classes anyway.
PHP is a child of Perl, btw; it started as just a templating language for Perl ^^
Rust is multi paradigm, it doesn’t have class based OO but does allow vtable based objects implementing interfaces. Just no inheritance.
It's not just procedural programming but also its combination with functional programming.
Procedural falls apart when you have large sets of structs (like nodes in a syntax tree) and you need to call functions which each type implements differently. Having overriding to build dispatch tables is way easier than function pointers and switching on type IDs.
Procedural is a technique and has its limitations. OOP is also a technique. You just have to use the right tool for the problem. 90+% of problems can be solved procedurally? Fine; when needed you introduce OOP or functional, and why not calls to quantum in the future? The problem with OOP and its best practices is the push that says "We have a hammer, and everything you have to do has to be done with the hammer, as this is the best practice and the only professional way." What happens when you need a small screwdriver? Well, we all know. The stats for a very long time have been that around 90% of projects fail before the first production release, and things are getting worse.
I agree, I would rather have a large OOP code base than a large procedural code base. With microservices, code is smaller and procedural makes more sense. I think reducing code base size is driving us back to procedures.
Disagree, sum types handle syntax tree nodes much, much better than OOP + visitor pattern
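For illustration, here's roughly what the sum-type version looks like in Python 3.10+, with dataclasses as the variants and match for dispatch (the tiny expression language is invented):

    from dataclasses import dataclass

    @dataclass
    class Num:
        value: float

    @dataclass
    class Add:
        left: "Expr"
        right: "Expr"

    @dataclass
    class Mul:
        left: "Expr"
        right: "Expr"

    Expr = Num | Add | Mul   # a closed set of node shapes

    def evaluate(e: Expr) -> float:
        # One function covers every variant; adding a new operation over
        # all nodes means adding one function, not touching every class.
        match e:
            case Num(value):
                return value
            case Add(left, right):
                return evaluate(left) + evaluate(right)
            case Mul(left, right):
                return evaluate(left) * evaluate(right)

    print(evaluate(Add(Num(2), Mul(Num(3), Num(4)))))  # 14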
I think the message passing idea lives on in distributed systems, actor model, etc...but it is operating at higher abstraction level.
Yeah, I think message passing, encapsulation, late binding, all of that just moved a level up with microservices due to the scale. And then services themselves don't need so much code, so less hierarchical procedural style came back.
If anything, the ideas of OOP just scaled up out of a single node; they didn't become less popular.
Even though I do object-oriented programming, I've really grown to hate "inheritance hell", when there are long chains of A inherits from B which inherits from C. Maybe this is addressed in the video (I haven't watched the whole thing yet) but I assume part of the shift away from OO is people getting fed up with inheritance hell. Personally, I think I might like a language that still has objects/classes, but no inheritance.
EDIT: now that I'm halfway through, yep this is where he talks about that exact issue. Specifically, he talks about composition being preferred over inheritance. He also describes classes without inheritance as being basically nested structs, which huh I never thought of that.
Then what happens when you have some class and you want to add an extra field and some methods? Now you have to pack it into the same class and mix it all together, which is not great either. The bigger culprit tends to be bad APIs that weren't well designed or grew out of control over time.
Me personally, I've been impressed by most *_Library Devs_* and what they've done with their APIs. So I would be curious to see some examples of libs with "inheritance hell"? My impression is that "inheritance hell" comes from "the business layer", as a result of devs always being on a time crunch, which leads to them just throwing shit together (it probably wasn't the best decision, but since they had a deadline, they just "went with it").
I wonder if the rise of gRPC and microservice architectures, worker agents, and orchestration messaging systems has something to do with procedural programming being seen more. Workers are getting something like a RabbitMQ message or an RPC call saying 'do your task with this data'.
I remember mentioning procedural in an interview decades ago; it ended abruptly and I was walked out the door. My, how things have changed. OOP is slow, and this confirms my saying... speed always wins.
This is the most flat-earth talk I have ever watched: simple examples and preaching instead of actual real-world enterprise-class challenges. I come from a very strong procedural programming background, and I enjoyed using it within the right domain; when I learned OOP, it addressed many of the shortcomings I had with designing, changing, maintaining and understanding procedural code. I am not saying there won't be shortcomings, codebase rot and chaos in OOP solutions, but it at least gives you a chance to write code with engineering principles in mind. Good luck writing remotely quality, solid code for systems that have more than 2 screens. It will start deteriorating the second you need to add a second method to FTP from another endpoint, or the next time the solution requires a different protocol. It will force more method duplication, tight coupling and side effects. Worried that a change to a method in a subclass will break your system? Try changing an if statement in a procedural module.
Yeah, I agree. The message is clear, and I can understand the explanations of the style differences and certain advantages, but the examples really let it down. Exactly as you said: how will this support low coupling and extensibility?
Too bad the examples were too sandbox-like, like the one with FtpDownloader, PatentJob and Config. Clearly, the procedural version raises a few concerns: how do you inject dependencies? How do you swap implementations for test stubs to get test isolation?
This is a misunderstanding of what object-oriented programming is.
Even this guy, who has read a lot about programming, misunderstands Lisp like most people do. Lisp has always been a multiparadigm programming language; it never described itself as "functional". Functional was just one of the tool belts that could be used in Lisp, but when SML was made, all the FP community left Lisp, and it wasn't until Rich Hickey that there was a new Lisp focusing, for the very first time, on FP.
If you wanna think of a main programming paradigm for Lisp, then that would be symbolic programming, not functional and not object-oriented; CLOS is a symbolic programming approach to object orientation.
The only thing useful about OOP is encapsulation. The ability to statically guarantee that invariants are maintained by limiting access to values to only certain procedures.
it is absolutely useless
No, *_Polymorphism_* is the most valuable aspect of OOP, imo.
Altho, I am seeing a lot of people in the comments saying it's ADT for code completion (via the dot operator).
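Since this thread is about encapsulation vs. polymorphism, here's a minimal Python sketch of the encapsulation claim above: the invariant (balance never negative) can only be touched through procedures that preserve it. The account example is the usual cliché, and Python enforces the underscore convention only socially, so treat this as the idea rather than a static guarantee:

    class Account:
        def __init__(self, opening: int):
            if opening < 0:
                raise ValueError("opening balance must be non-negative")
            self._balance = opening        # "private" by convention

        def withdraw(self, amount: int) -> None:
            # Every mutation goes through a check, so the invariant
            # (balance >= 0) holds no matter who calls us.
            if amount > self._balance:
                raise ValueError("insufficient funds")
            self._balance -= amount

        @property
        def balance(self) -> int:
            return self._balance

    acct = Account(100)
    acct.withdraw(30)
    assert acct.balance == 70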
IMHO, C++ is used for big programming, such as networked cloud services, while C is used where fast, efficient, time-critical code is needed, such as real-time programming. They are not substitutes and do not compete.
Procedural when it makes sense, FP for the rest
Love this 2x speed energy!
I had to slow the video down just to understand this guy, and I'm a native speaker.
I watched this in 2x speed
I bet you know why😂
lol, I've just realized that my playback speed is still 1x, he definitely sounds 2x-ish :)))
I'm not a native speaker, it is a bit hard to follow at full speed, I really have to sharpen my ears 🙂
We could always go back to flowcharts and Assembly Language for concise code. You can even write self-modifying code. Who needs a typeless interpreted script that dares to call itself a programming language?
The beginner needs it! I'm strongly in favor of flowcharting and assembler - but only for a limited problem domain. It is simply a terrible waste of human lifespan to code in assembler for most things.
@JimLecka A beginner to a language will often make assumptions based upon their experiences with other languages that may have unintended consequences. For example a C programmer may have difficulty with Python, especially for if cases.
@@99bobcain Amongst other activities, I have taught introductory programming to completely raw beginners, at the rate of 500-1000 people per semester. The very first thing is to give them something simple to copy and type in, like "hello world". About 10% fail and drop at this point. Then show them how to change "hello world" to something else, like their name. Success at this point is their first positive feedback. Then gradually more concepts, some history, and learning by doing simple exercises. It is a long way down the trail to get to concepts like actual bit representations: I am happy if they get to use one (1) numeric type [ideally a default float] and simple character strings, with some control logic. The idea is to get them up to the point where we can introduce them to a real programming language in the next semester.
Procedural programming never went away. Of course, certain trends and fads appear in the industry, and old ones sometimes come back, but advanced and experienced programmers use the tools and paradigms that are best for a specific task, whether it is OOP, procedural programming, functional programming, or something else.. Poor programmers write poor code regardless of the programming paradigm.
I'm using OOP languages, but these days I use classes mainly in two ways: 1) as algebraic data types, and 2) as actions, i.e. I name the class after a verb and give it an apply() method. So it's basically just a function that I can call and pass around, with the added benefit that it can use private methods to structure the code more. Sometimes these single-responsibility functions get over 100 lines long, but it's nice to spread that out over 3-4 methods for some internal clarity.
Other than that, I avoid most things associated with OOP, especially inheritance of behavior.
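If I understand the pattern, it's something like this Python sketch (all names invented):

    class SendReminderEmails:
        """Named after a verb: really just a function with private helpers."""

        def __init__(self, mailer, cutoff_days: int):
            self._mailer = mailer
            self._cutoff_days = cutoff_days

        def apply(self, users) -> int:
            stale = self._select_stale(users)
            for user in stale:
                self._send_one(user)
            return len(stale)

        # Private methods exist only to structure the body of apply().
        def _select_stale(self, users):
            return [u for u in users if u["days_inactive"] > self._cutoff_days]

        def _send_one(self, user):
            self._mailer.send(user["email"], "We miss you!")

    class PrintMailer:                     # stand-in mailer for the sketch
        def send(self, to, subject):
            print(f"-> {to}: {subject}")

    job = SendReminderEmails(PrintMailer(), cutoff_days=30)
    job.apply([{"email": "a@example.com", "days_inactive": 45}])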
That's kind of the way I do it.
Finally. I always thought OO was not a gain.
When computers were a million times smaller and a thousand times slower, there was no way to luxuriate in that kind of slop.
40:55 Closures are equivalent to objects. So it's quite disingenuous to say you didn't use OOP when you use closures!
Closures (via the lambda calculus) predate the first digital computer, let alone OOP.
maybe programming was all the AbstractSingletonProxyFactoryBean we made along the way...
We need to bring Smalltalk back from the hinterland... and I'm just the man to do it...
I program in a way that I call “Context Flow.” I rely heavily on modularity and global data. Functions have zero, one, or two arguments (for k-v pairs), and I never pass context. I never use classes. It looks very much like turtle commands. Systems are highly stateful. Looping structures have special support code. Context is king. Parallelism is achieved with multiple processes, threads are used only for very specific purposes and communicate only by way of queues.
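I can only guess at the specifics of "Context Flow", but the parallelism-via-processes-and-queues part might look something like this minimal Python sketch (the squaring task is just a placeholder):

    from multiprocessing import Process, Queue

    def worker(inbox: Queue, outbox: Queue) -> None:
        # Workers share nothing; they communicate only via queues.
        while True:
            item = inbox.get()
            if item is None:               # sentinel: shut down
                break
            outbox.put(item * item)

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        p = Process(target=worker, args=(inbox, outbox))
        p.start()
        for n in (2, 3, 4):
            inbox.put(n)
        inbox.put(None)
        print([outbox.get() for _ in range(3)])  # [4, 9, 16]
        p.join()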
Global state? 😮
I've seen a lot of code written like that. It's usually not easy to modify. Which really is the most important quality a piece of code should have in my opinion.
@@JanVerny I’ve been programming for 40 years. I have yet to see code that is easy to modify. I have met a plethora of people who SAY they are writing code that is easy to modify. But I’ve never actually seen that code.
When people use “tightly controlled access” to make it “easy to modify,” what happens is that you need to modify ten files to get access to variable X from point A to point J. How’s that for “easy to modify” ..?
Alternatively, people make it so that object A has access to object B has access to object C has access to…
…but now you’ve both: (a) already effectively made everything globally accessible, so no win there, and (b) just made it so you have to walk through ten layers of indirection to get there.
No real benefit.
People talk a lot about "if you program this way, you won't be able to understand the code," or "it'll be unsafe," but what I really observe is that everything comes down to whether you have an organized code base or not. If you have an organized code base and consistent principles that you follow, then global variables are no more a problem than a global space of IP addresses.
The argument against global variables reminds me of the pearl-clutching of programmers who claim that variable names need to be as long and descriptive as possible, otherwise "I can't possibly understand what the variable does." My response is, "Yes you can, yes you do; shut up and stop being such a baby." It doesn't take any effort to pretend not to understand what a variable "i" does in a loop iteration. But people will swear by it.
The emperor has no clothes.
@@LionKimbro I don't have as many years of experience, but I have worked in codebases that were relatively easy to modify. Hell, over the years I've refactored a lot of my code, and sometimes it was a huge pain and sometimes, even coming back to it after a year, it was very easy.
@@LionKimbro Though, to give you some credit, I do think a lot of programmers talk negatively about certain practices to disguise their inability to read code and think about a problem.
Any sort of dogma always leads to poor results. That's also why I haven't said that your approach is wrong or that it always leads to bad results.
OOP-based software is very hard to debug and maintain.
It always seemed terrible to me that the language would force me to have an "Object" consisting of just functions.
I use JS but never use classes or objects. They're not necessary for me.
"OOP" languages, in the sense of Java etc (not in the sense of Smalltalk), sold themselves on:
encapsulation
polymorphism
abstraction
inheritance
Of those, the actual reason OOP languages were adopted was just encapsulation. Everything OOP brought to the table for the other three points was a net negative.
Modern languages now have very good module systems which covers encapsulation.
Functions themselves, especially functions with closures, are encapsulation.
@PixelOutlaw Not as intuitive as objects, though. I've seen instances of it, and it made my head spin. I've never found a reason to use it.
*_Polymorphism_* is insanely powerful: It's definitely not "a net negative" !!!
@@nikolaikalashnikov4253 I agree. And it's nice if you use a programming language that can handle it properly like Lisp does via the CLOS.
I use a lot of static classes. Maybe I should ditch the OO, because static classes feel like the milquetoast version of procedural programming.