Based on experience with the world's largest legacy code project, the Y2K fix, I'd define legacy code as code that has to keep running. It doesn't matter what tools were used, what the quality was, or whether it could be fully understood, the code had to keep running. Y2K may have been the only large software project that didn't suffer from significant scope creep as it progressed. In fact, it was easier to toss stuff that wasn't still needed. The immovable deadline kept true priorities on top.
Also, the US was scrambling for programmers from around the world during that time. My father (who was a programmer) got a job offer from an American company despite the fact that we were living in the middle of Russia.
I didn't need to fix my code much for the feared Y2K. Internally all my programs used UNIX time (so we will have a 2038 problem, but I expect to be dead by then, so not my problem) and mostly I just displayed the proper year number, like 1999. My programs really didn't have much manual data entry anyway; practically all the data was collected from various devices and time-stamped to the millisecond. This presented the BIG problem: how to synchronise the clocks of every computer in the system?
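For anyone wondering where that 2038 deadline comes from, here's a minimal sketch (Python, purely illustrative) of why 32-bit UNIX time runs out:

import datetime

# UNIX time counts seconds since 1970-01-01 00:00:00 UTC.
# A signed 32-bit counter tops out at 2**31 - 1 seconds.
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
overflow = epoch + datetime.timedelta(seconds=2**31 - 1)
print(overflow)  # 2038-01-19 03:14:07+00:00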
The stuff I work on still hasn't moved to 8 digit dates. The suits are currently trying to figure out which consulting firm's gonna help us out of this mess...
For me, Legacy Code is code that when I get assigned to work on it, I consider getting a doctor's notice exempting me from working on that particular codebase...
My oldest son worked on the stock exchanges in NYC up to 2009. He claimed there was a DNK Room (i.e. the Do Not Know Room). It had old legacy PCs running critical programs that did stuff that kept the stock exchanges running. He claimed one of the PCs was an IBM AT! My son had learned computers on a Commodore 64 and later a Gateway 2000 with DOS 6 & Windows 3.1. He later worked for a computer workshop, for teenagers, that got old donated PCs from businesses. He learned how to repair them, make them work and network them so the kids could play games like Doom! When something crashed in the DNK Room, he was one of the few people who knew enough about old PCs, DOS, Windows and networking to fix them and keep the stock exchange working.
Aah yes, NYSE. I worked for them for almost 15 years. We had an absolute nightmare scenario where we had software running at a customer (because NYSE sold software at the time) and, due to a miscommunication, half the codebase of that piece of software disappeared. After all, it was no longer used anywhere... except at this one big customer in Asia, which upper management had missed.
I had forgotten how long American Pie was. Or maybe I just never heard the long version...? I don't know. Either way, do y'all have a FLAC download of that? I unironically like it.
19:27 legacy code defined by *Michael Feathers*
19:40 definition of *legacy code* by the speaker
19:44 legacy code is the code that is too scary to update and too profitable to delete
24:18
My question for this would be: what should you do if you find yourself to be the Dungeon Master in that scenario? You have an added level of job security, but at what cost? Do you have a duty of care to mentor a team in its use? Do you hold onto control and class it as tenure? Should your role become that of an author, producing a Dungeon Master's Guide, Monster Manual, and Player's Handbook for your codebase?
So Brandolini seems to suggest the best thing for all involved is that the DM quits to work on something else. That way you don't have to deal with your legacy, and the company solves the critical-dependency problem faster.
@@hens0w That's not good advice on its own. It's the equivalent of throwing the instruction manual in the fire. You're getting rid of the one person that can actually help to refactor the code. If I found myself in that position, I'd want to take on an apprentice, or lead a team, and take time to put things down on paper. Even when you're doing a big rewrite, you need to understand what it is that you're rewriting. It's not often you can treat a rewrite as if it were a greenfield project.

I would say the responsible thing would be to write a Dungeon Master's Guide (something like a C4 Model), a Player's Handbook (code annotations), and a Monster Manual (an annotated test suite), and to encourage your players to micro-GM within their own specialities.

Just as in D&D, the job of a GM shouldn't be to railroad play. It should be to facilitate play, and provide a backdrop for players to interact with in a predictable and consistent manner. They are there to arbitrate decisions, rather than to enforce a strict definition of the rules. The same should be true within this scenario. Your team lead shouldn't dictate your code line by line, and they shouldn't railroad decisions through when you have valid concerns or suggestions.

In RPGs, the players also have a responsibility to keep checks on the GM, and unionise against bad roleplay where found. An inexperienced, or just plain bad, GM can ruin a game with even the best players. Players should be able to tell the GM that they're being a dick; and the GM should be able to take those criticisms to heart, and actually make changes for the better, even at their own expense. The roleplay is more important than the roleplayer.
From 1997 to 2001 I worked for a major insurance broker that had a critical sales tracking program written, by a contractor, in MS Access & VB. The contractor deliberately made the code obscure (like "Function 105B"). It was networked all over the country & internationally. Part of my job was to maintain it, make sure the data got updated, run it and provide results to top management. The contractor was often not helpful about problems they considered to impact proprietary code. They later made a commercial version of the program and sold it to other insurance companies.
Here's my take: legacy code is code that has to run according to important but unknown requirements. > Legacy code is code that's too scary to update and too profitable to delete. Unknown requirements make the code scary to update. Importance to the customer and profitability to the seller are flip sides of one another. I think my definition gets at the core of legacy-ness, with Beattie's observations being the direct consequences.
This is the best description of legacy code I've heard yet lol. So true about the unknown requirements. It's tough, too, knowing the code we write today eventually becomes legacy code, and the requirements and blockers leading us to specific implementations will be lost with time.
@@seancpp Especially since a lot of the requirements are arbitrary business needs rather than technical ones. It would be nice to have some easy marker in the code to say "nothing bad will happen if you change this".
Fascinating, so my former company was a great educator. We had an intern who had to design a hardware platform for an application, and his report said it required two CPUs. I was like: "Yes, you read the software supplier's requirements well. But why?" "It's twice as fast!" "Really?" "Of course, two CPUs is twice the computing capability!" (This was BSc level education!!!) "So you want to become a developer! Here's your first development task: install a C compiler on that machine you've configured and ordered. Write a little program that only does while(1) { } and run that." It took him 5 days to get everything installed and compiled, because in school toolchains come pre-installed and Java is simple to compile. He ran it and I heard: "Huh?!?!" So I asked what he saw. He explained that only one CPU was loaded, and it only reached 95%, not even 100%. So I asked him to run a second instance, and he noticed they now ran pretty much on two CPUs, although the CPU affinity sucked (welcome to the Windows 2000 scheduler, I said). So he learned about SMP and OS schedulers proactively. And I knew that this kid would go places, because I did that 15 years prior too. He never finished his BSc, because it was useless and frustratingly management-focused. But he's the best developer in his company, because he knows how to learn new stuff, research it and question hypotheses.
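The original task was a C program, but the single-core effect is easy to reproduce; here's a sketch in Python (my own stand-in, not the intern's code):

import multiprocessing

# One busy loop can only ever saturate a single core, no matter how
# many CPUs the machine has. Start two workers and watch two cores
# load up in your task manager. Ctrl-C to stop.
def spin():
    while True:
        pass

if __name__ == "__main__":
    for _ in range(2):
        multiprocessing.Process(target=spin).start()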
@Sam he came to us when he was a student! 3rd year! That's how terrible these colleges have become: a lot of stupid management content and hardly any actually practical courses. That's what you get when IBM and Getronics have influence on the syllabus, two big corporations who need to hire external engineers because they don't have them on the payroll. Education has been dumbed down, and this was 15 years ago; it's even worse now!!! Pay 40k a year to learn crap you could learn from books in a quarter of the time.
@@CallousCoder my experience in uni so far has struck me as them being pulled in two directions. Students and employers demand "practical" knowledge but its not a fucking "coding degree". Its a computer science degree. The "impractical" stuff is important, but its not obvious to most people.
@@shdowdrgonrider well, this is an interesting point. In the Netherlands we have (had) trade schools at BSc and non-BSc level where you'd actually learn the trade. In my case I did trade school electrical engineering, with more CS and programming than BSc CS students get nowadays. There you were supposed to learn the applied knowledge and only breeze over the theoretical academic knowledge. The student I wrote about was a BSc trade school student. But that has been thrown overboard in favour of an academic, non-applicable syllabus. And frankly, CS is not a 3-year academic course unless you deep dive into a single field; basic data structures and design is something you could easily do in 2 or 3 semesters.

Academic courses have always had the problem that the graduates don't really have value for the marketplace, because it's sheer theoretical knowledge. They aren't great at coding when they leave, and they aren't great at networking or sysadmin or design. So what is the value of that education? Learning for learning's sake seems to be a non-valuable approach. Just like a degree in ancient linguistics (my cousin is a professor in that): fun and all, but it doesn't make you employable or useful in today's society. In my opinion, you go to school to become employable or to become able to make your own business. But ironically, business schools don't teach you how to create and run a business, and computer science doesn't teach you how to actually work in an IT job effectively.

Frankly, I don't see the value anymore in today's higher educational system. You still have to learn the actual details and applicable knowledge yourself, and I don't need a degree in anything to learn that. Back when I went to "college", aka trade school, it made sense, because back then you didn't have access to SMD soldering ovens, or heavy tools, or expensive microcontrollers and EPROM burners and CAD/CAM software. So the value was purely in the practical subjects, because of those tools. The theory (and there was a lot of it) was and is well documented, and you can find it for free online; I didn't need a teacher regurgitating it verbatim from a book. But CS, linguistics, business, arts is something you can do all by yourself, and do faster and more affordably.

And don't get me wrong, academic study for people who purely want to be academics is fine. But don't bother me as a taxpayer with your learning hobby 🤣 And that's what was so great about the separation of trade schools and universities (MSc and PhD) until 20 years ago. When you wanted to become a rich tradesman, you went to an HBO or MBO trade school. And if you wanted to be solely in academia, living on a meagre research salary, you went to a university and did the whole MSc, PhD and professor route.
@@shdowdrgonrider I think Dylan's example of the "sheds" he learned to make in university was very sound. I guess in trade school we didn't get past a cottage either. But at least it was residential 🤣
I'm now 43 years old. When I turn 50 I'll be setting the clock for retirement; at that point, whatever knowledge set I have is the one I'll be sticking with, knowing there will be PLENTY of stuff out there in those languages that'll need quiet maintenance forever.
I should change my job function description to Dungeon Master... In the early 2000s I started a new job and inherited a 1.0 codebase that consisted of pure HTML with inline PHP3 (including MySQL queries). The developer that created it had left the company a few weeks prior, so no handover. Our first task was to adapt (rewrite) everything in the then newly adopted MVC framework, so my (also newly hired) colleague and I did, and after 3 months the core functionality was up and running in production. Over the next year or so we added the less urgent features that we initially left out. In the years that followed, new paradigms were adopted and, like many companies, we migrated many parts from the monolith to microservices; however, parts that just worked were largely left as they were, and to this day about 15k lines of my original 2004 code survive, written in pre-IDE days with a text editor. I did, however, add some test cases a few years back, just in case one day someone needs to change something without me there.
Wonderful content. @DylanBeattie has the ability to take 15 minutes of information and expand it to 60 minutes. In all sincerity, this is valuable information.
13:00 I'm with those new coders. Their criticism isn't that the code is old. It might be that the app is not adequate for today's demands. Or the code is unreasonably hard to maintain.
Good legacy code is wonderful. It never breaks, it gives you insight into a whole different world, and sometimes a chuckle when you find a comment that just says "TODO: fix this mystery bug" from 1996. I do however take issue with a completely different class of code: originally written with the primary objective of time to market, designed by an architect in a rush with an unclear idea of the end goal, and programmed by people who were being clocked by the minute in their implementations of vague requirements. Those sorts of codebases tend to be atrocious, even if they follow MVVM or some other modern pattern to a T.
I worked in 2 of the 3 major banks and yes, they are. But in both banks little or no time is spent on decommissioning systems and on cloud migrations to cut away legacy and bloat. And there is another problem there: often the other consumers of a system are not willing to say goodbye to it, for whatever reason. We tried to decommission a system for which our system was the golden source. And we said: "tell us where to put the data and we'll get it done." We were provisioning the new system already, but some people were still using the old system, as there was not a complete replacement. Ironically, we had a bug in a driver using that system that would hang our processing. The lead engineer, who designed the whole system and developed that agent, was sure it was a Microsoft bug. And I was like: "it's the agent, as I'm now running MIM with the same version too on that other system you've built, and it chugs through nicely." He didn't believe me, so I said: "everyone, when you see the agent hang, let me know and I will do a brief investigation." 4 weeks later I had found the edge case that would every now and then (2 times a month) hang the agent: while we were copying the source data, some data was deleted, and my colleague's check of whether the counted number of records had been copied would fail. Before, there was no deletion, because it was an operational system, and banks then don't delete data but mark it as deleted. So I fixed it, and my colleague was like: "waste of time, in 2 months the system is gone." I left almost 2 years ago. I rejoined a new team back in December and that system had just been decommissioned... but only the frontend :) So our agent probably is also still running.
@@CallousCoder Very interesting! It sounds like the hard part isn't the building or the setup or the maintenance; it's convincing the humans to use the new system so they can benefit from it. What is the expected time frame for decommissioning, from the point of starting the plan for the new system to full implementation and staff training?
@@whtiequillBj and to motivate the techs and managers to engineer the new solutions. But complacency is a big part. As we say in Dutch: "maar het werkt toch?" ("But it's working, right?") And then those technologies take root. But it's definitely the human factor that allows systems to live on. That's also what Dylan says: people are afraid. I think people are too complacent to do the hard work. :) Otherwise sandboxes would pop up and they'd be experimenting to get ownership. Although the latter used to be impossible with banks: resources had to be approved and defended. Now with the cloud that's a lot easier.
That's because he doesn't work in the financial sector, rather some small startup that serves the entertainment industry. I seriously doubt casting directors would lose their s$%& if his webapp went down. Banks, OTOH, trade on a single commodity with their customers: trust. They can't afford mistakes. He tries to convince us that people are scared to touch legacy code. Nothing could be further from the truth. When you have code that has run error-free for a decade or longer, that is not by chance. It took a lot of debugging throughout the 60s, 70s, 80s, and 90s to get there. Now some hotshot programmer thinks he or she can do better??? Go pound sand. That's why banks are "stuck in their ways". It would probably take another 50 years to re-invent the wheel.
I kind of fell into a codebase which is relatively new, but was written by a complete and total... not a good programmer. Not only was there no identifiable architecture or cohesive style, the code was heavily and pointlessly multi-threaded (before I started the big rewrite, there was an entire thread that did nothing but manage adjusting the size of the window). I could really relate to this, especially the stuff about experimenting rather than reading. It is only by experimenting that you realize changing the size of a particular image also impacts a calculated floating point number in a way that breaks all the tests.
In 2000 a payment service provider was built, one of the first ones. I worked on it on and off, mainly around 2012. The tech got sold on a few times. The nostalgia hit me when, a few months ago, I did an online payment and the very typical transaction id format popped up on my bank statement. That part, designed by a few 22-year-olds in 2000, is still running! 😂
This question jumped into my mind when I heard your questions about "TRUST": Do you trust yourself? And does this lead, whatever the answer is, to trusting others?
If you have code that is too scary to update or too profitable to delete, that means you don't have trustworthy automated tests (your safety net) for it.
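One cheap way to build that safety net is a characterization test: record what the code currently does and pin it down. A minimal sketch in Python (legacy_tariff is a hypothetical stand-in for the scary function):

# Characterization ("golden master") testing: the assertions capture
# observed current behaviour, not a spec, so a refactor can't silently
# change what the system does.
def legacy_tariff(kwh):
    # stand-in for the untouchable legacy calculation
    return round(kwh * 0.21 + (4.0 if kwh > 100 else 0.0), 2)

def test_characterization():
    assert legacy_tariff(50) == 10.5    # recorded from the running system
    assert legacy_tariff(150) == 35.5   # ditto

test_characterization()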
640 was enough to hold DOS with a little extra. That little was enough to move a small block around as required: in and out of storage, but only in memory if and when needed.
You are thinking of 64k. 64k was enough to hold DOS with a little extra. 640k was maxing out your DOS system to the top spec, although you could do even more with extended memory on a 286.
@35:12, Harry Potter series: 4702 pages, 1.113.646 words, 30.059 uniq words, 211.023 lines, 65.025 sentences and 347 chapters... said awk in about 5 and a half seconds...
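The commenter's actual awk script wasn't posted; here's a rough Python sketch of the same counts (naive tokenisation, my own assumption):

import re, sys

# Crude corpus stats for a plain-text file passed on the command line.
text = open(sys.argv[1], encoding="utf-8").read()
words = re.findall(r"[A-Za-z']+", text)
print("lines:     ", text.count("\n"))
print("words:     ", len(words))
print("uniq words:", len({w.lower() for w in words}))
print("sentences: ", len(re.findall(r"[.!?]+", text)))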
Markdown is also fairly new, so that's another indicator. (Of course, the output filename ending in .md doesn't actually mean that what's written to it is Markdown, but that's what I first reacted to.)
I have written code that is many thousands of lines long, but never thought about the fact that it is more lines than a lot of complete books. It does make you think.
Legacy code is the code that's costing you millions because changing anything is even more expensive (or seems like it is). Just working is not enough.
I never graduated in computer science, no. I started working on programming right in the middle of my studies. I was hired to write COBOL code, but it didn't really give me any challenge. Then I moved to writing Turbo Pascal (4.0 if my memory serves me). A bit later I changed companies and I started writing in C. Oh, I still love that language! There's nothing you can't do with C. Operating Systems have been built on C code.
Your analogy of legacy code with books is interesting... Books have a table of contents, are factored into chapters and have an index in the back. We can still learn from history....
He talks about finishing projects, which I empathize with. But it's all tied to the web now, and that chimera vehemently refuses to uphold any legacy responsibilities. The whole misguided notion of "don't use this project, it's dead - don't you see it hasn't been updated in over 3 months" is hegemonic now. Constantly arising issues are the new metric of vitality.
Isn't this a core concept of micro services, that any component is designed to be replaced by a rewrite that's in the budget? I.e. limited in scope, well-defined, independent, dependency injected?
So many memories! I used Turbo Pascal a bit. Coding MASM I could control the world. I looked forward to Windows 1 arriving. And I did do a complete rewrite on a Basic app in early 2000s that used a lot of a$, b$ variables. Made it object oriented. Somehow figured out what it actually did and didn’t do. That new app ended up talking to an outside computer using web services. What fun!
PowerPoint is Turing complete as well... but I've never managed to program any microcontroller in PPT. C, C++, Rust and ASM are your options. The most widely accepted of those is C.
I get a feeling that good code is like a bookshelf full of tiny books. Sure, there may be Brazilians of lines of code, but you don't have to read the entire Encyclopedia of Code. You'll get to it, but each piece should be self-contained. Today you're debugging the About page. As long as the About page doesn't require New Age crystal prayer medicine (which is divided into its own set of books), you should be able to put your shed-building knowledge to good use, and generalise it as you go. But yeah, they pretty much throw you into that crystal prayer thing and expect magic to happen.
My only problem with legacy code is that my current bosses won't trust me to refactor the shit out of it, which I know I could, because I've done way harder things in the past. It's bad enough to keep using Struts 1.0 in 2022, but when you're told again and again that the unmaintainable, inefficient code that's even today used as a template for new stuff is something they know is wrong but that cannot be improved because we lack the necessary billable hours to do it (which of course causes way more work hours to be billed as bug fixes), you know the people you work with/for (a) are full of it, (b) feel threatened by anyone who really tries to create quality software, because that'd mean what they produce on a daily basis is basically shit, and (c) deserve to keep suffering the stressful work environment they perpetuate by making the hole they're in deeper every day. It's really frustrating to be an engineer that gets paid to do crappy work for no good reason.
If Dylan worked in computational science or molecular modeling, this talk could have been entirely about the Fortran 77 or Fortran 90 numerical libraries working behind everything. Not my favourite talk from him, but it is a good one.
Interesting how differently computer science gets described. When I got my university diploma, you could get it without having to learn to program; it was a structural science, like math.
Gonna have to update this talk lol. The iPhone 1 probably won't even text or make phone calls anymore, since most operators have turned off their 2G/3G service or will soon. Just proves his point even more though.
My previous phone I had for 6 years. I got it a few years before this talk was done, and I replaced it a few years ago. Though, at the end it was slowing down to a crawl.
Currently working with legacy code and I disagree with the premise. There's nothing to love in ancient, completely object oriented, uncommented, undocumented PHP code
It reminds me of the old joke... when you're driving in the countryside and you get completely lost, then you spot a local with baling twine holding up his trousers and hay in his hair, and you stop, open the window and ask "excuse me please, how can I get to such and such a place". The local replies "well, I wouldn't go from 'ere".
These days I often think: "if aerospace or civil engineering were executed the way IT is, planes would drop out of the sky and bridges would be closed and rebuilt every couple of years." We should stop thinking that newer is better. It's simply not true! It's a different way of doing the same shit we've done for 50 years. And what we used to do in mere kilobytes now takes hundreds of megabytes, and it doesn't provide a lot more business processes. I don't care that my plane seats are still reserved on a mainframe, probably with COBOL or REXX.
The people who have to make changes to those systems care, though - and you care that the systems continue to work. So what do you do when you need a change to a COBOL system and nobody knows COBOL anymore? That's the reality you are dealing with. And compared to something like architecture or mechanical engineering, software engineering is incredibly young - want to know how often steam engines in the early days simply exploded? And as a final little thing: bridges are inspected and overhauled every couple of years, because if they are not maintained and adapted to current circumstances, they might start to fail. The same is true for planes - they get maintenance even more often, and a lot of their parts are replaced to make them as reliable as they are. Legacy code is the equivalent of a bridge that has not been inspected for a hundred years. It looks solid, and plenty of people use it just fine... but would you put a truck over it? Or build a rail line over it?
@@SharienGaming steam engines blew up, sure, but they fixed those problems within 10-20 years. Airplanes went from barely able to fly 100 meters to supersonic in 43 years! Software only gets slower the more computing power we create 😆 Due to kids no longer knowing how to write assembly, or even C or C++, and thus resorting to Python and JavaScript. IT isn't young, it's over 70 years old! I am an EE by education, and to my amazement I had a better education in software engineering and system design than CS graduates. And I don't see commitment (especially from young engineers) to deep dive into legacy code. It's legacy not because it's old; it's legacy because devs don't want to dive into it and maintain it, yet it is important, since it is running and making money.

And concerning maintenance: that's all mechanical parts that experience stresses and the elements. Code doesn't do that! Code doesn't age, doesn't deteriorate. And electronics don't get serviced anymore, because we engineered the fuck out of them, to the point that my C64 from 35 years ago still works, and my ST as well. Legacy code is a psychological problem, not a technical problem. People don't want to deep dive and put in the effort to understand something. Most devs born after 1985 don't even have the basic experience to understand the lower-level stuff that was normal to use up to the early 00s, because everything is abstracted away. Most devs I know can't even use a command line! Seriously!!! So I disagree: we as a branch are immature and we don't want to mature. Otherwise we wouldn't create a new language every other week and a new framework every two years. And yet the stuff that the generation before me made still rules supreme: C/C++, assembly.

Yesterday I solved a bug in a sort-of-legacy system, a CI/CD pipeline developed by a colleague who left. I understand Azure Pipelines well, and even though his way of working is totally not my style, I was able to find the bug and solve it. Even more interesting was that where the issue showed up was, of course, the system it was deployed to: Azure Data Factory. I had never even touched ADF (I don't care for click-ETL), but hey, my colleagues were stuck. So I just asked them: how do you do this? How does that work? Can I see this? (I could have Googled it too, if they hadn't been there.) And within 20 minutes I had found the root cause, and 10 minutes later the solution. A simple oversight in the CI/CD pipeline, but one that would never have happened in electrical engineering or aerospace! And one that would have been easily caught if we had taken the time to mature and had mature tools (we work with stupidly immature tools). He had forgotten to parameterize a user principal. And he had forgotten because he started development targeting only a single environment, despite me saying time and time again: start to develop with multiple environments in mind, we are going to deploy to other environments! But that's harder and more time-consuming, and you have to thoroughly test and think. And he left that part for me to solve after he'd left. Now I could have said: "it's crap, it's not multi-environment, we need to remake it." Instead I was pragmatic and thought: "damn, he has a superb SQL deployment scheme, and he got ADF deployment and the security checks working. It would take me 3 months to do all that, and I'd need to learn ADF; let's just make sure this becomes multi-environment."

And a proper toolset, which Microsoft should have made, would parse the YAML and warn when parameter assignments are literal assignments, because generally you don't want that. I ran into bugs with that YAML parser that made me wonder: how the fuck did Microsoft test this? Because you need to make a fucking effort to fuck up a logical operation, and to be unable to parse certain things dynamically while others you can. That is bad engineering! And you wouldn't see that in civil or aerospace engineering.
@@SharienGaming oh, and if you want to earn big money, spend a month or 3 learning COBOL! Those guys earn 200 euros an hour at the bank! I would learn it if the subject matter weren't boring financial processing. Stuffy business processes are not my thing. I also wouldn't do it in C++ or Java if they asked - and they have asked 😄
@@CallousCoder code doesn't age? Don't make me laugh. I've seen and had to maintain code that has aged terribly, because the technologies it was built with or integrated with have become insanely outdated and unsupported (in part to the point that not even documentation on them exists anymore). Sure, the lines of code have not changed in that time... but everything around them has. And that code has had at least 4 different teams work on it, probably more, and you can absolutely tell, because each new team had a different approach and probably didn't understand half the tricks and conventions the people before them used. And that's how the code reads. I can deep dive into shit like that and figure it out... but in the same time I can probably build half the system from scratch as well.

Do you know why new higher-level languages are used so much rather than low-level languages, even though the performance is lower? Because maintainability is much more valuable than speed. Sure, you can optimize the shit out of C code (even more so assembly), but good luck to anyone else understanding what the code does when you need to make changes. Heck, good luck understanding it yourself when you look at it a year later. That is why methodology and languages change so much: because we have learned a lot from the past and from the cost of building software the old way. And yes, electronic computers may be 70 years old... but software engineering as a discipline is still developing and significantly younger than that.

And also: errors like the one you described from your colleague HAVE happened and do happen in aerospace engineering. Yes, they do get caught by stringent testing and layer upon layer of redundancies and checks... and even with that, sometimes they make it through anyway, because humans are humans and we make mistakes. That's also why methods like test-driven development, code reviews, integration testing, pair programming and many more exist: they provide multiple levels of checking for mistakes. But it takes time for something like that to propagate. With aerospace engineering, matters were likely sped up significantly by governments imposing legal requirements on procedures, because there were always lives on the line. With software? Often only the engineers care, and their managers often don't give a damn, because testing takes time and slows feature releases down. Mind you, they usually get the bill when everything slows down from accumulating technical debt and errors getting through... but that rarely makes the manager learn the correct lesson.

So if you can please get off your high horse, then I'll get off your lawn.
@@SharienGaming now your last paragraph is what matters. Other industries are mature because they are bound to procedures, because lives are at stake. They therefore take the time to engineer something and pass the certification tests. But your solutions matter too! So managers and devs should care more about quality and maturity than about the quick-to-market, new-shiny-thing idiocy that reigns supreme and is the reason for the terrible systems we have today. And software engineering really started in 1947, when Kathleen Booth invented assembly opcodes to program with fewer errors (see, back then they were engineering).

Another reason for our branch's immaturity is the fact that software is too easy to fix. I often hear, or read in release notes: "there's a bug, but we'll release it and fix it later." That doesn't happen in proper engineering fields. When did you ever buy a car and get a note saying: "we know that sometimes your indicator lights will fail, but we will probably fix it next time you come in for an oil change"? 🤣

I probably have a far more sinister view of this increasingly immature branch because I studied EE: you can't really change hardware, so you must get it right before you send it into the world! And I started my career designing and building bespoke research tools for science projects: electronics and embedded software for parts of a solar telescope, and my own whole project was designing measuring equipment that could autonomously measure the fluctuations in the height of snow and ice on Antarctica. Our team designed certain parts of satellites (not me, as I wasn't senior enough yet; only the old geezers who had successful smaller projects - something we also should do in IT: proven projects of your own make you a senior, not your number of projects or years in the business). But we truly engineered the solutions, and we made sure that our stuff was resilient and redundant, power consumption was known, and batteries were over-spec'd, because you have no influence on the weather and I didn't feel like hopping on a plane and hiking 2 days through a blizzard on the Antarctic ice sheets to swap a battery or wipe the snow off a solar panel 🤣 But we couldn't fit a car battery either, because researchers had to hike with those machines to put them wherever they wanted the measurements taken. I heard 3 years ago that 22 of the 30 Antarctic snow and ice measuring devices that I'd designed in 1993 still work and are operational and still "maintained". Hmmm, low-level assembly rules, because you have to do everything yourself and there are no obscure, poorly designed libs or frameworks with unknown behaviours - especially on a bare-metal, proprietary microcontroller system. 😄 And your software didn't decay; your hardware platform became obsolete, which is a bit of a misnomer: if it runs, it runs. Again a sign of the immaturity of our branch. I also worked at IBM (2 years; didn't like the company), but those lovely OS/390 mainframes can still run all the code from the 360! That was 37 years ago, and I know, at the bank I consult for now, that their 60s code base for transaction processing still runs, so that's 60+ years old. And there are no plans to move it from COBOL to Java; most has been migrated away as a cost saving, because, as you said, COBOL devs are rare and expensive, but mainly because running a mainframe with support is the real expense. So they reduced the number of mainframes to only 3; again, proper engineering from IBM. We have two datacenters, so two mainframes, but you need a tiebreaker in case transactions give different outcomes. How many devs these days even know how to write parallel transaction systems that guarantee the right amount of money is put into your account or taken out? Those are aerospace-engineering practices.

So let's agree to kick these careless managers in the balls, take the time to properly test and engineer, and give them the fucking finger if they want to release shit that hasn't been properly tested. Because as a software consumer, I too am getting fucking annoyed with buggy OSes and software - especially the crap that fucking breaks after a mandatory update. Hey, Apple and Microsoft!!!
The vid started with a song, and that's where I'm at. I gave the video a like. It must always, and under any circumstances, be supported when tech people try to be funny. Something destroyed it in them and it has to be regained. Regrown. Nourished. Given a big enough cohort, statistics kick in and there can be an evolution; brilliance, genius even. It may lead to hilariously brain-melting fun. They can AI and stuff! So please let's have that evolution. It's just a like.
The good ole days of writing and running Hello World in bytes of binary and memory usage. I think assembly should become a staple again in today's education, so that we stop developing bloatware web crap and terribly inefficient desktop apps. I get happy when I see what Blender and DaVinci do: low-level, relatively small, high-performance software. That's what we need more of!
Do we though?... The amount of skill and time it requires to do what you recommend versus the 'bloatware' approach does, evidently, not weigh up against the benefits, or at least it doesn't pay enough to entice would-be developers to learn and practice such things. Hardware resources are too cheap compared to skilled dev labor, so no company is going to pay for it when they can make MORE money selling 3 'bloatware web crap' apps instead of 1 'streamlined optimized efficient' app. The customer doesn't give a shit. Unless your app literally drains all of their phone batteries, they're not gonna complain... Hell, Facebook (back when I still used it) would drain your battery for no particular reason, and that was way after they became the de facto social media platform (so they had PLENTY of money to invest in a streamlined app if they so chose). The only mitigating argument I can imagine that would support your claim despite this is the environmental one (all those extra calculations do consume more electricity, after all)... yet I don't think "forcing developers to write efficient code" is going to make it very high on your local political 'green' party's to-do list.
@@ayporos time is pretty much the same. A good C/C++ developer codes as quickly as a JavaScript noob; that comes from experience, and C++ is, after all, a pretty high-level language. If you write bloatware, you obviously spend more time writing all that extra inefficient code. I often find it frustrating when I need to do Python or PowerShell that I have to look up trivial things I know how to do in C/C++. I even rewrote a program we use in our pipeline, which searches and replaces a key-value-pair list from a JSON file in a target file and encodes it, as a Rust application, because starting a native application is so much faster than a PowerShell one, where the build agent needs to download the libraries. It would take 30 seconds to start; mine, less than a second. And doing that for several deploy tasks does add up.

But I agree that customers don't give a hoot, and that's the whole problem. It sucks the fun and professionalism out of the job. And I'm no green tree hugger; I'm not afraid of climate-change doom scenarios. But this is indeed a very easy reduction in energy abuse. And my customer really wants the cloud resources to be used as little as possible to reduce costs. Ironically, we are wrangling very inefficient Azure cloud processes to scale resources up and down and to turn resources off in downtime, which requires a lot of extra development time in runtime management pipelines. If we efficiently coded that ETL process in C++ or Rust, it would probably run within 10-15 minutes, compared to the 45-70 minutes using SSIS. And all the stale data I would have removed already, because stale data costs! I now have to remove it with scripts when moving to the unified database (I also pushed 7 databases into one using schemas, and there's a lot of data duplication because of the 7 databases, which we want to tackle). They simply didn't think it through, and instead of making it multi-tenant they created a database instance per client, because it was "easy and fast". It's now harder, and takes longer, to undo that stupidity they came up with 15 years ago. And we can remove almost 90GB of data duplication (that's including the stale data I already remove when moving), a large share of the database size.

Back in 1995 we had a database in MUMPS that ran a whole veterinary clinic with, on average, 2500 animal dossiers. The average database was 60MB. We had on average 6 terminals connected to a 386 or 486, and 6-10 printers; large veterinary clinics had about 12-16 terminals and 20-25 printers. It was super efficient. When our company got bought, the new, dumb owner wanted a rich desktop experience. It bombed, as we predicted, because the ASCII terminals were cheap, secure, and ran super fast even on that MUMPS environment. And the same data, just with daft encoding, became on average 500MB. An invoice run became slower because of all the PostScript full-graphics nonsense. And the vets were like: "I don't give a fuck that I can print my logo in full colour. I buy preprinted paper. I want a run to be done in the same time as before. Sitting waiting on printers is more costly."

10 years later I did hardware integration for a pharmacy information system. They too worked with text-based terminals, albeit already on PCs with terminal emulation. They made the same mistake: went full-on UI. I warned the owner and told him to watch the ladies behind the counter whisk through the terminals, all single letters, no enter or tab to select a menu. They even typed ahead of the screen; it's sheer rote work. And the order-picking robots (which I integrated) are there to eliminate the time spent walking to a drawer. But no, full-on UI was the thing of the 2000s. Well, that went down the drain fast; he lost half of his customer base, because real users do mind speed. And now all these systems are mouse-heavy, and it takes so much longer to get your meds; it adds 10-20 seconds. Which is ironic, because the really hard thing (the mechanical parts of these robots) has almost tripled in speed over the last 20 years. It's a marvel of engineering, but the systems doing the registration and validation and scanning have noticeably slowed down. And I notice that in the length of the queue we wait in.

Last week, for this channel, I wrote a little program to turn movies into an ASCII-art PNG sequence (after I did a streaming version, which obviously has some dropped frames). I started out encoding a 3-minute clip in 2 hours, and I was like... that's stupidly slow. Where's the biggest slowdown? Oh, it's the Magick++ library. Hmmm, I could use Qt; I know how to image-wrangle with that (I used it in a lot of VFX projects). But then, next to OpenCV, I would need another massive library. Nope, let's wrestle with OpenCV to turn strings into PNGs; that took all of 20 minutes. Then it ran in 12 minutes, which I thought was still too slow, so I made it multi-threaded, because I wanted a clip to be rendered faster than the clip takes to play. Now I churn through a 3-minute clip in 54 seconds. And that's not even super optimized; I could optimize it further, but that would take a considerable amount of glue logic to make a thread pool that's atomically safe. It would take me an hour, to shave off maybe 20 seconds per run. I would do that for production code! But for the odd clip I'll transform, this is good enough. I know most developers these days would have stuck with OpenCV and Python and been 10-20 times slower. Why do computer vision with Python? Have some dignity, people!
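For the curious, here's a minimal sketch of the core frame-to-ASCII step. It's Python/OpenCV rather than the C++ the commenter used, and the character ramp and file name are my own assumptions:

import cv2

RAMP = " .:-=+*#%@"  # brighter pixels map to "denser" characters

def frame_to_ascii(frame, cols=120):
    # Downscale to the character grid; the 0.5 compensates for
    # terminal glyphs being roughly twice as tall as they are wide.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    rows = max(1, int(h * cols / w * 0.5))
    small = cv2.resize(gray, (cols, rows))
    return "\n".join(
        "".join(RAMP[int(p) * (len(RAMP) - 1) // 255] for p in row)
        for row in small
    )

cap = cv2.VideoCapture("clip.mp4")  # hypothetical input file
ok, frame = cap.read()
if ok:
    print(frame_to_ascii(frame))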
Blender is a great example of two worlds colliding, actually. Its core is in C++, but the UI is fully in Python. Seeing how well both the Cycles X project (C++) and the UI overhaul (Python) went in terms of increasing performance and usability, I'd say hard-fast and easy-slow languages both have their respective places in the industry.
Video rendering software is where performance matters much more. Cutting off even a few seconds of render time can be a big deal. But why would anyone these days write a web app in a low level language, e.g. some CGI stuff in C or C++? Maybe if they host it on ancient hardware with severely limited resources.
@@martinsdicis4000 well, web apps also have high workloads because of multi-user traffic. My client is a bank; if you saw the number of containers that get spun up for some customer services, it's insane! Knowing that Python is roughly 100 times slower than C/C++, and Node about 10-20 times, you could reduce the number of containers 10-100 times. That saves enormous amounts of energy (if you are afraid of climate change; not me, but most millennials who develop this bloatware are), so there you can really make a difference, and financially it's a massive saving too. If you run a single instance it doesn't matter, but we often spin up as many as 500-1000 containers at a time. And ironically, the database backends, which are properly written in a low-level language, don't croak; the redundancy we have there is mainly for data consistency and availability. So your code efficiency does matter when it comes to heavy workloads; you can save a lot. Funnily enough, my customers before the bank were energy companies, and we knew when it was tax season, because we saw the prognosis of energy demand go up. That's all because of the 3 major banks, the end users hitting those banks, and the IRS. And we're talking about as much as 40-70MW of extra power; that's about 10 wind turbines running at 100% efficiency! So imagine everybody wrote efficient code in C++ (or Rust) and we could reduce that by a factor of 10: we'd only have 7MW of extra consumption, which our clients (greenhouses) could deliver with the diesel generators that are already running anyway to heat the greenhouse and produce CO2 for the plants. The biggest growth in our energy consumption isn't cars or planes but cloud data centers, and the majority of that is stupid TikTok and Instagram videos and cat pictures 😉
Legacy code is code that is important (valuable, though it might not be directly income-related) and that we are scared to change. Why scared? We don't know what will happen. E.g. we know we are unlikely to be able to fix it quickly if we change it today but it starts playing up next month or next year: we won't know in which version of the code the fault was introduced, and we won't know how to revert, except by a clean-sweep revert to the way it is today.
30:51 Headphones on? There shouldn't be any sound from anyone else that needs to be blocked out. If you need headphones, your work environment is toxic.
Trust is great, but it hasn't ever fixed bugs before they hit the market. I really take issue with the graph comparing lines. Those novels have pretty densely populated lines of text. I'd like to see whether the Linux kernel has a lot of empty space within lines, or empty lines. Characters would be a much more representative comparison. I've written perfectly appropriate code that was mostly whitespace and curly brackets, with only a couple of words, yet extended for like 10 lines. I believe his point is still very valid, but I don't think this is a truly representative way to demonstrate it.
A lot of code editors count not just newlines but also "source lines of code", where a line counts as a SLOC only if it isn't just whitespace or brackets. Comments may also be excluded, depending on the particular implementation.
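A rough sketch of that kind of SLOC count in Python (my own simplification; real tools differ on exactly what they exclude):

import sys

# Count "source lines of code": skip blanks, brackets-only lines,
# and (naively) full-line comments.
sloc = total = 0
for line in open(sys.argv[1], encoding="utf-8"):
    total += 1
    s = line.strip()
    if not s:
        continue  # blank line
    if s.startswith(("//", "#", "/*", "*")):
        continue  # full-line comment (naive heuristic)
    if all(c in "{}()[];," for c in s):
        continue  # brackets/punctuation only
    sloc += 1

print(f"{sloc} SLOC out of {total} total lines")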
28:50 I'm that guy who leaves for fear of becoming unemployable. My résumé on the right has PHP instead of something modern and fancy like C#. I'm not even using a common framework; it's turtles all the way down on a 20-year-old base. Try selling that to some HR hiring filter. Few companies operate this way, actually making their own stuff. Most of those that do don't use PHP for it, and they don't believe anybody could have relevant project, design or architecture experience in PHP. They are nowhere near each other, and they won't pay me as well as the one currently compensating me for slowly becoming irrelevant in their basement while they hire new people who exclusively work on new product features on the far fringes of our APIs.
You're counting everything in the repo, which also includes device drivers, etc. The kernel proper (as in, the kernel/ directory) is around 100k lines (closer to 150k nowadays).
At least on older x86 systems, RAM was accessed in segments of -16kB-* and (going from memory, but I'm pretty sure) a program was always allocated at least one segment to use.

*Correction: 64kB; not sure where he got 16 from then, actually.
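A quick sketch of the real-mode segment arithmetic behind that 64kB figure (illustrative Python, not tied to any particular program):

# Real-mode x86: a 16-bit segment register and a 16-bit offset combine
# into a 20-bit linear address. The 16-bit offset is what caps each
# segment at 64kB (2**16 = 65536 bytes).
def linear_address(segment, offset):
    assert 0 <= offset <= 0xFFFF    # offsets cannot exceed 64kB
    return (segment << 4) + offset  # i.e. segment * 16 + offset

print(hex(linear_address(0xB800, 0x0000)))  # 0xb8000, the text-mode video buffer
print(2**16)                                # 65536 bytes = 64kB per segment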
i expected the american pie parody to go for one verse, one chorus, but it just. kept going. for the whole 7 minutes. I'm in awe
That intro song is amazing. Complete history lesson for a late-nineties kid like me :D now on to the talk
Hi, I really loved that song and it's been a long time since I listened to it. Could you help me out and tell me the song's name? I'd appreciate it a lot, thanks ☺️
That intro song + trailer sequence is way too long and way too self-indulgent and not nearly as insightful or even funny as it would need to have been to be bearable. I regret sitting through most of it. Everybody who still wants to give the actual talk a chance should just skip to 10:34.
PS: To make things worse, that trailer is really an ad for his company, which the speaker proceeds to praise, so they really should have ticked that paid promotion checkbox, even if the payment is the speaker's salary, because it's illegal in several jurisdictions not to.
@@ropersonline waaaay too long indeed
We had toggle switches and didn't need a keyboard.
Days of DEC equipment and EMC I/O boards.
Legacy code might not be too scary to update; there simply might be no time to update it. Why refactor working code, however messy it is? For example, we had an old database structure and code to poll wind power plants for their daily production and some data about the windmill itself (oil temperature, error reports, etc.). The code put the data into the database. The database was a huge mess: it started out for a single windmill but soon had to cope with multiple locations with multiple windmills from different manufacturers, etc. The table names were terrible; no indexing, no references. Windmill data was columns per windmill, and so there was a table for every wind park that was identical but with different numbers of columns, because one had 3 turbines and another had 5. And of course it was a Microsoft Access database. Every day some piece of code would generate an image file from the database to be updated on the web page to view the data. It was all a horrible mess. But it worked. The page looked ugly, though, but no one had time to update it, and the customers didn't care too much about the page looking like it was from the 1990s (it was from the 1990s).
I'd totally call that legacy code.
Eventually we had a little downtime and I was tasked to update the database and web page. I didn't understand that asp nonsense. You couldn't find anything, and if you did it was chinese for me anyway. Of course some SQL statements or some conditionals were readable, but I redid the database, so that was useless. The code was written, without comments mind you, some 30 years ago so I basically had to redesign it from scratch with just the data and the look and feel of the original. Plenty of stuff was planned but never implemented. Someone had the fancy idea of including a webcam view of the turbine and put a placeholder picture. It was never implemented or removed in 30 years. Sometime 10 years in or so someone had the idea of refactoring the database, but didn't bother to migrate the data structure and instead opted to rewrite the sql queries to merge the tables... It was a huge mess.
So I did a nice ACID based approach with some redundancy and indexing and constraints and references to make sure everything was sane at all time, rewrote the back end to use our new .Net framework and tied it into our new login system (the wind park thing was it's own service before). I rewrote the frontend to dynamically generate the graphs on the fly with chartjs, generated pretty, sortable and searchable and pageable tables, all with the option to print or screenshot and export to CSV. I implemented java based tabs instead of links with annoying reload times, etc. In short I did the big rewrite.
And it was better. The legacy system evolved into a tumor and wasn't touched in forever. It looked garbage and wasn't integrated with our new products. The database was inefficient and hard to read. The only thing I didn't rewrite was the data collection from the windmills itself. so the ms access database is still being written to. just every 6 hours we run a new program to query that database and transform the data into a new format. Some guy promised to rewrite that part to put it into the new database directly, but I left the company. I imagine it was never done and my ugly fix will be the new legacy code... At least I wrote a few comments for the poor guy who has to deal with that stuff. On the other hand, the old database structure had meant that inserting the data was about as difficult as querying it. The new structure you just dump a timestamp, windmill id and your power production into it, so it should be simple (tm), and thus compile first time.
So... legacy code is horrible code that no one has time to touch.
Also, for testing we have our own setup. Actually we have 3 systems. One is internal only: a database and web server filled with dummy data to develop on and run basic tests against. The other is the production database, with an experimental/beta and the stable release of the web server. New features get tested by beta users first. Then, when they have proven to work for a few months (or a few weeks if it's something simple), they get put on stable for the wider user base.
this man is the best speaker i have ever heard opine on programming! @DylanBeattie you are a legend (finding your rockstar talk was probably my highlight of 2020)
he really is
He inspired me to get back to teaching CS in addition to Math. I'm surprised I only just found him. His talks will replace my evening movie time until I run out of videos!
nerds replaced movie slot with conference videos :) ngl, conf videos are more fun anyways.
Based on experience with the world's largest legacy code project, the Y2K fix, I'd define legacy code as code that has to keep running. It doesn't matter what tools were used, what the quality was, or whether it could be fully understood, the code had to keep running. Y2K may have been the only large software project that didn't suffer from significant scope creep as it progressed. In fact, it was easier to toss stuff that wasn't still needed. The immovable deadline kept true priorities on top.
Also, the US was scraping together programmers from around the world during that time. My father (who was a programmer) got a job offer from an American company despite the fact that we were living in the middle of Russia.
I didn't need to fix my code much for the feared Y2K. Internally all my programs used UNIX time (so we will have a 2038 problem instead, but I expect to be dead by then, so not my problem) and mostly I just displayed the proper year number, like 1999. My program really didn't have much manual data entry anyway; practically all the data was collected from various devices and time-stamped to the millisecond. This presented the BIG problem: how to synchronise the clocks of every computer in the system?
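(For anyone wondering what that 2038 problem actually looks like: with a signed 32-bit time_t, the seconds-since-1970 counter tops out on 2038-01-19. A minimal sketch, assuming a 32-bit time_t; modern 64-bit systems have already moved on:)

    #include <cstdint>
    #include <cstdio>

    int main() {
        // UNIX time = seconds since 1970-01-01 UTC. A signed 32-bit
        // counter maxes out at 2^31-1 = 2147483647, i.e. 2038-01-19 03:14:07.
        int32_t t = INT32_MAX;
        // Emulate the wraparound via unsigned math (signed overflow is UB):
        int32_t next = static_cast<int32_t>(static_cast<uint32_t>(t) + 1u);
        std::printf("%d -> %d\n", (int)t, (int)next); // 2147483647 -> -2147483648,
                                                      // which decodes to 1901-12-13
    }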
The stuff I work on still hasn't moved to 8 digit dates. The suits are currently trying to figure out which consulting firm's gonna help us out of this mess...
@@OldieBugger So, how did you solve the BIG problem?
For me, Legacy Code is code that when I get assigned to work on it, I consider getting a doctor's notice exempting me from working on that particular codebase...
My oldest son worked on the stock exchanges, in NYC, up to 2009. He claimed there was a DNK Room (i.e. Do Not Know Room). It had old legacy PCs running critical programs that did stuff that kept the stock exchanges running. He claimed one of the PCs was an IBM AT! My son had learned computers on a Commodore 64 and later a Gateway 2000 with DOS 6 & Windows 3.1. He later worked for a computer workshop, for teenagers, that got old donated PCs from businesses. He learned how to repair them, make them work and network them so the kids could play games like Doom! When something crashed in the DNK room he was one of the few people who knew enough about old PCs, DOS, Windows and networking to fix them and keep the stock exchange working.
Aah yes, NYSE. I worked for them for almost 15 years. We had an absolute nightmare scenario where we had software running at a customer (because NYSE sold software at the time) and, due to a miscommunication, half the codebase of that piece of software disappeared. After all, it was no longer used anywhere... except at this one big customer in Asia, which was missed by upper management.
Nowadays, I'm pretty sure a decent AI could trawl through those millions of lines of code and produce a summary and some suggestions.
I had forgotten how long American Pie was. Or maybe I just never heard the long version...? I don't know.
Either way, do y'all have a FLAC download of that? I unironically like it.
19:27 legacy code defined by *Michael Feathers*
19:40 definition of *legacy code* by the speaker
19:44 legacy code is the code that is too scary to update and too profitable to delete
24:18
My question for this would be, "What should you do if you find yourself to be the Dungeon Master, in that scenario?". You have an added level of job security, but at what cost? Do you have a duty of care over mentoring a team in its use? Do you hold onto control, and class it as tenure? Should your role become that of an author, and produce a Dungeon Master's Guide, Monster Manual, and Player's Handbook for your codebase?
So Brandolini seems to suggest the best thing for all involved is that the DM quits to work on something else
That way you don't have to deal with your legacy, and the company will solve the critical dependency problem faster
@@hens0w That's not good advice on its own. It's the equivalent of throwing the instruction manual in the fire. You're getting rid of the one person that can actually help to refactor the code. If I found myself in that position, I'd want to take on an apprentice, or lead a team, and take time to put things down on paper. Even when you're doing a big rewrite, you need to understand what it is that you're rewriting. It's not often you can treat a rewrite as if it were a greenfield project. I would say the responsible thing would be to write a Dungeon Master's Guide (something like a C4 Model), a Player's Handbook (code annotations), and a Monster Manual (an annotated test suite). And, encourage your players to micro-GM within their own specialities.
Just as in D&D, the job of a GM shouldn't be to railroad play. It should be to facilitate play, and provide a backdrop for players to interact with in a predictable and consistent manner. They are there to arbitrate decisions, rather than to enforce a strict definition of the rules. The same should be true within this scenario. Your team lead shouldn't dictate your code line-by-line; and they shouldn't railroad decisions through when you have valid concerns or suggestions.
In RPGs, the players also have a responsibility to keep checks on the GM, and unionise against bad roleplay, where found. An inexperienced, or just plain bad GM can ruin a game with even the best players. Players should be able to tell the GM that they're being a dick; and the GM should be able to take those criticisms to heart, and actually make changes for the better, even at their own expense. The roleplay is more important than the roleplayer.
@@ApacheGamingUK if you found yourself in that position you should use the insane amount of leverage you have to form a labor union
@@ApacheGamingUK Also rolling for initiative makes for really structured meetings :)
From 1997 to 2001 I worked for a major insurance broker that had a critical sales tracking program written, by a contractor, in MS Access & VB. The contractor deliberately made the code obscure (like "Function 105B".) It was networked all over the country & internationally. Part of my job was to maintain it, make sure the data got updated, run it and provide results to top management. The contractor was often not helpful about problems they considered as impacting proprietary code. They later made a commercial version of the program and sold it to other insurance companies.
Here's my take: legacy code is code that has to run according to important but unknown requirements.
> Legacy code is code that's too scary to update and too profitable to delete.
Unknown requirements make the code scary to update. Importance to the customer and profitability to the seller are flip sides of one another.
I think my definition gets at the core of legacy-ness, with Beattie's observations being the direct consequences.
This is the best description of legacy code I've heard yet lol. So true about the unknown requirements. It's tough too knowing the code we write today eventually becomes legacy code, and the requirements and blockers leading us to specific implementations will be lost with time.
@@seancpp Especially since a lot of the requirements are arbitrary business needs rather than technical ones. It would be nice to have some easy marker in the code to say "nothing bad will happen if you change this"
@@traveller23e That's what whitespace is. Unless you're writing in Python… or Whitespace.
Fascinating, so my former company was a great educator. We had an intern and he had to design a hardware platform for an application. And in his report it said it requires two CPUs. I was like: “yes you read the software supplier’s requirements well. But why?”
-“it’s twice as fast!”
“Really?”
-“Of course two CPUs is twice the computing capability!”
(This was BSc level education!!!)
“So you want to become a developer! Here’s your first development task. Install on that machine you’ve configured and ordered a C compiler. Write a little program that only does while(1) { } and run that.”
It took him 5 days to get everything installed and compiled. Because in school, toolchains are installed and Java is simple to compile!
He ran it and I heard: “huh?!?!”
So I asked what he saw. He explained that only one cpu is loaded and only reached 95% not even 100%.
So I asked him to run a second instance.
And he noticed they’d now run pretty much on two CPUs although the cpu affinity sucks (welcome to windows 2000 scheduler I said). So he learned about SMP and OS schedulers proactively. And I knew that this kid would go places. Because I did that 15 years prior too.
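(For reference, the intern's entire first program is about as small as a program gets - a sketch, not the actual code from the story:)

    int main() {
        while (1) { }  // spins one thread flat out: one core goes to ~100%.
                       // Run a second instance and the scheduler puts it
                       // on another core - the whole SMP lesson in 3 lines.
        return 0;      // never reached
    }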
He never finished his BSc, because it was useless and frustratingly management focused. But he’s the best developer in his company. Because he knows how to learn new stuff and research it and question hypotheses.
@Sam he came to us when he was a student! 3rd year! That’s how terrible these colleges have become. A lot of stupid management courses and hardly any actually practical ones. That’s what you get when IBM and Getronics have influence on the syllabus. Two big corporations who need to hire external engineers because they don’t have them on the payroll.
Education has been dumbed down, and this was 15 years ago; it’s even worse now!!! Pay 40k a year to learn crap you could learn from books in a quarter of the time.
@@CallousCoder my experience in uni so far has struck me as them being pulled in two directions. Students and employers demand "practical" knowledge but its not a fucking "coding degree". Its a computer science degree. The "impractical" stuff is important, but its not obvious to most people.
@@shdowdrgonrider well, this is an interesting point. In the Netherlands we have (had) trade schools at BSc and non-BSc level where you’d actually learn the trade. In my case I did trade school electrical engineering - with more CS and programming than BSc CS students get nowadays. There you were supposed to learn the applied knowledge and only breeze over the theoretical academic knowledge. The student I wrote about was a BSc trade school student.
But that has been thrown overboard in favor of the academic non-applicable syllabus. And frankly CS is not a 3 year academic course unless you deep dive into a single field. Basic data structures and design is something you could easily do in 2 or 3 semesters.
Academic courses always had the problem that the graduates don’t really have value for the marketplace because it’s sheer theoretical knowledge. They aren’t great at coding when they leave, they aren’t great at networking or sysadmin or design. So what is the value of that education? Learning for learning’s sake seems to be a non-valuable approach. Just like a degree in ancient linguistics (my cousin is a professor in that). Fun and all, but it doesn’t make you employable or useful in today’s society.
In my opinion, you go to school to become employable or to become able to make your own business. But ironically business schools don’t teach you how to create and run a business. And Computer Science doesn’t teach you how to actually work in an IT job effectively.
Frankly, I don’t see the value anymore in today’s higher educational system.
You still have to learn the actual details and applicable knowledge yourself. I don’t need a degree in anything to learn that.
Back when I went to “college” aka trade school it made sense, because you didn’t have access to SMD soldering ovens, or heavy tools or expensive microcontrollers and EPROM burners and CAD/CAM software back then. So the value was sheerly in the practical subjects because of those tools. The whole theory (and there was a lot of it) was/is well documented and you can just find it for free online. I didn’t need a teacher regurgitating it verbatim from a book.
But CS, linguistics, business, arts is something you can do all by yourself. And do it faster and more affordable.
And don’t get me wrong academic studies for people who sheerly want to be academic is fine. But don’t bother me as a tax payer with your learning hobby 🤣
And that’s what was so great about the separation of trade schools and university (MSc and PhD) levels until 20 years ago. When you wanted to become a rich tradesman you went to an HBO or MBO trade school. And if you wanted to solely be in academia, living on a meager research salary, you went to a university and did the whole MSc, PhD and professor route.
@@shdowdrgonrider I think Dylan’s example of the “sheds” he learned to make in university was very sound.
I guess in tradeschool we didn’t get past a cottage either. But it at least was residential 🤣
Talk starts at 10:36
a 10 min song woah
@@knuti27 8 minutes of song 2 minutes of video clips
Best post here in the comments, life saver
Honestly, that song was worth it though.
Choose not to miss the song! What an apt depiction of how we do things. Don't miss the opportunity to laugh at yourself!
I'm now 43 years old. When I turn 50 I'll be setting the clock for retirement, at that point whatever knowledge set I have I'll be sticking with knowing there will be PLENTY of stuff out there in those languages that'll need quiet maintenance forever.
I should change my job function description to Dungeon Master...
In the early 2000s I started a new job. I inherited a 1.0 codebase that consisted of pure HTML with inline PHP3 (including MySQL queries); the developer that created it had left the company a few weeks prior, so no handover. Our first task was to adapt (rewrite) everything in the then newly adopted MVC framework, so my (also newly hired) colleague and I did, and after 3 months the core functionality was up and running in production. During the next year or so we added the less urgent features that we initially left out. In the years that followed, new paradigms were adopted and like many companies we migrated many parts from the monolith to microservices; however, parts that just worked were left mostly as they were, and to this day about 15k lines of my original 2004 code survive, written in pre-IDE days with a text editor. I did however add some test cases a few years back, just in case one day someone needs to change something without me there.
Wonderful content. @DylanBeattie has the ability to take 15 minutes of information and expand it to 60 minutes. In all sincerity, this is valuable information.
13:00 I'm with those new coders. Their criticism isn't that the code is old. It might be that the app is not adequate for today's demands. Or the code is unreasonably hard to maintain.
Good legacy code is wonderful. It never breaks, it gives you insight into a whole different world, and sometimes a chuckle when you find a comment that just says "TODO: fix this mystery bug" from 1996. I do however take issue with a completely different class of code: originally written with the primary objective of time to market, designed by an architect in a rush with an unclear idea of the end goal, and programmed by people who were being clocked by the minute on their implementations of vague requirements. Those sorts of codebases tend to be atrocious, even if they follow MVVM or some other modern pattern to a T.
Good video. I'm a little surprised that Dylan didn't talk about the Banking industry. I've heard they are well known for using legacy code.
I worked in 2 of the 3 major banks and yes they are.
But in both banks a lot of time is spent on decommissioning systems and cloud migrations to cut away legacy and bloat. But there is another problem there: often other consumers of systems are not willing to say goodbye to a system, for whatever reason.
We tried to decommission a system of which our system was the golden source. And we said: “tell us where to put the data and we’ll get it done.” We were already provisioning the new system, but some people were still using the old one as there was not a complete replacement.
Ironically we had a bug in a driver using that system that would hang our processing. The lead engineer, who designed the whole system and developed that agent, was sure it was a Microsoft bug. And I was like: “it’s the agent, as I’m now running MIM with the same version too on that other system you’ve built, and it chugs through nicely.”
He didn’t believe me so I said: “every time you see the agent hang, let me know and I will do a brief investigation.”
4 weeks later I had found the edge case that would every now and then (2 times a month) hang the agent. It happened when, while we were copying the source data, some data was deleted - and my colleague checked that the number of records he counted were all copied. Before, there was no deletion, because it was an operational system, and banks don’t delete data then but mark it as deleted.
So I fixed it, and my colleague was like: “waste of time, in 2 months the system is gone.”
I left almost 2 years ago. I rejoined a new team back in December and that system was just decommissioned…. But only the frontend :)
So our agent probably is also still running.
@@CallousCoder Very interesting! It sounds like the hard part isn't the building or the setup or the maintenance, but it's convincing the humans to use the new system so they can benefit from it.
What is the time frame that is expected for decommissioning from the point of starting the plan for the new system to full implementation and staff training?
@@whtiequillBj and to motivate the techs and managers to engineer the new solutions. But complacency is a big part. As we say in Dutch: “maar het werkt toch?”
“But it’s working right?”
And then those technologies take root. But it’s definitely the human factor that allows systems to live on. That’s also what Dylan says: people are afraid. I think people are too complacent to do the hard work. :)
Otherwise sandboxes would pop up and they’d be experimenting to get ownership. Although the latter used to be impossible with banks: resources had to be approved and defended. Now with the cloud that’s a lot easier.
My bank still insists I live in a local authority which was abolished 30 years ago.
That's because he doesn't work in the financial sector, rather some small startup that serves the entertainment industry. I seriously doubt casting directors would lose their s$%& if his webapp went down. Banks OTOH trade on a single commodity with their customers: trust. They can't afford mistakes. He tries to convince us that people are scared to touch legacy code. Nothing could be further from the truth. When you have code that has run error-free for a decade or longer, that is not by chance. It took a lot of debugging throughout the 60s, 70s, 80s, and 90s to get there. Now, some hotshot programmer thinks he or she can do better??? Go pound sand. That's why banks are "stuck in their ways". It would take probably another 50 years to re-invent the wheel.
I kind of fell into a codebase which is relatively new, but was written by a complete and total...not a good programmer. Not only was there no identifiable architecture or cohesive style, the code was heavily and pointlessly multi-threaded (before I started the big rewrite, there was an entire thread that did nothing but manage adjusting the size of the window).
I could really relate to this, especially the stuff about experimenting rather than reading. It is only by experimenting that you realize changing the size of a particular image also impacts a calculated floating point number in a way that breaks all the tests.
In 2000 a payment service provider was built, one of the first ones. I worked on it on and off, mainly around 2012. The tech got sold on a few times.
The nostalgia hit me when, a few months ago, I did an online payment and the very typical transaction id format popped up on my bank statement. That part, designed by a few 22-year-olds in 2000, is still running! 😂
Don't forget about COBOL, which is probably somewhere between Excel spreadsheets and VB.
This question jumped into my mind when I heard your questions about "TRUST": Do you trust yourself? Does this lead, whatever the answer is, to trusting others?
If you have code that is too scary to update or too profitable to change, that means you don't have trustworthy automated tests (your safety net) for it.
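(The usual first step toward that safety net is a characterization test: you assert what the code does today, not what a long-lost spec says it should do, and only then refactor. A minimal sketch - legacyTotal is a made-up stand-in for whatever scary routine you inherited:)

    #include <cassert>

    // Hypothetical stand-in for the inherited routine nobody dares touch.
    int legacyTotal(int net, int taxPercent) {
        return net + net * taxPercent / 100;
    }

    int main() {
        // Pin down current behaviour, quirks included, before changing anything.
        assert(legacyTotal(100, 19) == 119);
        assert(legacyTotal(0, 19) == 0);
        assert(legacyTotal(1, 19) == 1); // integer-division quirk, pinned on purpose
        return 0;
    }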
640 was enough to hold DOS with a little extra. That little was enough to move a small block around as required.
In and out of Storage but only in Memory if and when needed.
You are thinking of 64k. 64k was enough to hold dos with a little extra. 640k was maxing out your dos system to the top spec, although you could do even more with extended memory on a 286.
That intro song must make it to our campfire songbook. All my coder history in it ❤️
@35:12, Harry Potter series: 4702 pages, 1.113.646 words, 30.059 uniq words, 211.023 lines, 65.025 sentences and 347 chapters... said awk in about 5 and a half seconds...
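(If you want to reproduce that without awk, the core of it is only a few lines in any language. A rough C++ sketch; I'm assuming a "word" is just a whitespace-separated token and "unique" is case-sensitive, which won't exactly match whatever the original one-liner did:)

    #include <fstream>
    #include <iostream>
    #include <string>
    #include <unordered_set>

    int main(int argc, char** argv) {
        if (argc < 2) return 1;
        std::ifstream in(argv[1]);             // e.g. the collected text as one file
        std::unordered_set<std::string> uniq;  // distinct tokens seen so far
        std::string word;
        long long words = 0;
        while (in >> word) { ++words; uniq.insert(word); }
        std::cout << words << " words, " << uniq.size() << " unique\n";
    }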
As soon as the first code fragment was shown at 18:33, I said it's new, because of await.
Markdown is also fairly new, so that's another indicator. (Of course, the output filename ending in .md doesn't actually mean that what's written to it is markdown, but that's what I first reacted to.)
I have written code that is many thousands of lines long, but never thought about the fact that it is more lines than a lot of complete books. It does make you think.
Legacy code is the code that's costing you millions because changing anything is even more expensive (or seems like it is). Just working is not enough.
After witnessing a dark agile scrumfall flexible scope but fixed price greenfield project I will never complain about lousy legacy code anymore.
I never graduated in computer science, no. I started working on programming right in the middle of my studies. I was hired to write COBOL code, but it didn't really give me any challenge. Then I moved to writing Turbo Pascal (4.0 if my memory serves me). A bit later I changed companies and I started writing in C. Oh, I still love that language! There's nothing you can't do with C. Operating Systems have been built on C code.
Your analogy of legacy code with books is interesting... Books have a table of contents, are factored into chapters and have an index in the back. We can still learn from history....
He talks about finishing projects, which I empathize with.
But it's all tied to the web now, and that chimera vehemently refuses to uphold any legacy responsibilities.
The whole misguided notion of "don't use this project, it's dead - don't you see it hasn't been updated in over 3 months" is hegemonic now. Constantly arising issues is the new metric of vitality.
i keep coming back to listen to the intro song.. its so good!!
Great stuff. Lots of fun, well sung as well. Impressed.
Isn't this a core concept of microservices: that any component is designed to be replaced by a rewrite that's within budget? I.e. limited in scope, well-defined, independent, dependency-injected?
You don't need microservices for that; properly written code will have at least most of those properties as well.
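(For what it's worth, "designed to be replaced" mostly boils down to callers depending on an interface rather than a concrete implementation, microservice or not. A sketch with made-up names:)

    #include <memory>
    #include <string>

    // The seam: callers only know this interface.
    struct RateSource {
        virtual ~RateSource() = default;
        virtual double rateFor(const std::string& currency) = 0;
    };

    // Today's messy implementation. Tomorrow's rewrite just implements
    // the same interface and gets injected instead - a rewrite that
    // stays "in the budget" because its scope ends at the interface.
    struct LegacyRateSource : RateSource {
        double rateFor(const std::string&) override { return 1.0; }
    };

    struct Invoicer {
        explicit Invoicer(std::unique_ptr<RateSource> r) : rates(std::move(r)) {}
        double convert(double amount, const std::string& cur) {
            return amount * rates->rateFor(cur);
        }
        std::unique_ptr<RateSource> rates;
    };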
Best intro music I've ever heard 🎶🎵
This song is going to be stuck in my head for days.
22:53 I can relate. Not only because I used to design and code programs, but also because I play chess and need to think about what to do.
Never forget: Either your code is crap and will be thrown away or it will become legacy code.
You either die to unit tests or live long enough to become the legacy code.
It can be terrible to begin with and become legacy code regardless. Skipping the whole thrown away step.
@@SianaGearz You're right. Don't know what I was thinking, as I often enough see code that should have been thrown away before deploying.
So many memories! I used Turbo Pascal a bit. Coding MASM I could control the world. I looked forward to Windows 1 arriving. And I did do a complete rewrite on a Basic app in early 2000s that used a lot of a$, b$ variables. Made it object oriented. Somehow figured out what it actually did and didn’t do. That new app ended up talking to an outside computer using web services. What fun!
What are you talking about?
C# is Turing complete. It can literally do anything computable.
So are Brainfuck and XSLT. Doesn't mean it's practical.
PowerPoint is Turing complete as well... but I've never managed to program any microcontroller in PPT
C, C++, Rust, ASM are your options. The most widely accepted of those is C
As there exists no infinite memory, no Turing-complete machine is possible.
@@NataliaBazj without waiving the memory being infinite, Turing Completeness becomes utterly useless as a concept, so why even bring it up?
I get a feeling that good code is like a bookshelf full of tiny books. Sure, there may be Brazilians of lines of code, but you don't have to read the entire Encyclopedia of Code. You'll get to it, but each piece should be self-contained. Today you're debugging the About page. As long as the About page doesn't require New Age crystal prayer medicine (which is divided into its own set of books) you should be able to put your shed-building knowledge to good use, and generalise it as you go. But yeah, they pretty much throw you in that crystal prayer thing and expect magic to happen.
We had toggle switches and didn't need a keyboard.
Days Of DEC equipment and EMC io boards.
The first time I've seen a conference talk with a mid-roll TH-cam ad.
My only problem with legacy code is that my current bosses won't trust me to refactor the shit out of it, which I know I could, because I've done way harder things in the past. It's bad enough to keep using Struts 1.0 in 2022, but when you're told again and again that the unmaintainable, inefficient code that's even today used as a template for new stuff is something they know is wrong but cannot be improved because we lack the necessary billable hours to do it - which of course causes way more work hours to be billed as bug fixes - you know the people you work with/for (a) are full of it, (b) feel threatened by anyone who really tries to create quality software, because that'd mean that what they produce on a daily basis is basically shit, and (c) deserve to keep suffering the stressful work environment they perpetuate by making the hole they're in deeper every day. It's really frustrating to be an engineer that gets paid to do crappy work for no good reason.
If Dylan were working in computational science or molecular modeling, this talk could have been entirely about Fortran 77 or Fortran 90 numerical libraries working behind everything.
Not my favourite conference from him, but it is also a good one.
In case anyone else was wondering, the Star Trek TNG episode at 48:27 is "Force of Nature" :)
Interesting how differently computer science is described. When I got my university diploma, you could get it without having to learn to program; it was a structural science, like math.
this is like Max Payne 3, 90% cut-scenes 10% content
Was this intro song written in Rockstar?
Gonna have to update this talk lol. The iPhone 1 probably won't even text and make phone calls anymore, since most operators have turned off their 3G service or will soon. Just proves his point even more though.
GSM/CDMA (i.e. 2G) is often still in service though afaik, and that's what the first iPhone had - the second had 3G and was called "iPhone 3G".
My previous phone I had for 6 years. I got it a few years before this talk was done, and I replaced it a few years ago.
Though, at the end it was slowing down to a crawl.
Currently working with legacy code and I disagree with the premise. There's nothing to love in ancient, completely object oriented, uncommented, undocumented PHP code
see, PHP is an exception in that it was never enjoyable in any way shape or form
It reminds me of the old joke... when you're driving in the countryside and you get completely lost, then you spot a local with baling string holding up his trousers and hay in his hair, and you stop, open the window and ask "excuse me please, how can I get to such and such a place".
The local replies "well, I wouldn't go from 'ere".
These days I often think: “if aerospace or civil engineering were executed the way IT is, planes would drop out of the sky, and bridges would be closed and rebuilt every couple of years.”
We should stop thinking that newer is better. It’s simply not true! It’s a different way of doing the same shit we’ve done for 50 years. And what we used to do in mere kilobytes now takes hundreds of megabytes, and it doesn’t provide a lot more business processes.
I don’t care that my plane seats are still reserved on a mainframe probably with Cobol or REXX.
the people who have to make changes to those systems care though - and you care that the systems continue to work... so what do you do when you need a change to a cobol system and nobody knows cobol anymore?
thats the reality you are dealing with
and compared to something like architecture or mechanical engineering... software engineering is incredibly young
want to know how often steam engines in the early days simply exploded?
and as a final little thing... bridges are inspected and overhauled every couple of years, because if they are not maintained and adapted to the current circumstances...they might start to fail
same is true for planes - they get maintenance even more often and a lot of their parts are replaced to make them as reliable as they are
legacy code is the equivalent of a bridge that has not been inspected for a hundred years... it looks solid... and plenty people use it just fine... but would you put a truck over it? or build a rail line over it?
@@SharienGaming steam engines blew up, sure, but they fixed those problems within 10-20 years. Airplanes went from barely able to fly 100 meters to supersonic in 43 years! Software only gets slower the more computing power we create 😆 Due to kids no longer knowing how to write assembly or even C or C++, and thus resorting to Python and JavaScript.
IT isn’t young, it’s over 70 years old!
I am an EE by education and to my amazement had a better education in software engineering and system design than CS graduates. And I don’t see commitment (especially by young engineers) to deep dive into legacy code. It’s legacy not because it’s old; it’s legacy because devs don’t want to dive into it and maintain it, even though it is important, since it is running and making money.
And concerning maintenance: that’s all mechanical parts that experience stresses and the elements. Code doesn’t do that! Code doesn’t age, doesn’t deteriorate.
And electronics don’t get serviced anymore. Because we engineered the fuck out of them, to the point that my C64 from 35 years ago still works, and my ST as well.
Legacy code is a psychological problem, not a technical problem. People don’t want to deep dive and put in effort to understand something. Most born after 1985 don’t even have the basic experience to understand the lower level stuff that was normal to use up to the early 00s. Because everything is abstracted away. Most devs I know can’t even use a command line! Seriously!!!
So I disagree, we as a branch are immature and we don’t want to mature. Otherwise we wouldn’t create a new language every other week and a new framework every two years.
And yet the stuff that the generation before me made still rules supreme: C/C++ and assembly.
Yesterday I solved a bug in a sort-of-legacy system: a CI/CD pipeline developed by a colleague who left. I understand Azure Pipelines well, and even though his way of working is totally not my style, I was able to find the bug and solve it. Even more interesting was where the issue showed: of course, the system it was deployed to, Azure Data Factory. I had never even touched ADF (I don’t care for click ETL), but hey, my colleagues were stuck. So I just asked them: how do you do this? How does that work? Can I see this? (I could have Googled that too, and would have if they weren’t there.) And within 20 minutes I had found the root cause, and 10 minutes later the solution. A simple oversight in the CI/CD pipeline, but one that would never have happened in electrical engineering or aerospace! And one that would have been easily caught if we had taken time to mature and had mature tools (we work with stupidly immature tools). He had forgotten to parameterize a user principal.
And he had forgotten because he started development and only targeted a single environment. Despite me saying time and time again, start to develop with multi environment in mind. We are going to deploy to other environments!
But that’s harder and more time consuming and you have to throughly test and think.
And he left that part for me to solve after he’d left.
Now I could’ve said: “it’s crap it’s not multi environment. We need to remake it.”
Instead I was pragmatic and thought: “damn, he’s got a superb SQL deployment scheme, and he’s got ADF deployment working, plus security checks. It would take me 3 months to redo all that and I’d need to learn ADF; let’s just make sure this becomes multi-environment.”
And a proper toolset, which Microsoft should’ve made, would parse the YAML and warn when parameter assignments are literal assignments, because generally you don’t want that.
I ran into bugs with that YAML parser that make me wonder: how the fuck did Microsoft test this? Because you need to make a fucking effort to fuck up a logical operation, and to end up unable to parse certain things dynamically while others you can. That is bad engineering!
And you wouldn’t see that in civil or aerospace engineering.
@@SharienGaming oh and if you want to earn big money, spend a month or 3 learning COBOL! Those guys earn 200 euros an hour at the bank!
I would learn it if the subject matter weren’t boring financial processing. Stuffy business processes are not my thing. I also wouldn’t do that in C++ or Java if they asked - and they have asked 😄
@@CallousCoder code doesnt age? dont make me laugh
ive seen and had to maintain code that has aged terribly - because the technologies it was built with or integrated with have become insanely outdated and unsupported (in part to the point that not even documentation on it exists anymore)
sure the lines of code have not changed in that time... but everything around it has
and that code has had at least 4 different teams work on it... probably more... and you can absolutely tell, because each new team had a different approach and probably didnt understand half the tricks and conventions the people before them used... and thats how the code reads
i can deep dive into shit like that and figure it out... but in the same time i can probably build half the system from scratch as well
do you know why new higher level languages are used so much rather than low level languages, even though the performance is lower? because maintainability is much more valuable than speed
sure you can optimize the shit out of c code (even more so assembly) but good luck for anyone else understanding what the code does when you need to make changes... heck good luck understanding it yourself when you look at it about a year later
that is why methodology and languages change so much... because we have learned a lot from the past and from the cost of building software in that way
and yes, electronic computers may be 70 years old... but software engineering as a discipline is still developing and significantly younger than that
and also... errors like you described from your colleague HAVE happened and do happen in aerospace engineering... yes they do get caught by stringent testing and layer over layer of redundancies and checks... and even with that...sometimes they make it through anyway... because humans are humans and we make mistakes
thats also why methods like test driven development, code reviews, integration testing, pair programming and many more exist... they provide multiple levels of checking for mistakes
but it takes time for something like that to propagate... with something like aerospace engineering matters likely were significantly sped up through governments imposing legal requirements on procedures, because there always were lives on the line... with software? often only the engineers care and their managers often dont give a damn, because testing takes time and slows feature releases down... mind you they usually get the bill for everything slowing down from accumulating technical debt and errors going through... but that rarely makes the manager learn the correct lesson
so if you can please get off your high horse then ill get off your lawn
@@SharienGaming now your last paragraph is what matters. Other industries are mature because they are bound to procedures, because lives are at stake. They therefore take the time to engineer something and pass the certification tests.
But our solutions matter too!
So managers and devs should care more about quality and maturity than about the quick to market, and the new shiny thing idiocy that reigns supreme and is the reason of such terrible systems that we have today.
And software engineering really started in 1947, when Kathleen Booth invented assembly opcodes to program with fewer errors (see, back then they were engineering).
Another sign of our branch’s immaturity is the fact that software is too easy to fix. I often hear or read in release notes: “there’s a bug, but we’ll release it and fix it later.” That doesn’t happen in proper engineering fields. When did you ever buy a car and get a note: “we know that sometimes your indicator lights will fail, but we will probably fix it next time you come in for an oil change” 🤣
I probably have a far more sinister look at this increasingly immature branch, as I studied EE: you can’t really change hardware, so you must get it right before you send it into the world! And I started my career designing and building bespoke research tools for science projects - electronics and embedded software for parts of a solar telescope - and my own whole project was designing measuring equipment that could autonomously measure the fluctuations in the height of snow and ice on Antarctica. Our team designed certain parts of satellites (not me, as I wasn’t senior enough yet; only the old geezers who had successful smaller projects - something we also should do in IT: proven own projects make you a senior, not your number of projects or duration in the business).
But we truly engineered the solutions, and we made sure that our stuff was resilient and redundant, power consumption was known, and batteries were over-specced, because you have no influence on the weather and I didn’t feel like hopping on a plane and hiking 2 days through a blizzard on the Antarctic ice sheets to swap a battery or wipe the snow off the solar panel 🤣 But we couldn’t fit a car battery either, because researchers had to hike with those machines to put them wherever they wanted the measurements taken.
I heard 3 years ago that 22 of the 30 Antarctic snow and ice measuring devices that I’d designed in 1993 still work and are operational and still “maintained”. Hmmm, low-level assembly rules, because you’ve got to do everything yourself and there aren’t obscure, poorly designed libs or frameworks with unknown behaviors.
Especially on a bare-metal, proprietary microcontroller system. 😄
And your software didn’t decay; your hardware platform became obsolete. Which is a bit of a misnomer: if it runs, it runs - again a sign of the immaturity of our branch. I also worked at IBM (2 years, didn’t like the company) - that was 37 years ago - but those lovely OS/390 mainframes can still run all the code from the 360! And I know at the bank I consult for now that their 60s code base for transaction processing still runs, so that’s 60+ years old. And no plans to move it from COBOL to Java - most has been migrated away as a cost saving. Because, as you said, COBOL devs are rare and expensive, but mainly running a mainframe with support is the real running expense.
So they reduced the number of mainframes to only 3 - again, proper engineering from IBM. We have two datacenters, so two mainframes, but you need a tiebreaker in case transactions give different outcomes.
How many devs these days even know how to write parallel transaction systems that guarantee the right amount of money is put into your account or taken out?
Those are aerospace engineering practices.
So let’s agree to kick these careless managers in the balls, take the time to properly test and engineer, and give them the fucking finger if they want to release shit that hasn’t been properly tested.
Because as a software consumer too, I am getting fucking annoyed with buggy OSes and software. Especially the crap that fucking breaks after a mandatory update - hey Apple and Microsoft!!!
I feel like I'm at the cinema watching trailers without popcorn
Wonderful music, thank you !!!
How can this crazy alien synthwave artist not love this?
Anyone else here remember that 'Oh no!' moment when the CTO decided MVP was achieved?
"Where's the 'allergic to carrots' button?" Lost it.
Genius, thanks. And I'm a Mac person, but limited Unix etc. Bye Bye American Pie - who sang that, Don McLean?
I love this song. Is it on TH-cam music?
Is there a talk here somewhere or just a cross between Weird Al and Alan Turing?
The vid started with a song, and that's where I am at. I gave the video a like. It has always and under any circumstances to be supported when tech people try to be funny. Something destroyed it in them and it has to be regained. Regrown. Nourished. Given a big enough cohort, statistics kick in and there can be an evolution; brilliance, genius even. It may lead to hilariously brainmelting fun. They can AI and stuff! So please let's have that evolution. It's just a like.
8min 33sec BUMPER intro!!! J CRIZZLE!
cant believe he actually made a 57 minute song!
What's that Excel spreadsheet talk he mentions around 27:12?
th-cam.com/video/0yKf8TrLUOw/w-d-xo.html
Felienne Hermans • GOTO 2016
@@scurvofpcp Thanks!
Eew! That legacy codometer at 17:45 looks really legacy with that USB connector.
BEST INTRO EVER. AND YES, I AM SCREAMING IT'S SO GOOD!!! :D
😁😁😁rewound the video 10 times now to listen to the intro song
Never delete the legacy code. Are you that short on disk space? Always make sure it's backed up somewhere.
The good ole days of writing and running Hello World in bytes of binary and memory usage. I think assembly should become a staple again in today’s education so that we stop developing bloatware web crap and terrible, inefficient desktop apps. I get happy when I see what Blender and DaVinci do! Low-level, relatively small, high-performance software. That’s what we need more of!
Do we though?...
The amount of skill and time it requires to do what you recommend versus the 'bloatware' approach does, evidently, not weigh up to the benefits of doing it like that... or at least it doesn't pay enough to entice would-be developers to learn and practice such things.
Hardware resources are too cheap compared to skilled dev labor, so no company is going to pay for it when they can make MORE money selling 3 'bloatware web crap' apps instead of 1 'streamlined optimized efficient' app.. the customer doesn't give a shit. Unless your App literally drains all of their phone batteries they're not gonna complain... Hell, Facebook (back when I still used it) would drain your battery for no particular reason.. and that was way after they became the de-facto social media platform (so they had PLENTY of money to invest in a streamlined app if they so chose).
The only mitigating argument I can imagine that would support your claim despite this would be the environmental one. (all those extra calculations do consume more electricity after all).. yet I don't think "forcing developers to write efficient code" is going to make it very highly on your local political 'green' party's to-do list.
@@ayporos time is pretty much the same. A good C/C++ developer codes as quickly as a JavaScript noob. That comes from experience, C++ is after all a pretty high level language.
If you write bloatware, then you obviously spend more time doing all that extra inefficient code.
I often find it frustrating when I need to do Python or PowerShell that I have to look up trivial things that I know how to do in C/C++.
And I even rewrote a program we use in our pipeline - it searches and replaces a key-value pair list from a JSON file in a target file and encodes it - as a Rust application. Because starting a native application goes so much faster than a PowerShell one, where the build agent needs to download the libraries.
It would take 30 seconds to start. Mine takes less than a second to start the task. And doing that for several deploy tasks does add up.
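(The task itself is tiny, which is the point - the startup cost dominated. His version was Rust; here is the gist sketched in C++, with the JSON parsing and encoding steps left out and hardcoded, made-up pairs standing in:)

    #include <fstream>
    #include <map>
    #include <sstream>
    #include <string>

    // Replace every occurrence of each key with its value in one file.
    int main(int argc, char** argv) {
        if (argc < 2) return 1;
        std::map<std::string, std::string> pairs = {   // normally read from JSON
            {"__DB_NAME__", "prod_db"}, {"__REGION__", "westeurope"}};
        std::stringstream buf;
        buf << std::ifstream(argv[1]).rdbuf();         // slurp the target file
        std::string text = buf.str();
        for (const auto& [key, value] : pairs)
            for (size_t p = 0; (p = text.find(key, p)) != std::string::npos;
                 p += value.size())
                text.replace(p, key.size(), value);
        std::ofstream(argv[1]) << text;                // write back in place
    }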
But I agree that customers don’t give a hoot. And that’s the whole problem. It sucks the fun and professionalism out of the job.
And I’m no green tree hugger. I am not afraid for climate change doom scenarios. But this is indeed very easy reduction in energy abuse.
And my customer really wants the cloud resources to be used as little as possible, to reduce costs.
And ironically we are wrangling very inefficient Azure cloud processes to scale resources up and down, and to turn off resources in downtime, which requires a lot of extra development time in runtime-management pipelines. If we efficiently coded that ETL process in C++ or Rust, it would run in probably 10-15 minutes, compared to the 45-70 minutes using SSIS.
And all the stale data I would’ve removed already. Because stale data costs! Which I now have to remove with scripts when moving to the unified database (I also pushed 7 databases into one using schemas, and there’s a lot of data duplication because of the 7 databases, which we want to tackle). Because they simply didn’t think it through, and instead of making it multi-tenant they created a database instance per client. Because it was “easy and fast”. It’s now harder and takes longer to undo that stupidity they came up with 15 years ago.
And we can reduce almost 90GB of data duplication - that’s including the stale data I already remove when moving.
And that’s a third of the database size.
Back in 1995 we had a database in MUMPS that ran a whole veterinary clinic with on average 2500 animal dossiers.
The average database was 60MB. We had on average 6 terminals connected to a 386 or 486, and 6-10 printers. Large veterinary clinics had about 12-16 terminals and 20-25 printers.
It was super efficient. When our company got bought, the new dumb owner wanted a rich desktop experience.
It bombed, as we predicted. Because the ASCII terminals were cheap, secure, and ran super fast even in that MUMPS environment. And the data - the same data, just with daft encoding - became on average 500MB. An invoice run became slower because of all the PostScript full-graphics nonsense. And the vets were like: “I don’t give a fuck that I can print my logo in full colour. I buy preprinted paper. I want a run to be done in the same time as before. Me sitting waiting on printers is more costly.”
10 years later I did hardware integration for a pharmacy information system. They too worked with text based terminals albeit already on PCs with terminal emulation.
They made the same mistake. Went full-on UI, and I warned the owner and told him to watch the ladies behind the counter whisk through the terminals. All the single-letter, no-enter-or-tab menu selections. They even typed ahead of the screen. It’s sheer rote work. And the order-pick robots (that I integrated) are there to eliminate the time walking to a drawer.
But no, full-on UI was the thing of the 2000s. Well, that went down the drain fast. He lost half of his customer base. Because real users do mind speed.
And now all these systems are mouse heavy. And it takes so much longer to get your meds. It adds 10-20 seconds.
Which is ironic, because the real hard thing (the mechanical parts of these robots) has almost tripled in speed in the last 20 years. It’s a marvel of engineering, but the systems doing the registration and validation and scanning have noticeably slowed down.
And I notice that also with the length of the queue we are waiting in.
Last week, for this channel, I wrote a little program to turn movies into an ASCII-art PNG sequence - after I did a streaming version, which obviously has some dropped frames.
And I started with encoding a 3 minute clip in 2 hours. And I was like… that’s stupidly slow. Where’s the biggest slowdown? Oh, that’s the Magick++ library. Hmmm, I could use Qt - I know how to image-wrangle with that (I used it in a lot of VFX projects) - but then next to OpenCV I would need another massive library. Nope, let’s wrestle with OpenCV to turn strings into PNGs; that took all of 20 minutes. Then it ran in 12 minutes, which I thought was still too slow. So I made it multi-threaded, because I wanted a clip to be rendered faster than the clip takes to play. Now I churn through a 3 minute clip in 54 seconds. And that’s not even super optimized, as I can optimize it even further, but that would take a considerable amount of glue logic for a thread pool that’s atomic-safe. It would take me an hour to do that, to shave off maybe 20 seconds per run. I would do that for production code! But for the odd clip I’ll transform, this is good enough. I know most developers these days would have stuck with OpenCV and Python and be 10-20 times slower.
Why do computer vision with Python? Have some dignity, people!
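(For the curious, the heart of any ASCII-art renderer is just a luminance-to-character ramp. A toy C++ sketch with a fake gradient standing in for a video frame - the ramp and sizes are my choices, not his, and the real thing grabbed and scaled frames with OpenCV:)

    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        const std::string ramp = " .:-=+*#%@";  // dark -> light
        const int w = 16, h = 4;
        std::vector<unsigned char> gray(w * h); // stand-in for a scaled grey frame
        for (int i = 0; i < w * h; ++i)
            gray[i] = static_cast<unsigned char>(i * 255 / (w * h - 1));
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x)         // map each pixel's brightness
                std::putchar(ramp[gray[y * w + x] * (ramp.size() - 1) / 255]);
            std::putchar('\n');
        }
    }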
blender is a great example of two worlds colliding actually. its core is in c++, but ui fully in python. seeing how well both the cycles x project (c++) & ui overhaul (python) went in terms of increasing performance & usability, i'd say hard fast & easy slow languages both have their respective places in the industry
Video rendering software is where performance matters much more. Cutting off even a few seconds of render time can be a big deal. But why would anyone these days write a web app in a low level language, e.g. some CGI stuff in C or C++? Maybe if they host it on ancient hardware with severely limited resources.
@@martinsdicis4000 well also web apps have high workloads because of multi user.
My client is a bank; if you saw the number of containers that are spun up for some customer services, it’s insane! Knowing that Python is 100 times slower than C/C++ and Node about 10-20 times, you could reduce the number of containers 10-100x. That saves enormous amounts of energy (if you are afraid of climate change - not me, but most millennials who develop this bloatware are - then there you can really make a difference). And financially it’s a massive saving too.
If you run a single instance it doesn’t matter, but we often spin up as many as 500-1000 containers at a time. And ironically the database backends, which are properly written in a low-level language, don’t croak. The redundancy we have there is mainly for data consistency and availability. So your code efficiency does matter when it comes to heavy workloads. You can save a lot.
Funnily enough, my customers before the bank were energy companies. And we knew when it was tax season, as we saw the prognosis of energy demand go up. That’s all because of the 3 major banks, the end users hitting those banks, and the IRS.
And we’re talking about as much as 40-70MW of extra energy. That’s about 10 wind turbines running at 100% efficiency!
So imagine everybody wrote efficient code in C++ (or Rust) and we could reduce that by a factor of 10! We’d only have 7MW of extra consumption, which our clients (greenhouses) can deliver with their diesel generators that are already running anyway to heat the greenhouse and produce CO2 for the plants.
The biggest growth in our energy consumption isn’t cars or planes but cloud data centers! And the majority of it is stupid TikTok and Instagram videos and cat pictures 😉
Does anyone really know how to do Unit testing properly now?
the intro is amazing (y) :D :D
Legacy code is important (valuable? though it might not be directly income related) code that we are scared to change. Why scared? We don't know what will happen. E.g. we know we are unlikely to be able to fix it quickly if we change it today but it starts playing up next month or next year - we won't know in which version of the code the fault was introduced, and we won't know how to revert, except by a clean-sweep revert to the way it is today.
Before Darth Vader came to power, Anakin did a lot of rewrites... :D
Visual Basic 6 also underpins my corner of the world's Grade 11 CS course. *facepalm*
30:51 Headphones on? There shouldn't be any sound from anyone else that needs to be blocked out. If you need headphones, your work environment is toxic.
God damn that's my life story. (the intro)
Trust is great, but it hasn't ever fixed bugs before they hit the market.
I really take issue with the graph comparing lines. Those novels have pretty densely populated lines of text. I'd like to see whether the Linux kernel has a lot of empty space within lines, or empty lines. Characters would be a much more representative comparison. I've written perfectly appropriate code that was mostly empty space and curly brackets, with only a couple of words, and it extended for like 10 lines. I believe his point is still very valid, but I don't think it's a truly representative way to demonstrate it.
A lot of code editors count not just newlines, but also "source lines of code", where a line only counts as a SLOC if it is not whitespace or only brackets. Comments may also be excluded, depending on the particular implementation.
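(Something like this, roughly - a sketch of one plausible SLOC rule set, not any particular editor's: skip blank lines, bracket-only lines, and whole-line // comments:)

    #include <cctype>
    #include <fstream>
    #include <iostream>
    #include <string>

    int main(int argc, char** argv) {
        if (argc < 2) return 1;
        std::ifstream in(argv[1]);
        std::string line;
        long sloc = 0;
        while (std::getline(in, line)) {
            std::string t;                             // line minus all whitespace
            for (char c : line)
                if (!std::isspace(static_cast<unsigned char>(c))) t += c;
            if (t.empty()) continue;                   // blank line
            if (t.find_first_not_of("{}()[];") == std::string::npos)
                continue;                              // brackets/semicolons only
            if (t.rfind("//", 0) == 0) continue;       // whole-line comment
            ++sloc;
        }
        std::cout << sloc << " SLOC\n";
    }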
28:50 i'm that guy who leaves for fear of becoming unemployable.
My resumé on the right has php instead of something modern and fancy like C#. I'm not even using a common framework, it's turtles all the way down on a 20 year old base. Try selling that to some HR hiring filter.
Few companies operate in this way, actually making their own stuff. Most of those that do don't use PHP for it. And they don't believe anybody could have relevant project, design or architecture experience in PHP.
They are nowhere near each other, and they won't pay me as well as the one currently compensating me for slowly becoming irrelevant in their basement while they hire new people who exclusively work on new product features on the far fringes of our APIs.
The Linux kernel has not 120'000 LoC but 30'000'000 LoC.
You're counting everything in the repo which also includes device drivers, etc. - the kernel proper (as in, in the kernel/ directory) is around 100k (closer to 150k nowadays)
@@SimonBuchanNz Thanks. I got the number from a talk with Linus Torvalds.
The intro song goes way too long. It's funny for the first 2 minutes but then it just gets annoying.
fuck new technology, am i right guise?
VBA for life
"Commodore or Vic 20"?
A Vic 20 IS a Commodore.
Wow wow wow ! Amazing American digital pie !
UHH emails! I know how to send those :DDD
Awesome!!!!💪👍
Something I want to know: Does the Customer know what he is buying?
Starts at 8:55
I've been writing code since 1973.
maybe the word "solid" should be used instead of legacy
Don McLean would LOVE iT!!
oh, I couldn't wait to install Windows 3
Who will really love legacy code is AGI. It will bathe in it.
how the fuck do you use 16k of ram with an assembler hello world program?
The assembler itself could also be resident in memory :)
At least on older x86 systems, RAM was accessed in segments of 16kB* and (going from memory, but I'm pretty sure) a program was always allocated at least one segment to use.
* Correction: 64kB; not sure where he got 16 from then, actually.
17:38 is actually the code legacometer
They are not actually learning at all... they are memorizing, which is completely different!