3:42 if every JS programmer got annoyed when things were going slowly then nobody would use JS
Definitely not the case. You can absolutely study how the runtime works and build accordingly.
Some data-oriented design principles will apply in many places. If they work on something as simple as a Turing machine (e.g. data locality), they should work in something as complex as V8!
We don't _have to_ keep it at that level either. Things that might seem complex, like reducing redundant checks in your JS code, or not appending properties dynamically to objects once the runtime has already assigned them a "shape": these require assumptions to be made, but they also grant us the gift of speed.
It's about _simple software_ that aims to do a few things. CPUs like simple software. So do JIT compilers.
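To make the data-locality point above concrete, here's a minimal C sketch (not from the video; the struct names and the 200-byte "cold" payload are made up for illustration). Iterating over a tightly packed array of just the fields you need fills every cache line with useful data, instead of dragging big, mostly-unused objects through the cache:

#include <stddef.h>

typedef struct { float x, y; char cold[200]; } FatEntity;  /* hot fields mixed in with cold data */
typedef struct { float x, y; } HotPos;                     /* hot fields only */

/* Each entity is ~208 bytes, so consecutive x values land in different cache lines. */
float sum_x_fat(const FatEntity *e, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++) s += e[i].x;
    return s;
}

/* HotPos is 8 bytes, so eight of them fit in one 64-byte cache line. */
float sum_x_hot(const HotPos *p, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++) s += p[i].x;
    return s;
}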
TBF, Google engineers got annoyed and created Chrome's V8 engine, which forced Firefox to kick their development into gear as well.
Sadly, Chrome now has (IMO) too much power over the web in general, probably because they abused their monopolistic power as a search engine (what's that? You want to search something? HOW ABOUT YOU TRY OUR BROWSER??? - yeah, OK Google, fuck off.)
Not funny and objectively false. JS might degrade performance by a single order of magnitude in the absolute worst case, but it's perfectly capable of going fast. Modern software isn't just running 10x slower than it should, it's running 1000-10000x slower than it should.
JS isn't slow for most common use cases. And if you find a case where it is too slow, then you probably shouldn't be using JS for it. People do tend to write incredibly slow JS though.
JS is not the performance bottleneck for web apps. It rarely does the heavy lifting; the app would need to be insanely complex for that.
I wish all the people who endlessly spout "premature optimization" would watch this and take the time to think about what Casey is saying.
Yes, doing hyper-optimization before you know you have a problem is a waste of time. But taking slightly more time to do the basic work so it doesn't start out as garbage goes a long, long way.
premature optimisation whining is mainly a SAAR tier programmer cope.
I would say, these parrots who repeat "premature optimization" are actually doing premature pessimization. They often write code that is hard to understand and maintain and very slow, and has tons of useless abstractions.
@@ХузинТимур I would agree with that. They do the things they've been told are more enterprise-style, code that can prevent compiler optimizations and make things harder to find, because they think it makes it easier to refactor, instead of writing code that's easy to refactor or even outright replace when it isn't the correct answer anymore.
I wholeheartedly agree with you. Sometimes today it seems people say simply writing something properly, or employing a few best practices, is premature optimization.
The point is make it do the thing and then optimise it. Don't optimise it before it can do the thing, but also don't not optimise it at all.
Another way of looking at it is red green refactor. Optimisation is part of the refactor step. This is also a small, repeated cycle, not something you do once at the end.
If you know what will be slow, you just avoid doing the really slow things when you write it, e.g. avoid unnecessary I/O.
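A tiny example of the "avoid unnecessary I/O" point, as a hedged C sketch (the file name and function names are made up): the slow version opens, writes and closes the file once per record; the obvious fix does one open, buffered writes, one close.

#include <stdio.h>

/* Slow: one fopen/fclose round trip to the OS per record. */
void log_scores_slow(const int *scores, int n) {
    for (int i = 0; i < n; i++) {
        FILE *f = fopen("scores.txt", "a");
        if (!f) return;
        fprintf(f, "%d\n", scores[i]);
        fclose(f);
    }
}

/* Better: open once, let stdio buffer the writes, close once. */
void log_scores_batched(const int *scores, int n) {
    FILE *f = fopen("scores.txt", "a");
    if (!f) return;
    for (int i = 0; i < n; i++) fprintf(f, "%d\n", scores[i]);
    fclose(f);
}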
At my work things are extremely slow; when I first saw it I was like, how are you guys working with this? But the thing is that you get used to it. Now I don't even notice it, and it came pretty quickly; I was transformed in about 3 months.
I think most devs know when things are too slow. I feel like it's demanding bosses/management demanding solutions delivered on unreasonably tight deadlines; sometimes all you can do is get something working and there's no time to optimize, even when you know there's room to,
and you never get the chance to go back and clean up/optimize it later; you're always being driven onto the next new thing (and not given enough time to make that as good as it should be).
Exactly, I can confirm as senior dev.
That is very often the case, but recently I found a relatively simple refactor at my work that would have increased the speed of a hot path in our code by nearly 90%. When I presented my findings to my team they rejected it immediately because it wasn't OOP enough and they didn't like it because it was a little outside of their dogma.
@@Norfirio This.
Sometimes it's even worse: you manage to make it in their own dogmas but you'd make them look stupid if they'd buy your idea.
I once made a change to a panel that needed 3 minutes and 15 seconds to be tested (automated GUI robots crap), so that now the GUI part was trivial and didn't need testing, while the logic was being tested thoroughly (better than before!) in 15 seconds. Which was still way too much if you ask me, but saving 93% is already a step, then we could do another. Also, I was decoupling business logic from rendering logic. And it was pretty OOP (poor me), so everyone should have been happy. Win-win-win situation.
(Notice: applying the same pattern on the rest of the codebase we would get rid of GUI tests with robots, which implied:
- not needing a dedicated computer with a desktop environment to do automated testing on that stuff
- spending less money on electricity
- reducing testing time by a lot, which alone would increase the quality of the codebase
- adding more tests, since they were now less convoluted to set up, further increasing the quality)
My superior literally said "I don't see the advantages. Also, I didn't ask you to work on this, you just wasted a week of company time." And he's a senior programmer with 20 years of experience, who built that entire system by himself.
Now, a possibility is that he didn't want to explain why we'd been wasting so much time for years. But he could have done what I said without giving me the credit and taken the win. Also, I left years ago and I know for a fact that they didn't enhance that testing system after I left either. So what this tells me is simply that some people are not cut out for programming, period.
@@matthew314engineering7 Simply put, you hurt his ego by showing that his "baby" was actually quite crappy.
@@wulf3n773 That's likely, but I mean, I've been pretty arrogant myself (I still am from time to time, I'm trying to turn that into sane ambition instead) so when someone is better than me it automatically pisses me off, I get that. But I also immediately try to understand what others are doing right so that I can close the gap, else I'll always be vulnerable on that side. I really don't get how one can be so mediocre to deny even to him/herself that what you did was good.
Does anyone know a good intro to assembly? Wanted to read 'Zen of Assembly Language: Volume I, Knowledge' by the great Michael Abrash, but its intro states that it's better if you have a good grasp of assembly before reading. Went around online, but found no real good books or resources (and reddit suggested some top-tier crap).
Learning about L1 and L2 caches is a pretty advanced understanding, far beyond basic assembly. It's good to know, and it's necessary if you want to fully optimize performance, but something more basic would be just how integers and floating-point numbers are stored in memory, and how the stack is stored, too.
I don't understand what's so advanced about learning the surface level of it, given the right information. The basics: computers send signals near the speed of light, and the L1 cache is right next to the cores, the L2 cache is a little further, etc., and RAM is way, way out there in terms of distance (which determines the latency). Due to limitations of physical space and performance requirements, the L1 cache is tiny (typically 32-64kB) and very simple, L2 is larger (~1MB) and slightly more sophisticated. Etc.

Other than that, all you need to know is that the CPU is fastest if it does the maximum amount of math ops on the data before moving on (i.e. math ops are much cheaper than bringing in data). It can do roughly 16 math ops per cycle, and the latency to L1 is roughly 2-4 cycles, L2 is like 10-15, L3 is 20-40, RAM is like 100-300 and hard disk is like 10,000.

There are some complications with prefetchers, branch predictors, and there being an L1 per core while outer caches are shared. But the gist is that it's expensive to jump to unpredictable memory patterns to do scattered work (like virtual methods), and cheap to do a lot of batch work on data, especially datasets that can fit inside one page (4kB). That's already a lot more than the average developer knows, and gets you pretty far.
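If you want to feel those latency numbers instead of just reading them, here's a hedged C sketch (the types and names are mine, not the commenter's): summing a contiguous array lets the hardware prefetcher stream data in, while walking a linked list whose nodes are scattered in memory pays close to a full cache-miss latency on every node, because the next address isn't known until the current load completes.

#include <stddef.h>

typedef struct Node { long value; struct Node *next; } Node;

/* Contiguous and predictable: the prefetcher hides most of the memory latency. */
long sum_array(const long *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++) s += a[i];
    return s;
}

/* Pointer chasing: each load depends on the previous one, so scattered nodes
   stall the CPU for roughly one full round trip to L2/L3/RAM per element. */
long sum_list(const Node *head) {
    long s = 0;
    for (const Node *p = head; p != NULL; p = p->next) s += p->value;
    return s;
}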
@@Muskar2 Hey I found your comment valuable. YT comments sometimes contain some real gems.
Honestly I love assembly so much, but beginner-friendly learning material is just insanely hard to find. I bought a highly recommended book for beginners (Blue Fox), watched YouTube videos and blogs, and gave up because there is always something. Kinda jealous of people who can understand assembly. I'm still open to another good learning resource though.
Computer, Enhance! Casey's course.
If you know how to code in any programming language and you've been doing it for a few years, learning to read and write some basic assembly is really not hard. Just keep messing around the web trying to find tutorials on YouTube or other sites, read sample programs, etc. That's how I've always done it with everything. I've never read entire books or fully watched complete tutorial series.
Look, I don't disagree that those things will make you a better developer, but it's simply not necessary for the overwhelming majority of programming jobs. You're infinitely more hireable if you spend that time learning cloud stuff and popular cloud abstractions.
Gets you nowhere in today's market. I know a guy who's written VMs, a FAT implementation, and everything from a boot loader to a basic OS kernel, for Christ's sake, who can't even get an interview right now.
You think a company that just wants their shit code base connected with some cloud crap really cares if you can shift some bits around or make some triply linked list of an array of pointers to a pointer?
What is his age? There is a lot of ageism in the industry and it is hard to find a job as a person older than 40, regardless of skills.
that sort of mindset will limit you greatly
Perhaps go and find the argument that large Internet companies have made... they lose millions of potential transactions when their service/app/website responds 2s-3s slower than it should, because users have little patience. This is worth $billions...
...but hey let's just lose all of that money so some script kiddie can use a crazy-esoteric-OOP-design-pattern-of-the-week, rather than learning how the machine they are "software engineering" for actually works and how it actually executes their code.
It really depends on what it is you want to do. For games and embedded systems this is very useful knowledge or for reverse engineering closed source stuff with badly written documentation. I once had to reverse engineer the firmware of a hardware component because the memory addresses provided on the datasheet had mistakes.
Most programmers probably would never have to do this, but it's a very useful skill to have when shit hits the fan.
Dude, I tried to optimize some cloud functions to reduce the cloud bills at a start-up. I literally got told that those optimizations are not important and that I should spend more time doing other features.
Agner was in the MASM forums back in the day; it was real fun figuring out CPUs back then.
I've heard about this throughout my career, the people that are a bit older than me really care about the level of abstraction they learned. I have way deeper knowledge of technical stuff than younger tech people. I've just never needed to get down to asm.
My problem is the other way around. I'll cringe at doing something "fancy" which is basic for today, because I started out on x86 PCs with a 200MHz CPU, and I still carry that budget in my head.
I need training to be LESS frugal!
In my opinion learning C before ASM is like learning to ride a bike without learning to walk. (Doesn't need to be x64, any architecture or bitness will do.)
@@williamdrum9899 Back then people needed to work with inline assembly chunks in C, near and far pointers, etc.
Thank you for explaining the reality of today's developers and how it should be.
*comments are turned off*
Needing to do funky manipulation of the call stack is a good case for asm. But RNNs are a bit different even at a low level. I think Torvalds missed the point somewhat when he complained about the necessity of AVX512. In the last few days I saw discussion about AI facilities in the kernel, so maybe that oversight will be re-examined soon. Fast BLAS, LAPACK and DNN functionality in the kernel would be super nice IMHO. Especially if it's available across many different architectures, like ARM NEON for example.
I can't believe a Brit missed the opportunity to legitimately blame the French for this
...?
As someone who's building my own custom game engine in C:
C is not enough. With all the GNU toolkits and apitrace, I still need to go "Deepah".
Well honestly I can sum it up. Memory can be written or read; registers are loaded with data for operations such as CMP (compare), ADD, SUB, MUL or DIV. Flags are set by operations and you can conditionally jump to different sections depending on the result, such as jump if equal, jump if zero, not equal, below, etc. Obviously that's just a summary, but other than repetitious calls and coprocessing units such as floating point, MMX, SSE and the ilk, that's really what it boils down to.
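A small C example of that compare-set-flags-then-jump flow. The mnemonics in the comments are only a rough sketch of what an unoptimized x86 compile might emit; the exact output depends on the compiler and flags.

int max_of_two(int a, int b) {
    if (a < b) {    /* roughly: cmp eax, ebx ; jge keep_a   (CMP sets the flags, the jump reads them) */
        return b;   /* roughly: mov eax, ebx ; ret */
    }
    return a;       /* keep_a: ret */
}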
You can't read *asm* if you haven't written *asm* before, and you can't start writing *asm* if it isn't rewarding and fun. Back in the '80s and '90s you would have direct access to the hardware, such as display, sound, interrupt mechanism, I/O ports, CPU privileged instructions and control registers etc. so it was very fun.
I can't see anybody today sitting down to learn *asm* just to accomplish 10ms over the 12ms a good C compiler might do for an arbitrary task; that's not very rewarding.
You can't even write a pixel to the video memory today, except by copying the whole framebuffer from main memory to video memory via API function call.
Back then V-Sync was all the rage in DOS graphics programming; today you have to go through the whole OpenGL crap (luckily there are friendlier alternatives, like the Raylib wrapper, which supports V-Sync).
The old CRT displays were also far better for motion than the new LCD or OLED are. You have to see it to believe it.
That's great! I have some idea of that info to help guide my code, but how do I get those metrics? Where should I look to grasp those assembly instructions? Where should I look and learn to improve? Great vid, thanks!
how and where should I learn this stuff best?
I'd recommend programming an Arduino or ESP32 microcontroller in C/C++ (Visual Studio Code with PlatformIO) and viewing the machine code coming out of that toolchain. It is very low-level programming and the code base is pretty comprehensible. When you understand what's happening there, you may want to climb the hill toward modern multi-core architectures on typical x86/x64 processors. But even the first step on a simple microcontroller is very rewarding, and helps you get a true understanding of how things really work in a CPU.
I don't know the best source, but in the end the major part is putting in the hours, this is my problem...
An MEng in electronic or computer engineering from a top-tier western university.
Hate to break it to you, but honestly, unless you're giga gifted that's the only way to REALLY learn it.
@@mememachine5244 uh... yeah no. I just learned it by writing emulators, that's a great way to learn how a CPU works. (there are tons of ways to learn, and you can 100% be self taught)
@@mememachine5244 Cope.
Man, I really like the way this guy speaks, but I just can't get behind what he's saying. Different developers operate at different levels. I took a machine language class. When you go back to JS land with that knowledge, you're not suddenly going to know how to optimize JS because you have no idea what compilers are doing behind the scenes. In fact, you might make your code worse because under the hood, the compiler is nullifying the thing you're trying to account for.
This right here. More effort should be put into teaching how the compilers/interpreters work for your chosen language.
You can see exactly what the compiler writes.
Literally all IDEs have a disassembly view when in debug mode.
What's the purpose of this for making basic business CRUD apps, which is something most programmers do?
I mean, I am not against optimizing, but for 90% of developers this means querying the database properly (backend) or managing rerenders (frontend).
Why would I need to learn assembly language lol?
I mean, this is just laughable argumentation from someone who obviously thinks "you are not real developer if you don't understand low level stuff".
The same argument as "to paint a car you need to understand car engine's combustion cycle"
Especially because a lot of these apps are written in languages like Java or Python that are platform independent.
@@RueGoG 🤣🤣🤣
@@RetroAndChill Yes, I don't really understand how this flow should go in, for example, JavaScript with Node.js. When I have performance issues I should go into the V8 compiler, then look at the Node.js runtime C++ code and how all of this is sent to the processor for my operating system? Is that the idea? I don't know. Maybe... When requests are slow in the browser, maybe I should download Chromium, compile it down to byte code, and look at the implementation of HTTP requests in byte code.
Not that laughable actually. If your CRUD app is poorly optimized and you don't know how to work with the CPU and memory what do you do? Ask for more resources because you don't know how to write performant code? Now that is laughable.
This guy is so focused on his branch that he does not see the rest of the world.
There are fields where assembly languages are completely useless and there's no point of learning to read it.
So basically any web application where the main constraining factor is typically the network/database.
I would restate what he's saying that a programmer should understand one or two levels below the paradigm they are working in. Not necessarily assembly. Without that, you end up with dogshit performance and that's on you.
Which fields? The only ones I can think of can be careless because the demand for software is much higher than the supply, and the existing software ecosystem for those fields is so bad that it's much faster not to care in the short-term. If many developers could read assembly and knew the basics of evaluating their algorithms against their hardware capabilities, then it would not be viable to not spend a few weeks learning this stuff, since the software quality is so much worse if you don't know this stuff.
Agner Fog mentioned!!!
This is not necessary for the majority of programmers tbh
That attitude is why we have horrific performance in so many applications.
All he is saying is you should know how long each basic task you are doing should take, according to the hardware's capabilities. So you know when something is performing way worse than it should. How is that NOT mandatory knowledge?
AI....... (echo)
Every developer should write a basic x86 emulator. It’s actually not that complicated and it teaches a lot.
Maybe not that Arch, fuck me.
Really?
@@Brahvim Yes absolutely, I am not kidding. It doesn't take long to implement general-purpose registers like EAX, EBX, ECX, EDX, ESP, and EBP. Some basic instructions: arithmetic (ADD, SUB), bitwise (AND, OR, XOR), and comparison (CMP) are quickly implemented. Also simple control flow: jumps (JE, JNE, JMP) and function calls (CALL, RET). When you also support MOV, PUSH, and POP, and both the zero flag (ZF) and carry flag (CF), you should be good to go to execute very basic C code.
For example:
int main() {
    int sum = 0;
    for (int i = 0; i < 10; i++) {
        sum += i;
    }
    return sum;
}
You could also implement and call an add(int a, int v) function. Or arrays:
int main() {
    int arr[5] = {1, 2, 3, 4, 5};
    int sum = 0;
    for (int i = 0; i < 5; i++) {
        sum += arr[i];
    }
    return sum;
}
Then you may go further to implement floats (FADD, FMUL, etc.).
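For anyone wondering what "not that complicated" means in practice, here's a heavily simplified C sketch of the fetch-decode-execute loop such an emulator is built around. The opcodes and register file here are invented for the example (real x86 encodings are far messier, as the reply below notes), but the shape of the loop is the same.

#include <stdint.h>
#include <stdio.h>

enum { MOV_IMM, ADD, CMP_IMM, JNE, HALT };      /* toy opcodes, not real x86 encodings */

typedef struct { uint8_t op, dst, src; int32_t imm; } Instr;

int32_t reg[4];                                 /* stand-ins for EAX..EDX */
int zero_flag;

void run(const Instr *code) {
    for (int pc = 0; code[pc].op != HALT; ) {   /* fetch */
        Instr in = code[pc++];
        switch (in.op) {                        /* decode + execute */
        case MOV_IMM: reg[in.dst] = in.imm;                break;
        case ADD:     reg[in.dst] += reg[in.src];          break;
        case CMP_IMM: zero_flag = (reg[in.dst] == in.imm); break;
        case JNE:     if (!zero_flag) pc = in.imm;         break;
        }
    }
}

int main(void) {                                /* runs the "sum 0..9" loop from above */
    Instr prog[] = {
        { MOV_IMM, 0, 0, 0  },                  /* r0 = sum = 0 */
        { MOV_IMM, 1, 0, 0  },                  /* r1 = i = 0   */
        { MOV_IMM, 2, 0, 1  },                  /* r2 = 1       */
        { ADD,     0, 1, 0  },                  /* sum += i     */
        { ADD,     1, 2, 0  },                  /* i += 1       */
        { CMP_IMM, 1, 0, 10 },                  /* i == 10 ?    */
        { JNE,     0, 0, 3  },                  /* if not, jump back to "sum += i" */
        { HALT,    0, 0, 0  },
    };
    run(prog);
    printf("%d\n", (int)reg[0]);                /* prints 45 */
    return 0;
}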
@Brahvim yes there are volumes and volumes of instructions and inconsistent encodings.
These people are stuck in the past. Stop listening to this guy.
Ah so you are part of the problem