Your videos are so high in quality, I'm glad I found your channel.
Hope others will too :)
+Philipp lsgfwpjhac Thank you so much for your comment! I'm really happy to hear this =) Hope to see you around in the comments section of my other videos =)
Excellent video. Everything is easily understandable.
Hello and thank you very much for your comment! Glad you liked the video =)
I have been wondering what kind of information we could provide within our source code, for example through annotations, that could help the compiler optimize better or enable completely new optimizations. We could add new annotations as we find more information that is useful to compilers. What does a developer know about how their code should work that the compiler could use?
Hello and thank you for your comment! This is an interesting train of thought, but I think part of the reason why this doesn't already exist is because it's not really needed.
By nature, compilers are the software that scrutinizes your code. They also have the clearest idea of the underlying computer architecture (bearing in mind that most optimizations are architecture-dependent), so really, all the compiler needs is your code!
Annotations aren't great for a variety of reasons: Firstly, allowing the user to write them adds surface area for error. Plus, given that we want abstraction, the less coders have to think about, the better, especially when it's architecture-specific!
@@NERDfirst That's how I saw it too: if it were beneficial enough, it would already exist. But I can't shake the feeling that there is information the developer could provide, like different guarantees, that could help compilers, and that it could help those who really need the performance but don't want to resort to assembly.
If we went with annotations, IDEs could help a lot with getting them correct. It would certainly add some possibility for errors. It would not be an ideal solution, but it could allow things that would otherwise be impossible. Perhaps the benefits are too small and the problems too big, or maybe my thoughts on the areas where it could help are wrong.
It's interesting to see what happens in the long run. For the last few decades we have had the pleasure of ever more powerful computers and ever more memory. Now that's going to change, and we face new challenges in reducing power usage in mobile and IoT devices.
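For what it's worth, the "different guarantees" idea above already has a small real-world precedent in C: the 'restrict' qualifier is exactly a developer-provided promise that the compiler cannot infer on its own. A minimal sketch (the function name here is just for illustration):

```c
#include <stddef.h>

/* Without restrict, the compiler must assume dst and src may overlap,
 * so it has to reload src[i] conservatively on every iteration. With
 * restrict, the developer guarantees no aliasing, which permits
 * vectorization and fewer loads. Passing overlapping pointers would
 * be undefined behavior -- the guarantee is the programmer's burden. */
void scale(float *restrict dst, const float *restrict src,
           size_t n, float k) {
    for (size_t i = 0; i < n; i++) {
        dst[i] = src[i] * k;
    }
}
```

This is a good example of both the benefit (real speedups) and the risk (a wrong annotation silently breaks the program) discussed in this thread.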
Definitely a very interesting train of thought. Just because I can't see a benefit doesn't mean nobody else can. Perhaps someone is even already working on it.
The way I see it, anything that is part of the code can already be captured from the code itself without any further input on our part (which is better, actually, since, like I said, there's less surface area for problems).
But whatever exists outside of the code is a good candidate. The context behind procedures could potentially be taken into consideration as well.
4 years later and this still hasn't completely happened. I think that rather than annotations, stronger type information can give the compiler enough of an understanding of your code to optimize the specific details of the algorithms you write.
Right now, compiler optimization seems to just optimize the syntax it understands, like for loops, recursion, if statements, arithmetic, etc., but it can't take a look at a complex algorithm and say "oh yeah, this part here could be optimized out".
If the compiler could force the programmer to explain their code more through the use of stricter types, I believe it could contain some sort of logical system that optimizes your code more thoroughly based on type information.
Edit: Now that I think about it, annotations and stricter typing give the compiler completely different ways to optimize the code. With an annotation, you don't really explain your code; you just directly tell the compiler what can be optimized. With stricter typing, you explain your code to the compiler, and it understands enough to do those optimizations on its own.
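A tiny illustration of types carrying optimization-relevant information already exists even in C: signed and unsigned integers give the compiler different guarantees, so the same expression can legally compile to different instructions. A sketch (function names are just for illustration):

```c
/* The type alone changes what the compiler may emit: for an unsigned
 * operand, x / 2 can become a single logical right shift, while for a
 * signed operand the compiler must add a fixup so that negative values
 * round toward zero (C semantics). The programmer "explains" intent
 * purely through the type, with no annotation in sight. */
unsigned half_u(unsigned x) { return x / 2; } /* typically one shift  */
int      half_s(int x)      { return x / 2; } /* shift plus sign fixup */
```

Richer type systems generalize this idea: the more invariants the types pin down, the less the compiler has to assume conservatively.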
I love the sound effects
Hello and thank you for your comment! Heh glad you liked them!
you are thorough as ever. really great.
+Pratik Anand Thank you very much for your comment! Glad you liked the video =)
Bro, what is the auto-tuning technique of compiler optimization?
Hello and thank you for your comment! I wasn't able to find much on this topic beyond research papers, so this stuff is probably still new, in development, and hopefully super cutting edge!
Their approach seems to be using genetic algorithms or machine learning to define the optimization parameters for compilation, in hopes of enhancing performance where it counts.
Bit-shifting as an alternative to multiplication or division is not an optimization. In fact, an optimizing compiler will turn a bit-shift into a multiplication or division in assembly for all modern x64 processors because it actually runs faster.
+ColaEuphoria Hello and thank you very much for your comment and insight! While I was unfortunately unable to find any sources that corroborate this, I think what we can learn from this is pretty fundamental to optimizations in general: Many optimizations are architecture-dependent. What works in one context may not always be useful in another.
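To make the point concrete, here is a small C sketch of both forms. A modern optimizing compiler is free to compile either one into whichever instruction is fastest on the target (this transformation is called strength reduction, and it goes in both directions), so they typically produce identical machine code:

```c
/* The compiler may turn x * 8 into a shift, or a shift into a
 * multiply, depending on which is faster on the target CPU. The two
 * functions below are interchangeable to it, so source code can simply
 * use the readable arithmetic form and leave the choice to the
 * compiler. */
unsigned times8_mul(unsigned x)   { return x * 8u; }
unsigned times8_shift(unsigned x) { return x << 3; }
```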
Really nice video! It helped me a lot!
Hello and thank you for your comment! Very happy to be of help =)
Could you please share the slides?
Hello and thank you for your comment! I'm afraid no slides were used in this video, just handwriting and screencapture.
I like how I can understand you, lol.
Hello and thank you for your comment! That's a relief to hear!
The part about the compiler changing 'x = x + 1' to 'x++' is not correct. Both are single instructions and both will result in the same instructions being used - most of the time.
The compiler has the choice of one of the two:
add [x], 1
inc [x]
... and neither of these instructions does an 'add then assign'. If that were the case, then the compiler would output the following, which it does not:
mov eax, [x]
add eax, 1
mov [x], eax
OR
mov eax, [x]
inc eax
mov [x], eax
Good information otherwise.
Hello and thank you for your comment! Logically speaking, wouldn't a compiler that makes no attempt at optimization not even recognize that "x = x + 1" can be viewed as an add or increment operation to x?
If it is simply following instructions exactly as given, it must do separate steps for adding and assignment. It takes "parsing" the intent of the statement to understand it as an incrementation, which is already an act of optimization.
Of course, do also bear in mind that this video is speaking in abstract terms, not about any specific compiler. I'd expect any competent real-world compiler to be able to transparently make simple optimizations like this. Many of the examples given here are very much contrived, to make the basics easier to grasp.
@@NERDfirst If you turn off optimizations entirely then what you get is that load/inc/store. You have to go out of your way though to explicitly turn it off.
My comment was actually more about how an optimizing compiler changes 'x = x + 1' to 'x++'. It doesn't.
'x = x + 1' has two "main" instructions it can use to perform the operation. They are:
inc
add
Of the two, 'inc' introduces a false dependence on FLAGS; therefore, subsequent 'cmp/test/etc.' instructions stall the pipeline.
All x64 compilers use 'add' for this reason, because 'add' doesn't introduce any stall due to FLAGS. Therefore 'x = x + 1' is never executed as '++x' and is instead always executed as 'x = x + 1', i.e.
add [x], 1
That's all I'm saying. ;)
I acknowledge your point, but again, do bear in mind that the video speaks of optimizations in the abstract, whilst you are going into the specific implementation details of x86, which are far beyond the scope of this video!
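For readers following the exchange above, the underlying point is that all of these source-level spellings of an increment are semantically identical in C; any optimizing compiler treats them as the same operation, so the "rewrite to x++" framing is purely a way of describing the compiler's view of the source:

```c
/* Three spellings, one operation. An optimizing compiler normalizes
 * all of them to the same internal representation before choosing
 * the actual machine instruction (add, inc, lea, ...). */
int inc_a(int x) { x = x + 1; return x; }
int inc_b(int x) { x++;       return x; }
int inc_c(int x) { x += 1;    return x; }
```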
It's funny how many of these optimizations can be avoided by an experienced coder. It's like the compiler is expecting the user not to be very experienced, and fixing the "problems" (not actual problems, but code that can be easily optimized).
In this way, compilers are not far from user-friendly programs or operating systems.
+gustavrsh Very interesting way of looking at this, but I have to be honest - all the examples quoted in this video are *very* trivial, and all I'm doing is giving a very general idea of how an optimizing compiler can run.
There are far more complex optimizations that I don't profess to fully understand, which are probably more useful than these. As far as I understand, some of the more complex optimizations also look really ugly in code (i.e. you _can_ implement those optimizations "by hand", but they'll make your code horribly unreadable), and that is where an optimizing compiler really shines.
+lcc0612 Yes, I have to agree that most of those things are trivial, but the most trivial solution to the compiler problem would be for the IDE (the program you use to write your code) to warn you beforehand. Because if you are compiling a file over and over again, the compiler has to deal with the same issues over and over again.
If your IDE warns you that you're giving the compiler a hard time, the problem wouldn't come up in the first place.
+gustavrsh Personally, I would hate to read high-level code with bit-shift divisions and manually unrolled loops. Additionally, oftentimes people who try to do these low-level optimizations by hand can get them wrong and write code that is actually slower or behaves differently, especially when optimizing floating point calculations.
+Max mamazu Hmmm... Well, I'm not sure what to think about your point. I can see both the good and the bad of it! I don't have a huge problem with compilers warning about very trivial things that a coder should have done anyway (e.g. no need to keep recomputing a constant in a loop), but there are many use cases where it's better if we let the compiler do the optimizations.
As +Stuntddude and I have brought up in this thread, some optimizations make the code look really bad. So from a software engineering standpoint, it's better to leave them in a "less optimized" form. After all, it's not really a huge problem if a compiler has to repeatedly optimize a bit of code. Sure, it'll take a little longer, but it takes a lot of the burden off the shoulders of the coder.
+Stuntddude Agreed! In general, optimizations require you to do things that are not immediately intuitive, which is just a can of worms no matter how you look at it. Particularly in an educational context, it's probably better to just let coders go with a more natural and intuitive way of writing code, and then have a compiler clean it up after them.
Full stop. This man made the sound effects of his intro with his own voice.
Hello and thank you for your comment! I sure did! Got tired looking for whoosh sound effects out there and just decided to blow into a microphone!