Just in case you didn't know: you can run make in parallel. Just specify `make -jN` to build it on N threads
make -j$(nproc) to use every thread possible
@johnmarston2474 Yeah, I didn't mention it because that uses every thread and can therefore slow down other processes. Something better might be `make -j"$(( $(nproc) / 2 ))"`
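as a rough sketch, that heuristic can be wrapped so it never computes zero jobs (the /2 split and the variable names are just illustrative; `nproc` is GNU coreutils):

```shell
# Heuristic sketch: use half the available cores for make, but at least 1,
# so the build doesn't monopolize the machine. On macOS, swap `nproc`
# for `sysctl -n hw.ncpu`.
cores=$(nproc)
jobs=$(( cores / 2 ))
if [ "$jobs" -lt 1 ]; then jobs=1; fi
echo "make -j$jobs"
```

on a single-core box `$(( $(nproc) / 2 ))` evaluates to 0, and `make -j0` is an error, which is why the clamp is there.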
i have a few critiques about the video:
- i think you can cut the parts where you just cd into directories and run cmake and make. there's not much to see there as it was identical for both compilers.
- in the complex example you say objdumps are very different but we can't see those differences in the vid. maybe you could run diff or something, because it felt kinda pointless to watch 2 minutes to only see a smaller binary size.
- you can manage screens in a more efficient way to make it easier to see for the viewers. when you are browsing github repos 2/3 the screen is wasted with terminal and keebcam and it's difficult to see what's in the browser. likewise it's difficult to see terminal (when lines are long) because half the screen is occupied with keebcam and browser.
- nice keys btw but imo keebcam is a distraction.
- it would be cooler to use the compiler on a personal project of yours. building an existing project proves the point, sure, but compiling your code is more personal and interesting (and another way to plug your other content)
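the diff idea from the second bullet could be sketched like this. it uses -O0 vs -O2 builds of a throwaway file as a stand-in for the video's two compilers, and assumes `cc` and binutils (`objdump`, `diff`) are on PATH:

```shell
# Sketch: make the difference between two builds visible on screen,
# instead of only comparing file sizes. -O0 vs -O2 stands in for the
# two compilers compared in the video.
cat > demo.c <<'EOF'
int sum(int n) { int s = 0; for (int i = 0; i < n; i++) s += i; return s; }
int main(void) { return sum(100) & 0; }
EOF
cc -O0 -o demo_o0 demo.c
cc -O2 -o demo_o2 demo.c
objdump -d demo_o0 > o0.asm
objdump -d demo_o2 > o2.asm
# diff exits 1 when the files differ, so don't let that abort a script
diff -u o0.asm o2.asm | head -n 20 || true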
Thanks for the feedback, I noticed a lot of these points myself after rewatching it on a bigger screen. Will definitely try to improve all of that in the next videos✌️❤️
11:14 - "one of the main use cases (of the superoptimizer) is optimizing compilers so that compilers get better at optimizing your spaghetti code" - no, compilers compiled with Souper won't become better at optimizing code; the only thing that can change is a slightly smaller binary size
btw, when you compiled with the regular compiler - what optimization level did you use? -O2? -O3? -Os? without this info, saying that Souper is better is pretty strange
+ making the binary smaller is mostly useful in embedded systems; otherwise speed is what everyone is aiming for, and making a binary smaller is usually the opposite of making it faster
Actually, it's not true that the binaries always get slower. According to the scientific paper published by the developers alongside the Souper source code, binary size decreases (we agree on that), but the performance benchmarks were unevenly affected: five binaries showed performance improvements, while seven got slower, and the authors noted that none of these differences were particularly significant. Generally speaking, though, by reducing branches and the instruction count, Souper can help by minimizing the need for branch prediction and improving execution cycle efficiency. The link to the paper is in the video description ✌️
@@Tariq10x High optimization levels can, for example, unroll loops or use SIMD instructions, which increases binary size but speeds up execution, so stating "less is faster" shouldn't go without good proof. But in general, ofc, less branching is faster ( :
you indeed used a compiler in this video
Nice keyboard sounds 👌
Sick video, subbed
Welcome to the software engineering group therapy 🤝
Of course your channel has a low subscriber count, you look through the docs while making the video... cut out all the directory building and just show the difference between compiled programs
Thanks for the feedback, I will cut those parts out next time ✌️❤️
cool
5:02: stop this `clear` nonsense. Use Ctrl+L