Congratulations, you took perfectly readable code, with a logical progression, easily debuggable and turned into a nice spaghetti. The boomer in me would say "that's what is wrong with the new generation".
It's declarative instead of imperative. If you know what the functions do, then the declarative code is not only more readable but also less prone to bugs because you are using functions in the standard library. The entire craft of programming is composing different pieces of code, right? This is exactly what declarative code does best Edit: as others have pointed out, in the C++ case the original comment is justifiable. I thought the parent commenter was talking about languages like rust and haskell
I know this is meant to be the pin of shame but I agree in the case of C++. It's so difficult to understand compared to the original and Rust versions.
see the thing is that iota's greater instruction count is due to the fact that iota, being the smallest character, sometimes have the largest impact, both metaphorically and literally.
To me, the C code is perfectly clear and readable, and the Rust and Haskell is perfectly clear and readable. However, whenever I look at C++ code like this, it looks like an accident waiting to happen with it's amazingly aggressive overloading of syntax.
I think of course the clear misuse of operator overloading by the standard (which is ironic by itself) is a big problem in c++ but another great problem is the baggage c++ is carrying. They constantly add new features and libraries to std which build on the old libraries and therefore they never retire old libraries. This leads to a thousand ways to write the exact same assembly instructions. Also trying to forcefully reuse older parts of the library prevents them to construct fresh and simple new APIs like in the ruet example. Hack the same code can be written in c++ if you write the required library code from scratch. So it's not necessarily a problem of the language but the rotten standard library. One argument against this line of argumentation is that they keep backwards compatibility while reusing code they can't therefore remove anyway. But as I explained this leads to bad APIs and isn't the best solution to fix incompatible libraries. A better is again the rust way. In rust every library and program declares the std library and 3rd party library versions its using. The cargo build tool is then compiling each library with its required versions and links them together. Therefore older libraries can use removed std features while being used in never applications and vice versa. This even applies to the language version and keywords. Therefore impossible combinations of libraries do not exist on rust and the rust committee is free to retire library functions and keywords replaced by better alternatives.
A big part of the point in the original video was less about this kind of reduction, and more about style - while this kind of reduction is an excellent technique to go through when you can, the actual code itself was really just an example to point out the underlying issue of code style. The process of pulling out nesting into preconditions is broadly applicable to many different situations, where this kind of code transformation is much more specific and situational.
Style is one axis, paradigm another. I think a better sample for this video would've been some callback heavy snippet that's unreadable but still somehow manages to pass as pure and functional. Considering the OG video was about style from within the imperative way of thinking, this would've been fairer. That said, it's really hard to come across code that is pure and badly styled at the same time. I guess this is due to the fact that most programmers start out as imperative ones and branch out to the functional world as they grow. But I do think in languages like Rust it's very natural to write `good` code similar to what is shown here because of zero/low-cost batteries-included standard libraries.
@@arisweedler4703 I strongly disagree. While a functional style naturally tends towards nesting less, it is still certainly possible to write deeply nested code in functional languages, and there are - albeit rarely - situations where it's needed. But the point remains: the original video's intention was to point out how to deal with nesting better in a more imperative paradigm, by making use of preconditions and early exit. Responding to that with "just change to a functional style" may indeed be the best practice path in many cases, but it misses the point entirely, and is a much less broadly applicable principle.
@@foxoninetails_ else you said, yes. Totally agree. Lisp can get pretty nested. I took a class in college that used lisp and I didn’t know what I was doing and my friends and I wrote some successful and bad code. Nested deep. And as for the point of the original video, yeah. It was like “we’re imperative, this is better” and it is, I agree. In fact, the “bug” where there are two ranges used in the control flow and yet they are off by one is no longer a bug in this new style. It is now contextualized as an intentional bounds check using one range, then a map filter / APP one-liner / shape pipeline over another, similar but different range.
For those who are wondering, the sudden reduction in the number of instructions at -O3 starting from filter comes from the fact that the compiler was using SIMD instructions to handle multiple loop iterations at once in the previous versions, but after that, is unable to see through the views to perform the same optimization (which is disappointing). Edit: see reply below for better news!
Now I am confused... When trying the same code in QuickBench, the calculate functions get inlined, and then gcc optimizes the version with views using SIMD too, but with a way tighter inner loop... and that makes the views version 3.7 times faster than the original when operating on very large ranges. I might have made a mistake somewhere, I don't fully trust how I used QuickBench. Maybe someone else could try it out? For info, I was benching the call calculate(54, 12393243) with versions 1 vs 4 in the code_report Github repo. Edit: after further tests, it seems the views version get optimized with SIMD when inlined and using literal arguments. But when I pass in volatile ints as arguments, then I lose the SIMD optimization in the views version and the original version is now 3.4 times faster. So yeah, it is quite sensitive.
@@pesto801 No idea, I'd have to analyze the generated assembly to understand how the versions differ. We might be right on a threshold the optimizer uses to decide whether it is worth it or not to use SIMD.
Reduction? Am I color blind, am I not reading this correctly, or are you not reading this correctly? For -O3, they use the color blue, and the blue bars are by far the highest for the first ones, which makes no sense because -O3 would optimize the crap out of the original code so are the colors in the legend supposed to be reversed? Or, also a likely explanation: the number of instructions means nothing, without the context of what those instructions are
@@R4ngeR4pidz You are reading this correctly, blue is for -O3, and the number of instructions decreased when using views. The reason is that the code size usually increases when the optimizer starts using SIMD (versions on the left of the graph). It increases because the compiler has to add code before and/or after the main loop to handle the last few elements if it cannot guarantee that the number of elements is a multiple of the SIMD register size. So, to recap: versions on the left uses SIMD and thus have more instructions, those on the right don't use SIMD and are shorter. And yeah, as you say, the number of instructions is not necessarily a good metric for code performance.
Keep in mind that pure # of instructions may not be the only "optimization" parameter to be observed. C++ often optimizes code to run faster (having less branches) and this may lead to a few more instructions but with HUGE gains in execution time.
-O3 is loosely -O2 but also with aggressive optimisations that increase the code size. So usually it has more instructions, but they end up being faster. Sometimes, though, it isn't faster, because the increased size of the code increases the amount of cache misses to a poibt where it decreases performance. So imo the number of assembly instructions is simply not a great measure of performance. Very interesting to see and definitely an aspect of performance, but clearly not the only one
@@skeetskeet9403 All compilers have grown from C, have the same heritage and basic optimization techniques. The first C++ compiler was written in C front end that generated C code. In time it's flipped, and now C and C++ compilers are written in C++. Most other compiler also started as C++ and only in time some were able to go native.
@@ElementaryWatson-123 I don't see your point. The heritage of a compiler is entirely irrelevant to its performance and optimizations it's able to apply.
C++ segment: "Already here you can see by using more modern features we have created an absolute abomination the very sight of which has caused God to abandon us!"
Think of how much more readable the same code would be in Whiteboard C. #include "StandardIO.h" int calculate(int bottom, int top) if (top > bottom): int sum = 0 foreach bottom in top: (number % 2 == 0): sum += number return sum else: return 0; main(): println(calculate(5, 12)) // 36 println(calculate(5, 3)) // 0 So many C++ devs have spent so many years putting in feature requests that leave them feeling like they've contributed something to the whole, but we now have high level languages that take LONGER to understand and create MORE opportunities for readability and maintainability problems over the long run.
Readability is the ability to easily understand the code, it not only depends on the code itself but also on the reader's expertise. If you don't know Japanese, any of the books written in that language no matter how beautiful, would look to you like gobbledygook.
I know that's not the point of the video, but this can be solved with a very simple O(1) implementation by using the arithmetic series sum formula. That would have very few assembly instructions also :)
Yes would be interested to see examples for some arbitrary input sequence. Which makes the awkwardness of needing to use C++ range's std::ranges::iota() less of a problem, but you still might need it to generate some indices to something else, so I still like Rust's terse (begin..=end) and wish C++ had something like that (especially to be used in for loops).
I know this is not what this comment is about, but technically using the arithmetics series sum formula would be O(n log n). That is because a formula is only as fast as its the slowest part, and multiplication is O(n log n) (This was proven quite recently actually, like in 2019, but it is very complicated, there is something simpler you can read about though, it is called Karatsubas algorithm, a guy called nemean has a good video about it) In practice you are limited by the variable size so you would have to implement a custom data structure to handle that, and only then the time complexity would take effect.
@@anonanon6596 Well, that is true for arbitrary precision numbers (i.e. bignums). But usually we restrict ourselves to employing fixed sized numbers (e.g. 64 bit) that can be fed directly into the CPU's hardware multipliers, and those require constant time (a known number of cycles). Adding two numbers has the same issue, we need to traverse all bits (i.e. log2(n)) and the carry bits need to be propagated.
@@japedr I said "technically". I was being pedantic. But if we limit ourself to a single architecture, doesn't using big O lose some of it's meaning? Like, there are instances where N^2 is faster than N for small input, but small is a meaningless word in mathematics, it can be arbitrarily large.
Just wanna comment that I am pleasantly surprised by how expressive rust looks like. I mainly work with Julia which is also very neat because the equivalent code there would be. calculate(bottom,top)=sum(filter(iseven,bottom:top)) Julia's type inference handles all typing in this case and also handles the (6,6) case you mentioned.
When comparing assembly instructions you need to be very careful, since many instructions can execute faster than few instructions, especially when you unroll loops and use simd. Thats why its *always* better to benchmark.
For C version of the program: How about not checking if(top > bottom). It should have been >= instead of >, anyway. Now, if top < bottom, sum is already 0, so when we return sum, we'll return 0. That means that we can completely remove the outermost if-else. We can also go one step further. The expression number % 2 == 0 is either true or false, which is in C interpreted as 1 or 0. When it is true, we add number: sum += number, which is the same as sum += number*true. When it is false, we do nothing, which is the same as sum += number*false. Now, we can remove the check if (number % 2 == 0), and instead of sum += number, we can write sum += number*(number % 2 == 0). The entire function will then look like this (I renamed number as i): int calculate(int bottom, int top) { int i, sum = 0; for (i = bottom; i
C89 in 2022 :) it's been some time, I do think declaring the loop variable i in the for loop is objectively better. Also I don't think it is more readable to add zero, and I would expect the assembly to be the same if there was an 'if' statement, though could be wrong of course.
@@frydac I agree that it isn't more readable, but if the goal is to reduce nesting, this is one way to do it. As for the assembly code, I've never used assembly, so I can't comment on it. Although, it would be interesting to compare performance between different languages, as well as different code (like the one in my original comment).
There is even simpler way of doing this. Just round "bottom" to the next multiple of 2 and add every second number. Something like this: #define MOD(x, y) (((x) % (y) + (y)) % (y)) int calculate(int bottom, int top) { int sum = 0; for (bottom += MOD(bottom, 2); bottom
The initial code wasn't perfect, not even near, but...I feel like too often people confuse short code with good code. I'm mostly coding in python and it allows to produce a lot of oneliners which do massive work. The issue is though, that quite often they end up being unreadable, which is counter productive in the long run. The first line of zen of python is "Beautiful is better than ugly." and it's the first line for a reason.
I disagree. When I couple a few related lines in python using ; It increases readability. But however, don't just couple all lines in one line. It decreases readability. This same concept applies to me doing other languages like C#, Javascript etc.
Less code is not always better, but the examples in this case (at least the Rust and Haskelll ones) are not counterexamples. They are much clearer right out the gate what they do, and even better, it's clear that they actually do what they're supposed to. Python definitely has its issues in regards to having very condensed ways to do things which are harder to read, but Python also has multiple ways to do this that are similar to the Rust/Haskell approaches def calculate(bottom: int, top: int): return sum(n for n in range(bottom, top + 1) if n % 2 == 0) def calculuate(bottom: int, top: int): return sum(filter(lambda n: n % 2 == 0, range(bottom, top + 1))) The ugly lambda syntax and right-exclusive ranges make it a bit less attractive but ultimately still quite readable. I would prefer to see either of these over the original C code any day.
Short code can be good or bad and long code can be good or bad. You can write complicated long code but also very readable short code. If you can reduce long code into short code, while maintaining readability it will be always better. Not only does less lines mean there can be potentially less bugs but also it's faster and easier to read. You can understand what it is doing "on the spot" and don't have to scroll through a lot of code. A lot of code reduction is only about thinking declaratively and using FP functions like map() and reduce() efficiently. Unfortunately a lot of programmers are not taught these. I also learned it only after 10 years of programming...unfortunately, I wish I would have learned it way earlier.
I am flabbergasted that his complaint about remove nesting was just calling functions and his solution is just to call super convoluted std c++ templated functions that make the code less readable.
Question, why not use early return instead of using ternary operations? They are in my opinion simpler to read since you have all the exit conditions at the top and then you just have the code that you want to execute.
It's probably because when using a ternary operator you end up with a single expression in the function, which fits nicely with declarative programming. The thing is... even if C++ has become multiparadigm over the years and has functional features, they feel more like an afterthought and are kinda::disgusting::to::the::eye(). Rust, in the other hand, was handcrafted with functional programming in mind, and the readability when coding that way really shows. But yeah, the title of the video should be "Declarative Programming from C to C++ to Rust", because an early return would be optimal if we were talking about imperative programming or OOP (which is actually very readable when done in C++).
Correct, Rust was more inspired by ML languages, however some functional aspects from Haskell still come through, such as how typeclasses and traits are similar.
@@zzzyyyxxx Isn't ML language a repetition ? At this point, I think it's better to write meta language, not a lot more characters, and everybody will know what it's about. Or just write ML but it's also used for Machine Learning so it could be misleading.
I love how elegantly we can communicate the solution in functional languages! Here're scala and clojure and they are basically the same as Rust - I like them all! def calculate(bottom: Int, top: Int): Int = (bottom to top).filter(x => x % 2 == 0).sum (defn calculate [bottom top] (->> (range bottom (inc top)) (filter even?) (reduce +)))
All those languages lack customization necessary to solve real life problems. People always show trivial toy examples to show off how some language looks simple and concise. Then you ask them to write something even modestly more complex and you see how all the ugliness starts coming out. The same RUST, you can't write anything practical without "unsafe", and then you write an ugly code resembling C, at which point every C++ developer starts laughing at that.
Super interesting video. I fell in love with rust for two reasons. Firstly because I wanted a language that can do low level code manipulation but a language that nicely implement functional programming (I tried haskell and the experience was lovely)
My simple test for a programming language is to write a generic vector. Writing an efficient vector in C++ with custom allocator, placement, avoiding unnecessary initialization, exception safety, appropriate iterators, iterator traits, move semantics, etc. is straightforward. It takes some time but every competent programmer can do it easily. I used to ask that during a C++ interview to see if the candidate knows the basics. Any language that is worth exploring should at least provide a similar degree of expressiveness and flexibility. Now welcome to RUST. It turns out performing this simple task is a chore, a code is getting peppered with "unsafe", becomes pretty verbose, and actually looks as ugly as C. C++ is just conceptually richer and more expressive. And another thing you will immediately notice, how poor variadic generics are, RUST traits are nowhere as powerful as C++ concepts. RUST just needs years and years to grow, and at some point it will become a C++ with different syntax.
@@ElementaryWatson-123 Thanks for your return. I have heard that C++ is doing amazing things. I think it will still be needed in the future. But you won't understand the strength of Rust or any other language if you keep your actual perspective If you try to use a screw driver like a hammer it will never work since it's a screwdriver and you can do the same with any other tool which is not a hammer. If I try to use any other language like Rust, I could emulate some attribute but it will be most cumbersome than writting directly in Rust. So what's the point of using other languages ? The fact that Rust is replacing C++ in some area and even used for the developpement of the linux kernel doesn't mean that Rust is always superior than C++. He is juste better in areas where performance and safety are a must. Of course it's not for free since we lose the legendary expressiveness of C++ So yes Rust will be an eternal imperfect imitation of C++ long as you consider Rust as C++. But his genesis ans his philosophy are different I am happy C++ meets your needs. I am actually designing a programming language and study the differences between actual programming languages to see the features needed for it purpose and Rust is a good candidate to help me design his core calculus
There are very clear ways you can make the C code less verbose while still maintaining its readability. I’m surprised you didn’t try to fix it, and instead jumped straight to C++. Edit: This is how I would write the function in C, and yes, I prefer this even to Rust. int calculate(int bottom, int top) { if (top < bottom) return 0; int sum = 0; for (int i = bottom; i
Better, but could be better still by ensuring the bottom is even then incrementing by two each loop. Even that could still be improved, and I'm kicking myself for not thinking of it first, but by realizing that there are exactly half as many evens, possibly minus 1, in the range between the two numbers. No need to even loop.
@@anon_y_mousse The point is to add up the values though, for example in 2 to 10 you get the result 30, in 4 to 10 you get 28 via the loop, I've not been able to think of a calculation that does not involve the loop to get those values, how would you get them then?
This is the way the code should be written. Junior programmers should note the separation of the if statement, and the final return from the for loop, also how the sum initialisation is part of the block, because it is referred to the block. This is the way I write the code, and everyone should do sir. I am happy I am not the only one. This gives me hope. Well done.
Comparing trivial examples is useless. I always ask to provide equivalent implementations of generic vector in different languages. It's not a difficult task, but it touches a lot of basic stuff that programmers deal with every day, so it serves as a much better test.
I would have loved some benchmarks at the end. Like does the original version run faster? Do the std functions introduce unnecessary complexity or are they faster?
I don't consider myself a professional programmer, however, in my job as a physicist I have experience with numerical computation and programming with a focus on accuracy and speed. It was obvious that the original code was inefficient and this prompted me to comment and test before commenting. I assume the purpose of the example is to show how to modify code to use standard, safer, tested methods and to make the code more readable. Goals I am sure we all share. One of the bigger issues in programing, of course, are errors with unknow size of arrays. Not withstanding that there are far better ways to accomplish the same task as the C code, this particular code snip, however, is poorly chosen to illustrate the intended concepts. As pretty as the Rust and Haskell code are I remain unconvinced they are a significant worthwhile improvement. The modifications shown have all the hallmarks of using a sledge hammer to kill a fly. Adding even integers is trivial. Even the original C code shows that. Assume "low" is the lowest-even we want to include and "high" is the highest-even we want to include then: for(i=low;i
Your analytical solution, while very elegant, suffers from one problem: it also doesn't faithfully replicate the original I/O. What about negative inputs or mixed positive and negative inputs? Int is signed by default and negative inputs don't default to zero in the original code. You need to distinguish three cases, to make your approach work: 1. bottom & top > 0 stays the same (though I would use low = bottom - 2 + bottom % 2 as that automatically gets you the last positive even number below bottom and 0 is excluded) 2. Bottom ≤ 0, top > 0, needs their low adjusted to the nearest higher even number and negated: low = -(bottom + bottom % 2) this also covers the bottom = 0 case. 3. Top ≤ 0, bottom < 0, flip the signs and top/bottom to determine low/high, then multiply the sum by -1
I like your thinking about not altering the I/O of the function because bottom == top might need to return 0 and in making use of n(n + 1)/2, however, in doing so you have also changed the I/O because nowhere in the code does it check for negatives, which might be a valid use-case. Like you said, we'd need to know more about the original application. You are totally correct that readability has a cost, but after spending a few years working with code written by people with a different focus, readability is more than just nice, it is a godsend. If performance is a primary concern, I would suggest keeping the original code in the unit test as a reference and to compare to the optimised code to ensure the I/O has not been altered or to document how it has been altered. Getting back to the point of the original video, less indentation, we can re-write your code (not tested, probably has mistakes): int sumEvenIntRange(long int low, long int high) { // Method based on formula for sums of integers to n = (n*(n-1)/2) // If n is even, the sum of all even numbers up to n = ((n*n)+2n)/4 // This assumes low and high will both be positive, so: if (low < 0 || high < 0) { return 0; } // this should be the second change everyone makes! (function name first) if (low>=high) { // Condition from original code doubted by some return 0; } high -= ((high%2 == 0)? 0: 1); // high-even to be included is either high if even or one less if high is odd highEvens = ((high*high)+(high2; // compilers probably optimise multiply/shifting // don't calc low sum if not required if (low
@@SpaceMonkeyTCT Your assumption is incorrect. I tested all the code. I did not however test for negative integer input as you point out. Honestly, I was just mentally lazy to check the math to see if it would work for negative numbers. It does. Modifying to replicate the original code for both positive and negative input the code is now simpler: int calculate ( long int low, long int high) { if(low
@@logaandm I was also too lazy to check the maths for negatives, assuming it only worked for positives, good to know it works negative too. I still stand by my point of readability when it comes to inverting the outer if to return early and even if you don't like ternary operators or shifting, I find them equally readable (which is not very) :)
@@logaandm I ran some tests and looked at the assembler. Calculating the low and high is basically the same using ifs and using ternary, except the high ternary reduces to high &= -2; a neat trick! It rewrites ((high*high)+(2*high)-(low*low)-(2*low))/4 into ((high*high)+(high+high)-(low*low)-(low+low))>>2, which made me think of rejigging the maths to use fewer operators so my current code is now: int sumIntRange(int low, int high) { if (high > 2; } Which is a tiny bit more performant but not readable.
int a2 = (a + 1)/2; int b2 = b/2; return (a2 + b2) * (b2 - a2 + 1); Also can be calculated as arithmetic progression sum without any filters :) Thanks for the great video, I used it as the opportunity to try the Rust
This was my first thought as well, although I don’t think the focus of the video is coming up with a more creative solution, but rather a refactoring/reformatting of an existing solution.
топ и боттом проверяются "где то" выше, зачем у вас неимоверное стремление проверить и перепроверить всё и вся, что перепроверять не следует? входные параметры в функцию никогда и ни за что не должны проверяться в самой функции, все эти проверки должны быть [как минимум], в вызывающем месте который вызывает эту функцию и даже кандишен (топ>боттом) вкрай идиотичен, потому что он должен быть опять и снова "где то" выше, т.е. main() { иф(топ>боттом) кэлькулейт(боттом топ) } и даже это идиотично, ну потому что на самом деле проверка (топ>боттом) _уже_ где то _существует_, ессно на этапе создания структур например, именно там она должна быть а вы как обычно "стремитесь" к массовой идиотии, и проверяете всё и везде, однако же идиотия на то и идиотия, ну потому что полностью лишена логики в итоге все ваши идиотские проверки проверок от проверок над проверками только лишь выжирают батарейки и атомные электростанции, и толку с этого абсолютно нет, это банальное вредительство
Good catch. That case would never with the original code though, since the bottom has to be less than the top for the loop to be executed on the first place
I've recently found more and more similarities in rust and python, actually. This is how I'd express it in Python, below. Replace top+1 in the range, depending on intent. I had the "Nevernester" Video in my recommendations, too. I still think he makes some good points, especially for beginners. def calculate(bottom: int, top: int) -> int: return sum(n for n in range(bottom, top) if n % 2 == 0)
I understood the original C and it was easy to spot any bugs. As soon as you made C++20 substitutions I thought “what the hell is that?” and the code looked like illogical gibberish. In the commentary you then said “I think this is a lot nicer”! From my perspective you made simple, legible, code totally impenetrable to the point where I’d need to use tools to find bugs in it. I programmed commercial products in C in 1996 and C++ in 1998. I’m trying to figure out whether it’s worth learning this whole new syntax/toolbox that modern C++ requires- in this case it obfuscated the code.
not only is this not worth it, you might end up getting trapped in the same sunken cost fallacy that so many developers do, where they've wasted a bunch of time learning things that don't really improve all that much, and so not they have to lie to themselves about how the code looks nicer and smarter, and how everyone who didn't sink as much time as them learning useless things is just too stupid to get it.
But that's C++ nowadays, a language that provides 1 million ways to do something. That's why we are seeing a lot of new languages trying to conquer its place, bcs nobody wants the new C++.
I actually didn't know about closed_iota. However, it only exists in range-v3, not C++20 or 23 ranges. That being said, I did start using range-v3 when I introduced ranges::accumulate so I could have added it then. Thanks for putting it on my radar.
Note that since sum is commutative (and associative) you could use std::reduce in C++ instead of accumulate and use one of the parallel execution policies, if so desired. However the optimizer can also automatically vectorize loops sometimes as well, especially dealing with integral types and clear operations like +, so it is probablydoing that for the C code or the loop based C++ code and maybe the ranges one too, depending on the specific compiler etc.
The entire video I was waiting for you to change the C example by replacing "number++" with "number+=2" and reduce O(n) "even" comparisons to O(1) (only first number). The final C++ example made me want to scream. At least in Rust the same thing looks nicer
Best version I've seen yet that matches the output of the original. I like it, even if I think the functionality of outputting 0 when the range is passed in the wrong order is wrong behavior. Even correctly handles negative numbers.
Just looking at number of instruction generated is not always useful (unless you're trying to fit the code on a tiny PIC or something): first, the optimizer will try to inline functions and unroll loops (more in -O3 than -O2) which is great because inlining (and unrolling to some extent) makes a bunch of other optimizations possible, and in addition, in C++ you will get alternate code paths generated away from the normal code (kind of like having a separate, alternate function not next to the "hot path" of instructions in memory) for exception handling because they are expected to be unlikely but still possible; but the actual time (or instructions executed) in the case of no exceptions thrown will still be pretty small. Also tip when trying to look at the assembly code in Compiler Explorer to judge your code: use argc or argv or console input or random numbers or similar as dependent values (inputs) of the example code, otherwise the compiler/optimizer will just evaluate the code at compile time and the resulting program is actually just spitting out the constant results (even if the code for functions etc. is still there in the binary.)
I feel the C code itself could have some improvements: int calculate(int bottom, int top) { if (top > bottom) { int sum = 0; if (bottom & 1 != 0) bottom++; for (bottom; bottom
I’m guessing the bug is that if top and bottom are equal, the c/c++ code returns zero, while the rust code actually runs the filter/sum combo on that single int. Edit: I was right
One thing I didn't see mentioned in the video or in the comments so far is the matter of compile times. I did some quick tests locally using the 'time' utility on macOS, timing compilation of each version. Unfortunately compile times get gradually slower each time. The initial C and reformatted C++ version take approximately 0.06s to compile (M1 Macbook Pro, Ninja build), but by the time we get to the range-v3 accumulate (ternary operator) version, the code takes approximately 0.8s to compile. This means the original version compiles 13x faster than the rangified version (an order of magnitude). It's tough as I do really quite like the more declarative approach of the final code, but the cost (when magnified across an entire code base) is prohibitively high at the moment.
I read the last histogram to mean that the simplest no-nonsense C with the O2 flag is way more efficient than all the funky new-age stuff that came later
That seems the most likely but you shouldn't jump to conclusions that fast. More instructions doesn't mean slower, those extra instructions could be there for loop unrolling and vectorization which is what I guess happens in the O3 C. In any case, the only way to know wich one is faster is to benchmark it.
A "friendlier" option to the ternary operator which is going to be more readable for some people is to check the reverse of the if statement first, and "exit early", placing the evens sum logic after the if block. This gets rid of the need for the else block and leads to an almost entirely flat function...
I had similar thoughts about that video. He did make it better, and I can't fault him for making content that is helpful for someone who wasn't initiated to functional style.
I took the liberty of rewriting the calculate function in C. By observing that we only sum even numbers we can use the mathematical formula for even sums, which lets us skip the for-loop entirely and lets us skip doing the modulo operation on every number in our range, going from an O(n) solution to an O(1) solution. int calculate(int bottom, int top) { if (bottom > top) { return 0; } if (top % 2 == 0) { top = top - 2; } if (bottom % 2 == 0) { bottom = bottom - 2; } return top / 2 * (top / 2 + 1) - bottom / 2 * (bottom / 2 + 1); }
Thank you, that was very interesting. I did not get involved in C++ back in the 90s because there were too many problems to make the transition from C which I had been using since the 70s, but now I can see that it is a time to take a fresh look at C++ and (more importantly) at Rust. Cheers.
I love it when I find a random video that replies to another random video I watched. I missed the old TH-cam algorithm when I could see things that I had never seen before. this kind of reminds me of that
Awesome! Rust and Haskell love! Also saw that Never Nesting video and was also unimpressed. Not only that he ended up creating an off by one error when he inverted a conditional statement
Cool video, thanks for sharing. It's an interesting discussion; the transfer from imperative to declarative programming styles. As you mention in your video, the declarative syntax is less overwhelming once you become familiar with the tools available to you and that's where my biggest criticism of it lies. I have found that declarative code tends to have a negative impact on maintainability. The inability to use skills learned from other languages requires contributors to have knowledge of language-specific utilities, thereby increasing the barrier of entry and reducing the pool of viable contributors or increasing the time it takes for individuals to be productive. The declarative aspects of C++ are different to that of Rust, to that of Haskel, to that of JavaScript - but an "if" statement is an "if" statement everywhere you go. In my experience, boring and obvious code makes for maintainable and approachable code. While I, as a hobbyist, love tricky fancy code - it really sucks in a project setting.
I analysed the assembly code. At -O2 the C code has the tightest assembly and at -O3 the C code is auto vectorized. The C++ version cannot be auto vectorized and the scalar code that is generated at -O3 is H O R R I B L E compared to the C code at -O2. I wrote the scalar assembly by hand and compared it to GCC and I managed to shave off one instruction but that was it.
I would be embarrassed to see either of those in my code, and so would Gauss. Here is what I would do (Rust version): fn calculate(bottom: i32, top: i32) -> i32 { (top/2 + bottom/2)*(top/2 - bottom/2 + 1) } Infact, I'm pretty sure that some C and C++ compilers (clang) would recognize the pure for-loop versions as the sum of all even numbers between bottom and top and replace it with the very formula I used and thus turning an O(bottom-top) function in to a O(1) function. However, when using all the "zero-cost abstractions" this would not be the case. Also, as someone that is trained from childhood with function composition being from left to right and not right to left I find the Rust version to be backwards (the Haskell version is alright).
I do believe that the code is not meant to be taken literally, e.g. you might want to edit conditions and values later, it's just an example code, hence why the function is called "calculate" and not "sum_of_all_even_numbers". But yes, you're correct that this particular version is solved in a const time with a simple formula.
@@peezieforestem5078 I know that it is an specific example. However, you seem to miss my point, that by using all those extra levels of abstractions, both the programmer and the compiler might miss trivial optimizations.
@@tordjarv3802 I see your point, but you assume optimization criteria. Let's say we optimize for minimum code alteration for likely changing requirements while also maintaining minimal code size. Now, the solution you propose is not optimal, nevermind trivially optimal. Had you known in advance that the problem will always stay this exact way, you would be correct. But you don't, in fact, depending on your environment, you can be 90% certain that at some point your customer will barge in and tell you to redo everything. In this case, you optimizing for specifics of the problem is waste in the meta-sense, unless you wish to challenge yourself.
@@peezieforestem5078 If the customer demands to redo everything, it will not matter how the code looks (because you have to redo everything, or maybe you don't agree what "everything" means).
@@anserinus no those two are not the same, because we are talking about integer division. for example let top=5 and bottom=3 then (5/2 + 3/2)*(5/2-3/2+1)=(2 + 1)*(2 - 1 + 1)=6, while (5+3)*(5-3+2)/4 = 8.
"I saw it first!" I was also disappointed and somewhat annoyed when I watched the original video. I'm glad you made this video to show what a proper refactor can look like.
im dev for years in cpp and it indeed is ☠☠☠ even for me. you dont use such things in code that many developers see and read every so often as to not confuse them. when writing code you try to write it in a way, that even dumbest but decently qualified dev can understand it. also names of variables and function arent what you would want to leave in source code. also, when it comes to extracting parts of code to separate functions - it is quite controversial - in my workplace we dont usually care enought to reduce function volume more as it is not that big yet and is decently uderstandable in current form. also, we dont really want to jump too much around the code to check each function due to size of project.
@@Ogrodnik95 You don't see this code in production but are you using C++ 20 or 23 yet*? Maybe this is a future we have to look forward to. * I agree with what you've written btw, this question is not a challenge to what you said.
I saw the same video, and I was also disappointed. It's been a few years since I wrote anything in C++, and I'm glad to that it's getting better at FP, as whether I'm coding in R (which hates loops), or in Julia (which is just lovely), it's a style that's a lot more pleasant to work with.
I was never able to get into c/c++ but now I really enjoy rust. Rust is great, pretty, expressive and straightforward. I remember having much difficulty in c++ finding the "right" way to do the simplest things... And it was uglier the nicer the feature was ....
@@MisererePart Haskell will be that way right up until you need to optimize. The gap between readable and optimized code in Haskell is probably larger than in any other programming language
Having spent some time programming Javascript and Ruby, starting in high school with C and Pascal, I always though “Rust is a high level language” was an odd statement. But after seeing that C++ comparison, now I get it 😂
I would be honestly surprised if there was any meaningful difference in performance, especially at high optimization levels. I'm actually surprised it even produces different assembly at all.
I did not see any bug in the code. if the top is not greater than the bottom return 0. pretty clear what it was meant to do. If I had never seen a line of C code I could still figure it out. whether or not you wanted it to return the value if the range is length 0 is entirely up to the programmer. the only time i value abstraction is if I don't care about the code only what it returns. I like seeing things like this. conn = database.connect("database"). that is acceptable abstraction to me. if I am diving into the logic of a function, please make it clear about what it does don't abstract it with more functions i have to investigate. what would happen if I wanted it to return 0 if the input range was 0? I would probably end up rewriting the original function to something more understandable that doesn't use library calls.
if you wanted to compare assembly instructions, why not use -Os? Though im not even sure why even graph the number of assembly instructions generated, since it doesnt correlate with how fast the code will actually run. More assembly instructions just means your binary file will be bigger which is usually of little concern.
In case anyone missed it, and apparently many did, the idea of the original C code was to have some actually functional code to show how to reduce nesting It was not meant to be the best way of implementing the operation (it isn't) It was not meant to be perfect (it isn't) It also isn't the fastest, nor the prettiest, nor anything else It is a good example for what it is meant to be It was meant to give an example that can be abstracted to other situations (something it accomplished very well) Go watch the original video if you actually care about what it originally meant, but if you wanna whine about its imperfections, go on
I think you missed the point of the Code Aesthetic Video - it was more about control flow than functional programming and algorithms. The other examples he had, where it was more business logic and things you can't simply reduce into better functional mathematical expressions, are where his tips are needed to have a more readable and maintainable codebase.
Currently learning Rust, so great to see such a video about it. Absolutely great programming language. This is after years of experience with several C family anguages for me, with C# as my daily one, which is wordt mentioning for it's functional programming (especially LINQ) as well. Doing your example in C# the code looks pretty nice as well. The only 2 downsides are that for the range we still need to call the Enumerable.Range method instead of using the two-dot way. This is because the Range type behind the two dots in C# has not yet implemented the iterator (Enumerator) pattern today. And we also need the ternary operator solution for top-more-than-bottom check. Luckilly C# has a Sum method, but otherwise the Aggregate method was still an option for the accumulation solution (reduce operation).
I’m not a professional at all but what I think is that the “never nester” TH-cam video was going for code readability making it easier to understand. However I think ur going for something different. Not to say ones better than the other, but you don’t always have people who understand what you’re doing with lambda functions, iota, pipelining, etc. anyways all I’m saying is there’s pros and cons to everything
Although rust is influenced by functional languages in general it is inspired by ocaml heavily. The first iterations of the compiler were written in ocaml.
Yea, it would make sense that it is more influenced by OCaml due to the first versions of the compiler being written in it, but I have heard Rust folks say that Rust is Haskell + C++ so I'm not sure which was more influential.
@@code_report the only part of rust inspired by haskell directly was traits (typeclasses). you can look up graydon hoare's answer and you'll see that he said that Rust was mostly inspired by Ocaml and C++ (trying to remove it's footguns) not Haskell
@@myxail0 that was only added later, in fact not that long ago it was a try operator and then changed to ?. so it might have gotten more features from haskell or other languages with stuff like that, I'm not qualified to answer that tho
@@code_report Traits or type classes and rust’s error handling were very directly influenced by haskell. It sure has gotten a ton of features from c++ (without the footguns of course). All the languages mentioned influenced rust heavily, and I am not sure which one was more influencial either.
I started getting the CodeAesthetic videos recommended as well, so I had seen that one already. I like your approach using the built in helper functions, but I don't think that approach would have fit the CodeAesthetic video as those functions may not apply to other programming languages and he was providing a general example to be applied to any language.
Honestly, looking at the C++ part, I can't really think who would write code like that. You don't need to use every standard library function or have a lambda for a command as simple as "return e % 2 == 0" since you use it once. I think the reformatting stage was more than enough because the rest is just overengineering at that point.
Great video! Do you mind sharing what software you use to make the video? And how did you get the nice transitions with the text moving around on refactors? Keynote magic move?
I'm not a C++ guy, but the C++ refactor is so confusing to me. Personally, I think lambdas and ternaries are best used when passing them as parameters, not just as drop in replacements for more basic syntax. The Code Aesthetic video showcased using early returns with ifs, and I feel like that would have been way nicer.
I took a look at compiler explorer with GCC. The saddest thing that I saw with the c/c++ code is that the compiler never realized that it can test 1 element and then just do the sum, incrementing by 2 each time. I wonder if the Rust compiler could do better.
If the surrounding if can be omitted in rust, it can also be omitted in C, and probably in C++, but I'm not sure how iota is defined. Using the number of assembly instructions is a very bad metric, at least if you suggest less is better. I've created my own godbolt link under z eG84e44hr. If you look at the assembly generated from Rust and C, it's basically equivalent, and nicely vectorized with avx2. The C++ codegen is basically non optimized, and will be way slower.
Good video, but I'm not a fan of the "number of assembly instructions" benchmark. It would have been nice to have an actual time benchmark, which is not hard to do.
BEST THUMBNAIL EVER I always post in comments, that Rust is crossover of C/XX + Haskell + programmer experience of severe bugs. If you understand C/XX and it's struggles and pitfalls. If you understand Haskell and it's struggles and pitfalls. Then you understand why Rust is done the way, how it is designed to.
I was testing out the C++ refactor(s) and I can't get either 5:03 or 5:40 to work. The first returns the wrong value and the second throws an error about ranges::accumulate being overloaded or something. I installed ranges-v3 via vcpkg. VS 2022 using C++ 20 standard. Any ideas what might be wrong?
I'm no rust expert but there is also a bug in this definition: fn calculate(bottom: i32, top: i32) -> i32 { the sum could be larger than i32 max value. It may also apply to other examples.
7:26 I believe you changed the behavior of the function slightly. In the case where top == bottom and both are even numbers, the revised code will return that number. However, in the original code, it would have returned 0. edit: he points this out in the video
I looked at the example on GH and I think the function as written is just wrong. Aside from the obvious, #include "stdio.h" which should be #include if for no other reason than convention's sake, but the name and how it works don't seem appropriate to me. It's intended to calculate the sum of all the evens in the range, but why restrict the range in one way? I think it should be: int sum_evens_in_range ( int a, int b ) { int i; int sum = 0; if ( a > b ) { i = a; a = b; b = i; } for ( i = a + ( a & 1 ); i
I did some testing out of curiosity. The Rust implementation at 7:55 runs up to 12 times faster than a simple for loop implementation of the same thing when large random numbers are input. When a static range is the input both run at the same speed. I wonder why is this? If compiling without the --release flag the simple for loop can be several times faster than the syntactic sugar version.
It's been pointed out before but there's a closed form expression for this :) If x,y are even, the number of even numbers N in [x,y] is (x-y)/2 and then the sum of the them is Nx + N(N-1) My implementation is a little nasty so it can handle the odd/negative cases but would appreciate if someone could clean it up! I count 22 instructions on gcc 12.2 -O3 auto calculate(int x, int y) -> int { const auto N = (y - x) / 2 + std::abs(x * y + 1) % 2; const auto e = x + std::abs(x) % 2; return y
Congratulations, you took perfectly readable code, with a logical progression, easily debuggable and turned into a nice spaghetti. The boomer in me would say "that's what is wrong with the new generation".
It's declarative instead of imperative. If you know what the functions do, then the declarative code is not only more readable but also less prone to bugs because you are using functions in the standard library.
The entire craft of programming is composing different pieces of code, right? This is exactly what declarative code does best
Edit: as others have pointed out, in the C++ case the original comment is justifiable. I thought the parent commenter was talking about languages like rust and haskell
I know this is meant to be the pin of shame but I agree in the case of C++. It's so difficult to understand compared to the original and Rust versions.
Cool video though
C++ Version really was that
I was thinking the exact same thing myself. I'm glad I wasn't the only one
see the thing is that iota's greater instruction count is due to the fact that iota, being the smallest character, sometimes have the largest impact, both metaphorically and literally.
I just subbed to your channel, funny seeing you here
tf does that mean
@@alexandriap.3285 Yes
@@alexandriap.3285 The numbers of the instruction generated doesn't necessary mean anything.
Really makes you think
To me, the C code is perfectly clear and readable, and the Rust and Haskell is perfectly clear and readable. However, whenever I look at C++ code like this, it looks like an accident waiting to happen with it's amazingly aggressive overloading of syntax.
The iterator being declared in the for loop triggered me.
I was thinking the same thing.
It got less readable by the end of the C++ code. “One-Liners” are not always the best option when it comes to clarity.
@@austinmajeski9427 yup lol
I think of course the clear misuse of operator overloading by the standard (which is ironic by itself) is a big problem in c++ but another great problem is the baggage c++ is carrying. They constantly add new features and libraries to std which build on the old libraries and therefore they never retire old libraries. This leads to a thousand ways to write the exact same assembly instructions. Also trying to forcefully reuse older parts of the library prevents them to construct fresh and simple new APIs like in the ruet example.
Hack the same code can be written in c++ if you write the required library code from scratch. So it's not necessarily a problem of the language but the rotten standard library.
One argument against this line of argumentation is that they keep backwards compatibility while reusing code they can't therefore remove anyway. But as I explained this leads to bad APIs and isn't the best solution to fix incompatible libraries. A better is again the rust way. In rust every library and program declares the std library and 3rd party library versions its using. The cargo build tool is then compiling each library with its required versions and links them together. Therefore older libraries can use removed std features while being used in never applications and vice versa. This even applies to the language version and keywords. Therefore impossible combinations of libraries do not exist on rust and the rust committee is free to retire library functions and keywords replaced by better alternatives.
@@bram3367 I wouldn't say its overengineered. Its perfectly readable. The before... Completely different story.
A big part of the point in the original video was less about this kind of reduction, and more about style - while this kind of reduction is an excellent technique to go through when you can, the actual code itself was really just an example to point out the underlying issue of code style. The process of pulling out nesting into preconditions is broadly applicable to many different situations, where this kind of code transformation is much more specific and situational.
Style is one axis, paradigm another. I think a better sample for this video would've been some callback heavy snippet that's unreadable but still somehow manages to pass as pure and functional. Considering the OG video was about style from within the imperative way of thinking, this would've been fairer. That said, it's really hard to come across code that is pure and badly styled at the same time. I guess this is due to the fact that most programmers start out as imperative ones and branch out to the functional world as they grow. But I do think in languages like Rust it's very natural to write `good` code similar to what is shown here because of zero/low-cost batteries-included standard libraries.
The style that even makes it possible to nest like this is impossible in the paradigm of shape thinking, of functional programming.
@@arisweedler4703 I strongly disagree. While a functional style naturally tends towards nesting less, it is still certainly possible to write deeply nested code in functional languages, and there are - albeit rarely - situations where it's needed.
But the point remains: the original video's intention was to point out how to deal with nesting better in a more imperative paradigm, by making use of preconditions and early exit. Responding to that with "just change to a functional style" may indeed be the best practice path in many cases, but it misses the point entirely, and is a much less broadly applicable principle.
@@foxoninetails_ else you said, yes. Totally agree. Lisp can get pretty nested. I took a class in college that used lisp and I didn’t know what I was doing and my friends and I wrote some successful and bad code. Nested deep.
And as for the point of the original video, yeah. It was like “we’re imperative, this is better” and it is, I agree. In fact, the “bug” where there are two ranges used in the control flow and yet they are off by one is no longer a bug in this new style. It is now contextualized as an intentional bounds check using one range, then a map filter / APP one-liner / shape pipeline over another, similar but different range.
I love videos about refactoring, because it's like watching a fantasy movie about a world where we have time to reduce the tech debt in our codebase
For those who are wondering, the sudden reduction in the number of instructions at -O3 starting from filter comes from the fact that the compiler was using SIMD instructions to handle multiple loop iterations at once in the previous versions, but after that, is unable to see through the views to perform the same optimization (which is disappointing). Edit: see reply below for better news!
Now I am confused... When trying the same code in QuickBench, the calculate functions get inlined, and then gcc optimizes the version with views using SIMD too, but with a way tighter inner loop... and that makes the views version 3.7 times faster than the original when operating on very large ranges. I might have made a mistake somewhere, I don't fully trust how I used QuickBench. Maybe someone else could try it out? For info, I was benching the call calculate(54, 12393243) with versions 1 vs 4 in the code_report Github repo.
Edit: after further tests, it seems the views version get optimized with SIMD when inlined and using literal arguments. But when I pass in volatile ints as arguments, then I lose the SIMD optimization in the views version and the original version is now 3.4 times faster. So yeah, it is quite sensitive.
@@VincentZalzal I verify the same result. Any ideas where this might be coming from?
@@pesto801 No idea, I'd have to analyze the generated assembly to understand how the versions differ. We might be right on a threshold the optimizer uses to decide whether it is worth it or not to use SIMD.
Reduction?
Am I color blind, am I not reading this correctly, or are you not reading this correctly?
For -O3, they use the color blue, and the blue bars are by far the highest for the first ones, which makes no sense because -O3 would optimize the crap out of the original code
so are the colors in the legend supposed to be reversed?
Or, also a likely explanation: the number of instructions means nothing, without the context of what those instructions are
@@R4ngeR4pidz You are reading this correctly, blue is for -O3, and the number of instructions decreased when using views. The reason is that the code size usually increases when the optimizer starts using SIMD (versions on the left of the graph). It increases because the compiler has to add code before and/or after the main loop to handle the last few elements if it cannot guarantee that the number of elements is a multiple of the SIMD register size.
So, to recap: versions on the left uses SIMD and thus have more instructions, those on the right don't use SIMD and are shorter. And yeah, as you say, the number of instructions is not necessarily a good metric for code performance.
Keep in mind that pure # of instructions may not be the only "optimization" parameter to be observed. C++ often optimizes code to run faster (having less branches) and this may lead to a few more instructions but with HUGE gains in execution time.
-O3 is loosely -O2 but also with aggressive optimisations that increase the code size. So usually it has more instructions, but they end up being faster. Sometimes, though, it isn't faster, because the increased size of the code increases the amount of cache misses to a poibt where it decreases performance. So imo the number of assembly instructions is simply not a great measure of performance. Very interesting to see and definitely an aspect of performance, but clearly not the only one
This is true of C, C++ and Rust. They all have optimizing compilers for a reason
@@skeetskeet9403 All compilers have grown from C, have the same heritage and basic optimization techniques. The first C++ compiler was written in C front end that generated C code. In time it's flipped, and now C and C++ compilers are written in C++. Most other compiler also started as C++ and only in time some were able to go native.
@@ElementaryWatson-123 I don't see your point. The heritage of a compiler is entirely irrelevant to its performance and optimizations it's able to apply.
@@skeetskeet9403 of course, it's relevant
C++ segment: "Already here you can see by using more modern features we have created an absolute abomination the very sight of which has caused God to abandon us!"
Think of how much more readable the same code would be in Whiteboard C.
#include "StandardIO.h"
int calculate(int bottom, int top)
if (top > bottom): int sum = 0
foreach bottom in top: (number % 2 == 0): sum += number
return sum
else: return 0;
main():
println(calculate(5, 12)) // 36
println(calculate(5, 3)) // 0
So many C++ devs have spent so many years putting in feature requests that leave them feeling like they've contributed something to the whole, but we now have high level languages that take LONGER to understand and create MORE opportunities for readability and maintainability problems over the long run.
Love watching c++ devs argue which of their unreadable buggy code is more unreadable than the other
Readability is the ability to easily understand the code, it not only depends on the code itself but also on the reader's expertise. If you don't know Japanese, any of the books written in that language no matter how beautiful, would look to you like gobbledygook.
I know that's not the point of the video, but this can be solved with a very simple O(1) implementation by using the arithmetic series sum formula. That would have very few assembly instructions also :)
Yes would be interested to see examples for some arbitrary input sequence. Which makes the awkwardness of needing to use C++ range's std::ranges::iota() less of a problem, but you still might need it to generate some indices to something else, so I still like Rust's terse (begin..=end) and wish C++ had something like that (especially to be used in for loops).
EXACTLY!!! #$%^&*
I know this is not what this comment is about, but technically using the arithmetics series sum formula would be O(n log n).
That is because a formula is only as fast as its the slowest part, and multiplication is O(n log n) (This was proven quite recently actually, like in 2019, but it is very complicated, there is something simpler you can read about though, it is called Karatsubas algorithm, a guy called nemean has a good video about it)
In practice you are limited by the variable size so you would have to implement a custom data structure to handle that, and only then the time complexity would take effect.
@@anonanon6596
Well, that is true for arbitrary precision numbers (i.e. bignums). But usually we restrict ourselves to employing fixed sized numbers (e.g. 64 bit) that can be fed directly into the CPU's hardware multipliers, and those require constant time (a known number of cycles).
Adding two numbers has the same issue, we need to traverse all bits (i.e. log2(n)) and the carry bits need to be propagated.
@@japedr I said "technically". I was being pedantic.
But if we limit ourself to a single architecture, doesn't using big O lose some of it's meaning?
Like, there are instances where N^2 is faster than N for small input, but small is a meaningless word in mathematics, it can be arbitrarily large.
Just wanna comment that I am pleasantly surprised by how expressive rust looks like.
I mainly work with Julia which is also very neat because the equivalent code there would be.
calculate(bottom,top)=sum(filter(iseven,bottom:top))
Julia's type inference handles all typing in this case and also handles the (6,6) case you mentioned.
When comparing assembly instructions you need to be very careful, since many instructions can execute faster than few instructions, especially when you unroll loops and use simd. Thats why its *always* better to benchmark.
For C version of the program: How about not checking if(top > bottom). It should have been >= instead of >, anyway. Now, if top < bottom, sum is already 0, so when we return sum, we'll return 0. That means that we can completely remove the outermost if-else. We can also go one step further. The expression number % 2 == 0 is either true or false, which is in C interpreted as 1 or 0. When it is true, we add number: sum += number, which is the same as sum += number*true. When it is false, we do nothing, which is the same as sum += number*false. Now, we can remove the check if (number % 2 == 0), and instead of sum += number, we can write sum += number*(number % 2 == 0). The entire function will then look like this (I renamed number as i):
int calculate(int bottom, int top)
{
int i, sum = 0;
for (i = bottom; i
C89 in 2022 :) it's been some time, I do think declaring the loop variable i in the for loop is objectively better.
Also I don't think it is more readable to add zero, and I would expect the assembly to be the same if there was an 'if' statement, though could be wrong of course.
@@frydac I agree that it isn't more readable, but if the goal is to reduce nesting, this is one way to do it. As for the assembly code, I've never used assembly, so I can't comment on it. Although, it would be interesting to compare performance between different languages, as well as different code (like the one in my original comment).
Even worse than infinite loop, signed overflow is undefined.. here be dragons
There is even simpler way of doing this. Just round "bottom" to the next multiple of 2 and add every second number. Something like this:
#define MOD(x, y) (((x) % (y) + (y)) % (y))
int
calculate(int bottom, int top)
{
int sum = 0;
for (bottom += MOD(bottom, 2); bottom
@@clawsie5543Yes, this is better. Define is probably unnecessary, though. It should be enough to write bottom += bottom % 2.
The initial code wasn't perfect, not even near, but...I feel like too often people confuse short code with good code. I'm mostly coding in python and it allows to produce a lot of oneliners which do massive work. The issue is though, that quite often they end up being unreadable, which is counter productive in the long run. The first line of zen of python is "Beautiful is better than ugly." and it's the first line for a reason.
I disagree. When I couple a few related lines in python using ; It increases readability. But however, don't just couple all lines in one line. It decreases readability. This same concept applies to me doing other languages like C#, Javascript etc.
Less code is not always better, but the examples in this case (at least the Rust and Haskelll ones) are not counterexamples. They are much clearer right out the gate what they do, and even better, it's clear that they actually do what they're supposed to. Python definitely has its issues in regards to having very condensed ways to do things which are harder to read, but Python also has multiple ways to do this that are similar to the Rust/Haskell approaches
def calculate(bottom: int, top: int):
return sum(n for n in range(bottom, top + 1) if n % 2 == 0)
def calculuate(bottom: int, top: int):
return sum(filter(lambda n: n % 2 == 0, range(bottom, top + 1)))
The ugly lambda syntax and right-exclusive ranges make it a bit less attractive but ultimately still quite readable. I would prefer to see either of these over the original C code any day.
Short code can be good or bad and long code can be good or bad. You can write complicated long code but also very readable short code. If you can reduce long code into short code, while maintaining readability it will be always better.
Not only does less lines mean there can be potentially less bugs but also it's faster and easier to read. You can understand what it is doing "on the spot" and don't have to scroll through a lot of code.
A lot of code reduction is only about thinking declaratively and using FP functions like map() and reduce() efficiently. Unfortunately a lot of programmers are not taught these. I also learned it only after 10 years of programming...unfortunately, I wish I would have learned it way earlier.
I am flabbergasted that his complaint about remove nesting was just calling functions and his solution is just to call super convoluted std c++ templated functions that make the code less readable.
But but, pretty lambda's and collect...
Question, why not use early return instead of using ternary operations? They are in my opinion simpler to read since you have all the exit conditions at the top and then you just have the code that you want to execute.
I'd recommend you watch the original video by CodeAesthetic. It goes a lot more into those kinds of refactors.
Thank you! Exactly, I HATED that use of ternary operator. Just use a guard clause.
It's probably because when using a ternary operator you end up with a single expression in the function, which fits nicely with declarative programming. The thing is... even if C++ has become multiparadigm over the years and has functional features, they feel more like an afterthought and are kinda::disgusting::to::the::eye(). Rust, in the other hand, was handcrafted with functional programming in mind, and the readability when coding that way really shows.
But yeah, the title of the video should be "Declarative Programming from C to C++ to Rust", because an early return would be optimal if we were talking about imperative programming or OOP (which is actually very readable when done in C++).
@@fsalmacis5993 Ternary is fine for some cases. For cases like this, atrocious!
because its not nearly as fancy or cool looking
I'm sure Rust was more inspired by OCaml than Haskell. After all, the first iterations of the Rust compiler were written in OCaml.
You here LMAO?
Correct, Rust was more inspired by ML languages, however some functional aspects from Haskell still come through, such as how typeclasses and traits are similar.
@@zzzyyyxxx Yeah, I love that :D
@@SourCloud I am *everywhere,* as long as it's slightly rusty B)
@@zzzyyyxxx Isn't ML language a repetition ? At this point, I think it's better to write meta language, not a lot more characters, and everybody will know what it's about. Or just write ML but it's also used for Machine Learning so it could be misleading.
5:16 In CPP23 this feature is called “std::ranges::fold_left”
I love how elegantly we can communicate the solution in functional languages! Here're scala and clojure and they are basically the same as Rust - I like them all!
def calculate(bottom: Int, top: Int): Int =
(bottom to top).filter(x => x % 2 == 0).sum
(defn calculate [bottom top]
(->> (range bottom (inc top))
(filter even?)
(reduce +)))
Every time I look at scala code I think to myself that the code looks super nice. I'll have to find some time in the near future to dive more into it
+1 for clojure!
All those languages lack customization necessary to solve real life problems. People always show trivial toy examples to show off how some language looks simple and concise. Then you ask them to write something even modestly more complex and you see how all the ugliness starts coming out. The same RUST, you can't write anything practical without "unsafe", and then you write an ugly code resembling C, at which point every C++ developer starts laughing at that.
Super interesting video. I fell in love with rust for two reasons. Firstly because I wanted a language that can do low level code manipulation but a language that nicely implement functional programming (I tried haskell and the experience was lovely)
I also liked it for its reasonably good interop with other programming languages and because it promises to become even better!
My simple test for a programming language is to write a generic vector. Writing an efficient vector in C++ with custom allocator, placement, avoiding unnecessary initialization, exception safety, appropriate iterators, iterator traits, move semantics, etc. is straightforward. It takes some time but every competent programmer can do it easily. I used to ask that during a C++ interview to see if the candidate knows the basics. Any language that is worth exploring should at least provide a similar degree of expressiveness and flexibility.
Now welcome to RUST. It turns out performing this simple task is a chore, a code is getting peppered with "unsafe", becomes pretty verbose, and actually looks as ugly as C. C++ is just conceptually richer and more expressive. And another thing you will immediately notice, how poor variadic generics are, RUST traits are nowhere as powerful as C++ concepts. RUST just needs years and years to grow, and at some point it will become a C++ with different syntax.
@@ElementaryWatson-123 Thanks for your return. I have heard that C++ is doing amazing things. I think it will still be needed in the future. But you won't understand the strength of Rust or any other language if you keep your actual perspective
If you try to use a screw driver like a hammer it will never work since it's a screwdriver and you can do the same with any other tool which is not a hammer. If I try to use any other language like Rust, I could emulate some attribute but it will be most cumbersome than writting directly in Rust. So what's the point of using other languages ?
The fact that Rust is replacing C++ in some area and even used for the developpement of the linux kernel doesn't mean that Rust is always superior than C++. He is juste better in areas where performance and safety are a must. Of course it's not for free since we lose the legendary expressiveness of C++
So yes Rust will be an eternal imperfect imitation of C++ long as you consider Rust as C++. But his genesis ans his philosophy are different
I am happy C++ meets your needs. I am actually designing a programming language and study the differences between actual programming languages to see the features needed for it purpose and Rust is a good candidate to help me design his core calculus
There are very clear ways you can make the C code less verbose while still maintaining its readability. I’m surprised you didn’t try to fix it, and instead jumped straight to C++.
Edit: This is how I would write the function in C, and yes, I prefer this even to Rust.
int calculate(int bottom, int top)
{
if (top < bottom) return 0;
int sum = 0;
for (int i = bottom; i
Better, but could be better still by ensuring the bottom is even then incrementing by two each loop. Even that could still be improved, and I'm kicking myself for not thinking of it first, but by realizing that there are exactly half as many evens, possibly minus 1, in the range between the two numbers. No need to even loop.
@@anon_y_mousse The point is to add up the values though, for example in 2 to 10 you get the result 30, in 4 to 10 you get 28 via the loop, I've not been able to think of a calculation that does not involve the loop to get those values, how would you get them then?
@@zxuiji Another poster already did the work so I'll post their solution: b = ++b >> 1 > 1
This is the way the code should be written. Junior programmers should note the separation of the if statement, and the final return from the for loop, also how the sum initialisation is part of the block, because it is referred to the block. This is the way I write the code, and everyone should do sir. I am happy I am not the only one. This gives me hope. Well done.
Comparing trivial examples is useless. I always ask to provide equivalent implementations of generic vector in different languages. It's not a difficult task, but it touches a lot of basic stuff that programmers deal with every day, so it serves as a much better test.
I'm only on 1:36 and the entitlement is off the chart.. "Is not what I wanted to see.." ohh boy..
I would have loved some benchmarks at the end. Like does the original version run faster? Do the std functions introduce unnecessary complexity or are they faster?
i completely agree with you that the final version of the cpp code is more declarative and i like it
I don't consider myself a professional programmer, however, in my job as a physicist I have experience with numerical computation and programming with a focus on accuracy and speed. It was obvious that the original code was inefficient and this prompted me to comment and test before commenting.
I assume the purpose of the example is to show how to modify code to use standard, safer, tested methods and to make the code more readable. Goals I am sure we all share. One of the bigger issues in programing, of course, are errors with unknow size of arrays.
Not withstanding that there are far better ways to accomplish the same task as the C code, this particular code snip, however, is poorly chosen to illustrate the intended concepts. As pretty as the Rust and Haskell code are I remain unconvinced they are a significant worthwhile improvement. The modifications shown have all the hallmarks of using a sledge hammer to kill a fly.
Adding even integers is trivial. Even the original C code shows that. Assume "low" is the lowest-even we want to include and "high" is the highest-even we want to include then:
for(i=low;i
Your analytical solution, while very elegant, suffers from one problem: it also doesn't faithfully replicate the original I/O. What about negative inputs or mixed positive and negative inputs? Int is signed by default and negative inputs don't default to zero in the original code.
You need to distinguish three cases, to make your approach work:
1. bottom & top > 0 stays the same (though I would use low = bottom - 2 + bottom % 2 as that automatically gets you the last positive even number below bottom and 0 is excluded)
2. Bottom ≤ 0, top > 0, needs their low adjusted to the nearest higher even number and negated: low = -(bottom + bottom % 2) this also covers the bottom = 0 case.
3. Top ≤ 0, bottom < 0, flip the signs and top/bottom to determine low/high, then multiply the sum by -1
I like your thinking about not altering the I/O of the function because bottom == top might need to return 0 and in making use of n(n + 1)/2, however, in doing so you have also changed the I/O because nowhere in the code does it check for negatives, which might be a valid use-case. Like you said, we'd need to know more about the original application.
You are totally correct that readability has a cost, but after spending a few years working with code written by people with a different focus, readability is more than just nice, it is a godsend.
If performance is a primary concern, I would suggest keeping the original code in the unit test as a reference and to compare to the optimised code to ensure the I/O has not been altered or to document how it has been altered.
Getting back to the point of the original video, less indentation, we can re-write your code (not tested, probably has mistakes):
int sumEvenIntRange(long int low, long int high) {
// Method based on formula for sums of integers to n = (n*(n-1)/2)
// If n is even, the sum of all even numbers up to n = ((n*n)+2n)/4
// This assumes low and high will both be positive, so:
if (low < 0 || high < 0) {
return 0;
}
// this should be the second change everyone makes! (function name first)
if (low>=high) { // Condition from original code doubted by some
return 0;
}
high -= ((high%2 == 0)? 0: 1); // high-even to be included is either high if even or one less if high is odd
highEvens = ((high*high)+(high2; // compilers probably optimise multiply/shifting
// don't calc low sum if not required
if (low
@@SpaceMonkeyTCT Your assumption is incorrect. I tested all the code. I did not however test for negative integer input as you point out.
Honestly, I was just mentally lazy to check the math to see if it would work for negative numbers. It does.
Modifying to replicate the original code for both positive and negative input the code is now simpler:
int calculate ( long int low, long int high) {
if(low
@@logaandm I was also too lazy to check the maths for negatives, assuming it only worked for positives, good to know it works negative too. I still stand by my point of readability when it comes to inverting the outer if to return early and even if you don't like ternary operators or shifting, I find them equally readable (which is not very) :)
@@logaandm I ran some tests and looked at the assembler. Calculating the low and high is basically the same using ifs and using ternary, except the high ternary reduces to high &= -2; a neat trick! It rewrites ((high*high)+(2*high)-(low*low)-(2*low))/4 into ((high*high)+(high+high)-(low*low)-(low+low))>>2, which made me think of rejigging the maths to use fewer operators so my current code is now:
int sumIntRange(int low, int high) {
if (high > 2;
}
Which is a tiny bit more performant but not readable.
int a2 = (a + 1)/2;
int b2 = b/2;
return (a2 + b2) * (b2 - a2 + 1);
Also can be calculated as arithmetic progression sum without any filters :) Thanks for the great video, I used it as the opportunity to try the Rust
This was my first thought as well, although I don’t think the focus of the video is coming up with a more creative solution, but rather a refactoring/reformatting of an existing solution.
That's the problem with trivial examples, they are useless when comparing languages.
Maybe someone in comments also spotted this bug in C code: if the top is equal to INT_MAX we will get an infinite loop cause after checking number
топ и боттом проверяются "где то" выше, зачем у вас неимоверное стремление проверить и перепроверить всё и вся, что перепроверять не следует?
входные параметры в функцию никогда и ни за что не должны проверяться в самой функции, все эти проверки должны быть [как минимум], в вызывающем месте который вызывает эту функцию
и даже кандишен (топ>боттом) вкрай идиотичен, потому что он должен быть опять и снова "где то" выше, т.е.
main() {
иф(топ>боттом) кэлькулейт(боттом топ)
}
и даже это идиотично, ну потому что на самом деле проверка (топ>боттом) _уже_ где то _существует_, ессно на этапе создания структур например, именно там она должна быть
а вы как обычно "стремитесь" к массовой идиотии, и проверяете всё и везде, однако же идиотия на то и идиотия, ну потому что полностью лишена логики
в итоге все ваши идиотские проверки проверок от проверок над проверками только лишь выжирают батарейки и атомные электростанции, и толку с этого абсолютно нет, это банальное вредительство
а что бы функция не выкидывала переполнение, допишите к функции банальный комментарий, когда и при каких условиях функция будет валиться
Inclusive ranges: Just don't.
Good catch. That case would never with the original code though, since the bottom has to be less than the top for the loop to be executed on the first place
I've recently found more and more similarities in rust and python, actually. This is how I'd express it in Python, below. Replace top+1 in the range, depending on intent. I had the "Nevernester" Video in my recommendations, too. I still think he makes some good points, especially for beginners.
def calculate(bottom: int, top: int) -> int:
return sum(n for n in range(bottom, top) if n % 2 == 0)
this reminded me of an old code golf problem I solved, where I solved it something like this
sum(range(bottom + bottom % 2, top + 1, 2))
Imagine that: Python has the most elegant solution.
i don't like the n for n in
@@climatechangedoesntbargain9140 how about sum(filter(lambda x: x % 2 == 0, range(bottom, top)))
I understood the original C and it was easy to spot any bugs. As soon as you made C++20 substitutions I thought “what the hell is that?” and the code looked like illogical gibberish. In the commentary you then said “I think this is a lot nicer”! From my perspective you made simple, legible, code totally impenetrable to the point where I’d need to use tools to find bugs in it. I programmed commercial products in C in 1996 and C++ in 1998. I’m trying to figure out whether it’s worth learning this whole new syntax/toolbox that modern C++ requires- in this case it obfuscated the code.
I agree.
Its not. I program commercial products today (games) and I write code like you did in 1996.
not only is this not worth it, you might end up getting trapped in the same sunken cost fallacy that so many developers do, where they've wasted a bunch of time learning things that don't really improve all that much, and so not they have to lie to themselves about how the code looks nicer and smarter, and how everyone who didn't sink as much time as them learning useless things is just too stupid to get it.
But that's C++ nowadays, a language that provides 1 million ways to do something. That's why we are seeing a lot of new languages trying to conquer its place, bcs nobody wants the new C++.
I'm sure this is overly pedantic but the `iota(start, end + 1)` feels bad when `closed_iota(start, end)` exists.
I actually didn't know about closed_iota. However, it only exists in range-v3, not C++20 or 23 ranges. That being said, I did start using range-v3 when I introduced ranges::accumulate so I could have added it then. Thanks for putting it on my radar.
awesome to see how rust can even rival python in its conciseness and readability from time to time
here have some gay ass posing as gigachad
Me watching this guy "refactor" nice and readable code into the most monstrous spaghetti code I've ever seen
Note that since sum is commutative (and associative) you could use std::reduce in C++ instead of accumulate and use one of the parallel execution policies, if so desired. However the optimizer can also automatically vectorize loops sometimes as well, especially dealing with integral types and clear operations like +, so it is probablydoing that for the C code or the loop based C++ code and maybe the ranges one too, depending on the specific compiler etc.
Had a similar reaction to that same suggested youtube video. Love the content and comparisons between the languages!
The entire video I was waiting for you to change the C example by replacing "number++" with "number+=2" and reduce O(n) "even" comparisons to O(1) (only first number).
The final C++ example made me want to scream.
At least in Rust the same thing looks nicer
The best way to "never nest" this is to use a much faster O(1) algorithm instead of the O(n) ones presented:
int calc(int b, int t){
b = ++b>>11
Best version I've seen yet that matches the output of the original. I like it, even if I think the functionality of outputting 0 when the range is passed in the wrong order is wrong behavior. Even correctly handles negative numbers.
Damn ,The haskell code looks so good
Just looking at number of instruction generated is not always useful (unless you're trying to fit the code on a tiny PIC or something): first, the optimizer will try to inline functions and unroll loops (more in -O3 than -O2) which is great because inlining (and unrolling to some extent) makes a bunch of other optimizations possible, and in addition, in C++ you will get alternate code paths generated away from the normal code (kind of like having a separate, alternate function not next to the "hot path" of instructions in memory) for exception handling because they are expected to be unlikely but still possible; but the actual time (or instructions executed) in the case of no exceptions thrown will still be pretty small. Also tip when trying to look at the assembly code in Compiler Explorer to judge your code: use argc or argv or console input or random numbers or similar as dependent values (inputs) of the example code, otherwise the compiler/optimizer will just evaluate the code at compile time and the resulting program is actually just spitting out the constant results (even if the code for functions etc. is still there in the binary.)
Can confirm, at -O3 I've seen entire test suites reduced to printf( "%d
", result ); // though I did it intentionally, it was still funny to see.
I feel the C code itself could have some improvements:
int calculate(int bottom, int top) {
if (top > bottom) {
int sum = 0;
if (bottom & 1 != 0) bottom++;
for (bottom; bottom
good refactoring.
A further improvement can be:
int calculate(int bottom, int top) {
if (top
I’m guessing the bug is that if top and bottom are equal, the c/c++ code returns zero, while the rust code actually runs the filter/sum combo on that single int.
Edit: I was right
One thing I didn't see mentioned in the video or in the comments so far is the matter of compile times. I did some quick tests locally using the 'time' utility on macOS, timing compilation of each version. Unfortunately compile times get gradually slower each time. The initial C and reformatted C++ version take approximately 0.06s to compile (M1 Macbook Pro, Ninja build), but by the time we get to the range-v3 accumulate (ternary operator) version, the code takes approximately 0.8s to compile. This means the original version compiles 13x faster than the rangified version (an order of magnitude). It's tough as I do really quite like the more declarative approach of the final code, but the cost (when magnified across an entire code base) is prohibitively high at the moment.
It's uncanny how we get such similar youtube recomendations
I read the last histogram to mean that the simplest no-nonsense C with the O2 flag is way more efficient than all the funky new-age stuff that came later
That seems the most likely but you shouldn't jump to conclusions that fast.
More instructions doesn't mean slower, those extra instructions could be there for loop unrolling and vectorization which is what I guess happens in the O3 C.
In any case, the only way to know wich one is faster is to benchmark it.
"Nothing better than C" - Linus Torvalds
A "friendlier" option to the ternary operator which is going to be more readable for some people is to check the reverse of the if statement first, and "exit early", placing the evens sum logic after the if block. This gets rid of the need for the else block and leads to an almost entirely flat function...
I had similar thoughts about that video. He did make it better, and I can't fault him for making content that is helpful for someone who wasn't initiated to functional style.
I took the liberty of rewriting the calculate function in C. By observing that we only sum even numbers we can use the mathematical formula for even sums, which lets us skip the for-loop entirely and lets us skip doing the modulo operation on every number in our range, going from an O(n) solution to an O(1) solution.
int calculate(int bottom, int top) {
if (bottom > top) {
return 0;
}
if (top % 2 == 0) {
top = top - 2;
}
if (bottom % 2 == 0) {
bottom = bottom - 2;
}
return top / 2 * (top / 2 + 1) - bottom / 2 * (bottom / 2 + 1);
}
Thank you, that was very interesting. I did not get involved in C++ back in the 90s because there were too many problems to make the transition from C which I had been using since the 70s, but now I can see that it is a time to take a fresh look at C++ and (more importantly) at Rust. Cheers.
This kind of refactor feels like such a revelation after watching all your APL videos. That whole thinking with algorithms thing.
So i got recommended and started watching CodeAesthetic and love his style and explaination. Now its suggested you, its one big circle
I love it when I find a random video that replies to another random video I watched. I missed the old TH-cam algorithm when I could see things that I had never seen before. this kind of reminds me of that
Awesome! Rust and Haskell love!
Also saw that Never Nesting video and was also unimpressed. Not only that he ended up creating an off by one error when he inverted a conditional statement
Cool video, thanks for sharing. It's an interesting discussion; the transfer from imperative to declarative programming styles.
As you mention in your video, the declarative syntax is less overwhelming once you become familiar with the tools available to you and that's where my biggest criticism of it lies.
I have found that declarative code tends to have a negative impact on maintainability. The inability to use skills learned from other languages requires contributors to have knowledge of language-specific utilities, thereby increasing the barrier of entry and reducing the pool of viable contributors or increasing the time it takes for individuals to be productive.
The declarative aspects of C++ are different to that of Rust, to that of Haskel, to that of JavaScript - but an "if" statement is an "if" statement everywhere you go.
In my experience, boring and obvious code makes for maintainable and approachable code. While I, as a hobbyist, love tricky fancy code - it really sucks in a project setting.
I analysed the assembly code. At -O2 the C code has the tightest assembly and at -O3 the C code is auto vectorized. The C++ version cannot be auto vectorized and the scalar code that is generated at -O3 is H O R R I B L E compared to the C code at -O2. I wrote the scalar assembly by hand and compared it to GCC and I managed to shave off one instruction but that was it.
I would be embarrassed to see either of those in my code, and so would Gauss. Here is what I would do (Rust version):
fn calculate(bottom: i32, top: i32) -> i32 {
(top/2 + bottom/2)*(top/2 - bottom/2 + 1)
}
Infact, I'm pretty sure that some C and C++ compilers (clang) would recognize the pure for-loop versions as the sum of all even numbers between bottom and top and replace it with the very formula I used and thus turning an O(bottom-top) function in to a O(1) function. However, when using all the "zero-cost abstractions" this would not be the case. Also, as someone that is trained from childhood with function composition being from left to right and not right to left I find the Rust version to be backwards (the Haskell version is alright).
I do believe that the code is not meant to be taken literally, e.g. you might want to edit conditions and values later, it's just an example code, hence why the function is called "calculate" and not "sum_of_all_even_numbers". But yes, you're correct that this particular version is solved in a const time with a simple formula.
@@peezieforestem5078 I know that it is an specific example. However, you seem to miss my point, that by using all those extra levels of abstractions, both the programmer and the compiler might miss trivial optimizations.
@@tordjarv3802 I see your point, but you assume optimization criteria.
Let's say we optimize for minimum code alteration for likely changing requirements while also maintaining minimal code size. Now, the solution you propose is not optimal, nevermind trivially optimal.
Had you known in advance that the problem will always stay this exact way, you would be correct. But you don't, in fact, depending on your environment, you can be 90% certain that at some point your customer will barge in and tell you to redo everything.
In this case, you optimizing for specifics of the problem is waste in the meta-sense, unless you wish to challenge yourself.
@@peezieforestem5078 If the customer demands to redo everything, it will not matter how the code looks (because you have to redo everything, or maybe you don't agree what "everything" means).
@@anserinus no those two are not the same, because we are talking about integer division. for example let top=5 and bottom=3 then (5/2 + 3/2)*(5/2-3/2+1)=(2 + 1)*(2 - 1 + 1)=6, while (5+3)*(5-3+2)/4 = 8.
"I saw it first!"
I was also disappointed and somewhat annoyed when I watched the original video. I'm glad you made this video to show what a proper refactor can look like.
Man I'm still learning basic C stuff and that last C++ code is just ☠️☠️☠️
im dev for years in cpp and it indeed is ☠☠☠ even for me. you dont use such things in code that many developers see and read every so often as to not confuse them. when writing code you try to write it in a way, that even dumbest but decently qualified dev can understand it. also names of variables and function arent what you would want to leave in source code. also, when it comes to extracting parts of code to separate functions - it is quite controversial - in my workplace we dont usually care enought to reduce function volume more as it is not that big yet and is decently uderstandable in current form. also, we dont really want to jump too much around the code to check each function due to size of project.
@@Ogrodnik95 You don't see this code in production but are you using C++ 20 or 23 yet*? Maybe this is a future we have to look forward to.
* I agree with what you've written btw, this question is not a challenge to what you said.
@@not_ever we 'just' started using cpp17 :D
I saw the same video, and I was also disappointed. It's been a few years since I wrote anything in C++, and I'm glad to that it's getting better at FP, as whether I'm coding in R (which hates loops), or in Julia (which is just lovely), it's a style that's a lot more pleasant to work with.
I know this was about style with regard to nesting, but since you mentioned instruction count...then what about:
int calculate(int m, int n) {
if (n
I was never able to get into c/c++ but now I really enjoy rust. Rust is great, pretty, expressive and straightforward. I remember having much difficulty in c++ finding the "right" way to do the simplest things... And it was uglier the nicer the feature was ....
how did you create the animation bewteen slides? looks great
I have a follow up video on this in a couple days 🙂
PowerPoint maybe
this is the senior content i was looking for
Curious about the instruction count if C++ were compiled with Clang (an LLVM compiler), if it would be more similar to the rust output (also LLVM).
Clang 16.0.0 -O3 generates 45 lines, as opposed to Rust at 50.
I came from Haskell to Rust and I love it
Even as Rust developer i must agree Haskell solution is quite elegant here.
@@MisererePart Haskell will be that way right up until you need to optimize. The gap between readable and optimized code in Haskell is probably larger than in any other programming language
@@jenreiss3107 Well, you can optimize code with rewrite rules and other stuff like that…
Having spent some time programming Javascript and Ruby, starting in high school with C and Pascal, I always though “Rust is a high level language” was an odd statement. But after seeing that C++ comparison, now I get it 😂
Does this make the code execution faster or is it just about how fancy code can look?
I would be honestly surprised if there was any meaningful difference in performance, especially at high optimization levels. I'm actually surprised it even produces different assembly at all.
the datatype, we either need to limit the maximum sum of "sum" to an int32 or make "sum" > int32, int64/long etc
I did not see any bug in the code. if the top is not greater than the bottom return 0. pretty clear what it was meant to do. If I had never seen a line of C code I could still figure it out. whether or not you wanted it to return the value if the range is length 0 is entirely up to the programmer. the only time i value abstraction is if I don't care about the code only what it returns.
I like seeing things like this. conn = database.connect("database"). that is acceptable abstraction to me.
if I am diving into the logic of a function, please make it clear about what it does don't abstract it with more functions i have to investigate.
what would happen if I wanted it to return 0 if the input range was 0? I would probably end up rewriting the original function to something more understandable that doesn't use library calls.
Congratulations, you just reinvented FP in C++
11:30 So the code we started with was more readable and more efficient lol
Superb. I so enjoyed this video. Gracias!
if you wanted to compare assembly instructions, why not use -Os? Though im not even sure why even graph the number of assembly instructions generated, since it doesnt correlate with how fast the code will actually run. More assembly instructions just means your binary file will be bigger which is usually of little concern.
In case anyone missed it, and apparently many did, the idea of the original C code was to have some actually functional code to show how to reduce nesting
It was not meant to be the best way of implementing the operation (it isn't)
It was not meant to be perfect (it isn't)
It also isn't the fastest, nor the prettiest, nor anything else
It is a good example for what it is meant to be
It was meant to give an example that can be abstracted to other situations (something it accomplished very well)
Go watch the original video if you actually care about what it originally meant, but if you wanna whine about its imperfections, go on
I think you missed the point of the Code Aesthetic Video - it was more about control flow than functional programming and algorithms. The other examples he had, where it was more business logic and things you can't simply reduce into better functional mathematical expressions, are where his tips are needed to have a more readable and maintainable codebase.
Currently learning Rust, so great to see such a video about it. Absolutely great programming language.
This is after years of experience with several C family anguages for me, with C# as my daily one, which is wordt mentioning for it's functional programming (especially LINQ) as well.
Doing your example in C# the code looks pretty nice as well. The only 2 downsides are that for the range we still need to call the Enumerable.Range method instead of using the two-dot way. This is because the Range type behind the two dots in C# has not yet implemented the iterator (Enumerator) pattern today. And we also need the ternary operator solution for top-more-than-bottom check.
Luckilly C# has a Sum method, but otherwise the Aggregate method was still an option for the accumulation solution (reduce operation).
I’m not a professional at all but what I think is that the “never nester” TH-cam video was going for code readability making it easier to understand. However I think ur going for something different. Not to say ones better than the other, but you don’t always have people who understand what you’re doing with lambda functions, iota, pipelining, etc. anyways all I’m saying is there’s pros and cons to everything
How to turn clean code to unreadable mess
LOL
I prefer a guard clause over if/else or ternary in this case. I think it improves readability.
well im glad you arent my coworker then
Although rust is influenced by functional languages in general it is inspired by ocaml heavily. The first iterations of the compiler were written in ocaml.
Yea, it would make sense that it is more influenced by OCaml due to the first versions of the compiler being written in it, but I have heard Rust folks say that Rust is Haskell + C++ so I'm not sure which was more influential.
@@code_report the only part of rust inspired by haskell directly was traits (typeclasses). you can look up graydon hoare's answer and you'll see that he said that Rust was mostly inspired by Ocaml and C++ (trying to remove it's footguns) not Haskell
@@FlanPoirot and what about monadic ? operator
@@myxail0 that was only added later, in fact not that long ago it was a try operator and then changed to ?. so it might have gotten more features from haskell or other languages with stuff like that, I'm not qualified to answer that tho
@@code_report Traits or type classes and rust’s error handling were very directly influenced by haskell. It sure has gotten a ton of features from c++ (without the footguns of course). All the languages mentioned influenced rust heavily, and I am not sure which one was more influencial either.
I started getting the CodeAesthetic videos recommended as well, so I had seen that one already.
I like your approach using the built in helper functions, but I don't think that approach would have fit the CodeAesthetic video as those functions may not apply to other programming languages and he was providing a general example to be applied to any language.
Honestly, looking at the C++ part, I can't really think who would write code like that. You don't need to use every standard library function or have a lambda for a command as simple as "return e % 2 == 0" since you use it once. I think the reformatting stage was more than enough because the rest is just overengineering at that point.
Great video! Do you mind sharing what software you use to make the video? And how did you get the nice transitions with the text moving around on refactors? Keynote magic move?
th-cam.com/video/Vh3y1ela-_s/w-d-xo.html
@@code_report Thanks, great! Just saw that. The morph transition with words in PPT is really, really nice.
Code readability inversely proportional to number of lines 💀
what is the point of making one-liner from c++ code? what is the point of comparing NUMBER of assembly instructions but not performance?
That CPP code progression gave me flashbacks to when I thought I was being clever writing gnarly javascript nested ternaries....
I'm not a C++ guy, but the C++ refactor is so confusing to me. Personally, I think lambdas and ternaries are best used when passing them as parameters, not just as drop in replacements for more basic syntax. The Code Aesthetic video showcased using early returns with ifs, and I feel like that would have been way nicer.
Ok. you got me. I just love this video! Wonderful to learn that rust improvement
7:50 even cooler you can use rayon to parallelize that workload with just a .into_par_iter() before the filter and sum
I took a look at compiler explorer with GCC. The saddest thing that I saw with the c/c++ code is that the compiler never realized that it can test 1 element and then just do the sum, incrementing by 2 each time. I wonder if the Rust compiler could do better.
Isn’t rustc uses llvm to optimize? So the optimization passes is the same in c++ when using clang++ to compile
If the surrounding if can be omitted in rust, it can also be omitted in C, and probably in C++, but I'm not sure how iota is defined.
Using the number of assembly instructions is a very bad metric, at least if you suggest less is better.
I've created my own godbolt link under z eG84e44hr.
If you look at the assembly generated from Rust and C, it's basically equivalent, and nicely vectorized with avx2.
The C++ codegen is basically non optimized, and will be way slower.
Good video, but I'm not a fan of the "number of assembly instructions" benchmark. It would have been nice to have an actual time benchmark, which is not hard to do.
yes, sometimes less instructions can be take more time to process.
BEST THUMBNAIL EVER
I always post in comments, that Rust is crossover of C/XX + Haskell + programmer experience of severe bugs.
If you understand C/XX and it's struggles and pitfalls.
If you understand Haskell and it's struggles and pitfalls.
Then you understand why Rust is done the way, how it is designed to.
I was testing out the C++ refactor(s) and I can't get either 5:03 or 5:40 to work. The first returns the wrong value and the second throws an error about ranges::accumulate being overloaded or something. I installed ranges-v3 via vcpkg. VS 2022 using C++ 20 standard. Any ideas what might be wrong?
I'm no rust expert but there is also a bug in this definition:
fn calculate(bottom: i32, top: i32) -> i32 {
the sum could be larger than i32 max value. It may also apply to other examples.
I currently learn C and when ever i see a C++ code my brain explodes, i hope when i get to C++ i will be able to understand this too
7:26 I believe you changed the behavior of the function slightly. In the case where top == bottom and both are even numbers, the revised code will return that number. However, in the original code, it would have returned 0.
edit: he points this out in the video
Why no assembly for Haskell?
CodeAesthetic is amazing
I looked at the example on GH and I think the function as written is just wrong. Aside from the obvious, #include "stdio.h" which should be #include if for no other reason than convention's sake, but the name and how it works don't seem appropriate to me. It's intended to calculate the sum of all the evens in the range, but why restrict the range in one way? I think it should be: int sum_evens_in_range ( int a, int b ) { int i; int sum = 0; if ( a > b ) { i = a; a = b; b = i; } for ( i = a + ( a & 1 ); i
I did some testing out of curiosity. The Rust implementation at 7:55 runs up to 12 times faster than a simple for loop implementation of the same thing when large random numbers are input. When a static range is the input both run at the same speed. I wonder why is this?
If compiling without the --release flag the simple for loop can be several times faster than the syntactic sugar version.
It's been pointed out before but there's a closed form expression for this :)
If x,y are even, the number of even numbers N in [x,y] is (x-y)/2 and then the sum of the them is Nx + N(N-1)
My implementation is a little nasty so it can handle the odd/negative cases but would appreciate if someone could clean it up! I count 22 instructions on gcc 12.2 -O3
auto calculate(int x, int y) -> int {
const auto N = (y - x) / 2 + std::abs(x * y + 1) % 2;
const auto e = x + std::abs(x) % 2;
return y