I think the best part of this video is that it checked my bias against even trying cursory, occasional C++ coding and learning it as a systems language. Every single preconceived bias, shattered! Thank you! Also, thank you for introducing me to the website; I find it much, much more intuitive than my current setup as a 'playground'. I love it, I love your channel, and I may be late, but I'm glad I rang the bell.
The true superpower of C++ is that it is SO deeply complex that you can still drop your daily “well akshually”s even if everyone around you has been working in C++ for more than two decades 😂
After declaring the input variable `constexpr`, you can also leave the functions as `constexpr` instead of `consteval` and the assembly still breaks down to one line.
Yeah, it's really just about forcing a constexpr context; making it all consteval succeeds at that, but isn't the way to go imo. Those functions look like they should be constexpr and be callable at both compile time and run time.
@@QuickNETTech @robertfrysch7985 I meant to show that you could revert those constevals and just use them as a tool, but forgot. It is a great point though : )
Some assembly instructions are way more expensive in terms of speed than others. For example, summing integers is several times faster than making a syscall. What I want to say is that fewer instructions doesn't always mean faster.
Funny thing: you don't need consteval at all, because marking vec as constexpr will force the compiler to evaluate its value at compile time anyway! So in theory it would be perfectly possible to implement something similar in C++17 assuming the search functions themselves are constexpr
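A minimal sketch of the point above (the names here are illustrative, not the video's actual code): with a plain constexpr function, initializing a constexpr variable alone is enough to force compile-time evaluation, and this works in C++17 with no consteval in sight.

```cpp
#include <array>

// Plain constexpr search data and function, no consteval anywhere.
constexpr std::array<int, 25> primes{2,  3,  5,  7,  11, 13, 17, 19, 23,
                                     29, 31, 37, 41, 43, 47, 53, 59, 61,
                                     67, 71, 73, 79, 83, 89, 97};

constexpr bool is_prime(int n) {
    for (int p : primes)
        if (p == n) return true;
    return false;
}

// A constexpr variable *requires* a constant expression initializer,
// so the call below must be evaluated by the compiler.
constexpr bool forty_three = is_prime(43);
static_assert(forty_three, "evaluated at compile time");
```

The same `is_prime` remains callable on runtime values, which is exactly why leaving it constexpr rather than consteval is the more flexible choice.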
Another superpower that C++ has is that by using templates and concepts, you can use compile time polymorphism, by making your interfaces as concepts and just passing a class that conforms to the concept
Almost every language has compile time polymorphism
@@climatechangedoesntbargain9140 Having a class derived from a class derived from a class, to 20 levels of nesting, is not the same as having a bunch of concepts that can be implemented without nesting and passed by name as if they were the actual interface. Only C++ has that.
@ Compile-time polymorphism is the opposite of inheritance, which is runtime polymorphism. Concepts are just like traits in Rust, where they are actually checked for conformity.
C++ has had "compile time polymorphism" for seemingly ever; it's called static polymorphism, or the Curiously Recurring Template Pattern (CRTP), which does have some limitations. I'm only fairly sure I understand what OP is describing, but I'm pretty sure it's a thing called duck typing, which isn't only available in C++, though the way it's done here might be one of the nicer ways. I don't really know anything but C/C++, so I can't say whether only C++ has it, but I'm fairly sure other languages can do it?
Type classes in Haskell (and possibly other FP languages) and traits in Rust should be about the same thing, in principle, although I'm not too familiar with either language (and yes, the previous user referred to compile time polymorphism, I think he's right)
@@CyberDork34 small correction: a constexpr function must merely be _runnable_ at compile time. Whether the compiler actually runs it there is a separate question; the standard allows it not to. (consteval, by contrast, does guarantee compile-time evaluation.)
Any language with metaprogramming should be able to get to something like this. The Lisps come to mind right away for obvious reasons, but they are not all compiled, and I'm assuming your amazement is mostly with the minimal assembly output as opposed to the actual... I'll call it "beta reduction".
The thing is that it allows the compile-time environment to run like the runtime environment without costing the runtime anything. Very few languages have this in such a straightforward way, where you can use the plain language itself with one extra keyword to generate compile-time values. I don't believe a single bytecode language has it, and even many natively compiled languages don't have it either.
@@Spartan322 I highly suggest "Structure and Interpretation of Computer Programs" for a full treatment of the subject, as I don't really have the time and space to get into how and why you're uninformed and why I said the Lisps are an obvious choice. I'll simply say that computing is not what you think it is, and the distinction between compile time and run time is arbitrary in a language like a Lisp, where there is an interpreter / compiler built into the language as a callable function. It's kinda like having a JIT compiler that you can call on a string at "runtime".
@@andrueanderson8637 That's excessively rude and incorrect. I've put in my time studying and writing languages before. First off, runtime inside a compiler is still runtime; there is a distinction between compilation and runtime execution for every language, and that's simply unavoidable. Even inside the compiler there is still a compilation phase that then executes the result, even if it doesn't output a file and instead directly executes the instructions. That's still compile time. The only real way to write an interpreter or a JIT compiler is to compile something before you run it; even if you don't compile the whole thing at once, an instruction still has to be compiled before it can execute.
@@Spartan322 I disagree; it wasn't meant to be rude, it's just hard to read tone via text. Again, there is no difference between runtime and compile time in a Lisp. It's clear you don't understand how something like the reader process in Clojure works, and that's absolutely okay; that's not necessarily a fault or something to be ashamed of, and I'd guess most people are in the same boat. All I'm trying to get across is that what you're saying is not strictly correct, and programming and code execution is much more interesting and varied than that. Think of how your operating system works. Think of the fact that code and data are loaded into memory in the same (binary) format. Think of what "compile" actually means.
ANSI Common Lisp can also run functions at compile time. They're called macros. Technically it's at macro expansion time. There are also reader macros so you can change the actual syntax of the language. That said, it is very difficult to write a Lisp program with the performance of C++. Also C++ has a much larger community backing it. So regardless of the warts, C++ is probably the best language to know well.
@@techtutorvideos It's been a while since I've used SBCL. I used it with Emacs + SLIME. It was awesome when I used it. I imagine it's way better now. It was faster than OpenMCL and way, way faster than Clisp. You certainly can make fast code with it.
What's funny is that he has included Lisps in videos before, but in this one he ends up saying things like "this is paralleled by no other language"... there's a Grammarly article still online about how some of their Common Lisp macros made SBCL struggle to expand them, because they would explode into thousands of lines.
@marcsfeh Hell no, compile-time execution is a godsend when you need to generate lookup tables; that's what I mostly use it for. There are other brilliant uses, like in CTRE to construct a regex at compile time, or in fmt to verify the validity of a format string. It is all limited only by your imagination.
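The lookup-table use case mentioned above can be sketched in a few lines (the squares table here is just an illustration; any pure function of the index works the same way):

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t N = 256;

// Build the whole table inside a constexpr function.
constexpr std::array<unsigned, N> make_squares() {
    std::array<unsigned, N> t{};
    for (std::size_t i = 0; i < N; ++i)
        t[i] = static_cast<unsigned>(i * i);
    return t;
}

// Baked into the binary's read-only data: zero runtime init cost.
constexpr auto squares = make_squares();

static_assert(squares[12] == 144);
```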
@@germanassasin1046 I built a constexpr string hasher so I could stop rehashing strings at run time, among other things, often for processing a string literal at compile time instead of run time. It's not quite limited by your imagination; it's more limited by what you know at compile time. If you know everything, you can do it at compile time; if you don't know something because it's a run-time value, then obviously you can't constexpr that.
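A constexpr string hasher in the spirit of the comment above (this is a minimal FNV-1a sketch, not the commenter's actual code): literals hash at compile time, runtime strings go through the very same function.

```cpp
#include <cstdint>
#include <string_view>

// FNV-1a: simple, constexpr-friendly 64-bit string hash.
constexpr std::uint64_t fnv1a(std::string_view s) {
    std::uint64_t h = 14695981039346656037ull;  // offset basis
    for (char c : s) {
        h ^= static_cast<unsigned char>(c);
        h *= 1099511628211ull;                  // FNV prime
    }
    return h;
}

// Hash of a literal is folded into the binary as a constant...
constexpr auto kConfigKey = fnv1a("config.key");

// ...and the same function still handles runtime std::string_views.
static_assert(fnv1a("config.key") == kConfigKey);
```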
Imo compile-time programming goes really beyond just performance. It can be used to make code more secure by creating even stricter requirements at compile time.
For input that is known at compile time, is_prime is evaluated at compile time. But in the actual test, where the input array isn't known, it is still a loop. At best it could be a binary search, if find_if and find_last_if could detect that the primes array is sorted. But by substituting the int[25] array with a bool[100] one, you can turn is_prime into O(1) for any input, without any constexpr. The other way is to test divisibility by 2, 3, 5 and 7, since those are all the possible minimal divisors for numbers less than 100. Though with the input limit of 10^5 numbers, such performance considerations are not strictly necessary; it will probably work just fine even with a generic textbook implementation of is_prime.
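The bool[100] substitution described above can be sketched like this (names are illustrative); the table itself can still be generated at compile time, but the O(1) query works for runtime inputs too:

```cpp
#include <array>

// Compile-time sieve of Eratosthenes over [0, 100).
constexpr std::array<bool, 100> make_sieve() {
    std::array<bool, 100> p{};
    for (int i = 2; i < 100; ++i) p[i] = true;
    for (int i = 2; i * i < 100; ++i)
        if (p[i])
            for (int j = i * i; j < 100; j += i) p[j] = false;
    return p;
}

constexpr auto kIsPrime = make_sieve();

// One array load per query, regardless of the number of primes.
constexpr bool is_prime(int n) { return kIsPrime[n]; }

static_assert(is_prime(97) && !is_prime(91));
```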
Interesting. Better performance? But you are only executing a single test case for its already determined result, i.e. these input values are fixed, which is why the entire solution can cascade through constexpr. This means that the compiler *is* the interpreter for this one case alone. So then how long does it take to compile? That is in effect your execution time for this one case. If you want this to be a general solution for any set of input values, read in from the user or file, could you still do it this way?
Yeah exactly, if the result is predetermined, why not just replace all the lines of code with the result as a single constant? I'm only a student, but I can't see a use case for this, considering programs are meant to be dealing with varying inputs
@@andrewtran9870 it's true that this example isn't super impressive, but there are uses for compile-time computation. Look at the implementation of println: it's a type-safe C++23 replacement for printf and cout. The main problem with printf is that compilers have to verify that the user correctly matches %d to an int, %s to a null-terminated C-string, etc. println takes variadic arguments and, at compile time, matches the {}'s in println("Hello {}, number {}", "world", 1); to "world" and 1 respectively, with type safety. Format string vulnerabilities (mismatched printf format specifiers, which cout and println fix) are a weirdly common problem, though I don't know why, because any major C compiler should warn you about them.
@@andrewtran9870 It's more about the idea behind it. For example, if you have calculations from config headers, then you can do all the config-related calculations at compile time, improving performance. There are plenty of cases where you have values known at compile time that you do calculations with, but you keep them separated for readability or to make them easier to modify.
It happens from time to time that things need to run inside a function, or a set of functions, because their values change from time to time. It's not that rare to have functions that use constant values as parameters, or, for example, a function that runs several of those in different sections. Putting in the result manually by hand for each one is never desirable.
@@andrewtran9870 lookup tables, sine/cosine signals, cryptographic keys, hardware configurations or pretty much any data you need generated for the program to work. These are very useful in embedded systems
This came out of a leetcode question for which the test cases are not known at compile time so in that context, constexpr would only be useful to generate the prime table rather than hardcoding as literal values. For that case, the most interesting solution is the 100 bools one. By stuffing the bools into individual bits of a 128 bit SIMD register you can get them out again with a short instruction sequence of shifting, masking and moving to a general purpose register. Presumably C# would have the advantage there with its native SIMD types although we do have `std::experimental::simd` and various non-standard libraries and architecture and compiler-specific intrinsics.
You know, it's very easy to forget that you can use namespaces like that. [That makes using ranges and chrono significantly easier.] Well, I mean the calculations do happen; they just happen at compile time instead.
Nim has thorough compile-time evaluation of most of the language. Not just const, but static blocks and all. You can slurp entire files into a variable and make the byte string a compile time binding in the executable. It's ready for anything you want to do.
In the Jai language from Jonathan Blow, compile-time values seem very awesome: it seems you can write `#run game();` and the score of the played game is returned at compile time, without any verbosity or special expertise, just the #run keyword. But Jai is still in alpha release.
hmm.. sure, if your input is constexpr then it can all run at compile time. But if it's not, then is_prime is still O(N) even if you mark it constexpr. I stick by my comment from the previous video: an array of 100 bools with direct random access to the answer.
If you have N primes, yes... but if the array you check against covers the input range, then it is O(1). In other words, even if the set of numbers you want to check for primality is a million times bigger, every call to is_prime is still constant time.
Another way is to check divisibility by 2, 3, 5 and 7, since you only need to check primes up to sqrt(x). From my testing it has performance similar (~10% difference) to the lookup table and is much easier to write.
Nice video and follow-up. I would like to add that you can have the array of prime numbers generated at compile time too, instead of hard-coding them as literals (if you don't mind slower compilation). Maybe for prime numbers it's overkill and unnecessary, but in general it's great, because you can get rid of magic numbers that make the code hard to reason about later and replace them with consteval algorithms that compute those numbers, which are surely more self-documenting.
Jason Turner explains well in his videos all the ways to run functionality at compile time so as not to waste runtime. So the best way is to write all functions as constexpr, which is not a guarantee of compile-time computation, just a hint for the compiler, and have only one consteval function that takes a constexpr function as a parameter.
Scala 3 can run any code at compile time. In fact I think it provides three different ways of running code at compile time, although one of those - the type system, is ~decidable rather than arbitrary code.
As a longtime Java programmer, but 30-year C++ dabbler (Stanley Lippman's C++ Primer, 2nd Edition, 1994), I can say that almost all the other programming languages were created because only a small minority of programmers have the mental endurance, working memory, detail orientation, and conceptual ability to work with C++. It is truly the master programmer's programming language.
Hi! Thanks for the great demonstration of what constexpr does by showing the resulting assembly code! This is the first time the functionality of constexpr has finally sunk in for me. By "Does constexpr make is_prime O(1)?" I totally meant "will it be handled at compile time?". I should have been clearer about my intention so as not to confuse other commenters, but you got it right anyway. ^_^
The arrow syntax is actually required in order to use a decltype return type that refers to the function's arguments, although its popularity outside of this usage (i.e. for the sake of aesthetics) indicates that the `Type name` syntax is kinda shit lol
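The case where trailing return syntax is genuinely required looks like this (a C++11-era sketch): the return type names the parameters, which are not in scope yet at the front of the declaration.

```cpp
// `a` and `b` only exist after the parameter list, so the return
// type must trail: `decltype(a + b) add(A a, B b)` would not compile.
template <typename A, typename B>
constexpr auto add(A a, B b) -> decltype(a + b) {
    return a + b;
}

static_assert(add(1, 2.5) == 3.5);  // deduces double
```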
Can you give an explanation of what problem you're solving by using "auto" in your function signatures? When I look at your is_prime() function from C++23 in your previous video, I have no clue whether it returns an int as the arrow syntax would suggest, whether it returns the boolean result of !=, whether it returns std::ranges::find()'s return value, or the iterator primes.end(). (I'm guessing the -> int was supposed to be -> bool, but still, the question stands of why auto is even necessary.) To be honest, I don't even know if it is a function or a lambda. And if it's a lambda, why? Lambdas' whole point is that they're anonymous. I can't help but feel that putting auto in your function signatures does nothing but make your already complex code even more unclear. Excited to hear your reply!
Trailing return type syntax is basically superior in every way to the traditional syntax and has been in the language for over a decade. It's required in some places, and some people prefer using it everywhere for consistency. At this point you really can't complain about it being unclear; that's just a you problem...
I am under the impression that his use of trailing return types is all about stylistic self-consistency within his own code bases written in many different languages, and not so much about C++ itself. Watch the following video of his C++ talk, timeframe around 25 minutes: "Conor Hoekstra - Concepts vs Typeclasses vs Traits vs Protocols - Meeting C++ 2020". If you are genuinely curious about the advantages of trailing return types, then Stack Overflow's "Advantage of using trailing return type in C++11 functions" is your answer.
@@orbital1337 "Is basically superior in every way" is not a very descriptive or argumentative reason. Why is it better? What advantages does it provide, that learning the basic syntax of a C function can't? For the record, I've never seen anyone use it in C++ until I saw this video.
@@testtest-qm7cj So from what I can gather, people do it to have consistency with lambda syntax. I don't know if I'm getting something wrong, but lambdas aren't really functions, no? They're anonymous and aren't meant to be used as functions, which is probably why they have completely different syntax from regular functions. It just seems a bit weird to add consistency between two things that are meant to be separate.
@@JFrancoe Lambdas don't have to be anonymous; they're basically functions with a scope. You can do `auto foo = /* your lambda */;` and call foo as if it's just a regular function.
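A tiny example of the named-lambda point above (and since C++17 a non-capturing lambda's call operator is implicitly constexpr, it even works in constant expressions):

```cpp
// Bind a lambda to a name and it behaves like any other function.
constexpr auto is_even = [](int n) { return n % 2 == 0; };

static_assert(is_even(4));
static_assert(!is_even(5));
```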
Technically speaking, when you comment out all other code and just return the result of `is_prime(43)`, it's not guaranteed that `is_prime` is calculated at compile time. As a matter of fact, if you turn off optimizations (use -O0 instead of -O3 on GCC), you'll see that you get a normal function call, probably also depending on the compiler version. You actually need to force the compiler to calculate it at compile time: you can place the result into an automatic constexpr variable, or better, a static constexpr variable. And yes, there will be a difference in the generated assembly between those two cases, which could be an interesting topic to cover in the future. Great video though, keep up the good work!
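The distinction above in a minimal sketch (names are illustrative): constexpr on a function is only permission, while a constexpr variable demands a constant expression regardless of the optimization level.

```cpp
constexpr int square(int x) { return x * x; }

// May be an actual call at run time (e.g. at -O0): constexpr on the
// function alone does not force anything here.
int square_runtime(int v) { return square(v); }

// Must be computed by the compiler: the initializer of a constexpr
// variable is required to be a constant expression, even at -O0.
constexpr int forced = square(7);
static_assert(forced == 49);
```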
Yeah, I'd say at least 50% of the praise for constexpr doesn't stop to check: hey, if we don't write constexpr, is this a valid optimization anyway? Also lots of bug reports.
Shove this video in the faces of those who say C++ is a mangled mess of a language and basically unreadable. Yes, it is a mess. Yes, it is sometimes difficult to read. But still, the power it gives you is top notch.
It doesn't change the fact that it is verbose and overly complex at the cost of developer experience, though. If you need something this powerful, sure, C++ is good. But for most day-to-day programming this stuff is not relevant enough to sacrifice the ease of use and speed of development of other programming languages. Yes, it is very powerful, but that does not make it the right language for everything.
@@dietibol That's very fair. The problem I've found with today's programming ecosystem in general is that many advocates of "modern" programming don't realize it, or don't care at all. Programming languages are tools at the end of the day. Use the right tool for the right problem, peeps. It's quite simple.
Isn't this only valuable for fast startup? (Since if this was run repeatedly, it should just be cached?) And if so, aren't there other tools for fast startup in other languages that leverage a previous run of the program?
Results of function calls don't get cached in most languages unless you tell them to. The advantage of constexpr and other languages' compile-time evaluation schemes is that you can use them all over the code with different results. Sure, everything you do with constexpr you could also do by pre-calculating results and hard-coding the values into the program, but that quickly becomes unreadable, or very annoying if you need to change some input value.
@@sinom Both constexpr and caching require you to tell the program to do it. So the question remains: the only benefit of this vs caching seems to be fast startup times, right?
@@Dogo.R You can't compare these things. constexpr and consteval are just markers saying that something can be evaluated before the program starts, and those values must be the same on every run. Caching is a very different case: you save runtime values to avoid not only computation but IO operations too.
@@vas_._sfer6157 Saying I can't compare them, and then explaining again what constexpr does like the video did, does not explain how the problems they solve are different. Nor does it directly address the question I asked.
No, it's useful for anywhere you might need a value to be available as a compile time value (template parameters, array bounds, things like that). Or if you need to generate a lookup table, or some parsing logic defined in a grammar known at compile time, etc. Lots of possibilities.
If you have a lot of numerical code that's going to be constant every time you run it and requires some pre-processing, then it's going to save you some time, but I've got to be honest, I don't think this would have helped me in any code I've personally written, since just about anything with a parameter needs to be evaluated at runtime.
Well, I think there is a reason why "constexpr everything" has been one of the hot topics at many C++ conferences for the last few years. Speakers of many of those talks argue that ordinary applications also contain surprisingly many redundant static data generations done at runtime, hence adopting constexpr as much as possible is beneficial. But many questions from the audience of those talks were essentially the same: "does my application really have such cases? / is this relevant to me?" So I guess either constexpr is not for everyone, or realizing you can benefit from it is somewhat difficult in general.
Is there a real world reason to do this vs just knowing it's 3 at the end? I'm not being cynical. I think it's neat that C++ can do this. I'm genuinely curious. Like if I worked at a company that was using the output of this code for something - shouldn't I just provide 3 as the input to the next program since it's always going to be 3 after it's compiled?
This video just popped up in my recommended vids and I understood NOTHING!!! But I find it curious that a program would work "at compile time", whatever that means. I'm just starting my journey in CompSci; I'm currently doing an internship. Once I'm done with it and with fiddling with high-level languages (Python and JS), I want to try the real deal with low-level languages. I thought about doing C, in which I have acquired the basics (print, input, conditionals, loops and functions), because it has withstood the test of time, but C++ also tempts me a little, though the mention of 4 different kinds of C++ disorients me a bit. One of my friends also suggested Go, which he seems to be fond of, but if I were to learn a modern low-level language I'd be much more interested in Rust, given the hype around it. Which language do you guys suggest I start with?
TypeScript (in something like Vite): once you understand objects & functions you can do basically everything, it runs live in your browser, results are visual, and you get nice autosuggestions. After that, Rust will be a good switch for learning lower-level mechanisms. After that, if you want some paradigm shifts, try SML, Prolog & Clojure to see what programming looks like from different perspectives. P.S. Go has its quirks; I don't think it's the best language to start with.
I think learning C for its simplicity, and for how much it teaches you about what is happening under the hood of all the abstractions, before learning C++ for things like RAII and templates is a good path, but I might be biased. If you choose that path, take care to really learn modern C++ and not become another "C with classes" programmer. On the other hand, not everyone in SciComp needs to become a low-level wizard. Julia is very popular in SciComp if you want a language purpose-built for those kinds of applications. Python can also get you very far if you know its shortcomings (e.g. avoid loops) and learn how to use its libraries correctly.
Yeah, I'd say check out Jai if you can. Its compile-time evaluation is pretty crazy: essentially anything can be compile time or runtime, without any special distinction in syntax or functionality. So for one of the pathological examples, you can run an entire game with graphics, sound, etc., all at compile time.
@@phusicus_404 yeah it does sound pretty weird when you first hear it, but it's not actually that complicated. Just the language compiles in two phases. First it figures out what is compiletime, and compiles that part to bytecode which then gets invoked immediately while compiling. Then when all the compiletime stuff is done running, the remainder of the code gets compiled down to an executable, with any compiletime results available for use in the runtime stuff. No one would actually run a game that way at compiletime, but the point was just to show that compiletime and runtime code are completely interchangeable. Honestly I think the whole thing is pretty genius, completely removes the need for a separate macro language, while being just as powerful, if not more-so.
I mean, this is basically just offloading the calculations to the compiler's "run-time". So in that, this is no different to Python or any other interpreted language
Maybe from the perspective of staring at two black boxes, but that's not how constexpr/consteval work; it's the compiler actually doing the work of checking and calculating everything, not just firing off some interpreter subprocess to return the result. Honestly, building a C++ interpreter into the compiler might have been easier and led to better compatibility 13 years ago, but instead we have only partial compatibility and keep getting features added to compile-time coding in C++.
All good, but the standards committee chose such terrible keywords, as always. They could have called them compile_time and compile_time_only (or compile_time_forced), but no... constexpr and consteval, which are guaranteed to baffle users the first time they come across them.
I thought C++ superpower was employability because no one wants to touch that language 😅 Jokes aside, nice video, I had no idea of the difference between constexpr and consteval
> constexpr and TMP in general in C++ is paralleled by no other language. If you're going to make such a strong statement, perhaps you should qualify it by comparing against strong contenders like Zig's comptime and Julia's metaprogramming, no? Constexpr is pretty neat, but e.g. soagen for C++ vs std.MultiArrayList in Zig really highlight the shortcomings of TMP in C++23.
Would you care to elaborate on your last sentence? Since I have practically zero Zig knowledge, I am genuinely interested in what shortcomings you are referring to.
@@testtest-qm7cj Both soagen and std.MultiArrayList serve the same purpose: given a struct type, metaprogrammatically generate a corresponding struct-of-arrays (SOA) type that has the same interface as a normal array-of-structs (AOS) type. In C++ pseudocode terms, given a struct `struct Foo { int x; float y; };`, automatically generate the type `struct FooSOA { std::vector<int> x; std::vector<float> y; void push_back(const Foo& foo) { ... } /* ...other std::vector methods */ };`. In Zig, std.MultiArrayList does exactly this for arbitrary structs using static reflection and comptime (compile-time programming), and it's all regular Zig code. Soagen does the same using TMP, but also requires an external "generator" (hence the name soa"gen"). This generator takes some C++ code as input and produces a C++ source file containing the new SOA type, which you then have to compile with your project. It has to do this because TMP alone is not sufficient. IIRC you can use soagen with only the TMP part, but that makes big compromises on ergonomics. Perhaps in the future, when static reflection makes it into C++, a generator will no longer be required.
@@climatechangedoesntbargain9140 By crates I assume you mean Rust crates? My understanding is that Rust's proc macros are much more powerful than C++ macros. Idk maybe you could try doing this using a combination of C++ macros and TMP. Boost has a metaprogramming library called Hana that uses both TMP and macros. IIRC it has some way of using both to iterate over members of a struct. But I'm not sure if it can be used to go all the way to implement the full functionality of std.MultiArrayList or soagen.
Jumping through all these hoops just to get more optimization seems insane. Why can't most of this stuff be consteval by default? (Except for library functions, of course.)
These kind of compiler optimizations can and often will be done automatically without you needing to do anything. Consteval is more about forcing this kind of optimization through stricter requirements.
There are several reasons you don't want this to be the default. First, constexpr is a contract. If you declare a function constexpr, you're telling all your callers that it's OK to use the result in a compile-time context (e.g. a template parameter or an array size), and that you're willing to support that use case. In a parallel world where functions are constexpr by default, callers would be able to use the results in a compile-time context whether or not you intended that. You could easily provide an implementation that is constexpr only by accident, and then break them in the future. Another reason is that this stuff is not free: it has a compile-time price which you may have to pay even if you never try to run the function at compile time, because the compiler still has to figure out whether any sub-pieces of the function can be calculated at compile time, and so on. If you never intend for the function to be called at compile time, you don't have to pay this price, and your compilation may be faster.
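The "constexpr is a contract" point above in a minimal sketch (names are hypothetical): once a function is constexpr, a caller may legally depend on it in a compile-time-only context, so removing the constexpr later is a breaking change.

```cpp
#include <cstddef>

// Declared constexpr: callers may now use it where a constant
// expression is required.
constexpr std::size_t table_size() { return 64; }

// This caller depends on the contract; dropping constexpr from
// table_size() would make this line fail to compile.
int counters[table_size()];
```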
I mean, pre-calculating stuff at compile time is kinda cheating, but I can see how it could help parts of a bigger project where pre-calculating the result isn't an obvious thing.
I would rather have loved to see this turned not into the one-shot compiled binary, but into something useful. Even though it is beautiful that you can turn everything into constexpr and get a couple lines of assembly, seeing this for the 20th time is not too novel.
What is the *practical* use of this, other than precomputing constants with code? (Yes, I know there are areas where it may be invaluable: embedded systems, signal, audio and video processing, cryptography, etc.) Maybe some crazy soul will write backpropagation that trains a neural network at compile time, and compile to machine code only the inference part with the computed weights ;)
It's great for simplifying metaprogramming, so you can write C++ code that reads like C++ instead of a particularly verbose dialect of Haskell. Any time you need a lookup table, this kind of technique can be useful.
@@isodoubIet I'm not sure about the "meta" part. In Zig it is clearly the case that comptime replaces templates, but here, maybe I'm blind, but I can see only a change in the moment when the code is executed. Yes, I agree that this possibility is exponentially better than forcing templates to do things they shouldn't in write-only scripts.
@@AK-vx4dy I'm sure. In fact, constexpr makes zero guarantees that the code will be executed at compile time, although it's a reasonable expectation that it would be -- in a debug build that might not be what you want, however, and that behavior is still conforming. Making metaprogramming easier is the primary intended use of constexpr.
@@isodoubIet For me meta means "programs which write programs", so that's why I wrote I'm not sure. Templates do that. Constexpr alone doesn't. It replaces programs with precomputed results.
@@AK-vx4dy I know what it means, which is why I responded that yes, I am sure. constexpr replaces a whole host of code that would've had to be template metaprograms. "It replaces programs with precomputed results." Incorrect, that's not what it does; I explained that in the previous comment. It allows values to be calculated in ordinary C++ code that are acceptable in contexts where constant expressions are required.
Oh, by the way, I don't think it is accurate that constexpr makes it O(1) - otherwise we would have had O(1) BFS/DFS since the 90s with template metaprogramming, right? 😅 I think the computation is shifted from run time to compile time, i.e. the "O" - linear for this problem, I guess - "happens during compilation".
A few languages have CTFE (compile-time function evaluation/execution) in some form. You mentioned Zig, but others, like D, have had it for a very long time.
I'd be interested in a comparison of compile-time computation between C++, Rust and Zig. I know that Rust's const fn is pretty powerful, but I wouldn't be surprised if it was still behind C++
Comparing this approach with metacircular evaluation is interesting. Constexpr and eval becomes kind of a hard-coded instance where interpretation is dictated by the language specification of C++ rather than being implemented in the program. The huge amount of design work put into trying to make constexpr and consteval align with existing C++ is the real superpower in my mind. Same as not having to think about interpretation in Zig. Otherwise (in?)famous MCE's in other languages already have this handled, and are more flexible.
Just saw that as of today, rust has stabilized inline const (forced compile time evaluation) and will be added to the next release so i guess this video is out of date already!
"Does constexpr make is_prime run at compile time... and the answer to that is... yes" The answer is no. constexpr is like a hint, which the compiler may ignore. The standard merely says that if you annotate a function with constexpr, the function _may_ be evaluated at compile time. There are no guarantees. constexpr is a metaprogramming tool: compose a bunch of constexpr operations and the result is an honest-to-goodness constant expression that you can use anywhere you might need one: template parameters, array bounds, etc. In that case, stuff would be evaluated at compile time, because it has to be. If you're merely using it as an optimization, the compiler may decide not to do it.
PS, the same applies to consteval; it'll run stuff at compile time if it feels like it. It probably will, but it could decide not to for whatever reason (e.g. you probably want it to run at runtime in debug builds so you can... debug it. This would be nonconforming otherwise).
Jai is also amazing for this, and even better than Zig. I can get a Win32 window with some basic graphics at compile-time in Jai lmao. Obviously there's no actual use there, but it just shows how capable it is.
C++ and Rust get compared a lot. Proponents of Rust often point out how fast Rust is, and I don't think they're wrong. The "normal" C++ solutions and the "normal" Rust solutions to the same code might often have Rust win. But with C++, going one or two levels of optimization deep is very easy- just enable a compiler flag or slap on some keywords to some functions and you already get a massive boost. But in Rust, you're going to have to do a lot of fiddling with custom data structures or unsafe blocks that the compiler won't easily reason about, so "one level of optimization" in C++ will easily beat "one level of optimization" in Rust, both in speed and ease of development.
rust works nearly the same way though? you can select different optimization flags and there's const contexts. it's pretty cool, i think you should check it out
Rust can also do compile-time calculations thanks to const functions. Templates in C++ don't even hold a candle to Rust's macros and generics; they are so much more powerful and more integrated into the language, whereas in C++ the templates seem almost glued on.
@@raykirushiroyshi2752 "Templates in c++ don't even hold a candle to rust's macros and generics" That's not true. C++ templates are more powerful and it's not even a contest.
I see the usefulness of this but at the same time I don't. What is the point of writing a program for something that has a constant answer? Why not just encode that answer into your program yourself as an actual const T something = ...; at that point? I do understand that you're basically able to let the compiler figure out the answer, so you skip the manual labor of going through the calcs or writing a small helper tool to calculate the constants and then manually embed them. However, any real program does not have the luxury of knowing all variables at compile time and won't be able to output what I would essentially call a "fake program" that literally just moves the answer into eax to return it. A real program has some sort of unknown element to it that will only be known at runtime, and depending on how deep that is rooted in the actual logic (say a bunch of other things depend on whatever that thing is), then none of those will be able to be evaluated at compile time, right? Because now it's a real function again that gets some arguments pushed or passed via registers. Don't get me wrong, I do like this, but I just fail to see how this is truly useful. If some of you have some good counterexamples, I'd appreciate it greatly.
A better example for this program would have been generating a lookup table for the primes at compile time since the size and requirements are known and won't change with the input. As you have suggested you could also just manually encode this kind of information but that is prone to errors and also if you have a requirement change in the future would mean manually redoing all the work yourself, whereas with consteval you just have to change a single variable
@@Squizell Thank you for your comment. Yeah, this is the one benefit I do understand. Being able to change the conceptual requirements in a simple manner through a single variable that dictates the rest, and letting the compiler embed all the static data for you correctly and automatically, seems nice indeed. I just have a hard time thinking of practical examples where I could make use of this. I can think of it as: anything that is generally expensive to compute but gets queried a lot with a finite amount of answers, or only a specific degree of precision needed (resulting in a finite set of answers), will benefit from a lookup table. Then again, how confident can one be that the requirements won't change at runtime under any circumstance, ever? This is where I'm stuck mentally. Did you ever use this technique for a project of yours?
Another use of constexpr I can think of is obfuscating string literals in your built binaries. I don't know why someone would want that, but it certainly is possible.
Boi, I got no idea about C++, but damn, I didn't know one print statement (and more likely the print include, for the most part) would expand to 4 THOUSAND lines of assembly code lmao
By declaring the array of integers as constexpr, you're getting rid of the entire point of the program. The goal is to create an algorithm to find the maximal difference between primes in an array, not to return the correct solution for one particular case. A practical application would take a client-provided array and return the correct answer, not to return 3 regardless of client input.
There is a reason that this wasn't done in the original video. This video is just an explainer for constexpr and not a solution of the original problem.
@@Spielix If this video was just meant to explain constexpr, then it did a poor job of that. I interpreted this video as a response to people saying that there was a better solution to the original problem through a different approach. If this video was meant to show that the original solution was a good choice, then reducing the program to int main() {return 3;} seems like a poor argument to me
@@sweetcornwhiskey People were arguing/misunderstanding what constexpr was doing/why the function was marked constexpr. This video was a reply trying to show what the effect of constexpr is or could be in a very artificial example. One can argue about how good a job the video did especially considering that consteval wasn't really needed to get the compiler to do everything at compile-time. But none of these videos is about producing the "ideal", optimized to perfection solution. That isn't what this channel is about according to what I have seen. The original video was showcasing C++ language and library evolution on a comprehensible but artificial problem using whatever style Connor finds the most elegant (which is highly subjective).
I think the best part of this video is that it checked my bias against trying even cursory, occasional C++ coding, and learning it as a systems language. Every single preconceived bias, shattered! Thank you!
Also, thank you for introducing me to the website; I find it much, much more intuitive than my current setup as a 'playground'. I love it, I love your channel, and I may be late, but glad I rang the bell.
The true superpower of C++ is that it is SO deeply complex that you can still drop your daily “well akshually”s even if everyone around you has been working in C++ for more than two decades 😂
I'm starting to love C++ 😆
Of all the languages, I still can't believe I chose C++ to be the one I use all the time. How did that happen ;-;
After declaring the input variable `constexpr`, you can also leave the functions `constexpr` instead of `consteval` and the assembly still breaks down to one line.
Yeah, it's just about forcing a constexpr context really. Making it all consteval succeeds at that but isn't the way to go imo; those functions look like they should be constexpr and be callable at both compile time and run time.
@@QuickNETTech @robertfrysch7985 I meant to show that you could revert those constevals and just use them as a tool, but forgot. It is a great point though : )
I believe you can assign the result of a constexpr function to a constexpr variable to verify whether it'll evaluate at compile time @@code_report
As a person with limited knowledge of C++, the drop from 4000 lines of assembly to 129 made my jaw drop!
this is case for pretty much every language. print functions are surprisingly expensive!
Some assembly instructions are way more expensive in terms of speed than others; for example, summing integers is several times faster than making a syscall.
What I want to say is that fewer instructions doesn't always mean faster.
Funny thing: you don't need consteval at all, because marking vec as constexpr will force the compiler to evaluate its value at compile time anyway!
So in theory it would be perfectly possible to implement something similar in C++17 assuming the search functions themselves are constexpr
Another superpower that C++ has: by using templates and concepts, you get compile-time polymorphism, by expressing your interfaces as concepts and just passing a class that conforms to the concept.
Almost every language has compile time polymorphism
@@climatechangedoesntbargain9140 Having a class derived from a class derived from a class to 20 degrees of nesting is not the same as having a bunch of concepts that can be implemented without nesting and passed by name as if they were the actual interface. Only C++ has that
@ compile time polymorphism is a contradiction to inheritance, which is runtime polymorphism.
Concepts are just traits in Rust - where they are actually checked for conformity
C++ has had "compile time polymorphism" for seemingly ever? It's called static polymorphism or the Curiously Recurring Template Pattern (CRTP) which does have some limitations. I'm only fairly sure I understand what's being discussed by OP but I'm pretty sure what's being described is a thing called duck typing which isn't only available in C++ but the way it's done might be one of the nicer ways. I don't really know anything but C/C++ so I can't really say if only C++ has it because I don't know, I'm just fairly sure other langs can do it?
Type classes in Haskell (and possibly other FP languages) and traits in Rust should be about the same thing, in principle, although I'm not too familiar with either language (and yes, the previous user referred to compile time polymorphism, I think he's right)
I feel like I just watched an existing C++ Weekly episode but with added peppy enthusiasm
const functions in Rust function similarly to constexpr functions in C++
@@13thk not really, consteval can only evaluate at compile-time while rust's const functions can be evaluated both at compile time or runtime
@@David_Box oh, ok my mistake
In C++ constexpr = can be used at compile time if possible, or at runtime. consteval = must be run at compile time
@@CyberDork34 small correction, consteval must be _runnable_ at compile time. Whether the compiler actually runs it at compile time is a separate question. The standard allows it not to.
I can do the same in my favorite language. I run it, get my answer. Then write a new program: return 3.
😂😂😂😂
BTW, both std::vector and std::string are usable in constexpr evaluation (starting with C++20), so you should be able to keep the vector.
Any language with metaprogramming should be able to get to something like this. The Lisps come to mind right away for obvious reasons, but they are not all compiled, and I'm assuming your amazement is mostly with the minimal assembly output as opposed to the actual... I'll call it "beta reduction"
yeah, beta reduction is a good name for constexpr and consteval behavior
The thing is that it allows the compile-time environment to run like the runtime environment without costing the runtime anything. Very few languages actually have this in such a straightforward way, where you can use the plain language itself with one extra keyword to generate compile-time values. I don't believe a single bytecode language has it, and even many natively compiled languages don't have it either.
@@Spartan322 I highly suggest "Structure and Interpretation of Computer Programs" for a full treatment of the subject as I don't really have the time and space to get into how and why you're uninformed and why I said the Lisps are an obvious choice. I'll simply say that computing is not what you think it is and the distinction between compile time and run time is arbitrary in a language like a Lisp where there is a interpreter / compiler built into the language as a callable function. It's kinda like if you had a JIT compiler that you can call on a string at "runtime"
@@andrueanderson8637 That's excessively rude and incorrect. I've put in my time studying and writing languages before. First off, runtime in a compiler is still runtime; there is a distinction between compilation and runtime execution for every language. That's simply unavoidable: even inside the compiler there is still a compilation phase that then executes the result, even if it doesn't output a file and instead directly executes the instructions. That's still a compile time. The only real way to write an interpreter or a JIT compiler is to compile something before you run it, even if you don't compile the whole thing at once; to execute an instruction, it still has to be compiled.
@@Spartan322 I disagree, it wasn't meant to be rude it's just hard to read tone via text. Again, there is no difference between runtime and compile time in a Lisp. It's clear you don't understand how something like e.g. the reader process in Clojure works and that's absolutely okay, that's not necessarily a fault or something to be ashamed about, I'd guess most people are in the same boat. All I'm trying to get across here is that what you're saying is not strictly correct and programming and code execution is much more interesting and varied than that. Think of how your operating system works. Think of the fact that code and data are loaded into memory in the same (binary) format. Think of what "compile" actually means
ANSI Common Lisp can also run functions at compile time. They're called macros. Technically it's at macro expansion time. There are also reader macros so you can change the actual syntax of the language. That said, it is very difficult to write a Lisp program with the performance of C++. Also C++ has a much larger community backing it. So regardless of the warts, C++ is probably the best language to know well.
@@techtutorvideos It's been a while since I've used SBCL. I used it with Emacs + SLIME. It was awesome when I used it. I imagine it's way better now. It was faster than OpenMCL and way, way faster than Clisp. You certainly can make fast code with it.
@@techtutorvideos a couple orders of magnitude can be 10 000x to 0.0001x.
Doesn't SBCL produce fairly performant executables
@@jawad9757 When I was using it, it was able to produce pretty fast code.
What's funny is that he has included Lisps in videos before, but in this one ends up saying things like "this is paralleled by no other language"... there's a Grammarly article still online about how some of their Common Lisp macros made SBCL struggle to expand them because they would explode into thousands of lines
at the end, you said maybe zig can do the same, I wonder, I heard that it is very elegant but not as powerful. Can you maybe dig into this?
Zig can run any arbitrary code at compile time as long as it doesn’t do any IO or interact with extern functions, so I guess it’s the same
@marcsfeh Hell no, compile time execution is godsend when you need to generate lookup tables, that's what I use it mostly for. There are other brilliant usages like in ctre to construct regex at compile time, or like in fmt to verify validity of format string. It is all limited only by your imagination
@@germanassasin1046 I built a constexpr string hasher so I could stop rehashing strings at run time, among other things, often around processing a string literal at compile time instead of run time. It's not quite limited to your imagination and more so limited by what you know at compile time: if you know everything, you can do it at compile time; if you don't know something because it's a runtime value, then obviously you can't constexpr that.
in zig you can put code inside a "comptime {}" block, if the code can't be run at compile time, a compile error happens.
@marcsfeh it's how Zig does generics, which I would classify as *very* useful for many problem domains.
Imo compile-time programming goes really beyond just performance. It can be used to make code more secure by creating even stricter requirements at compile time.
For input that is known at compile time, is_prime is evaluated at compile time. But in the actual test, where the input array isn't known, it is still a loop. At best it could be a binary search, if find_if and find_last_if could detect that the primes array is sorted. But by substituting the int[25] array with a bool[100] one, you can turn is_prime into O(1) for any input without any constexpr. The other way is to test divisibility by 2, 3, 5 and 7, since those are all possible minimal divisors for numbers less than 100. Though the input limit of 10^5 numbers means that such performance considerations are not strictly necessary. It will probably work just fine even if you use a generic implementation of is_prime from some textbook.
Nice video! I think limitless representation is another of C++'s superpower.
Interesting. Better performance? But you are only executing a single test case for its already determined result, i.e. these input values are fixed, which is why the entire solution can cascade through constexpr. This means that the compiler *is* the interpreter for this one case alone. So then how long does it take to compile? That is in effect your execution time for this one case. If you want this to be a general solution for any set of input values, read in from the user or file, could you still do it this way?
Yeah exactly, if the result is predetermined, why not just replace all the lines of code with the result as a single constant?
I'm only a student, but I can't see a use case for this, considering programs are meant to be dealing with varying inputs
@@andrewtran9870 it's true that this example isn't super impressive, but there are uses for compile time computations.
look at the implementation for println - it's a type safe C++23 replacement for printf and cout. the main problem of printf is that compilers have to verify that the user correctly matches %d to an int, %s to a null terminated C-string, etc. println takes variadic arguments and at compile time, matches the {}'s in println("Hello {}, number {}", "world", 1); to "world" and 1 respectively with type safety.
Format string vulnerabilities (mismatching the printf format specifiers, which cout and println fix) are a weirdly common problem, though I don't know why, because any major C compiler should warn you about them.
@@andrewtran9870 It's more about the idea behind it. For example, if you have calculations from config headers, then you can do all the calculations relating to config at compile time, improving performance. There are plenty of cases where you have values known at compile time that you do calculations with, but you keep them separated for readability or to make it easier to modify.
It happens from time to time that you have things which need to run as a function, or a set of functions, because their value changes from time to time. It's not that rare to have functions that use constant values as parameters.
Or for example a function that runs several of those in different sections. Putting in the result manually by hand for each one is never desirable.
@@andrewtran9870 lookup tables, sine/cosine signals, cryptographic keys, hardware configurations or pretty much any data you need generated for the program to work. These are very useful in embedded systems
This came out of a leetcode question for which the test cases are not known at compile time so in that context, constexpr would only be useful to generate the prime table rather than hardcoding as literal values. For that case, the most interesting solution is the 100 bools one. By stuffing the bools into individual bits of a 128 bit SIMD register you can get them out again with a short instruction sequence of shifting, masking and moving to a general purpose register. Presumably C# would have the advantage there with its native SIMD types although we do have `std::experimental::simd` and various non-standard libraries and architecture and compiler-specific intrinsics.
You know, it's very easy to forget that you can use namespaces like that. [That makes using ranges and chrono significantly easier.]
Well, I mean, the calculations do happen. They just happen at compile time instead.
@code_report 7:05 "Tell me a language that's got this capabilities".
Scala3's inline keyword does the same. Very powerful.
Regarding the end of the video, Nim has 'const' variable declarations that act like consteval
Nim has thorough compile-time evaluation of most of the language. Not just const, but static blocks and all. You can slurp entire files into a variable and make the byte string a compile time binding in the executable. It's ready for anything you want to do.
@@nERVEcenter117 Yes, Nim is really nice for this kind of stuff and awesome in general!
@@nERVEcenter117 Love Nim. Wish it was used more. Great lang.
In the Jai language from Jonathan Blow, compile-time values seem very awesome: it seems you can write `#run game();` and the score of a fully played game is returned at compile time, without any verbosity or much experience needed, just using the #run keyword.
but jai is still in alpha release
Not taking jai seriously until it's fully open source
hmm.. sure, if your input is constexpr then it can all run at compile time.
but if it's not, then is_prime is still O(N) even if you mark it constexpr
I stick by my comment from previous video: An array of 100 bool with direct random access to the answer.
and perhaps make a function to construct the array that is consteval and probably get the best of both worlds~
@@GiovanniCKC Yup exactly that!
If you have N primes, yes... but if the array you check against is of fixed size, then it is O(1).
In other words, if the array you want to check for primes is a million times bigger, every call of is_prime is still constant time.
@@urbaniuscee3657 Yes. I believe the problem guaranteed numbers less than 100.
Another way is to check divisibility by 2, 3, 5 and 7, since you only need to check primes up to sqrt(x). From my testing it has similar performance (~10% difference) to a lookup table and is much easier to write.
This channel is underrated.
Nice video and follow up.
I would like to add that you can have the array of prime numbers be generated at compile time too, instead of hard-coding them as literals (if you don't mind a slower compilation).
Maybe for prime numbers it's overkill and unnecessary, but in general it's great because you can get rid of some magic numbers that make the code hard to reason about later, and replace them with consteval algorithms that calculate those numbers, which are surely more self-documenting.
Jason Turner covers in his videos all the ways to run functionality at compile time so as not to waste runtime.
So the best way is to write all functions as constexpr (which is not a guarantee of compile-time computation, just a hint for the compiler) and have only one consteval function that takes a constexpr function's result as a parameter.
Scala 3 can run any code at compile time. In fact I think it provides three different ways of running code at compile time, although one of those - the type system, is ~decidable rather than arbitrary code.
When all notice the POWER of C++, they will flip out :D
As a longtime Java programmer, but 30-year C++ dabbler (Stanley Lippman's C++ Primer, 2nd Edition 1994), I can say that almost all the other programming languages were created because only a small minority programmers have the mental endurance, working memory, detail orientation, and conceptual ability to work with C++. It is truly the master programmer's programming language.
Hi! Thanks for the great demonstration of what constexpr does by showing the resulting assembly code! This is the first time the functionality of constexpr has finally sunk in for me.
By "Does constexpr make is_prime O(1)?" I've totally meant "whether it'll be handled at compile time?". I should've been more clear about my intention to not confuse other commenters, but you got it right anyway. ^_^
i like how you show a couple of ignorant and rude comments at the beginning and contrast it with it this step-by-step educational video 👍
I love how C++ removed the return type for auto, just to add it back at the end. It is wild.
The arrow syntax is actually required in order to use a decltype return type, if it uses the function's arguments, although its popularity outside of this usage (i.e. for the sake of aesthetics) indicates that `Type name` syntax is kinda shit lol
Can you give an explanation to what problem you're solving by using "auto" in your function signatures?
When I look at your is_prime() function from C++23 in your previous video, I have no clue if it returns an int as the arrow operator would suggest, If it returns the boolean expression of !=, if it returns std::ranges::find()'s return value or if it returns the iterator of primes.end(). (I'm guessing the -> int was supposed to be -> bool but still, the question stands of why having auto is even necessary)
To be honest, I don't even know if it is a function, or a lambda. In which case, why? Lambdas whole point is that they're anonymous.
I can't help but feel that putting auto in your function signatures does nothing but make your already complex code even more unclear.
Excited to hear your reply!
Trailing return type syntax is basically superior in every way to the traditional syntax and has been in the language for over a decade. It's required in some places, and some people prefer using it everywhere for consistency. At this point you really can't complain about it being unclear - that's just a you problem...
I am under the impression that his use of trailing return type is all about stylistic self-consistency within his own code bases written in many different languages, and not so much about C++ itself. Watch the following video of his C++ talk; timeframe around 25 minutes, "Conor Hoekstra - Concepts vs Typeclasses vs Traits vs Protocols - Meeting C++ 2020"
If you were genuinely curious about the advantages of trailing return types, then Stack Overflow's "Advantage of using trailing return type in C++11 functions" is your answer.
@@orbital1337 "Is basically superior in every way" is not a very descriptive or argumentative reason. Why is it better? What advantages does it provide, that learning the basic syntax of a C function can't?
For the record, I've never seen anyone use it in C++ until I saw this video.
@@testtest-qm7cj So from what I can gather, people do it to have consistency with lambda syntax. Idk if I'm getting something wrong, but lambdas aren't really functions, no? They're anonymous and aren't meant to be used as functions, which is probably why they have completely different syntax from regular functions. Just seems a bit weird to add consistency to two things that are meant to be separate.
@@JFrancoe Lambdas don't have to be anonymous; they're basically functions with a scope.
You can do:
auto foo = /* your lambda */;
and call foo as if it's just a regular function
'is paralleled by no other language'
zig: am i a joke to you?
C++ Rules 😉👍
Technically speaking, when you commented out all the other code and just returned the result of 'is_prime(43)', it's not guaranteed that 'is_prime' is calculated at compile time. As a matter of fact, if you turn off optimizations (put -O0 instead of -O3 on gcc), you'll see that you get a normal function call, probably also depending on the compiler version. You actually need to force the compiler to calculate it at compile time: you can place the result into an automatic constexpr variable or, better, a static constexpr variable. And yeah, there will be a difference in the generated assembly between those two cases, which could be an interesting topic to cover in the future. Great video though, keep up the good work!
Yeah, I'd say at least 50% of praise of constexpr doesn't stop to check, hey, if we don't write constexpr is this a valid optimisation anyway? Also lots of bug reports.
C++❤
Nice video, thanks for sharing. Impressive how far you can go with C++
Jai can definitely do something like this, but you pretty much hit the nail on the head.
Jai can't do anything because it's not even been released or open sourced yet
Rust bros saying "But Rust..... but rust... but but but" in 3, 2, 1.
but rust
But lisp
But Rust
Shove this video in the faces of those who say C++ is a mangled mess of a language and basically unreadable. Yes, it is a mess. Yes, it is sometimes difficult to read. But still, the power it gives you is top notch.
Technically all its power can be reduced to a Turing machine
It doesn't change the fact that it is verbose or overly complex at the cost of developer experience, though. If you need something to be this powerful, sure, C++ is good. But for most day-to-day programming this stuff is not relevant enough to sacrifice the ease of use and speed of development of other programming languages.
Yes it is very powerful, but that does not make it the right language for everything.
@@dietibol That's very fair. The problem I've found with today's programming ecosystem in general is that none of the self-styled "modern" programmers under the sun realize it, or they don't care at all. Programming languages are tools at the end of the day. Use the right tool for the right problem, peeps. It's quite simple.
aren't modules stabilized already?
Isn't this only valuable for fast startup? (Since if this was run repeatedly it should just be cached?)
And if so, aren't there other tools in other languages that leverage a previous run of the program for fast startup?
Results to function calls don't get cached in most languages unless you tell them to. The advantage about constexpr and other language's compile time evaluation schemes is you can use them all over the code with different results. Sure everything you do with constexpr you could also do by just always pre calculating results and hard coding the values into the program, but that will quickly become unreadable, or very annoying if you need to change some input value
@@sinom Both constexpr and caching require you to tell the program to do it.
So the question remains.
The only benefit of this vs caching seems to be just fast startup times right?
@@Dogo.R You can't compare these things. constexpr and consteval are just markers saying you can evaluate something before the program starts. Those evaluated values have to be the same on every start.
Caching is a different case: you save runtime values to avoid not only evaluation but IO operations too.
@@vas_._sfer6157 Saying I can't compare them and then explaining again what constexpr does, like the video did, does not at all explain how the problems they solve are different.
Nor does it directly address the question I asked.
No, it's useful for anywhere you might need a value to be available as a compile time value (template parameters, array bounds, things like that). Or if you need to generate a lookup table, or some parsing logic defined in a grammar known at compile time, etc. Lots of possibilities.
Dlang has exactly this, but instead of only marking functions you can also mark the outputs:
enum x = foo(bar);
vs the runtime
auto x = foo(bar);
Concepts. Compile time, sane polymorphism. Unlike the old SFINAE based polymorphism which was not quite sane.
thank you
Bloody brilliant. 😊
Awesome
If you have a lot of numerical code that's going to be constant every time you run it and requires some pre-processing, then it's going to save you some time, but I've got to be honest, I don't think this would have helped me in any code I've personally written, since just about anything with a parameter needs to be evaluated at runtime.
Well, I think there is a reason why "constexpr everything" has been one of the hot topics at many C++ conferences for the last few years. Speakers in many of those talks argue that ordinary applications also contain surprisingly many redundant static data generations done at runtime, hence adopting constexpr as much as possible is beneficial. But many questions from the audience of those talks were essentially the same: "does my application really have such cases? / is this relevant to me?" So, I guess either constexpr is not for everyone, or realizing you can benefit from it is somewhat difficult in general.
Amazing!
*angry in dlang*
Is there a real world reason to do this vs just knowing it's 3 at the end? I'm not being cynical. I think it's neat that C++ can do this. I'm genuinely curious. Like if I worked at a company that was using the output of this code for something - shouldn't I just provide 3 as the input to the next program since it's always going to be 3 after it's compiled?
This video just popped in my recommended vids and I just understood NOTHING !!! But I find it curious that a program would work "at compiled time", whatever that means.
I'm just starting my journey in CompSci; I'm currently doing an internship. But once I'm done with it and with fiddling with high-level languages (Python and JS), I wanna try the real deal with low-level languages. I thought about doing C, in which I have acquired the basics (print, input, conditionals, loops and functions), because it has resisted the trial of time, but C++ also tempts me a little, though that mention of 4 different kinds of C++ disorients me a bit.
One of my friends also suggested Go, which he seems to be fond of, but if I were to learn a modern low-level language I'd be much more interested in Rust with the hype there is around that guy.
Which language do you guys suggest I start with?
Typescript (in something like vitejs): Once you understand objects & functions you can do basically everything, it runs live in your browser & results are visual, and you have nice autosuggestions.
After that rust will be a good switch to learn lower level mechanisms.
After that if you want some paradigm shifts try sml, prolog & clojure too see how programming looks like from different perspectives
P.S. Go has its quirks, I don't think it's the best language to start with
I think learning C for its simplicity, and for how much it teaches you about what is happening under the hood of all the abstractions, before learning C++ for things like RAII and templates is a good path, but I might be biased. If you choose that path, take care to really learn modern C++ though, and not become another "C with classes" programmer.
On the other hand, not everyone in SciComp needs to become a low-level wizard. Julia is very popular in SciComp if you want a language that was purpose-built for these kinds of applications. Python can also get you very far if you know its shortcomings (e.g. avoid loops) and learn how to use its libraries correctly.
Yeah, I'd say check out Jai if you can. Its compile-time evaluation is pretty crazy: essentially anything can be compile-time or runtime, without any special distinctions in syntax or functionality. So for one of the pathological examples, you can run an entire game with graphics, sound, etc. all at compile time
That sounds like nonsense. We must take input from the user in real time, so what, we need to calculate every possible state of the game?
See it yourself:
Demo: Base language, compile-time execution
th-cam.com/video/UTqZNujQOlA/w-d-xo.html
@@phusicus_404 yeah it does sound pretty weird when you first hear it, but it's not actually that complicated. Just the language compiles in two phases. First it figures out what is compiletime, and compiles that part to bytecode which then gets invoked immediately while compiling. Then when all the compiletime stuff is done running, the remainder of the code gets compiled down to an executable, with any compiletime results available for use in the runtime stuff.
No one would actually run a game that way at compiletime, but the point was just to show that compiletime and runtime code are completely interchangeable. Honestly I think the whole thing is pretty genius, completely removes the need for a separate macro language, while being just as powerful, if not more-so.
@@phusicus_404 No. The compiler of this language is also an interpreter
@@david-andrewsamson45 It's many times better than C++ templates or macro sublanguages in Rust
Oh my god when you see it
I mean, this is basically just offloading the calculations to the compiler's "run-time". So in that, this is no different to Python or any other interpreted language
Maybe from the perspective of staring at two black boxes, but that's not how constexpr/consteval work: it's the compiler actually doing the work of checking and calculating everything, not just firing off some interpreter subprocess to return the result. Honestly, building a C++ interpreter into the compiler might have been easier and led to better compatibility 13 years ago, but instead we have only partial compatibility and keep getting features added to compile-time coding in C++
All good, but the std committee chose such terrible keywords, as always. They could have called the keywords compile_time and compile_time_only (or compile_time_forced) but no... constexpr and consteval, which are guaranteed to baffle users the first time they come across them.
Imagine thinking that the need to learn what a word means is an unreasonable barrier to entry...
I thought C++ superpower was employability because no one wants to touch that language 😅 Jokes aside, nice video, I had no idea of the difference between constexpr and consteval
My favorite C++ superpower is RAII. Why doesn't anyone else have this??
> constexpr and TMP in general in C++ is paralleled by no other language.
If you're going to make such a strong statement, perhaps you should qualify it by comparing against strong contenders like Zig's comptime and Julia's metaprogramming, no? Constexpr is pretty neat, but e.g. soagen for C++ vs std.MultiArrayList in Zig really highlight the shortcomings of TMP in C++23.
Would you care to elaborate your last sentence? Since I have practically zero Zig knowledge, I am genuinely interested in to what shortcomings you are referring.
@@testtest-qm7cj Both soagen and std.MultiArrayList serve the same purpose: given a struct type, metaprogrammatically generate a corresponding struct-of-arrays (SOA) type that has the same interface as a normal array-of-structs (AOS) type. In C++ pseudocode terms it would look something like this: given a struct
struct Foo {
int x;
float y;
};
Automatically generate the type
struct FooSOA {
    std::vector<int> x;
    std::vector<float> y;
    void push_back(const Foo& foo) {
        ...
    }
    // ... other std::vector methods
};
In Zig, std.MultiArrayList does exactly this for arbitrary structs using static reflection and comptime (compile-time programming). And it's all regular Zig code. SoAgen does the same using TMP, but also requires an external "generator" (hence the name soa"gen"). This generator takes some C++ code as input and generates a C++ source file that contains the new soa type, which you then have to compile with your project. It has to do this because TMP alone is not sufficient.
IIRC you can use soagen using only the TMP part, but that would make big compromises on ergonomics. Perhaps in the future when static reflection makes it to C++ a generator will no longer be required.
@@chaitanyakumar3809 Can't macros do this in C++?
Because I know there are crates that implement macros for that
@@climatechangedoesntbargain9140 By crates I assume you mean Rust crates? My understanding is that Rust's proc macros are much more powerful than C++ macros.
Idk maybe you could try doing this using a combination of C++ macros and TMP. Boost has a metaprogramming library called Hana that uses both TMP and macros. IIRC it has some way of using both to iterate over members of a struct.
But I'm not sure if it can be used to go all the way to implement the full functionality of std.MultiArrayList or soagen.
Jumping through all these hoops just to get more optimization seems insane. Why cannot most of the stuff be consteval by default? (except for library functions of course)
These kind of compiler optimizations can and often will be done automatically without you needing to do anything. Consteval is more about forcing this kind of optimization through stricter requirements.
There's several reasons you don't want this to be the default. First, constexpr is a contract. If you declare a function is constexpr you're telling all your callers that it's ok to use the result in a compile-time context (e.g. a template parameter or an array size), and you're willing to support that use case. In a parallel world where functions are constexpr by default, callers will be able to use the results in a compile-time context whether or not you intended that. You could easily provide an implementation that is constexpr only by accident, and then break them in the future.
Another reason is that this stuff is not free; it has a compile-time price which you may have to pay even if you never try to run the function at compile time, because the compiler still has to figure out whether any subpieces of the function can be calculated at compile time, and things like that. If you never intend for the function to be called at compile time, you don't have to pay this price and your compilation may be faster.
i mean, pre-calculating stuff at compile time is kinda cheating, but i can see how it could help parts of a bigger project where pre-calculating the result isn't an obvious thing.
So great and useful, thank you so much
I would have rather loved to see turning this not in to the one shot compiled binary, but rather into something useful.
Even though it is beautiful that you can turn everything into constexpr and have couple lines of assembly, seeing this for 20th time is not too novel.
What is the *practical* use of this other than precomputing constants with code?
(Yes, I know there are areas where it may be invaluable: embedded systems, signal, audio and video processing, cryptography etc.)
Maybe some crazy soul will write backpropagation which trains a neural network at compile time,
and compiles to machine code only the inference part with the computed weights ;)
It's great for simplifying metaprogramming, so you can write C++ code that reads like C++ instead of a particularly verbose dialect of Haskell. Anytime you need a lookup table, this kind of technique can be useful.
@@isodoubIet I'm not sure about the "meta" part. In Zig it is clearly the case that comptime is used instead of templates, but here, maybe I'm blind, but I can see only a change of the moment when the code is executed.
Yes, I agree that quite possibly this is exponentially better than forcing templates to do things they shouldn't in write-only scripts.
@@AK-vx4dy I'm sure. In fact, constexpr makes zero guarantees that the code will be executed at compile time, although it's a reasonable expectation that it would be -- in a debug build that might not be what you want, however, and that behavior is still conforming.
Making metaprogramming easier is the primary intended use of constexpr.
@@isodoubIet For me meta means "programs which write programs", so that's why I wrote I'm not sure. Templates do. Constexpr alone doesn't. It replaces programs with precomputed results.
@@AK-vx4dy I know what it means, which is why I responded that yes, I am sure. constexpr replaces a whole host of code that would've had to have been template metaprograms.
"It replaces programs with precomputed results."
Incorrect, that's not what it does; I explained that in the previous comment. It allows values to be calculated in ordinary C++ code that are acceptable in contexts where constant expressions are required.
Rust can do this
True, the guy deleted my comment where I even give some examples. Don't know why though.
@@felixpuscasu5625 might be youtube's new filtering bullshit algorithm. Some of my comments are randomly deleted too.
@@felixpuscasu5625 youtube auto delete most of code bc it thinks it is virus looool
Oh, by the way, I don't think it is accurate that constexpr makes it O(1); otherwise we would have had O(1) BFS/DFS since the 90s with template metaprogramming, right? 😅 I think the computation is shifted from run time to compile time, i.e. the "O" (linear for this problem, I guess) "happens during compilation".
3:45 why would you ever want to do that though? I do not get why is that important
Now try to optimize the original program to be faster with dynamic input, especially the is_prime seems to be in need of some love.
Is Rust's const as powerful as constexpr/consteval in C++?
Afaik not yet
idk I feel like this isn't as useful as you make it out to be. Realistically your program will mostly be runtime data.
A few languages have CTE (this compile time evaluation/execution) in some form, you mentioned Zig, but others, like D, have had it for a very long time.
Ah, yeah, a high-perf program that does nothing...
Doing nothing is the fastest operation of a computer
I'd be interested in a comparison of compile-time computation between C++, Rust and Zig. I know that Rust's const fn is pretty powerful, but I wouldn't be surprised if it was still behind C++
Comparing this approach with metacircular evaluation is interesting. constexpr and consteval become kind of a hard-coded instance where interpretation is dictated by the language specification of C++ rather than being implemented in the program.
The huge amount of design work put into trying to make constexpr and consteval align with existing C++ is the real superpower in my mind. Same as not having to think about interpretation in Zig.
Otherwise (in?)famous MCE's in other languages already have this handled, and are more flexible.
Can do the same thing in C:
int main(void)
{
return 3;
}
wysi
Why no vim?
Can Rust do this?
Jai has the same feature. Even building projects in the language is done in Jai itself.
Hey, I recognize your voice from ADSP. Didn't know you had a YouTube channel.
Just saw that as of today, rust has stabilized inline const (forced compile time evaluation) and will be added to the next release so i guess this video is out of date already!
"Does constexpr make is_prime at compile time... and the answer to that is ... yes"
The answer is no. Constexpr is like a hint, which the compiler may ignore. The standard merely says that if you annotate a function with constexpr, the function _may_ be evaluated at compile time. There are no guarantees.
constexpr is a metaprogramming tool: compose a bunch of constexpr operations and the result is an honest-to-goodness constant expression that you can use anywhere you might need one: template parameters, array bounds, etc. In those cases, stuff would be evaluated at compile time, because it has to be. If you're merely using it as an optimization, the compiler may decide not to do it.
PS, the same applies to consteval; it'll run stuff at compile time if it feels like it. It probably will, but it could decide not to for whatever reason (e.g. you probably want it to run at runtime in debug builds so you can... debug it. This would be nonconforming otherwise).
I was just about to comment that zig can do this too. But you beat me to it.
Jai is also amazing for this, and even better than Zig.
I can get a Win32 window with some basic graphics at compile-time in Jai lmao. Obviously there's no actual use there, but it just shows how capable it is.
C++ and Rust get compared a lot. Proponents of Rust often point out how fast Rust is, and I don't think they're wrong. The "normal" C++ solutions and the "normal" Rust solutions to the same code might often have Rust win. But with C++, going one or two levels of optimization deep is very easy- just enable a compiler flag or slap on some keywords to some functions and you already get a massive boost. But in Rust, you're going to have to do a lot of fiddling with custom data structures or unsafe blocks that the compiler won't easily reason about, so "one level of optimization" in C++ will easily beat "one level of optimization" in Rust, both in speed and ease of development.
rust works nearly the same way though? you can select different optimization flags and there's const contexts. it's pretty cool, i think you should check it out
C++ was around much longer than Rust, so just you wait
Rust can also do compile-time calculations thanks to const functions. Templates in C++ don't even hold a candle to Rust's macros and generics; they are so much more powerful and more integrated into the language, whereas in C++ the templates seem almost glued on.
@@raykirushiroyshi2752 Rust inline const just got stabilized a few hours ago
@@raykirushiroyshi2752 "Templates in c++ don't even hold a candle to rust's macros and generics"
That's not true. C++ templates are more powerful and it's not even a contest.
was about to comment zig just before the end of the video lol zig is great
more C23 videos plz
Pretty sure you could have just made the functions static constexpr.
I see the usefulness of this but at the same time i don't.
What is the point to write a program for something that has a constant answer?
Why not just encode that answer into your program yourself as an actual const T something = ...; at that point?
I do understand that you're basically able to let the compiler figure out the answer so you skip manual labor of having to
go through the calcs or having a small helper tool to calculate the constants and then manually embed them.
However, any real program does not have the luxury of knowing all variables at compile time and won't be able to
output what I would essentially call a "fake program" that literally just moves the answer into eax to return it.
A real program has some sort of unknown element to it that will only be known at runtime, and depending how deeply
that is rooted in the actual logic (say a bunch of other things depend on whatever that thing is), none of those will
be able to be evaluated at compile time, right? Because now it's a real function again that gets some arguments pushed or passed via registers.
Don't get me wrong i do like this but i just fail to see how this is truly useful.
If some of you have some good counter examples i'd appreciate it greatly.
A better example for this program would have been generating a lookup table for the primes at compile time, since the size and requirements are known and won't change with the input. As you suggested, you could also just manually encode this kind of information, but that is prone to errors, and a requirement change in the future would mean manually redoing all the work yourself, whereas with consteval you just have to change a single variable
@@Squizell Thank you for your comment. Yeah this is the one benefit i do understand. Being able to change the conceptual requirements in a simple manner through a single variable that dictates the rest and letting the compiler embed all the static data for you correctly automatically seems nice indeed.
I just have a hard time thinking of practical examples where I could make use of this. I can think of it as: anything that is generally expensive to compute but gets queried a lot, with a finite amount of answers or only a specific degree of precision needed (resulting in a finite set of answers), will benefit from a lookup table.
Then again, how confident can one be that the requirements won't change at runtime under any circumstance, ever? This is where I'm stuck mentally.
Did you ever use this technique for a project of yours?
Another example use of constexpr I can think of is obfuscating string literals in your built binaries. I don't know why someone would want that, but it certainly is possible.
constexpr is the best accidental Turing Machine to exist in any programming language.
Is it accidental though?
Boi, I got no idea about C++, but damn, I didn't know one print statement (and more likely the print import for the most part) would expand to 4 THOUSAND lines of assembly code lmao
bait video,
I heard that Zig has a comptime...
By declaring the array of integers as constexpr, you're getting rid of the entire point of the program. The goal is to create an algorithm to find the maximal difference between primes in an array, not to return the correct solution for one particular case. A practical application would take a client-provided array and return the correct answer, not to return 3 regardless of client input.
There is a reason that this wasn't done in the original video. This video is just an explainer for constexpr and not a solution of the original problem.
@@Spielix If this video was just meant to explain constexpr, then it did a poor job of that. I interpreted this video as a response to people saying that there was a better solution to the original problem through a different approach. If this video was meant to show that the original solution was a good choice, then reducing the program to int main() {return 3;} seems like a poor argument to me
@@sweetcornwhiskey People were arguing/misunderstanding what constexpr was doing/why the function was marked constexpr. This video was a reply trying to show what the effect of constexpr is or could be in a very artificial example. One can argue about how good a job the video did especially considering that consteval wasn't really needed to get the compiler to do everything at compile-time. But none of these videos is about producing the "ideal", optimized to perfection solution. That isn't what this channel is about according to what I have seen. The original video was showcasing C++ language and library evolution on a comprehensible but artificial problem using whatever style Connor finds the most elegant (which is highly subjective).
imagine seeding random number generators at build time!
This is misleading, is_prime(x) is not O(1), is_prime(32) is O(1)
🤌
Now why would imperative programmers make jokes about pure functional programmers? :D
If only C++ didn't insist on backwards compatibility and having 1000 ways to accomplish the same thing😕