Alexandrescu is one of my favorite stand-up comedians, in part because he mixes in some of the best C++ content out there.
Have you seen Stroustrup? He uses a deadpan delivery, never laughs at all, like he is dead serious. Even off-stage he never drops the illusion... genius.
@shoulderstack5527 That's being 'narcissistic', not 'genius'. Modesty and humility are equally important to being a genius. Einstein was a 'genius'.
I (re)watch his talks whenever I need a dose of intellectual laughter ;)
That's an odd requirement for genius. I thought it was simply doing ingenious things.
Brilliant talk. I have watched all of Andrei's talks online over the last two decades or so. Brilliant, fun-filled man. It's kind of nostalgic to see him with grey hair. Wishing him health and a long life!
Andrei always gives the best talks at CppCon.
Always a pleasure to listen to Andrei talk. Thanks for sharing 😊
Glad you enjoyed it!
10:47 I think this is the most important takeaway from this talk: "static for needed for simple implementation". All of the clever implementation techniques discussed in the latter part of the talk are, essentially, workarounds for the fact that C++ does not have a "static for" loop.
what is a "static for"? can someone tell me the name of the talk where he mentioned this?
This guy just keeps impressing me, great talk!
Very intelligent and enjoyable talk
The example on slide 24 is broken. This was asked about by a member of the audience, and a non-explanation was provided by the presenter.
int b = f(1,2.5)
Given that Ts has to be explicitly specified, it will be empty in this case, and Us will always be empty due to the way the matching rules work. The only way to make this work is by explicitly specifying Ts so that it captures all of the arguments except the last one (or not specifying it at all if it's empty, as is the case with the first example). So
int b = f(1, 2.5) would work and deduce as
int f(Ts ..., Us ..., T) [with Ts = {int}; Us = {}; T = double]
He always has such interesting videos. I always learn from him.
Glad you like them!
16:11 "Voldemort types" is my new favorite way to refer to lambda expressions. XD
This talk is outstanding!
There is a 3rd thing one can do with packs: get its length with sizeof...(Ts).
First(...)! Great talk! Perhaps instead of the member unsigned d_active, a call to variant.set could set a pointer to the destructor (so no array lookup at destruction time, though still a function-indirection overhead), and d_active could be inferred from that. Not a good bargain if set/get are used often, though.
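A minimal sketch of that idea with a hypothetical Variant (this is not the talk's code): set() records a pointer to a destroyer for the currently active type, so destruction needs no index and no array lookup, only one indirect call.

#include <algorithm>
#include <cstddef>
#include <new>
#include <string>

template <typename... Ts>
class Variant {
    alignas(Ts...) unsigned char buf_[std::max({sizeof(Ts)...})];
    void (*destroy_)(void*) = nullptr; // null means "no active member"

public:
    template <typename T>
    void set(const T& value) {
        if (destroy_) destroy_(buf_);             // destroy the previous member
        ::new (static_cast<void*>(buf_)) T(value);
        destroy_ = [](void* p) { static_cast<T*>(p)->~T(); };
    }

    ~Variant() { if (destroy_) destroy_(buf_); }
};

int main() {
    Variant<int, std::string> v;
    v.set(std::string("hello")); // destroy_ now points at the std::string destroyer
}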
Nice, I actually got to use the ideas from this talk at work right away. I didn't know you could split explicitly specified and deduced arguments into two distinct packs. That makes implementing something I thought impossible actually quite easy.
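For anyone curious, a minimal sketch of the two-pack split, assuming C++17 (split is a made-up name): the explicitly specified template arguments fill the first pack, and the second pack is deduced from the call arguments.

#include <cstdio>

template <typename... Explicit, typename... Deduced>
void split(Deduced...) {
    std::printf("explicit: %zu, deduced: %zu\n",
                sizeof...(Explicit), sizeof...(Deduced));
}

int main() {
    split<int, char>(1.0, 2.0f, 3); // Explicit = {int, char}, Deduced = {double, float, int}
}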
On slide 37:
constexpr size_t m = lo + (hi - lo) / 2;
if (n < m) {
destroyLog(n, p);
} else if (n > m) {
If n < m is indeed true, the check for (lo == m) should technically not be needed, right? Is it there just because the expansion is done at compile time? In general, can someone explain why we need the checks as part of the template parameters?
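As I understand it, yes: both branches are instantiated at compile time no matter what n turns out to be at run time, so the bounds carried in the template parameters are what stop the instantiation recursion. A hedged reconstruction of the technique (not the slide's exact code), using if constexpr for the base case instead of a (lo == m) check:

#include <cstddef>
#include <tuple>

template <std::size_t lo, std::size_t hi, typename... Ts>
void destroyLog(std::size_t n, void* p) {
    if constexpr (lo + 1 >= hi) {
        // one candidate left: destroy the lo-th type
        using T = std::tuple_element_t<lo, std::tuple<Ts...>>;
        static_cast<T*>(p)->~T();
        (void)n;
    } else {
        constexpr std::size_t m = lo + (hi - lo) / 2;
        if (n < m) destroyLog<lo, m, Ts...>(n, p);   // both calls are instantiated...
        else       destroyLog<m, hi, Ts...>(n, p);   // ...even though only one runs
    }
}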
Right at the beginning, when comparing the elements of arrays to see if they are equal, before even looking at the indices, it is more efficient to compare the _sizes_ of the arrays (or vectors). If the arrays/vectors have 10, 12, and 2 elements, there is no point comparing the elements pairwise, as you already know they are not equal. If the arrays/vectors have the same number of elements, then you can go on to iterate over the indices and compare the values.
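In code, the size-first comparison looks like the sketch below; this is effectively what std::vector's operator== already does.

#include <cstddef>
#include <vector>

template <typename T>
bool equalVectors(const std::vector<T>& a, const std::vector<T>& b) {
    if (a.size() != b.size()) return false; // cheap early exit on mismatched sizes
    for (std::size_t i = 0; i < a.size(); ++i)
        if (!(a[i] == b[i])) return false;
    return true;
}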
There’s an error on slide 19, and the explanation is wrong as well. On the last line, there is
f(1, 2, 3, "4"); // Ts =
The comment is correct: the instantiation of f makes Ts as mentioned. But f "reverses" U and Ts. The argument 2 binds to U (which, correctly stated, must always be deduced), i.e. U = int. Arguments 3 and "4" go to const Ts&..., which is (const int&, const double&). And "4" cannot bind to double.
I assume the f() should be corrected as:
template <typename T, typename... TS, typename U>
void f(T, TS..., U)
yes
Great talk, I like watching such subtleties being described. By the way, it seems there is a mistake about the function parameter order; it can be seen at 27:41. In fact, the "3" will go to "Ts". In order to put it into U (and "2" into "Ts"), the function parameters would have to be declared as follows: f(T, Ts&..., U). And this corresponds to what is being said about the third line (the presented example cannot compile because "4" is being bound to double).
Update: found this issue in older comments.
This guy could have been a comedian. Good technical side too.
Not that C++ makes this easy, but wouldn't the ideal solution at the end be to sort all types according to trivial destructibility so their indices are all grouped, then do one of three things at compile time (sketched below)?
1. Eliminate the destructor if all types are trivially destructible.
2. If you have mixed destructors, emit a single branch with a constant value and branch into the jump table if the value index is inside the range of types with destructors.
3. Eliminate the branch from (2) if all types have non-trivial destructors.
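A hedged sketch of that scheme (not the talk's code); it assumes Ts... have already been sorted so every trivially destructible type comes before every non-trivially destructible one:

#include <cstddef>
#include <type_traits>

// Index of the first type with a non-trivial destructor (sizeof...(Ts) if none).
template <typename... Ts>
constexpr std::size_t firstNontrivial() {
    constexpr bool trivial[] = { std::is_trivially_destructible_v<Ts>..., true };
    for (std::size_t i = 0; i < sizeof...(Ts); ++i)
        if (!trivial[i]) return i;
    return sizeof...(Ts);
}

template <typename... Ts>
void destroyActive(std::size_t index, void* storage) {
    constexpr std::size_t first = firstNontrivial<Ts...>();
    if constexpr (first == sizeof...(Ts)) {
        (void)index; (void)storage;      // 1. all trivial: the destructor vanishes
    } else {
        static constexpr void (*table[])(void*) = {
            +[](void* p) { static_cast<Ts*>(p)->~Ts(); }...
        };
        if constexpr (first == 0) {
            table[index](storage);       // 3. all non-trivial: no branch at all
        } else if (index >= first) {
            table[index](storage);       // 2. one constant-bound branch
        }
    }
}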
I deeply admire Andrei, for his C++ insights and his humor ;-) , thanks Andrei, you are (also) a great speaker. At 0:20:00 you talk about adding a type sequence to the standard, but couldn't we consider std::tuple to be such a "type sequence", and shouldn't we consider Lisp-like operations (head, cons, ...) to be applied to a tuple? What are your (or anyone else's) thoughts on this? Please comment.
The way tuple is implemented involves a lot of template instantiations, either O(n) or O(log n) depending on the implementation, so it's a very heavy solution. A type_sequence, on the other hand, is only a single instantiation.
@DreyriRS Plus, you can construct a type_sequence and pass it around as a value. Comes in handy if a pack *has* to be deduced.
Tuple is too heavy. A lot of people, unfortunately, use uninstantiated tuples to manipulate sequences of types. And yeah, there is also a need for a type_container for a single type.
type_sequence has no storage; it is a compile-time construct. tuple is meant to be instantiated as an object eventually, so it wouldn't respect the SRP to use tuple instead of type_sequence for this task.
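A minimal sketch of the difference under discussion, with type_sequence as in the talk and a made-up head helper:

#include <type_traits>

// A type_sequence is a single, empty instantiation: no storage, no bases.
template <typename... Ts>
struct type_sequence {};

// Lisp-style "head", written as a declaration used only inside decltype.
template <typename T, typename... Ts>
T head(type_sequence<T, Ts...>); // never defined: only used unevaluated

using Seq  = type_sequence<int, double>;
using Head = decltype(head(Seq{})); // int
static_assert(std::is_same_v<Head, int>);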
great talk!
While hash table lookups can be linear (depending on implementation), creating the hash table can get expensive. All is fine until you have a collision. At that point, you have some choices in implementation.
Option 1 (bad) is to scan to the next hole. This makes lookups linear, as well as inserts. When spaces run out, you will need to increase the size of the hash table anyway.
Option 2: buckets. Create a bucket to hold multiple values that hash the same. Now you have options on how to implement this bucket. But be warned: for large enough n, the runtime devolves to whatever the bucket algorithm is. Why add the extra complexity?
Option 3: increase the size of the hash table. This will cause a rehash of every entry already in the hash table. Be warned that only by doubling the size of the hash table at each increase can the insert time be kept to amortized constant: the total rehash work over n inserts is then n/2 + n/4 + ... < n. Constant-size increases amortize to linear time per insert.
Hash tables are also typically quite sparse when the first collision occurs.
My takeaway is that hash tables work well for moderate-size data sets where the size of the data set is fairly well known in advance.
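For reference, a minimal sketch of option 2 (chaining buckets) combined with option 3's doubling policy; std::unordered_set is the real thing, and HashSet here is just a made-up illustrative class:

#include <cstddef>
#include <functional>
#include <list>
#include <vector>

template <typename T>
class HashSet {
    std::vector<std::list<T>> buckets_;
    std::size_t size_ = 0;

public:
    HashSet() : buckets_(8) {}

    bool insert(const T& v) {
        auto& b = buckets_[std::hash<T>{}(v) % buckets_.size()];
        for (const T& e : b)
            if (e == v) return false;     // already present
        b.push_back(v);
        if (++size_ > buckets_.size())
            rehash(buckets_.size() * 2);  // doubling keeps inserts amortized O(1)
        return true;
    }

    void rehash(std::size_t n) {
        std::vector<std::list<T>> fresh(n);
        for (auto& b : buckets_)
            for (T& e : b)
                fresh[std::hash<T>{}(e) % n].push_back(std::move(e));
        buckets_ = std::move(fresh);
    }
};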
This man is a comedian!
28:00 - is there an error in the slide? Was it meant to be f(T, const Ts&..., U)?
It seems so; the code as shown does not compile, while the (T, Ts..., U) ordering does.
Around 17:30, who was the person who answered? The one Andrei said knows it because he wrote his own compiler.
Walter Bright?
So, Andrei, what's your code going to DO when the next compiler obeys subtly different semantics about what expansions are allowed, and where/when the expansions are allowed...? Scary. (Re: approx. 12:45.)
Can someone elaborate on this kind of idea?
Inverse Factory Method = Converse Factory Method... just a wording substitution that seems clearer and more intuitive.
The GitHub link points to 2020; it should point to 2021 instead to get the slides.
Slide 19 is incorrect. It won't even compile. It should be "int f(T, const Ts&..., U);"
There is a typo in slide 19, "U" should be the last function argument, i.e. it should be int f(T, const Ts&..., U)
It's not a typo. U can't be deduced if it's the parameter after the pack. Actually, the compiler will give you an error for this declaration.
Decent talk, took quite a while to get to the actual point though
After 30+ years of C++ practice, I'm forced to say that C++ is going to crumble under its own weight.
Damn. Wish I could just write everything in Rust. :D
What a convoluted mess. Just stick to C, where everything is already properly defined and changes to expected behaviour are extremely rare. If you don't understand C, then you don't know how to program to begin with; you only understand the wrappers.