I freaking LOVE that man's hair. That is a man who says "The '70s were rad and I'm never leaving them, and I don't care how much other people talk crap about me."
Totally!
I think it's more about the man than the '70s.
'70s, or a-few-centuries-ago-s :) Dude looks like a time traveler. And that shirt just seals the deal for me. The way it billows - it’s special and way cool in my book.
Looks like John Fogerty circa 1969
No one mentioned that vector<bool> is actually not a SequenceContainer: the one and only exception for vector.
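Presumably this refers to vector<bool>, whose operator[] hands back a proxy object instead of a real bool&, which is why it can't satisfy the usual container requirements. A minimal sketch:

    #include <vector>

    int main() {
        std::vector<bool> v{true};
        auto b = v[0];     // b is std::vector<bool>::reference, a proxy, not bool
        b = false;         // writes through the proxy: v[0] is now false
        bool c = v[0];     // forcing the type makes an actual copy
    }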
Scott Meyers looks like an IT version of He-Man with that hair. But he has a really high IQ instead of big muscles.
C++0x, at the time, was the beginning of the end.
I realise I knew even less about C++ than I thought.
Hopefully that leaves you with the feeling of wanting to know even less!
Around 47 minutes, Scott gets to the point: in C++ you are focused on the tool itself, rather than on what you can do with the tool. The graphic he uses is great.
Programmers should be focused on their problem, not on the tool.
Then he debunks his own work: "182 guidelines!" I have posted before about this need to write officious 'style' documents for each project, telling programmers what to use and what not to use in C++. In languages designed from the ground up, these things are simply part of the language, no extra thought required.
And I agree that other languages should not require this army of consultants going around to support the language.
That is another problem with C++ - it is a language that demands that the programmer support C++. It should be the other way around: a programming language should support programming and programmers.
And that is not what is meant by another of those trite C/C++ platitudes, that such languages are 'just training wheels for beginners'. That has always been wrong, but too many programmers, especially young ones, have fallen for it.
From Scott's statement to the young programmers, they will spend the next 10 years trying to understand this stuff. Ten years to move beyond being a beginner, really?
The simple reasons why C++ is exactly like you (and Meyers) described are: 1. it's old, and 2. backwards compatibility. Nothing ever gets obsoleted in C++ in order to not break the billion tons of legacy code. This, however, is a deliberate design choice and the one single reason why C++ is still in heavy use everywhere in the industry: It compiles. 😉
They should have simplified C++ 2003. Most add-ons are solutions to problems caused by other add-ons. I also recognize pseudo-science in the standard library.
Interesting, could you provide some examples?
@@heyheyhophop Sure. I use C++ every day. Auto? Function(al)? Enum classes? Constexpr? Variant? Any? For(each) and smart pointers, among dozens of other add-ons, are sugar with the side effects of sugar. Even C++03 ingredients such as enums are better implemented as a struct with constants. A lot of template tricks are just tricks with meager benefits. Pseudo-science in the STL? Everywhere. What computer scientist uses std::set or std::list? Who in her right mind would directly sort a std::vector instead of determining the order? Besides Stroustrup and 2 or 3 others... And BTW, I know that Bjarne does not agree with the added weight.
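For what it's worth, a sketch of the struct-with-constants pattern presumably meant here (hypothetical names), next to the plain C++03 enum it replaces:

    // Plain C++03 enum: the enumerators spill into the enclosing scope
    enum Color { Red, Green, Blue };

    // Struct-with-constants: qualified names and an explicit type,
    // similar in spirit to a C++11 enum class
    struct Colour {
        static const int Red   = 0;
        static const int Green = 1;
        static const int Blue  = 2;
    };

    int c1 = Red;          // unqualified, collides easily
    int c2 = Colour::Red;  // always qualified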
bump
Saying it's a “special rule” reminds me of the euphemistic use of that term for people with mental challenges.
Anybody know at 22:10 who "the man who teaches comparative programming languages" is?
I second this question
int x1; has an indeterminate starting value because the variable is just defined. The compiler hasn't even decided whether it's going to allocate memory for it. It might never even *get* a value because it got optimized away.
It hasn't even been initialized. It has only been defined.
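Concretely, the C++ situation under discussion, as a minimal sketch:

    int main() {
        int x1;        // a definition: x1 exists, but holds an indeterminate value
        int y = x1;    // undefined behavior: reading an indeterminate value
        return y;
    }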
Not in D. x1 will have the value 0. If you don't take its address, and the first op you do is a write, or a += or something like that, it will be optimized by the compiler. But you are free to use it assuming it is 0. If you read it, it will read 0. If you take its address and pass it somewhere to read or do a read-modify-write, it will have zero in it. 'int x1;' is a definition and an initialization in D. Similarly, references to objects are initialized to null. If you really, really want to avoid initialization (which is useful for big arrays, for example), the way is to use '= void;'. That will help the compiler and ask it not to emit code for the initialization.
But at 24:47, auto x4 { 0 }; is int as of C++17. The slide at 28:45 was never shown.
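For reference, N3922 (adopted for C++17, and applied retroactively by compilers) split the two brace forms:

    #include <initializer_list>

    auto x4 { 0 };    // int as of C++17 (direct-list-initialization)
    auto x5 = { 0 };  // still std::initializer_list<int> (copy-list-initialization)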
By the power of Grayskull!
auto xs = { 0 } infers initializer_list to make the following lowering possible:
for (auto x: { 1, 2, 3 }) {…}
to
auto&& __xs = { 1, 2, 3 };
for (auto x: __xs) {…}
Or at least I guess so.
18:10: how is this const int? It's just a regular int; it behaves as const only because the lambda's operator() overload is const-qualified by default, unless you add mutable.
No, you are wrong. Even with "mutable", the lambda's internal copy of cx will still be const (unless you define it using init capture)
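A minimal sketch of both mechanisms being debated, assuming the cx at 18:10 is a const local captured by value (C++14 for the init-capture):

    int main() {
        const int cx = 0;

        auto f1 = [cx]() mutable {
            // cx = 1;    // error even with mutable: by-value capture keeps
            //            // the declared type, so the copy is itself const int
        };

        auto f2 = [cx = cx]() mutable {
            cx = 1;       // OK: init-capture deduces via auto, dropping const
        };

        int y = 0;
        auto f3 = [y]() {
            // y = 1;     // error: operator() is const-qualified by default
        };
        auto f4 = [y]() mutable {
            y = 1;        // OK: mutable removes the const from operator()
        };

        f1(); f2(); f3(); f4();
    }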
"Later specialization" (28:25)? Can someone explain what he meant by that please?
For example: some header (say, base.h) contains code for Base. Another file (derived.h) includes it and contains additional code for Derived. Now, yet another file (spec.h) includes base.h and specializes Base so that it doesn't have the method that Derived calls. Now, if somebody includes all of that and calls that method on Derived, it would in turn call a method that's missing in Base, which results in a compile error. When should we raise the error? The answer is: if it's called normally, we report the (possible) error immediately. We have no idea whether such a spec.h exists, we haven't seen it, but there is a possibility that it does, so we play safe and detect a potential error early. If it's called with this-> syntax, that's the library author signaling us: hey, it's OK, I promise there will be no spec.h, and if one does appear, then you can raise an error.
@@migmit Won't Base still continue to have the method's declaration (at least)? And won't it be passed down to derived types via inheritance?
dstgre No. A specialization (partial or full) completely replaces the general declaration. A specialized Base would not remember anything from the primary Base template.
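A single-file sketch of the whole scenario (hypothetical names, with the three headers collapsed into one translation unit):

    template <typename T>
    struct Base {                    // base.h
        void helper() {}
    };

    template <typename T>
    struct Derived : Base<T> {       // derived.h
        void call() {
            // helper();             // phase-one error: unqualified lookup
            //                       // doesn't search the dependent base
            this->helper();          // dependent name: resolved at instantiation
        }
    };

    template <>
    struct Base<int> {};             // spec.h: no helper() here

    int main() {
        Derived<double>{}.call();    // OK: Base<double> has helper()
        // Derived<int>{}.call();    // error: Base<int> lacks helper()
    }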
It means that C++ in a very basic way does not work like C. In C, you can generate machine code as you parse it. It won’t be great code, but you can do it. Literally: emit code after each definition or statement. In other words: you can parse C like you would Pascal. Single pass, and between each global scope declaration or definition you already have all the types resolved, and have valid assembly output.
When you are past the final non-blank character while parsing the C file, you could be done emitting the code, ready to close the object file (some object file format oddities notwithstanding), without having to know that what’s ahead is going to be an EOF. There could be one more function there :)
In C++ you could do it most times, except when there is any code that could change meaning due to later specialization. In most cases, this means that a lot of template type deduction precludes one-pass compilation: during the first pass (first phase), you can’t be sure what the code means.
It’s quite stupid really, because as soon as C++ got two-phase lookup, it could have dropped the need to forward-declare anything. In fact, by converting “plain” types to template types in a way that forces two-phase deduction, you get the compiler to process all the uses of the type after the whole thing was parsed in the first pass, guaranteed.
This is something you could add to (a 4+ year old version of) clang in a week or so (took me longer since it was done in fits and starts), and it’s not some dramatic change. Everything still worked fine, and no new problems were introduced (the test suite still passed - good enough for me). You could then add an option to clang-format to remove forward declarations, and to remove the silly syntax that selects between 1- and 2-phase lookup (the good old casting to a fully substituted type would take care of it just fine).
So anything that the language allows to be specialized past the point you are currently compiling, you have to defer further processing of, and resume when you reach the end of the translation unit (in general), or at best when you reach a specialization that is constrained by language rules to be the most specific one. Now, later on you may still end up aborting if multiple specializations would make it ambiguous in the context where one template type was used. But that doesn’t help much, since lots of front-end optimizations require the complete AST of a translation unit anyway, and nobody emits machine code as they parse in anything but toy compilers or very resource-constrained systems (say, Turbo Pascal on CP/M, where the IDE, compiler, most of the source you edit, and the OS all fit into ~64 KB of RAM).
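To make "code that could change meaning due to later specialization" concrete, a minimal sketch (hypothetical names):

    template <typename T>
    struct Traits {
        static constexpr int value = 1;
    };

    // A one-pass compiler would like to emit code for f() right here...
    template <typename T>
    int f() { return Traits<T>::value; }

    // ...but a later specialization can still change what f<int> means:
    template <>
    struct Traits<int> {
        static constexpr int value = 2;
    };

    int main() {
        return f<int>();   // 2, not 1: f<int> couldn't have been finalized
                           // at f's point of definition
    }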
This is a great reminder why I am truly done with C++. Hoping moving to D makes my life easier...
Move to Rust :)
How is D going? I am thinking about switching from C++ to D
@@James-dw7un It's better than it has ever been, which means you can be productive in D, but sadly it has some accidental complexity to deal with, much like C++ but to a lesser extent. It's a bit like learning Vim: it will take some time before you are productive. My advice is to switch if you have a task that D enables you to do (like more complicated compile-time magic); otherwise probably not, unless you can make the time investment to learn all the aspects of using D.
So, six years have passed. How are things going with D? Is your life easier? Or did you switch to Rust, as someone here suggested?
I have not even finished this lecture and I already decided I'm never gonna touch C++ again.
Scott was so witty, and... hardly any laughter! Very disturbing. (OK, it gets better after some 20 minutes of warming-up.)
As a rule of thumb, scheduling funny keynote speakers to a morning talk is a terrible waste.
It is kind of trite to excuse the problems of C++ as humorous. The problems need to be taken far more seriously. It is OK to laugh in the moment, as long as the serious implications are understood: the burdens on programmers, and the difficulty of getting software to work and then debugging and maintaining it.
Not realising that is the terrible waste.