Thank you for creating this talk! This is exactly what the industry needs -- to bridge the gap between how you did things in C/C++ and how you could do them in Rust.
Great talk :) Note that as of Rust 1.63.0, Mutex does not allocate anymore on Linux and can be even constructed at compile-time (Mutex::new is now a const function).
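To illustrate, here's a minimal sketch of what the const constructor enables (assuming Rust 1.63+): a `static Mutex` with no lazy-initialization wrapper like `lazy_static` or `once_cell`.

```rust
use std::sync::Mutex;

// Since Rust 1.63, Mutex::new is a const fn, so a static Mutex
// can be initialized at compile time, with no heap allocation
// on Linux and no lazy-init wrapper.
static COUNTER: Mutex<u64> = Mutex::new(0);

fn increment() -> u64 {
    let mut guard = COUNTER.lock().unwrap();
    *guard += 1;
    *guard
}

fn main() {
    increment();
    increment();
    assert_eq!(*COUNTER.lock().unwrap(), 2);
}
```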
I think it's a really difficult thing to appropriately convey. Most videos I've seen when I started out basically went "when you start out, you'll fight with the borrow checker a lot" and now that I've used Rust for a while I think that's actually the best way to describe it. It was alien and sometimes really frustrating when I started out but eventually it just clicked, but it's really hard to pinpoint what the "it" that clicked actually is. These days I barely encounter the borrow checker anymore, I don't need to declare lifetimes often and if I do I don't need the compiler to remind me because it's obvious - and I've even had a couple of occasions where I missed them in C++ because I knew that a function would be potentially unsound if something changed on the call site. It's kind of a big deal when you start out because it's fundamentally different than what we as programmers were used to. But once you actually understand lifetimes, which I have no alternative to experience for, you almost don't wanna live without them anymore.
@@swapode The thing is, to code properly in C++ you're supposed to be fully conscious of lifetimes at all times. And when you fail to do so, well, that's when the myriad of security exploits crop up.
Awesome talk! It was just perfect for where I am at with Rust right now -- I sort of know its borrow/move semantics, because I watched some Rust for C++ devs videos, but I don't have almost any experience. Yet, I was able to understand everything and I feel like it gave me some very valuable insights into Rust. Thank you!
This must be the best talk ever on explaining the core ideas of a programming language - actually two. As an old C++ programmer, I currently enjoy C because of its bitwise move semantics, which make so much more sense than C++'s default cloning and confusing implicit conversions.
WOW, WOW, WOW a MILLION times! One hell of a presentation dude, one hell of a presentation! I learned C++ a while back, but it was more like I learned C and then some basic class and new/delete topics and that was it, nothing this advanced in the language, and then went on with some assembly and basic hacking concepts. Recently though, I got The Rust Book printed and I'm currently only 8 chapters in. Upon reading the ownership chapter, I was awestruck by the ingenuity behind the concept! We really needed a new systems language. Your video, now, managed to make me feel that awe again, and it didn't stop giving. Every single slide gave me that bit of satisfaction, and it's all because of you, those quality slides, and your personality! Good fucking job brother!
Awesome talk, I feel like I understand both languages a little bit better now (although it seems like my distaste for C++ only grows every time I learn something new about it) . I really like the fast tempo, it doesn't give the mind too much time to wander off and lose focus. I was able to keep up almost all the way through, just the examples with Arc at the end flew straight over my head. I think the abrupt introduction of shared ownership sort of shattered my mental model where everything had a clear owner, scope and lifetime.
I've been meaning to do a deep dive into Arc and Mutex for a while now. I think you need to hold two big new facts in your head at the same time for them to click: 1) Since Rc and Arc make it easy for anyone to get a reference to their contents at any time, they can only hand out shared references. 2) Mutex lets you turn a shared reference into a mutable one. Both of those concepts are really interesting on their own, and it's worth spending time with each one by itself. It's wonderful that we can use them together, and it feels like Rust has really pushed the art of programming forward here. But _teaching_ them together is almost too much at once.
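A minimal sketch of those two facts working together (names here are illustrative): `Arc` hands every thread a shared, refcounted handle, and `Mutex` turns that shared handle into exclusive access.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each bump a shared counter once.
fn parallel_count(n: usize) -> u64 {
    let total = Arc::new(Mutex::new(0u64));
    let mut handles = Vec::new();
    for _ in 0..n {
        // Fact 1: cloning the Arc just bumps the refcount; every
        // clone is a *shared* handle to the same heap allocation.
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            // Fact 2: Mutex::lock turns that shared reference into
            // exclusive (mutable) access, one thread at a time.
            *total.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *total.lock().unwrap();
    result
}

fn main() {
    assert_eq!(parallel_count(4), 4);
}
```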
Thanks Jack! 34 minutes into this (watching at 1.5x speed :) ) and it's great so far. Found your video from the reddit thread you made and I'm glad I did
As someone currently learning Rust, this was just as interesting, as I'm now more aware of how and why Rust is so different to C / C++ in its core principles. Thanks :)
I started out with Rust as a hobby after I used C and C++ in a university course. Remember trying to port my databending tool from C++ into Rust. Honestly this video is perfect - it's such a good example of why we love Rust, there's many things but the amount of safety here is marvelous. In C++ I had multiple times where obfuscated memory errors made me rip my hair out. In Rust, whenever I was lost, I came out with a deeper knowledge and skill in what I was doing. The problem never was that everything was obfuscated - but that I didn't understand a part of Rust, and once I did that problematic scenario would be easily predictable and solveable. Rust has a habit of catching flawed design decisions because of this. You might not understand why it's screaming at you - but it's because the ingrained rules *know* that this will lead to problems. It knows that you should only be doing stuff like this if you _know what you're doing_ .
This was incredible - you did a great job finding the core differences to present and it really helped me understand why people care about Rust and what some of the trade offs are :)
i've always heard that c++ is hard. now i think i start to understand why. c++ was built with "programmers know what they are doing, let's let them" in mind. maybe in the 80s and 90s it was still possible to safely navigate through all this implicit pitfall madness and havoc, but today - with c++ being backwards compatible - it seems like a ridiculously hopeless task. i think i also understand now why no one programs in the full c++ anymore, only its subsets. and i must say that rust with its novel memory management system and rules has just moved for me from quite intriguing to absolutely brilliant. it feels to me like rust being used to build larger and larger projects and establishing itself as the industry standard for systems, embedded, iot, blockchains is not a matter of if, but when.
I thought about making a comparison of Rust and C++ in a similar format some time ago, but was overwhelmed with other stuff. It turns out you have done it in a pretty concise (w.r.t. the extensive topic) and accessible way, and I really like it. Thanks!
Really wonderful presentation! This was exactly the level of detail I've been wanting to see, both from the C++ and Rust angles. Thank you for producing this, and considering an audience like this. There are lots of us out here!
That's a brilliant talk. The material is well organized and the examples are really insightful. This gave me some feeling for what Rust code looks like and what the main challenges around it are. Many thanks!
Rust can produce basically any error you can run into with C++ - including segfaults. The benefit - Rust does a looot more compiletime-checks. That is the entire reason it exists: take C++ and make the default behaviour "safe". Your code in C++ might compile and run correctly for all normal use-cases, but still have undefined behaviour in some edge-case scenarios. Rust just forces you to either admit that your code might be unsafe in some cases (mark it as unsafe) or code more defensively. For the reference example at 36:00: as you even explain, the C++ and Rust versions are not equivalent here. The C++ can do "more". And the generated assembly of a stand-alone function and the same function used in context can be very different. Depending on the code and how it is used, it can even compile down to nothing. C++ allows you to do more things - including running against the wall head-first at full speed - while Rust requires you to tell it "i WANT to be able to run into the wall".
> The benefit - Rust does a looot more compiletime-checks. More runtime checks too! Particularly array bounds checks, and unwrap() panics. But importantly, I think there's a categorical difference between Rust and C++ here that isn't quite captured by "more". Certainly both languages can have all the same memory corruption bugs. With Rust we say "only if you're using unsafe code", but then some folks rightfully point out "Aren't you always using unsafe code under the covers whenever you use standard library functions?" That's true and a totally fair question, and I think it's interesting to try to clarify how Rust is different despite that. Here's how I try to explain it: If you write only safe Rust, and you manage to trigger memory corruption or some other undefined behavior, that bug is *not your fault*. That _always_ indicates a bug in some underlying library, or the compiler, or the OS, or the hardware, which needs to be fixed. The answer is never "you shouldn't do that". (I mean, it might also be true that you shouldn't do that, but memory corruption isn't the reason why :) For completeness, there are some exceptions to this rule. For example, you can use safe Rust to launch a debugger, attach it to your own process, and corrupt random memory. Or you could play similar shenanigans with /proc/*/mem on Linux. If you actually do those things and cause unintended memory corruption, the answer is indeed "don't do that". But I think everyone can agree that these examples have nothing to do with any particular programming language. If there's some more principled distinction that can be made here, I'd be happy to hear it.
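A tiny sketch of those runtime checks: indexing is bounds-checked and out-of-bounds access is a defined panic rather than undefined behavior, and the non-panicking `get` API returns an `Option` instead.

```rust
fn main() {
    // Silence the default panic message so the expected panic
    // below doesn't clutter stderr.
    std::panic::set_hook(Box::new(|_| {}));

    let v = vec![1, 2, 3];
    // The non-panicking accessor returns an Option:
    assert_eq!(v.get(1), Some(&2));
    assert_eq!(v.get(10), None);
    // Indexing is bounds-checked at runtime: v[10] panics rather
    // than reading out-of-bounds memory. catch_unwind shows it's
    // a well-defined panic, not undefined behavior.
    let result = std::panic::catch_unwind(|| v[10]);
    assert!(result.is_err());
}
```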
Just wow, you are amazing teacher. I was looking for something like this for quite some time. I really would like you to become regular rust programming youtuber!
Awesome! It's indeed very useful for a C++ expert to quickly grasp the main ideas, it makes a lot of sense now. Other rust tutorials that I tried to follow start from discussing syntax and std library and I just give up. (and it was a bit slow on 1.5x ;) )
What I need is live stats about the speed that everyone's watching on, cause I don't think the people who watch at 0.5x are going to comment about it :-D
Wow, this is insanely well put together, great job. One thing I've noticed that's maybe also good mention is mem::take(&mut some_var). With that you can somewhat achieve what the move constructor does in C++. mem::take(...) basically is a convenience wrapper around mem::swap(..., ...) that swaps the variable with a default constructed object of its type. So you move out the content of the reference and leave a default constructed object behind. So that's a nice non-destructive way to move.
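A minimal sketch of that non-destructive move with `mem::take`: the source is left as a valid, default-constructed value, much like a well-behaved C++ move constructor leaves its source empty.

```rust
use std::mem;

fn main() {
    let mut name = String::from("hello");
    // mem::take moves the String out and leaves a default-
    // constructed (empty) String behind, so `name` stays valid.
    let stolen = mem::take(&mut name);
    assert_eq!(stolen, "hello");
    assert_eq!(name, "");
    // Unlike a moved-from Rust variable, `name` is still usable:
    name.push_str("world");
    assert_eq!(name, "world");
}
```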
Best video about C++ vs Rust by far! 10:14, std::span (I guess C++17 too) is this pair. std::string_view is all the const part of a std::string. It works as a const string, but doesn't lose time allocating memory. 13:30, std::string_view will always have this problem, because it is just 2 pointers, at the begin/end of another string, literal or not, allocated or not. So, you are saying to the compiler _"Don't worry, I'll keep an eye on this other string"_ . It's the price for performance. 14:10, on the stack, the content remains in faster memory, while the heap is RAM and slower memory. 31:23, in C++, it's possible to make a class that hides a pointer, whose constructor automatically verifies that the received pointer (not reference) is not null and not out of bounds, and only allows dereferencing its inner pointer after calling an 'unsafe' f(). 36:38, the easiest way to solve this is by copying source to a local variable. 55:51, when optimizing flags are not turned on. 1:06:00, std::vector has this small-size optimization too, of bringing to faster memory (stack) what should be working on the slow heap. I know this because I already coded an app in which std::vector was faster than std::array, which is forced to live on the stack! 1:07:37, does that mean it doesn't accept user-defined constructors? This is often useful to me. 1:09:00, if it's unordered, it's fast: just copy the last element into the 1st place, and vector::resize (vector::size() - 1).
Amazing talk! Thank you so much. I've really learned a lot. I'm a Rust programmer who has never used C++. I now see how much safety the Rust compiler offers and why it sometimes refuses to compile certain things.
Very nice talk. One thing I believe you could have explained better is how Rust views memory. Saying that shared or exclusive references are pointers is fine to some degree, but to really make Rust "click" you should think about memory in terms of ownership and borrowing. Then, all of a sudden, many aspects of Rust memory management (like move semantics, for example) become surprisingly obvious. This is very different from how C/C++ memory works and requires changing how you think about your code, but I believe it is necessary to really understand memory management in Rust and how it provides us with memory-safe code in return.
I'd be curious to get other folks' opinions about this, but my impression is that the concepts of ownership and borrowing are actually pretty similar between Rust and (modern) C++. For example, std::vector and std::unique_ptr are owning types, and T*, T&, and std::string_view are borrowing types. Of course the difference is that these things are guidelines / best practices in C++ but hard-and-fast rules in Rust, so a beginner Rust class needs to cover them on day 1, while a beginner C++ course might not mention them at all. But this is part of why I tend to think that learning Rust is a useful shortcut to learning *good* C++.
Awesome talk, learned a lot! Java/Kotlin developer here, I was always "kinda" interested in C++/Rust and your talk only made me even more interested. Let me ask you about your slides: what tool/website did you use to make them?
I always thought of moves as a semantic device, but I was wrong; those types do need to be passed to separate stack frames, for example. But I assume the allocated memory associated with them stays where it was.
For sure, there's definitely a ton of "semantically this is a copy, but in practice the compiler optimizes that copy away" going on, and there are a lot of low level details that I don't fully grasp myself. Like when I said at 51:42 that returning an int copies it to the caller's stack somewhere, in retrospect that's probably usually wrong. I think in most ABIs in practice, returning an int is actually defined to put it in some CPU register. But on the other hand, I don't think the C standard actually talks about registers very much, and most of that stuff is left up to the platform/implementation. You're also right that most of the time, returning larger values means that the function will implicitly get an extra pointer argument pointing somewhere in the caller's stack, and the return value will actually be created at that pointed-to location and never moved. In C++ this "return value optimization" does show up in the language standard, because copying and moving can have side effects, and because some types aren't copyable or movable at all.
I am a novice c++ programmer, some shallow knowledge but very little real world experience. I'm trying to learn rust coming from the world of garbage collected languages. Your presentation gives a great refresh of some cpp ideas and pitfalls all the while deepening my understanding of these 3 big ideas that rust is bringing to the table to solve them. Thanks a ton for the effort!
At 36:15 assigning to the same variable twice may not be pointless if the variable storage is located in a memory-mapped area. It may be a write to a hardware register, for example, although it should be declared as volatile in that case. Does Rust have a similar concept to volatile, to avoid the removal of the variable being set to 42?
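(For what it's worth, Rust doesn't have a `volatile` type qualifier; the equivalent lives on raw pointers, via `std::ptr::write_volatile` and `std::ptr::read_volatile`, which the optimizer may not elide or merge. A minimal sketch on ordinary memory standing in for a hardware register:)

```rust
use std::ptr;

fn main() {
    let mut reg: u32 = 0; // stand-in for a memory-mapped register
    let p: *mut u32 = &mut reg;
    unsafe {
        // Volatile accesses are never removed or coalesced by the
        // optimizer, so writing 42 twice really performs two writes.
        ptr::write_volatile(p, 42);
        ptr::write_volatile(p, 42);
        assert_eq!(ptr::read_volatile(p), 42);
    }
}
```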
Your examples are really good and this helped me a lot on my Rust journey. May I take your examples and present them to my team and ofc I'll credit you.
I am a little confused about the section where you were calling drop on the file handle. How does calling drop on the file handle result in that file handle being closed?
In both C++ and Rust, fstream and File will close the underlying file handle in their destructors. So if I had allowed the `file` variable to go out of scope naturally, it would've been closed naturally, and drop() is just making that happen earlier. I might've confused things a bit by putting so much emphasis on "the destructor of a moved-from value never runs", and maybe that makes it sound like our handle isn't going to get closed. But the full story is that, because `file` is moved into the drop() function's `_x` argument, it's actually the destructor of `_x` that ends up running and closing the handle. At a high level, moving usually means transferring ownership of a resource from one object to another, and eventually some final recipient of the resource (maybe at the end of a long chain of moves) is going to run a destructor and free it.
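A small sketch of that mechanism with a toy type (the `Handle`/`my_drop` names are made up for illustration): the destructor runs exactly once, inside the function the value was moved into.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Counts destructor runs, standing in for "the file was closed".
static DROPS: AtomicU32 = AtomicU32::new(0);

struct Handle; // stand-in for File / fstream

impl Drop for Handle {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

// Essentially how std::mem::drop is defined: take ownership by
// value, do nothing, and let `_x` go out of scope.
fn my_drop<T>(_x: T) {}

fn main() {
    let file = Handle;
    my_drop(file); // `file` is moved in; `_x`'s destructor runs here
    assert_eq!(DROPS.load(Ordering::SeqCst), 1);
    // `file` is moved-from, so its destructor never runs again:
    // exactly one "close", no double free.
}
```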
`file` is moved into an empty function, and the function body becomes its owner; when it finishes, `file` goes out of scope and is destroyed. You can simulate a similar thing in C++ using a function that accepts a move-only type (like std::unique_ptr) by value; it's just way more natural in Rust given its defaults (move by default, can't use moved-from variables, etc.)
Any way to avoid paying atomic refcount runtime overhead in the Arc Mutex threaded example? Like is there an equivalent of `jthread`, where Rust can see that the borrow doesn't outlive the outer string object?
Yes there is, they're called "scoped threads": doc.rust-lang.org/std/thread/fn.scope.html. They weren't stable in the Rust standard library until about a year after this video was published, though a similar API has always been available in the Crossbeam crate. There's actually a really interesting story here, where the standard library had scoped threads prior to Rust 1.0, but then they discovered that the API was unsound because safe Rust is allowed to leak objects without running their destructors. The API we have today, where you have to pass in a closure that takes the scope object as an argument, is the workaround for that issue.
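A minimal sketch of scoped threads (requires Rust 1.63+): the threads borrow a local `String` directly, with no `Arc` and no atomic refcounting, because the scope guarantees they're joined before it returns.

```rust
use std::thread;

fn main() {
    let message = String::from("hello from the stack");
    let mut lengths = Vec::new();
    // The spawned threads may borrow `message` because thread::scope
    // joins every thread before returning, so the borrow provably
    // can't outlive the String.
    thread::scope(|s| {
        let a = s.spawn(|| message.len());
        let b = s.spawn(|| message.chars().count());
        lengths.push(a.join().unwrap());
        lengths.push(b.join().unwrap());
    });
    // `message` is still usable after the scope ends.
    assert_eq!(lengths, vec![message.len(), message.len()]);
}
```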
@@oconnor663 That's great. Knowing the C++ community, they (we) often don't tolerate runtime overhead even if we get safer code in exchange. If Rust wants to be a suitable alternative for most use cases, then zero-overhead solutions are needed, preferably in safe code. I'm sure the language developers know this, too.
Great talk! Learned some C++ here. Small correction: Rust does not guarantee memory safety 100%; you can deliberately introduce memory unsafety without writing `unsafe`. But you really need to try.
@@oconnor663 I recall hearing in a conference that researchers managed to create data races by messing with `Rc` so it is not 100% memory safe, but unfortunately I cannot find the source.
Hmm, Rc isn't Send or Sync, so it should be impossible to give the same Rc to multiple threads to try to provoke a data race. Maybe what you're remembering was about Arc? There are some subtleties around what atomic orderings get used to manipulate the refcount, and it's possible that a bug there could lead to unsoundness. But I'm pretty sure such a bug would be easy to fix, if it was found, and not some sort of fundamental design flaw in Arc or anything like that. To your point, though, bugs in unsafe code do happen, even in the standard library.
35:17 @oconnor663 Here you are presenting a function implemented in _Rust_ that looks like the one implemented in _C++_ but that *is not equivalent (not even of the same type)* to the one implemented in _C++_ . So, *this does not demonstrate that **_rustc_** is more clever at compiling than **_Clang_*** , but rather that it urges us to think more about what we ask to compile and tends to prevent us from asking for just anything. By the way, *it would have been very interesting to show how to implement in **_Rust_** a function that would default to 0x0000002a when the source is the same as the destination*, to see a fair comparison with the _C++_ example. What if this was actually what we wanted? Also, to be fair again, what about implementing in _C++_ a function that would actually be equivalent to the _Rust_ function, performing the checks that _rustc_ performs for us?
In Go the garbage collector makes the whole issue of dangling pointers go away because the compiler "escapes" the stack allocated memory to the heap. The function that returns a pointer to a locally allocated memory variable just works since the memory is resident on the heap instead of the stack. Instead of crashing in C or C++, or failing to compile in Rust; the Go program just works. I would happily trade a slight loss in performance for this automatic memory management.
This is absolutely true, and it's one of the biggest differences in the learning curves between the two languages. Go's escape analysis works for you even before you know that it exists. When you do learn about it, the lesson is something like "Hey did you ever stop to think about why this works?" In contrast, Rust's ownership and lifetime rules are a barrier to beginners getting simple programs working. You have to learn about them explicitly, along with some non-obvious strategies for satisfying them. It's a serious cost, and it's why I tell people who are asking "Should I learn Rust or Go first?" to just start with Go, because it's so much quicker getting started. That said, once you've put in the time to absorb Rust's memory discipline, and you've gotten past the "fighting the borrow checker" stage, there are some benefits. Sometimes you care about the performance cost of garbage collection, or you need to write embedded/kernel code where garbage collection doesn't work. But more broadly, as programs get larger and more complicated, unrestricted aliasing tends to lead to tricky bugs. For example in Go, whenever you append to a slice, you need to make sure that no one's holding a stale reference to the old slice. The more code you have touching that slice, the more opportunities there are to create stale references without meaning to. Similarly, whenever you have data shared across threads/goroutines, you need to make sure that no one takes any accidental references that might get used later outside of synchronization. Rust's memory discipline tends to make these "spooky action at a distance" bugs less common. Maybe someday we'll see a language somewhere in between, though, with Rust's approach to mutability but with Go's approach to heap allocation.
I'm using ASan and UBSan, which you can turn on with -fsanitize=address and -fsanitize=undefined in GCC or Clang. There's also one example with -fsanitize=thread.
Around 35:00, we have something called restrict to make the behaviour of both implementations equivalent. Note, though, that in C++ this is a compiler-specific extension that would be written as __restrict__, as it's borrowed from C99 and isn't in the C++ spec.
@@oconnor663 Thank you for the recommendation. To clarify my comment, I don't know if the generated assembly code will be the same, but it does indeed at least indicate to the function caller that those addresses should not be aliased.
Pretty good, but I found a problematic error: at 56:24, [Clone].clone() does NOT mean a deep clone. Some primitive-ish values like String do make a deep clone on the call, but that is at their discretion, and some, e.g. std::rc::Rc ("reference counted", pretty much C++'s shared_ptr), explicitly do NOT deep clone. Only Copy types are guaranteed to do a full bitwise copy; Clone may very well be by reference.
You're right, and indeed the final section about Arc relies on the behavior you're talking about. Now that you mention it, there are many other cases besides Rc/Arc where .clone() isn't a deep copy, like when you call .clone() on a &str (which is Copy!) and just get back another reference. This also brings up some interesting but tricky questions about how exactly the `.` operator behaves. I'll add an erratum.
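A short sketch of the contrast: `.clone()` on an `Rc` only bumps the refcount and shares the allocation, while `.clone()` on a `String` really is a deep copy into a new allocation.

```rust
use std::rc::Rc;

fn main() {
    let original: Rc<String> = Rc::new(String::from("shared"));
    // Cloning an Rc is shallow: both handles point at the same
    // heap allocation, and only the refcount changes.
    let alias = original.clone();
    assert!(Rc::ptr_eq(&original, &alias));
    assert_eq!(Rc::strong_count(&original), 2);

    // Cloning a String is deep: the bytes live at a new address.
    let s = String::from("deep");
    let copy = s.clone();
    assert_ne!(s.as_ptr(), copy.as_ptr());
    assert_eq!(s, copy);
}
```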
@@oconnor663 Hmm, that is correct, I didn't think of that. If you read the official documentation, Copy means bitwise copy. Because &str does not own its data, that is legal as long as it follows lifetime rules. Regarding the behavior of `.`, I'm not sure I follow. It can call methods of "referenced" objects as long as they officially declare they are a reference, by implementing Deref. The official documentation doc.rust-lang.org/std/ops/trait.Deref.html, which I btw find one of the best aspects of Rust, goes into more detail. Not sure if that was what you meant.
Here's what I'm talking about with the `.` operator. Honestly I'm not 100% on the rules here myself. play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=1c56fe220a29ccfc37e8a2a62ea0b1ef
@@oconnor663 Hmm, also not entirely sure, but I think I understand it now. First of all, I think you might be misunderstanding what lines 4 and 16 do; the '.into()' call takes precedence over the references. You can see that by trying this: play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=1cbcaced85e7d38f6587e1484402e196 What actually happens is, I think, this: play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=9758339b4d57afd12bef0eaaccdd6d65 This is another reason why I find Rust's inference system a bit too powerful. People don't really understand what's going on but get away with it, until the inference can't follow anymore or doesn't have enough information, and at that point people who just got by are hopeless, even if the concept itself wouldn't be problematic if they were led to it by the compiler being a little less intelligent. Now, regarding the error, there is indeed such a Clone implementation as you reference: doc.rust-lang.org/std/clone/trait.Clone.html#impl-Clone-122 This is, I'm pretty sure, the one that is getting used. This results in a &String (as type inference will tell you), and the assignment triggers the error you're seeing (if you look up the error, it shows only generic wrong assignments). You can (I think) understand it if you read the reference doc.rust-lang.org/reference/expressions/method-call-expr.html like a lawyer. The first call finds no method with a String receiver, but for the &String receiver that gets looked up immediately after, there is a method, so that gets called. For &String it immediately finds the right method. For &&String, though, it first finds the wildcard implementation of Clone for &T (where T=&String). The compiler apparently only complains about ambiguous method names if they happen on the same step, which isn't the case here.
If you dereference beforehand, it works of course: play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=7e5bb5223ab475f76fbe184b23f7382a Interestingly, this also works: play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=5d9d68a1c70be8dc4f598be541822ead It appears to be another case of overachieving inference, combined with different rules for this kind of call compared to the method-call syntax. If you don't provide the return type indirectly by typing the return, it'll result in a &String instead of String. Actually, here it seems to be able to take an arbitrary number of references: play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=8cb1cf80dee255e1a627550912b87778
@@oconnor663 Hey, sorry to ask, but I'm not always sure how to interpret such interactions. Was what I said at all helpful, or was it just too much text, or was it... "insulting"? I'm much better at this kind of communication with immediate feedback, and with this kind of stuff I often don't get feedback at all, and I'd like to know what I can improve if I'm not a lost cause in that regard.
In move semantics for non-Copy types, when you say that Rust behaves similarly to C in that it copies the data over to the new owner, the difference being that Rust makes the copied value disappear from the previous scope -- does that mean the contents are copied bitwise, implying a heavier operation than passing a reference to the type?
Yes, in some circumstances a move or copy is going to be more expensive than passing a reference. It's similar to the situation in C++, where passing a large std::array or similar by value can be more expensive than passing it by reference. That said, it's common (in all three languages) for the compiler to optimize these copies away, and often we don't need to worry about it. On the other hand, there are some tricky situations that come up when you're technically moving a giant object through the stack, and you end up with a stack overflow error that only pops up in debug mode, where optimizations are off by default. This particular thing is less of a problem in C and C++, which have more support for in-place construction.
That's certainly true of all the standard types I know of, and it would be surprising for any type to behave differently. But as Herb Sutter puts it: "Move is just another non-const function. Any non-const function can document when and how it changes the object’s state, including to specify a known new state as a postcondition if it wants." herbsutter.com/2020/02/17/move-simply Another interesting caveat to consider is that it's possible for a type to be movable yet not default constructible. But I can't think of any examples.
@@oconnor663 That would happen for a class that maintains an "inline" invariant, but does not have a deleted move constructor. Maybe such classes should have a deleted move constructor, because move will most likely be a copy: github.com/milasudril/fruit/blob/main/lib/point.hpp
Move leaves the source in an unspecified but valid state - aka it can basically be any state: it might be default-constructed, it might be a special "empty" or error state.
Hi Jack, I have the following observation about the first topic, dangling references. In all the examples, it looks like memory is statically allocated and compilers can see those things. If not the compiler, then definitely some static analyser can see those lifetime errors. It would be good if you had shown some examples with dynamic memory allocation and passing those pointers around.
It might be interesting to clarify that both Vec and String (and vector and string from C++) make heap allocations at runtime. Any reference to the contents of a Vec is actually pointing to the heap. Is that part of what you were looking for? If not, maybe you could give me some C++ examples of what you mean?
@@oconnor663 Yes, I agree that the string uses the heap for the underlying characters. But the string object as such is still lying on the stack, due to which a static analyser or good compiler can see its lifetime. On the other hand, if we do auto str = new string(....) and then pass str around, then I would not expect the compiler or a static analyser to track the lifetime of str.
Basically, if the compiler cannot see that some object x is going out of scope (dying), and it still emits warnings and errors (because you have passed that pointer to multiple locations), then that would be super helpful.
Hmm, maybe you could show me some C++ example code, and I could help you translate that into Rust? As you can imagine, Rust doesn't really encourage anything that looks like C++'s new operator. The more common idiom for managing arbitrary types through a heap pointer is Rust's Box, which is more like C++'s unique_ptr. If you really wanted to simulate the new operator, you'd probably use Box::leak(), which converts a Box into a &'static mut T that will never be freed. You won't generally be able to trigger lifetime errors with that reference (because it's static, and thus valid almost anywhere), but all the usual aliasing rules still apply to it (you can take aliasing shared references to the pointee, but never aliasing mutable references). All of this is pretty unusual, but it is actually safe code. If you *do* want to free the reference, by analogy to C++'s delete operator, you need to convert it back to a Box and allow that Box to drop, but that conversion is unsafe for several reasons. All of this is pretty esoteric, advanced Rust, but it can be an interesting to look at the docs for these APIs. Here's a playground example: play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=751a9c1756d4d50806db8fdcaf424265
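A condensed sketch of that new/delete analogy (same idea as the playground link above, reconstructed here from the description): `Box::leak` plays the role of `new` with an intentionally forgotten owner, and `Box::from_raw` plays the role of `delete`.

```rust
fn main() {
    // Box::new is the closest analogue of C++'s `new`; Box::leak
    // then deliberately forgets the owner and hands back a
    // reference that is valid for the rest of the program.
    let leaked: &'static mut String = Box::leak(Box::new(String::from("hi")));
    leaked.push_str(" there");
    assert_eq!(*leaked, "hi there");

    // The analogue of `delete`: reconstruct the Box and let it
    // drop. This is unsafe, because nothing stops us from using
    // `leaked` afterwards (we don't).
    unsafe {
        drop(Box::from_raw(leaked as *mut String));
    }
}
```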
@@oconnor663 `The more common idiom for managing arbitrary types through a heap pointer is Rust's Box, which is more like C++'s unique_ptr` -- By looking at your latest comment, I think, I need to learn Rust a little bit. :) Thanks, it's always good to know other programming style.
Can anyone explain the reference invalidation in push_int_twice for me? Don't understand how pushing that reference twice (in the case he describes) causes trouble.
That example depends on how much initial capacity gets allocated for the vector. With GCC on Linux, the initial capacity is one int, so two pushes is enough to trigger the bug. MacOS/Clang might be allocating more, but if you change push_int_twice to push_int_ten_times I'm pretty sure that'll work.
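A rough Rust illustration of the underlying mechanics (my own sketch, not code from the talk): a Vec that outgrows its capacity reallocates, and any outstanding pointer into the old buffer would dangle.

```rust
fn main() {
    // A Vec with capacity 1: the second push exceeds capacity and forces
    // a reallocation, which may move the elements to a new heap buffer.
    let mut v: Vec<i32> = Vec::with_capacity(1);
    v.push(1);
    let before = v.as_ptr();
    v.push(2); // reallocation happens here
    let after = v.as_ptr();
    assert!(v.capacity() >= 2);
    // Any C++-style pointer or reference into the old buffer would now
    // dangle if the allocator moved the data (it usually does, though it
    // may also grow in place).
    println!("buffer moved: {}", before != after);
}
```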
Thank you for creating this talk! This is exactly what the industry needs -- to bridge the gap between how you did things in C/C++ and how you could do them in Rust.
I'm a PHP programmer and using this video to learn JavaScript
There are only two programming languages, Lisp and not Lisp.
@@oconnor663 (and Lisp (not Lisp))
It is not the intended purpose ...
I'll drink to this comment. Fk it
@@konstantinrebrov675 woosh
Great talk :) Note that as of Rust 1.63.0, Mutex does not allocate anymore on Linux and can be even constructed at compile-time (Mutex::new is now a const function).
I'm a c++ programmer and using this video to learn more about c++.
Finally a rust video that doesn't just gloss over the complexities of lifetime like they don't exist. I've been searching for this video since 2018
I think it's a really difficult thing to appropriately convey. Most videos I've seen when I started out basically went "when you start out, you'll fight with the borrow checker a lot" and now that I've used Rust for a while I think that's actually the best way to describe it. It was alien and sometimes really frustrating when I started out but eventually it just clicked, but it's really hard to pinpoint what the "it" that clicked actually is. These days I barely encounter the borrow checker anymore, I don't need to declare lifetimes often and if I do I don't need the compiler to remind me because it's obvious - and I've even had a couple of occasions where I missed them in C++ because I knew that a function would be potentially unsound if something changed on the call site.
It's kind of a big deal when you start out because it's fundamentally different from what we as programmers were used to. But once you actually understand lifetimes - and I know of no alternative to experience for getting there - you almost don't wanna live without them anymore.
@@swapode The thing is, to code properly in C++ you're supposed to be fully conscious of lifetimes at all times. And when you fail to do so, well, that's when the myriad of security exploits crop up.
Everywhere else I saw, they explain the what of borrow checking and lifetime. But this video explains the Why. Beautiful
I’m a C++ guy who’s been wanting to learn Rust for a while. This was a fantastic introduction to its core concepts. Thank you!
Awesome talk! It was just perfect for where I am at with Rust right now -- I sort of know its borrow/move semantics, because I watched some Rust for C++ devs videos, but I have almost no experience. Yet, I was able to understand everything and I feel like it gave me some very valuable insights into Rust. Thank you!
Phenomenal presentation
Especially how you introduce how lifetime works
_Wherever I go, I see your face_
New voxel engine video when?
You really opened my eyes to Rust, wow just a few core guarantees and suddenly a lot of issues disappear.
This must be the best talk ever on explaining the core ideas of a programming language - actually two. As an old c++ programmer, I currently enjoy C because of its bitwise move semantics, which make so much more sense than the c++ default cloning and confusing implicit conversions.
WOW, WOW, WOW a MILLION times! One hell of a presentation dude, one hell of a presentation! I learned c++ a while back but it was more like I learned c and then some basic class and new/delete topics and that was it, nothing this advanced in the language, and then went on with some assembly and basic hacking concepts. Recently though, I got The Rust Book printed and I currently am only 8 chapters in. Upon reading the ownership chapter, I was awestruck by the ingenuity behind the concept! We really needed a new systems language. Your video, now, managed to make me feel that awe again, and it didn't stop giving. Every single slide gave me that bit of satisfaction and it's all because of you, those quality slides, and your personality! Good fucking job brother!
🤩
I already know some Rust but I'm using this talk to better understand C++
9:40 Those rust error messages are beautifully crafted.
Great presentation for both the C++ and Rust crowd! The safety problems of C++ make the need for Rust very clear.
Excellent presentation 👍 I like that it dives straight into _the_ selling point of Rust without wasting time on less relevant topics like syntax.
Awesome talk, I feel like I understand both languages a little bit better now (although it seems like my distaste for C++ only grows every time I learn something new about it) . I really like the fast tempo, it doesn't give the mind too much time to wander off and lose focus. I was able to keep up almost all the way through, just the examples with Arc at the end flew straight over my head. I think the abrupt introduction of shared ownership sort of shattered my mental model where everything had a clear owner, scope and lifetime.
I've been meaning to do a deep dive into Arc and Mutex for a while now. I think you need to hold two big new facts in your head at the same time for them to click: 1) Since Rc and Arc make it easy for anyone to get a reference to their contents at any time, they can only hand out shared references. 2) Mutex lets you turn a shared reference into a mutable one. Both of those concepts are really interesting on their own, and it's worth spending time with each one by itself. It's wonderful that we can use them together, and it feels like Rust has really pushed the art of programming forward here. But _teaching_ them together is almost too much at once.
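A minimal sketch of those two ideas working together (my own example, not from the talk):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Fact 1: Arc hands out shared ownership, so every thread can hold a
    // reference. Fact 2: Mutex turns that shared access into exclusive
    // access at runtime, so mutation is still allowed.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // lock() yields a guard that derefs to &mut i32, even though
            // we only ever had shared access to the Mutex itself.
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```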
nice talk, straight to the point, realistic(-ish sometimes) problems and solutions
Thanks Jack! 34 minutes into this (watching at 1.5x speed :) ) and it's great so far. Found your video from the reddit thread you made and I'm glad I did
As someone currently learning Rust, this was just as interesting, as I'm now more aware of how and why Rust is so different to C / C++ in its core principles. Thanks :)
Thank you for making this video. I finally understand the talks about how the borrow checker doesn't let you shoot yourself in the knee
I started out with Rust as a hobby after I used C and C++ in a university course. Remember trying to port my databending tool from C++ into Rust.
Honestly this video is perfect - it's such a good example of why we love Rust, there's many things but the amount of safety here is marvelous. In C++ I had multiple times where obfuscated memory errors made me rip my hair out. In Rust, whenever I was lost, I came out with a deeper knowledge and skill in what I was doing.
The problem never was that everything was obfuscated - but that I didn't understand a part of Rust, and once I did, that problematic scenario would be easily predictable and solvable.
Rust has a habit of catching flawed design decisions because of this. You might not understand why it's screaming at you - but it's because the ingrained rules *know* that this will lead to problems. It knows that you should only be doing stuff like this if you _know what you're doing_ .
Wow, what a phenomenal, detailed, and comprehensive lecture about Rust for those who know some C++!
This video is basically a must see, I'm amazed how clear your explanations are
This was incredible - you did a great job finding the core differences to present and it really helped me understand why people care about Rust and what some of the trade offs are :)
This was immeasurably helpful, not only for rust but for c++ as well!
i've always heard that c++ is hard. now i think i start to understand why.
c++ was built with "programmers know what they are doing, let's let them" in mind.
maybe in the 80s and 90s it was still possible to safely navigate through all this implicit pitfall madness and havoc, but today - with c++ being backwards compatible - it seems like a ridiculously hopeless task.
i think i also understand now why no one programs in the full c++ anymore, only its subsets.
and i must say that rust with its novel memory management system and rules has just moved for me from quite intriguing to absolutely brilliant.
it feels to me like rust being used to build larger and larger projects and establishing itself as the industry standard for systems, embedded, iot, and blockchains is not a matter of if, but when.
I thought about making a comparison of Rust and C++ in a similar format some time ago, but was overwhelmed with other stuff. It turns out you have done it in a pretty concise (with respect to the extensive topic) and accessible way, and I really like it. Thanks!
Really wonderful presentation! This was exactly the level of detail I've been wanting to see, both from the C++ and Rust angles. Thank you for producing this, and considering an audience like this. There are lots of us out here!
That's a brilliant talk. The material is well-organized and examples are really insightful. This gave me some feeling of how Rust code looks like and what are the main challenges about it. Many thanks!
My man went nuclear with this video. Great job mate!
Rust can produce basically any error you can run into with C++ - including segfaults.
The benefit - Rust does a looot more compile-time checks. That is the entire reason it exists: take C++ and make the default behaviour "safe".
Your code in C++ might compile and run correctly for all normal use-cases, but still have undefined behaviour in some edgecase scenarios.
Rust just forces you to either admit that your code might be unsafe in some cases (mark it as unsafe) or code more defensively.
For the reference-example at 36:00
As you even explain - the C++ and Rust versions are not equivalent here. The C++ one can do "more". And the generated assembly of a stand-alone function and the same function used in code can be very different. Depending on the code and how it is used it can even compile down to nothing.
C++ allows you to do more things - including running against the wall head-first at full speed, while Rust requires you to tell it "i WANT to be able to run into the wall".
> The benefit - Rust does a looot more compiletime-checks.
More runtime checks too! Particularly array bounds checks, and unwrap() panics.
But importantly, I think there's a categorical difference between Rust and C++ here that isn't quite captured by "more". Certainly both languages can have all the same memory corruption bugs. With Rust we say "only if you're using unsafe code", but then some folks rightfully point out "Aren't you always using unsafe code under the covers whenever you use standard library functions?" That's true and a totally fair question, and I think it's interesting to try to clarify how Rust is different despite that. Here's how I try to explain it:
If you write only safe Rust, and you manage to trigger memory corruption or some other undefined behavior, that bug is *not your fault*. That _always_ indicates a bug in some underlying library, or the compiler, or the OS, or the hardware, which needs to be fixed. The answer is never "you shouldn't do that". (I mean, it might also be true that you shouldn't do that, but memory corruption isn't the reason why :)
For completeness, there are some exceptions to this rule. For example, you can use safe Rust to launch a debugger, attach it to your own process, and corrupt random memory. Or you could play similar shenanigans with /proc/*/mem on Linux. If you actually do those things and cause unintended memory corruption, the answer is indeed "don't do that". But I think everyone can agree that these examples have nothing to do with any particular programming language. If there's some more principled distinction that can be made here, I'd be happy to hear it.
I'm learning Rust with no idea how to do C++, and using this video to learn both Rust and C++.
Just wow, you are an amazing teacher. I was looking for something like this for quite some time. I would really like you to become a regular Rust programming youtuber!
Found this gem while browsing my reddit feed. Amazing work in putting together these core Rust concepts in such a succinct way. Cheers
This is one of the best introductions to Rust that I have seen. Very well done.
This was very interesting as someone with basic Rust knowledge but no C++ knowledge.
Awesome! It's indeed very useful for a C++ expert to quickly grasp the main ideas, it makes a lot of sense now. Other rust tutorials that I tried to follow start from discussing syntax and std library and I just give up.
(and it was a bit slow on 1.5x ;) )
What I need is live stats about the speed that everyone's watching on, cause I don't think the people who watch at 0.5x are going to comment about it :-D
@@oconnor663 generally I watch almost everything at 2x, so technically this talk was relatively fast indeed
🏆
This is perfect for someone with a few years of C++ work experience!
Thanks for the great talk, I believe if you put a `0:00:00 introduction` in the description YouTube will recognize the timestamps
Wow! You're right, and this is awesome.
Wow, this is insanely well put together, great job. One thing I've noticed that's maybe also good mention is mem::take(&mut some_var). With that you can somewhat achieve what the move constructor does in C++. mem::take(...) basically is a convenience wrapper around mem::swap(..., ...) that swaps the variable with a default constructed object of its type. So you move out the content of the reference and leave a default constructed object behind. So that's a nice non-destructive way to move.
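For example, something like this (a quick sketch of the idea):

```rust
use std::mem;

fn main() {
    let mut s = String::from("hello");
    // mem::take moves the String out of `s` and leaves String::default()
    // (an empty string) behind, so `s` remains valid: a non-destructive
    // move, loosely analogous to a C++ move leaving a hollow object.
    let taken = mem::take(&mut s);
    assert_eq!(taken, "hello");
    assert_eq!(s, ""); // still usable, just empty
}
```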
Awesome job, super helpful for a C++ person. I will not soon be able to unhear you rubbing your hands together excitedly.
Best video about C++ vs Rust by far!
10:14, std::span (C++20, actually) is this pair. std::string_view is all the const part of a std::string. It works as a const string, but doesn't lose time allocating memory. 13:30, std::string_view will always have this problem, because it holds 2 pointers, at the begin/end of another string, literal or not, allocated or not. So, you are saying to the compiler _"Don't worry, I'll keep an eye on this other string"_ . It's the price for performance.
14:10, on the stack, the content tends to stay in faster memory (hot cache lines), while heap data more often sits in slower main memory.
31:23, in C++, it's possible to make a class that hides a pointer, whose constructor automatically verifies that the received pointer (not reference) is not null and not out of bounds, and which only allows dereferencing its inner pointer after calling an 'unsafe' function.
36:38, the easiest way to solve this is by copying source to a local variable.
55:51, when optimizing flags are not turned on.
1:06:00, std::vector has this small-size optimization too, of bringing to faster memories (stack) what should be working on slow heap. I know this because I already coded app in which std::vector was faster than std::array, which is forced to live on the stack!
1:07:37, does that mean it doesn't accept user-defined constructors? This is often useful to me.
1:09:00, if it's unordered, it's fast: just copy the last 1 into the 1st place, and vector::resize (vector::size() - 1).
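For what it's worth, Rust's Vec has this exact O(1) trick built in as swap_remove; a quick sketch:

```rust
fn main() {
    // Unordered removal: move the last element into the hole, then
    // shrink the length by one, so nothing gets shifted.
    let mut v = vec![10, 20, 30, 40];
    let removed = v.swap_remove(1); // remove index 1
    assert_eq!(removed, 20);
    assert_eq!(v, [10, 40, 30]); // 40 took 20's place; order not preserved
}
```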
nice content Jack O'Connor. I smashed the thumbs up on your video. Always keep up the amazing work.
Plz keep creating more videos, i and a lot of people will love to support you, thx for this awesome video!
Great. This is the Rust intro/overview I’ve been waiting for.
one of the best explanations of lifetimes i have seen for rust. thank you.
This is a really good video. I watched a lot of Rust vids and this is my favorite so far.
The perfect Rust video doesn't exi-
Incredible video, great comparison side to side with extremely good to follow explanations!
This is an amazing amazing talk about this topic. Thank you for taking the time to put this together.
More talks from you please. Great teacher.
Having the drop function be just an empty function is badass. That is so cool that I no longer hate Rust.
But I still like C++ more❤
Very good talk, thank you for your work. Hope there might be coming more
This was awesome, I'd love to see more videos like this from you about Rust.
Hard to overstate how great this is. And how interesting.
Amazing talk! Thank you so much. I've really learned a lot. I'm a Rust programmer who has never used C++. I now see how much safety Rust's compiler offers and why it sometimes refuses to compile certain things.
Rust is very, very interesting, thanks for these amazing explanations
Very nice talk. One thing I believe you could have explained better is how Rust views memory. Saying that shared or exclusive references are pointers is fine to some degree, but to really make Rust "click" you should think about memory in terms of ownership and borrowing. Then, all of the sudden many aspects of Rust memory management (like move semantics for ex.) become surprisingly obvious. This is very different from how C/C++ memory works and requires changing of how you think about your code, but I believe it is necessary to really understand memory management in Rust and how it provides us with a memory safe code in return.
I'd be curious to get other folks' opinions about this, but my impression is that the concepts of ownership and borrowing are actually pretty similar between Rust and (modern) C++. For example, std::vector and std::unique_ptr are owning types, and T*, T&, and std::string_view are borrowing types. Of course the difference is that these things are guidelines / best practices in C++ but hard-and-fast rules in Rust, so a beginner Rust class needs to cover them on day 1, while a beginner C++ course might not mention them at all. But this is part of why I tend to think that learning Rust is a useful shortcut to learning *good* C++.
The epitome of Rust lectures
Excellent video, thank you!
Please consider making more.
Awesome talk, learned a lot! Java/Kotlin developer here, I was always "kinda" interested in C++/Rust and your talk only made me even more interested.
Let me ask you about your slides: what tool/website did you use to make them?
Reveal.js. The source is here: github.com/oconnor663/cpp_rust_talk
Excellent example with lifetimes
Fantastic overview. One thing to note is that the borrow checker is not disabled in unsafe code like mentioned at 31:06.
Thank you oh my god. This was lit. Do more please!
I’m a professional HTML/CSS programmer watching this to help me improve.
I always thought of moves as a semantic device, but I was wrong - those types do need to be passed to separate stack frames, for example. But I assume the allocated memory associated with them stays where it was.
For sure, there's definitely a ton of "semantically this is a copy, but in practice the compiler optimizes that copy away" going on, and there are a lot of low level details that I don't fully grasp myself. Like when I said at 51:42 that returning an int copies it to the caller's stack somewhere, in retrospect that's probably usually wrong. I think in most ABIs in practice, returning an int is actually defined to put it in some CPU register. But on the other hand, I don't think the C standard actually talks about registers very much, and most of that stuff is left up to the platform/implementation. You're also right that most of the time, returning larger values means that the function will implicitly get an extra pointer argument pointing somewhere in the caller's stack, and the return value will actually be created at that pointed-to location and never moved. In C++ this "return value optimization" does show up in the language standard, because copying and moving can have side effects, and because some types aren't copyable or movable at all.
This is a great presentation, thanks man
I am a novice c++ programmer, some shallow knowledge but very little real world experience. I'm trying to learn rust coming from the world of garbage collected languages.
Your presentation gives a great refresh of some cpp ideas and pitfalls all the while deepening my understanding of these 3 big ideas that rust is bringing to the table to solve them.
Thanks a ton for the effort!
This is great, very informative.
This presentation was awesome! Thank you.
Absolutely amazing video, thanks!
At 36:15 assigning to the same variable twice may not be pointless if the variable storage is located in a memory-mapped area. It may be a write to a hardware register, for example, although it should be declared as volatile in that case. Does Rust have a similar concept to volatile, to avoid the removal of the variable being set to 42?
Yes Rust supports volatile writes: doc.rust-lang.org/std/ptr/fn.write_volatile.html.
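A tiny sketch of how that API is used (ordinary memory here rather than a real hardware register, just to show the calls):

```rust
use std::ptr;

fn main() {
    let mut x: u32 = 0;
    // write_volatile tells the compiler this store has observable side
    // effects and must not be elided or merged away, much like a C/C++
    // `volatile` store to a memory-mapped register.
    unsafe {
        ptr::write_volatile(&mut x, 42);
        assert_eq!(ptr::read_volatile(&x), 42);
    }
}
```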
Great job! More content please related to Rust. Perhaps building a small app which makes use of important Rust concepts?
Now I feel scared about all the C++ code I've written.
Yeah this is a reaction a lot people have when they learn Rust: th-cam.com/video/nY07zWzhyn4/w-d-xo.html
Great video! Commenting for the youtube algorithm :)
Your examples are really good and this helped me a lot on my Rust journey. May I take your examples and present them to my team? And of course I'll credit you.
Absolutely!
I am a little confused about the section where you were calling drop on the file handle. How does calling drop on the file handle result in that file handle being closed?
In both C++ and Rust, fstream and File will close the underlying file handle in their destructors. So if I had allowed the `file` variable to go out of scope naturally, it would've been closed naturally, and drop() is just making that happen earlier. I might've confused things a bit by putting so much emphasis on "the destructor of a moved-from value never runs", and maybe that makes it sound like our handle isn't going to get closed. But the full story is that, because `file` is moved into the drop() function's `_x` argument, it's actually the destructor of `_x` that ends up running and closing the handle. At a high level, moving usually means transferring ownership of a resource from one object to another, and eventually some final recipient of the resource (maybe at the end of a long chain of moves) is going to run a destructor and free it.
file is moved to an empty function and the function body becomes its owner; when it finishes, file goes out of scope and is destroyed
you can simulate a similar thing in C++ using a function that accepts a move-only type (like std::unique_ptr) by value; it's just way more natural in Rust given its defaults (move by default, can't use moved-from variables, etc.)
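A sketch of that mechanism with a custom type, so the destructor's timing is visible (my own example; `Handle` and `my_drop` are made-up names):

```rust
struct Handle(&'static str);

impl Drop for Handle {
    fn drop(&mut self) {
        println!("closing {}", self.0);
    }
}

// The standard library's drop() is essentially this: take ownership by
// value, do nothing, and let `_x`'s destructor run when it goes out of scope.
fn my_drop<T>(_x: T) {}

fn main() {
    let h = Handle("data.txt");
    my_drop(h); // "closing data.txt" prints here, not at the end of main
    println!("after my_drop");
}
```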
Any way to avoid paying atomic refcount runtime overhead in the Arc Mutex threaded example? Like is there an equivalent of `jthread`, where Rust can see that the borrow doesn't outlive the outer string object?
Yes there is, they're called "scoped threads": doc.rust-lang.org/std/thread/fn.scope.html. They weren't stable in the Rust standard library until about a year after this video was published, though a similar API has always been available in the Crossbeam crate. There's actually a really interesting story here, where the standard library had scoped threads prior to Rust 1.0, but then they discovered that the API was unsound because safe Rust is allowed to leak objects without running their destructors. The API we have today, where you have to pass in a closure that takes the scope object as an argument, is the workaround for that issue.
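A minimal sketch of the stable API (my own example):

```rust
use std::thread;

fn main() {
    let message = String::from("hello from the parent");
    // thread::scope joins every spawned thread before returning, so the
    // threads may borrow local data directly: no Arc, no refcounting.
    thread::scope(|s| {
        s.spawn(|| {
            assert_eq!(message.len(), 21);
        });
    });
    println!("{message}"); // still usable: it was only borrowed, never moved
}
```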
@@oconnor663 That's great. Knowing the C++ community, they (we) often don't tolerate runtime overhead even if we get safer code in exchange. If Rust wants to be a suitable alternative for most use cases, then zero-overhead solutions are needed, preferably in safe code. I'm sure the language developers know this, too.
Great talk!
Learned some C++ here..
Small correction: Rust does not guarantee memory safety 100%, you can deliberately introduce memory unsafety without writing `unsafe`. But, you really need to try.
Are you referring to "soundness holes" in Rust itself, or to OS shenanigans like `/proc/*/mem`? Or maybe something else?
@@oconnor663 I recall hearing in a conference that researchers managed to create data races by messing with `Rc` so it is not 100% memory safe, but unfortunately I cannot find the source.
Hmm, Rc isn't Send or Sync, so it should be impossible to give the same Rc to multiple threads to try to provoke a data race. Maybe what you're remembering was about Arc? There are some subtleties around what atomic orderings get used to manipulate the refcount, and it's possible that a bug there could lead to unsoundness. But I'm pretty sure such a bug would be easy to fix, if it was found, and not some sort of fundamental design flaw in Arc or anything like that. To your point, though, bugs in unsafe code do happen, even in the standard library.
35:17 @oconnor663
Here you are presenting a function implemented in _Rust_ that looks like the one implemented in _C++_ but that *is not equivalent (not even of the same type)* to the one implemented in _C++_ .
So, *this does not demonstrate that **_rustc_** is more clever at compiling than **_Clang_*** , but rather that it urges us to think more about what we ask it to compile and tends to prevent us from asking for anything.
By the way, *it would have been very interesting to show how to implement in **_Rust_** a function that would default to 0x0000002a when the source is the same as the destination* , to see a fair comparison with the _C++_ example. What if this was actually what we wanted?
Also, to be fair again, what about implementing in _C++_ a function that would actually be equivalent to the _Rust_ function, performing the checks that _RustC_ performs for us ?
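One hypothetical way to write that in Rust (my own sketch, not the talk's function): safe references can't alias like this, so expressing "dest may be the same object as src" means dropping to raw pointers.

```rust
use std::ptr;

// Hypothetical: store src into dest, but default to 0x0000002a (42) when
// source and destination are the same object. An aliasing &mut i32/&i32
// pair is forbidden in safe Rust, hence the raw pointers and unsafe.
unsafe fn store(dest: *mut i32, src: *const i32) {
    if ptr::eq(dest, src) {
        *dest = 0x2a; // aliasing case
    } else {
        *dest = *src;
    }
}

fn main() {
    let mut x = 7;
    let p: *mut i32 = &mut x;
    unsafe { store(p, p) }; // source is the destination
    assert_eq!(x, 42);

    let mut a = 0;
    let b = 5;
    unsafe { store(&mut a, &b) }; // distinct objects
    assert_eq!(a, 5);
}
```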
Excellent talk, loved it.
In Go the garbage collector makes the whole issue of dangling pointers go away because the compiler "escapes" the stack allocated memory to the heap. The function that returns a pointer to a locally allocated memory variable just works since the memory is resident on the heap instead of the stack. Instead of crashing in C or C++, or failing to compile in Rust; the Go program just works. I would happily trade a slight loss in performance for this automatic memory management.
This is absolutely true, and it's one of the biggest differences in the learning curves between the two languages. Go's escape analysis works for you even before you know that it exists. When you do learn about it, the lesson is something like "Hey did you ever stop to think about why this works?" In contrast, Rust's ownership and lifetime rules are a barrier to beginners getting simple programs working. You have to learn about them explicitly, along with some non-obvious strategies for satisfying them. It's a serious cost, and it's why I tell people who are asking "Should I learn Rust or Go first?" to just start with Go, because it's so much quicker getting started.
That said, once you've put in the time to absorb Rust's memory discipline, and you've gotten past the "fighting the borrow checker" stage, there are some benefits. Sometimes you care about the performance cost of garbage collection, or you need to write embedded/kernel code where garbage collection doesn't work. But more broadly, as programs get larger and more complicated, unrestricted aliasing tends to lead to tricky bugs. For example in Go, whenever you append to a slice, you need to make sure that no one's holding a stale reference to the old slice. The more code you have touching that slice, the more opportunities there are to create stale references without meaning to. Similarly, whenever you have data shared across threads/goroutines, you need to make sure that no one takes any accidental references that might get used later outside of synchronization. Rust's memory discipline tends to make these "spooky action at a distance" bugs less common.
Maybe someday we'll see a language somewhere in between, though, with Rust's approach to mutability but with Go's approach to heap allocation.
What c++ memory analyzer are you using?
I'm using ASan and UBSan, which you can turn on with -fsanitize=address and -fsanitize=undefined in GCC or Clang. There's also one example with -fsanitize=thread.
Around 35:00, we have something called restrict to make the behaviour of both implementations equivalent. Note though, that in C++ this is a compiler-specific extension that would be written as __restrict__, as it is borrowed from C99 and is not in the C++ spec.
You might enjoy this talk too :) th-cam.com/video/DG-VLezRkYQ/w-d-xo.html
@@oconnor663 thank you for the recommendation. To clarify my comment, I don't know if the generated assembly code will be the same, but it does at least indicate to the function caller that those addresses should not be aliased.
Fantastic, thank you!
This was fabulous.
Pretty good, but I found a problematic error: At 56:24, [Clone].clone() does NOT mean a deep clone.
Some primitivish values like String do make a deep clone on the call, but that is at their discretion, and some, e.g. std::rc::Rc ("reference counted", pretty much C++'s shared_ptr), explicitly do NOT deep clone.
Only Copy types are guaranteed to do a deep clone, Clone may very much be by reference.
You're right, and indeed the final section about Arc relies on the behavior you're talking about. Now that you mention it, there are many other cases besides Rc/Arc where .clone() isn't a deep copy, like when you call .clone() on a &str (which is Copy!) and just get back another reference. This also brings up some interesting but tricky questions about how exactly the `.` operator behaves. I'll add an erratum.
@@oconnor663 Hmm that is correct, I didn't think of that.
If you read the official documentation Copy means bitwise Copy.
Because &str does not own its data that is legal as long as it follows lifetime rules.
Regarding the behavior of `.`, I'm not sure I follow. This can call methods of "referenced" objects as long as they officially declare they are a reference, by implementing Deref. The official documentation doc.rust-lang.org/std/ops/trait.Deref.html, which I, btw, find one of the best aspects of Rust, goes into more detail.
Not sure if that was what you meant.
Here's what I'm talking about with the `.` operator. Honestly I'm not 100% on the rules here myself. play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=1c56fe220a29ccfc37e8a2a62ea0b1ef
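A condensed version of the same surprise, if I've got the resolution rules right:

```rust
fn main() {
    let s = String::from("hi");
    let r: &String = &s;
    let rr: &&String = &r;
    // r.clone() matches String::clone directly (the receiver is already
    // &String) and returns an owned String. But rr.clone() first matches
    // the blanket `impl Clone for &T`, so it returns &String, not String.
    let a: String = r.clone();
    let b: &String = rr.clone();
    assert_eq!(a, "hi");
    assert_eq!(*b, "hi");
}
```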
@@oconnor663 Hmm, also not entirely sure, but I think I understood it now.
First of all, I think you might be misunderstanding what lines 4 and 16 do; the '.into()' call takes precedence over the references. You can see that by trying this:
play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=1cbcaced85e7d38f6587e1484402e196
What actually happens is, I think, this:
play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=9758339b4d57afd12bef0eaaccdd6d65
This is another reason why I find Rust's inference system a bit too powerful. People don't really understand what's going on but get away with it, until the inference can't follow anymore or doesn't have enough information, and at that point people who just got by are hopeless, even if the concept itself wouldn't be problematic if they were led to it by the compiler being a little less intelligent.
Now, regarding the Error, there is indeed such a Clone implementation as you reference:
doc.rust-lang.org/std/clone/trait.Clone.html#impl-Clone-122
This is, I'm pretty sure, the one that is getting used. This results in a &String (as type inference will tell you), and the assignment triggers the error you're seeing (if you look up the error it shows only generic wrong assignments).
You can (I think) understand it if you read the reference like a lawyer: doc.rust-lang.org/reference/expressions/method-call-expr.html. The first call finds no method with a String receiver, but for the &String receiver that gets looked up immediately after, there is a method, so that gets called. For &String it immediately finds the right method. For &&String, though, it first finds the blanket implementation of Clone for &T (where T = &String). The compiler apparently only complains about ambiguous method names if they occur on the same step, which isn't the case here.
If you dereference beforehand it works of course:
play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=7e5bb5223ab475f76fbe184b23f7382a
Interestingly, this also works:
play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=5d9d68a1c70be8dc4f598be541822ead
It appears to be another case of overreaching inference, combined with different rules for this kind of call compared to the method-call syntax. If you don't provide the return type indirectly by annotating the result, it'll produce a &String instead of a String. Actually, here it seems to be able to accept an arbitrary number of references:
play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=8cb1cf80dee255e1a627550912b87778
@@oconnor663 Hey, sorry to ask, but I'm not always sure how to interpret such interactions.
Was what I said at all helpful, or was it just too much text, or was it... "insulting"?
I'm much better at this kind of communication with immediate feedback, and with this kind of stuff I often don't get feedback at all, and I'd like to know what I can improve if I'm not a lost cause in that regard.
For the example at 38:20: if you mark one of the pointers in the store function with `restrict`, it will produce the same assembly as the Rust version.
Gorgeous lecture!
Great talk
In move semantics for non-Copy types, when you say that Rust behaves similarly to C in that it copies the data over to the new owner, the difference being that Rust makes the moved value disappear from the previous scope: does that mean the contents are copied bitwise, implying a heavier operation than passing a reference to the value?
Yes, in some circumstances a move or copy is going to be more expensive than passing a reference. It's similar to the situation in C++, where passing a large std::array or similar by value can be more expensive than passing it by reference. That said, it's common (in all three languages) for the compiler to optimize these copies away, and often we don't need to worry about it. On the other hand, there are some tricky situations that come up when you're technically moving a giant object through the stack, and you can end up with a stack overflow error that only pops up in debug mode, where optimizations are off by default. This particular thing is less of a problem in C and C++, which have more support for in-place construction.
In C++, std::move usually implies that the object ends up in a default-constructed state. Iterators are invalidated, and size() would return 0.
That's certainly true of all the standard types I know of, and it would be surprising for any type to behave differently. But as Herb Sutter puts it: "Move is just another non-const function. Any non-const function can document when and how it changes the object’s state, including to specify a known new state as a postcondition if it wants." herbsutter.com/2020/02/17/move-simply
Another interesting caveat to consider is that it's possible for a type to be movable yet not default constructible. But I can't think of any examples.
@@oconnor663 That would happen for a class that maintains an "inline" invariant, but does not have a deleted move constructor. Maybe such classes should have a deleted move constructor, because move will most likely be a copy: github.com/milasudril/fruit/blob/main/lib/point.hpp
Oh of course. I guess there are two different expected behaviors, either "move leaves the source in its default state" or "move is just a copy".
Move leaves the source in an unspecified but valid state, i.e., it can basically be any state: it might be default-constructed, or it might be a special "empty" or error state.
he probably thinks that the more he clicks the sooner will rust click for us
Hi Jack
I have the following observation from the first topic on dangling references.
In all the examples, it looks like memory is statically allocated and the compiler can see those things. If not the compiler, then definitely some static analyzer can see those lifetime errors. It would be good if you had shown some examples with dynamic memory allocation and passing those pointers around.
It might be interesting to clarify that both Vec and String (and vector and string from C++) make heap allocations at runtime. Any reference to the contents of a Vec is actually pointing to the heap. Is that part of what you were looking for? If not, maybe you could give me some C++ examples of what you mean?
@@oconnor663 Yes, I agree that the string uses the heap for the underlying characters. But the string object as such still lies on the stack, so a static analyzer or a good compiler can see its lifetime. On the other hand, if we do auto str = new string(....) and then pass str around, I would not expect the compiler or a static analyzer to track the lifetime of str.
Basically, if the compiler cannot see that some object x is going out of scope (dying) and still emits warnings and errors because you have passed that pointer to multiple locations, then that would be super helpful.
Hmm, maybe you could show me some C++ example code, and I could help you translate that into Rust? As you can imagine, Rust doesn't really encourage anything that looks like C++'s new operator. The more common idiom for managing arbitrary types through a heap pointer is Rust's Box, which is more like C++'s unique_ptr. If you really wanted to simulate the new operator, you'd probably use Box::leak(), which converts a Box into a &'static mut T that will never be freed. You won't generally be able to trigger lifetime errors with that reference (because it's static, and thus valid almost anywhere), but all the usual aliasing rules still apply to it (you can take aliasing shared references to the pointee, but never aliasing mutable references). All of this is pretty unusual, but it is actually safe code. If you *do* want to free the reference, by analogy to C++'s delete operator, you need to convert it back to a Box and allow that Box to drop, but that conversion is unsafe for several reasons. All of this is pretty esoteric, advanced Rust, but it can be interesting to look at the docs for these APIs. Here's a playground example: play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=751a9c1756d4d50806db8fdcaf424265
@@oconnor663 `The more common idiom for managing arbitrary types through a heap pointer is Rust's Box, which is more like C++'s unique_ptr`
-- By looking at your latest comment, I think, I need to learn Rust a little bit. :)
Thanks, it's always good to know other programming style.
Can anyone explain the reference invalidation in push_int_twice for me? Don't understand how pushing that reference twice (in the case he describes) causes trouble.
Having tried this out, it runs fine with g++ -std=c++11 on my Mac.
That example depends on how much initial capacity gets allocated for the vector. With GCC on Linux, the initial capacity is one int, so two pushes is enough to trigger the bug. MacOS/Clang might be allocating more, but if you change push_int_twice to push_int_ten_times I'm pretty sure that'll work.
After a moment of silence, I reached the conclusion you were making, thanks!
You're an excellent educator
Hi ! Awesome video ! You're the best teacher I have seen in a long time ! Like'd, subscribe'd, Bell'd
I'm a C programmer using this video to learn more about Rust
In Rust, if you need immovable types, then you should use Pin, I think.
There was some discussion of Pin here: www.reddit.com/r/rust/comments/nprgwu/a_firehose_of_rust_for_busy_people_who_know_some_c/h0brxoa/