Hi Jack, I have the following observation about the first topic, dangling references. In all the examples, it looks like memory is statically allocated and the compiler can see those things. If not the compiler, then definitely some static analyser could see those lifetime errors. It would be good if you had shown some examples with dynamic memory allocation and passing those pointers around.
It might be interesting to clarify that both Vec and String (and vector and string from C++) make heap allocations at runtime. Any reference to the contents of a Vec is actually pointing to the heap. Is that part of what you were looking for? If not, maybe you could give me some C++ examples of what you mean?
@@oconnor663 yes, I agree that the string uses the heap for the underlying characters. But the string object as such is still lying on the stack, due to which a static analyser or a good compiler can see its lifetime. On the other hand, if we do auto str = new string(....) and then pass str around, then I would not expect the compiler or a static analyser to track the lifetime of str.
Basically, if the compiler cannot see that some object x is going out of scope (dying), and it still emits warnings and errors because you have passed that pointer to multiple locations, then that would be super helpful.
Hmm, maybe you could show me some C++ example code, and I could help you translate that into Rust? As you can imagine, Rust doesn't really encourage anything that looks like C++'s new operator. The more common idiom for managing arbitrary types through a heap pointer is Rust's Box, which is more like C++'s unique_ptr. If you really wanted to simulate the new operator, you'd probably use Box::leak(), which converts a Box into a &'static mut T that will never be freed. You won't generally be able to trigger lifetime errors with that reference (because it's static, and thus valid almost anywhere), but all the usual aliasing rules still apply to it (you can take aliasing shared references to the pointee, but never aliasing mutable references). All of this is pretty unusual, but it is actually safe code. If you *do* want to free the reference, by analogy to C++'s delete operator, you need to convert it back to a Box and allow that Box to drop, but that conversion is unsafe for several reasons. All of this is pretty esoteric, advanced Rust, but it can be interesting to look at the docs for these APIs. Here's a playground example: play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=751a9c1756d4d50806db8fdcaf424265
@@oconnor663 `The more common idiom for managing arbitrary types through a heap pointer is Rust's Box, which is more like C++'s unique_ptr` -- By looking at your latest comment, I think, I need to learn Rust a little bit. :) Thanks, it's always good to know other programming style.
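For anyone skimming this thread, here's a rough sketch of that Box::leak round trip in code. The names and strings are made up for illustration; this isn't taken from the talk or the linked playground:
```
fn main() {
    // Rough analogue of C++'s `new std::string(...)`: allocate on the heap and
    // deliberately give up automatic cleanup. Box::leak hands back a reference
    // that's valid for the rest of the program.
    let leaked: &'static mut String = Box::leak(Box::new(String::from("hello")));
    leaked.push_str(" world");
    println!("{leaked}");

    // Rough analogue of `delete`: reconstitute the Box and let it drop. This is
    // unsafe because nothing stops you from using the old reference afterwards.
    unsafe {
        drop(Box::from_raw(leaked));
    }
}
```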
At 36:15 assigning to the same variable twice may not be pointless if the variable storage is located in a memory-mapped area. It may be a write to a hardware register, for example, although it should be declared as volatile in that case. Does Rust have a similar concept to volatile, to avoid the removal of the variable being set to 42?
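On the volatile question: Rust doesn't have a volatile type qualifier, but the standard library exposes volatile accesses as functions, std::ptr::read_volatile and std::ptr::write_volatile. A minimal sketch, where the register address is made up and only makes sense on a target where that address really is a memory-mapped device:
```
fn poke_status_register() {
    // Hypothetical memory-mapped register address, for illustration only.
    let status_reg = 0x4000_0000usize as *mut u32;
    unsafe {
        // Volatile writes are never merged or removed by the optimizer, so both
        // stores actually reach the hardware, unlike a plain `*status_reg = 42`.
        std::ptr::write_volatile(status_reg, 42);
        std::ptr::write_volatile(status_reg, 7);
    }
}
```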
The generics? Huh. I mean, yeah, you have to constrain them, but I would imagine that almost every competent C++ programmer has used languages like Java before that behave similarly in that regard, and on the whole C++ has a lot stranger concepts. Rust's generics are just compile time like C++'s and have to be manually constrained, which, with traits, is elegant and I much prefer it. I get the struggle with the borrow checker tho. And on a personal note, the type inference is a bit problematic at times. I'm not a particularly big fan of how it's idiomatic (and at all allowed) to type a vector via a later assignment; that makes the typing flaky and hard to predict in more complex situations.
@@9SMTM6 idk. I think what makes a trait object safe vs not, when it needs to be bounded by `Sized`, when you should use an associated type, etc. are all non-obvious
@@touisbetterthanpi If we are indeed still speaking of generics, that is not really an issue of generics but of traits. While I still find them very elegant, due to how widely they are used there are some arcane elements to them indeed. I guess I can try to clear some of them up. Regarding associated type vs generic, AFAIK it's largely a semantic difference, but with some differing rules that result from the semantic difference. Namely, a trait with an associated type is always the same trait, so you can't implement it multiple times with different associated types. A trait with a generic, meanwhile, can be implemented on the same type with multiple different generic types. What may make it easier to see when to use which is looking at examples in std: (Into)Iterator is only implemented "once" per type, i.e. a vector of strings iterates over strings and nothing else. From or Into, meanwhile, are implemented multiple times with different generic types. You can convert an i32 into all kinds of numbers, and you can convert all kinds of numbers into i32. Object safety and the Sized requirement are sort of related. I would say they are artifacts of the fact that Rust traits can do both static and dynamic dispatch, and of how you can implement traits generically (ah yeah, if that's what you mean I get what you're saying; at the same time not many other languages allow something like that at all). The reasons why these limitations are required are a bit too much for this comment, but the Rust reference has a great overview of what's required to make a trait object safe (and how you can have methods that require Sized on them): doc.rust-lang.org/stable/reference/items/traits.html#object-safety It also links to the RFC that explains at least some of the motivations, but that's highly technical indeed.
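A tiny sketch of that associated-type vs generic-parameter distinction. The trait and type names here are made up, loosely mirroring Iterator on one side and From/Into on the other:
```
// Associated type: each implementing type picks exactly one Output, so the
// trait can only be implemented once per type (like Iterator's Item).
trait Parse {
    type Output;
    fn parse(&self, input: &str) -> Self::Output;
}

// Generic parameter: the same type can implement this many times, once per
// target type (like From/Into).
trait ConvertInto<T> {
    fn convert(&self) -> T;
}

struct Celsius(f64);

impl Parse for Celsius {
    type Output = f64; // only one choice allowed for Celsius
    fn parse(&self, input: &str) -> f64 {
        input.trim().parse().unwrap_or(0.0)
    }
}

impl ConvertInto<f64> for Celsius {
    fn convert(&self) -> f64 {
        self.0
    }
}

impl ConvertInto<String> for Celsius {
    fn convert(&self) -> String {
        format!("{} °C", self.0)
    }
}
```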
I'm a PHP programmer and using this video to learn JavaScript
There are only two programming languages, Lisp and not Lisp.
@@oconnor663 (and Lisp (not Lisp))
It is not the intended purpose ...
I'll drink to this comment. Fk it
@@konstantinrebrov675 woosh
I'm a c++ programmer and using this video to learn more about c++.
Great talk :) Note that as of Rust 1.63.0, Mutex no longer allocates on Linux and can even be constructed at compile time (Mutex::new is now a const function).
I already know some Rust but I'm using this talk to better understand C++
Thank you for creating this talk! This is exactly what the industry needs -- to bridge the gap between how you did things in C/C++ and how you could do them in Rust.
Finally a rust video that doesn't just gloss over the complexities of lifetimes like they don't exist. I've been searching for this video since 2018
I think it's a really difficult thing to appropriately convey. Most videos I've seen when I started out basically went "when you start out, you'll fight with the borrow checker a lot" and now that I've used Rust for a while I think that's actually the best way to describe it. It was alien and sometimes really frustrating when I started out but eventually it just clicked, but it's really hard to pinpoint what the "it" that clicked actually is. These days I barely encounter the borrow checker anymore, I don't need to declare lifetimes often and if I do I don't need the compiler to remind me because it's obvious - and I've even had a couple of occasions where I missed them in C++ because I knew that a function would be potentially unsound if something changed at the call site.
It's kind of a big deal when you start out because it's fundamentally different from what we as programmers were used to. But once you actually understand lifetimes - and I don't think there's any alternative to experiencing that for yourself - you almost don't wanna live without them anymore.
@@swapode The thing is, to code properly in C++ you're supposed to be fully conscious of lifetimes at all times. And when you fail to do so, well, that's when the myriad of security exploits crop up.
Everywhere else I saw, they explain the what of borrow checking and lifetime. But this video explains the Why. Beautiful
Phenomenal presentation
Especially how you introduce how lifetime works
_Wherever I go, I see your face_
New voxel engine video when?
You really opened my eyes to Rust, wow just a few core guarantees and suddenly a lot of issues disappear.
9:40 Those rust error messages are beautifully crafted.
This must be the best talk ever on explaining the core ideas of a programming language - actually two. As an old c++ programmer, I currently enjoy C because of its bitwise move semantics, which makes so much more sense than the c++ default of copying and the confusing implicit conversions.
I’m a C++ guy who’s been wanting to learn Rust for a while. This was a fantastic introduction to its core concepts. Thank you!
i've always heard that c++ is hard. now i think i start to understand why.
c++ was built with "programmers know what they are doing, let's let them" in mind.
maybe in the 80s and 90s it was still possible to safely navigate through all this implicit pitfall madness and havoc, but today - with c++ being backwards compatible - it seems like a ridiculously hopeless task.
i think i also understand now why no one programs in the full c++ anymore, only its subsets.
and i must say that rust with its novel memory management system and rules has just moved for me from quite intriguing to absolutely brilliant.
it feels to me like rust being used to build larger and larger projects and establishing itself as the industry standard for systems, embedded, iot, and blockchains is not a matter of if, but when.
Rust can produce basically any error you can run into with C++ - including segfaults.
The benefit - Rust does a looot more compiletime-checks. that is the entire reason it exists - take C++ and make the default behaviour "safe".
Your code in C++ might compile and run correctly for all normal use-cases, but still have undefined behaviour in some edgecase scenarios.
Rust just forces you to either admit that your code might be unsafe in some cases (mark it as unsafe) or code more defensively.
For the reference-example at 36:00
As you even explain - the C++ and Rust are not equivalent here. The C++ can do "more". And the generated assembly of a stand-alone function and the same function used in code can be very different. Depending on the code and how it is used it can even compile down to nothing.
C++ allows you to do more things - including running against the wall head-first at full speed, while Rust requires you to tell it "i WANT to be able to run into the wall".
> The benefit - Rust does a looot more compiletime-checks.
More runtime checks too! Particularly array bounds checks, and unwrap() panics.
But importantly, I think there's a categorical difference between Rust and C++ here that isn't quite captured by "more". Certainly both languages can have all the same memory corruption bugs. With Rust we say "only if you're using unsafe code", but then some folks rightfully point out "Aren't you always using unsafe code under the covers whenever you use standard library functions?" That's true and a totally fair question, and I think it's interesting to try to clarify how Rust is different despite that. Here's how I try to explain it:
If you write only safe Rust, and you manage to trigger memory corruption or some other undefined behavior, that bug is *not your fault*. That _always_ indicates a bug in some underlying library, or the compiler, or the OS, or the hardware, which needs to be fixed. The answer is never "you shouldn't do that". (I mean, it might also be true that you shouldn't do that, but memory corruption isn't the reason why :)
For completeness, there are some exceptions to this rule. For example, you can use safe Rust to launch a debugger, attach it to your own process, and corrupt random memory. Or you could play similar shenanigans with /proc/*/mem on Linux. If you actually do those things and cause unintended memory corruption, the answer is indeed "don't do that". But I think everyone can agree that these examples have nothing to do with any particular programming language. If there's some more principled distinction that can be made here, I'd be happy to hear it.
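To make the runtime-checks point at the top of this comment concrete (bounds checks and unwrap() panics), a quick illustrative sketch:
```
fn main() {
    let v = vec![1, 2, 3];

    // Indexing is bounds-checked at runtime: `v[10]` would panic with a clear
    // message rather than reading out-of-bounds memory.
    // let oops = v[10];

    // The checked alternative returns an Option instead of panicking.
    match v.get(10) {
        Some(x) => println!("got {x}"),
        None => println!("index 10 is out of bounds"),
    }

    // unwrap() is an explicit "panic at runtime if this is None/Err".
    let first = v.first().unwrap();
    println!("first element: {first}");
}
```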
Awesome talk! It was just perfect for where I am at with Rust right now -- I sort of know its borrow/move semantics, because I watched some Rust for C++ devs videos, but I don't have almost any experience. Yet, I was able to understand everything and I feel like it gave me some very valuable insights into Rust. Thank you!
Great presentation for both the C++ and Rust crowd! The safety problems of C++ make the need for Rust very clear.
WOW, WOW, WOW a MILLION times! One hell of a presentation dude, one hell of a presentation! I learned c++ a while back but it was more like I learned c and then some basic class and new/delete topics and that was it, nothing this advanced in the language, and then I went on with some assembly and basic hacking concepts. Recently though, I got The Rust Book printed and I'm currently only 8 chapters in. Upon reading the ownership chapter, I was awestruck by the ingenuity behind the concept! We really needed a new systems language. Your video, now, managed to make me feel that awe again, and it didn't stop giving. Every single slide gave me that bit of satisfaction and it's all because of you, those quality slides, and your personality! Good fucking job brother!
🤩
Excellent presentation 👍 I like that it dives straight into _the_ selling point of Rust without wasting time on less relevant topics like syntax.
nice talk, straight to the point, realistic(-ish sometimes) problems and solutions
As someone currently learning Rust, this was just as interesting, as I'm now more aware of how and why Rust is so different to C / C++ in its core principles. Thanks :)
Awesome talk, I feel like I understand both languages a little bit better now (although it seems like my distaste for C++ only grows every time I learn something new about it) . I really like the fast tempo, it doesn't give the mind too much time to wander off and lose focus. I was able to keep up almost all the way through, just the examples with Arc at the end flew straight over my head. I think the abrupt introduction of shared ownership sort of shattered my mental model where everything had a clear owner, scope and lifetime.
I've been meaning to do a deep dive into Arc and Mutex for a while now. I think you need to hold two big new facts in your head at the same time for them to click: 1) Since Rc and Arc make it easy for anyone to get a reference to their contents at any time, they can only hand out shared references. 2) Mutex lets you turn a shared reference into a mutable one. Both of those concepts are really interesting on their own, and it's worth spending time with each one by itself. It's wonderful that we can use them together, and it feels like Rust has really pushed the art of programming forward here. But _teaching_ them together is almost too much at once.
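A minimal sketch of those two facts working together (not code from the talk, just the standard counter-behind-a-mutex example):
```
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Fact 1: Arc only ever hands out shared references to its contents.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // Fact 2: Mutex turns that shared access back into exclusive,
                // mutable access, one thread at a time.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    println!("{}", counter.lock().unwrap()); // prints 4
}
```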
Thank you for making this video. I finally understand the talks about how the borrow checker doesn't let you shoot yourself in the knee
I started out with Rust as a hobby after I used C and C++ in a university course. Remember trying to port my databending tool from C++ into Rust.
Honestly this video is perfect - it's such a good example of why we love Rust, there's many things but the amount of safety here is marvelous. In C++ I had multiple times where obfuscated memory errors made me rip my hair out. In Rust, whenever I was lost, I came out with a deeper knowledge and skill in what I was doing.
The problem never was that everything was obfuscated - but that I didn't understand a part of Rust, and once I did, that problematic scenario would be easily predictable and solvable.
Rust has a habit of catching flawed design decisions because of this. You might not understand why it's screaming at you - but it's because the ingrained rules *know* that this will lead to problems. It knows that you should only be doing stuff like this if you _know what you're doing_ .
This was very interesting as someone with basic Rust knowledge but no C++ knowledge.
This was immeasurably helpful, not only for rust but for c++ as well!
This video is basically a must see, I'm amazed how clear your explanations are
Thanks Jack! 34 minutes into this (watching at 1.5x speed :) ) and it's great so far. Found your video from the reddit thread you made and I'm glad I did
Wow, what a phenomenal, detailed, and comprehensive lecture about Rust for those who know some C++!
I'm learning Rust with no idea how to do C++, and using this video to learn both Rust and C++.
I thought about making a comparison of Rust and C++ in a similar format some time ago, but was overwhelmed with other stuff. It turns out you have done it in a pretty concise (w.r.t. the extensive topic) and accessible way, and I really like it. Thanks!
My man went nuclear with this video. Great job mate!
Awesome! It's indeed very useful for a C++ expert to quickly grasp the main ideas, it makes a lot of sense now. Other rust tutorials that I tried to follow start from discussing syntax and std library and I just give up.
(and it was a bit slow on 1.5x ;) )
What I need is live stats about the speed that everyone's watching on, cause I don't think the people who watch at 0.5x are going to comment about it :-D
@@oconnor663 generally I watch almost everything at 2x, so technically this talk was relatively fast indeed
🏆
This is perfect for someone with a few years of C++ work experience!
This was incredible - you did a great job finding the core differences to present and it really helped me understand why people care about Rust and what some of the trade offs are :)
That's a brilliant talk. The material is well-organized and the examples are really insightful. This gave me some feeling for what Rust code looks like and what the main challenges about it are. Many thanks!
Having the drop function be just an empty function is badass. That is so cool that I no longer hate Rust.
But I still like C++ more❤
Just wow, you are an amazing teacher. I was looking for something like this for quite some time. I really would like you to become a regular rust programming youtuber!
In Go the garbage collector makes the whole issue of dangling pointers go away because the compiler "escapes" the stack allocated memory to the heap. The function that returns a pointer to a locally allocated memory variable just works since the memory is resident on the heap instead of the stack. Instead of crashing in C or C++, or failing to compile in Rust, the Go program just works. I would happily trade a slight loss in performance for this automatic memory management.
This is absolutely true, and it's one of the biggest differences in the learning curves between the two languages. Go's escape analysis works for you even before you know that it exists. When you do learn about it, the lesson is something like "Hey did you ever stop to think about why this works?" In contrast, Rust's ownership and lifetime rules are a barrier to beginners getting simple programs working. You have to learn about them explicitly, along with some non-obvious strategies for satisfying them. It's a serious cost, and it's why I tell people who are asking "Should I learn Rust or Go first?" to just start with Go, because it's so much quicker getting started.
That said, once you've put in the time to absorb Rust's memory discipline, and you've gotten past the "fighting the borrow checker" stage, there are some benefits. Sometimes you care about the performance cost of garbage collection, or you need to write embedded/kernel code where garbage collection doesn't work. But more broadly, as programs get larger and more complicated, unrestricted aliasing tends to lead to tricky bugs. For example in Go, whenever you append to a slice, you need to make sure that no one's holding a stale reference to the old slice. The more code you have touching that slice, the more opportunities there are to create stale references without meaning to. Similarly, whenever you have data shared across threads/goroutines, you need to make sure that no one takes any accidental references that might get used later outside of synchronization. Rust's memory discipline tends to make these "spooky action at a distance" bugs less common.
Maybe someday we'll see a language somewhere in between, though, with Rust's approach to mutability but with Go's approach to heap allocation.
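For what it's worth, here's the Rust analogue of that stale-slice-reference situation. This snippet is deliberately rejected by the borrow checker, which is exactly the point:
```
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // shared borrow into the Vec's buffer
    v.push(4);         // error[E0502]: cannot borrow `v` as mutable because it
                       // is also borrowed as immutable (push may reallocate)
    println!("{first}");
}
```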
This is one of the best introductions to Rust that I have seen. Very well done.
Found this gem while browsing my reddit feed. Amazing work in putting together these core Rust concepts in such a succinct way. Cheers
Wow, this is insanely well put together, great job. One thing I've noticed that's maybe also good to mention is mem::take(&mut some_var). With that you can somewhat achieve what the move constructor does in C++. mem::take(...) is basically a convenience wrapper around mem::swap(..., ...) that swaps the variable with a default-constructed object of its type. So you move out the content of the reference and leave a default-constructed object behind. So that's a nice non-destructive way to move.
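A small sketch of that mem::take pattern (the Connection type here is made up for illustration):
```
use std::mem;

struct Connection {
    buffer: String,
}

impl Connection {
    // Move the buffered data out without consuming self: the field is replaced
    // with String::default() (an empty string), so nothing is left dangling.
    fn take_buffer(&mut self) -> String {
        mem::take(&mut self.buffer)
    }
}

fn main() {
    let mut conn = Connection { buffer: String::from("hello") };
    let data = conn.take_buffer();
    assert_eq!(data, "hello");
    assert_eq!(conn.buffer, ""); // source left in its default state, still usable
}
```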
This is an amazing amazing talk about this topic. Thank you for taking the time to put this together.
More talks from you please. Great teacher.
Very good talk, thank you for your work. Hope there might be coming more
Really wonderful presentation! This was exactly the level of detail I've been wanting to see, both from the C++ and Rust angles. Thank you for producing this, and considering an audience like this. There are lots of us out here!
Incredible video, great comparison side to side with extremely good to follow explanations!
one of the best explanations of lifetimes i have seen for rust. thank you.
Awesome job, super helpful for a C++ person. I will not soon be able to unhear you rubbing your hands together excitedly.
Plz keep creating more videos, i and a lot of people will love to support you, thx for this awesome video!
Hard to overstate how great this is. And how interesting.
This is a really good video. I watched a lot of Rust vids and this is my favorite so far.
The perfect Rust video doesn't exi-
Thanks for the great talk, I believe if you put a `0:00:00 introduction` in the description YouTube will recognize the timestamps
Wow! You're right, and this is awesome.
Great. This is the Rust intro/overview I’ve been waiting for.
Rust is very, very interesting, thanks for these amazing explanations
This was awesome, I'd love to see more videos like this from you about Rust.
Very nice talk. One thing I believe you could have explained better is how Rust views memory. Saying that shared or exclusive references are pointers is fine to some degree, but to really make Rust "click" you should think about memory in terms of ownership and borrowing. Then, all of a sudden, many aspects of Rust memory management (like move semantics, for example) become surprisingly obvious. This is very different from how C/C++ memory works and requires changing how you think about your code, but I believe it is necessary to really understand memory management in Rust and how it provides us with memory-safe code in return.
I'd be curious to get other folks' opinions about this, but my impression is that the concepts of ownership and borrowing are actually pretty similar between Rust and (modern) C++. For example, std::vector and std::unique_ptr are owning types, and T*, T&, and std::string_view are borrowing types. Of course the difference is that these things are guidelines / best practices in C++ but hard-and-fast rules in Rust, so a beginner Rust class needs to cover them on day 1, while a beginner C++ course might not mention them at all. But this is part of why I tend to think that learning Rust is a useful shortcut to learning *good* C++.
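A quick sketch of that parallel on the Rust side (illustrative only, with made-up function names):
```
// Owning types, roughly parallel to std::string, std::vector, std::unique_ptr:
fn make_owners() -> (String, Vec<u8>, Box<u64>) {
    (String::from("hi"), vec![1, 2, 3], Box::new(42))
}

// Borrowing types, roughly parallel to std::string_view, spans, and T&/T*:
fn summarize(name: &str, bytes: &[u8], n: &u64) -> u64 {
    name.len() as u64 + bytes.len() as u64 + *n
}

fn main() {
    let (name, bytes, n) = make_owners();
    println!("{}", summarize(&name, &bytes, &n));
}
```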
Absolutely amazing video, thanks!
nice content Jack O'Connor. I smashed the thumbs up on your video. Always keep up the amazing work.
Excellent video, thank you!
Please consider making more.
Now I feel scared about all the C++ code I've written.
Yeah this is a reaction a lot people have when they learn Rust: th-cam.com/video/nY07zWzhyn4/w-d-xo.html
Best video about C++ vs Rust by far!
10:14, std::span (I guess C++17 too) is this pair. std::string_view is all the const part of a std::string. It works as a const string, but doesn't lose time allocating memory. 13:30, std::string_view will always have this problem, because it holds 2 pointers, at the begin/end of another string, literal or not, allocated or not. So, you are saying to the compiler _"Don't worry, I'll keep an eye on this other string"_ . It's the price for performance.
14:10, on the stack, the content remains on faster memories, while heap is RAM and slower ones.
31:23, in C++, it's possible to make a class that hides a pointer, its constructor automatically verifies if the received pointer (not reference) is not null, neither is out of bounds, and only allows to dereference its inner pointer after calling an 'unsafe' f().
36:38, the easiest way to solve this is by copying source to a local variable.
55:51, when optimization flags are not turned on.
1:06:00, std::vector has this small-size optimization too, of bringing to faster memories (stack) what should be working on the slow heap. I know this because I already coded an app in which std::vector was faster than std::array, which is forced to live on the stack!
1:07:37, does that mean it doesn't accept user-defined constructors? This is often useful to me.
1:09:00, if it's unordered, it's fast: just copy the last 1 into the 1st place, and vector::resize (vector::size() - 1).
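On the 1:09:00 point: Rust's Vec actually has that unordered-removal trick built in as swap_remove. A quick sketch:
```
fn main() {
    let mut v = vec!["a", "b", "c", "d"];
    // swap_remove moves the last element into the vacated slot instead of
    // shifting everything down: O(1), but it doesn't preserve order.
    let removed = v.swap_remove(1);
    assert_eq!(removed, "b");
    assert_eq!(v, ["a", "d", "c"]);
}
```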
This presentation was awesome! Thank you.
Thank you oh my god. This was lit. Do more please!
Amazing talk! Thank you so much. I've really learned a lot. I'm a Rust programmer who has never used C++. I now see how much safety Rust's compiler offers and why it sometimes refuses to compile certain things.
I’m a professional HTML/CSS programmer watching this to help me improve.
Fantastic overview. One thing to note is that the borrow checker is not disabled in unsafe code like mentioned at 31:06.
Excellent example with lifetimes
This is a great presentation, thanks man
I always thought of moves as a semantic device, but I was wrong - those types do need to be passed to separate stack frames, for example. But I assume the allocated memory associated with them stays where it was
For sure, there's definitely a ton of "semantically this is a copy, but in practice the compiler optimizes that copy away" going on, and there are a lot of low level details that I don't fully grasp myself. Like when I said at 51:42 that returning an int copies it to the caller's stack somewhere, in retrospect that's probably usually wrong. I think in most ABIs in practice, returning an int is actually defined to put it in some CPU register. But on the other hand, I don't think the C standard actually talks about registers very much, and most of that stuff is left up to the platform/implementation. You're also right that most of the time, returning larger values means that the function will implicitly get an extra pointer argument pointing somewhere in the caller's stack, and the return value will actually be created at that pointed-to location and never moved. In C++ this "return value optimization" does show up in the language standard, because copying and moving can have side effects, and because some types aren't copyable or movable at all.
This is great, very informative.
Great job! More content please related to Rust. Perhaps building a small app which makes use of important Rust concepts?
Around 35:00, we have something called restrict to make the behaviour of both implementations equivalent. Note, though, that in C++ this is a compiler-specific thing that would be written as __restrict__, as it is borrowed from C99 and it's not in the spec.
You might enjoy this talk too :) th-cam.com/video/DG-VLezRkYQ/w-d-xo.html
@@oconnor663 thank you for the recommendation. To clarify my comment, I don't know if the generated assembly code will be the same, but it does indeed at least indicate to the function's callers that those addresses should not be aliased
The epitome of Rust lectures
Excellent talk, loved it.
Great video! Commenting for the youtube algorithm :)
Fantastic, thank you!
Awesome talk, learned a lot! Java/Kotlin developer here, I was always "kinda" interested in C++/Rust and your talk only made me even more interested.
Let me ask you about your slides: what tool/website did you use to make them?
Reveal.js. The source is here: github.com/oconnor663/cpp_rust_talk
This was fabulous.
for the example at 38:20, if you restrict one of the pointers in the store function it will produce the same assembly as the rust version
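For readers who haven't seen that part yet, this is roughly the shape being discussed (not the talk's exact code): because one parameter is a shared reference and the other is a mutable reference, Rust guarantees they can't alias, which is what __restrict__ has to promise by hand in C/C++.
```
// Because `src` (&) and `dst` (&mut) are guaranteed not to alias, the compiler
// may drop the dead first store: nothing can observe the 42 through `src`.
pub fn store_twice(src: &i32, dst: &mut i32) {
    *dst = 42;
    *dst = *src;
}
```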
Your examples are really good and this helped me a lot on my Rust journey. May I take your examples and present them to my team? And of course I'll credit you.
Absolutely!
Great talk
In Rust if you need immovable types then you should use Pin I think.
There was some discussion of Pin here: www.reddit.com/r/rust/comments/nprgwu/a_firehose_of_rust_for_busy_people_who_know_some_c/h0brxoa/
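A very small taste of Pin, for anyone curious (this barely scratches the surface; Pin mostly matters for self-referential types like the futures generated by async fn):
```
use std::pin::Pin;

fn main() {
    // Box::pin allocates on the heap and promises the value won't be moved
    // again while it stays pinned.
    let pinned: Pin<Box<String>> = Box::pin(String::from("immovable"));

    // Shared access still works through Deref.
    println!("{} bytes", pinned.len());
}
```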
For the lifetime example (at ≈ 17:00), Rust 1.66.0 gives a clearer error message and even suggests the solution!
```
89 | fn my_push_back(v: &mut Vec<&str>, s: &str) {
   |                             -         - let's call the lifetime of this reference `'1`
   |                             |
   |                             let's call the lifetime of this reference `'2`
90 |     v.push(s);
   |     ^^^^^^^^^ argument requires that `'1` must outlive `'2`
   |
help: consider introducing a named lifetime parameter
   |
89 | fn my_push_back<'a>(v: &mut Vec<&'a str>, s: &'a str) {
   |                ++++             ++            ++
```
Oh that's awesome!
I am a novice c++ programmer, with some shallow knowledge but very little real-world experience. I'm trying to learn rust coming from the world of garbage collected languages.
Your presentation gives a great refresh of some cpp ideas and pitfalls all the while deepening my understanding of these 3 big ideas that rust is bringing to the table to solve them.
Thanks a ton for the effort!
You're an excellent educator
35:17 @oconnor663
Here you are presenting a function implemented in _Rust_ that looks like the one implemented in _C++_ but that *is not equivalent (not even of the same type)* to the one implemented in _C++_ .
So, *this does not demonstrate that **_RustC_** is more clever at compiling than **_CLang_*** , but rather that it urges us to think more about what we ask to compile and tends to prevent us from asking for just anything.
By the way, *it would have been very interesting to show how to implement in **_Rust_** a function that would default to 0x0000002a when the source is the same as the destination* , to see a fair comparison with the _C++_ example. What if this was actually what we wanted?
Also, to be fair again, what about implementing in _C++_ a function that would actually be equivalent to the _Rust_ function, performing the checks that _RustC_ performs for us ?
I'm a C programmer using this video to learn more about Rust
Gorgeous lecture!
great video. thanks for making it!
This is absolutely based!
The irony at 28:00 is that the unsafe code might actually be UB due to the still-experimental borrowing rules
Could you say more? That code does at least pass Miri today.
@@oconnor663 In my understanding, taking references like that (rather than using addr_of[_mut]!(..)) may result in the second reference invalidating the pointer derived from the first reference. If it works now, then I may just have been mistaken for this particular case.
Very good question, and I don't know the answer. This discussion might be relevant: www.ralfj.de/blog/2019/04/30/stacked-borrows-2.html#why-not-a-strict-stack-discipline
@Thalia Nero, I've been reading more about this, and Miri's behavior here seems to be pretty complex. There are several issues converging together: 1) By default, Miri doesn't tag raw pointers or track their provenance. So even if the second borrow popped the first raw pointer off the borrow stack, the fact that another raw pointer winds up on top of the borrow stack makes the first raw pointer usable again. 2) Tagging raw pointers can be enabled with Miri's `-Zmiri-tag-raw-pointers` option. However, this code still passes with that option enabled. I believe this is because read operations are special cased to "disable" unique borrows above them in the borrow stack, rather than popping those borrows. This is the "READ-2" rule described in plv.mpi-sws.org/rustbelt/stacked-borrows/paper.pdf. 3) But even if the order of operations is reversed, so that the mutable borrow is taken second, Miri *still* doesn't fail this code. I'm really not sure why, but I think it's because it's not interpreting `&mut char_array[1]` as a borrow of the whole array. To actually get Miri to fail, we have to reverse the order of operations *and* make both pointers point into the same index of the array. This is surprising to me, and I'm not sure where it's described, but I'm asking about it here: www.reddit.com/r/rust/comments/sxemo2/this_example_code_appears_to_have_overlapping/hxrqo4r/
Super helpful, thank you!
Great talk!
Learned some C++ here..
Small correction: Rust does not guarantee memory safety 100%; you can deliberately introduce memory unsafety without writing `unsafe`. But you really need to try.
Are you referring to "soundness holes" in Rust itself, or to OS shenanigans like `/proc/*/mem`? Or maybe something else?
@@oconnor663 I recall hearing at a conference that researchers managed to create data races by messing with `Rc`, so it is not 100% memory safe, but unfortunately I cannot find the source.
Hmm, Rc isn't Send or Sync, so it should be impossible to give the same Rc to multiple threads to try to provoke a data race. Maybe what you're remembering was about Arc? There are some subtleties around what atomic orderings get used to manipulate the refcount, and it's possible that a bug there could lead to unsoundness. But I'm pretty sure such a bug would be easy to fix, if it was found, and not some sort of fundamental design flaw in Arc or anything like that. To your point, though, bugs in unsafe code do happen, even in the standard library.
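(A quick sketch of the Send/Sync point above, not tied to any particular bug report: `Rc` can't be handed to another thread at all, while `Arc` can.)
```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // std::rc::Rc is neither Send nor Sync, so this would not compile:
    // let rc = std::rc::Rc::new(0);
    // thread::spawn(move || println!("{rc}"));

    // Arc is Send + Sync, so sharing it across threads compiles and is safe.
    let arc = Arc::new(0);
    let handle = thread::spawn({
        let arc = Arc::clone(&arc);
        move || println!("on another thread: {arc}")
    });
    handle.join().unwrap();
    println!("still usable here: {arc}");
}
```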
In C++, std::move usually implies that the object ends up in a default-constructed state. Iterators are invalidated, and size() would return 0.
That's certainly true of all the standard types I know of, and it would be surprising for any type to behave differently. But as Herb Sutter puts it: "Move is just another non-const function. Any non-const function can document when and how it changes the object’s state, including to specify a known new state as a postcondition if it wants." herbsutter.com/2020/02/17/move-simply
Another interesting caveat to consider is that it's possible for a type to be movable yet not default constructible. But I can't think of any examples.
@@oconnor663 That would happen for a class that maintains an "inline" invariant, but does not have a deleted move constructor. Maybe such classes should have a deleted move constructor, because move will most likely be a copy: github.com/milasudril/fruit/blob/main/lib/point.hpp
Oh of course. I guess there are two different expected behaviors, either "move leaves the source in its default state" or "move is just a copy".
Move leaves the source in an unspecified but valid state - aka it can basically be any state: it might be default-constructed, or it might be a special "empty" or error state.
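(For contrast with the C++ discussion above, a minimal sketch of how Rust sidesteps the "what state is the moved-from object in" question entirely: the source simply can't be used after a move.)
```rust
fn main() {
    let s = String::from("hello");
    let t = s; // move: ownership transfers to `t`

    // println!("{s}"); // compile error: borrow of moved value `s`
    println!("{t}");
}
```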
Hi Jack
I have the following observation about the first topic, dangling references.
In all the examples, it looks like memory is statically allocated and the compiler can see those things. If not the compiler, then definitely some static analyser can see those lifetime errors. It would be good if you had shown some examples with dynamic memory allocation and passing those pointers around.
It might be interesting to clarify that both Vec and String (and vector and string from C++) make heap allocations at runtime. Any reference to the contents of a Vec is actually pointing to the heap. Is that part of what you were looking for? If not, maybe you could give me some C++ examples of what you mean?
@@oconnor663 yes, I agree that the string uses the heap for the underlying characters. But the string object as such is still lying on the stack, due to which a static analyser or a good compiler can see its lifetime. On the other hand, if we do auto str = new string(....) and then pass str around, then I would not expect the compiler or a static analyser to track the lifetime of str.
Basically, if the compiler cannot see that some object x is going out of scope (dying), and it still emits warnings and errors (because you have passed that pointer to multiple locations), then that would be super helpful.
Hmm, maybe you could show me some C++ example code, and I could help you translate that into Rust? As you can imagine, Rust doesn't really encourage anything that looks like C++'s new operator. The more common idiom for managing arbitrary types through a heap pointer is Rust's Box, which is more like C++'s unique_ptr. If you really wanted to simulate the new operator, you'd probably use Box::leak(), which converts a Box into a &'static mut T that will never be freed. You won't generally be able to trigger lifetime errors with that reference (because it's static, and thus valid almost anywhere), but all the usual aliasing rules still apply to it (you can take aliasing shared references to the pointee, but never aliasing mutable references). All of this is pretty unusual, but it is actually safe code. If you *do* want to free the reference, by analogy to C++'s delete operator, you need to convert it back to a Box and allow that Box to drop, but that conversion is unsafe for several reasons. All of this is pretty esoteric, advanced Rust, but it can be interesting to look at the docs for these APIs. Here's a playground example: play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=751a9c1756d4d50806db8fdcaf424265
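(A standalone sketch of that Box::leak / Box::from_raw pattern, separate from the playground link above; the variable names are just for illustration.)
```rust
fn main() {
    // Box is the usual way to heap-allocate a single value (like unique_ptr).
    let boxed: Box<String> = Box::new(String::from("on the heap"));
    println!("{boxed}");

    // Box::leak turns the Box into a &'static mut T that is never freed,
    // about as close as safe Rust gets to a bare `new` with no `delete`.
    let leaked: &'static mut String = Box::leak(Box::new(String::from("leaked")));
    leaked.push_str(" and mutated");
    println!("{leaked}");

    // To actually free it again (the analogue of `delete`), convert it back
    // into a Box and let that Box drop. This is unsafe: we must guarantee the
    // pointer came from a Box and is never used again afterwards.
    unsafe {
        drop(Box::from_raw(leaked as *mut String));
    }
}
```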
@@oconnor663 `The more common idiom for managing arbitrary types through a heap pointer is Rust's Box, which is more like C++'s unique_ptr`
-- By looking at your latest comment, I think I need to learn Rust a little bit. :)
Thanks, it's always good to know another programming style.
At 36:15 assigning to the same variable twice may not be pointless if the variable storage is located in a memory-mapped area. It may be a write to a hardware register, for example, although it should be declared as volatile in that case. Does Rust have a similar concept to volatile, to avoid the removal of the variable being set to 42?
Yes Rust supports volatile writes: doc.rust-lang.org/std/ptr/fn.write_volatile.html.
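(A minimal sketch of that API, using an ordinary local variable to stand in for a memory-mapped register.)
```rust
use std::ptr;

fn main() {
    let mut reg: u32 = 0; // stand-in for a memory-mapped hardware register
    let addr: *mut u32 = &mut reg;

    unsafe {
        // Volatile accesses are never elided or merged by the optimizer, so
        // both writes happen even though the first value is overwritten.
        ptr::write_volatile(addr, 42);
        ptr::write_volatile(addr, 7);
        println!("{}", ptr::read_volatile(addr));
    }
}
```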
Rust has a few humps. The first hump isn't too bad if you know C++ well. The later humps (for me, lifetimes and generics) are much harder...
Depending on how advanced you are, the Crust of Rust series is sooo helpful for a lot of that stuff.
The generics? Huh.
I mean, yeah, you have to constrain them, but I would imagine that almost every competent C++ programmer has used languages like Java before that behave similarly in that regard, and on the whole C++ has a lot stranger concepts. Rust's generics are just compile-time like C++'s and have to be manually constrained, which, with traits, is elegant and I much prefer it.
I get the struggle with the borrow checker tho.
And on a personal note, the type inference is a bit problematic at times. I'm not a particularly big fan of how it's idiomatic (and allowed at all) to type a vector with a later assignment; that makes the typing flaky and hard to predict in more complex situations.
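(A tiny sketch of that "typed by a later assignment" pattern, plus the explicit alternative.)
```rust
fn main() {
    // The element type of `v` isn't written here; it's inferred from the
    // first push further down, which is the "later assignment" pattern.
    let mut v = Vec::new();
    v.push(1u8); // from here on, `v` is a Vec<u8>

    // Annotating up front avoids that action at a distance, if preferred.
    let mut w: Vec<String> = Vec::new();
    w.push(String::from("explicit"));

    println!("{v:?} {w:?}");
}
```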
@@9SMTM6 idk. I think what makes a trait object-safe vs not, when it needs to be bounded by `Sized`, when you should use an associated type, etc. are all non-obvious
@@touisbetterthanpi If we are indeed still speaking of generics, that is not really an issue of generics but of traits.
While I still find them very elegant, due to how widely they are used there are some arcane elements to them indeed.
I guess I can try to clear some of them up.
Regarding associated type vs generic, AFAIK it's largely a semantic difference, but with some differing rules that result from the semantic difference.
Namely a Trait with an associated type is always the same trait, so you can't implement it multiple times with different associated types.
A trait with a generic meanwhile can be implemented on the same type with multiple different generic types.
What may make it easier to see when to use which is looking at examples in std: (Into)Iterator is only implemented "once" per type, i.e. a vector of strings iterates over strings and nothing else.
From or Into meanwhile are implemented multiple times with different generic types. You can convert an i32 into all kinds of numbers, and you can convert all kinds of numbers into i32. (There's a small sketch of both flavours after this comment.)
Object safety and the Sized requirement are sort of related. I would say they are artifacts of the fact that Rust traits can do both static and dynamic dispatch, and of how you can implement traits generically (ah yeah, if that's what you mean, I get what you're saying; at the same time not many other languages allow something like that at all).
The reasons why these limitations are required are a bit too much for this comment, but the Rust reference has a great overview of what's required to make a trait object safe (and how you can have methods that require Sized on them): doc.rust-lang.org/stable/reference/items/traits.html#object-safety
It also links to the RFC that explains at least some of the motivations, but that's highly technical indeed.
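(To make the associated-type vs. generic-parameter distinction above concrete, a minimal sketch; `Producer`, `ConvertTo`, and `Meters` are made-up names, not std items.)
```rust
// Associated type: at most one impl per type, so the implementing type alone
// determines the output type (the Iterator::Item pattern).
trait Producer {
    type Output;
    fn produce(&self) -> Self::Output;
}

// Generic parameter: the same type can implement the trait many times with
// different parameters (the From<T>/Into<T> pattern).
trait ConvertTo<T> {
    fn convert(&self) -> T;
}

struct Meters(f64);

impl Producer for Meters {
    type Output = f64;
    fn produce(&self) -> f64 {
        self.0
    }
}

impl ConvertTo<f64> for Meters {
    fn convert(&self) -> f64 {
        self.0
    }
}

impl ConvertTo<String> for Meters {
    fn convert(&self) -> String {
        format!("{} m", self.0)
    }
}

fn main() {
    let m = Meters(3.5);
    let _only_option: f64 = m.produce(); // Output is fixed by the impl
    let as_f64: f64 = m.convert();       // the target type picks the impl
    let as_text: String = m.convert();
    println!("{as_f64} {as_text}");
}
```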
Hi ! Awesome video ! You're the best teacher I have seen in a long time ! Like'd, subscribe'd, Bell'd
I know the mentioned talks because of my CPP history already 😀