Loved it! You clearly put a ton of work into making this a great viewer experience 👌 More videos like this, please!
Thank you very very much, Simon! 💛
the level of detail in explaining the what and why around patterns in Rust is high level... keep up the good work
Thanks a lot good sir! 💛🙏🏻
Rust iterators and iterator pipelines are really amazing. I found them more efficient and versatile to use than what C++ is bringing to the table here (even more so given that most C++ toolchains do not even implement C++20 ranges to the full extent). And in optimised builds the rust compiler beats the hell out of them and delivers code that runs at least as fast as an imperative solution, if not faster - even if that imperative solution is written in C.
Great video.
Absolutely, and thanks a lot! 💛🙏🏻
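As a small illustration of the claim above (the function names are mine, not from the video): an iterator pipeline next to its imperative counterpart. In optimized builds both typically compile down to the same machine code, which is the "as fast as the hand-written loop" point.

```rust
// Sum of the squares of the even numbers: pipeline style vs. imperative style.
fn sum_even_squares_iter(data: &[i64]) -> i64 {
    data.iter()
        .filter(|&&x| x % 2 == 0) // keep even numbers
        .map(|&x| x * x)          // square them
        .sum()                    // fold into a total
}

fn sum_even_squares_loop(data: &[i64]) -> i64 {
    let mut total = 0;
    for &x in data {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

fn main() {
    let data: Vec<i64> = (1..=10).collect();
    // Both versions agree: 4 + 16 + 36 + 64 + 100 = 220.
    assert_eq!(sum_even_squares_iter(&data), sum_even_squares_loop(&data));
    assert_eq!(sum_even_squares_iter(&data), 220);
}
```

Whether the two produce byte-identical assembly depends on the optimizer and target, but for simple pipelines like this they routinely do.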
Everyone claims "as fast as C", and every single time I look at a benchmark it's like 2x to 3x slower lmao. If being almost as fast as C or C++ means being anywhere from 2x to 8x slower, then it's not close lol, I really dislike this cope
Nah, it depends. There are no absolutes in this. Sometimes Rust is faster, sometimes Rust is slower. Often Rust is on par. There is, however, still a long way to go and quite a few things rustc, as well as LLVM, can still improve.
@@xravenx24fe exactly, especially for stuff where time is actually important like low latency systems
@@xravenx24fe Find one such claim. Usually Rust is within a few percent.
a good follow up is "There Are No Zero-Cost Abstractions" by Chandler Carruth; a lot of the essence of the C++ lectures also applies to Rust
Good suggestion!!
I am making my own compiled language, so I am taking notes.
Uh, that sounds interesting! Anything to share already? ☺️
@@oliverjumpertzme Not yet, I am still very much working on the core features. The parser and the code checker. And then I will work on codegen and stuff.
@@oglothenerd would love to get an update whenever you're further down the road!
@@oliverjumpertzme Yeah, I will probably post about it on my channel. Sadly I cannot share the Git repo links since this is YouTube.
@@oglothenerd subbed ☺️
Awesome video, Oliver. The many animations took for sure a lot of work. Keep rocking!
Thanks a lot, Chris! 💛 Yep, they absolutely did take a LOT of time. 😂
This content is actually very good; I'm surprised it doesn't have more views.
Subscribed!
Thanks a lot! 💛 Working on fixing the latter. 😁
As with other Rust presentations, I like Rust's features but am disgusted by how it looks in the editor.
Still enjoying writing Ruby in all its object-based goodness and arbitrary-precision data types; only rarely extracting any hot spots into external binaries written in clean C.
Ruby + C >> Rust (at least for me).
Great vid though. Subbed, cheers!
Thanks a lot! ☺️
Hehe, yea, Rust is a little different. 😁
One zero-cost abstraction that wasn't mentioned that managed to really impress me was coroutines (the underlying mechanism behind async/await and generators). I already knew that Rust compiles coroutines to a completely normal state machine, consisting of an enum and a method that matches on said enum, but needing to "await" the completion of a separate coroutine abstracted into another function would still have a lot of overhead... _right?_
Nope. Even though my code had multiple levels of coroutines awaiting each other, and at the top level two separate coroutines being weaved together, when I actually looked at the assembly, I didn't even see a single call instruction. The entire thing had been inlined into a single massive state machine, exactly as I'd hoped it would, but was surprised to see anyways.
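For anyone curious what "an enum and a method that matches on said enum" looks like, here is a hand-rolled sketch of the shape a coroutine lowers to. The names and states are mine for illustration; the real compiler-generated machine is more involved.

```rust
// A coroutine that yields 0, 1, 2 and then finishes, written out manually
// as the state machine the compiler would otherwise generate.
#[derive(Clone, Copy)]
enum Counter {
    Start,        // not yet resumed
    Yielded(u32), // suspended after yielding the contained value
    Done,         // finished; further resumes return None
}

impl Counter {
    fn resume(&mut self) -> Option<u32> {
        match *self {
            Counter::Start => {
                *self = Counter::Yielded(0);
                Some(0)
            }
            Counter::Yielded(n) if n < 2 => {
                *self = Counter::Yielded(n + 1);
                Some(n + 1)
            }
            Counter::Yielded(_) => {
                *self = Counter::Done;
                None
            }
            Counter::Done => None,
        }
    }
}

fn main() {
    let mut c = Counter::Start;
    let mut out = Vec::new();
    while let Some(v) = c.resume() {
        out.push(v);
    }
    assert_eq!(out, vec![0, 1, 2]);
}
```

When one such machine awaits another and everything gets inlined, the nested `resume` calls collapse into a single flat `match`, which is why no call instructions survive in the assembly.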
Technically, I didn't even leave it out, because Rust is special regarding async/await. It has no implementation of its own but instead delivers the primitives necessary for external libs like tokio to implement their own runtimes. ☺️ So, the state machine you're talking about is (I assume) Tokio being compiled down to super simple code by the Rust compiler. 😁
@@oliverjumpertzme I specifically said "coroutines", because this wasn't using async/await, though it did use a manual macro implementation of await. This project was more interested in making two threads of execution run in perfect lock-step with each other. As a complete side effect of this, it forced a very elegant way to transfer information to and from the coroutines, since every synchronization point aligned with one of these information transfers (whether it was necessary or not :P).
@@angeldude101 ah, I see ☺️
This was a nice video, which I didn't expect given the baity title.
I'd also like people to mention "zero runtime cost" more often. Rust and C++ take far longer to compile because of all this. There is a cost every time you hit build, which is important to me too
i know i probably don't need to know this to get started, and i came here because i like learning about the complicated details, but programming seems to have the unique effect of being very quick to make me realise how out of my depth i am; astrophysics or rocket science would take a lot more effort to get me to that point.
I’m learning Rust, and I studied some C and Java in high school. I think it mostly comes down to literal language. Eventually the video lost me too, when the rhetoric leaves the scope of terms I know.
I think about it like video games. Once you are experienced in a game you often use language that is completely foreign to outsiders.
Hehe, don't worry. Many of us feel this way. ☺️ I think the great thing about programming languages and programming is that we build on many abstractions. You don't necessarily need that low-level knowledge for many use cases. It's nice to have and comes in pretty handy when you really need it, but otherwise, things still work if you don't know every little detail. ☺️
Wow ! Very high quality content !! Sub'd
Thanks a lot! 💛
I personally feel like I'm in the "Make it readable spectrum" where whatever performance gain and nut cracking you get from trying to get performance with "clever" code is not worth the future issues
I get you. But that is where comments in code eventually become useful. To learn Rust, I did some leetcode problems, and I worked on my solutions until they ran at least as fast as efficient C or C++ solutions to the problem, as I was really interested to see what Rust brings to the table here. With one problem (the "trapped water" problem) I had three iterations of the solution, and each one looked very different from the one before. The final one needed extended comments so I could still understand it after a break over the weekend, as it exploits some non-obvious properties of the problem to run fast.
Now, you can get away with less than optimal solutions for many computing problems, but if you are into, e.g., algorithms for image, video, or sound processing (or automated trading), performance is key. And in the systems programming domain, where OS kernels, drivers, compilers, interpreters, and the like are written, you do not want to waste performance at that low level. Keep headroom for all those poor JS or Python guys to make use of their favourite language instead of forcing them to actually learn something good. Otherwise they may come after you and blow your poorly written interpreter out of the water.
Furthermore, in times where much code runs in the cloud and gets you billed for CPU and memory usage, efficient code will save you money. There are some quite impressive "rewrite it in Rust" success stories for cloud-executed code on the Internet, reducing costs by 90%.
Almost always true; the only exceptions are where performance is actually more important, like high frequency trading, for example!
@@flacdontbetter high frequency trading is best optimized right out of existence…
Great video! Thanks for putting in the effort to make good animations and explanations👍
Thank you very much, and it's my pleasure! 💛
bro, kim jung part with the rocket was so funny
😁😁
great video Oliver
Thank you very much! 💛
What an amazing vid!!! Subscribed 🎉 thanks, man
Thank YOU! 💛🙏🏻
Great Video! Danke!
💛 thanks a lot! Sehr gerne. 😁
To be truly the same as an int, I think you should be deriving `Copy` on Duration & co, and using "self" instead of "&self". Although, in these simple cases the compiler will assumedly figure it out.
AFAIR, yes, the compiler should be able to figure that out. One note though: #[repr(transparent)] only gives Duration the layout and ABI of the wrapped i64, not its traits, so Copy still has to be derived explicitly. ☺️
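A sketch of the newtype pattern under discussion, assuming the Millimeters-over-i64 shape from the video: deriving `Copy` and taking `self` by value makes it behave like a plain integer, while `#[repr(transparent)]` guarantees it has the exact layout and ABI of the wrapped `i64`.

```rust
// Hypothetical unit newtype: same memory representation as i64,
// but a distinct type the compiler can keep honest.
#[derive(Clone, Copy, PartialEq, Debug)]
#[repr(transparent)]
struct Millimeters(i64);

impl Millimeters {
    // Taking self by value, like an int; Copy makes this free of moves.
    fn add(self, other: Millimeters) -> Millimeters {
        Millimeters(self.0 + other.0)
    }
}

fn main() {
    // Zero size overhead over the wrapped integer.
    assert_eq!(std::mem::size_of::<Millimeters>(), std::mem::size_of::<i64>());
    let a = Millimeters(40);
    let b = Millimeters(2);
    assert_eq!(a.add(b), Millimeters(42));
}
```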
Great video, lots of great info and nice animations 👍
Thank you very much! ☺️
Fantastic content! Keep up the excellent work!
Thanks a lot, will surely do! 💛🙌🏻
i learned so much today. thanks!
I am super happy to read that! 💛🙌🏻
Wow. Great video indeed ❤ subbed.
Thanks a lot, and welcome to the club! 💛🙌🏻
This video is actually really nice
Thank you! ☺️
Great video 👍
Thanks a lot! 💛
What an awesome video.
💛🙏🏻
i love this channel
Thank you very much! That's very encouraging! 💛☺️
This is a really good video
Thank you ☺️
awesome video!
Thanks a lot! 💛🙌🏻 Glad you like it 🙏🏻
You should make a tutorial of Rust!!!!
I will definitely make way more videos about Rust, and who knows, perhaps I'll write a book one day. 😁🙌🏻
is no-one going to mention the name of the milimeters struct?
THANK YOU, for noticing. 😁 I already thought everyone would let me pass with this small easter egg. 😅
@@oliverjumpertzme hahahaha most welcome!
At first i thought you meant a fast length :3
but what is its downside? Does Rust have weaknesses? bottlenecks, etc.? It would be very interesting to watch a video on this topic too
Thanks for the tip! 💛 If you are curious now:
Some things are super difficult to model in Rust because they actually are usually difficult to get right. Try to implement a LinkedList for example. You'll lose a few nerves, but learn A LOT about the language, the borrow checker, and how Rust gives you rails. 😁
@@oliverjumpertzme gonna try one day 🤣 thanks for the answer
@@eagold rust-unofficial.github.io/too-many-lists/ good luck!
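For a head start on that exercise: a minimal *singly* linked list built on `Box` compiles without a fight, because ownership flows one way. The borrow-checker pain the comment alludes to really begins with doubly linked lists and their shared mutable back-pointers. All names here are illustrative.

```rust
// A minimal owned singly linked list.
struct Node {
    value: i32,
    next: Option<Box<Node>>,
}

struct List {
    head: Option<Box<Node>>,
}

impl List {
    fn new() -> Self {
        List { head: None }
    }

    // take() swaps the old head out so the new node can own it.
    fn push_front(&mut self, value: i32) {
        let node = Box::new(Node { value, next: self.head.take() });
        self.head = Some(node);
    }

    // Walk the chain with shared references only.
    fn collect(&self) -> Vec<i32> {
        let mut out = Vec::new();
        let mut cur = self.head.as_deref();
        while let Some(node) = cur {
            out.push(node.value);
            cur = node.next.as_deref();
        }
        out
    }
}

fn main() {
    let mut list = List::new();
    list.push_front(3);
    list.push_front(2);
    list.push_front(1);
    assert_eq!(list.collect(), vec![1, 2, 3]);
}
```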
I'm still in favor of Rust for the majority of applications BUT...
Aliasing is a difficult topic. In Rust, the presence of aliasing *usually* excludes mutability. That is good for performance, but cases where you need to share mutable data are difficult.
Rust relies a lot on macros, compile-time checks, and optimization. So you're paying the cost for abstractions at compile time instead of at debugging time and runtime. Some may feel less productive because they end up waiting for the compiler instead of debugging. However, it's usually an illusion, because computers do compile-time checks really fast.
Rust has many facets, which may make it difficult to learn, if you try to learn *everything*. In practice, you don't need to know everything about Rust to write correct code.
Rust's lifetimes are invasive (because they reflect the complexity of software). Raw pointers are simpler but also introduce lots of ways to shoot yourself in the foot. Rust just formalized the rules about pointers, and created a safe abstraction over them.
Rust inherited the sizeof mistake from C and C++. In Rust, just like in C and C++, there's no easy way to get the size of a struct without its trailing padding.
Rust doesn't have variadic generics, unlike C++. You can somewhat circumvent the problem with procedural macros but it's still a PITA.
Rust's expressiveness limits what its generics can do compared to C++ templates. For example, there's no way to express that a type F implementing Fn(/*args*/) -> /*output type*/ does not capture variables and is safe to coerce/cast to a function pointer.
Currently, lifetime bounds are not expressive enough, because they can only express the ">" (strictly outlives) requirement, but not ">=" ("lives at least as long as"). rustc makes the implicit assumption that an expression of type &'a &'b T carries a proof that 'b outlives 'a, but that assumption breaks down in the cases exploited by cve-rs, which this caused the creation of. Luckily, Rust can make a breaking change that would fix the problem (unlike C++), but it's a huge deal in my opinion.
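The trailing-padding point above can be checked directly: `std::mem::size_of` reports the padded size, and the language offers no built-in way to ask for the size without the trailing padding. A small demonstration with a deliberately padded struct:

```rust
use std::mem::{align_of, size_of};

// With repr(C) the layout is guaranteed: u64 forces 8-byte alignment,
// so the single u8 is followed by 7 bytes of trailing padding.
#[repr(C)]
struct Padded {
    a: u64, // 8 bytes
    b: u8,  // 1 byte of data
}

fn main() {
    assert_eq!(align_of::<Padded>(), 8);
    // 9 bytes of actual data, but size_of reports the padded 16,
    // because the size must be a multiple of the alignment.
    assert_eq!(size_of::<Padded>(), 16);
}
```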
Compile time
It probably doesn't matter (much) because Rust is typed and compiled and throws away everything unneeded.
Scripting languages are a different story: not knowing the full context, they have to keep all abstractions and metadata around and are thus really, really inefficient with lambdas and closures.
Yep! Although some JIT compilers can also do quite a lot. But if they assumed a path wrong, they need to throw that away, run the interpreted script again and gather more data before they can JIT compile again. ☺️
I don't like the "zero" in "zero cost abstractions"... I'd be OK with "low cost" or even "very low cost" but any time you see anything claim to be "zero cost" you know you're looking at a lie.
Agree. Some cost is always associated with something. But as said: "zero additional cost" would also be okay for me. ☺️
Why no posting on Twitter? For a long while I thought you were a fake account
I basically left Twitter behind. The platform is not what it used to be anymore. :/ I loved it there, but if no one sees your posts anymore while Elon says the algorithm is fine, it's not much fun.
❤
am gonna learn rust...
Woohooo, go for it! 💪🏻
Computer Math is fun.
It is! 😁
Just rewrite Rust in Rust
Best idea 😂🙌🏻
This is not true at all. Actual performance comes from good cache utilisation, good CPU utilisation, and good utilisation of dead time while processing IO, none of which the compiler can do for you.
Good cache utilisation comes from understanding what data you need to process and when, and from packing that data into as small a space as possible to get as much as possible into the CPU's cache lines. See data-oriented design.
Good CPU utilisation comes from using threads and SIMD, and from making sure there are as few synchronisation points as possible.
Good dead-time utilisation comes from async in Rust and many other languages.
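The cache-packing argument above is usually illustrated with array-of-structs vs. struct-of-arrays. A rough sketch, with made-up field names: when a pass only reads positions, the struct-of-arrays layout means every fetched cache line carries only data the loop actually uses.

```rust
// Array-of-structs: each Particle drags its velocity into the cache
// even when a pass only needs positions.
struct Particle {
    position: [f32; 3],
    velocity: [f32; 3],
}

// Struct-of-arrays: positions are packed contiguously.
struct Particles {
    positions: Vec<[f32; 3]>,
    velocities: Vec<[f32; 3]>,
}

fn sum_x_aos(particles: &[Particle]) -> f32 {
    particles.iter().map(|p| p.position[0]).sum()
}

fn sum_x_soa(particles: &Particles) -> f32 {
    particles.positions.iter().map(|p| p[0]).sum()
}

fn main() {
    let aos = vec![
        Particle { position: [1.0, 0.0, 0.0], velocity: [0.0; 3] },
        Particle { position: [2.0, 0.0, 0.0], velocity: [0.0; 3] },
    ];
    let soa = Particles {
        positions: vec![[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]],
        velocities: vec![[0.0; 3], [0.0; 3]],
    };
    // Same answer either way; the difference is how many cache lines
    // the position-only pass has to pull in.
    assert_eq!(sum_x_aos(&aos), sum_x_soa(&soa));
}
```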
Sorry, but no. The compiler can do quite a lot of it by generating code that makes the best use of the concepts the target architecture provides (that's mostly LLVM's job).
As I have presented, the compiler can also make use of SIMD on its own in some circumstances.
And the most important part is: in 99% of use cases, hand-optimizing code is a severe waste of time and resources.
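On the "SIMD on its own" point: a simple reduction with no data-dependent branching is the kind of loop LLVM typically auto-vectorizes in release builds, with no intrinsics or `unsafe`. This sketch only shows the loop shape; whether SIMD instructions are actually emitted depends on the target and flags (inspect the assembly, e.g. via `cargo rustc --release -- --emit asm`, to check).

```rust
// A dot product: independent multiplies feeding a sum, the classic
// auto-vectorization candidate.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

fn main() {
    let a = [1.0f32, 2.0, 3.0, 4.0];
    let b = [5.0f32, 6.0, 7.0, 8.0];
    // 5 + 12 + 21 + 32 = 70
    assert_eq!(dot(&a, &b), 70.0);
}
```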
@user-gf7ss5je9h parallel computing is even another monster to tackle. 😁
because you aren't getting paid for it anyway
There's no such thing as a "compiled language"; there are only languages with a native compiler via direct or indirect means.
Also worth noting that performance doesn't matter for 99% of uses that don't produce video.
They can all be handled by using cloud machines instead of user machines and streaming down the content, since non-video content is tiny and cloud bills are way, way smaller than engineer bills. Especially with VPSs and Coolify.
Performance is mainly a matter of getting the 5% right.
As long as you aren't doing the abysmal things, any further thought about performance is a huge waste of time, unless you are doing it because you find it fun.
en.m.wikipedia.org/wiki/Compiled_language 😉
@@oliverjumpertzme First of all, something being on Wikipedia doesn't mean it's correct.
And second, that article itself says that the colloquial usage is ill-defined and vague.
My statement is literally backed up by the Wikipedia article.
The Wikipedia article says there isn't a firm definition; the definition is vague.
I'd say under most usages of "exist", people are referring to something firm and real.
For example, a "vaguely human" thing isn't a real human.
And even if you want to argue about what "real" is, and whether a word without a firm definition is "real" or not, you still use the term in your video to no benefit.
Some of my university professors would want to disagree with you there, and so would I. ☺️
Oh, and I don't want to argue here. You made an _absolute_ statement, which was just wrong. That's all I am saying.
This is just wrong
Aha. In which sense?
@@oliverjumpertzme Regarding the topic: you still have to think about code very carefully. I'm currently working with a legacy Rust codebase, and it's insane what people can write.
@@GillesLouisReneDeleuze I didn't deny this. You surely have to take care and pay attention to the context around you. The essence of this video is just: there is no need to micro-optimize because you think the compiler will not create the best result possible. There are cases where you have to hand-optimize, but these are rare. ☺️