Dude this video is so good. It's Rust, but these concepts feel deeper and applicable to many languages. Thanks for sharing your knowledge to the world.
"I think I wrote part of this"
~ Legend!
The best Rust channel on TH-cam strikes again 😄
Would love a Crust of Rust on alternate allocators, when you might actually need one (i.e. you benchmarked and determined the default allocator is insufficient), and how you would employ them in practice.
Never seen a binary heap explained as a glass with different densities of liquids. Pretty brilliant if you ask me.
Still waiting for your decrusting Tokio, Rayon, Axum, rust-gpu or wgpu!
Oh man, my thoughts exactly
especially Tokio
Yeah, tokio would be amazing!
Tokio would be so nice!
rust-gpu!!! wgpu!!!
note to self:
vec has retain filter. allocates on heap only (unlike smallvec). no iter because no index state.
set is just a map with empty values. stdlib hashmap is DoS resistant by using the OS's random seed. usually does linear probing, i.e. on a collision it spills into the next bucket.
btree is like a binary tree but with a sorted list in each node.
Best series on Rust so far.
Thoroughly enjoyed this video! Your videos are usually too advanced for me but this was perfect for where I'm at :)
Nice haircut, Jon! Always stoked to see you've released new Rust content.
your Rust videos are unmatched
This video is soo good
About the VecDeque collection: a nice feature to add would be the ability to reserve capacity in powers of two, so the underlying index wrapping can use a bitwise AND instead of the mod operator. :)
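A minimal sketch of the trick being suggested (function names are mine, not from the video or std):

```rust
// With a power-of-two capacity, wrapping a ring-buffer index is a
// single AND instead of an integer division.
fn wrap_mod(i: usize, cap: usize) -> usize {
    i % cap // general capacity: costs a division/remainder
}

fn wrap_mask(i: usize, cap: usize) -> usize {
    debug_assert!(cap.is_power_of_two());
    i & (cap - 1) // equal to i % cap whenever cap is a power of two
}

fn main() {
    for i in 0..32 {
        assert_eq!(wrap_mod(i, 8), wrap_mask(i, 8));
    }
}
```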
One note is that the implementation of vec![] that just pushes is only a book example; the actual implementation is basically just Box::new([$items]).into_vec().
This lets the compiler construct the items directly in the allocated memory (though unfortunately that's not guaranteed, and currently doesn't happen in debug builds), and the into_vec call just sets the capacity equal to the array length.
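For reference, a simplified sketch of that macro arm (the real std macro also has arms for `vec![]` and `vec![elem; n]` and uses the internal `#[rustc_box]` attribute):

```rust
// Simplified stand-in for std's vec! in the `vec![a, b, c]` case.
macro_rules! my_vec {
    ($($x:expr),+ $(,)?) => {
        // Box<[T; N]> coerces to Box<[T]>, then into_vec reuses the
        // allocation directly as a Vec.
        <[_]>::into_vec(Box::new([$($x),+]))
    };
}

fn main() {
    let v = my_vec![1, 2, 3];
    assert_eq!(v, [1, 2, 3]);
    assert_eq!(v.capacity(), 3); // capacity == array length, as described
}
```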
Also IIRC my CS lecturer said Deque is pronounced "deck", but he was German, so who knows!
I definitely say de-queue for that one, but I don't really know
I’ve mostly heard “deck” in various industries, and in academic circles. I’m from the US.
Another bing bong banger from your boy Jon. Can't wait to fully consume this piece of content.
*Vec::new does not allocate.* It has a capacity of 0 and a dangling pointer, the minimum size doesn't kick in until you push an item. If it did allocate, it couldn't be const.
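This is observable on stable Rust; a quick sketch:

```rust
// Only legal as a const because Vec::new allocates nothing.
const EMPTY: Vec<u8> = Vec::new();

fn main() {
    let mut v = EMPTY;
    assert_eq!(v.capacity(), 0); // dangling pointer, no heap memory yet
    v.push(1); // the first push performs the real allocation
    assert!(v.capacity() >= 1);
}
```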
Yep, hence the name "min_non_zero_size" for the constant in the code. You're right I should probably have called that out explicitly though - it's an important thing to be aware of!
Also, while we're on the topic: HashMap::new() doesn't allocate either. It too has capacity 0. It's not const because it holds a pointer to an empty control structure, which would not be legal in a const context as I understand it. Also, even when the HashMap grows, it doesn't jump to 64 buckets: it starts with just 4 buckets (enough space for 3 key-value pairs), then 8 (enough for 7 pairs), then 16, 32, 64, always at 87.5% maximum occupancy. So e.g. if you ask for HashMap::with_capacity(60), you get a HashMap with 128 buckets and a max capacity of 112 items, as the next smallest size would be over-crowded once 60 items are added.
@@tialaramex I checked out the code, and it looks like HashMap::new isn't const because it needs the `Default` impl on its hasher and allocator. It calls hashbrown::HashMap::with_hasher_in, which is const itself, but is called with `Default::default()` args.
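The bucket sizing described above is easy to observe, though the exact numbers are hashbrown implementation details rather than guarantees:

```rust
use std::collections::HashMap;

fn main() {
    let m: HashMap<u32, u32> = HashMap::with_capacity(60);
    // Guaranteed: room for at least 60 entries without reallocating.
    assert!(m.capacity() >= 60);
    // Currently 112, i.e. 7/8 of 128 buckets, but that can change.
    println!("actual capacity: {}", m.capacity());
}
```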
Would be very nice, Jon, if you reviewed Tokio and other async programming stuff in Rust. The last video was 3 years ago, and many things have changed since then, so it would be nice.
There is in fact a heap analog to radix sort. It's called a radix heap and it's amazing. Basically, you store a chain of Vecs, where the Nth Vec holds items that differ from the last popped item in the first N digits. It only works if the inserted priorities are never smaller than the last popped priority, but the performance boost is quite substantial.
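A minimal monotone radix heap over u32 priorities, sketched from the description above (all names are mine; no payloads or error handling):

```rust
struct RadixHeap {
    // buckets[i] holds keys whose highest bit differing from `last`
    // is bit i - 1; bucket 0 holds keys equal to `last`.
    buckets: Vec<Vec<u32>>,
    last: u32, // last popped priority; pushes must never go below it
    len: usize,
}

impl RadixHeap {
    fn new() -> Self {
        RadixHeap { buckets: vec![Vec::new(); 33], last: 0, len: 0 }
    }

    fn bucket(&self, key: u32) -> usize {
        (32 - (key ^ self.last).leading_zeros()) as usize
    }

    fn push(&mut self, key: u32) {
        assert!(key >= self.last, "monotonicity violated");
        let b = self.bucket(key);
        self.buckets[b].push(key);
        self.len += 1;
    }

    fn pop(&mut self) -> Option<u32> {
        if self.len == 0 {
            return None;
        }
        if self.buckets[0].is_empty() {
            // Take the minimum of the first non-empty bucket as the new
            // `last` and redistribute: every key lands in a strictly
            // smaller bucket, which is what amortizes the cost.
            let b = self.buckets.iter().position(|v| !v.is_empty()).unwrap();
            let drained = std::mem::take(&mut self.buckets[b]);
            self.last = *drained.iter().min().unwrap();
            for key in drained {
                let nb = self.bucket(key);
                self.buckets[nb].push(key);
            }
        }
        self.len -= 1;
        self.buckets[0].pop()
    }
}

fn main() {
    let mut h = RadixHeap::new();
    for k in [5u32, 3, 9] {
        h.push(k);
    }
    assert_eq!(h.pop(), Some(3));
    h.push(7); // fine: 7 >= 3, the last popped priority
    assert_eq!(h.pop(), Some(5));
    assert_eq!(h.pop(), Some(7));
    assert_eq!(h.pop(), Some(9));
    assert_eq!(h.pop(), None);
}
```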
This was excellent; I learned a lot in a very short time. I realized I don't need the bytes crate's `.put` method to extend a Vec with another Vec; I can just use `.extend`.
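For anyone wondering, the one-liner in question:

```rust
fn main() {
    let mut a = vec![1u8, 2];
    let b = vec![3u8, 4];
    a.extend(&b); // or a.extend(b) to consume b instead of copying
    assert_eq!(a, [1, 2, 3, 4]);
}
```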
Crust of Rust is Awesome!
Wow, thank you!
1:33:31 The std version of HashMap (hashbrown) actually does not shuffle entries around like you suggest. When an entry is removed, the slot is filled with a so-called "tombstone" value to indicate that something was there in the past.
this is epic!
The Foundation (R)(TM) is going to have an aneurysm when they discover the name of Jon's series. Jk (or maybe not).
Anyway, your videos are always comprehensive. Love it!
Could you please do a stream where you implement some of these data structures (or possibly others)? I think that's a really hard thing to do in Rust.
It would be really appreciated if you could do that.
28:00 Actually, the lifetime is also required because custom allocators can be non-static, in which case leak would only be allowed to leak for the lifetime of the allocator.
Also, another reason to make it generic is to support cases where T cannot live for 'static.
55:36 Jon accidentally teaching us about uterus anatomy 😂
I didn't know about HashMap's `entry`, so thanks!
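The usual motivating example for `entry`, for anyone who hasn't seen it:

```rust
use std::collections::HashMap;

fn main() {
    let mut counts: HashMap<&str, u32> = HashMap::new();
    for word in ["a", "b", "a"] {
        // One lookup instead of a contains_key/get + insert pair.
        *counts.entry(word).or_insert(0) += 1;
    }
    assert_eq!(counts["a"], 2);
}
```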
@jonhoo At 1:54:55 you say that floating point numbers are hashable but not orderable. I get the hashable bit, but why would floating point numbers not be orderable? I understand that not every decimal fraction can be represented exactly in binary floating point (I think IEEE 754 covers this, if I remember correctly), but the "not orderable" part doesn't make immediate sense to me. Could you point me to a good explanation, please (or explain it here briefly)? Thank you. Oh, and great video! Learned a bunch. Nicely done.
From the f32 docs:
"NaN has some potentially unexpected behavior:
* It is unequal to any float, including itself! This is the reason f32 doesn’t implement the Eq trait.
* It is also neither smaller nor greater than any float, making it impossible to sort by the default comparison operation, which is the reason f32 doesn’t implement the Ord trait.
..."
You can use the total_cmp() method in most contexts (e.g. .max_by(f32::total_cmp) instead of just .max()) if you're OK with the IEEE-specified total order, but it does things like consider the sign of NaNs and of zero, which can be surprising if you're not familiar with IEEE float details!
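A small demonstration of both points:

```rust
fn main() {
    // .max() doesn't exist for f32 iterators because f32 is not Ord;
    // total_cmp supplies the IEEE 754 totalOrder instead.
    let xs = [2.0_f32, -1.0, 1.0];
    assert_eq!(xs.iter().copied().max_by(f32::total_cmp), Some(2.0));

    // With a NaN present the result is still deterministic: a positive
    // NaN sorts above every other value in total order.
    let ys = [2.0_f32, f32::NAN];
    assert!(ys.iter().copied().max_by(f32::total_cmp).unwrap().is_nan());
}
```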
Very useful lesson, thanks! For BTreeMap you start with index 7. In reality, though, 7 wouldn't be enough for an item to go "to the left". How does Rust determine the right value for "7" to make sure enough elements can go to the left?
I had a date today. My girlfriend told me it was either Jon Gjengset's Crust of Rust or her. Now I am a proud Rustacean following Crust of Rust, and of course SINGLE.
We need to find a way to get your videos to ship with clippy. This was incredible. Looking forward to your previous videos
very informative.
At 27:16: the lifetime of leak is 'a because it has to match the lifetime of the allocator. Allocators may be unstable, but you could make a stack allocator, so every collection allocating from it would be tied to that allocator's lifetime, and if you leak the collection, you don't want the leaked reference to outlive the allocator.
Can you do a video on stdin, stdout and the terminal? Like looking at the std::io::Read and Write traits, core::fmt::Formatter and core::fmt::Display. Maybe also how you can do more advanced things like making text and background coloured, and how you can respond to inputs to make interactive things in the terminal.
"Maybe an hour and a half ... :D
Goes on recording for 2:45:03
Love it!
I wish that the std vector used a growth rate different from 2, as I think a few papers showed that something closer to 1.5 was more efficient in most cases.
Really great video. Though isn't your description of linear probing closer to double hashing than a traditional linear probing, which just looks in the next bucket?
You could be right. My impression is that what people mean when they say linear probing isn't entirely well defined, but that could be ignorance on my part!
@@jonhoo It seems to me, based on a glance at Wikipedia (en.wikipedia.org/wiki/Linear_probing), that it's essentially just taking the "next" bucket in the list until you get a free one (where e.g. quadratic probing jumps multiple buckets at a time). I'm not sure if there are multiple interpretations though. Great video either way!
Your representation of the btree with B=4 is not entirely correct. There should be 5 blocks of 4 entries on the second level, one for each gap before, between, and after the root's entries. This way the number of layers L required doesn't grow as fast (I think it can hold about (1+B)^L entries).
I got it wrong too, B=4 apparently means 3 entries per block, with 4 blocks on the 2nd layer, 16 on the 3rd etc. So about B^L capacity.
Also, the number of pointers he described isn't entirely accurate either. There's a pointer to a child block (which may be null) between every pair of entries, so we're not really saving space in that sense.
Could you do a walk through of the reqwest crate please?
What does moving a SmallVec over function boundaries mean? Like capturing it in closures? Or does it mean that if I pass a SmallVec to a function as a reference, the whole thing will be copied even though it is a reference? 26:46
Edit: Okay, I think you pretty much answered that later.
Can you do a stream where you go more into the haskell-style abstractions that you mentioned in the result/option section? (Monad, Applicative, Functor, etc)?
How does a `HashMap` manage memory allocation efficiently? Wouldn't random keys give random hashes, resulting in a very sparsely populated underlying `Vec` because the hashes are all over the place? For example, your diagram showed the underlying Vec with a key-value pair at index 7 while the other indices were empty.
Ah, it seems the generated hashes are reduced to fit the length of the underlying `Vec`. See the comment at 1:27:44.
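In sketch form (the function name is mine; hashbrown's real reduction differs in details but has the same effect):

```rust
// A hash map never indexes by the raw hash: it reduces the hash to the
// current bucket count, so entries stay dense however "random" the
// hashes are.
fn bucket_index(hash: u64, num_buckets: usize) -> usize {
    debug_assert!(num_buckets.is_power_of_two());
    (hash as usize) & (num_buckets - 1) // cheap form of hash % num_buckets
}

fn main() {
    // Every 64-bit hash lands in one of the 8 buckets.
    for h in [3u64, 1 << 40, u64::MAX] {
        assert!(bucket_index(h, 8) < 8);
    }
}
```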
What's the syntax in the type parameters of a Vec where `A = Global`? I understand Global refers to the global allocator, but what's the = part? I have seen it before with PhantomData as well, but I'd love to know more about what it means. It's also a hard question to Google, haha.
Edit: can also see the same kind of thing with the HashMap `S = RandomState`.
it means that Global is the default type for A if you don't specify it
it's the reason you can write `Vec<T>` without specifying the allocator; the default Global allocator will be used
@@homelikebrick42 thanks for answering. I remember this from the book, but the example was in a trait, so I didn't connect it to types. A lot of things just clicked.
Edit: by that I mean I thought it was some trait-only syntax, like associated types, and that it held a different meaning outside that context.
`A = Global` simply means the type parameter `A` will default to `Global`. So if you create a vector `Vec<T>` without explicitly specifying what `A` should be, then `A` will default to `Global`, i.e. `Vec<T>` is the same as `Vec<T, Global>`.
The same can be done for const generics. For example `struct MyArray<const N: usize = 10> {...}` means `N` will default to `10` if not specified otherwise.
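The HashMap case is easy to check directly:

```rust
use std::collections::hash_map::RandomState;
use std::collections::HashMap;

// S defaults to RandomState, so these two aliases name the same type.
type Implicit = HashMap<String, u32>;
type Explicit = HashMap<String, u32, RandomState>;

fn main() {
    let m: Implicit = HashMap::new();
    let _same: Explicit = m; // compiles: the two types are identical
}
```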
2:00:00 I don't understand why only the method that returns a mutable reference to the key is unsafe. Shouldn't the ones that return shared references to the key be unsafe too? If the key is a Cell, for example, I could call key.set to change the value from e.g. 1 to 1 billion and break the invariants of the BTreeMap.
Edit: Changed the example from atomics to Cell because apparently atomics aren't Ord.
"It is a logic error for a key to be modified in such a way that the key’s hash, as determined by the Hash trait, or its equality, as determined by the Eq trait, changes while it is in the map. This is normally only possible through Cell, RefCell, global state, I/O, or unsafe code. The behavior resulting from such a logic error is not specified, but will be encapsulated to the HashMap that observed the logic error and not result in undefined behavior. This could include panics, incorrect results, aborts, memory leaks, and non-termination."
-The Rust Standard Library, std::collections::HashMap
So you're allowed to do it, and it's not unsafe, (in that it's not allowed to produce undefined behavior,) but it's an unforced error on your part and it *will* make your program very sad.
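A minimal sketch of that "logic error" case, with a Cell key in a BTreeMap; it compiles and runs with no unsafe, it just leaves the map in a broken (but memory-safe) state:

```rust
use std::cell::Cell;
use std::collections::BTreeMap;

fn main() {
    let mut map = BTreeMap::new();
    map.insert(Cell::new(1u32), "one");
    map.insert(Cell::new(5u32), "five");

    // Mutate the keys through the shared references keys() hands out.
    // The map's ordering invariant is silently broken afterwards.
    for key in map.keys() {
        key.set(1_000_000_000);
    }

    // Unspecified (but safe) behavior from here on: lookups may miss.
    println!("{:?}", map.get(&Cell::new(5)));
}
```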
Why doesn't `VecDeque` use contiguous memory without wrapping around? Like a `Vec` that grows in both directions. It seems it would offer the best of both worlds: queue and stack semantics while still coercing to a slice. I don't understand why the whole memory wrapping juggling is necessary? Just for a bit of memory efficiency? Seems like a large tradeoff to throw away the benefits of a slice.
Couldn’t you manually deallocate a leaked box/vec by turning the ref into a pointer and calling from_raw before dropping?
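A sketch of that maneuver for a Box; this exact round-trip is documented as allowed for pointers obtained from Box::leak:

```rust
fn main() {
    let leaked: &'static mut i32 = Box::leak(Box::new(42));

    // SAFETY: the pointer came from Box::leak and is reclaimed exactly
    // once; `leaked` must never be used after this line.
    let reclaimed = unsafe { Box::from_raw(leaked as *mut i32) };
    drop(reclaimed); // the allocation is freed here
}
```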
Does Vec reallocate 2x the memory? I think there are plenty of studies that show 1.5x is usually best, as it gives the possibility of memory reuse on reallocations, but maybe that's not how Rust's Vec does things?
github.com/rust-lang/rust/issues/111307
@@jonhoo and Spicy - thanks!
I don't know what studies you are referring to, but 2x is the best way to grow a vector, because amortized analysis proves this mathematically.
Other values like 1.5x are just quirks that try to exploit implementation details of particular hardware/architectures. But if you need the true generic best overall, it is 2x.
@@mk72v2oq I'm referring to the major vec implementations at Facebook and Google, for instance, but there have been many, many others.
The claim that 1.5x is about hardware quirks is wrong, afaict. You can actually prove mathematically that with 2x growth, freed memory can never be reused in a lot of memory allocators: the allocation keeps creeping forward (leading to fragmentation) because all previously freed blocks combined can never fit the new allocation. So I don't know what you mean by platform-specific quirks. It follows simply from:
1 + 2^1 + 2^2 + ... + 2^n = 2^(n+1) - 1. This of course assumes realloc is not being used, which was the case for vector in C++ because of how the standard defines non-relocatable values. If I'm not mistaken, Rust does the same? So I think you're absolutely wrong about it being "abuse of hardware".
Besides, if the cost of the actual memory allocation is the issue, a growth factor will not necessarily solve your problem; a better domain analysis will, where you know up front how much memory you need to allocate, making it a one-time cost.
Java used 1.5x for ArrayList, and MSVC's std::vector used to as well; I am unaware of any changes they have made.
In the end though, I'm sure 2x is just fine for all intents and purposes. But it is an issue I have with Rust more broadly: memory allocation control is much more difficult to customize, for obvious reasons. It is a tradeoff inherent to the design of the language.
@@simonfarre4907 you are making assumptions about allocators. This is an implementation detail.
Amortized analysis is theoretical, like big-O notation; it makes no assumptions about memory and allocator implementations. I.e. in the real world you can totally construct an example where a linear O(n) lookup is faster than an O(1) one, but that does not mean O(n) is faster than O(1) overall.
Companies know their hardware and systems, so they do make assumptions. But this is far from generic case.
What would happen if linear probing results in a cycle, i.e. Hash(Hash(key) + key) = Hash(key)?
Why can't you efficiently pop from the start of a Vec by simply incrementing the pointer to the heap and decrementing both length and capacity?
you could, but then the pointer would not correspond to the one you originally allocated. if you wanted to free or realloc the memory, you would need to retrieve the original pointer.
One way you could do this would be to store an additional field, let's call it "offset". When freeing or reallocating, you can decrement the pointer by offset to get back the original one.
this could introduce some weirdness with the amortized asymptotic cost of the operations (the standard vector operations guarantee that the amortized cost for each push/pop operation in a sequence is O(1)). For example, if you decide that on reallocation you should set "offset" to 0 and not leave space for freed elements, then you might have the following sequence of operations:
1) you pop from the front
2) you push to the back
3) you push to the front
you might assume (3) to not allocate, since you just popped from the front and freed up a space, however pushing to the back could have just reallocated and you now lost that space.
if instead you don't reset offset to 0, then the sequence:
1) push to back
2) pop from front
3) repeat
will result in a vector of constantly bounded length that holds an infinitely increasing amount of memory, and you definitely don't want that.
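A minimal safe sketch of that "offset" scheme (type and field names are mine), showing the unbounded-memory pitfall concretely:

```rust
struct OffsetVec<T> {
    data: Vec<Option<T>>, // Option lets a popped slot drop its value early
    offset: usize,        // live elements are data[offset..]
}

impl<T> OffsetVec<T> {
    fn new() -> Self {
        OffsetVec { data: Vec::new(), offset: 0 }
    }
    fn push_back(&mut self, v: T) {
        self.data.push(Some(v)); // dead front slots still occupy capacity
    }
    fn pop_front(&mut self) -> Option<T> {
        // O(1), no shifting: just advance the logical start.
        let v = self.data.get_mut(self.offset)?.take()?;
        self.offset += 1;
        Some(v)
    }
}

fn main() {
    let mut v = OffsetVec::new();
    // Push back / pop front forever: length stays at most 1, yet the
    // backing memory grows without bound.
    for i in 0..1000 {
        v.push_back(i);
        assert_eq!(v.pop_front(), Some(i));
    }
    assert!(v.data.len() >= 1000); // nothing was ever reclaimed
}
```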
@@andreapiseri8521 Thanks for the answer! Couldn't you have the Vec be responsible only for dropping its remaining capacity and have the popped value be wrapped in a Box that would be responsible for dropping that location?
the first solution might not look bad on paper, but it actually still doubles the upper bound for the inefficiency of memory consumption of vec, since:
the standard implementation of doubling the capacity on each reallocation guarantees that the memory allocated by a vec is never more than double the highest length that the vec has held in the past.
if your allocation strategy for the new vec is the same, then say you have 2^k + 1 elements in your vec, that is you just reallocated and have offset=0, capacity=2^(k+1).
you can pop from the front 2^k times, then push to the back another 2^k times: the maximum length the vector ever reached is 2^k + 1, yet you now have to reallocate (since the last element falls at position offset + len = 2^(k+1) + 1), which means you will double the capacity to 2^(k+2), which is four times the max length your vector ever reached.
if you try to be clever and figure out that you can reallocate just 2^(k+1) again, then you will run into the following problem:
you have len=2^k, cap=2^k, offset=0.
1) pop front
2) push back
repeat
each push back will reallocate, but it will keep the original capacity because len = 2^k, repeating the cycle.
@@ovidiu_nl unfortunately it's not that easy, at least not without having control over the allocator's architecture.
Allocators generally give out memory in predetermined sizes from different pools, splitting the allocation in a "box" part and a "rest" part would break this strategy. Additionally, heap allocators in particular sometimes store metadata next to the actual allocated data, which you couldn't do for the "rest" of the vec because the "box" is in the way.
finally, you would need to notify the allocator that you've performed this split, which (even if you had an interface to do it) could incur a pretty big runtime cost
great vid, will make sure to watch all of it👍😇🫡
Can someone explain a little more what he means by rotate at 47:48? I don't get why it's better on VecDeque than on Vec.
I'm new to Rust, but what I gathered is you don't have to shift all the elements in a VecDeque: just pop from the left, push to the right, and update the start and end positions. With a Vec you'd have to shift every single element left and then push the item on the end.
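Here's a minimal sketch of that difference:

```rust
use std::collections::VecDeque;

fn main() {
    let mut d: VecDeque<i32> = (1..=5).collect();
    // O(1): only the head index moves; no elements are shifted.
    if let Some(x) = d.pop_front() {
        d.push_back(x);
    }
    assert!(d.iter().eq(&[2, 3, 4, 5, 1]));
    // The same rotation on a Vec (rotate_left(1)) moves every element.
}
```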
@@phil6758 I think I see it much clearer now. Thanks
Can't they make a pointer lookup table for linked lists, so that element access is constant time?
@2:45 - For VecDeque, what does empty look like? Surely start and end are the same?
And so when full, start and end can't be the same either, because then empty and full would have the same state.
I think technically they store head + length, which doesn't have the same problem.
Well, when full, start and end are off by one, not same. However, what you're describing is a problem if there's just one element. Then start and end would point to the same element. So, yes, if you're storing start and end separately you do need to explicitly check for "empty" state. In our CS classes we used to do it by assigning start = end = -1, or you could say (None, None) in Rustic terms.
@@VivekYadav-ds8oz _Well, when full, start and end are off by one, not same_
That is what I think. Fullness is basically when end + 1 == start (allowing for circular buffer wraparound).
_However, what you're describing is a problem if there's just one element. Then start and end would point to the same element_
No. If start and end are the same, that is empty.
If start + 1 == end, that is 1 element in the buffer (allowing for circular buffer wraparound).
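For completeness, a tiny sketch of the head + length representation mentioned above (all names are mine), where empty and full are distinct without wasting a slot:

```rust
struct Ring<T> {
    buf: Vec<Option<T>>, // fixed-size storage; None marks an unused slot
    head: usize,         // index of the first element
    len: usize,          // number of live elements
}

impl<T> Ring<T> {
    fn with_capacity(cap: usize) -> Self {
        let mut buf = Vec::new();
        buf.resize_with(cap, || None);
        Ring { buf, head: 0, len: 0 }
    }
    fn is_empty(&self) -> bool { self.len == 0 }
    fn is_full(&self) -> bool { self.len == self.buf.len() }
    fn push_back(&mut self, v: T) {
        assert!(!self.is_full());
        let i = (self.head + self.len) % self.buf.len();
        self.buf[i] = Some(v);
        self.len += 1;
    }
    fn pop_front(&mut self) -> Option<T> {
        if self.is_empty() { return None; }
        let v = self.buf[self.head].take();
        self.head = (self.head + 1) % self.buf.len();
        self.len -= 1;
        v
    }
}

fn main() {
    let mut r = Ring::with_capacity(2);
    assert!(r.is_empty() && !r.is_full());
    r.push_back('a');
    r.push_back('b');
    assert!(r.is_full()); // no ambiguity with the empty state
    assert_eq!(r.pop_front(), Some('a'));
}
```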
at 1:06:43 you say that a Set is faster than a Vec for Set operations. I found that most often iterating the Vec and using something like find or position is faster up until a very large number of elements because modern CPUs can optimise these kinds of operations really well. So if you need maximum performance, benchmarking with realistic payloads can be very helpful.
TBH you should probably always do benchmarking if you care about performance
Well the point I was trying to make is that you should not rely solely on complexity when it comes to real life code performance. Even though a HashMap lookup is O(1) and finding an element in a Vec is O(n), the latter will be faster more often than one might think.
Comparing the std hasher or even when you use a fast hasher? Also, what do you consider to be a "very large number"? It's probably true for larger numbers than most people would think but pretty sure I've tested this on some Advent of Code problems in the past and got better results with a fast hash set. I guess more complex objects probably also favor hashing whereas a Vec will remain faster for longer if you're just storing ints. Also, you made a typo in your last comment. Finding an element in a Vec is O(n).
How slow is binary heap's retain?
What is the whiteboard program?
34:40 now they know
Could you please share your Firefox config file?
github.com/jonhoo/configs/blob/master/gui/.mozilla/firefox/chrome/userChrome.css
Thank you!
Please do a Crust of Rust on egui + eframe + rerun!!!
yah its been WILL
OrderedSet is quite nonsensical, but oh well; I wouldn't expose the ordering.
Decrust ratatui please
As you can see, the real reason a HashMap is not really O(1) is that Jon can't draw a straight line.
First
Nth ~3hr since upload
crust of crab language*