Excellent presenters! Both present with passion. I thank you both too.
I'm implementing some asynchronous networking stuff for an API and I'm totally in love with Tokio.
Long live Rust
Until something better comes along.
@@geordonworley5618 and I think that's beautiful
@@geordonworley5618 it took many decades for us to "have nice things" via Rust. It will be quite a few decades before we will have something better.
Ahoy, ahoy, long may it die!
@@no-defun-allowed very smart comment
This is an exceptionally clear talk. Thanks, this is very helpful. I am currently getting into async in Rust.
Great explanation direct from a Tokio expert. Thank you, sir. Always be healthy. 🙏
Fantastic talk, we need more like this!! I fell in love with Rust too!! Never had a dev experience this smooth and joyful.
Rust is so much more interesting since they stripped the runtime... the idioms definitely take getting used to. Great presentation.
Mutex in Rust is really cool!
Great presentation with great and inspiring presenters.
I can't help thinking (and I'm not "throwing shade" at Rust here; plenty of languages and libraries use this model) that `async` / `await` takes something that's basically simple, Promises / Futures, and wraps them up in "clever" "convenience" functions that then make the code more complicated. "Explicit is better than implicit." - PEP 20
I still look back on when I first understood promises over in JS-land... it was like a moment of mystical revelation, because I suddenly intoned "Oh... that's what the Haskell people have been going on about all this time... it's a monad... it's a MONAD!"
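For what it's worth, the sugar in Rust is fairly thin: an async fn is just a function returning a Future, and you can write the same thing combinator-style, much like .then() on a JS Promise. A rough sketch (it leans on the futures crate's FutureExt::map; fetch is an invented stand-in for some async I/O):

use futures::FutureExt; // futures crate, for .map on a Future
use std::future::Future;

// Invented stand-in for some async operation.
async fn fetch(x: u32) -> u32 {
    x + 1
}

// With the sugar: reads like straight-line code.
async fn double(x: u32) -> u32 {
    fetch(x).await * 2
}

// Without the sugar: explicit Future plumbing, roughly promise.then(y => y * 2).
fn double_explicit(x: u32) -> impl Future<Output = u32> {
    fetch(x).map(|y| y * 2)
}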
Great talk, very cool.
Good presentation, but the mini-redis slides were a bit lazy. For example:
- Don't use std::sync::Mutex inside an async context because it will block the event loop; use tokio::sync::Mutex instead.
- Rust won't let you mutate a variable without explicitly flagging it as mutable, so that insert() call will never compile without changing "let locked = ..." to "let mut locked = ...".
Wrong. It is totally fine to use a std::Mutex within Tokio, as long as the lock is not held across any await (yield) points. If you need a lock across yield points, you have to use Tokio's Mutex to prevent a deadlock that could otherwise occur. However, Tokio's Mutex introduces overhead compared to the std::Mutex, so the std::Mutex should be preferred in the demonstrated scenario.
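Concretely, something like this is fine with std::sync::Mutex, as long as the guard is dropped before any .await (a sketch, not the talk's exact code; some_async_reply is invented). It also shows the "let mut locked" binding from the point above:

use std::collections::HashMap;
use std::sync::{Arc, Mutex};

type Db = Arc<Mutex<HashMap<String, String>>>;

async fn handle_set(db: Db, key: String, value: String) {
    {
        // The guard lives only inside this block, so it is dropped
        // before we ever reach an .await point.
        let mut locked = db.lock().unwrap();
        locked.insert(key, value);
    }
    // Safe to await here: the lock is no longer held.
    some_async_reply().await;
}

async fn some_async_reply() {
    // stand-in for writing the response back to the client
}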
@@sociocritical You're actually right, it also looks like tokio::sync::Mutex uses a synchronous mutex internally.
@@alanhoff89 From a quick glance at the source code (no guarantee), it seems like Tokio's Mutex (Semaphore) makes use of the parking_lot Mutex.
@@alanhoff89 In a nutshell, the async Mutex uses a low-level exclusive lock primitive with (1) a non-blocking try-lock function and (2) a facility to register "waker" descriptors for tasks waiting on the lock and to notify the executor to resume polling those tasks once the lock is released.
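A toy sketch of those two ingredients (this is not Tokio's actual implementation, which is built on a semaphore; all names here are made up, and it only illustrates the try-lock plus waker-registration idea):

use std::collections::VecDeque;
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;
use std::task::{Context, Poll, Waker};

struct ToyLock {
    locked: AtomicBool,              // the low-level exclusive lock bit
    waiters: Mutex<VecDeque<Waker>>, // wakers of tasks waiting for the lock
}

impl ToyLock {
    // (1) Non-blocking try-lock: succeeds only if the bit flips from false to true.
    fn try_lock(&self) -> bool {
        self.locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
    }

    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
        // Wake one waiter so the executor polls its task again.
        if let Some(w) = self.waiters.lock().unwrap().pop_front() {
            w.wake();
        }
    }

    fn acquire(&self) -> Acquire<'_> {
        Acquire { lock: self }
    }
}

struct Acquire<'a> {
    lock: &'a ToyLock,
}

impl Future for Acquire<'_> {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.lock.try_lock() {
            return Poll::Ready(());
        }
        // (2) Lock is held: register this task's waker, then re-check so a
        // release racing with the registration cannot be missed.
        self.lock.waiters.lock().unwrap().push_back(cx.waker().clone());
        if self.lock.try_lock() {
            Poll::Ready(())
        } else {
            Poll::Pending
        }
    }
}

A real async mutex additionally has to hand out a guard, clean up stale wakers on cancellation, and so on, which is where the extra overhead mentioned above comes from.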
@@alanhoff89 I guess it wasn't supposed to be detailed. Anyone can read the tutorial (tokio.rs/tokio/tutorial) for more information. Also, Tokio channels are faster than the Tokio Mutex and should be preferred whenever possible.
Great talk!
I like this video. Very clear.
great video, thanks
Thank you so much !
Wow!!! Well done. 👏🏾
question: how does Hyper handle URL *segments*?
The AWS SDK for Rust is still not complete at the time of writing.
Async-std ?
How do you increase the amount of memory Rust allocates for the stack or heap?
I believe it will try to take as much as the OS has available.
@@EngIlya I have 64GB, and gifski (pngquant library) panics with an out-of-memory error after just 70 frames. The available memory is definitely not an issue.
@@AlexTuduran It's probably too late, but stack and heap sizes for C/C++/Rust programs are controlled by the OS, not the program itself.
@@joseduarte9823 It was a library gone rogue that was raising the error (pngquant). It seems like if you throw lots of frames with lots of colors at it, it has a hard time keeping the palette updated and it trips.
You can do that for the stack by running your code in a thread spawned with an explicit stack size, like below:
let thread = std::thread::Builder::new().stack_size(stack_size)
    .spawn(|| { /* run the stack-hungry code here */ }).unwrap();
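A fuller, runnable version of the same idea (the 8 MiB figure and the closure body are just placeholders):

use std::thread;

fn main() {
    let stack_size = 8 * 1024 * 1024; // request an 8 MiB stack for this thread
    let handle = thread::Builder::new()
        .stack_size(stack_size)
        .spawn(|| {
            // run the stack-hungry code here
            let big = [0u8; 1024 * 1024];
            println!("used {} bytes of this thread's stack", big.len());
        })
        .expect("failed to spawn thread");
    handle.join().unwrap();
}

Note this only controls the spawned thread's stack; the main thread's stack and the heap are limited by the OS, as noted above.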
Unda da sea!
Nice
One question here: why do we want exclusive access to shared memory (the HashMap in this case) when implementing the redis GET function?
The HashMap is accessed concurrently, and normal HashMaps do not allow for concurrent access. An alternative would be to use an R/W lock; however, that might starve writers (to the HashMap).
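For reference, a GET under an R/W lock only needs a read guard, so readers don't exclude each other, while SET still needs the exclusive write guard (a sketch; the Db alias and value type are invented):

use std::collections::HashMap;
use std::sync::{Arc, RwLock};

type Db = Arc<RwLock<HashMap<String, Vec<u8>>>>;

// GET: many readers may hold the read lock at the same time.
fn get(db: &Db, key: &str) -> Option<Vec<u8>> {
    db.read().unwrap().get(key).cloned()
}

// SET: needs exclusive access, and can starve if readers keep arriving.
fn set(db: &Db, key: String, value: Vec<u8>) {
    db.write().unwrap().insert(key, value);
}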
@@sociocritical Another alternative would be using channels to pass messages to a dedicated task or thread.
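That pattern looks roughly like this with Tokio's channels: one task owns the HashMap outright and everyone else sends it commands (a sketch; the Command enum and string types are invented for illustration). It's also the approach the Tokio tutorial's channels chapter walks through.

use std::collections::HashMap;
use tokio::sync::{mpsc, oneshot};

// Messages a client task can send to the single task that owns the HashMap.
enum Command {
    Get { key: String, resp: oneshot::Sender<Option<String>> },
    Set { key: String, value: String },
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<Command>(32);

    // The manager task: sole owner of the map, so no Mutex is needed.
    let manager = tokio::spawn(async move {
        let mut db: HashMap<String, String> = HashMap::new();
        while let Some(cmd) = rx.recv().await {
            match cmd {
                Command::Get { key, resp } => {
                    let _ = resp.send(db.get(&key).cloned());
                }
                Command::Set { key, value } => {
                    db.insert(key, value);
                }
            }
        }
    });

    // A client: sends commands over the channel instead of locking the map.
    tx.send(Command::Set { key: "hello".into(), value: "world".into() }).await.unwrap();
    let (resp_tx, resp_rx) = oneshot::channel();
    tx.send(Command::Get { key: "hello".into(), resp: resp_tx }).await.unwrap();
    println!("GET hello = {:?}", resp_rx.await.unwrap());

    drop(tx); // close the channel so the manager task exits
    manager.await.unwrap();
}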
@@sociocritical It would be ideal to use a concurrent hash table, cf. Cliff Click's NonBlockingHashMap, so that performance doesn't plummet under concurrent access.
@@no-defun-allowed Stuff like that is pretty hard to predict. Yes, locking reduces "p" in Amdahl's law, BUT lock-free HashMap implementations, which most likely make extensive use of CAS operations, aren't free either. On x86-64, "LOCK CMPXCHG" is sequentially consistent by default and, all in all, a relatively expensive operation. Overall, the overhead introduced by making the HashMap lock-free in the first place (pointer indirections etc. add up too; std::collections::HashMap, on the other hand, is essentially a highly optimized flat probing table that speeds up lookups with SSE scans on a metadata array) can be a lot more expensive than "just" using a Mutex lock, especially when there is low contention on the HashMap. On Linux in particular, Mutex implementations usually make use of "futex", and under low lock contention this boils down to one "LOCK CMPXCHG" in user space.
@@eraykaratay9266 Can you elaborate further? From my perspective, that would not make any sense. Using a different task or thread for accessing the HashMap implies that you need an MPSC queue (which serves as the channel), and that queue must allow for concurrent access on the producer side just like the HashMap. So you would just shift the problem of thread synchronization from the HashMap to the queue, which will most likely be more expensive than accessing the HashMap concurrently in the first place.
C and C++ both have a runtime?
Templated talk: ownership, fearless concurrency... again?
Do something serious and talk about it; stop repeating cloned talks every year!