AWS re:Invent 2020: Next-gen networking infrastructure with Rust and Tokio
- Published Dec 26, 2024
- Today’s networking infrastructure software has stringent requirements. It must not only be fast, but also safe; that is, able to process untrusted data without crashing or being vulnerable to security exploits. Traditionally, these two requirements have been at odds: network programmers had to pick a language that offered either speed or safety. With the Rust programming language and the Tokio networking library, you can have both. This session shows how Tokio’s zero-cost abstractions can be leveraged to deliver a networking platform that provides expressiveness, speed, and safety without tradeoffs between them.
Learn more about re:Invent 2020 at bit.ly/3c4NSdY
Excellent presenters! Both present with passion. I thank you both too.
I'm implementing some asynchronous networking stuff for an API and I'm totally in love with Tokio.
This is an exceptionally clear talk. Thanks, this is very helpful. I am currently getting into async in Rust.
Long live Rust
Until something better comes along.
@@geordonworley5618 and I think that's beautiful
@@geordonworley5618 it took many decades for us to "have nice things" via Rust. It will be quite a few decades before we will have something better.
Ahoy, ahoy, long may it die!
@@no-defun-allowed very smart comment
Fantastic talk, need more like this!! I fell in love with Rust too!! Never had a dev experience this smooth and joyful.
Great explanation direct from tokio expert. Thank you Sir. Always be healthy. 🙏
Great presentation with great and inspiring presenters.
Rust is so much more interesting since they stripped the runtime .. the idioms def. take getting used to. Great presentation
Mutex in Rust is really cool!
Great talk, very cool.
Good presentation, but the miniredis slides were a bit lazy. For example:
- Don't use std::sync::Mutex inside an async context because it will block the event loop; use tokio::sync::Mutex instead
- Rust won't let you mutate a variable without explicitly flagging it as mutable, so that insert() call will never compile without changing "let locked = ..." to "let mut locked = ..."
Wrong. It is totally fine to use a std::sync::Mutex within Tokio, as long as the lock is not held across any await (yield) points. If you need a lock across yield points, you have to use the tokio::sync::Mutex to prevent a deadlock that could otherwise occur. However, the Tokio Mutex introduces overhead compared to the std Mutex, so std::sync::Mutex should be preferred in the demonstrated scenario.
@@sociocritical You're actually right, it also looks like tokio::sync::Mutex uses a synchronous mutex internally.
@@alanhoff89 From a quick glance at the source code (no guarantee) it seems like Tokio's Mutex (Semaphore) makes use of the parking_lot Mutex.
@@alanhoff89 In a nutshell, the async Mutex uses a low-level exclusive lock primitive with: 1) a non-blocking try-lock function; 2) a facility to register "waker" descriptors for tasks waiting on the lock, and notify the executor to resume polling these tasks once the lock is released.
@@alanhoff89 I guess it wasn't supposed to be detailed. Anyone can read the tutorial (tokio.rs/tokio/tutorial) for more information. Also tokio channels are faster than tokio mutex and should be preferred whenever possible.
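To make the pattern this thread describes concrete, here is a pure-std sketch: the std::sync::Mutex guard is confined to a block so it is dropped before any .await could occur. The tiny `block_on` poller and the `Db` alias are invented for illustration so the example runs without a Tokio runtime; in real code you would use Tokio itself.

```rust
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical shared-state type, mirroring the mini-redis style examples.
type Db = Arc<Mutex<HashMap<String, Vec<u8>>>>;

// std::sync::Mutex is fine inside async code as long as the guard is
// dropped before any .await: scope the lock so it cannot cross a yield point.
async fn get(db: Db, key: String) -> Option<Vec<u8>> {
    let value = {
        let locked = db.lock().unwrap(); // short, synchronous critical section
        locked.get(&key).cloned()
    }; // MutexGuard dropped here, before any .await could run
    value
}

// A no-op waker so we can poll a future without pulling in a runtime.
fn noop_raw_waker() -> RawWaker {
    unsafe fn clone(_: *const ()) -> RawWaker { noop_raw_waker() }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

// Minimal executor: busy-polls the future until it is ready.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Safe: `fut` stays on this stack frame and is never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let db: Db = Arc::new(Mutex::new(HashMap::new()));
    db.lock().unwrap().insert("hello".to_string(), b"world".to_vec());
    let value = block_on(get(db, "hello".to_string()));
    assert_eq!(value, Some(b"world".to_vec()));
    println!("ok");
}
```

The key design point is the inner block around `db.lock()`: the guard's lifetime ends at the closing brace, so even if the function later awaited something, the lock would already be released.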
Great talk!
Wow!!! Well done. 👏🏾
I can't help thinking (and I'm not "throwing shade" at Rust here, plenty of languages and libraries use this model) that `async`/`await` takes something that's basically simple, Promises/Futures, and wraps them up into "clever" "convenience" functions that then make the code more complicated. "Explicit is better than implicit." - PEP 20
I still look back on when I first understood promises over in JS-land... it was like a moment of mystical revelation because I suddenly intoned "Oh... that's what the Haskell people have been going on about all this time... it's a monad... it's a MONAD!"
question: how does Hyper handle URL *segments*?
great video, thanks
i like this video. very clear
The AWS SDK for Rust is still not complete at the time of writing.
Async-std ?
Thank you so much !
How to increase the amount of memory Rust allocates for stack or heap?
I believe it will try to take as much as the OS has available.
@@EngIlya I have 64GB and gifski (pngquant library) panics with an out-of-memory error after just 70 frames. The available memory is definitely not an issue.
@@AlexTuduran It's probably too late but stack and heap sizes for C/C++/Rust programs are controlled by the OS, not the program itself
@@joseduarte9823 It was a library gone rogue that was raising the error (pngquant). It seems like if you throw lots of frames with lots of colors at it, it has a hard time keeping the palette updated and it trips.
you can do that by spawning your code on a thread with a custom stack size, like below
let thread = std::thread::Builder::new().stack_size(stack_size).spawn(your_closure).unwrap();
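A runnable sketch of that approach, assuming the question is about stack overflows in deep recursion (the 32 MiB figure and the `depth` helper are arbitrary examples, not from the talk):

```rust
use std::thread;

fn main() {
    // Request a 32 MiB stack for the worker thread (arbitrary example size;
    // the default thread stack is typically much smaller).
    let stack_size = 32 * 1024 * 1024;
    let handle = thread::Builder::new()
        .stack_size(stack_size)
        .spawn(|| {
            // Deep non-tail recursion that could overflow a small default stack.
            fn depth(n: u32) -> u32 {
                if n == 0 { 0 } else { 1 + depth(n - 1) }
            }
            depth(50_000)
        })
        .expect("failed to spawn thread");
    let reached = handle.join().unwrap();
    assert_eq!(reached, 50_000);
    println!("ok");
}
```

Note that only thread stacks can be sized this way; heap growth is bounded by the OS/allocator, not by a Rust-level setting.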
C and C++ both have a runtime?
One question here: why do we want exclusive access to shared memory (the HashMap in this case) when implementing the Redis GET function?
The HashMap is accessed concurrently. Normal Hashmaps do not allow for concurrent access. An alternative would be to use a R/W Lock. However that might starve writers (to the HashMap).
@@sociocritical Another alternative would be using channels to pass messages to dedicated task or thread.
@@sociocritical It would be ideal to use a concurrent hash table, c.f. Cliff Click's NonBlockingHashMap, so that performance doesn't plummet with any kind of concurrent access.
@@no-defun-allowed Stuff like that is pretty hard to predict. Since yes, locking reduces "p" in Amdahl's law, BUT lock-free HashMap implementations, that most likely make extensive use of CAS operations are not for free too. On x86-64 "LOCK CMPXCHG" is sequentially consistent by default and all in all a relatively expensive operation. Overall the overhead introduced by making the HashMap lock-free (pointer indirections etc. add up too; std::collections::HashMap on the other hand is essentially a highly optimized flat probing table, that speeds up lookups with SSE scans on a metadata array) in the first place can be a lot more expensive than "just" using a Mutex-Lock especially when there is low contention on the Hashmap. On Linux in particular Mutex implementations usually make use of "futex" and this results in one "LOCK CMPXCHG" in user space on low lock contention.
@@eraykaratay9266 Can you elaborate further? From my perspective that would not make any sense. Using a different task or thread for accessing the Hashmap implies that you need a MPSC queue (that serves as the channel), which must allow for concurrent access on the producer side just like the Hashmap. So then you would just shift the problem of thread synchronization from the Hashmap to the queue, which will most likely be more expensive than just accessing the HashMap concurrently in the first place.
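For reference, here is a pure-std sketch of the channel approach being debated above: a dedicated thread owns the HashMap outright, so the map itself needs no lock, and the mpsc channel serializes all access. The `DbCommand` type and `spawn_db` function are invented names for illustration.

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Hypothetical command type: GETs carry a reply channel so each caller
// can block on its own result; SETs are fire-and-forget.
enum DbCommand {
    Get { key: String, reply: mpsc::Sender<Option<String>> },
    Set { key: String, value: String },
}

// Spawn the owner thread. The HashMap is never shared between threads,
// so no Mutex is needed; the channel is the only synchronization point.
fn spawn_db() -> mpsc::Sender<DbCommand> {
    let (tx, rx) = mpsc::channel::<DbCommand>();
    thread::spawn(move || {
        let mut db: HashMap<String, String> = HashMap::new();
        // The loop ends when every Sender is dropped.
        for cmd in rx {
            match cmd {
                DbCommand::Get { key, reply } => {
                    let _ = reply.send(db.get(&key).cloned());
                }
                DbCommand::Set { key, value } => {
                    db.insert(key, value);
                }
            }
        }
    });
    tx
}

fn main() {
    let db = spawn_db();
    db.send(DbCommand::Set { key: "hello".into(), value: "world".into() }).unwrap();
    let (reply_tx, reply_rx) = mpsc::channel();
    db.send(DbCommand::Get { key: "hello".into(), reply: reply_tx }).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), Some("world".to_string()));
    println!("ok");
}
```

As the comment above notes, this doesn't eliminate synchronization, it moves it into the queue; whether that beats a Mutex around the map depends on contention and workload.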
Unda da sea!
Nice
Templated talk: ownership, fearless concurrency... again?
Do something serious and talk about that instead; stop repeating the same cloned talks every year!