Kotlin deals with code colouring in an interesting way: `inline` functions can transparently pass through the `async`-ness to a lambda without knowing about it ahead of time. It basically allows you to use something like `map` with an async function as long as you're inside of one.
One thing I've heard about but have not had the time to look up is "no I/O" crates: crates where the library itself does no I/O (it takes only byte buffers/strings) and is thus sync/async agnostic. My guess is that it is not very good for state-machine types of communication, but it does seem like a great workaround when it works.
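A minimal sketch of that "sans-IO" idea (all names here are made up for illustration): the library only consumes bytes and emits parsed values, so a sync caller can feed it from a blocking read and an async caller from an `.await`ed one — the library never knows the difference.

```rust
// Sans-IO sketch: a line-based protocol parser that never touches a
// socket or file. The caller (sync or async) owns all actual I/O and
// just feeds bytes in; the parser hands back completed lines.
pub struct LineProtocol {
    buf: Vec<u8>,
}

impl LineProtocol {
    pub fn new() -> Self {
        Self { buf: Vec::new() }
    }

    /// Feed bytes from any transport; returns every line completed so far.
    pub fn feed(&mut self, data: &[u8]) -> Vec<String> {
        self.buf.extend_from_slice(data);
        let mut lines = Vec::new();
        while let Some(pos) = self.buf.iter().position(|&b| b == b'\n') {
            // Pull the completed line (including '\n') out of the buffer.
            let raw: Vec<u8> = self.buf.drain(..=pos).collect();
            lines.push(String::from_utf8_lossy(&raw[..raw.len() - 1]).into_owned());
        }
        lines
    }
}

fn main() {
    let mut proto = LineProtocol::new();
    // A partial line yields nothing; the parser just buffers it.
    assert!(proto.feed(b"hel").is_empty());
    let lines = proto.feed(b"lo\nworld\npar");
    assert_eq!(lines, vec!["hello".to_string(), "world".to_string()]);
}
```

The caller decides how the bytes arrive, which is exactly why this style dodges the coloring question.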
The thing is, async file handles like network sockets and files don't actually need to be coupled to the runtime. For example, the ringbahn crate has runtime-agnostic file IO; its examples use the executor from the futures crate just for demo purposes. The "reactor" (i.e. epoll, IOCP) is completely decoupled from the task scheduler in the design of the Rust async ecosystem. But then a bunch of devs got together at tokio and coupled these concepts together for some reason.
The problem is that writing into/reading from intermediate buffers, and consequently allocating them, is pretty much always more expensive than writing directly into a socket/stream (which will generally have a buffer under the hood, but you save one layer of allocations).
@@janoschreppnow3785 Buffer management is a kernel thing dependent on the IO model of the underlying system, i.e. epoll vs IOCP vs io_uring. The tokio maintainers abstracted over all this in a one-size-fits-all, "only use my executor" approach. But it didn't have to be that way.
Most crates do no IO on their own: they accept and produce generic types that implement a certain IO trait, and they call the trait's methods for you, but you pass them the IO object. If those traits are Read, Write and BufRead, you get sync IO; if they are futures::{AsyncRead, AsyncReadExt, AsyncWrite, …}, you get async IO. You are explicitly doing IO by calling a function from the crate and passing it an implementor of the trait, like std::fs::File.
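A tiny sketch of that "caller supplies the IO object" pattern with std's sync traits (function and data names are made up): the function only calls `Read` methods on whatever the caller hands it, so a `File`, a `TcpStream`, or an in-memory buffer all work. A futures-based twin would take `futures::io::AsyncRead` instead.

```rust
use std::io::{self, Read};

// This function does no IO of its own choosing; it just calls methods
// on whatever Read implementor the caller passes in.
fn read_greeting<R: Read>(mut src: R) -> io::Result<String> {
    let mut text = String::new();
    src.read_to_string(&mut text)?;
    Ok(format!("hello, {}", text.trim()))
}

fn main() -> io::Result<()> {
    // An in-memory reader works just as well as std::fs::File here.
    let fake_input = io::Cursor::new(b"world\n".to_vec());
    let out = read_greeting(fake_input)?;
    assert_eq!(out, "hello, world");
    Ok(())
}
```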
IMO it's more accurate to say Go is all async all the time. When you build a Go binary its runtime for garbage collection and async execution is built into it too. Even if you never use the `go` keyword your Go code still runs in one goroutine; you just never spawn more of them. This is like making everything in Rust async by default. It more sidesteps the problem than solves it.
@@lucgeorges4360 Not exactly. spawn_blocking moves the blocking call to a threadpool, while goroutines (even if using a threadpool underneath) are actually yielding, so the blocking doesn't completely block.
@@LtdJorge In the end, both goroutines and spawn_blocking have "true" blocking, only interrupted by the preemptive multitasking runtime, right? The only difference is Rust uses the OS's built-in preemptive multitasking, while Go brings its own (which is more lightweight).
love the way PapaPrime reads, flip-flopping between "human being" and "Microsoft Sam". Really knows what his viewers are looking for. Keeps everyone on their toes, himself included.
This is such a good description of this content. Lets us all do stuff while listening rather than trying to read off Prime's screen or find the article ourselves. Really nice.
In addition to improved crate-unification flexibility, another interesting idea might be to have macros that can output variants in different namespaces. I.e., instead of generating endpoint_sync and endpoint_async, more language support might allow rspotify::sync::endpoint and rspotify::async::endpoint. Being able to make distinctions via imports/using headers might be less cumbersome than peppering inline _[a]syncs everywhere.
I solved this in a project internally where I work: the API produces a command struct that contains endpoint, body, headers etc., with generics for body and response that must impl ser/de, which is then passed to either a sync or async client. It feels a little weird to use at first, but it works reasonably well.
I'm going to spend a few months learning how to write a database, then spend the rest of the year producing databases in different languages. There must be more. MORE.
@useruser-tc7xx That's how people get into Rust in the first place: they follow random hype, then they feel invested because of the sunk cost and feel the need to push it, even though it doesn't really take root in the job market.
@@useruser-tc7xx Rust can't implement proper virtual threads by design, so it's stuck with async/await for the foreseeable future. Virtual threads require a garbage collector, which Rust doesn't have. So while your grandpa can now write clean async code with no async/await and with none of the typical problems of async code in Java, you and your trendy Rust will continue inventing more convoluted constructs to help you cope with async code mixing with non-async code.
@@thesenamesaretaken not really. A garbage collector is a thing that runs parallel to your program to clean up unused bits. You don't do anything to launch it, and in fact you can't launch it, it has a life of its own. Which is why the destructors of Java objects are mostly useless - they don't correspond to the lifecycle of your code and your application, they correspond to the whims of the GC. But reference counters react in response to your actions and your code. They don't have an entirely separate thing scouring the memory in search of unneeded parts. So you can't leave your async bits hanging, you have to handle their lifecycle in code in some way, meaning you can't mix async and sync code. But when you have the GC, the lifecycle of async pseudo-threads can be completely detached from your code, and you get to use them as if they were sync with no need for red/blue functions
Write a base sync library, then extend it with an async library that adds the functionality to the base. If you don't need async, you use only the base; if you need async, you use both libraries.
Oh, regarding sync/async code variations, I think a look at the "keyword generics initiative" post on the Rust-lang blog would be a good read. It is a pretty interesting, and I would say novel, idea of bringing compiler-generated variations for sync/async code (and other types of discriminative keywords as well).
@@diadetediotedio6918 ... the additional syntax is the problem. You create the problem, and then "solve" it by adding yet another feature to the already bloated language. Fantastic.
Yep, as an Elixir dev this makes me grateful that we can switch from async to sync pretty easily whenever the need arises. I think Rust is awesome, but it's clearly not the best at concurrency right now. I agree with Primeagen's conclusion that it's better to focus on sync and let the client handle async as needed.
Correct me if I'm wrong, but I don't agree that you should default to making a sync version of the library and let the library's consumers deal with calling it asynchronously, because as soon as you have a sync function, there's no other option for the caller than to have an entire thread blocking on the operation until it finishes. Meanwhile if you just use async functions, the caller can simply use something like smol::block_on to turn it into a synchronous call. Especially with something like the http-client agnostic implementation I feel like that shouldn't be an issue at all.
As a .NET developer learning Rust, all of this is fascinating. People recommending wrapping a sync implementation with async convenience wrappers seems like either complete insanity or some Rust magic I don't understand. If the IO is sync, your code is not going to be async just because you slap a promise on top of it... Right?
31:56 Mogueno would like a role model time scale to compare himself to. In my case i started typing in pop sci magazine listings in QBASIC in 1996, made my own local website in HTML + JS in 1997, made a QuakeC mod in 1998, learned Iptscrae and PHP in 1999, Java in 2000, C++ in 2001, etc.
QuakeC... damn, that's a throwback :D IMHO one of the most genius moves ever: making a render engine, then making a runtime that runs your "script" that then becomes the actual game. And bam, now anyone has the capability to do anything within the engine's reach. Might sound commonplace now, UE/Unity/Godot/whatever, but id did it back in the 90's...
@@ErazerPT Wonderfully simple as well. Just set self.think and self.nextthink to do all your async stuff with custom timing. I've also looked at Unreal Tournament's scripting, but found the state engine too complex.
He wasn't saying no-one was asking for the feature - he lists clients for both async and sync libs. Just that no-one had hit the problem of trying to use both versions in one codebase.
I would build the sync version as a core package and 2 more packages. One of them would be a tokio wrapper and the other would be a reqwest package. That way the user can use the meta package they need and you won't duplicate your code.
The idea that people have so many active requests to spotify that they need them all freeing threads is wild. I bet maybe _one_ of their users is really in this boat.
The reason Go is easy is because it is a garbage-collected language. That gives a lot of language simplicity and safety automatically, but also a serious performance cost. The point of Rust is extreme efficiency and real safety at the same time, but it is indeed a bit more difficult. Of course I would like it if they could do something to solve the color issue at the Rust language level, but then I also hope they consider a way to compile actual sync functions next to the async ones, and not functions that just look sync while silently still being state machines with a poll function. They are also doing something in the Zig language with colorless functions, but if I am right, they do not eliminate the state machine in the sync usage case either.
Go doesn't have that big a perf impact, but the runtime does come with a cost. Rust is trying to solve this though; I recommend looking into the 'keyword generics initiative'.
I have the same problem with one of my C# libraries. It is a custom transaction-ledger database. It can be embedded into a game for storing the game's economy, or be used as a standalone server for processing transactions in a cryptocurrency market, etc. It runs super fast as a single server, and for multi-server scenarios it supports distribution of records and clustering. It has locking mechanisms for handling transactions involving accounts distributed across different servers, and this creates a requirement for the whole system to be async. I have two implementations of it right now. The first is an async implementation that supports all kinds of distributed usage. The other is a synchronous implementation that also has heavy heap-usage optimization. I wish I could convert one into the other as part of the build process, or abstract it somehow.
Went through something similar with a python project last year: I made a package that includes a DB model system, query builder, an ORM, a migration management tool, code generators, and a bundled coupling to the standard sqlite library (all opt-in features); the first feedback I got was "can you use it async?" So I started by converting it to async and made the bundled coupling use an async version of sqlite, then I tried wrapping the async stuff to use it synchronously. The async overhead for using sqlite was just too extreme -- sync writes took 90µs on average anyway and sync reads were like 20µs, while the async overhead added another 30-50µs to each -- so I ended up with duplicated code. The guy who requested it said after I had spent over a week on the async implementation that he wasn't going to use it. Oof.
If I were the maintainer, I'd either do only sync, like Prime, or make sync plus an async version that is just a call to the sync version on hardware threads. But I'd only do the latter if there were a lot of demand for it. The engineering effort to make it work would be humongous.
Javascript has the advantage of the promise API. As long as your top level function is async (or you .then() the promise) you can use any number of sync functions in between your other async (or promise returning) function.
I don't understand why promises are used for multithreading. In my view they're only practically usable for a limited number of problems. But they're used for everything.
It's useful whenever the amount of concurrent work you want to do isn't sufficiently higher than the cost of switching/starting up an OS thread. And that happens to be the case for pretty much all of IO and user interaction. Why spin up a thread just to make a network request and read a handful of KiBs out of a buffer? It's hard to put into words just how absurdly wasteful that is, relative to the amount of actual work being done. Honestly, the real question should actually be asked the other way around: when do you need a real thread? And it turns out you just don't need them that often, outside of actually managing the runtime or doing compute-heavy work while still being interruptible. It's really no wonder why a language like Rust has very much embraced async, and why something like Go doesn't even give you the option to spawn real threads.
@@JaconSamsta It's only wasteful if you are doing it enough that you care. If a context switch takes 12 microseconds and an async switch takes 3 microseconds, nobody but a heavily-loaded server is going to care.
@@darrennew8211 Just because you don't care about something being wasteful doesn't mean it ceases to be wasteful. And we aren't talking about roughly the same order of magnitude; we are talking about the difference between (essentially) a function call and a context switch (and potentially setting up or cleaning up a thread). That's massive, no matter how you try to slice it. But yeah, you can always choose not to care about performance. That's basically why languages like JS or Python exist, because people valued other aspects of a language more. "I don't care" is a perfectly valid reason to do something sub-optimal, and if you just need to build something that handles a couple hundred requests a second, it will hardly matter how poorly your code performs. It's just not very surprising that people building high-performance networking have the goal of producing as little overhead as possible. And in that case, using non-blocking calls should very much be the default. And look at languages like Go. They literally made cooperative coroutines the default! If you can afford the trade-offs a language like Go makes, you should certainly be using it instead. You can still use Rust if you prefer it, just don't be surprised when "gotta go fast"-lang wants to go fast and compensates by decreasing your precious DX in return.
The worst part is, the whole reason for async to exist is literally just optimization. I honestly believe just forking and joining threads is much simpler and easier, obviously that's not an option for JS, but everywhere else you can just fork/join threads. But, of course, the issue with doing that is the thread startup time and memory overhead... but then you can sidestep those issues by running green threads on top of a thread pool (ala Go) - I don't understand why async/await won instead of the Go model (other than - because JS did it)
I really dislike async-await since I feel it hides too much, requires too much, and gives too little benefit. I much prefer boost::fiber-like stackful fibers, or Erlang-like actors, where the former mostly leaves your sync code untouched and only requires changes at the "borders"/"edges" of code / control flow, while the latter encourages less coupling and can potentially result in less memory waste than callbacks / async-await.
I'm actually writing some software that uses this library... the maybe-async wrapper is a complete pain, especially because I tried to use it with async and sync in the same codebase (conditionally compiling WebAssembly, fml). I ended up abandoning it completely, just using their data-model sub-crate and making the HTTP requests myself. Quite an obscure library imo; interesting to see it pop up on my youtube feed.
11:14 But since the code itself is synchronous (that's why we are using the blocking version), B can't get called before A is done, right? So how is that a problem?
The coloring problem is an interesting one. I'm used to using languages like Elixir, which completely sidestep this problem by putting everything into a process instead. Go has a similar concurrency model because you have multiple call stacks which you can switch between via goroutines. I believe Ruby also sidesteps this problem with its fibers, and so does C#.
@@defeqel6537 Synchronously waiting on tasks in C# can cause deadlocks and block threadpool threads. In purely synchronous code it's a bit less of a big deal, but that still depends on the libraries you are using. If you use libraries that don't do their ConfigureAwait stuff right, you can experience actual problems in GUI applications.
The coloring "problem" isn't really solved but sidestepped, as you say. To add more context: the coloring "problem" exists because a function that returns a future has a different signature than a function that returns a plain value. That seems obvious, but in systems like Java's Project Loom and Golang, the non-blocking, non-colored concurrency doesn't block the thread but does block the caller. So to not block the caller you need futures and/or messaging (but guess how that gets implemented, haha). With Go you might want the caller to not have to deal with a return channel, so you could end up with both a caller-blocking API AND a channel-returning API when writing certain libraries. Async/await allows you to write non-blocking (non-caller-blocking too!) imperative code. The Loom team is already working on "structured concurrency" to help deal with the DX angle. But now you've got an entire weird system of writing code that looks a little like callback hell if you squint, or some complex state machine you have to construct. It's horses for courses, but I personally enjoy message passing (a la Golang channels etc.) for writing smaller infra processes, and async/await for writing imperative business logic that can be organized and composed into a set of async "workflows". But message passing is also slow enough compared to traditional concurrency/parallelism patterns using semaphores etc. that the most performant projects in those languages fall back on those more traditional patterns. For example, Mozilla Heka and databases written in Go eschew channels altogether, or at least in the most performance-sensitive subsystems.
The tokio way to do this without the global mutex is to just use an actor/daemon that wraps the API instead of a mutex. Have the thread that initiates the request send a message to the actor and block until it gets a response, just like you would in Erlang. The actor can use async code internally, and certainly no Arc; you just clone channel producers and pass a Sender, while for the return value tokio::sync specifically provides oneshot channels. When all the producers are dropped, the channel closes and the daemon dies as expected. It's a bit boilerplate-heavy to write message impls depending on how wide your API is, but if you have a wide API that needs to be converted to sync, then that is your actual problem imho.
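The shape described above can be sketched with std threads and channels (the tokio version would use tokio::sync::mpsc plus a oneshot channel per request; message and function names here are invented for illustration). Each message carries its own reply channel, and callers block on it:

```rust
use std::sync::mpsc;
use std::thread;

// Each request message carries its own reply channel, standing in for
// tokio's oneshot channels in this sync sketch.
enum Msg {
    GetTrack { id: u32, reply: mpsc::Sender<String> },
}

fn spawn_actor() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel::<Msg>();
    thread::spawn(move || {
        // The actor owns the "API client" state, so no Mutex is needed.
        for msg in rx {
            match msg {
                Msg::GetTrack { id, reply } => {
                    let _ = reply.send(format!("track #{id}"));
                }
            }
        }
        // All senders dropped -> loop ends -> actor dies, as expected.
    });
    tx
}

// A sync facade: send the message, then block until the reply arrives.
fn get_track(actor: &mpsc::Sender<Msg>, id: u32) -> String {
    let (reply_tx, reply_rx) = mpsc::channel();
    actor.send(Msg::GetTrack { id, reply: reply_tx }).unwrap();
    reply_rx.recv().unwrap()
}

fn main() {
    let actor = spawn_actor();
    assert_eq!(get_track(&actor, 7), "track #7");
}
```

Cloning the returned `Sender` gives any number of threads sync access to the single actor.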
I mean, if it's really just a matter of removing async and await, wouldn't the job be done by a simple copy/paste + automated *diff* check? or automated async / await removal? (genuine question, I don't know all the detailed implications of async programming in rust)
The most obvious thing was maybe don't implement a useless feature request? Web APIs are async; if an end user wants a blocking API, they can wrap it themselves.
It gets even worse if you add a GUI and an API to the mix, but it's always solvable, so not too bad. The fun part about Rust is you have to invent or combine architectures for a lot of problems. I also thought using listeners would solve this problem on my hobby project 😂 😂 . There is always a simple way in Rust in the end though, you just need to invent it 😂
There should be a minimal Future "runtime" that only supports block_on, just for cases like this. Wouldn't it be almost trivial to implement? Edit: I did it. It was trivial to write, but I haven't tested it. Published as its_ok_to_be_block_on crate.
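It is indeed small. Here's a std-only sketch of such a minimal `block_on` (not the commenter's crate, just one plausible way to do it) built on `std::task::Wake`: park the calling thread between polls, and have the waker unpark it.

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the blocked thread when the future can progress.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Poll the future on the current thread, parking between polls.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    assert_eq!(block_on(async { 1 + 2 }), 3);
}
```

The caveat is that this only drives the future; IO futures from tokio and friends still expect their own reactor to be running, which is why "just block_on it" isn't always enough.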
I started to think lately that there is no "uncolored function" at all in any language; you will be compromising either with a runtime or with some distinction between the "worlds". It is this way for Rust, Go, C#, Kotlin, all languages.
Yes, there will always be some form of coloring, but it's more a question of how much it breaks the upstream and what escape hatches a language provides to make these changes less cumbersome. In Rust it often requires you to think ahead of time and apply premature optimizations just to avoid major changes in the future when you _do_ need those optimizations. And Rust doesn't offer a ton of escape hatches in a lot of cases. Just adding a generic at lower level will bubble up to the top and any code that previously relied on it not being generic, now also have to be generic; and in some cases entire structs have to be colored just for that single use case; and it bubbles up further.
@@dealloc And my question is: what is the solution to this problem then? It just sounds to me like a compromise between performance / low-level control / syntax coloring. And I think you cannot achieve all of these at the same time reasonably, because all solutions will end up clashing at some point. Even if you use raw threads you will still need to handle synchronization primitives manually, and this will blow up in the code as you need to access values that are locked behind them. So I think this is more of a "no free lunch" problem than the big monster people usually make colored functions out to be.
Maybe a crate such as rspotify is... stupid? How long should it take me, as someone using the Spotify Web API, to produce an access layer suitable for my purpose, in compliance with my choice of async behavior? Or maybe the crate should focus on defining the request body and how it's serialized, and not on sending an HTTP request?
There is a reason people rewrite it in Rust more often than they write it in Rust: it's easier to iterate in simpler languages (Go) and convert to Rust when you know at least 75% of the features. And you get the capital to hire 200k/yr Rust devs if you finish the product first.
how many additional servers can you purchase for the difference in 'normal' devs vs. rust devs. I assume that you only rewrite in rust when you have a crazy amount of requests where it actually makes sense - i.e. you have a product with millions of users.
Then why not iterate in Go, then fine tune / rewrite bottlenecks in a high performance tool that your top Go devs can use with minimal learning curve ? Save that 200k/yr and avoid a tonne of arguments
Honestly, the explanation for colored functions made no sense to me with red/blue but makes perfect sense with sync/async. Sometimes, it's not a good idea to try to popularize a concept. That might have helped other people to understand though.
Does any language do the opposite of async/await? As in, all methods are synchronous, but you can background them. E.g. in a method, you may want to make two simultaneous calls to some external API, so you background them, then stop and wait for both. I mean language support, not just Task.WaitAll(tasks). I think the closest may be channels in Go?
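In Rust, at least, plain OS threads give exactly that shape: everything stays sync, you just fork the two calls and join both. A sketch with std's scoped threads (the "API calls" are faked stand-ins):

```rust
use std::thread;

// Two "simultaneous external API calls", faked as ordinary sync functions.
fn fetch_user() -> String {
    "user".to_string()
}

fn fetch_orders() -> String {
    "orders".to_string()
}

fn main() {
    // Fork both calls onto background threads, then stop and wait for both.
    let (user, orders) = thread::scope(|s| {
        let u = s.spawn(fetch_user);
        let o = s.spawn(fetch_orders);
        (u.join().unwrap(), o.join().unwrap())
    });
    assert_eq!((user.as_str(), orders.as_str()), ("user", "orders"));
}
```

`thread::scope` (Rust 1.63+) guarantees both threads finish before the scope returns, which makes this fork-then-join pattern safe to use with borrowed data too.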
Maybe it's my inner functional bro speaking, but you could just pass in a function that executes the request, or make the library a request builder instead of a request executor and leave it to the caller, or have single execute and execute_async functions that take a built request and use a generic type for the response. Or just make people generate their own client libraries with grpc/openapi/asyncapi/etc.
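The request-builder idea can be sketched in a few lines (all names and the URL are hypothetical, not rspotify's actual API): the library only describes the HTTP call as data, and the caller executes it with whatever client, sync or async, it prefers.

```rust
// The "library" exports data describing the call, not the IO itself.
pub struct Request {
    pub method: &'static str,
    pub url: String,
    pub body: Option<String>,
}

// What a library endpoint would look like under this design.
pub fn get_album(id: &str) -> Request {
    Request {
        method: "GET",
        url: format!("https://api.example.com/v1/albums/{id}"),
        body: None,
    }
}

fn main() {
    let req = get_album("123");
    // A sync caller might hand this to a blocking HTTP client, an async
    // caller to an async one; the library doesn't care either way.
    assert_eq!(req.method, "GET");
    assert!(req.url.ends_with("/albums/123"));
}
```

This is essentially the sans-IO tradeoff again: the library gains sync/async neutrality, the caller takes on a little transport boilerplate.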
You can generalize it up to a certain level, but generalizing it also means that you'll make usage of the library more verbose. It's a tradeoff at the end of the day.
@@dealloc if you are creating a public library your goal should be to balance streamlining the code with flexibility. I would say you should be aiming to get the end user 90% of the way then let them finish off the last 10% to fit their individual needs. Having a little bit more verbosity in order to create more generalized code seems like an acceptable trade-off in most libraries where you don't know how it will be used.
@@evancombs5159 As I said it's a tradeoff; either your library provides consistent API for ease of use, or it ends up requiring boilerplate. It depends on the sort of library and DX you want to provide. Not all libraries are equal.
Why not provide a default async runtime but allow the library user to provide one as well? If we're being real, why would you expect a web API library to not be async? It just fundamentally is; it's web requests. Either slap on a tokio main, or some other macro that simply adds block_on() to every async call.
If you don't await multiple asyncs at once, or do some random concurrency thing in your code (start an async coroutine, continue some task, then await that coroutine), why would you even want to make the whole thing async?
Just don't support sync, and if someone wants to use it sync, let them use the futures library to force it. I'm pretty sure I don't have a single project without tokio.
Wait, I'm a bit confused. Why does the library need to have async? I thought that it had async because it had actions that could be awaited, meaning that running it concurrently would be more efficient. Why create an async implementation if there are no IO operations that require waiting?
If the async executor is multithreaded, as for example in tokio, then async means you can await multiple futures concurrently, effectively having multithreading at your disposal. That's why Rust's futures usually need Sync, Send, Pin, etc.
What happens if you create two crates, one async and one sync? Both just share the same single codebase that uses the maybe-async crate, with separate flag values for maybe-async.
I love the idea of lunatic, which is a wasm runtime mimicking the process architecture of Erlang. Every process is a wasm thread, which is cheap, and scheduling is cooperative like Go, where blocking syscalls are substituted by the runtime, resulting in uncolored async code! Too bad it's dead.
is it just me or does having do_thing() and do_thing_sync() not seem like that big of a deal?? Like, just do the fixed version of maybe_async, but only append a suffix to the sync versions. Literally the first thing I would have tried is making a macro to do that, lol.
Writing a sync library makes it more or less unusable in an async context though, if an application is IO-bound and perf-sensitive. Not great for an API wrapper, I would say (sync inherently limits the number of concurrent calls to the number of OS threads; async does not).
So async code is contagious? I might try to just stick to threads and channels then, haha. Async is mostly for functions that might have waiting in them and should give way, right?
Async is for when you have so many things being handled by your server that having one thread each is too much overhead. If you're on the *client* end, I can't imagine any reason to use async.
I'm not sure why people are bothered by Async like this. If the best solution to the problem you're solving involves solving a subproblem asynchronously, why not solve your problem asynchronously?
@@jpratt8676 I think its more that I've never needed to use concurrency really, and whenever I've needed parallelism, its usually for many heavy independent tasks, so par_iter().map() feels like the more direct and simple thing. At some point I might need it, and at that point I'll experiment with it more.
All the negative things will be said about Go, however the language has made the world a better place concerning simplicity and speed. Rust is for the PhDs and nerds. I am not one of them, and I gladly accept I have skill issues with it. I find it too bloated and just complex. Looking forward to Zig 1.0.
Is this whole async business not another example of our attitude to always think and work at the wrong abstraction level? And no matter at which level we do that, we don't think it through and we don't finish the job. The problem with async is that some parts of our technology operate synchronously (CPUs) and other parts operate asynchronously (IO). The former uses instruction pointers and stacks while the latter uses interrupts. This did not change in half a century. When async IO was introduced in UNIX, this was a misnomer, because IO was already asynchronous. Async IO just allowed software not to have to wait for asynchronous IO to complete and thus pass CPU cycles to other processes or threads. Or in other words, it makes it possible not to have to rely on the operating system abstraction of synchronicity (processes or threads) and let the process or thread use its own (e.g. the programming language running stuff in the thread or process). Why can you not use await f() in a synchronous method? In essence because it's too hard for language implementers to support that feature. They leave it up to application developers to solve the problem, something that they can't really do, given that they are in a much worse position to solve that difficult problem. Languages like C do not have async or await (not when I last looked). They're honest and don't even try to fix the problem, but they also don't add to it by providing a half-baked solution that only works if you only do sync or only async, which you actually can't really do unless you work in a very confined context. As a consequence you have to understand how you can and want to handle IO. As software running in a thread or process, you have to talk to the OS to do that, and thus you need to understand how that works. You have a plethora of options available, but you also have to handle just as many buggy implementations.
You need to understand concurrency and learn how not to deadlock, what is thread-safe, and how to semaphore. I keep wanting to learn Rust, but whenever I make an attempt to look at it more closely, I come across something that makes Rust extremely unattractive. All the good stuff, or at least most of it, I know from other languages. There is no feature that I know of that other languages did not already come up with. But Rust collected a whole lot of the good stuff. That should make it really sexy. But to me it seems as if it also collected all the bad stuff from all over the place. The syntax from Perl. The attitude from SCO (in the project). Having to struggle with async/await and how to handle it from sync code, as in this example, is a native JavaScript problem. There is no async code. Code is always synchronous; IO is asynchronous.
Oh man, if only there was a feature in literally every modern language that'd allow us to have the same function names, which would mean we avoid the whole 'copying all the tests' problem. Good thing though that the omnipotent Rust foundation decided it's a bad idea (same as globals), so we can gladly copy-paste code over and over again ❤
Although wait, can't they do that anyway? Since they are asking the user to pass an async provider as a function parameter, why not just slap in an "if(env=null){skip_async}"?
@@diadetediotedio6918 The problem mentioned in the article was that going with the simplest solution of just having a copy of each function was bad, because it required manually making sure that both implementations stayed in sync. All that followed were more and more complex solutions, but if the problem of keeping the async and blocking implementations the same (and tested) were resolved, then you could just fall back to that. And with name overloading (or just getting the effect of overloading by passing null), you could keep all your logic in one function, without needing to copy either logic or tests. So that would solve the whole problem, no?
@@xeamek99 Bro, you understand that the question is not about arguments but about unmatchable function signatures, right? What I'm saying here is that this does not make sense; there is no "null" to pass, in any language at all. C# has function overloading and still has the same problem, literally because the compiler cannot magically infer these things: when a compiler with function overloading matches a function, it matches based on arguments, so the actual function modifiers would still be a problem.
Rust is scaring me: it's safe, yeah, but it is so overly complicated that I wonder if there is a way to make it saner. Basically a safe C++, and everyone LOVES C++. /s So sad.
Safe… if you don't consider buffer over-reads from speculative execution to be a problem. Safe Rust has the same vulnerability as all the other compiled languages, so you have to wonder what the point is.
9:10 I don't think people realize how many globals are used in the software they are using... and globals aren't the devil. They don't realistically affect performance, and while they should be avoided for cases where they don't make sense in order to keep the code clean and sustainable, if you think your only option is to use a global, it's probably correct.
There are more Flutter state management libraries than there are databases.
Lmao 😂
That is kind of true, but a lot of them are based on the BLoC pattern
@@draakisback And a lot of databases are SQL in a shiny coat
@@kmp3e A lot of databases are just modded PostgreSQL as well.
@@kmp3e sure, it's basically just a case of someone reimplementing the same stuff with minor changes.
Kotlin deals with code colouring in an interesting way: `inline` functions can transparently pass through the `async`-ness to a lambda without knowing about it ahead of time. It basically allows you to use something like `map` with an async function as long as you're inside of one.
Also, runBlocking { mySuspendFun() }
yep, because the "inline" function modifier is basically a copy and paste, so the actual code of the function gets inlined at the call site
It does not really deal with it, though, because you still need `suspend` on functions and it still propagates.
you can also just wrap async stuff in runBlocking and you are good
One thing I've heard about but have not had the time to look up is "no I/O" crates. Crates where the library itself does no I/O, (takes only byte buffers/strings) and thus is sync/async agnostic. My guess is that it is not very good for state machine types of communication, but it does seem like a great workaround when it works.
The thing is, async file handles like network sockets and files don't actually need to be coupled to the runtime. For example, the ringbahn crate has runtime-agnostic file IO; its examples use the executor from the futures crate just for demo purposes. The "reactor" (i.e. epoll, IOCP) is completely decoupled from the task scheduler in the design of the Rust async ecosystem. But then a bunch of devs got together at tokio and coupled these concepts together for some reason
The problem is that writing into/reading from, and consequently allocating, intermediate buffers is pretty much always more expensive than writing directly into a socket/stream (which will generally have a buffer under the hood, but you save one layer of allocations).
@@janoschreppnow3785 Buffer management is a kernel thing, dependent on the IO model of the underlying system, i.e. epoll vs IOCP vs io_uring. The Tokio maintainers abstracted over all this in a one-size-fits-all, "only use my executor" approach. But it didn't have to be that way.
Most crates do no IO on their own: they accept generic types and produce generic types that implement a certain IO trait, and they call the trait's methods for you, but you pass them the IO object. So if those traits are Read, Write and BufRead, you get sync IO; if those traits are futures::{AsyncRead, AsyncReadExt, AsyncWrite, …}, you get async IO. You are explicitly doing IO by calling a function from the crate and passing it an implementor of the trait, like std::fs::File.
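The "you bring the IO object" pattern described above can be sketched with std alone: the function below is generic over any `Read` implementor and never opens anything itself (`read_greeting` is an illustrative name, not a real library API).

```rust
use std::io::{self, Read};

// Generic over any Read implementor: the caller decides what the IO
// object is (a file, a socket, or an in-memory buffer as in main).
fn read_greeting<R: Read>(mut src: R) -> io::Result<String> {
    let mut buf = String::new();
    src.read_to_string(&mut buf)?;
    Ok(buf)
}

fn main() -> io::Result<()> {
    // An in-memory byte slice stands in for std::fs::File here;
    // both implement Read, so the same function accepts either.
    let data: &[u8] = b"hello";
    assert_eq!(read_greeting(data)?, "hello");
    Ok(())
}
```

An async-capable library would do the same thing but bound on `futures::AsyncRead` instead; the library's own logic stays identical either way.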
I much prefer Go's take on async: all of the library code is synchronous, and the caller can call your function in a goroutine if they want async
You can achieve the same with a spawn_blocking in Rust
IMO it's more accurate to say Go is all async all the time. When you build a Go binary its runtime for garbage collection and async execution is built into it too. Even if you never use the `go` keyword your Go code still runs in one goroutine; you just never spawn more of them. This is like making everything in Rust async by default. It more sidesteps the problem than solves it.
@@lucgeorges4360 Not exactly: spawn_blocking moves the blocking call to a threadpool, while goroutines (even if using a threadpool underneath) actually yield, so the blocking doesn't completely block.
@@LtdJorge In the end, both goroutines and spawn_blocking have "true" blocking, only interrupted by the preemptive multitasking runtime, right?
Only difference is Rust uses the OS's built-in preemptive multitasking, while Go brings its own (which is more lightweight)
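A std-only sketch of the idea being discussed: move a blocking call onto its own OS thread so the calling thread stays free, which mimics the spirit of tokio's spawn_blocking without using tokio at all.

```rust
use std::thread;

fn main() {
    // Hand the blocking work to a dedicated OS thread.
    let handle = thread::spawn(|| {
        // Stand-in for a blocking call, e.g. synchronous file IO.
        (1..=10).sum::<u32>()
    });

    // ...the current thread could do other work here...

    // Join blocks this thread until the worker finishes, relying on
    // the OS's preemptive scheduling rather than a userspace runtime.
    let result = handle.join().expect("worker thread panicked");
    assert_eq!(result, 55);
}
```

The difference from a goroutine is exactly the one stated above: nothing here yields cooperatively; the OS scheduler does all the interleaving.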
love the way PapaPrime reads, flip-flopping between "human being" and "Microsoft Sam". Really knows what his viewers are looking for. Keeps everyone on their toes, himself included.
This is such a good description of this content. Lets us all do stuff while listening rather than trying to read off Prime's screen or find the article ourselves. Really nice
In addition to improved crate-unification flexibility, another interesting idea might be to have macros that can output variants in different namespaces. I.e., instead of generating endpoint_sync and endpoint_async, more language support might allow rspotify::sync::endpoint and rspotify::async::endpoint. Being able to make distinctions via imports/using headers might be less cumbersome than peppering inline _[a]syncs everywhere.
I solved this internally in a project where I work: the API produces a command struct that contains endpoint, body, headers, etc., with generics for the body and response that must impl ser/de, and it is then passed to either a sync or async client. It feels a little weird to use at first, but it works reasonably well.
I'm going to spend a few months learning how to write a database, then spend the rest of the year producing databases in different languages. There must be more. MORE.
😂😂
You are one evil human 😀
Thanks for fixing my Rust FOMO.
Don't blindly follow what streamers say. It's entertainment first and foremost.
@useruser-tc7xx That's how people get into Rust in the first place, they follow random hype then they feel invested into the sunk cost and feel the need to push it, even though it doesn't really take root in the job market.
@@useruser-tc7xx Rust can't implement proper virtual threads by design, so it's stuck with async/await for the foreseeable future. Virtual threads require a garbage collector, which Rust doesn't have. So while your grandpa can now create clean async code with no async/await and with none of the typical problems of async code in Java, you and your trendy Rust will continue inventing more convoluted constructs to help you cope with async code mixing with non-async code
@@NJ-wb1czRc and Arc are garbage collection in a trenchcoat
@@thesenamesaretaken not really. A garbage collector is a thing that runs parallel to your program to clean up unused bits. You don't do anything to launch it, and in fact you can't launch it, it has a life of its own. Which is why the destructors of Java objects are mostly useless - they don't correspond to the lifecycle of your code and your application, they correspond to the whims of the GC.
But reference counters react in response to your actions and your code. They don't have an entirely separate thing scouring the memory in search of unneeded parts. So you can't leave your async bits hanging, you have to handle their lifecycle in code in some way, meaning you can't mix async and sync code. But when you have the GC, the lifecycle of async pseudo-threads can be completely detached from your code, and you get to use them as if they were sync with no need for red/blue functions
Write a base sync library, then extend it with an async library that adds that functionality on top of the base. If you don't need async, you use only the base; if you need async, you use both libraries.
Oh, regarding sync/async code variations, I think a look at the "keyword generics initiative" post on the Rust-lang blog would be a good read. It is a pretty interesting and, I would say, novel idea of bringing compiler-generated variations for sync/async code (and other kinds of discriminating keywords as well).
@@pureconex Why "no"? It literally solves the problem
@@diadetediotedio6918 ... the additional syntax is the problem. You create the problem, and then "solve" it by adding yet another feature to the already bloated language. Fantastic.
Erlang developers can't comprehend
Yep, as an Elixir dev this makes me grateful that we can switch from async to sync pretty easily whenever the need arises.
I think Rust is awesome, but it's clearly not the best at concurrency right now. I agree with Primeagen's conclusion that it's better to focus on sync and let the client handle async as needed.
99.99% of applications don't need shared memory for performance reasons. Shared memory is just an attack vector for bad actors
Correct me if I'm wrong, but I don't agree that you should default to making a sync version of the library and letting the library's consumers deal with calling it asynchronously, because as soon as you have a sync function, the caller has no option other than having an entire thread block on the operation until it finishes.
Meanwhile if you just use async functions, the caller can simply use something like smol::block_on to turn it into a synchronous call. Especially with something like the http-client agnostic implementation I feel like that shouldn't be an issue at all.
This exactly! Thank you, I thought I'd completely misunderstood something.
As a .NET developer learning Rust, all of this is fascinating. People recommending wrapping a sync implementation with async convenience wrappers seems like either complete insanity or some Rust magic I don't understand. If the IO is sync, your code is not going to be async just because you slap a promise on top of it... Right?
Just commenting so I can hear someone's response. I'm also a .NET dev learning Rust.
I hear Prime in my head whenever I see or hear the word Tokyo, and if that isn't a sure sign I need therapy I don't know what is
31:56 Mogueno would like a role-model timescale to compare himself to. In my case I started typing in pop-sci magazine listings in QBASIC in 1996, made my own local website in HTML + JS in 1997, made a QuakeC mod in 1998, learned Iptscrae and PHP in 1999, Java in 2000, C++ in 2001, etc.
QuakeC... damn, that's a throwback :D IMHO one of the most genius moves ever: making a render engine, then making a runtime that runs your "script" that then becomes the actual game. And bam, now anyone has the capability to do anything within the engine's reach. Might sound commonplace now, UE/Unity/Godot/whatever, but id did it back in the 90s...
@@ErazerPT Wonderfully simply as well. Just set self.think and self.nextthink to do all your async stuff with custom timing.
I've also looked at Unreal Tournament's scripting, but found the state engine too complex.
I couldn't imagine spending nine months trying to implement a feature no one was asking for. I'm sure he learned a lot, though!
He wasn't saying no-one was asking for the feature - he lists clients for both async and sync libs. Just that no-one had hit the problem of trying to use both versions in one codebase.
I wish Prime would read some blog posts by the people who created Rust async that explain the design decisions behind it...
"I have fearful concurrency 😂😂"
0:45 Priceless
The moment it *tings* in the brain that it's the right thing to tweet
I would build the sync version as a core package and 2 more packages. One of them would be a tokio wrapper and the other would be a reqwest package. That way the user can use the meta package they need and you won't duplicate your code.
Feels like answering the feature request with "No, what would be the benefit?" would save everyone a lot of pain
The idea that people have so many active requests to spotify that they need them all freeing threads is wild. I bet maybe _one_ of their users is really in this boat.
The reason Go is easy is because it is a garbage collected language. That gives a lot of language simplicity and safety automatically, but also a serious performance cost. The point of Rust is extreme efficiency and real safety at the same time, but a bit more difficult indeed.
Of course I would like it when they could do something to solve the color issue at Rust language level, but then I also hope they consider a way to compile actual sync functions next to the async ones, and not functions that just look like they are sync while they are silently still state machines with a poll function.
They are also doing something in the Zig language with colorless functions, but if I am right there they do not eliminate a state machine either in the sync usage case.
Go's perf impact is not that big, but the runtime does come with a cost.
But Rust is trying to solve this, I recommend to look into 'keyword generics initiative'
@@diadetediotedio6918 Yep thanks a lot! Sounds really interesting. I found it and am going to read it.
You know, if Prime has trouble with Rust, I don't feel so bad anymore about barely getting started
"adding a new endpoint or modifying it meant writing or removing everything twice"
WET (write everything twice) people's dream
LOLed so hard. thx for that
Instead of adding a _sync suffix it could add a sync (or async) submodule. Would be nicer, IMO.
I have the same problem with one of my C# libraries. It is a custom transaction-ledger database. It can be embedded in a game for storing the game's economy, or it can be used as a standalone server for processing transactions in a cryptocurrency market, etc. It runs super fast as a single server, and for multi-server scenarios it supports distribution of records and clustering. It has locking mechanisms for handling transactions involving accounts distributed across different servers, and this creates a requirement for the whole system to be async. I have two implementations of it right now. The first one is an async implementation that supports all kinds of distributed usage. The other one is a synchronous implementation that also has heavy heap-usage optimization. I wish I could convert one into the other as part of the build process or abstract it somehow.
Went through something similar with a python project last year: I made a package that includes a DB model system, query builder, an ORM, a migration management tool, code generators, and a bundled coupling to the standard sqlite library (all opt-in features); the first feedback I got was "can you use it async?" So I started by converting it to async and made the bundled coupling use an async version of sqlite, then I tried wrapping the async stuff to use it synchronously. The async overhead for using sqlite was just too extreme -- sync writes took 90µs on average anyway and sync reads were like 20µs, while the async overhead added another 30-50µs to each -- so I ended up with duplicated code. The guy who requested it said after I had spent over a week on the async implementation that he wasn't going to use it. Oof.
This feels like a macro situation
If this were C, I would redefine async and await to an empty string and then compile twice
Looked into it a bit; Rust just REFUSES to let you do this trick with every fiber of its being...
God damn it, why
@@nevokrien95 Macro hygiene, something I'm glad Rust has. It's annoying sometimes, but in return you mostly get sanity
@@nevokrien95 rust hates developers, you spend most of your time writing code and not functionality. C macros have reduced worldwide RSI cases.
If I were the maintainer, I'd either do only sync, like Prime, or do sync plus an async version that is just a call to the sync version on hardware threads. But I'd only do the latter if there were a lot of demand for it. The engineering effort to make it work would be humongous.
Javascript has the advantage of the promise API. As long as your top level function is async (or you .then() the promise) you can use any number of sync functions in between your other async (or promise returning) function.
I don't understand why promises are used for multithreading. In my view they're only practically usable for a limited number of problems. But they're used for everything.
It's useful whenever the amount of concurrent work you want to do isn't sufficiently higher than the cost of switching/starting up an OS thread. And that happens to be the case for pretty much all of IO and user interaction.
Why spin up a thread just to make a network request and read a handful of KiBs out of a buffer? It's hard to put into words just how absurdly wasteful that is, relative to the amount of actual work being done.
Honestly, the real question should actually the other way around: When do you need a real thread? And it turns out you just don't really need them that often, outside of actually managing the runtime or doing compute heavy work while still being interruptible.
It's really no wonder, why a language like Rust has very much embraced async and why something like Go doesn't even give you the option to spawn real threads.
@@JaconSamsta It's only wasteful if you are doing it enough that you care. If a context switch takes 12 microseconds and an async switch takes 3 microseconds, nobody but a heavily-loaded server is going to care.
@@darrennew8211 Just because you don't care about something being wasteful doesn't mean it ceases to be wasteful.
And we aren't talking about roughly the same order of magnitude, we are talking about the difference between (essentially) a function call and a context switch (and potentially setting up or cleaning up a thread). That's massive, no matter how you try to slice it.
But yeah, you can always choose not to care about performance. That's basically why languages like JS or Python exist, because people valued other aspects of a language more.
"I don't care" is a perfectly valid reason to do something sub-optimal and if you just need to build something that handles a couple hundred requests a second, it will hardly matter how poorly your code performs.
It's just not very surprising, that people building high performance networking have the goal of producing as little overhead as possible. And in that case, using non-blocking calls should very much be the default.
And look at languages like Go. They literally made cooperative coroutines the default!
If you can afford the trade-offs a language like Go makes, you should certainly be using it instead. You can still use Rust if you prefer it, just don't be surprised when "gotta go fast"-lang wants to go fast and compensates by decreasing your precious DX in return.
@@JaconSamsta Some algorithms are designed with spawning actual threads in mind, especially tensor algebra stuff.
@@PhthaloJohnson Your point being?
1:13 Appreciate all the THAN*s in chat and Prime ignoring them like a chad.
The worst part is, the whole reason for async to exist is literally just optimization. I honestly believe just forking and joining threads is much simpler and easier. Obviously that's not an option for JS, but everywhere else you can just fork/join threads. Of course, the issue with doing that is the thread startup time and memory overhead... but then you can sidestep those issues by running green threads on top of a thread pool (à la Go). I don't understand why async/await won instead of the Go model (other than: because JS did it)
I really dislike async/await since I feel it hides too much, requires too much, and gives too little benefit. I much prefer boost::fiber-like stackful fibers, or Erlang-like actors, where the former mostly leaves your sync code untouched and only requires changes at the "borders"/"edges" of code / control flow, while the latter encourages less coupling and can potentially result in less memory waste than callbacks / async-await.
I'm actually writing some software that uses this library... the maybe-async wrapper is a complete pain. Especially because I tried to use it with async and sync in the same codebase. (conditionally compiling webassembly fml). I ended up abandoning it completely and just using their data model sub-crate and making the HTTP requests myself. Quite obscure library imo, interesting to see if pop up on my youtube feed.
Moral of the story: Don't throw yourself on the sword in pursuit of correctness
Are there still people who do not get Async Rust != Tokio?
This reminds me of "Avoiding async entirely" on the wg-async repo
Probably because most async crates depend on tokio, whether you like it or not
11:14 But since the code itself is synchronous (that's why we are using the blocking version), B can't get called before A is done, right? So how is that a problem?
What, you don't just hit everything with an Arc with empty trait implementations and call it a day? It's easy xD
Looking up the way Zig tries to solve it is quite interesting.
0:31 lol, I felt that hit so hard
Prime, will you read "Let futures be futures" published 3 days ago by withoutboats? It seems relevant.
The coloring problem is an interesting one. I'm used to using languages like elixir which completely sidestep this problem by putting everything into a process instead. Go has a similar concurrency model because you have multiple call stacks which you can switch between via go routines. I believe Ruby also side steps this problem with its fibers and so does C#.
I'm not so sure about C#, but C++ boost::fibers certainly do
@@defeqel6537 Synchronously waiting on tasks in C# can cause deadlocks and block threadpool threads. In purely synchronous code it's a bit less of a big deal, but that still depends on the libraries you are using. If you use libraries that do not do their ConfigureAwait stuff right, you can experience actual problems in GUI applications.
I don't think they "sidestep" the problem, they rely on a runtime and/or different constructs to do the biddings
If anything C# is the language to blame for all this async/await madness as it was the first mainstream language to implement this feature.
The coloring "problem" isn't really solved but sidestepped, as you say. So to add more context, the coloring "problem" exists because a function that returns a future has a different return signature than a function that returns a plain value. That seems obvious, but in systems like Java's Project Loom and Go, the non-blocking, non-colored concurrency doesn't block the thread but does block the caller.
So to not block the caller you need futures and/or messaging(but guess how this gets implemented haha). With go you might want the caller to not need to deal with a return channel so you could end up with a caller blocking API AND a channel returning API when writing certain libraries.
Async/Await allows you to write non-blocking(non caller blocking too!) imperative code. The Loom team is already working on "structured concurrency" to help deal with the DX angle. But now you've got an entire weird system of writing code that looks a little like callback hell if you squint or some complex state machine you have to construct.
It's horses for courses but I personally enjoy message passing(ala golang channels etc) for writing smaller infra processes. I enjoy async/await for writing imperative business logic that can be organized and composed into a set of async "workflows".
But message passing is enough slower than traditional concurrency/parallelism patterns using semaphores etc. that the most performant projects in those languages fall back on those more traditional patterns. For example, Mozilla Heka and databases written in Go eschew channels altogether, or at least in the most performance-sensitive subsystems.
The tokio way to do this without the global mutex is to just use an actor/daemon that wraps the API instead of a mutex. Have the thread that initiate the request send a message to the actor and block until it gets a response, just like you would in Erlang.
The actor can use async code internally, and certainly no Arc: you just clone channel producers and pass a Sender, while for the return value tokio::sync specifically provides oneshot channels. When all the producers are dropped, the channel closes and the daemon dies as expected.
It's a bit boilerplate-heavy to write message impls depending on how wide your API is, but if you have a wide API that needs to be converted to sync, then that is your actual problem imho.
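The actor shape described above can be sketched with std alone: callers send a request together with a reply channel and block on the answer. tokio's oneshot channel is emulated here with a plain mpsc channel, and all names (`Msg`, `Double`) are illustrative, not from any real API.

```rust
use std::sync::mpsc;
use std::thread;

// One message per API call; each carries its own reply channel.
enum Msg {
    Double(u32, mpsc::Sender<u32>),
}

fn main() {
    let (tx, rx) = mpsc::channel::<Msg>();

    // The actor owns the "API"; its loop ends when every Sender is dropped.
    let actor = thread::spawn(move || {
        for msg in rx {
            match msg {
                Msg::Double(n, reply) => {
                    let _ = reply.send(n * 2);
                }
            }
        }
    });

    // A caller: send a message, then block until the response arrives.
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Msg::Double(21, reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 42);

    drop(tx); // close the channel so the actor exits
    actor.join().unwrap();
}
```

The boilerplate cost mentioned above is visible even here: every method of the wrapped API becomes another enum variant plus a match arm.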
What switches are you using on your keyboard? They sound wonderful
Thank god I’m not the only one who feels this way about databases 😂
Re: colored functions, this is exactly what react did with hooks...
Why not just have maybe_async generate or fill a separate blocking module? That would allow for both sync and async
There are more React state management libraries than there are C++ "features"
I mean, if it's really just a matter of removing async and await, wouldn't the job be done by a simple copy/paste + automated *diff* check? Or automated async/await removal? (Genuine question, I don't know all the detailed implications of async programming in Rust)
The most obvious thing was: maybe don't implement a useless feature request? Web APIs are async; if an end user wants a blocking API, they can wrap it themselves.
It gets even worse if you add a GUI and an API to the mix, but it's always solvable, so not too bad. The fun part about Rust is you have to invent or combine architectures for a lot of problems. I also thought using listeners would solve this problem on my hobby project 😂 😂 There is always a simple way in Rust in the end though, you just need to invent it 😂
You know what Kotlin has? Built-in singletons ;)
Somebody was drinking too much Kooool-Aid
What I am hearing here is don't support blocking interfaces. If a library requires async for performance, then you should embrace async/await.
There should be a minimal Future "runtime" that only supports block_on, just for cases like this. Wouldn't it be almost trivial to implement?
Edit: I did it. It was trivial to write, but I haven't tested it. Published as its_ok_to_be_block_on crate.
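A minimal block_on really is small when built only from std: the Waker just unparks the calling thread. This is roughly the sketch from the std::task::Wake documentation, not the actual code of any published crate.

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Waking the task means unparking the thread that is blocked in block_on.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Poll the future in a loop, parking between polls until woken.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    // Drive a future to completion with no external runtime at all.
    let answer = block_on(async { 40 + 2 });
    assert_eq!(answer, 42);
}
```

Note the caveat: this drives any Future, but futures that rely on a specific runtime's reactor (e.g. tokio's timers or sockets) still need that reactor running somewhere.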
Template language to generate Rust code. Truly a JDSL moment
I started to think lately that there is no "uncolored function" at all in any language. You will be compromising either with a runtime or with some distinction between the "worlds"; it is this way for Rust, Go, C#, Kotlin, all languages.
Yes, there will always be some form of coloring, but it's more a question of how much it breaks the upstream and what escape hatches a language provides to make these changes less cumbersome. In Rust it often requires you to think ahead of time and apply premature optimizations just to avoid major changes in the future when you _do_ need those optimizations. And Rust doesn't offer a ton of escape hatches in a lot of cases.
Just adding a generic at a lower level will bubble up to the top, and any code that previously relied on it not being generic now also has to be generic; in some cases entire structs have to be colored just for that single use case, and it bubbles up further.
@@dealloc And my question is: what is the solution to this problem, then?
It just sounds to me like a compromise between performance / low-level control / syntax coloring
And I think you cannot achieve all of these at the same time reasonably, because all solutions will end up clashing at some point. Even if you use raw threads, you will still need to handle synchronization primitives manually, and this will blow up in the code as you need to access values that are locked behind them. So I think this is more of a "no free lunch" problem than the big monster people usually make of colored functions.
That feel when you're not using a mutex because thread-dependent message queues are nice...
I'd generate a sync wrapper for the async module.
Maybe a crate such as rspotify is... stupid? How long should it take me, as someone using the Spotify web API, to produce an access layer suitable for my purpose, in compliance with my choice of async behavior? Or maybe the crate should focus on defining the request body and how it's serialized, and not on sending an HTTP request?
There is a reason people rewrite it in rust more often than write it in rust.
It's easier to iterate in simpler languages (Go) and convert to Rust when you know at least 75% of the features.
And you get the capital to hire $200k/yr Rust devs if you finish the product first.
how many additional servers can you purchase for the difference in 'normal' devs vs. rust devs. I assume that you only rewrite in rust when you have a crazy amount of requests where it actually makes sense - i.e. you have a product with millions of users.
Nah, the reason is that the product is already done, so it is easier to port, that's why virtually all languages have thousands of ports.
Then why not iterate in Go, then fine tune / rewrite bottlenecks in a high performance tool that your top Go devs can use with minimal learning curve ?
Save that 200k/yr and avoid a tonne of arguments
Honestly, the explanation for colored functions made no sense to me with red/blue but makes perfect sense with sync/async. Sometimes, it's not a good idea to try to popularize a concept.
That might have helped other people to understand though.
Does any language do the opposite of async/await? As in, all methods are synchronous, but you can background them. E.g. in a method, you may want to make two simultaneous calls to some external API, so you background them, then stop and wait for both. I mean language support, not just Task.WaitAll(tasks). I think the closest may be channels in Go?
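Rust's standard library actually offers a "background it, then wait" shape without async at all: scoped threads fork two synchronous calls and join both before returning. The closures below just stand in for external API calls.

```rust
use std::thread;

fn main() {
    // thread::scope guarantees both spawned threads finish before it
    // returns, so borrowed data from the enclosing scope would be safe.
    let (a, b) = thread::scope(|s| {
        let first = s.spawn(|| 2 * 3);  // stand-in: external API call #1
        let second = s.spawn(|| 4 + 5); // stand-in: external API call #2
        (first.join().unwrap(), second.join().unwrap())
    });
    assert_eq!((a, b), (6, 9));
}
```

This is fork/join with OS threads rather than language-level backgrounding, so it pays the thread cost the surrounding comments discuss, but the calling code stays entirely synchronous.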
I did some similar naming for a Rust library. It's called rslnp: rs is Rust, of course; SLN is the Scopes list notation; and p is parser.
Why not build it sync and offer an async wrapper library?
Maybe it's my inner functional bro speaking, but you could just pass in a function that executes the request, or make the library a request builder instead of a request executor and leave execution to the caller, or just have a single execute and execute_async function that takes a built request and uses a generic type for the response
Or just make people generate their own client libraries with grpc/openapi/asyncapi/etc
That is how I would go about it too. Coding generically is difficult for most to think about when they are not used to it.
You can generalize it up to a certain level, but generalizing it also means that you'll make usage of the library more verbose. It's a tradeoff at the end of the day.
@@dealloc if you are creating a public library your goal should be to balance streamlining the code with flexibility. I would say you should be aiming to get the end user 90% of the way then let them finish off the last 10% to fit their individual needs. Having a little bit more verbosity in order to create more generalized code seems like an acceptable trade-off in most libraries where you don't know how it will be used.
@@evancombs5159 As I said it's a tradeoff; either your library provides consistent API for ease of use, or it ends up requiring boilerplate. It depends on the sort of library and DX you want to provide.
Not all libraries are equal.
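The "request builder, not request executor" idea from this thread can be sketched like so: the library only describes the request and the caller supplies the transport, so the library never commits to sync or async. `ApiRequest`, `get_track` and the closure transport are all illustrative names, not rspotify's real API.

```rust
// The library's job ends at describing the request.
struct ApiRequest {
    method: &'static str,
    url: String,
}

fn get_track(id: &str) -> ApiRequest {
    ApiRequest {
        method: "GET",
        url: format!("https://api.example.com/tracks/{id}"),
    }
}

// The caller injects whatever executes the request: a blocking HTTP
// client, an async one wrapped in block_on, or a test stub as below.
fn run<F: Fn(&ApiRequest) -> String>(req: &ApiRequest, transport: F) -> String {
    transport(req)
}

fn main() {
    let req = get_track("abc");
    let body = run(&req, |r| format!("{} {}", r.method, r.url));
    assert_eq!(body, "GET https://api.example.com/tracks/abc");
}
```

The verbosity trade-off mentioned above shows up exactly here: every caller must wire up a transport before making a single call.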
Why not provide a default async runtime but allow the library user to provide one as well?
If we're being real, why would you expect a web API library to not be async? It just fundamentally is; it's web requests. Either slap on a tokio main, or some other macro that simply adds block_on() to every async call.
Even C++ has coroutines. Rust catch up!
If you don't await multiple asyncs at once, or do some random concurrency thing in your code (start an async coroutine, continue some task, then await that coroutine), why would you even want to make the whole thing async?
Isn't having tokio as a dependency similar to using golang, in terms of runtime? I'd guess that's exactly how sync requests work in go.
And, if that's the case, simply adding a section in the readme should be enough
love this article, actually very educational
Just don't support sync, and if someone wants to use it sync, let them use the futures library to force it. I'm pretty sure I don't have a single project without tokio.
This whole article is a lesson to just say no.
Should have used Elixir, amirite
Tom could have solved this. Definite skill issue
Wait, I'm a bit confused. Why does the library need to have async? I thought that it had async because it had actions that could be waited on, meaning that running it concurrently would be more efficient. Why create an async implementation if there are no IO operations that require waiting?
If the async executor is multithreaded, as in tokio, then async means you can await multiple futures concurrently, effectively having multithreading at your disposal. That's why Rust's futures usually need Send, Sync, Pin, etc.
@@lukaszoblak I see, you mean implementing traits like Sync. Parallel is not the same as concurrent, but I got your point.
The good old fashioned copy and paste code 😂
What happens if you create two crates, one async and one sync, where both share the same single codebase that uses the maybe_async crate, just with different flag values?
I love the idea of lunatic, which is a wasm runtime mimicking the process architecture of Erlang. Every process is a wasm thread, which is cheap, and scheduling is cooperative like Go, where blocking syscalls are substituted by the runtime, resulting in uncolored async code! Too bad it's dead.
NOO! it sounds pretty cool.
Is it just me, or does having do_thing() and do_thing_sync() not seem like that big of a deal?? Like, just do the fixed version of maybe_async, but only append a suffix to the sync versions. Literally the first thing I would have tried is making a macro to do that, lol.
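A toy sketch of that "_sync suffix" macro idea: one `macro_rules!` invocation emits both an async fn and a sync twin from a single body. A real version would need a proc macro to strip `.await`s, so this toy only handles await-free bodies; all names are made up.

```rust
// Emit an async fn and a sync twin from one shared body.
macro_rules! both_versions {
    (fn $async_name:ident / $sync_name:ident ($($arg:ident: $ty:ty),*) -> $ret:ty $body:block) => {
        async fn $async_name($($arg: $ty),*) -> $ret $body
        fn $sync_name($($arg: $ty),*) -> $ret $body
    };
}

both_versions! {
    fn add / add_sync(a: u32, b: u32) -> u32 { a + b }
}

fn main() {
    assert_eq!(add_sync(2, 3), 5);
    // `add(2, 3)` returns a Future; any executor (tokio, futures) could run it.
    let _future = add(2, 3);
}
```

Since the body is written once, the "keep two copies in sync" problem from the article goes away, at least for bodies without `.await` in them.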
Writing a sync library makes it more or less unusable in an async context though, if an application is IO bound and perf sensitive. Not great for an API wrapper, I would say (sync inherently limits the number of concurrent calls to the number of OS threads; async does not).
so this is where the Primeagen tweets come from 😆
Arangodb has been around for a long time. It's a pretty great graph db.
singletons are the king of all antipatterns
So async code is contagious? I might try to just stick to threads and channels then, haha. Async is mostly for functions that might have waiting in them and should give way, right?
Async is for when you have so many things being handled by your server that having one thread each is too much overhead. If you're on the *client* end, I can't imagine any reason to use async.
I'm not sure why people are bothered by Async like this.
If the best solution to the problem you're solving involves solving a subproblem asynchronously, why not solve your problem asynchronously?
@@jpratt8676 I think it's more that I've never needed to use concurrency really, and whenever I've needed parallelism, it's usually for many heavy independent tasks, so par_iter().map() feels like the more direct and simple thing. At some point I might need it, and at that point I'll experiment with it more.
Singleton pattern is globals with extra steps.
Arc, as a web dev, this shit is hell
All I think about this is: oh god, I just hope async never actually reaches Zig, I don't want to deal with this nonsense xD
All the negative things will be said about Go, however the language has made the world a better place concerning simplicity and speed. Rust is for the PhDs and nerds. I am not one of them, and I gladly accept I have skill issues with it. I find it too bloated and just complex. Looking forward to Zig 1.0.
Is this whole async business not another example of our tendency to always think and work at the wrong abstraction level? And no matter at which level we do it, we don't think it through and we don't finish the job.
The problem with async is that some parts of our technology are operating synchronously (CPUs) and other parts are operating asynchronously (IO). The former uses instruction pointers and stacks while the latter uses interrupts. This did not change in half a century.
When async-IO was introduced in UNIX, this was a misnomer, because IO was already asynchronous. Async-IO just allowed software not to have to wait for asynchronous IO to complete and thus pass CPU cycles to other processes or threads. Or in other words, it makes it possible not to have to rely on the operating system abstraction of synchronicity (processes or threads) and let the process or thread use its own (e.g. the programming language running stuff in the thread or process).
Why can you not use await f() in a synchronous method? In essence, because it's too hard for language implementers to support that feature. They leave it up to application developers to solve the problem, something they can't really do, given that they are in a much worse position to solve that difficult problem.
Languages like C do not have async or await (at least not when I last looked). They're honest and don't even try to fix the problem, but they also don't add to it by providing a half-baked solution that only works if you go all sync or all async, which you can't really do unless you work in a very confined context. As a consequence, you have to understand how you can and want to handle IO. As software running in a thread or process, you have to talk to the OS to do that, and thus you need to understand how that works. You have a plethora of options available, but you also have to handle just as many buggy implementations. You need to understand concurrency and learn how not to deadlock, what is thread safe, and how to semaphore.
I keep wanting to learn Rust, but whenever I make an attempt to look at it more closely, I come across something that makes Rust extremely unattractive.
All the good stuff or at least most of it, I know from other languages. There is no feature that I know of, that other languages did not yet come up with. But Rust collected a whole lot of the good stuff. That should make it really sexy. But to me it seems as if it also collected all the bad stuff from all over the place. The syntax from Perl. The attitude from SCO (in the project).
Having to struggle with async/await and how to handle it from sync code, as in this example, is a native JavaScript problem. There is no async code; code is always synchronous, IO is asynchronous.
I'm getting déjà vu, did I watch this on stream or is it a reupload?
I think it's the 3rd or 4th async Rust article at this point.
Never mind, I saw my name in chat, I saw this live.
"async was a mistake"
Just gifted you a great tweet :)
There is a new Mean Girls now. Sigh I'm sad and old.
Oh man, if only there was a feature in literally every modern language that'd allow us to have the same function names, which would mean we avoid the whole 'copying all the tests' problem.
Good thing though the omnipotent Rust foundation decided that it's a bad idea (same as globals) and we can gladly copy-paste code over and over again ❤
Although wait, can't they do that anyway? Since they are asking the user to pass an async provider as a function parameter, why not just slap in an "if(env=null){skip_async}"?
What "same function names" bro, this does not even apply here.
@@diadetediotedio6918 The problem mentioned in the article was how going with the simplest solution of just having a copy of each function was bad, because it required manually making sure that the implementations of both stayed in sync.
All that followed were more and more complex solutions, but if the problem of keeping async and blocking implementations the same (and tested) was to be resolved, then you could just fall back to that.
But with name overloading (or just getting the effect of overloading by passing null), you could keep all your logic in one function, without the need to copy either logic or tests.
So that would solve the whole problem, no?
@@xeamek99
Bro, you understand that the question is not about arguments, but about unmatchable function signatures, right?
What I'm saying here is that this does not make sense; there is no "null" to pass, in any language at all. C# has function overloading and still has the same problem, literally because the compiler cannot magically infer these things: when a compiler with function overloading matches a function, it matches based on arguments, so the actual function modifiers would still be a problem.
Couldn't you write a procedural macro that removes asyncs and awaits to codegen the sync tests?
Rust is scaring me: it's safe, yeah, but it is so overly complicated that I wonder if there is a way to make it saner. Basically a safe C++, and everyone LOVES C++. /s
So sad.
I don't think it is a "safe C++", Rust is an easy language if you take it easy.
@@diadetediotedio6918 honest to god, every time I read it, it makes me dyslexic. :/
Safe … if you don’t consider buffer over reads from speculative execution to be a problem
Safe Rust has the same vulnerability as all the other compiled languages, so you have to wonder what the point is.
This could all become fixed when `Effect Generics` land in Rust :)
9:10 I don't think people realize how many globals are used in the software they are using... and globals aren't the devil. They don't realistically affect performance, and while they should be avoided for cases where they don't make sense in order to keep the code clean and sustainable, if you think your only option is to use a global, it's probably correct.
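For what it's worth, modern Rust makes the "safe global" version of this pretty painless; a minimal sketch using std's `OnceLock` (stable since Rust 1.70), no `unsafe` or external crates needed:

```rust
use std::sync::OnceLock;

// A lazily-initialized, thread-safe global.
static GREETING: OnceLock<String> = OnceLock::new();

fn greeting() -> &'static str {
    // Runs the closure on first access; later calls reuse the stored value.
    GREETING.get_or_init(|| "hello".to_string())
}

fn main() {
    assert_eq!(greeting(), "hello");
    // Second call hits the already-initialized value.
    assert_eq!(greeting(), "hello");
}
```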