It is an honor to watch Jose Paumard presenting...
Mr. Paumard and Mr. Venkat are true legends. Their videos are gold and even experienced Java Developers will learn a lot from these two gurus.
Virtual threads have been a game changer for our services. Thank you, JDK engineers (and Spring engineers for adopting them in the framework).
What I love about all this is that they did not follow the hype back then and did not add the async/await. Guys, you are awesome!
Great talk, very unbiased and fair! Virtual threads are a great addition to the language, without a doubt. However it is worth mentioning that in terms of sheer performance (latency, not throughput), the 20-year old Java NIO with EPoll cannot be beat. A single thread spinning and pinned to an isolated CPU core is the king when it comes to low latency. And because we are talking about a single thread, which is handling many clients, there are no multithreading pitfalls. That means no synchronization, no deadlocks, no thread starvation, no race conditions, no locks, no nothing. Coding is very easy and joyful. That's the model the financial industry (high frequency market makers) has been very successfully using for 20 years now.
Yes, but this is such an incredibly complicated model to build on. This is why we have frameworks like Reactor and Netty to try and make this model usable. In reality, virtual threads do exactly this. When a virtual thread yields, the scheduler either resumes another virtual thread or invokes epoll to see if any IO has completed. It's essentially the same thing but with a much simpler programming model. Virtual threads give all the same benefits of old-school Java NIO but without all the headaches of trying to put all your concurrent operations in a single epoll loop.
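For anyone curious what that single-threaded selector loop looks like in code, here is a rough, minimal sketch (the port, buffer size and echo behavior are just for illustration; a real server would pin the thread to a core and dispatch to protocol handlers):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// One thread, one selector, many clients: the classic NIO event loop
// (epoll-backed on Linux).
public class EventLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocateDirect(4096);
        while (true) {
            selector.select();                                   // wait for readiness events
            var keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) == -1) {             // peer closed the connection
                        client.close();
                    } else {
                        buffer.flip();
                        client.write(buffer);                    // echo back; real code would parse and dispatch
                    }
                }
            }
        }
    }
}
```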
Great conference Mr. Paumard.
Amazing presentation! Really well made, and it explains the real reasons why virtual threads are an improvement very clearly.
Really helpful - I understand the use of virtual threads for the first time. Thanks.
I am glad virtual threads came along before my company decided to adopt reactive programming. Now we just need to upgrade java and use virtual threads
I'm struggling with reactive code left by the previous team, you are lucky!
I miss Rx. So much cleaner
@@jp263 I have to use Rx (JS flavored) every day and like to think I'm reasonably competent at it by now, and I have to say I hate every second of dealing with that crap.
I just wish there was something like Virtual Threads in JS ...
Before I watch the video, I can only answer the title and say: I really hope so.
The reactor allows us to reason locally about concurrent behaviour. If I know statically where my task may be switched out for another, then I know where I need to recheck my assumptions.
What a great presentation. Thank you!
Typo on slide 47: blocking a virtual thread does NOT block the platform thread.
👏
There are some exceptions.
E.g. synchronized blocks
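For concreteness, a small sketch of that exception (method names are made up; this reflects the pinning behavior in JDK 21, which newer JDKs are addressing): blocking while inside a synchronized block pins the virtual thread to its carrier, while the same blocking call under a ReentrantLock lets it unmount.

```java
import java.util.concurrent.locks.ReentrantLock;

// Blocking while holding a monitor pins the virtual thread to its carrier;
// the same blocking call under a ReentrantLock lets the virtual thread unmount.
class Repository {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();

    void pinningProne() throws InterruptedException {
        synchronized (monitor) {
            Thread.sleep(100);        // carrier thread stays occupied for the whole sleep
        }
    }

    void virtualThreadFriendly() throws InterruptedException {
        lock.lock();
        try {
            Thread.sleep(100);        // virtual thread unmounts; the carrier is free to run others
        } finally {
            lock.unlock();
        }
    }
}
```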
truth, I see fear in my coworkers about the reactive Java code I write
The abuse of reactive programming will end with structured concurrency
The great José Paumard!!!
Very Well explained! Thank you!
Thanks champ.
What about reactive streams? I still think there is a place for reactive in stream processing IMO.
Amazing! Thank u 😊
What about Kotlin Coroutines? Until now they still give you Structured Concurrency, which is the missing part in Loom. But once Loom has that covered too, do Coroutines still buy you anything?
Well, coroutines will exist forever in Android (or at least for a decade after virtual threads are implemented in ART).
Also, it's not clear what the replacement for streams of values (e.g. Flow) is. I suppose with virtual threads you can use regular blocking queues for that?
@@equeim I think instead of Flows we could just use regular Java Streams (or Kotlin Sequences) that just happen to potentially block the (virtual) thread when computing the next value. At least that seems like an equivalent abstraction to me.
Producer/consumer queues can work too, but that feels like a lower abstraction level.
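As a concrete (hypothetical) sketch of that lower-level option: a bounded blocking queue fed by a virtual thread gives you a Flow-like stream of values, with the queue's capacity acting as implicit back-pressure.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A bounded queue plus a virtual-thread producer as a stand-in for a Flow-style
// stream of values. The queue's capacity is the (implicit) back-pressure.
public class Prices {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);

        Thread.ofVirtual().start(() -> {
            for (int i = 0; i < 100; i++) {
                try {
                    queue.put(i);                 // blocks cheaply when the consumer lags behind
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        for (int i = 0; i < 100; i++) {
            System.out.println(queue.take());     // a plain blocking call on the consumer side
        }
    }
}
```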
nice explanation
How happy I am that I use Kotlin at my work; coroutines there are 10 years ahead of what Java can be.
the talks by Jose are always crystal clear! Thanks for sharing them!
José really is the GOAT, I mean it!
15:33 Modern systems don't allocate 2MB of contiguous physical memory per thread stack; the stack is committed lazily.
2MB of virtual memory is reserved for the stack (and that size can be configured), but shrinking it won't actually optimize memory usage much, only the mapping tables.
A thread typically consumes ~64KB initially (before extra stack frames are pushed).
Fun fact: OpenJ9 doesn't even allocate all that virtual memory upfront; it can grow the virtual space dynamically.
@@prdoyle That explains why Java 8 consumed more memory in my measurements of stack memory usage
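For reference, the two knobs being discussed look roughly like this (values are illustrative, and the per-thread stack size is only a hint the VM may round or ignore):

```java
// JVM-wide default stack reservation (command line, value illustrative):
//   java -Xss512k MyApp
public class StackSizeDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println("running on " + Thread.currentThread());

        // Per-thread stack-size request; the JVM treats it as a hint.
        Thread small = new Thread(null, task, "small-stack-thread", 64 * 1024);
        small.start();
        small.join();
    }
}
```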
At 22:58: the easiest way to explain a reactive framework.
José is the GOAT
29:50 I suffered this for more than a year
Virtual threads solve the blocking issue, but not the underlying task-parallelization & back-pressure ones. In the data world & GUI worlds, I imagine Rx is here to stay.
Do you mean because virtual threads are moved onto the heap, and free the underlying platform thread, only if it's a well-known blocking API call, like an HTTP call or opening a file, for example?
Do you really write code to utilize the back-pressure feature in a reactive library? If yes, may I see an example?
I think that back pressure is a way to push the problem (that there's not enough memory/CPU to process the data) from this application to another application. For example, application A calls application B for data. If application A uses the back-pressure feature to tell app B not to send too much data, then application A doesn't need a lot of memory to hold data that hasn't been processed yet, and application B becomes the place that holds application A's unprocessed data. So, instead of holding the data in application A, you hold it in application B.
@@avalagum7957 Yes we do: a Swing UI + Chromium with a few thousand data points per second flowing in between, and you need to keep the UI in sync, where the data exchange is a mix of JNI, RPC and WebSockets.
@@avalagum7957 If application B is a MongoDB, you already have the data stored in it. I think the problem is if application B loads all the data in memory and sends it to A reactively.
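Since an example was asked for above: here is a small, hedged Reactor sketch (numbers are arbitrary) where the consumer explicitly bounds how much data it requests and how much it is willing to buffer, instead of letting the producer flood it.

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

// The subscriber side bounds its upstream demand (limitRate) and the amount
// it is willing to buffer (onBackpressureBuffer).
public class BackpressureDemo {
    public static void main(String[] args) {
        Flux.range(1, 1_000_000)                          // fast producer
            .onBackpressureBuffer(256)                    // cap what we hold in memory
            .limitRate(64)                                // request upstream in batches of 64
            .publishOn(Schedulers.boundedElastic())
            .doOnNext(BackpressureDemo::slowConsume)
            .blockLast();
    }

    private static void slowConsume(Integer value) {
        try {
            Thread.sleep(1);                              // simulate a slow consumer
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```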
Let's hope so. Reactive is an unusable disaster.
Not that much Sure
What a nice talk. The only question is: will scoped values accept a generic value? I think I know the answer, but I wish someone on the Java team could answer so I can understand the spirit behind the usage of ScopedValue from an official point of view.
Thanks
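On the generics question: ScopedValue (a preview API in recent JDKs) is itself a generic type, so the bound value can be any reference type, including one with its own type parameters. A minimal sketch with made-up names:

```java
import java.util.List;

// ScopedValue<T> is generic, so the bound value can be any reference type,
// including one carrying its own type parameters (here, a record holding a List).
class ScopedValueDemo {
    record RequestContext(String user, List<String> roles) {}

    static final ScopedValue<RequestContext> CONTEXT = ScopedValue.newInstance();

    void handle() {
        // Bind the value for the dynamic scope of run(...)
        ScopedValue.where(CONTEXT, new RequestContext("alice", List.of("admin")))
                   .run(this::process);
    }

    void process() {
        RequestContext ctx = CONTEXT.get();   // readable anywhere inside that scope
        System.out.println(ctx.user() + " " + ctx.roles());
    }
}
```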
Always sad to see people confusing latency and throughput.
If I don't use concurrency, my latency will go up and my throughput for that thread will suffer. However, if I have enough threads, the gains will not be immediately there: any wait time just lets the CPU do a context switch. Yes, context switches take time, but that is not mentioned; the CPU is still fully utilized. The problem occurs when we use microservices with marginal CPU use per work package. Then most of the time will be wait time and we will hit the maximum number of threads. But if you don't, there is no reason to add the complexity of manual concurrency.
Fruit Of Loom :D
14:43 I think you meant lost threads. Loose means not tight. Also at 46:00.
Loose can also mean "escaped", as in "my horse got loose", which works in the context he is using it. But since I'm not familiar with the proper nomenclature, you may be right.
@@shannonkohl68 You are right. For me, loose, like a horse or fugitive, also brings the connotation that it does something rogue, uncontrolled. The threads mentioned simply get lost. However, I have only spoken English as a second language for 45 years, and the difference between loose and lost does not lessen the excellent presentation.
No, the term "loose" is used metaphorically, as in if a thread is not "tight" with your system, your system is going to fall apart due that "loose thread"
You have to take into account that we're talking about "threads", which have connotations of weaving and cloth. In that context a "loose thread" makes sense: something that is not part of the whole anymore. A "lost thread", on the other hand, would be a strange concept; it would have to disappear without us knowing where it went.
I actually enjoy reactive tho. Most of my work has been creating reactive code
Yes, but should we be proud of it with callback-based code when Java should make it simpler? And with the `doOnError` or `onErrorResume` calls on the call stack as safeguards, I lose track when we debug. They are needed, but we shouldn't be proud of workarounds.
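To illustrate the contrast being drawn (names are made up; Reactor's error operators on one side, a plain try/catch around a blocking call on a virtual thread on the other):

```java
import reactor.core.publisher.Mono;

// The reactive error channel vs. a plain try/catch around a blocking call
// running on a virtual thread.
class ErrorHandling {

    Mono<String> reactive(Mono<String> fetchPage) {
        return fetchPage
            .doOnError(e -> System.err.println("fetch failed: " + e.getMessage()))
            .onErrorResume(e -> Mono.just("<fallback page>"));   // error travels through operators, not the call stack
    }

    String blocking(PageClient client) {
        try {
            return client.fetchPage();                            // ordinary blocking call on a virtual thread
        } catch (Exception e) {
            System.err.println("fetch failed: " + e.getMessage());
            return "<fallback page>";                             // the stack trace points at the failing call
        }
    }

    interface PageClient {
        String fetchPage();
    }
}
```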
Reactive programming is more than Java. It will remain important in microservice architectures and in Java areas where the vthread approach's scheduling overhead is too much. For example, with reactive programming you do not move memory between the stack and the heap. That is a very small niche segment, though, and for most of the programming tasks where you needed reactive programming in Java to achieve the performance, vthreads will provide a simpler programming paradigm.
Virtual thread scheduling does some smart tricks and doesn't actually copy the stack most of the time. Stack size is also usually relatively small. When other Java features related to virtual threads arrive, and when the ecosystem matures, performance difference will all but disappear.
Reactive programming will maybe still be used for its other features, like back pressure, or having drivers for some data sources, etc.
I don't think so.
That makes no sense. Reactive Programming has more aspects
This feels like infomercial levels of dishonesty.
Stack traces aren't that bad if you don't use the JDK reactivity, which is pretty limited and has a terrible DX as proven by this video.
This Reactor code:
`var links = getLinks();
var images = getImages();
Mono.zip(images, links)
    .map(t2 -> page(t2.getT1(), t2.getT2()))
    .block();`
Produces this stack trace when getImages() errors:
`boom
java.lang.RuntimeException: boom
at org.abickford.fp4legacy.domain.Dishonest.getImages(Dishonest.java:26)
at org.abickford.fp4legacy.domain.Dishonest.bs(Dishonest.java:14)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1597)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1597)`
That hardly feels like the intractable monster that was fearmongered in this video. "Loose threads" isn't a thing here either. I counted over a dozen methods to return defaults, call another async primitive, short circuit, propagate, etc on errors (default is the error channel in the subscriber).
These toy examples also feel unfair. If you need to do actual traffic shaping and routing, timeouts, recovery, switching, etc, I'd argue a bunch of imperative threading primitives aren't better at all. I'd far prefer a nice declarative functional API over managing locks and latches.
In fact, this whole pitch is a false dichotomy as RX can be implemented *with VT*!
DGMW, I think VTs are really cool! But the real comparison should be with things like C# async/await and how VTs don't color methods. VTs also work with legacy blocking APIs like JPA/Hibernate. This is a huge win and a real differentiator in Java.
Rx is the wrong battle.