Chris needs to be cloned... This stuff is just too awesome. Hats off to him for getting so much done.
Great talk!
Can we go deeper?
Maybe the authors could divide the video content into two videos, so that explicit parallelization efforts aren't covered at the same time as the other, seemingly orthogonal improvements. I don't know as much about the DE optimizations he's talking about, but I have written several parallel grid DE solvers, mostly for shared-memory computers, and I'd like to dive into the part I have skill at. Maybe I can contribute there.
The improvements may be seemingly orthogonal, but they are not: they are all provided by the same underlying ModelingToolkit.jl system for symbolic-numeric computing and its interaction with the numerical tools of SciML. Each package's documentation talks about its piece separately, but the videos miss the larger story that is evolving, which is what I was capturing here. Everything is moving in a direction where powerful numerical tools are being mixed with compiler tools that augment and reoptimize a user's model to get further benefits than any numerical method can achieve on its own. It's still very early in this story, and next JuliaCon will show more details about the whole process, but I think sharing the general idea is good for setting the community in the right direction.
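For the curious, here's a minimal sketch of that symbolic-numeric workflow on the Lorenz equations, loosely following the ModelingToolkit.jl documentation (exact API details vary between versions, so treat this as illustrative rather than definitive):

    using ModelingToolkit, OrdinaryDiffEq

    # Define the model symbolically (the classic Lorenz system).
    @parameters t σ ρ β
    @variables x(t) y(t) z(t)
    D = Differential(t)

    eqs = [D(x) ~ σ * (y - x),
           D(y) ~ x * (ρ - z) - y,
           D(z) ~ x * y - β * z]

    @named sys = ODESystem(eqs, t)

    # The compiler-style pass: symbolically restructure the model
    # before any numerical code is generated.
    simpsys = structural_simplify(sys)

    # Generate and solve the reoptimized numerical problem.
    prob = ODEProblem(simpsys,
                      [x => 1.0, y => 0.0, z => 0.0],  # initial conditions
                      (0.0, 100.0),                    # time span
                      [σ => 10.0, ρ => 28.0, β => 8 / 3])
    sol = solve(prob, Tsit5())

The point is that structural_simplify is a model-to-model transformation: the numerical solver never sees the user's original formulation, only the reoptimized one.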
As for improved parallelism in the DE solvers, we're looking at implementations of parareal methods in OrdinaryDiffEq.jl and automatically parallelized stencil computations in DiffEqOperators.jl. Those are two areas where we'd welcome more hands. If you're interested, join the Julia Slack (julialang.org/slack); in the #diffeq-bridged and #sciml channels you will find our developer discussions.
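To give a flavor of why parareal is interesting, here's a toy sketch of the classic algorithm (not the planned OrdinaryDiffEq.jl implementation; the function and keyword names are made up for illustration). The expensive fine solves on each time slab are independent, which is exactly where the parallelism comes from:

    # Toy parareal for an autonomous scalar ODE u' = f(u) on [t0, tf],
    # split into N time slabs. G = coarse propagator (one Euler step
    # per slab), F = fine propagator (many small Euler steps per slab).
    function parareal(f, u0, t0, tf, N; iters=3, fine_steps=100)
        dt = (tf - t0) / N
        G(u) = u + dt * f(u)
        function F(u)
            h = dt / fine_steps
            for _ in 1:fine_steps
                u += h * f(u)
            end
            return u
        end
        # Serial coarse sweep gives a first guess at the slab boundaries.
        U = [u0]
        for n in 1:N
            push!(U, G(U[n]))
        end
        # Parareal corrections: the F calls are independent across slabs,
        # so this comprehension could become a Threads.@threads loop.
        for _ in 1:iters
            Fvals = [F(U[n]) for n in 1:N]   # the parallelizable stage
            Unew = [u0]
            for n in 1:N
                push!(Unew, G(Unew[n]) + Fvals[n] - G(U[n]))
            end
            U = Unew
        end
        return U   # approximate solution at the slab boundaries
    end

    # Usage: exponential decay; compare U against exp.(-(0.0:0.5:5.0)).
    U = parareal(u -> -u, 1.0, 0.0, 5.0, 10)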
And watch both videos at the same time...
@mysillyusername Oh, funny to get a ping on this. The ModelingToolkit.jl paper is now up, and it has an example where MTK transformations of the model expose more parallelism than the original formulation, which the automatic parallelization then exploits. So that's an area where the user couldn't have parallelized the code without the DE optimizations involved, showing directly how it's all connected.
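A rough sketch of what that looks like at the code-generation level: build_function can emit a multithreaded right-hand side from the symbolic expressions. The parallel keyword and MultithreadedForm follow the examples in the paper, but the exact names and defaults may differ between versions, so check the current docs:

    using ModelingToolkit

    # A small symbolic right-hand side (purely illustrative; in practice
    # this pays off on much larger generated expressions).
    @variables u1 u2 u3 u4
    u = [u1, u2, u3, u4]
    rhs = [u1 + u2, u2 * u3, u3 - u4, u4 + u1 * u2]

    # Ask the code generator to split the work across threads.
    f_oop, f_iip = build_function(rhs, u;
                                  parallel = ModelingToolkit.MultithreadedForm(),
                                  expression = Val{false})

    du = zeros(4)
    f_iip(du, [1.0, 2.0, 3.0, 4.0])   # in-place evaluation of the RHS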