Steve, a question: for a control problem, wouldn't we want an inverse operator, one that maps the desired output to the control u(t)? Could the paper's approach be adapted for that?
Apologies for the quibble, but could you post a link for the reference? It seems not quite right: these authors are prolific, so searching their names returns many papers, and JCP 378 (which is 2019) doesn't contain any papers by them.
Experimentally, I've found that stacking all inputs into a single vector and using a vanilla feedforward network is just as good as the DeepONet (at least for simple problems).
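For anyone curious about the comparison above, here is a minimal numpy sketch (untrained, random weights; all names and layer sizes are illustrative assumptions, not from the video) contrasting the DeepONet branch/trunk factorization with the flattened-input MLP baseline the commenter describes:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight tanh MLP, just to illustrate shapes (no training)."""
    Ws = [rng.standard_normal((m, n)) / np.sqrt(m)
          for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return forward

m, p = 50, 32             # number of input-function sensors, latent dimension
branch = mlp([m, 64, p])  # encodes the input function u sampled at m sensors
trunk  = mlp([1, 64, p])  # encodes the query location y

u_sensors = rng.standard_normal((1, m))   # one sampled input function
y = np.array([[0.3]])                     # one query point

# DeepONet output: inner product of branch and trunk features
G_u_y = np.sum(branch(u_sensors) * trunk(y), axis=-1)

# "Vanilla" baseline from the comment: concatenate sensors and query
# into one vector and feed a plain feedforward net
baseline = mlp([m + 1, 64, 1])
G_flat = baseline(np.concatenate([u_sensors, y], axis=-1))

print(G_u_y.shape, G_flat.shape)  # one scalar prediction per sample each
```

Both networks map (sampled function, query point) to a scalar, so for simple problems the flattened baseline can indeed compete; the DeepONet's factorization mainly pays off when you want to reuse the branch encoding across many query points.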
I am very curious how this compares to reinforcement learning in arriving at optimal control, even for relatively simple scenarios such as a thermostat.
Hi Steve, your lessons are excellent, thank you for your help! I was wondering when the set of videos on PINNs would be released since you mention them a lot in some of the videos on Loss Functions, for example.
Very interesting; it looks like this could work well in control theory. I wonder if this is more generalisable than state-based models in control. It could also be interesting to split u(t) into its own net as well.
The DDSE video series was so good; it had explained code for everything. I would really love it if these videos came with the implementation and training code.
Vivek here - absolutely loved the clear and simple explanations in this video! Keep them coming!
Hey, great explanation!
Which paper are you talking about at 12:20 that proved the irrepresentability of chaotic systems?
Clear videos, professor! A big fan of your lectures from India.
I think there is a small error: the paper was introduced in 2019, not 2023.
Awesome! Where can I find a simple sample implementation to build upon?
So essentially we are trying to learn the inverse differential operator?
Very interesting 😊
Is it possible to get a copy of the slides? The figures are so beautiful.
Very interesting 🎉🎉 One of your followers from Pakistan. You are my favorite teacher ❤
Where can I find the code for this?
GLU?