Experimentally, I've found that stacking all inputs into a single vector and using a vanilla feedforward network is just as good as the DeepONet (at least for simple problems).
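A minimal sketch of that comparison, assuming PyTorch and a toy setup where u is sampled at m sensor points and the output is queried at a single coordinate y; all sizes and names below are illustrative, not taken from the video or the paper:

```python
import torch
import torch.nn as nn

m, p, hidden = 100, 64, 128  # sensors, latent width, hidden width (assumed)

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.Tanh()]
    return nn.Sequential(*layers[:-1])  # drop the final activation

# Option 1: stack [u(x_1..x_m), y] into one vector, vanilla feedforward net
stacked_net = mlp([m + 1, hidden, hidden, 1])

# Option 2: DeepONet-style branch/trunk combined with a dot product
branch = mlp([m, hidden, p])   # encodes the sampled input function u
trunk  = mlp([1, hidden, p])   # encodes the query coordinate y

def deeponet(u_samples, y):
    # u_samples: (batch, m), y: (batch, 1) -> (batch, 1)
    return (branch(u_samples) * trunk(y)).sum(dim=-1, keepdim=True)

# Both map (u_samples, y) -> G(u)(y); for simple problems the stacked MLP
# can be competitive, as the comment suggests.
u = torch.randn(8, m); y = torch.rand(8, 1)
out_stacked = stacked_net(torch.cat([u, y], dim=-1))
out_deeponet = deeponet(u, y)
```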
The DDSE video series was so good; it had explained code for everything. Would really love it if these videos also came with the implementation and training code.
Steve, a question: for a control problem, wouldn't we want an inverse operator, one that maps the desired output to the control u(t)? Can the paper's approach be adapted for that?
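One way to read this question is to keep the same supervised setup but swap the roles of input and output, so the network maps a desired trajectory back to the control that produced it. A hedged sketch, assuming PyTorch and that (u, s) pairs are available from a forward simulation; the names and sizes are made up:

```python
import torch
import torch.nn as nn

m = 100  # number of sample points for both the trajectory s(t) and the control u(t)

# maps the sampled desired output s(t) -> the sampled control u(t)
inverse_net = nn.Sequential(
    nn.Linear(m, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, m),
)

def training_step(u_batch, s_batch, opt):
    # s_batch was produced by the forward operator applied to u_batch,
    # so fitting s -> u approximates the inverse (control) operator.
    opt.zero_grad()
    loss = nn.functional.mse_loss(inverse_net(s_batch), u_batch)
    loss.backward()
    opt.step()
    return loss.item()
```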
Very interesting; it looks like this could work well in control theory. I wonder whether this is more generalisable than state-based models in control. It could also be interesting to further split u(t) into its own net.
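Purely illustrative, here is what giving u(t) its own sub-network might look like: the initial state and the sampled control each get their own branch, and both are combined with a trunk over the query time. The state dimension, widths, and the elementwise-product combination are all assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

m, p = 100, 64  # control sample points, latent width (assumed)

branch_x0 = nn.Sequential(nn.Linear(2, 128), nn.Tanh(), nn.Linear(128, p))  # state dim 2 assumed
branch_u  = nn.Sequential(nn.Linear(m, 128), nn.Tanh(), nn.Linear(128, p))  # sampled u(t)
trunk_t   = nn.Sequential(nn.Linear(1, 128), nn.Tanh(), nn.Linear(128, p))  # query time t

def split_operator(x0, u_samples, t):
    # combine the three latent codes; an elementwise product followed by a
    # sum mirrors the DeepONet dot product, but other combinations are possible
    return (branch_x0(x0) * branch_u(u_samples) * trunk_t(t)).sum(-1, keepdim=True)
```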
Apologies for the quibble, but could you post a link for the reference? It seems to be not quite correct. These authors are prolific, so searching on their names returns many papers, and JCP 378 (which is from 2019) doesn't contain any papers by them.
Hi Steve, your lessons are excellent, thank you for your help! I was wondering when the set of videos on PINNs would be released since you mention them a lot in some of the videos on Loss Functions, for example.
Vivek here - absolutely loved the clear and simple explanations in this video! Keep them coming!
+ 1 on this
Clear videos, professor! A big fan of your lectures, from India.
Awesome! Where can I find a simple sample implementation to build upon?
I think there is a small error: the paper was introduced in 2019, not 2023.
Very interesting 🎉🎉 One of your followers from Pakistan. You are my favorite teacher ❤
Very interesting 😊
So essentially we are trying to learn the inverse differential operator?
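Roughly, yes, at least for the simplest case. Taking the canonical toy example (an assumption about which example the video uses), where the ODE is driven directly by u(t), the learned solution operator is the antiderivative, i.e. the inverse of the differentiation operator:

```latex
\frac{ds}{dt} = u(t), \qquad s(0) = 0
\quad\Longrightarrow\quad
G(u)(t) = \int_0^t u(\tau)\, d\tau ,
\qquad G \approx \Big(\tfrac{d}{dt}\Big)^{-1}
```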
Is it possible to get a copy of the slides? The figures are so beautiful.
Where to find the code for this?
GLU?