You guys are so brilliant. Such a great idea; I would love to hear a podcast with you talking about how you came up with these ideas / the life cycle of SINDy-RL.
Nick: Excellent work! This is genuine progress in AI, integrating SOTA state estimation with decision making (RL). Would love to see this further refined using POMDPs (Partially Observable Markov Decision Processes).
Check out the PlaNet and Dreamer models.
Thanks for the presentation. Do I understand correctly that this whole process could be automated, yielding highly efficient agents, or do some aspects of it require manual work? Also, how well does it scale to significantly harder RL problems? Does the technique stay computationally efficient (e.g., compared to PPO) in these harder environments? Could it be combined with reinforcement learning from human feedback (RLHF) in a practical manner?
Great presentation, very interesting approach. I'm curious about the intuition behind the ensemble... eager to read more. Thanks!
Thanks, Jim! The ensembling gives us much more robustness to noisy data and to settings with very few samples, so it lets us train models much more quickly than NN models.
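For anyone curious what that ensembling might look like in code, here is a minimal sketch in plain NumPy (an illustration under assumptions, not the authors' implementation; the function names are made up): bootstrap-resample the regression data, run a sequentially thresholded least-squares fit on each resample, and take the elementwise median of the coefficients, which tends to be far more robust to noise and to small data sets than a single fit.

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iters=10):
    """Sequentially thresholded least squares (the core SINDy-style solver)."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(Xi) < threshold            # prune small coefficients
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):            # refit surviving terms per state
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

def ensemble_fit(Theta, dXdt, n_models=20, threshold=0.1, seed=0):
    """Bag over bootstrap resamples; aggregate coefficients by their median."""
    rng = np.random.default_rng(seed)
    n = Theta.shape[0]
    coefs = [stlsq(Theta[idx], dXdt[idx], threshold)
             for idx in (rng.integers(0, n, size=n) for _ in range(n_models))]
    return np.median(np.stack(coefs), axis=0)
```

The median aggregation is the key design choice here: a few resamples that land on noisy or sparse regions of the data can produce wild coefficients, and the median simply ignores those outliers.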
Bold steps ... thrilling work! I look forward to working through the implementation.
Great work. This is fantastic!
Impressive! Thank you very much for sharing and for the inspiration.
Has your lab considered experimenting with Kolmogorov-Arnold Networks in combination with SINDy? It feels like a potentially excellent match.
Their approach to network sparsification, in particular, seems like it could be automated in a very interesting way via SINDy. In the recent paper they fix and prune activation functions by hand, but it seems that you could instead use SINDy to automatically fix a particular activation function once it fit a dictionary term beyond some threshold.
Love the presentation!
Neat idea -- definitely thinking about ways of connecting these topics. Thanks!
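To make the suggestion above concrete, here is a rough sketch of the thresholded dictionary-fit idea (hypothetical names and a toy library, not code from the KAN or SINDy papers): sample a learned 1-D activation over its input range, regress the samples against each candidate term, and "fix" the activation to a symbolic form only when a single term explains the data above a chosen R² threshold.

```python
import numpy as np

# Toy dictionary of candidate symbolic forms for a learned 1-D activation.
LIBRARY = {
    "x":       lambda x: x,
    "x^2":     lambda x: x**2,
    "sin(x)":  lambda x: np.sin(x),
    "exp(x)":  lambda x: np.exp(x),
    "tanh(x)": lambda x: np.tanh(x),
}

def try_fix_activation(x, y, r2_threshold=0.99):
    """Return (term_name, coefficient, r2) if a single library term fits the
    activation samples (x, y) above r2_threshold; otherwise return None."""
    best = None
    for name, f in LIBRARY.items():
        phi = f(x)
        coef = phi @ y / (phi @ phi)              # one-term least squares
        r2 = 1.0 - np.var(y - coef * phi) / np.var(y)
        if r2 >= r2_threshold and (best is None or r2 > best[2]):
            best = (name, coef, r2)
    return best

# Example: an activation that effectively learned ~2*sin(x).
x = np.linspace(-3, 3, 200)
y = 2.0 * np.sin(x) + 0.01 * np.random.default_rng(0).standard_normal(200)
print(try_fix_activation(x, y))   # roughly ('sin(x)', 2.0, 0.999...)
```

In a real KAN one would presumably run such a check per edge after training, freeze the edges that pass the threshold, and keep learning the rest.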
Curious how the model fitting can accelerate the training process. Are there any assumptions on the action space / state space / environment? Thanks for your attention.
This is so amazing I don't have words. DeepMind made computers play Go and chess using reinforcement learning. It is simply superb.
Amazing. I've been looking for something like this.
Absolutely brilliant
Dear Sir,
If we want to use reinforcement learning (RL) in a specific environment, I am concerned that the trial-and-error method will result in many errors, some of which may have negative consequences. Furthermore, I am unsure how many attempts the RL model will need to reach the optimal and correct decision. How can this challenge be addressed?
Real AI is RL
Great video!
Thanks for the video!
Great!
Amazing
Interesting