12:23 The canonical equations do not contain partial derivatives of the Hamiltonian with respect to p-dot or q-dot.
great video as always
By slowly compressing a box, the internal pressure would rise and the walls of the box would heat up. The box would be held in the compressed position by closed brackets, so no energy input would be needed to keep it compressed. The heat of the walls could then be collected for electricity production or for heating, so all of the heat would go toward energy output.
Basically the idea is that in a general dynamical system that is sensitive to initial conditions, any path starting close to any other will diverge. Such systems, though, are too general and not representative of conservative systems. Within any small bundle of paths there is a special path that represents the conserved system, and we can think of all the others as "noise". These noisy paths are solutions of the general dynamical system (vacuously true) but not solutions of the conserved dynamical system.
If you perturb the true solution of the conserved system, you end up with a solution that is not conserved. If you ignore this constraint and treat your conserved dynamical system as non-conserved, then any error in your approximation will diverge, since as far as dynamical systems go they are equivalent "up to sensitivity". In a conserved system only one of those paths conserves the energy. Essentially there is an extra bit of info that distinguishes it as the path of the conserved system, which one can always use to check whether one has the right solution and compensate if not. This effectively creates a bound on the error, which removes the sensitivity to initial conditions.
Basically, if one thinks of the double pendulum, one could expect that any two paths from two different initial conditions could in theory "meet" at the same state but with different total energy. In fact, this can't happen, because some variable would have to differ to distinguish the two (and this will be some position or velocity). That is, in conservative systems there is really no sensitive dependence on initial conditions, because the conserved quantity selects out a special path; it is well defined and doesn't change throughout the approximation process. So the conserved quantity acts as a reference point that one can always verify against. Essentially the conserved quantity reduces the degrees of freedom. Of course, there may be very complex systems in which the possible conserved paths are so close together that they lie within the error of the conserved approximation, and this leads to a similar issue.
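The "reference point one can always verify" idea can be made concrete with a small sketch (my own illustration, not from the comment above): integrate a frictionless single pendulum with a plain Euler scheme and track the energy drift as a diagnostic. The function names, initial angle, and step size are all illustrative assumptions.

```python
import math

def pendulum_energy(theta, omega, g=9.81, L=1.0):
    # Total energy of a unit-mass point pendulum: kinetic + potential
    return 0.5 * (L * omega) ** 2 - g * L * math.cos(theta)

def euler_step(theta, omega, dt, g=9.81, L=1.0):
    # Plain (non-conservative) Euler step for the pendulum ODE
    return theta + dt * omega, omega - dt * (g / L) * math.sin(theta)

theta, omega = 2.0, 0.0
e0 = pendulum_energy(theta, omega)
for _ in range(5_000):
    theta, omega = euler_step(theta, omega, 0.001)

# |E - E0| is a cheap, always-available check: if it grows,
# the numerical path has left the true energy shell.
drift = abs(pendulum_energy(theta, omega) - e0)
print(drift)
```

Here a growing `drift` flags that the computed path cannot be the true conserved trajectory, even though it still looks like a plausible pendulum motion.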
But the double pendulum is sensitive to initial conditions even though it is a conservative system; gravity is the only force acting on it, and we know that's conservative!
@@mastershooter64 But that is the physics; it isn't built into the equations. The equations do not take into account that the system is conservative. That is the point.
That is, when you set up the differential equations there is nothing in them that says "this is a conserved system", which is why it doesn't match reality.
The actual ideal double pendulum is not sensitive to initial conditions. Yes, small differences will, over time, diverge... but they won't diverge exponentially, due to conservation of energy. If you don't take conservation of energy into account within the equations, then the initial conditions do diverge.
Clearly, the more degrees of freedom one has the worse the issue becomes, but there is a difference between the solutions of a system that doesn't conserve energy and one that does.
Essentially, the systems that don't conserve energy have a dense set of solutions, while those that do have a sparse set.
When you impose the conservation criterion on the system it changes the structure of the paths locally. There is a way to distinguish between two "close enough" paths: "this path comes from a conserved system and that one doesn't".
If you couldn't do that, there would exist no special integrators for conserved systems, because there would be no way to tell them apart from unconserved systems.
Basically, as I said, when you impose conservation on the system it essentially "removes" possible paths that do not have the same energy (outside of error), and so one can more easily "snap" to the correct dynamics within error (what that error is depends on the system and numerical stability, but it is always smaller than when one doesn't include the conservation).
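As a toy illustration of why such special integrators exist (a sketch I am adding, assuming a unit-mass harmonic oscillator; the function names and step counts are mine): plain explicit Euler lets the energy error grow without bound, while the symplectic Euler variant keeps it bounded.

```python
def energy(q, p):
    # Total energy of a unit-mass harmonic oscillator: H = (p^2 + q^2) / 2
    return 0.5 * (p**2 + q**2)

def explicit_euler(q, p, dt):
    # Generic ODE step: knows nothing about conservation
    return q + dt * p, p - dt * q

def symplectic_euler(q, p, dt):
    # Symplectic variant: update p first, then update q with the *new* p
    p_new = p - dt * q
    return q + dt * p_new, p_new

def drift(stepper, steps=10_000, dt=0.01):
    # Energy error after integrating for `steps` steps
    q, p = 1.0, 0.0
    e0 = energy(q, p)
    for _ in range(steps):
        q, p = stepper(q, p, dt)
    return abs(energy(q, p) - e0)

# Explicit Euler's energy error grows steadily;
# the symplectic integrator's stays bounded.
print(drift(explicit_euler), drift(symplectic_euler))
```

The two steppers differ only in which value of p is used for the position update, yet one respects the conserved structure and the other does not.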
For a non-conserved system every solution is a possible solution. In a conserved system only some of the non-conserved solutions will be correct: these are, if you want, "tagged" with the total energy, and every solution of the conserved system must have the same total energy (since it is conserved), while the non-conserved system will have solutions of all possible energies.
If you think of a bundle of solution paths for a dynamical system, you can compute the total energy along each path. Now select out only the paths that have the energy of the conserved system you are dealing with. Also, the energy must be constant along the selected paths (as the total energy may change with time along some paths). When you select out these paths you will, obviously, be left with far fewer paths than in the non-conserved case. It is this difference between the two that one can exploit.
An analogy might be the difference between roads and open terrain. The roads limit the possible movement, while on open terrain (analogous to a non-conserved system) any path is possible. If one gets off the road and there are other roads nearby, there is some "space" between them, so if you do not get too far off you can snap back onto the right road.
The way you can think about this is that you subtract an error term, computed from the total energy of the non-conserved solution, so that the new solution has the total energy you want.
E.g., if f(t) is a trajectory of the non-conserved system, it may be wrong in that it has energy Ef, which is not the energy e you want. So you adjust f to get a new path that has (or is closer to) the energy you want: h = f - (Ef - e)*g. Here h is the new solution, and its energy Eh is the conserved quantity you impose on the system (turning it into a conserved dynamical system). Setting e = Eh and plugging back in, one gets h = f - (Ef - Eh)*g. Then one can use the calculus of variations to find the g that gives the minimal change (if g = 0, h = f). But by construction h will, regardless of f, have the energy Eh, and so it is the solution with the total energy Eh that we want. h may not satisfy our initial conditions, though, as it may be some other solution of the conserved dynamical system. But if g is minimal, and we are stepping incrementally through the system and the numerical error is under control, then h will retain the initial conditions it is supposed to represent. (One could always backtrack to make sure h has the correct initial conditions within error.)
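A minimal sketch of this "snap back to the right energy" idea (using a simple rescaling in place of the variational correction g described above; the oscillator, step size, and names are my own assumptions):

```python
def energy(q, p):
    # H = (p^2 + q^2) / 2 for a unit-mass harmonic oscillator
    return 0.5 * (p**2 + q**2)

def euler_step(q, p, dt):
    # Plain Euler step: drifts off the energy shell over time
    return q + dt * p, p - dt * q

def project(q, p, e0):
    # "Snap" the state back onto the energy-e0 shell by rescaling.
    # (A crude stand-in for the minimal correction g in the comment.)
    scale = (e0 / energy(q, p)) ** 0.5
    return scale * q, scale * p

q, p = 1.0, 0.0
e0 = energy(q, p)
for _ in range(10_000):
    q, p = project(*euler_step(q, p, 0.01), e0)

# By construction the corrected path keeps the imposed energy
# (up to round-off), whatever the raw integrator does.
final_error = abs(energy(q, p) - e0)
print(final_error)
```

As in the comment's construction, the projection guarantees the imposed energy by construction but says nothing by itself about staying on the particular trajectory for the given initial conditions.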
Thank you very much for the great explanation.
Ruling out Darwin single, Huxley, and panspermia is a huge achievement
Could you please do a video explanation for "Implicit Neural Representations with Periodic Activation Functions"?
This may well be the video that draws me out of retirement. If someone wants to pay my health insurance... (:
Thank you thank you 🤖🤠💱🐎
algo upvote
How does this scale to complex systems? I wonder if the complexity of the loss function becomes expensive to back-propagate, etc.
What is the difference between a neural ODE and this?
Is this like Kalman filters as a NN?
I don't get what you mean by "noisy observation". Do you mean errors in measurements, or an ODE plus external noise (i.e., an SDE)?
If the second case, I don't think a pure Hamiltonian simulation is the right philosophy, since in an SDE the noise can kick your system onto trajectories that are impossible for the ODE.
The right way to learn an SDE would be fitting the data via the parameters that give the most probable paths (i.e., the trajectories that minimize the large-deviation rate functionals).
Thanks in advance, Leonardo
Hi, I have a small confusion... is it the derivative of H w.r.t. p_dot and q_dot, or just the derivative of H w.r.t. p and q?
I believe there is a typo. The correct form of the Hamiltonian equations should be:
\dot{q}_i = \frac{\partial H}{\partial p_i}
\dot{p}_i = -\frac{\partial H}{\partial q_i}
The derivation can be checked at the beginning of Chapter 7 in Landau's Mechanics or many other sources.
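For reference, a small sketch of these equations in code (my own illustration; the finite-difference helper and the oscillator Hamiltonian are assumptions, not from the video or comment): q̇ = ∂H/∂p and ṗ = -∂H/∂q.

```python
def hamilton_rhs(H, q, p, eps=1e-6):
    # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq,
    # with the partials approximated by central differences.
    dH_dp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    dH_dq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    return dH_dp, -dH_dq

# Unit-mass harmonic oscillator: H = (p^2 + q^2) / 2,
# so Hamilton's equations give q_dot = p and p_dot = -q.
H = lambda q, p: 0.5 * (p**2 + q**2)
q_dot, p_dot = hamilton_rhs(H, 1.0, 2.0)
print(q_dot, p_dot)  # close to (2.0, -1.0)
```

Note the derivatives are taken with respect to p and q themselves, never with respect to p-dot or q-dot.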
@@harrison-7h thank you for the clarification
Hi, may I know why your video is getting "re-uploaded"? I guess it is a second video like this...
What would the associated Koopman dynamical system look like?
The Hamiltonian (and other conserved quantities) are always Koopman eigenfunctions for eigenvalue 1. As far as I know, there is still very little research into the connections between the symplectic structure and the Koopman operator of such systems, although a few papers have come out and this seems like a promising avenue.
More specifically, the algebra of functions on the phase space of a Hamiltonian system is very rich as a Lie algebra; meanwhile, the Koopman eigenfunctions are already known to have an algebraic structure and, of course, form a basis of this same space. I would be very interested in, say, what the Lie bracket of two eigenfunctions looks like, etc.
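A quick sketch of the eigenvalue-1 claim (my own toy check, assuming the exactly solvable harmonic oscillator): the Koopman operator acts on observables by composition with the flow, and composing H with its own flow returns H unchanged.

```python
import math

def flow(q, p, t):
    # Exact Hamiltonian flow of H = (p^2 + q^2) / 2: a rotation in phase space
    c, s = math.cos(t), math.sin(t)
    return c * q + s * p, -s * q + c * p

def H(q, p):
    return 0.5 * (p**2 + q**2)

# Koopman operator: (K_t f)(x) = f(flow_t(x)).
# For the conserved H this gives K_t H = H, i.e. an eigenfunction
# with eigenvalue 1 for every t.
q0, p0 = 0.3, -1.2
deviation = max(abs(H(*flow(q0, p0, t)) - H(q0, p0)) for t in (0.1, 1.0, 10.0))
print(deviation)  # round-off level
```

The same check applies to any other conserved quantity of the flow, which is exactly the sense in which conserved quantities sit in the eigenvalue-1 eigenspace of K_t.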
How did she never win a Nobel Prize??!!
was a woman
Emmy Noether didn't receive the Nobel Prize because she was a mathematician.
And the Nobel Prize is awarded in 6 fields: physics, chemistry, medicine, economics, literature, and peace.
Read, my guy; then you might not ask stupid questions.
@@camerashysd7165 swear
Could you provide code ❤
The paper's code is open; you can find it in the paper.
Solution for the Three-body problem!