Dr. Strogatz, thank you once again for a solid lecture on averaging theory for weakly nonlinear oscillators. Small perturbations of the harmonic oscillator, such as the van der Pol oscillator, are the classic weakly nonlinear systems.
And very good example with breathing!! Quite illuminating!
21:23 Why did the professor erase the r-dot and phi-dot equations after the student asked the question? I don't get it! I thought r-dot and phi-dot are non-autonomous (explicitly time-dependent)!
Comment: at 33:37, the integral may be done by the Leibniz rule, not the fundamental theorem of calculus.
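For anyone who wants it spelled out, the general Leibniz rule for differentiating an integral with moving limits is (this is just the standard statement, not necessarily the exact expression used on the board):

\[
\frac{d}{dt}\int_{a(t)}^{b(t)} f(s,t)\,ds
= f\big(b(t),t\big)\,b'(t) \;-\; f\big(a(t),t\big)\,a'(t)
\;+\; \int_{a(t)}^{b(t)} \frac{\partial f}{\partial t}(s,t)\,ds .
\]

For a running average like \(\bar r(t) = \tfrac{1}{T}\int_{t}^{t+T} r(s)\,ds\), where the integrand does not depend on \(t\), only the boundary terms survive and you get \(\dot{\bar r}(t) = \big(r(t+T) - r(t)\big)/T\).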
@46:00, can anyone tell me where the O(e^2) (e is epsilon) comes from? What theorem is that??
38:28 (well, the camera is not focusing where it should be): it is due to the fact that the r-dot and phi-dot equations (before introducing the O(e) terms) already contain e as a multiplier. Introducing an additional O(e) error into the phi term (recall that phi appears in both the r-dot and phi-dot equations) therefore multiplies one factor of e by another, doubling the power of e in the error, i.e. giving O(e^2).
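To spell the order-counting out (using the notation I remember from the lecture and from Strogatz's book; the exact signs may differ from what is on the board), the transformed equations are

\[
\dot r = \epsilon\, h \sin(t+\phi), \qquad r\,\dot\phi = \epsilon\, h \cos(t+\phi).
\]

Since \(\dot\phi = O(\epsilon)\), the phase \(\phi\) drifts by only \(O(\epsilon)\) over one \(2\pi\) period. Replacing \(\phi\) by \(\phi + O(\epsilon)\) changes \(\sin(t+\phi)\) and \(\cos(t+\phi)\) by \(O(\epsilon)\), and those changes are multiplied by the overall factor of \(\epsilon\), so the error committed in \(\dot r\) and \(r\dot\phi\) is \(\epsilon \cdot O(\epsilon) = O(\epsilon^2)\).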
Is there a reason lecturers always avoid writing on "the sacred third board" underneath the two sliding ones?
At 2:55 he says h could also be written as h(t, x, x-dot). Does anyone know what the result of that would be?
So what he means by having time dependence in the term h is really just having the system forced by some sort of time-dependent forcing, like a sine wave. A good example is the forced Duffing oscillator, where you assume the forcing is of order epsilon, O(e^1), and lump that term in with the other damping and nonlinear terms.
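For concreteness, a weakly damped, weakly forced Duffing oscillator could be written as (the choice of coefficient names here is my own)

\[
\ddot x + x + \epsilon\left(\delta \dot x + x^{3} - F\cos\omega t\right) = 0,
\]

which is the standard form \(\ddot x + x + \epsilon h = 0\) with \(h(x,\dot x, t) = \delta \dot x + x^{3} - F\cos\omega t\): the \(O(\epsilon)\) forcing is lumped into h together with the damping and the cubic term.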
Statement 1: r varies slowly compared to the original signal.
Statement 2: dr/dt (and hence r as well) and the original signal seem to have the same frequency, judging from their sinusoidal equations.
Not sure if anyone else has this confusion. I agree that the absolute magnitude of dr/dt seems to be small, but small is not the same as slow. And if you average, the original signal gets averaged just as much as this r signal does. Or am I missing something serious?
dr/dt doesn't fluctuate, so it can't have a frequency. dr/dt is the rate of change of amplitude as illustrated by the professor at 58:15. I don't follow what you mean by the original signal getting averaged as much as the r signal. We're not averaging the whole original r, just small snippets of it on a time scale of one oscillation (or 2pi period).
Think of the original "r" as being a ramp function with an extremely gradual ascent over a very long period of time. It would appear nearly horizontal on the time scale of, say, a nanosecond. Now, imagine zooming into the ramp function all the way to that time scale and approximating it with a step function, where each step indicates an approximation for the value of the ramp function over that minuscule time scale. The shorter our time scale, the more faithful our approximation to the actual ramp.
This is essentially what we're doing with "r" in the problem. The short time scale, in the context of this problem, is the period of one oscillation, and each step of the step function represents the moving average over a window one oscillation period wide.
It's analogous to approximating a ramp using square pixels. If the pixels are small enough, the approximation looks nearly indistinguishable from an actual ramp. These "tiny approximations" that constitute the ramp can be likened to what the professor calls the "moving average".
If you like, another way of looking at it is by redefining the "dt" component of "dr/dt" to mean the time taken for one oscillation, rather than an infinitesimal instant. It is merely a less precise version of the derivative operation.
In other words, rather than using the successive values of "r" registered at an instant (as you would to calculate the regular "dr/dt"), you're using the average value of "r" over a window of time (in this case, the time taken for one oscillation) to calculate an approximation for "dr/dt". To complete the process, you'd have to carry out this crude derivative operation all along "r", essentially moving the window of time along with you as you do so. Hence the term "moving average".
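If anyone wants to see this numerically, here is a rough sketch (my own illustration, not from the lecture) using the van der Pol oscillator \(\ddot x + \epsilon (x^2 - 1)\dot x + x = 0\): it compares the instantaneous amplitude \(r = \sqrt{x^2 + \dot x^2}\), its moving average over one \(2\pi\) period, and the prediction of the averaged equation \(\dot r = \tfrac{\epsilon r}{8}(4 - r^2)\). The parameter values and the window size are my own choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1                     # small parameter (my choice)
r0 = 0.5                      # initial amplitude: x(0) = r0, x'(0) = 0

def vdp(t, y):
    # van der Pol oscillator x'' + eps*(x^2 - 1)*x' + x = 0 as a first-order system
    x, v = y
    return [v, -x - eps * (x**2 - 1.0) * v]

t = np.linspace(0.0, 100.0, 20001)
sol = solve_ivp(vdp, (t[0], t[-1]), [r0, 0.0], t_eval=t, rtol=1e-8, atol=1e-10)
x, v = sol.y

# Instantaneous amplitude: wiggles a little within each cycle, drifts slowly overall.
r_inst = np.sqrt(x**2 + v**2)

# Moving average over a window of one period, T = 2*pi (crude near the ends of the array).
dt = t[1] - t[0]
win = int(round(2.0 * np.pi / dt))
r_avg = np.convolve(r_inst, np.ones(win) / win, mode="same")

# Slowly varying envelope predicted by the averaged equation dr/dt = (eps*r/8)*(4 - r^2).
r_pred = 2.0 / np.sqrt(1.0 + (4.0 / r0**2 - 1.0) * np.exp(-eps * t))

i = len(t) // 2               # compare well away from the array ends
print(f"t = {t[i]:.1f}: instantaneous r = {r_inst[i]:.3f}, "
      f"one-period moving average = {r_avg[i]:.3f}, averaged prediction = {r_pred[i]:.3f}")
```

The one-period moving average smooths out the within-cycle wiggle and tracks the slow envelope, agreeing with the averaged-equation prediction to O(\(\epsilon\)).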
One point: around minute 11, he defines the r and phi functions. They only make sense if we are not at a fixed point.
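For reference, the definitions (as I understood them) are \(x = r\cos(t+\phi)\), \(\dot x = -r\sin(t+\phi)\), so that

\[
r = \sqrt{x^{2} + \dot x^{2}} .
\]

At the origin \(x = \dot x = 0\) this gives \(r = 0\) and the phase \(\phi\) is undefined, which is why the change of variables only makes sense away from that point.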
Extraordinarily beautiful!
Wow, the method of calculating the integral of sin^2(theta) was simply brilliant!
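In case it helps anyone, the trick (as I understood it) is that over a full period sin^2 and cos^2 have the same average and they sum to 1, so no actual integration is needed:

\[
\langle \sin^{2}\theta \rangle = \langle \cos^{2}\theta \rangle
= \tfrac{1}{2}\,\langle \sin^{2}\theta + \cos^{2}\theta \rangle = \tfrac{1}{2},
\qquad\text{hence}\qquad
\frac{1}{2\pi}\int_{0}^{2\pi} \sin^{2}\theta\, d\theta = \frac{1}{2}.
\]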
thank you
good thank you
great
1:02:00 went wild
lol