Please learn how to pronounce Runge. You don't have to be perfect, but at least try.
You know what's crazy? I was pronouncing it the correct way my whole life. But I watched one video during research that pronounced it the wrong way, and for some reason decided that they were right and I was wrong. So I decided to intentionally change how I pronounce it to the wrong way.
I'm actually so dumb lmao
I've heard people pronounce it "rungie", so it's at least better than that. Haha
@@Phanimations that's crazy; on a similar note, I'm convinced that Italian physicists can't pronounce "Dirac" correctly; no professor at my uni called the poor guy right.
(Hasty generalization I know, kinda funny tho)
@@GeodesicBruh let me guess, they say it like dee-rac?
@@Phanimations English: /ˈrʊŋəˈkʊtɑː/
Great video! It's funny that 3Blue1Brown's manim environment became the official animation framework of YouTube math videos. Every time I see a video made with manim, before watching it, I know it's gonna be a good one. It never disappoints.
I agree
My favorite quote from engineering courses is: everything is linear if you look at it closely enough.
valid
Chaos Theory: "Am I a joke to you?"
@@alexandervorgias4812 also valid
100% everything has a linear relationship. If it weren't for the additive identity (1 + 0 = 1), no other field within mathematics would be possible.
@@alexandervorgias4812 Nope, even within Chaos Theory. There are linear relationships even if we don't have the ability to recognize them.
Runge-Kutta still leaks energy.
For the equations of motion, the one integration scheme to rule them all is Velocity-Verlet. That conserves energy _exactly_ (apart from floating-point round-off), is 2nd order, and is just as computationally cheap as Euler.
To make it easy for others: en.wikipedia.org/wiki/Verlet_integration#Velocity_Verlet
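For anyone who wants to try it, here is a minimal Python sketch of the velocity-Verlet update for a mass on a spring (the names m, k, dt and the unit values are my assumptions, not code from the video):

```python
m, k, dt = 1.0, 1.0, 0.1                 # assumed mass, spring constant, time step

def accel(x):
    # Hooke's law: F = -k x, so a = -(k/m) x
    return -(k / m) * x

x, v = 1.0, 0.0                          # start stretched, at rest
a = accel(x)
for _ in range(1000):
    x = x + v * dt + 0.5 * a * dt**2     # position update
    a_new = accel(x)                     # the only force evaluation this step
    v = v + 0.5 * (a + a_new) * dt       # velocity uses old and new acceleration
    a = a_new
```

Caching the acceleration like this keeps the cost at one force evaluation per step, the same as Euler.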
Yes, I decided to leave off Symplectic Integrators. But of course, Verlet and Velocity Verlet would have been better options.
Seeing a lot of comments about this, I will probably do a sequel to this video in the future
I'm a big fan of Verlet systems too! So simple and so insanely stable.
@@akaHarvesteR Verlet was my professor at university for one year.
And RKF78 is nearly symplectic ;)
This is a great intro to approximation schemes! Really well explained and I love how you included all the animations to visualize what happens for each method. It was very helpful to see how drastic an effect the lack of energy conservation can have.
Idk man, seems like a YouTube glitch or something, but it's missing a 'k' after 898 in your sub count. Hope YouTube fixes the issue soon. Once again, thanks for a great video
there’s a k now
It's fixed
16.9K, can’t wait to see it grow more!
22.3K on Oct 12th, 2024.
We can model the hidden "subscriber function". 🤓
You've really made numerical methods an interesting area for me.
Hahaha, thanks for the Vsauce callback at 1:08
Manchurian candidate activation signal @ 1:08
❤
Excellent video! The visuals and the voice over were spot on :) you’ve made a great addition to the set of SoME videos.
That honestly is a great video, keep up the good work! (And so cool there's Ondine at the beginning of the video.)
Thanks! [ And Scarbo at the end :) ]
I love this kind of video. I work on driveshafts, which are rotational mass-spring-damper systems to some degree. I loved doing the Taylor expansion, and I wrote some homework about how much accuracy you gain for the increased compute as you increase the order.
I really didn't expect to hear Gaspard de la Nuit :)
I do try to have good music lol
This video is absolutely beautiful, though it takes some time to fully digest.
Amazing video ! I loved your pace and your little jokes, it really helps staying engaged with your presentation. The visualisations are of course really good too. :)
Wow, I honestly had no idea these methods were even connected! Thank you for the straightforward explanation and visualizations. Top notch content, Sir.... 👍
The RK algorithms are a very fascinating topic, and I've even implemented a few of them in a C++ application before, specifically RK4. Yet I still feel that FFTs and their inverses are some of the most interesting algorithms out there. Complex vector and field analysis, Hamiltonians, especially the quaternions and octonions, and so much more are all interesting topics. We stand on the shoulders of giants! I truly enjoy videos like these, keep up the great work!
Thanks! FFTs are a topic I think deserve a longer and more dedicated video, but I'm considering doing one on them.
Even for this, I barely touched the surface level of what RK is (didn't even derive RK4 lol), just wanted to provide some intuition for it. Both for the motivation, and why it would be more accurate than a simple Euler-Cromer method
@@Phanimations Not exactly, but it's similar to why quaternions prove to work better than Euler angles when performing rotations in 3D about the independent axes.
When using Euler angles there is the phenomenon of gimbal lock within the rotation matrices. This is where one axis ends up being rotated onto another so that they become coincident, and from there you lose a degree of freedom, as the two axes are locked and you can no longer differentiate between them.
Quaternions help prevent this. Also, even though their mathematical notation and expressions are fairly complex, implementing quaternions in software is fairly trivial, and they have the nice added benefit of being able to be calculated against other vectors and matrices as well as converted to them. Because of this, they are computationally cheap, very efficient, and quite effective.
It's not exactly the same, but it goes to show that the various Euler methods, although simpler to digest and work out by hand, also have their shortcomings.
FFTs just provide a very good and efficient way to transform from one domain to another, especially when working with wave patterns or anything with a frequency domain.
Without FFTs, audio processing, whether for a WAV file, a MIDI file, or even an MP3 file, wouldn't be as efficient as it is today. Audio, even when compressed, requires a lot of information and can be fairly computationally expensive. FFTs reduce that by a couple of orders of magnitude. Instead of trying to perform 20 thousand sine or cosine function calls per second for a 20 kHz frequency, we can just sample it and use the sample rate to reconstruct a good enough approximation of the individual waves. Well, sort of, as that's the abridged version.
But yeah, I find it all very interesting and intriguing. I'm not just intrigued by this type of stuff either: I'm also intrigued by 3D graphics rendering, game engine physics simulations, and actual CPU hardware design (ISA design). Then again, this gets into physics when you go beyond the logic-device level and into the actual structure of the transistors, resistors, etc. that are designed to manipulate electricity. And here we are again, with wave propagation. Right back to the use of wave functions and the power of FFTs lol!
I just like things related to engineering. Factorio, Satisfactory, Dyson Sphere Program, Oxygen Not Included, Planet Crafter, Mindustry, Turing Complete, etc.: they're all part of my Steam library and are my hobbies. And I'm no stranger to music, as I played the trumpet for close to 10 years back in my school days.
If you like this and Fourier methods, you should check out dispersion and dissipation analysis (sometimes referred to as "Fourier analysis") for ODE solvers (and PDE solvers too, but that's a bit more complex). It essentially lets you understand how a solver will respond to any initial condition of a linear problem.
@@jameswright4732 I've written a couple of simple ODEs.
You need to look into Clifford Algebras and Geometric Calculus. Try Macdonald's two books "Linear and Geometric Algebra" and "Vector and Geometric Calculus". Books are small and concise. He and a couple other people really blew the field open not long ago. Tensors and quaternions are subsets. Clifford makes them much simpler. Computer graphics is using this now, especially sims and games.
Perfect timing for you to post this video this semester
Nice video!
However, at 9:38 it should be noted that the "order" of a method does not refer to the number of terms/stages (k1, k2), but rather to where the Taylor series is truncated. This means that a 2nd-order method will exactly match the Taylor series up to the Δt^2/2! term within each time step, while the following terms (Δt^3/3!, ...) are inexact or missing.
For some fully implicit methods (Gauss-Legendre), the order can be two times the number of stages. (They're computationally expensive and I wouldn't recommend using them a lot, but they provide impressive results for large time steps.)
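To put the same point in symbols (my notation, not the video's): a method of order p reproduces the Taylor expansion of the true solution through the Δt^p term, so the local error per step is O(Δt^(p+1)) and the global error over a fixed interval is O(Δt^p). For a 2nd-order method:

```latex
x(t_i + \Delta t) = x(t_i) + \Delta t\, x'(t_i) + \frac{(\Delta t)^2}{2!}\, x''(t_i)
  + \underbrace{O\!\left((\Delta t)^3\right)}_{\text{local error of a 2nd-order method}}
```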
Thank you for mentioning this. I could have included it; however, since I didn't go through the Taylor series derivation, I thought it would just confuse most viewers.
I always learn something new and interesting from your content!
Very nice and informative video, Sir!!! Looking forward to more such content.
Thanks man, fantastic explanation! Looking forward to more videos of yours.
Music choice is splendid. Love Ravel.
Dude, I just found out about you and I just want to say I loved this video, ABSOLUTELY!!!
3 mins in and it's already getting interesting; never disappoints
Thanks for this. I'll apply this to orbit trajectories
Another method (or sub-method) that maybe deserved a mention is so-called leapfrog integration, where the derivative for x_i is taken from the previous tick's acceleration and a value extrapolated halfway towards the next one.
It's sort of similar to RK2, but the samples are offset back by one half of a tick.
It's relatively stable, and unlike RK2, you don't actually need to compute the derivative twice for each tick, as the first one is carried in from the previous iteration.
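A sketch of the kick-drift-kick form of leapfrog in Python (variable names and the spring force are assumptions, as in the Verlet sketch above); algebraically it is the same update as velocity Verlet, just bookkept with half-step velocities:

```python
k, m, dt = 1.0, 1.0, 0.1

def accel(x):
    return -(k / m) * x          # assumed spring force

x, v = 1.0, 0.0
a = accel(x)                     # carried in from the previous iteration
for _ in range(1000):
    v_half = v + 0.5 * a * dt    # kick: half-step the velocity
    x = x + v_half * dt          # drift: full-step the position
    a = accel(x)                 # the only derivative evaluation this tick
    v = v_half + 0.5 * a * dt    # kick: finish the velocity step
```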
Yes, this is also a good method. I just wanted to keep the video simple. Perhaps I should have teased the sequel video at the end. I'm getting a lot of responses about this, so I think I'm going to make a sequel to this video covering Symplectic Integrators, along with some others like leapfrog
@@Phanimations nice! Looking forward to that one! 😄
Clear and concise. Thanks!
2 minutes in, I can already say: I like your style! Just one remark: I think it would be better to clarify that we can best describe nature with the help of differential equations, instead of saying that nature is governed by differential equations (even if it were true that we live in a simulation, this would be outside the scope of scientific thinking). This reminds me of talking about charged particles "feeling" a force and thereby intentionally reacting to it, or explaining Darwinism by active adaptation to a changing environment, or even "lonely atoms which want to form bonds to share electrons" to fulfill some God-given octet rule so that all of them can live a long and happy life, and every other thing our teachers taught us "although we should keep in mind that this is just a simplification" - although the concepts you are about to explain (from my point of view at this moment in time, at minute 2 of your video) exactly oppose these views, of course. But keep in mind: now the maths begins, and view counts will only drop from here on.
To be clear: I really like your video! Narration, animation, general style: wonderful! And I had a look at your channel, and I will watch a few more of your videos.
Thank you for the feedback, yes I should have been more clear about that. I was mostly focused on the mathematics here, but of course, everything physical we describe with mathematics is just a model. I'll keep this mentality in mind for future videos.
I think metaphysics is a part of science. Many great scientific insights came from studying metaphysics, i.e. what is true, how much can we know of truth, and what is that truth composed of?
And there may be a correspondence between our models and reality, i.e. reality = diff equations. It's interesting enough for you to bring up, and I think it helps strengthen our minds to talk about this...
And it can give us insights into math from the point of view of physics, and vice versa.
Great video and VPython code! Thanks for sharing. Just one heads-up: I think there's a tiny blunder in one of the equations around 2:20. The velocity goes with sin(), not cos(). And I also vote for modelled, not governed ;)
Nature is not governed by equations. It is modelled in equations.
It's governed by equations.
@@Yuri_alphq I don't think this is a debate that we'll be able to settle in a YouTube comment section lol
@@Yuri_alphq Prove it
Actually nature doesn't give a fk about what we think; it has some rules/constants that cannot be crossed, and axioms such as discreteness and continuity, and it happens to be explained by differential equations because we cannot imagine a stopped frame of time and we need to measure change in variables (differentials) to understand the universe
@@Yuri_alphq Prove it
Omg, amazing. I'm a CFD engineer, and in my master's I learned all of this; we use it every day, but I never had a comprehensive intuition for the topic.
I loved the video. Numerical methods are a super interesting topic.
Wonderful perception ❤
I wish you had made this video during my computational physics class 😅. Nevertheless, thanks for your clear explanation. Deserves more subs.👍
My favourite astrophysics professor taught us to use symplectic integrators for orbital mechanics because they explicitly conserve energy.
Leapfrog 🔛🔝
Somehow you managed to squeeze half of my Computational Physics 1 exam into a 13-minute video
GAH!!! Nice video, but DAMN that flashbang at the end! 12 or so minutes of black followed by what felt like gluing my eyeballs to the sun! Anyway, you get a sub for quality content; keep it up.
Oh, this video is unfortunately a little bit late for me; I just had my numerical simulation exam last month ;) Still watching this video since it got recommended to me, and it's so fascinating how people came up with such things decades ago!!!
Amazing song choice, Ondine is beautiful
I'm not the only one that noticed
As a Physics student, these videos are great motivators.
Glad to hear it!
Damn, I had never understood RK methods before; I just knew they were better than Euler's. After you explained the explicit and implicit methods I was like "why don't you just average them?" - and apparently that's already RK2!
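That "just average them" step, written out as a sketch (Heun's method, i.e. explicit RK2, for a generic derivative f(x, t); the names are mine, not from the video):

```python
def rk2_step(f, x, t, dt):
    k1 = f(x, t)                     # slope at the start (the explicit direction)
    k2 = f(x + dt * k1, t + dt)      # slope at the predicted endpoint
    return x + 0.5 * dt * (k1 + k2)  # step with the average of the two slopes
```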
Loving Gaspard as the background❤
Thank you for this beautiful explanation
Great video! Right now I'm (trying to be) learning some numerical methods to solve math problems in the C language. Thanks!
I see manim in action, I sub
Incredible video!
Very clear! Implicit Gaussian Collocation for the win though! (For numerically fully conserving skew-symmetric use cases.) EDIT: But you already mentioned symplectic integrators in one of your responses.
Mhm, sequel video covering them coming soon! There are so many though haha; we'll see if I do end up including Gaussian Collocation
@@Phanimations Thanks! Subbed, as I'm looking forward to hearing more about (symplectic) integrators. I've used Gaussian Collocation for simulations with conservation accuracy down to machine precision (with the help of a skew-symmetric PDE form), which is cool, but I can't say I ever really understood what symplecticity means, nor how to derive, e.g., exponentially symplectic variants.
1:09 this sound is equivalent to the Undertaker's unexpected entry during an ongoing wrestling match 😅 I haven't watched the complete video; I'll leave a 'review' comment after watching it, but it seems the video will be awesome and informative.
Thank you for making me sleepy goodnight
Nice! I coded RK4 and other methods in Python for a 3-body problem simulation. RK4 and Velocity Verlet were way more stable than Euler or even a 2nd-order Taylor series when we consider conservation of energy. Thank you for the video.
Nothing in the universe is "powered by differential equations"!
Differential equations CAN DESCRIBE a lot of things.
This is very important to understand.
this is true. the universe is powered by weed.
I think you meant to write a double negative sentence. Check again
I don't think it was meant literally.
CAN DESCRIBE imperfectly, I might add
Like he said, it's the language of nature
Clearly explained. Thanks.
0:21 Everything in nature isn't governed by differential equations (DEs); DEs describe nature, they don't govern it.
I know it can be seen as a nitpick, but I felt that the semantic difference between 'governing' and 'describing' was big enough to warrant the comment. The rest of the video was great!
I would point out that even the "exact" answer is an approximation, because you have to approximate the value of sine or cosine in order to draw the graph or get a numerical result for the position of the object on the spring. Now I know that you can easily calculate the value of the trigonometric functions to far more accuracy than you need -- but those numbers are still calculated by an approximation algorithm.
Of course this is correct. I figured including this would be a bit off topic, as the approximation we're concerned with in the video is of the "initial value problem" type rather than for function values. Thank you for the comment though
Great video
Hey, a minor correction: Matlab's ode45 uses a 4- and 5-STAGE Runge-Kutta method (as it uses 5 k_i), but the 5-stage method is still an order-4 method, because (I think) it can exactly trace polynomials of degree 4 but not of degree 5.
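For anyone outside Matlab: SciPy's solve_ivp defaults to a comparable embedded Runge-Kutta 5(4) pair (Dormand-Prince). A minimal example for the spring, with the k = m = 1 setup assumed here rather than taken from the video:

```python
import numpy as np
from scipy.integrate import solve_ivp

def spring(t, y):
    x, v = y
    return [v, -x]                       # x' = v, v' = -(k/m) x with k = m = 1

sol = solve_ivp(spring, (0.0, 10.0), [1.0, 0.0], method="RK45", rtol=1e-8)
print(sol.y[0, -1], np.cos(10.0))        # numerical vs. exact position at t = 10
```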
Hey, Vsauce reference here!!!
From my days in college, when I took Numerical Analysis, I had an idea. Do the iterative equation that, with each iteration, produces Output values relative to increasing Time… BUT… for each Output value, use it as a STARTING VALUE for the iterative equation method known as *Picard’s Method.* When Picard’s Method iterates, it STAYS ON a single Time value (i.e. as you iterate, Picard’s Method doesn’t “move you” along the Time axis.) I never got to try my idea, but I always wanted to. Overall, the idea is that you keep switching back ‘n forth between an Increasing Time iterative equation AND a Picard’s Method iterative equation. For each “invocation” of Picard’s Method, you perform enough iterations until a suitable degree of convergence is achieved.
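One way to read that proposal in code (a sketch of the commenter's idea, not something from the video): at each time step, hold the target time fixed and Picard-iterate a fixed-point equation, here the implicit trapezoidal rule, until it converges, then advance. All names are hypothetical.

```python
def picard_trapezoid_step(f, x, t, dt, tol=1e-12, max_iter=50):
    # Fixed-point (Picard-style) iteration on
    #   x_new = x + dt/2 * ( f(x, t) + f(x_new, t + dt) )
    x_new = x + dt * f(x, t)             # explicit Euler as the starting value
    for _ in range(max_iter):
        x_next = x + 0.5 * dt * (f(x, t) + f(x_new, t + dt))
        if abs(x_next - x_new) < tol:    # suitable degree of convergence reached
            return x_next
        x_new = x_next
    return x_new
```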
It's easy to see that the higher-order algorithms are more exact per timestep, but they're also more computationally expensive because of calculating multiple derivatives per timestep. It would've been nice to see how exact each algorithm is per derivative evaluation, because it might be more computationally efficient to use a smaller time interval with a lower-order algorithm than to use a higher-order algorithm.
Hm, well I didn't show this directly. But notice that the Euler time step with dt = 0.02 is still worse than RK4 with a 0.1 time step
The other end of the spring is fastened to a Hooke.
This is great. You seem to be getting a bit of flak from the very knowledgeable on the subject, but I feel the video is directed at those like me, who *in theory* can follow the equations but have trouble with the 'where are we going with this' part. And in that respect the video succeeds with flying colours. So, thank you!
For any linear system (such as the one modeled here) a discrete state-space model can be accurate even with a coarse time step. If you model a single iteration accurately, then you have a template that can be applied simply at each iteration.
Yes, this is of course true. I'm covering other systems in another video that should be out within ~1 month (maybe longer). That one will focus on symplectic integrators
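A sketch of that template idea for the spring, assuming SciPy and the state vector z = (x, v) with dynamics z' = Az: build the one-step transition matrix exp(A dt) once, and every iteration is a single matrix-vector product that is exact up to round-off, even for a coarse step.

```python
import numpy as np
from scipy.linalg import expm

k, m, dt = 1.0, 1.0, 0.5                 # deliberately coarse time step
A = np.array([[0.0,    1.0],
              [-k / m, 0.0]])            # z' = A z for the undamped spring
Phi = expm(A * dt)                       # exact one-step template

z = np.array([1.0, 0.0])                 # start stretched, at rest
for _ in range(100):
    z = Phi @ z                          # apply the template each iteration
```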
Thank you so much! I hear about Runge-Kutta so often at the lab but never understood it until now! But it bothers me that pretty much the same math (at least in my brain) has so many different names: finite difference, Euler, Runge-Kutta, Taylor expansion... I am bad with names :')
It's similar, but there are differences, as I outlined in the video. The point of this was mostly to show why we use "RK4", and what it is, since that term is often thrown around without actually understanding how it works
@@Phanimations Yeah a lot of people in my field just say "we use Runge-Kutta" then use ODE45 without thinking about what's behind the scene. The video is great!
One nifty thing is that you can use the difference between k2 and k3 to estimate the error and adapt your timestep dynamically.
Yes! I really wanted to incorporate this into the video, but I wanted to get it out before SoMEπ ended, so I ended up not incorporating it. I'm not sure where I'll do this, but maybe I'll make a video on my Patreon or a second channel demonstrating this.
Otherwise, a sequel covering symplectic integrators will be coming at some point!
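For the curious, here is the general shape of that trick, sketched with a simpler embedded pair (Euler inside Heun; the higher-order embedded pairs work the same way): compare two estimates of different order, treat their difference as the local error, and shrink or grow dt. All names here are assumptions, not the video's code.

```python
def adaptive_step(f, x, t, dt, tol=1e-6):
    k1 = f(x, t)
    k2 = f(x + dt * k1, t + dt)
    x_low = x + dt * k1                  # 1st-order (Euler) estimate
    x_high = x + 0.5 * dt * (k1 + k2)    # 2nd-order (Heun) estimate
    err = abs(x_high - x_low)            # embedded local-error estimate, no extra cost
    if err > tol:
        return x, t, 0.5 * dt            # reject the step, retry smaller
    return x_high, t + dt, 1.5 * dt      # accept, and try a larger step next
```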
I recall Euler's method being introduced to me explicitly as a tool that does not produce good approximations, but rather convergent ones, which is useful for proving existence of actual solutions.
Hm, well, Euler's method can be convergent. However, as I showed in the video, for many systems errors will cause it to diverge quickly.
I think I learned some things. sqrt(k/m) is an angular frequency, so sqrt(k/m)·t is an angle. And because the solution uses regular trig functions, and not hyperbolic functions, it has an imaginary component in the exponent. See, e^t is hyperbolic, e^(i*t) is trig. That runs counter to what I was thinking about massless stuff being trig. So, there must be a situation where both are true. Maybe the spring constant is the angular part.
0:28 and the way I wander through the vastness of space
Make more videos like this, please!
At 9:05, aren't you missing a dT in the x-term for the new point? It should be Xi + (dT / 2)*(K1 + K2) as mentioned in the last bullet point?
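For reference, the standard explicit RK2 (Heun) step does carry the Δt factor in that spot; in the usual notation:

```latex
k_1 = f(x_i, t_i), \qquad
k_2 = f\!\left(x_i + \Delta t\, k_1,\; t_i + \Delta t\right), \qquad
x_{i+1} = x_i + \frac{\Delta t}{2}\left(k_1 + k_2\right)
```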
Check out the semi-implicit Euler method. It's especially important because it preserves energy very well for small enough regular time steps.
Couldn't you just make energy conservation explicit? That is, calculate the total kinetic + potential energy in the system at t0 and then adjust velocity or velocities at every subsequent step to force the total energy to match?
No, because energy conservation is fundamentally at odds with "velocity adjustments" (i.e. impulses.) In other words, they are what got you there in the first place. By the time you've noticed a physically incorrect circumstance, it's already too late to "fix" it in a physically correct way.
For some very simple and often frictionless contexts, we actually do have exact solutions in terms of energy conservation, but for almost all Lagrangians we have only approximation methods, and we can only improve their accuracy by really including the higher-order terms.
I think the presentation went too fast in a few key moments, like when you define the implicit scheme and how to actually calculate it
@@sheevys thanks for the feedback. I’ll be more careful about that next time!
Is that Rousseau’s piano? I swear it sounds just like his piano, I’m so used to his tuning
Nope. Though, it does sound similar I agree.
All the music I use in this video is in the description. Even with classical music I'm trying to only use stuff that's either public domain or Creative Commons licensed
How about symplectic integrators?
Hm, could be a good topic, maybe as a sort of sequel to this video?
I'm trying to not present topics in a super dry manner. I'd rather motivate them first, so perhaps continuing the conservation of Energy throughline (or Hamiltonian ig) would be good for that. Thanks for the suggestion.
I'm surprised no one seems to have brought this up, but for v(t) at 2:30, it should be sin, not cos
Can you make a video on symplectic integrators?
I found this video fascinating, and very cool overall. [Subscribed]
It is surprising how rare it is to see the words "Runge-Kutta" compared to Euler tho.
However, with deep respect re: @0:20, NOTHING in nature is "governed" by differential equations, rather differential equations allow us to see how nature is governed.
(nice bonus at the end!)
Another integration method you can consider is the Verlet one: third-order error for position and second-order error for velocity. It is used heavily in games, since we also care about object interactions, and with Verlet this is really easy. We can enforce non-penetration constraints without necessarily applying a force to those objects, just by displacing their positions, and still not completely break the system. Obviously not physically correct, but robust and somewhat believable.
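A minimal sketch of that game-style trick (position Verlet plus a distance-constraint projection; the names and setup are hypothetical): because velocity is implicit in the pair (x, x_prev), you can move positions directly to satisfy constraints without completely breaking the integrator. Games typically run a few rounds of such projections per frame.

```python
import numpy as np

def verlet_step(x, x_prev, a, dt):
    # Position Verlet: velocity is implicit in (x - x_prev)
    return 2.0 * x - x_prev + a * dt**2, x

def enforce_distance(p, q, rest_length):
    # Project two points back to a fixed separation, moving each halfway.
    delta = q - p
    dist = np.linalg.norm(delta)
    corr = 0.5 * (dist - rest_length) * delta / dist
    return p + corr, q - corr
```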
Thank you yes, I'm going to do a video soon on symplectic integrators
So, one gripe... Nature isn't GOVERNED by differential equations... it can be modeled by them... and really only a tiny portion of nature at that. It's not like Nature was consulting a math text and decided, hey... that sounds fun...
During university I did a project on halo orbits and used an RK of order 10. During the exam the professor asked why I didn't use a symplectic method (one that preserves the energy): RK still had an energy error on the order of machine precision and was much faster.
yeah RK is really good for a lot of things. Also, symplectic integrators are also not 100% accurate anyways. Though, Velocity Verlet is faster than RK4 and is quite good as well
@@Phanimations The symplectic one I tried was leapfrog, but the second-derivative computations for gravity in a rotating system were quite heavy.
I am not sure if I'm right or wrong, but at 2:36 I believe that v(t) should have sin(.) instead of cos(.) since we are taking the derivative of x(t), which contains cos(.).
I independently reinvented RK2 in high school. Very simple idea.
The set of equations at 2:31 doesn't make sense: by these two equations, x(t) and v(t) both reach extreme values at the same points (differentiate and set equal to zero). So if x(t) is at an extreme point, so is v(t). In reality, when an object connected to a spring oscillates, v(t) is zero when x(t) is at an extreme point; they are not both zero. Now you may claim "but I moved the graph left so zero is an extreme point". Alright, but in that case, what about the other extreme point, which is not at zero? There you will have x(t) not equal to zero, and v(t) should have been zero; but in your equations, v(t) will not be zero there either: v(t) will always be proportional, with a negative sign, to x(t). So... nope. Replace one of the cosines with a sine, please 🙂.
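For the record, differentiating the position solution gives the sine that these comments are pointing at:

```latex
x(t) = A\cos(\omega t), \qquad
v(t) = \frac{dx}{dt} = -A\,\omega\sin(\omega t), \qquad
\omega = \sqrt{k/m}
```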
You might think my comment is mean or unsupportive of your work. But actually I really enjoy watching your video. The problem is you didn't answer your title. You didn't explain why the method is better than Euler. You didn't derive it or show any reason why it is more stable. You just showed graphically how it plays out but never actually proved anything about it
Thank you for the feedback.
I think your concern comes from a differing opinion on what "why" means. I do concede that the title is not true in a strict mathematical sense (I didn't derive RK after all).
However, I did provide reasoning for why RK2 (and by proxy RK4) seem to conserve energy better than an Euler method. I could have mentioned Symplectic Integrators, or even the Euler-Cromer method (which only changes one term, yet conserves energy for these problems). RK methods also don't inherently conserve energy, they simply converge much faster.
I approached this video with the notion in mind that it is unclear why for certain systems, RK methods, even RK2, seem to simulate so much better than standard Euler Methods. I wanted to provide a sort of intuitive motivation, and I think I accomplished this.
You are correct in saying I did not mathematically prove anything about Runge-Kutta methods. This was never the intention. I apologize if you found the title misleading.
TL;DR, The "why" in the title is not the "why" of a mathematician. It's the "why" of an engineer or experimental physicist.
Cool [B)]
There are other methods which use a constraint to ensure energy is exactly conserved (perhaps at the cost of accuracy in other ways, or computational cost? Not sure) right?
Edit: nice bonus
Mhm. Though, Runge-Kutta methods are generally preferred in almost all cases. Euler's method is just generally a bit easier to implement.
There are of course many other approximation schemes.
Actually, there is a class of ODE solvers called Symplectic Integrators which work incredibly well for Hamiltonian differential equations. For example, if you're doing simulations of satellite orbits, a symplectic integrator will allow for accurate and stable simulations over very long periods of time.
@@jameswright4732 Yes, I was debating on including them in this video. I decided against it as I wanted to keep the focus narrow, and the video not too long. I may end up doing a sequel to this video covering Symplectic Integrators.
thank you!
missing a delta t in the formula for k2 at 8:50?
If one of the problems is unwanted gain or loss of energy as the approximation proceeds, are there methods that calculate the total energy initially and after each step, and compensate for energy gain or loss as they go?
The "easiest" step up from explicit euler and implicit euler is "semi implicit euler", because you just need to swap a line of code and get 10x better results than both methods. Runge Kutta 2+ is the step after that.
Would have been interesting to speak about leapfrog
Thanks for sharing your videos. I love to see how everyone has a different perspective. Would love to see more animations / videos on computational / numerical methods - difference equations, Runge-Kutta (regardless of pronunciation : ), Fourier transform. Check out GoldPlatedGoof ‘Fourier for the rest of us’.
I bet you could do a very interesting video on his Dot Product / Fourier relationship. The ability to represent any curve with Fourier epicycles is truly mindblowing! Thanks, keep it up!
I saw one comment about how differential equations only "model reality and not govern it", and thought "hey, sometimes one or two random philosophical comments are good" and upvoted it. Then I realised half of this comment section is obsessed with that one phrasing for some reason 😄
Working with scientists, I've gotten used to the "this system is governed by" phrasing so much that to me it seems a weird thing to get hung up on. But I guess it's never a bad thing to get a reminder that the map is not the territory, or even a bunch of redundant reminders 😂
Wish your channel existed more than a decade ago
I would love to see similar coverage of symplectic and stiff methods. Say, leapfrog and Adams-Bashforth.
You can calculate the acceleration from the position, but how do you know what f(x,t) is for the spring? Like, if you told me the displacement and time of the mass, I could not tell you what the derivative should be. How does he actually calculate this!? I feel a bit dumb
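If it helps: for the spring there is nothing extra to know; the derivative function comes straight from Hooke's law, so the displacement alone determines the acceleration. A sketch, with assumed constants k and m:

```python
k, m = 1.0, 1.0                     # assumed spring constant and mass

def f(state, t):
    x, v = state
    # Hooke's law: F = -k x, so a = F/m = -(k/m) x.
    # t never appears because the spring force is time-independent.
    return (v, -(k / m) * x)        # (dx/dt, dv/dt)
```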
I would say the language of Nature is conservation, expressed in mathematical terms with differential equations.
Shouldn't K2 also have a dT term attached to it?
At 6:40, how do we know f(i+1) yet?
Always this phrasing. Nothing is governed by differential equations. Differential equations are part of a language that is made to describe the physical world. Just like natural language can describe feelings (for example); it doesn't govern them though. Also, "free bonus at the end" - do I have to pay for the first part of the video?
Leapfrog KDK (or DKD) is generally an overall better pick in applications, I think