Now is a great time to have a fresh look at this video’s sponsor KiwiCo - head to www.kiwico.com/welchlabs and use code WELCHLABS for 50% off your first monthly club crate or 20% off your first Panda Crate.
Will your book be shipping outside America?
I just factor out the E, set 1-e*sin to zero and call it a day.
@@Anon54387
But be sure to make allowance for the fact that the kiwis are all upside down.
Since -0 = 0, this is very often a wash.
Also watch out for leap years. 2400 CE is going to be another weird one.
Using e for eccentricity is evil. Seeing a formula with e^n in it where e isn't Euler's number shakes me to my core.
I do a lot of disk work in astrophysics, and we use capital Σ as a parameter (surface mass density); I'm still not used to it.
@@VerrouSuo That is weird, usually I see lowercase sigma for surface density. Using capital greek letters as parameters is very weird
@ I always supposed it's because lowercase σ is the Stefan-Boltzmann constant, but I've also always used σ_b for that anyway.
(Incidentally, it's also the standard deviation, but while that comes up all the time when you analyze data, I never actually have to write it symbolically.)
Some use ε as eccentricity though...
@@CepheusMappy Yes. That's what I've always seen used. It's an e without being the symbol for Euler's number, essentially.
As a space nerd who loves this sort of math, I'm astounded that I never realized how central Kepler's equation was to the historical development of calculus!
And of complex analysis, more broadly I guess.
Oh man this brings back memories.
KSP used three different methods to compute E. The Newton-Raphson method actually turned out to be very unreliable for several cases, and would often blow up.
What worked pretty well was a binary partitioning method, where you start with a large correction and halve the search space at each iteration. It was slower to converge, but almost always found a decent solution.
There was also a dedicated solver for extreme eccentricities, and of course another one for hyperbolic cases.
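A minimal sketch of that "halve the search space" idea (my own reconstruction in Python, not KSP's actual code): for 0 ≤ e < 1 the root of E − e·sin(E) − M always lies within e of M, so a bisection bracket can't be escaped.

```python
import math

def kepler_bisect(M, e, tol=1e-12, max_iter=200):
    """Solve M = E - e*sin(E) for E by bisection (elliptic case, 0 <= e < 1)."""
    f = lambda E: E - e * math.sin(E) - M
    lo, hi = M - e, M + e              # E is always within e of M, so this brackets the root
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if abs(f(mid)) < tol:
            return mid
        if f(lo) * f(mid) <= 0:        # keep whichever half still brackets the sign change
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

E = kepler_bisect(1.0, 0.9)
print(E, E - 0.9 * math.sin(E))        # the second number should come back as ~1.0
```

Bisection only gains about one bit of accuracy per step, but the bracket guarantees it can't blow up the way Newton's method occasionally does, which matches the trade-off described above.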
Woah it's the man himself!
I was wondering how you knew this and then I saw your username.
I was working on a 2d orbital space game a few years ago and I wondered how KSP calculated orbits, and I guess now I know :)
Woah :D.
Would you approach it any differently today?
Do you remember under what circumstances Newton's method failed to converge for elliptical orbits? I know that hyperbolic orbits are problematic, especially with high e, but I'm surprised that Newton's method would ever fail for elliptical orbits.
I have a beautiful solution to Kepler's equation, but it could not be contained within the margins of this comment section
I get that reference
Good Now i will look for your proof
You must understand that Einstein's equations must be run assuming a flat Earth. Well done...
You don't need to solve it, and that's how it is solved 👉 The Connections (2021) [short documentary] ❤
@@WiseandVegan Lol. It's still an unsolved problem. Did you not watch the video?
3:00 That's slightly off: the orbit is a parabola only at exactly e=1. For greater values of e it's a hyperbola. Not that having the right name for the right conic section is all that important here.
Well, coincidentally, our teacher today began teaching about parabolas, and when he said that for e=1 the locus is a parabola, I got confused because in the video he mentioned something else 😅
@@PhoenixEditz69 If I recall correctly, a completely different approach is used for hyperbolic orbits. Parabolic orbits, which require e to be exactly 1, are as rare as perfectly circular orbits. We don't use Kepler's equation for that either, because we don't have to solve for E (eccentric anomaly) to get the true anomaly. We get the true anomaly directly from a much simpler equation.
@@isaacclark9825 Yeah, I also think that in real life parabolic trajectories are rare, since nothing is perfect, so hyperbolas are more relevant.
I don't know if I'm correct, but this could also be why we study hyperbolic geometry and spherical geometry at a higher level.
I haven't seen many people talk about parabolic geometry, especially as a high school student 😅
Most of the college stuff like multivariable calculus, differential equations, and astrophysics I have learnt through YT videos,
so finding the right material sometimes becomes tricky.
Like why eccentricity can't be zero
but rather only tends to zero 🤔
Though visually it seems correct, mathematically it's counter-intuitive.
@@isaacclark9825 e=0 for a circle
@@isaacclark9825 "We get the true anomaly from a much simpler equation" - yes, it's on the screen at 09:07! Barker's equation goes directly from true anomaly to time for parabolic orbits, with analytical solutions in both directions. Because comets have such high eccentricity, they can be approximated with a parabola close to the Sun - hence Halley's method mentioned at this point in the video.
For hyperbolic orbits you are right, we have a different equation, but it is similar in that it is also transcendental and requires similar numerical techniques to solve.
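For reference, the two equations being described, in standard textbook form (symbol choices are mine: q is the periapsis distance, ν the true anomaly, H the hyperbolic anomaly, μ the gravitational parameter):

```latex
% Barker's equation (parabolic, e = 1): a cubic in D = tan(nu/2), solvable in closed form
\sqrt{\frac{\mu}{2q^{3}}}\,(t - t_p) \;=\; D + \frac{D^{3}}{3}, \qquad D = \tan\frac{\nu}{2}

% Hyperbolic Kepler equation (e > 1): transcendental, just like the elliptic case
M_h \;=\; e\sinh H - H, \qquad M_h = \sqrt{\frac{\mu}{(-a)^{3}}}\,(t - t_p)
```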
Kepler's algorithm resembles a closed-loop control system in discrete steps, where the error signal is fed back with a proportional gain of 1. Some gains can make the closed loop unstable. This is an early version of a P-controller! You can analyse stability in terms of the z-transform of the open and closed loops.
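A small sketch of that feedback reading (names and structure are mine): the error M − (E − e·sin E) is fed back with a proportional gain K, and K = 1 recovers Kepler's scheme.

```python
import math

def kepler_feedback(M, e, K=1.0, iters=30):
    """Iterate E <- E + K*(M - (E - e*sin(E))): the error fed back with gain K.

    K = 1 is Kepler's scheme (the fixed point E = M + e*sin(E)); near the root the
    update factor is 1 - K*(1 - e*cos(E)), so the loop is stable when its magnitude < 1.
    """
    E = M
    for _ in range(iters):
        error = M - (E - e * math.sin(E))   # "target" mean anomaly minus the predicted one
        E = E + K * error
    return E

print(kepler_feedback(1.0, 0.3))            # converges nicely for small e
print(kepler_feedback(1.0, 0.3, K=8.0))     # an over-tuned gain oscillates or diverges
```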
So you can make a block diagram for this equation, interesting.
Yeah lol, when I saw that part that's exactly what I thought of too! Now this has me wondering if Newton's method can be expressed in control system terms too.
@paramdandekar562 It would be a PD method, since it involves both the output value at the time step (P) as well as its derivative (D)
@@jaspervandenbosch3838 not exactly though right? PD is when U(s)/E(s) = Kd*s + Kp, whereas Newton's method has the derivative divided by the value?
@paramdandekar562 Yeah, you're right it's not really a PD. I think it should be possible to express in terms of a control scheme though
I honestly think that this is one of the best channels on all of the internet, not just YouTube.
I feel blessed every time I venture into the minds of the geniuses of our past.
There is one guy named Michael MacKelvie who makes videos about sports that are on par with the research, narrative style, and animation of this channel. That is also brilliant. Check it out!
(Edit: the last paragraph)
Superb work!! It always baffles me that Lagrange inversion isn't a standard topic in Calculus courses in universities. Most equations in science and engineering are "impossible" to solve in the sense that no closed-form solutions exist. Yes, we have powerful computers to quickly obtain highly accurate numerical solutions, but an approximate, analytic solution gives a much better qualitative picture. (perhaps it is difficult to justify convergence at elementary level, i.e., without complex analysis, but the inversion technique is just too good to miss out!)
And thank you very much for providing references, btw. I'd like to add one more: Theory of Functions of a Complex Variable, by the Soviet mathematician Alexei I. Markushevich, translated by Richard A. Silverman. It has an extensive treatment of the technique of inversion of power series. In particular, the power series solution to Kepler's equation is covered in Example 3, page 99, Volume II. Markushevich's treatise is also extremely thorough (more so than Western classics such as Ahlfors) and is the standard reference on complex analysis in the Russian-speaking world.
PS. For those who wish to learn about Lagrange inversion without a background in complex analysis, see pp. 151-155, Art. 125-126 of the book "An Elementary Treatise on the Differential Calculus; Containing the Theory of Plane Curves, with Numerous Examples" by Benjamin Williamson. It is an old book and is available on the Internet Archive.
This Kepler series has been superb. (And let's all take a moment to thank Grant Sanderson for making Manim open source! Its signature look and power are all over YouTube these days.)
I love your video on imaginary numbers. It helped me gain the intuition needed to work with them.
I think one reason we may never find a perfect solution to Kepler's equation is the fact that there is no perfect formula to draw an ellipse. You can easily plot a circle with a single function, but true ellipses… not so much. Not without multiple steps, iteration, or some approximations. It would be very cool to discover one some day, though.
I remember trying to solve this before knowing it was a famously difficult/impossible problem. I got to the exact form of M = E − e sin(E), though with different variable names, and tried to use the Taylor series to make an estimate. It worked well for low eccentricities, but let's not even talk about the high ones haha. Great video about this topic, so here is a comment for the algorithm :)
Similar thing happened to me with x+log(x)=0
For a while I refused to acknowledge the problem - this was just a boring heat exchanger, after all. Until Wolfram Alpha convinced me.
@@maxmn5821 Lambert W function moment
@@maxmn5821 Similar experience, banging my head against a wall trying to invent the Lambert W function until I heard about it in a blackpenredpen video.
Same. I was working on a programming project that needed to model orbits. The idea was that you had a "time slider" that you could move back and forth like a scroll bar to make the planets move around the star. I also needed to have a rocket ship that could apply thrust at specific points in time to alter orbits and transfer between different planets and the star. I thought that E would be an easy solve.
I thought...
I eventually just ended up taking the idea of a binary search tree and applying it to E, so that I could find it out via brute force. It worked pretty fast since I was using a computer, so I didn't worry about it too much beyond that.
@@Sluppie I think you have discovered that simple, stable, numerical methods are preferable to more advanced ones, on a computer. I know someone who took thousands of terms of the Taylor Series for sin(x) to examine it for convergence. He found that even the error on a sine wave is a sine wave!
16:52 Laplace was such a good physicist he still attends lectures 4 years after his death :P
Now THAT is dedication to accuracy. Your ghost is running out of academy meetings to issue errata.
And lot of dedication from Cauchy, given that he was abroad - banished from France at the time.
I noticed that too, had to rewind to double check. Did he mean 1821 or some other value?
Bro is single-handedly bringing back the science in science education channels on YT, nothing but bravos to you! Always excited to see what you teach next.
Well, with some help from Grant Sanderson and Maxim.
A function not having an analytic solution is indeed a good reason why it's hard to solve.
High quality content. There was a lack of videos on YouTube about the history and development of science (beyond the typical topics), and you address that gap very well.
Oh, man! Believe it or not, I just started taking a course on Numerical methods recently and only 2 days before, I had a lecture on the Newton-Raphson method in class. AND, I just started learning python animations with Manim! That's how you make your videos, right? You chose the perfect time to drop this, lol. I am starting to see just how much code and time you need to devote to get these beautiful animations. Thanks for the great vid!
I solved it actually. Will post proof in a sec
It’s true, he showed it to me last night
It's true my professor told me about him
@@platymusPrime ¿In a dream with an Indian god?
Ok Fermat
It's been 676 seconds 😢
While watching the part about Kepler’s numerical approach, I thought “That’s what the Newton(-Raphson) method is for…” Turns out Newton came up with it specifically for this equation!
I experienced exactly the same thoughts. 😮
It’s so sad when I realize I have learned so many things through life, and still, my life is a failure.
So, fun fact: that same Newton iteration method is used in the famous Fast Inverse Square Root algorithm in Quake III Arena. Kepler, Newton, and Raphson didn't know it at the time, but the fruits of their labor let us have really fast lighting calculations. (The algorithm is now badly outdated, but it's the inspiration and basis for better fast square root solvers.)
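For anyone curious about the link: the Quake routine's refinement step is a single Newton iteration on f(y) = 1/y² − x, which works out to y ← y·(1.5 − 0.5·x·y²). A sketch of just that step (the famous magic-constant bit trick that supplies the initial guess is omitted; y0 here is just a crude guess):

```python
def inv_sqrt_newton(x, y0, steps=2):
    """Refine an estimate y0 of 1/sqrt(x) by Newton's method on f(y) = 1/y**2 - x."""
    y = y0
    for _ in range(steps):
        y = y * (1.5 - 0.5 * x * y * y)    # the same update Quake III applies once
    return y

print(inv_sqrt_newton(2.0, 0.7), 2.0 ** -0.5)   # ~0.7071067 after a couple of steps
```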
Watching this two weeks into a complex analysis course would add so much motivation.
So good to have this content… it keeps studies exciting and broadens the context of the material.
After watching your complex number series, I felt heartbroken that you stopped posting. I didn't even know you were back until a short of yours popped up!
So glad you are back! Take love❤
Years ago when learning how to compute implied interest rates and future cash flows based on the present value of an investment, I had no idea how closely these were related to fundamental problems in mechanics and other fields. Thanks for a very informative video.
My favorite method to solve it is the Halley method. It's similar to the Newton-Raphson method but also takes into account the second derivative, which makes the convergence faster. Generally speaking, that method is not very popular because the extra calculations needed to get the second derivative tend to offset the faster convergence, but in this case the second derivative of "E-e.sin(E)" is just "e.sin(E)", so no extra calculation is needed.
In the end, the Halley method appears to be more efficient in the case of the Kepler equation, which makes me wonder why it's not more popular.
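A minimal sketch of that Halley update for f(E) = E − e·sin(E) − M (function and variable names are mine):

```python
import math

def kepler_halley(M, e, tol=1e-14, max_iter=50):
    """Solve E - e*sin(E) = M with Halley's method.

    f = E - e*sin(E) - M,  f' = 1 - e*cos(E),  f'' = e*sin(E): as noted above,
    the second derivative reuses a term that is already being computed.
    """
    E = M
    for _ in range(max_iter):
        s, c = e * math.sin(E), e * math.cos(E)
        f, fp, fpp = E - s - M, 1.0 - c, s
        step = 2.0 * f * fp / (2.0 * fp * fp - f * fpp)   # Halley's update
        E -= step
        if abs(step) < tol:
            break
    return E

E = kepler_halley(1.0, 0.9)
print(E, E - 0.9 * math.sin(E))   # second value should come back as ~1.0
```

Each iteration reuses the same sin and cos evaluations that Newton's method already needs, which is the point being made: the extra order of convergence comes almost for free here.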
Hey man, really excited about the imaginary numbers book. Your series on imaginary numbers in 2016 was extremely impactful to me then and now. I still think about those videos often, and can’t wait to have this book in my possession!!
Wow. I have worked with this equation since I was a high school lad, even up until last year when I implemented a solution (for low eccentricity orbits) in software. I expected that the subject would interest only about four YouTube viewers, but you have done much better than that!!
I love your videos so much; your style, explanations, and the topics you choose are simply perfect. Excellent work!
It's sad: I attempted to find E, and after a few pages of complex work with sin I decided to watch the video, just to find out e isn't the constant for Euler's number and all my work was wrong anyway... so sad...
Prussing and Conway's Orbital Mechanics book suggests using the Laguerre algorithm. I used it years ago when I wrote programs for astrodynamics. The Laguerre method was meant for solving polynomials of degree n. Although Kepler's Equation is not a polynomial, you can just use n=4, 5 or 6 and run the iterations. I verified what was said in the book; it converges quicker than Newton's method and converges for any initial guess (a bad first guess in Newton's method may lead to no convergence).
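A hedged sketch of what that Laguerre-style iteration looks like for Kepler's equation, as I understand the Conway approach (my reconstruction, not code from the book):

```python
import math

def kepler_laguerre(M, e, n=5, tol=1e-14, max_iter=50):
    """Laguerre-style iteration for E - e*sin(E) = M with a fixed 'degree' n (4, 5, or 6)."""
    E = M
    for _ in range(max_iter):
        f   = E - e * math.sin(E) - M
        fp  = 1.0 - e * math.cos(E)
        fpp = e * math.sin(E)
        disc = abs((n - 1) ** 2 * fp * fp - n * (n - 1) * f * fpp)
        denom = fp + math.copysign(math.sqrt(disc), fp)   # pick the larger-magnitude denominator
        step = n * f / denom
        E -= step
        if abs(step) < tol:
            break
    return E

E = kepler_laguerre(2.0, 0.99)
print(E, E - 0.99 * math.sin(E))   # second value should come back as ~2.0
```

Because f'(E) = 1 − e·cos(E) stays positive for e < 1, the denominator never vanishes, which is part of why this scheme is so hard to derail with a bad first guess.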
- I can't love this enough! :)
Small remark: in order to get the series in e via the Lagrange inversion theorem mentioned in the video circa 13:00, it is convenient to choose f(E) = (E-M)/sin(E) (note that f is analytic around M and f'(M) = 1/sin(M) is not zero) and look at the equation f(E) = e.
Ah, playing with Gil Strang's Introduction to Applied Mathematics. I taught from this book many times in the 1990s.
Ahh, this brings back so many fond memories of my first experience in programming. Some 50 years ago when I started to plot cometary positions based on their orbital elements, I used a programmable calculator to hunt for the value of E, basically using iteration (my first Hewlett Packard calculator had only 49 steps). Eventually when I acquired my first computer (an Apple II), I wrote dedicated programs (initially in Basic, then Pascal) to tabulate an ephemeris of the positions. A fascinating book on the subject is Peter Colwell’s “Solving Kepler’s Equation”, which I highly recommend 👍🏼
Not explicitly mentioned, I presume for simplicity: the Newton-Raphson method relies on the derivative of the function (see if you can spot it at 7:03), which of course Kepler did not have but Newton very much did. Newton-Raphson remains to this day a very practical and useful numerical method (although it's not the only one, and it doesn't work well for every kind of equation).
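A minimal sketch of the Newton-Raphson iteration for Kepler's equation (the E₀ = π fallback for high eccentricity is a common safeguard I've added, not something stated in the video):

```python
import math

def kepler_newton(M, e, tol=1e-14, max_iter=50):
    """Newton-Raphson for f(E) = E - e*sin(E) - M, using f'(E) = 1 - e*cos(E)."""
    E = M if e < 0.8 else math.pi      # crude but common starting guess
    for _ in range(max_iter):
        f  = E - e * math.sin(E) - M
        fp = 1.0 - e * math.cos(E)     # the derivative Kepler didn't have
        step = f / fp
        E -= step
        if abs(step) < tol:
            break
    return E

E = kepler_newton(0.5, 0.7)
print(E, E - 0.7 * math.sin(E))        # second value should come back as ~0.5
```

The touchy cases other comments mention come from f'(E) = 1 − e·cos(E) shrinking toward 1 − e near E ≈ 0, which makes the first steps huge when e is close to 1.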
WAIT A MINUTE. I remember watching your whole series on imaginary numbers back when I was 16 and knew nothing about them. You sparked my joy for maths, and now here I am 6 years later struggling with my electrical engineering degree :(
I stumbled upon this equation when trying to solve the exact same thing, for a recent Walter Lewin problem about the transfer orbit to Mars. I wanted to make a Geogebra animation that showed the path of the rocket as a function of time, with Earth and Mars taking circular orbits at the same time. I derived the equation of this form, and realized there was no analytic solution, and implemented a first order use case of Newton's method to solve it.
For my part I was trying to backtrack Mars' orbit (which is elliptic) from a point where the planet and my spacecraft would meet. I succeeded... some of the time.
This was really great. The only thing I'd want more of is more explanation of the history/derivation of the equation. Maybe another video?
I cover some of this here! th-cam.com/video/Phscjl0u6TI/w-d-xo.html
@@WelchLabsVideo Thanks.. just watched the first one.. my reaction is.. WOW!.. so amazingly well done! thanks!
Veritasium has a good video about this.
5:13 Something is off with your line for M = 77,7°. It intersects the white curve at E >120° but when you zoom in the whole graph is shifting. It looks too low to be the value for E close to 90°. So it's up to you to figure out what's wrong here.
Amazing video! I'm actually taking a grad-level orbital mechanics class right now and had to implement a bunch of these algorithms that you mentioned in the video, this explains it in a really nice and conceptually palatable way so nice job :D. Such a fun topic haha
This was so beautiful I almost teared up. Thank you for posting this video.
Ahhh, thank you so much. If only I had known all this when I started writing my astronomy code in high school :)
PS. Last year I found Kepler's Equation in a strange place… in a _gardening_ problem.
Given a length of chicken wire: Form the chicken wire into a circular arc (less than 2π radians) and place the ends against a straight exterior wall of the house. How large a sector of the circle (call it 2θ) should I use to maximize the enclosed area? … Well, when θ − sin θ appeared in the derivative, I said "No no no! That's Kepler's equation and can't be solved explicitly." By this time I knew better than to try! So I graphed it, and π/2 looked good, so I plugged it in, and yada yada yada the answer is: Semicircle.
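If anyone wants to check the yada-yada step symbolically, here is one way under my reading of the setup (wire of fixed length L bent into an arc of half-angle θ, so r = L/(2θ), with the enclosed region a circular segment):

```python
import sympy as sp

theta, L = sp.symbols('theta L', positive=True)
r = L / (2 * theta)                                              # fixed wire length: L = 2*r*theta
A = sp.Rational(1, 2) * r**2 * (2 * theta - sp.sin(2 * theta))   # circular-segment area
dA = sp.diff(A, theta)

print(sp.simplify(dA.subs(theta, sp.pi / 2)))                    # 0: critical point at the semicircle
print(sp.simplify(sp.diff(A, theta, 2).subs(theta, sp.pi / 2)).is_negative)   # True: it's a maximum
```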
This is an amazing video! Thank you Stephen :D
10:20
That... was pretty nice! Good point!
I've honestly wanted to someday construct a mechanical integrator for similar purposes. 😅
I've been waiting for this video for so long after seeing the YT short.
Fascinating history of a real problem being investigated by some truly awesome intellects. As an engineer, I just used dA/dt = C to estimate the position vs. time on an orbital trajectory of a spacecraft in a geostationary transfer orbit from low Earth orbit. I just divided the orbit into 1º slices, assuming straight lines for every slice, resulting in an ellipse with 360 sides made of triangles, all with easily calculated areas summing to the total area of the ellipse. Linear interpolation on the little line segments worked just fine for positions not on even degrees. It turned out to be accurate within a second of the actual ~12 hr orbital period. All hail Excel :-).
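A rough reconstruction of that slicing scheme in code (my own variable names, in normalized units rather than the original spreadsheet): sum 1° focal triangles, then convert cumulative swept area to elapsed time via Kepler's second law.

```python
import math

def time_vs_true_anomaly(a=1.0, e=0.73, mu=1.0, n_slices=360):
    """Time since periapsis vs. true anomaly by summing thin triangles from the focus."""
    r = lambda nu: a * (1 - e**2) / (1 + e * math.cos(nu))   # focal radius of the ellipse
    period = 2 * math.pi * math.sqrt(a**3 / mu)
    d_nu = 2 * math.pi / n_slices

    # each slice: a triangle with two focal radii and the straight chord between them
    areas = [0.5 * r(k * d_nu) * r((k + 1) * d_nu) * math.sin(d_nu)
             for k in range(n_slices)]
    total = sum(areas)

    # equal areas in equal times: cumulative area fraction -> elapsed time
    times, acc = [0.0], 0.0
    for dA in areas:
        acc += dA
        times.append(period * acc / total)
    return times, period

times, period = time_vs_true_anomaly()
print(times[90] / period)   # fraction of the period spent sweeping the first 90° of true anomaly
```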
Another unbelievably good video.
Kepler is one of my favorites. His equations are remarkable. Dark matter seems to enforce both his law of equal distance over equal time and Newton's laws.
E simplify in a scalar matrix to -sin(e) since that returns standard deviation of a log variance for any index in E. If E is an empty set, it returns -1.
Great video! I hope you'll make a new one about solving for higher e values :)
Fantastic video! You made the convergence of Taylor series so clear and easy to understand! =)
Tbh...Cauchy is a real beast!
This guy was disrespectacularly smart - no kidding
Disrespectacularly… I learnt a new word
I made it up :) you're welcome
I love your videos a lot
Unbelievably good presentation!
Bessel functions bring back happy memories, I wrote a planetary system simulator some time in 2002 and used that stuff extensively.
Great video , thanks for making it
If confronted with this equation I would rewrite it as a fixed point equation: E = f(E) where f(E) = M + e sin E. For e < 1 this converges, since |f'(E)| = e|cos E| < 1. For e > 1 (hyperbolic orbits), we have E = g(E) where g(E) = sin⁻¹[(E-M)/e], and the same approach should work, at least if e is sufficiently large.
I like your thinking
You're reminding me of my first steps in programming (in BASIC!), when I tried to make a program to calculate the positions of the planets...
8:57 “high curvature causes Newton's method to overshoot the correct answer” - I assume this also refers to 6 iterations, as it was for Kepler's method? I wonder how many more NR iterations are actually needed to converge? In fact, if it's done manually without a computer, it seems to me that it would be easier and faster to do a few more NR steps than to compute all those really complicated series…
@qadirtimerghazin Yes, Newton's method will generally converge -- sometimes it requires extra iterations.
However, based on other comments on this video, there are specific values for which Newton's method *won't* converge, no matter how many iterations you use. When / how this occurs I do not know, although I strongly suspect it has to do with eccentricity values close to (but below) 1 and a very poor initial guess for E.
@@jmr5125 I didn’t have time to investigate this in detail, but our future overlord GPT 4o said that “Newton-Raphson method is widely used for solving Kepler’s equation because it converges very quickly for small and moderate eccentricities. For high eccentricities and problematic initial guesses, however, alternative or hybrid approaches might be necessary to ensure reliable convergence.”
As a computational chemist, I've dealt with many cases where the NR method frustratingly just couldn't converge when trying to find a critical point on a highly multi-dimensional molecular potential surface, but it's quite interesting that even this seemingly simple function can be challenging for the NR method…
Please do a video like this on the history behind Runge-Kutta methods, ode45, etc. for solving differential equations... It was very interesting to learn the context behind Newton-Raphson and how and why it was developed. It is used in electrical engineering for something called load flow analysis, which allows for things like economic dispatch and maintenance on the grid without switching the entire thing off.
Is it okay to admit I enjoyed watching this entire video and shared it to a Discord I'm in, but have NO IDEA HOW THE MATHEMATICS WORK?
Yay it's finally out!!
The error correction portion reminds me of Bayes' theorem!
Maybe it's related to that in some way?
The Lagrange Inversion Formula looks super complicated, but it's just Taylor's theorem at f(a) plus "calculating" the (higher) derivative(s) of the inverse function (i.e., prove that $(f^{-1})^{(n)}(f(a)) = \lim_{x \to a} (d/dx)^{n-1}\bigl[\bigl((x-a)/(f(x)-f(a))\bigr)^{n}\bigr]$).
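Written out in full (the standard statement, for w = f(x) with f analytic at a and f'(a) ≠ 0):

```latex
f^{-1}(w) \;=\; a \;+\; \sum_{n=1}^{\infty} \frac{\bigl(w - f(a)\bigr)^{n}}{n!}
  \;\lim_{x \to a}\,\frac{d^{\,n-1}}{dx^{\,n-1}}\!\left[\left(\frac{x-a}{f(x)-f(a)}\right)^{\!n}\right]
```

The limit is exactly the n-th derivative of the inverse described above, so the series really is a Taylor series for f⁻¹ about f(a).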
at 16:47... it would have been quite difficult for Laplace to hear Cauchy's 1831 lecture at the French Academy of Sciences. (Laplace died in 1827.)
Cauchy is like that one op anime character that basically does everything with style.
He probably got the decade wrong while he was talking or something simple like that.
It would be nice to see a deep dive into the radius of convergence
Awesome video.... Thank you!
Here’s a fun idea. Imagine taking the Newton-Raphson method, and upgrading it to use higher order derivatives too. In general I believe this is known as Householder iteration.
The trouble with this is that in order to use it, you have to compute those higher order derivatives. The rate of convergence for each iteration is higher, but the workload per iteration increases as well.
Sometimes it can be worth it. For example, there’s a 2014 paper by a man named Izzo who uses this to solve Lambert’s problem from the field of… orbital mechanics! Suppose you have two position vectors of an asteroid and the times of measurement, can you calculate what the asteroid’s orbit must be? That’s Lambert’s problem
Altair 705 here has pointed out that using the second derivative is Halley's method, which here is just... e sin E.
Well done. I always like your stuff. - Retired Engineer
4:15 It does work: the absolute value of the error is reducing, even though the sign is flipping. This is still converging (much more slowly, though).
Edit: he said "does not work *at all*"
This is very similar to calculating the fill level of a horizontal cylindrical tank. I had initially used a binary method, but the Newton-Kepler method doesn't require as many iterations.
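For anyone curious why it's the same problem: the filled cross-section of a horizontal cylinder of radius r is a circular segment, so with θ the central angle subtended by the liquid surface and L_tank the cylinder length (my notation),

```latex
A_{\text{filled}} = \tfrac{1}{2}\,r^{2}\,(\theta - \sin\theta),
\qquad V = A_{\text{filled}}\,L_{\text{tank}},
\qquad h = r\left(1 - \cos\tfrac{\theta}{2}\right)
```

so recovering the fill height h from a target volume V means solving θ − sin θ = 2V/(r²·L_tank), the same kind of transcendental equation as Kepler's (with e = 1), and the same Newton iteration applies.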
I'd love to see a 4 hour version of this video that actually explains everything from first principles, I watched the whole thing but I'm not exactly grokking it.
The zetamath video about analytic continuation focuses on the subject of the radius of convergence in the complex plane, and it's fantastic!
Didn't understand this the first time through but will watch again
Great content!
Is it me or does the graph at 4:58 look like a cumulative frequency curve?
Cool. Thanks for sharing.
Where's the function for the movement of the Earth? The model of the ellipse takes the position of the planet in the sky as if the Earth were stationary; in the given time period the Earth has also moved around its own orbit, thus introducing an error between the calculation and the observation.
Using Newton's method, we solved Kepler's equation for the given mean anomaly and eccentricity . The resulting value for the eccentric anomaly is approximately .
This iterative method finds E by gradually refining an initial estimate until it converges within a set tolerance, giving us an accurate solution for the position of the planet in its orbit.
fantastically fascinating.
I might have made a mistake that simplified the problem, because I just solved E based on M.
The formula is E=-i*ln(-(2W(1÷2-e^(iM)÷2)-1))
where W is the Lambert W function and i is the imaginary unit.
I can tell you had to because you use the ÷ symbol instead of / for division
4:10 I've seen people use a guess-and-check approach, i.e. they try a value and see if it's too small or large, then guess again.
So I think this is really just a natural extension, after all, -since sin(x) = x for small values-
Halley expanded Newton's method to use the second derivative to correct for under- or overshoot. It will converge significantly faster.
Hi! Amazing video. I was wondering if EU shipping will be available for your shop in the near future; I really want to buy some items :).
Thanks for your astonishing work!
I remember I once tried to program in planetary motion based primarily on Kepler's law and subsequently gave up after several hours of trying. Fun times lol
7:34 "The lord Jefus Chrift..." 😂
The music really adds gravitas.
excellent video
Edmond Halley, like rally. J.K. Rowling, like bowling.
HOLD UP, where do we find how to use Bessel functions to model light diffracting through DNA strands? I am so down for this.
Amazing video! Where did you get all this information? Which sources did you consult or did you go to the original works of Kepler, Newton, Laplace ... themselves?
Thanks! Sources in description.
Do a video on the Riemann hypothesis.
He doesn't say that the reason for determining the area swept out in the circle is that we don't know the ellipticity of the ellipse.
I wonder if it is possible to plug many of these equations into a regression algorithm and arrive at a constant value that could be applied to eccentric orbits.
Kepler's method is very smart! 6:00 :))
It's equivalent to conservation of angular momentum, ellipse or circle, the energy is the same.
At 21:02, I don't understand why there is an x in Bessel's solution. Is it supposed to be e?
Finally, after a lot of teasers 😅
At least once per week I’m reminded that I’d have no chance at a career in a quantitative field before computers.
Lol I agree (I think!)
A Padé expansion, including both poles and zeros, is more general than a Taylor series expansion.
It isn't really.
Padé approximants are cool but not really more applicable except in special cases.
Why not use Laurent series instead though?
I swear the algo has been hyping up this equation recently, unless that's you guys doing it.