#YAY . . . and thanks for another wonderful excursion through some of math's byways!
A little extension of this subject: This method is sometimes called the "tangent method," because it uses the tangent to the graph, to home in on a zero of the function.
It requires finding the derivative of the function; but sometimes, it's difficult or impossible to calculate the derivative.
In such cases, there's what's sometimes called the "secant method." In this method, you need 2 starting points, not just one (& it helps greatly if they bracket the target; that is, if one of them gives f(x) < 0 and the other gives f(x) > 0).
Then you just evaluate the function itself at x₁ and x₂ and draw a straight line (the "secant") between (x₁, f(x₁)) and (x₂, f(x₂)), and solve the equation of that line for y=0.
Call this new x, "x₃" and now use x₂ and x₃ to find the next x, etc. In each iteration, you discard the "older" of the two guesses you have, make the newer one the 'new' older one, and the newly generated guess becomes the newer one.
The downsides are:
(1) You have to keep track of 2 current 'guesses,' instead of just 1, and
(2) Convergence to the zero is a bit slower.
The upsides are:
(1) You have only 1 function to evaluate each time, not 2.
(2) You don't need to take the derivative of your function.
Interesting exercise to try: bprp started by showing that there must be a zero between x=0 and x=1.
Try applying the secant method, using these two points as your starters. Note that there's an advantage to using the one that gets f(x) closer to 0 as your "2nd," or latest, guess.
Compare the rates of convergence of these two methods.
Fred
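A minimal Python sketch of the secant recipe described above, applied to f(x) = xe^x - 1 with the starters 0 and 1 (function and variable names are my own, not from the comment):

```python
import math

def f(x):
    return x * math.exp(x) - 1.0

def secant(f, x1, x2, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f1, f2 = f(x1), f(x2)
        # Solve the secant line through (x1, f(x1)) and (x2, f(x2)) for y = 0.
        x3 = x2 - f2 * (x2 - x1) / (f2 - f1)
        if abs(x3 - x2) < tol:
            return x3
        x1, x2 = x2, x3  # discard the older guess, keep the newer two
    return x2

print(secant(f, 0.0, 1.0))  # ~0.567143..., the Omega constant
```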
Thank you for the detailed explanation as always!!
Did you just explain my numerical methods class lol
There's also the more basic bisection method, which is slower but is guaranteed to converge.
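For comparison, a minimal bisection sketch (my own illustration, using the same f(x) = xe^x - 1): each step halves a bracketing interval, gaining about one binary digit per iteration — slower than Newton or the secant method, but guaranteed once you have a sign change.

```python
import math

def f(x):
    return x * math.exp(x) - 1.0

a, b = 0.0, 1.0          # f(0) = -1 < 0 and f(1) = e - 1 > 0 bracket the root
while b - a > 1e-12:     # ~40 halvings for 12 digits
    m = 0.5 * (a + b)
    if f(m) < 0:
        a = m            # root lies in the right half
    else:
        b = m            # root lies in the left half
print(0.5 * (a + b))     # ~0.567143...
```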
@@marcushendriksen8415 Yes! In fact, in the days before home computers & handheld electronic scientific calculators, a method that was taught for this was basically the decimal form of bisection - you'd bracket the zero, say, with consecutive integers, n and n+1; interpolate based on f(x) at each of those; then bracket the new x-value with consecutive tenths; interpolate again; etc., to the desired accuracy.
It had someone's name attached to it, that I can't recall.
Fred
@@marcushendriksen8415 I just remembered the name (of the decimal version). It was called Horner's Method.
Fred
Wow, Newton sure knew a ton!
Definitely.
Very underrated
@Abdega: I see what you did there!
Fred
And how!
Dum dum, tch
Back in the stone age, the first useful program I wrote in FORTRAN was an implementation of the Newton method for finding the zeros of polynomials.
Steve the Cat Couch
The Stone Age for me was "Visual Basic". And I wrote a tic tac toe program. Lol
Yup, Waterloo Fortran IV (Watfiv), about 1979.
I actually had never heard of that until you two mentioned it...
Yes, Watfor was Waterloo Fortran. The successor was Fortran 4 (IV), hence the 'fiv' pun in the name. Then came Fortran 77, I think
+ Stephen Turner: Yes, FORTRAN 77. Then FORTRAN 90.
My computer teeth were cut on ALGOL 60.
And as a kid, I watched my dad coding in FORTRAN II, using specially printed coding pads, because everything had to be column-controlled. I think he actually started out in FORTRAN I - or more likely, pre-FORTRAN, 1951 or earlier.
I was a tot, so I don't recall.
All I know is, his first computer experience was before there was magnetic core memory.
Fred
So pretty much, you're saying, when faced with a hard equation, just go off on a tangent a bunch of times?
shang xu more or less
Hah! Yes.
But bprp didn't say it; Newton (and Raphson) did!
Fred
Definitely yes lmao
The function must be differentiable, not just continuous
"Is it really so hard?" "if Ω*e^Ω = 1, then Ω^(e^Ω) = 1/e"
..Yes, yes it is
For Ω*e^Ω = 1, we can know:
1: e^Ω = 1/Ω
2: ln(Ω) + Ω = 0, ln(Ω)/Ω = -1
Ω^(e^Ω) = Ω^(1/Ω) = e^(ln(Ω)/Ω) = e^(-1) = 1/e
This requires patience to prove but not that hard.
@@mokouf3 easier proof, if Ω*e^Ω = 1:
1: Ω = 1/e^Ω
Ω^(e^Ω) = 1/(e^Ω)^(e^Ω) = 1/e^(Ω*e^Ω) = 1/e^1 = 1/e
@@rarebeeph1783 I like your proof.
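Both proofs are easy to check numerically; here is a small sketch (my own, not from the thread) that finds Ω by Newton's method and then verifies Ω^(e^Ω) = 1/e:

```python
import math

x = 1.0
for _ in range(10):   # Newton's method on f(x) = x*e^x - 1, f'(x) = (x+1)*e^x
    x -= (x * math.exp(x) - 1.0) / ((x + 1.0) * math.exp(x))

print(x)                 # ~0.5671432904097838 (Omega)
print(x ** math.exp(x))  # ~0.36787944117...
print(1.0 / math.e)      # same value, confirming the identity above
```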
I am a retired academic inspector of maths in Tunisia. I have a great admiration for your enthusiasm and courage. You can be an inspiration for every future mathematician.
13:52 x_3 was almost an Euler-Mascheroni jumpscare lol
xD i felt the same way 😂
For the Newton method to work better, you can just analyse the function for its maxima and minima (since you can differentiate it, that should be possible).
Then pick an x0 such that f(x0) is between a negative local minimum and a positive local maximum or, if there is only one local extremum, an x0 close to that.
Of course, you can still end up in loops.
This is really a great introduction to the Newton-Raphson method. And if anyone knew about it already, it's also a great introduction to the Lambert W function.
So it's a great video overall!
Finally found a channel whose narrator is a great teacher. You taught me numerical analysis in just 20 minutes, wow.
You are a great tutor and a blessing, man! Thank you, I wish you were around 20 years ago to enhance my calculus experience!
U.V. S. Thanks.
I am a teacher : )
"That's because...it's too boring. Let's use Newton's Method."
Hahahahah exactly what my teacher told us.
Simply do O(n+1) = exp(-O(n)) with O(0) close to the solution (for example, 1 works); this converges quite fast toward the correct solution. This method works for slowly changing functions, as O(n+1) and O(n) become equal as n->infinity.
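A sketch of this fixed-point scheme (loop count and starting value are my choices): the error shrinks by a factor of roughly Ω ≈ 0.567 per step, so it converges linearly, needing a few dozen iterations for full double precision — this is the same scheme the spreadsheet comment further down reports taking ~22 iterations for 5 digits.

```python
import math

x = 1.0                # O(0) = 1, as suggested above
for _ in range(60):
    x = math.exp(-x)   # O(n+1) = exp(-O(n))
print(x)               # ~0.5671432904097838
```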
Making zero the subject makes things so much more convenient: if we want to solve f(x) = 0, we can apply Newton's method or factorisation; the roots of f(x) are its intersections with the x-axis, and since |x| = 0 iff x = 0, the roots of f(x) are exactly the points where |f(x)| = 0.
You can check OEIS sequences A370490 and A370491 to see how you can obtain an infinite series for the Omega constant using Whittaker's root series formula. I used Whittaker's root series formula to obtain infinite series for other mathematical constants (like 1/e, plastic number, Dottie number and ln 2). Sometimes Whittaker's root series formula is a useful alternative to Newton's method. 🙂
For any functions f and g, we can find where f(x) * g(x) = 1 by solving 1/f(x) = g(x),
so in this case when:
1/x = e^x.
ln (1/x) = ln (e^x) (take the natural log of both sides)
ln(1) - ln (x) = x (log properties)
-ln(x) = x (simplify)
Doesn't seem as helpful as I'd hoped-- but I did graph it and got x = ~.567 :).
Thanks for the video. I would point out two things: (1) This approximation method is the same as expanding the function in a Taylor series to linear order in (x-xg), where xg is your initial guess; (2) the iteration is very robust, so that even a terrible initial guess, xg, will converge rapidly to the answer.
Fun fact: lowercase Ω is ω, so it kind of looks like a w.
Thank you!!
For 3 years I tried to find a way to solve the equation 3^x = x^2.
And I knew the answer because I used a graphing calculator, but I wanted to find a mathematical way of solving the problem, and I did, using this method!!
I asked my professors for help but they told me it can't be done and that these problems are solved by computers.
So, thank you for the enlightenment
Hmmm, looks like it has only one real solution, and it's negative .... somewhere between -1 and -½.
Your professors were right if your question was taken as, "How do you solve that equation symbolically?"
Numerically, it's quite soluble, and Newton's Method works just fine; so does the "secant" method.
Fred
ffggddss I knew there was a solution because the graphs of x^2 and 3^x have an intersection.
The question I asked my professors was simply "If x^2 equals 3^x, how do I find x?"
Too bad they don't teach us these very useful and interesting methods in high school.
Some places *do* teach these sorts of things. Too many do not.
The way you asked should have elicited, *at least,* something like, "There's no analytic method, but there are many numerical techniques to solve that."
Fred
WOW, this is my first time hearing about the Omega constant... so cool~
If you are into functions, which means you have a feel for them and their general shape, then all you need to do is graph a few points around a suspected zero of that function, do a short bit of interpolation, and you will have the zero. Newton's method is the most complicated way I've seen to get this accomplished.
+WisdomVendor1
If you interpolate between two points on the graph that bracket the true root, then you are essentially applying one iteration of the so-called "secant method" (which can be regarded as the Newton(-Raphson) method where the derivative is replaced/approximated by a finite difference).
In my (humble) opinion, this does not lead to a formula that is (essentially) easier ...
As a concept, I agree with you.
Great video! It would have been great to mention that Omega is transcendental :)
Not that it would be a great surprise.
16:12 This is just b r i l l i a n t
He stares into our soul each time he looks up the values of the approximation to W(1).
Also, "Start learning toady" - description
Tommy Thach lol thanks.
That was a fast reply, thank *you*!
: )
because the function is so kink.
it makes mathematicians _lip smack_ *moist*
Recently had Newton Raphson method in Approximations in my 4th semester. The method is easy but never knew about Omega constant. Thanks for that.
Note that if you are comparing schemes which converge rapidly, you should take note of the number of operations in each iteration. The following scheme is obtained from a simple rearrangement of the original equation. It takes more iterations but fewer operations: x(n+1) = exp(-x(n)). Very easy to implement on a calculator: just press [+/-] and [exp] alternately a dozen times. Since we know the root is between 0 and 1, start at 0.5.
Note that inverting the equation to give x(n+1) = -ln(x(n)) is unstable, and although this can be analysed (ref "radius of convergence"), it's often easier just to try it, and if it doesn't converge, invert the equation. For equations which are difficult to differentiate - e.g. a complicated equation, or just a table of results that you interpolate with cubic splines - the chord method is best: start with two points on either side of a root, find where the chord intersects the axis, and use that as a new point, replacing the point that is on the same side of the x-axis.
1 question: how can you have an inverse function (called "W" in the video) when the graph is not a function? A function is a one-to-one or many-to-one mapping. If the graph of the complicated expression does not follow these types of mapping, it is not a function => it does not have an inverse function. Thanks!
There are really two real branches of the W function, W₀ and W₋₁, but there's only one for positive x
Man, I was coming here to explain, but both of the explanations I had in mind were beaten to by you two.
Maybe you will find x(i) very close to a local maximum or local minimum of the function before you find the root. Then your x(i+1) goes.... whoosh, far away. If the problem is persistent because the root and the extremum are very close to each other, then you will need to use *Father Wiggly's Famous Reverse Interpolation Bisection Method.*
This is a combination of bisection method, to find a small interval that contains the root, and then (after the root is tightly contained) a 2nd degree Lagrange interpolating polynomial is used to estimate the value of the root even more accurately. I wrote an algorithm using this method to find the eccentric anomaly of a hyperbolic transfer orbit after being given the mean anomaly and the eccentricity.
*Father Wiggly's Famous Reverse Interpolation Bisection Method* works when Newton's Method, Danby's Method, and similar methods using higher order Taylor series fail because of skittering off the roots of the derivatives.
Father Wiggly was my cat for a long time. I named the method for him.
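The comment names a homegrown method, so the following is only a guess at its shape (my own sketch, not Father Wiggly's actual algorithm): bisect until the bracket is tight, then fit a 2nd-degree Lagrange polynomial to x as a function of f(x) and evaluate it at f = 0 — the "reverse interpolation" step.

```python
import math

def f(x):
    return x * math.exp(x) - 1.0

# Phase 1: bisection until the root is tightly contained.
a, b = 0.0, 1.0                      # f(a) < 0 < f(b)
while b - a > 1e-4:
    m = 0.5 * (a + b)
    a, b = (m, b) if f(m) < 0 else (a, m)

# Phase 2: inverse quadratic interpolation through three bracket points.
x0, x1, x2 = a, 0.5 * (a + b), b
y0, y1, y2 = f(x0), f(x1), f(x2)
root = (x0 * y1 * y2 / ((y0 - y1) * (y0 - y2))
      + x1 * y0 * y2 / ((y1 - y0) * (y1 - y2))
      + x2 * y0 * y1 / ((y2 - y0) * (y2 - y1)))  # Lagrange in y, at y = 0
print(root)                          # ~0.5671432904
```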
My first assignment before beginning college had Newton's Method in it, and for our Math program over the summer I ran into Lambert W's all the time, and I had no idea what they meant. And now I understand what both mean. Well done, thanks for the video.
Is there any chance that W(x) can be written in terms of Ω? It otherwise seems odd to have a constant when you could just write W(1).
When I did maths, ages ago, I recall that there was a way to determine legitimate starting values for the Newton-Raphson method that would always ensure convergence; I will have to look up my old notes.
I love this video! This is my favorite problem in math. I came across this problem in high school on my own and couldn't figure it out right away. Eventually I came up with the idea of approximating it with e^-e^-e^…. Once I found the approximation, I googled the number and found out it was called the Omega constant, which is really cool. And it mentioned the Lambert W function, but I didn't really understand it at the time. I just love this problem because it opened my eyes to the fact that you can't really solve everything with regular algebra :p
Do you know about tying a goat to the edge of a circular field so that it can eat half the grass? From memory, I think that only has an approximate solution as well.
@@guest_informant oo I have not! Now I wanna try it. So you'd be solving for the length of the rope and the radius of the field, or something?
+Kyle Amoroso
I first came across this about 30 years ago. Googling turned up Wikipedia ("It was first published in 1748 in England, in the yearly publication The Ladies Diary: or, the Woman’s Almanack."(!)), Wolfram, stackexchange and an xkcd discussion. There is no exact solution. This formulation is from the xkcd discussion:
*You have a goat tied to the fence of a circular paddock. The paddock has a radius of 100M. You want the goat to eat HALF of the grass.*
*How long does the rope need to be?*
Assume an even covering of grass, and don't worry about the goat's neck length etc.
+ Guest Informant: That sounds like it's related to a problem I ran across in astronomy.
You have a solar eclipse that will be partial where you are. You know the apparent angular diameters of both Sun & Moon (they're generally *not* equal!), and you know, at maximum eclipse (or at any particular time during it), what fraction of the Sun's apparent *diameter* will be encroached on by the Moon.
Assuming both bodies to be exactly spherical, what fraction of the Sun's apparent *area* will be covered?
Fred
Another Omega constant that's also really cool is Chaitin's constant, which is defined as the probability that a randomly constructed program will halt. Because of its nature and its relation to the Halting Problem, it is uncomputable and 'non-guessable', but it is represented with the capital Omega when discussed, to have a nice symbol to work with.
The omega constant always felt redundant to me when you can just use W(1). If we didn't have it, we could actually use omega(1), which looks more like a predefined function, as it uses a Greek letter, like pi(x) and gamma(x).
Just to add a little history, Newton’s buddy Halley (of the comet) derived an approximation method involving, additionally, the second derivative. Householder generalized these methods to include the third, fourth, etc. derivatives.
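For concreteness, a minimal sketch of Halley's method (my own illustration) on f(x) = xe^x - 1, using f'(x) = (x+1)e^x and f''(x) = (x+2)e^x; the extra derivative buys cubic instead of quadratic convergence.

```python
import math

x = 1.0
for _ in range(5):
    ex = math.exp(x)
    f, fp, fpp = x * ex - 1.0, (x + 1.0) * ex, (x + 2.0) * ex
    x -= 2.0 * f * fp / (2.0 * fp * fp - f * fpp)  # Halley's update
print(x)  # ~0.5671432904097838
```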
"When calculus meet analitic geometry and shake hands"
Is the I.V.T. Bolzano's theorem?
Bolzano's theorem is a specific case of the IVT
Use Newton's method after taking ln (natural log) of both sides => x + ln(x) = 0 => it converges faster, with fewer and lower-computational-cost iterations.
I didn’t know this thing. Thanks for showing it.
😱😱😱😲😲wow... really genius... love it... Newton was an amazing guy...
Is this correct??
W(1) = omega
Yes!!
I'd like to try and calculate this constant to lots of digits. What are some of the ways that have been used to calculate this? It doesn't look like it has a good power series or such.
Not loving a computable number being called Omega. :3
I knew there would be a Chaitin reference somewhere in the comments :)
20:50 That's BRILLIANT
Nice, and pretty close to the Euler-Mascheroni constant.
Whoa, this is a solid foundation for exploring a whole big world of fractals, Newton basins!
O.k., I've managed to put this Lambert W into my fractal rendering program.
So, here it goes: the image obtained when solving x exp(x) = 1 with Newton's method. The color is by root reached; the shading is: the more iterations were needed to converge, the darker the point is. There is a whole bunch of different complex roots (the Lambert W is a multivalued function, and the equation in question in theory has an infinite number of roots), but I had only 6 base colors cycled, so different colors are different roots for sure, but the same color doesn't mean the same root. Nevertheless, the big basins of the same color really do converge to the same single root.
imgur.com/a/OwB2OPr
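A rough sketch of how such a basin image might be generated (my own code, not the commenter's renderer; assumes numpy and matplotlib): run Newton's method for z·e^z = 1 from every pixel of a complex grid and color by the root reached.

```python
import numpy as np
import matplotlib.pyplot as plt

re, im = np.meshgrid(np.linspace(-4, 4, 800), np.linspace(-4, 4, 800))
z = re + 1j * im                     # one Newton starting point per pixel

with np.errstate(all="ignore"):      # tolerate overflow / f'(z) = 0 pixels
    for _ in range(60):
        z = z - (z * np.exp(z) - 1) / ((z + 1) * np.exp(z))  # Newton step

# Color each pixel by the argument of the root it converged to.
plt.imshow(np.angle(z), extent=(-4, 4, -4, 4), origin="lower", cmap="hsv")
plt.title("Newton basins for z*exp(z) = 1")
plt.show()
```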
Really enjoyed as always 😃😃
This might be an ok way to introduce Householder methods in general.
This is how I solve it.
xe^x=1
xe^x-1=0
f(x)=xe^x-1
The function is continuous on all the real numbers, so I can proceed like this:
f(0) = -1 < 0, f(1) = e - 1 ≈ 1.72 > 0
This means that the equation has exactly one solution and that this solution is between 0 and 1. Here's what I do now:
x_1=(0*1.72-1*(-1))/(1.72-(-1))=0.368
Now, I found this value, and I want to know how good it is. For this purpose, I find the corresponding f value, namely:
f(x_1)=-0.466
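The step above is one iteration of the false-position (regula falsi) method: put a chord through the bracketing points and take its x-intercept. Repeated, keeping a bracket around the root, it looks roughly like this (a sketch with my own variable names; its first iterate reproduces the 0.368 above):

```python
import math

def f(x):
    return x * math.exp(x) - 1.0

a, b = 0.0, 1.0                                # f(a) = -1, f(b) = e - 1
for _ in range(30):
    x = (a * f(b) - b * f(a)) / (f(b) - f(a))  # chord's x-intercept
    if f(x) < 0:
        a = x                                  # keep the sign change bracketed
    else:
        b = x
print(x)                                       # ~0.5671432904
```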
You can just equate this as e^x=1/x
The point of intersection between e^x & the hyperbola 1/x
x belongs to (0,1)
You can also represent Omega as an infinite nest of -Ln(-Ln(-Ln(...)))
A quick and dirty way to solve for this is to solve x = exp(-x) numerically by iterating exp(-x) until it converges on a solution. Seed at 1. The first approximation is 1/e. After 22 iterations it's at 0.56714. It converges way slower than Newton, but if you have a spreadsheet you can set it up and calculate it in seconds.
This is the first lesson in my Numerical Methods and I didn't learn it til today! Lol
PS. I already finished the course lmao
But why is there a constant that is equal to W(1)? Does it have a practical implication?
Where did you get the t-shirt?
I have a philosophical question. Consider sin(x) and W(x). I think of sin(x) as 'natural' and W(x) as 'artificial' or 'concocted' or perhaps 'ad hoc'. But I wonder if sin(x) and W(x) are both really on the same footing, and I am just biased because sin(x) is more familiar to me?
Hi ABCD is a Square. O is a point inside the square such that
Not tired of scratching my head and looking to YouTube for help... not even close. "Tutor guy"? I have "BPRP guy!!"😀
Hey, if xe^x = 1 and we need to find x, why don't we write e^x = 1/x and differentiate (d/dx) both sides? It will be 1/x = d/dx of 1/x, because d/dx of e^x is e^x
x e^x = 1
take the product log of both sides
W(x e^x) = W(1)
x = W(1)
x = 0.567143
What about getting a second order approximation for the function using the second derivative and getting a faster converging approximation?
13:52 You don't need to be so tense; you can read the value while showing us, that's ok 😂😂
I solved the question in the thumbnail image by expanding the function into x(1+x+x^2/2...)-1=0, and then using Newton's Method with only those first 3 terms. It converged very quickly!
Nice one.
Is it possible to get the formula for X sub n in explicit form?
Would be nice.
Crystal-Math it'd be pretty cool, but I don't think u can, otherwise u would have a nice limit def for omega, and I believe bprp would've pointed it out
1/e^(1/e^(1/e^(1/e^(1/e .... It's an infinite nested exponential.
x[n] = exp(-x[n-1]) is another iterative scheme but doesn't converge as fast.
Can you do a video about logarithmic integral function? or exponential Integral function? Or both?
I cannot really express how thankful I am to you. You explained this in such a beautiful manner that now I can work with Lambert W functions over the reals very easily. Thank you BPRP o7
where can you get the blackpenredpen shirts?
Thanks, could you give a quick response on whether x*exp(-x) = 0.9 can be solved with Newton's method? Likely it cannot; do I have wrong numbers?
Is there some way to represent omega as a limit? After all, if we were to apply Newton's method an infinite number of times, we'd theoretically get closer and closer to the answer, and maybe we'd be able to write that in some limit form?
Could you please make a video explaining gradient descent and convex problems?
This was a good explanation of how to get omega, but you didn't talk much about what this function is useful for. Please consider mentioning, in future videos, why certain functions are helpful or useful for some application.
Omega = 1/e^(1/e^(1/e^(1/e...
Is there any other uses of the omega constant? Does it pop up in any math field?
19:34 Are you sure xe^x can be inverted?? Is this function "1-1" so that it can be inverted?
Yes, it can, but only with the restriction x ≥ -1, because if you graph y = xeˣ, you'll see that it's monotonic increasing only on that interval (half-line).
It has a local minimum (which is also global) at (-1, -1/e); to the left of which, it's monotonic decreasing.
Lambert W (its principal branch) is defined for the above half-line. [That is, W(x) is defined for x ≥ -1/e; and W(x) ≥ -1.]
There's a nice Wikipedia page on it . . .
PS, pronunciation; because Lambert was Swiss, there are two:
German: "Lahm-bert" (close to bprp's pronunciation)
French: "Lahm-bare"
Fred
Hmmm. I'd never solve it this way. It seems intuitive to take the logarithm of the equation first, and I thought that was going to be the point of this vid. Then x+Log[x]=0. Starting with 1, we get:
1.
0.5
0.5643823935
0.5671389877
0.5671432904
Converges much more quickly. See at 15:06
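A sketch reproducing those iterates (my own code): Newton's method on g(x) = x + ln(x), whose derivative is 1 + 1/x.

```python
import math

x = 1.0
for _ in range(4):
    x -= (x + math.log(x)) / (1.0 + 1.0 / x)  # Newton step for x + ln(x) = 0
    print(x)
# Prints ~0.5, 0.5643823935, 0.5671389877, 0.5671432904 — matching the list above.
```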
This brings up a very good point.
When you have an equation to solve for a root, it's sometimes advantageous to manipulate it into a form that will make Newton's Method converge faster.
Fred
Good question! Generally, if the function has steep slopes I try to transform it to one that is more gently varying.
That was the function I got on my calculus exam *-*
Sounds like Lambert just wanted a constant named after himself so he co-opted the Omega constant.
Can you do a video on how to use the Lambert W function to evaluate solutions for equations like xe^x = n and x^x = n, where n is different from 1?
Yea, you can check out my new videos for them!
Surprisingly, the constant has an infinite representation of e^-e^-e^-e^-e^-e^-e^-....
Because it's a solution to x = e^-x and e^x = 1/x
I used the iterative formula O(n+1) = 1/e^O(n), which (using a non-zero value as the starting point) converges very quickly to 0.56714... A much shorter and simpler method; is there something wrong with it?
W(1) Done
x=W(1)
I'm Greek. I write capital Ω all the time. He makes them better...
You don’t bother to make them good because you make them all the time.
Is this Newton-Raphson approximation right?
And as always a great video.
Roberto Miglioli thanks!!!
!!! No supreme shirt?!!! great vid btw
lemon Danish that fell on the floor
: )
I need to solve this: e^x - ln(x+3). Thanks!
I have never heard of Newton's method before; it's so cool.
I love calculus so much. Thanks!
#YAY
You have to find the second derivative to use this formula; its sign determines whether you need to start the approximation from the left or from the right.
{x}, [x], x are in geometric progression. Solve for x. ({x} is the fractional part function and [x] is the greatest integer function.)
#Yay
Ashutosh Anand: Let a = {x}, b = [x], a + b = x. If it's in a geometric progression, then b/a = (a+b)/b, or b^2 - ab - a^2 = 0. Using the quadratic formula eventually gives b = phi*a, where phi is the golden ratio. since a
Doraemon music in the background, brings back so many memories!
On x_3 I thought it was going to be "γ", which equals 0.577
Finally I understand Newton's method. Thanks :-)
Is there an infinite series to omega?
I have a question. If math is a language, then all things must be related, right? If that's true, then what is the relationship between these numbers that appear everywhere? Like the golden ratio, or any other number that appears in many places - what does it have in relation to the others?
You'll use pi and e a lot together. Makes a little sense considering Euler's e^ix = cosx + isinx
Doraemon tune at the start ❤️
Yea :D
What if the function contains more than one root?
ashiq ashiq Then the root you converge on depends on your starting point.
I can't understand... can u tell me more clearly?
ashiq ashiq Imagine a function like 1-x^2 (a downward-facing parabola, basically a hill, peaking at 0, and with roots at 1 and -1); starting from any positive number will converge to 1, and any negative number will converge to -1. Basically, in any well-behaved, smooth function with a root, if you zoom into the area where it intersects the x-axis deep enough, you’ll find an area that looks reasonably close to a straight line; when the root approximation from Newton’s method enters this area, it can never escape, and from then on will just move closer and closer to the root in the middle of the area, since the tangent lines drawn in that area just point closer and closer to the root. I believe this area is known as a basin of attraction. In a function with multiple roots, the root you get depends on which of these basins the approximation enters first.
For example, try this with a function such as x^3-x around the area between -1 and 1. The areas at the far right and far left stay far right and left, converging to ±1, while there's a basin in the middle where it stays pointed towards 0 and converges there. However, there are areas on each side (between around .4 and .57 on the positive side, and their negatives on the other) where the tangent line drawn by the method does point towards the center rather than the outside, but the slope is shallow enough that it can wind up on the outside on the other side, or in other places; you can get some pretty complex behavior as it bounces back and forth between those two areas trying to find a basin to settle in.
@@KnakuanaRka thx man..
Is W(-1) also related to Ω?
It's also fun that the solution to x.ln(x)=1 is 1/W(1).
When I was at University, it was called the Newton-Raphson method. I wonder why Raphson is no longer mentioned.
Thank you
Good Lecture
Do computers calculate zeros of functions this way?
Yes