Didn't know that Machine Gun Kelly's music career failed and he became a math professor instead.
couldn't agree more. but he is good
Are you implying that math professors are failures? 😮😂
A concise and cogent summary! I remember this from about 30 years ago in University. Lovely. Absolutely essential when developing functions in modeling physical phenomena for me as an amateur science buff.
I remember "learning" Taylor series in university but didn't grasp the concept. I could replicate the steps but didn't actually understand what I was doing. 😅 Now I (think) I know! Thanks, Tom! Subscribed!
Glad it was helpful!
Always been interested in Taylor's Theorem. Never got around to learning it at uni :( But great job on always explaining things in fun detail
This was quite helpful for revising the topic for my engineering mathematics paper.
Thanks from a fellow engineering student 🇮🇳
4:28 is there an a_n missing?
I’m going to see Taylor Series at my Uni in a few months!
This video is like a blessing !
Wonderful explanation ❤❤❤.
At 4:29 what happened to the a(n) coefficient in the k-th term of the second derivative?
Loved this! Very clear explanation, and enjoyed the pace at which you speak.
Excellent video as always, keep them coming!
Think that is one of the most powerful tools for applied mathematicians and the most underrated
As an AS student this hurt my brain 🤯 Think I should come back in a couple years...
This isn't a part of AS right?
Not back in 2021
Don't know about now
To get the first few terms for log(1+sinx) it might be easier to substitute the series for sinx into the series for log(1+x).
log(1+x) = x - x^2/2 + x^3/3 - ..., and sinx = x - x^3/6 + ...
-> log(1+sinx) = (x - x^3/6) - (x - x^3/6)^2/2 + (x - x^3/6)^3/3 = x - x^2/2 + x^3/6 + ...
Here we used degree-3 polynomials for log(1+x) and sinx so our result for log(1+sinx) will be accurate up to the x^3 term
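(Not from the video, but if anyone wants to sanity-check the substitution trick above, a quick symbolic computation does it. The snippet below is just my own sketch using sympy.)

```python
# Rough sanity check of the substitution above (my own sketch, not from the video).
from sympy import symbols, log, sin, series

x = symbols('x')

# Direct Maclaurin expansion of log(1 + sin x), keeping terms up to x^3.
direct = series(log(1 + sin(x)), x, 0, 4)

# Substitute the degree-3 polynomial for sin x into the degree-3 series for log(1 + x).
sin_poly = x - x**3/6
substituted = series(sin_poly - sin_poly**2/2 + sin_poly**3/3, x, 0, 4)

print(direct)       # expected: x - x**2/2 + x**3/6 + O(x**4)
print(substituted)  # expected: the same, x - x**2/2 + x**3/6 + O(x**4)
```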
Maple Learn Calculator is a life saver, I can't believe it's free
I had this in my previous semester , it was pretty fun doing it ngl
Hello, I found the video you uploaded about calculus really useful and I would like to thank you for the valuable information you shared with us, but I have a suggestion for you to have this video in more streams. It also expands your subtitle options. Thanks in advance
24:00 The derivatives are only getting more complicated because no simplification has been done, which sounds like a tautology. However,
(-cos² x)/(1+sin x)² - sin x/(1 + sin x) = -(1 + sin x)/(1 + sin x)²
and, because we are interested in small x, so (1 + sin x) ≠ 0, we get -1/(1 + sin x) which differentiates to (cos x)/(1 + sin x)²
Without cancelling the (1 + sin x), you get the derivative (cos x)(1 + sin x)²/(1 + sin x)⁴
which is almost what Maple's long answer simplifies to: it gives (cos x)(1 + sin x)/(1 + sin x)³, which is one factor fewer of (1 + sin x) top and bottom.
Unless I've messed up my algebra somewhere.
At the end of the day, unless playing with algebra is your way of relaxing, I guess it's easier and more reliable to just use a tool. :-)
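(If anyone wants to check that algebra without grinding through it by hand, here is a rough sympy sketch of my own, not from the video or from Maple, confirming the simplification of the second derivative of ln(1 + sin x).)

```python
# My own rough check of the simplification discussed above (not from the video).
from sympy import symbols, log, sin, diff, simplify

x = symbols('x')
f = log(1 + sin(x))

print(simplify(diff(f, x, 2)))  # should reduce to -1/(sin(x) + 1), or an equivalent form
print(simplify(diff(f, x, 3)))  # should reduce to cos(x)/(sin(x) + 1)**2, or an equivalent form
```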
At 11:00, could someone explain how “Cauchy’s Mean Value Theorem” requires Pn(x) to be an infinite polynomial?
Very easy to understand, thank you sir for uploading
It is very interesting to see that calculus apparently exists. In Germany in CS we learn Analysis 1 + 2 + 3. I developed this strange paranoia that on every corner there's a proof to do.
Hey Tom, currently in year 12 looking at the Maclaurin expansion. Unfortunately we don't do the Taylor series at A level maths or further maths, but I always wanted to learn it, so I found this awesome. Keep up the great work, also can't wait to see you Saturday :)
What exam board are you doing? I know Taylor series is in Edexcel if you opt for further pure 1
@@InexorableVideos AQA :( we have a higher focus on statistics and mechanics
@@anawilliams1332 ah that's a shame. If you are really interested in more pure stuff there is technically nothing stopping you from getting an Edexcel further pure 1 book and doing bits of it for fun. If you wanna go to uni to do anything in STEM I would say Taylor series is mega helpful
@@InexorableVideos yeah, it's really the key to approximations. In the summer between year 12 and year 13 I want to look at it
Thank you!! Very insightful video
Wish i had tom as my math lecturer when i was at uni
May God bless your parents for this more than wonderful explanation ❤
~ 12:30 I thought Taylor's theorem is rather about the remainder/error term (i.e. about how precisely f is approximated by the series), which can also be proven using Rolle's theorem. Isn't it?
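For reference, the statement usually quoted as Taylor's theorem does carry an explicit remainder term. In the Lagrange form, which can indeed be proved using Rolle's theorem, it reads:

```latex
f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^k + R_n(x),
\qquad
R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,(x-a)^{n+1}
\quad \text{for some } \xi \text{ between } a \text{ and } x.
```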
Beautiful Tom ty!
this is so beautifully done, I'm in sixth form and this is crazy interesting
A million thank yous! This is awesome!
Glad it was helpful!
Wow Likando 😮 this is relieving
Amazing! Thank you very much😃
You're welcome 😊
Tom, your footage is superb and thank you for that. Can I ask if you teach Physics along with your mathematics? 👍
I just teach maths, but I do a lot of the physics-type topics such as quantum theory and relativity
For once, I could get the gist of Taylor's Theorem. I have been looking it up all over the internet. Thank you for the video explanation.
Thanks Tom!
For the cos(x) expansion, why is a equal to zero?
The worksheet is not available. I just get taken to the Maple start screen and no worksheet. Please check. Thanks.
Hi Tom, great explanation. Do you have worked solutions for the question sheet? Thanks!
I think maple might be my new best friend
it's awesome isn't it?
god!! this is really really difficult, seriously I just missed this out in college and it feels like I've been left a light year behind everyone else
The people called Taylor who are watching: did someone say my name?
Back in my day (Waterloo in the 80s), when the prof mentioned Taylor Series, the class would gently booooo. Never really sure why.
Nice. Thank you.
Great video Tom!
But I still don't get what it means to expand around a.
Why don't we just leave it at x? Why is the Maclaurin series at a=0?
Next one could be Fourier series, which has a similar idea
Just refreshing my maths skills. I did physics and astronomy at Uni. It’s been 40 years since I did this. I feel old.
I remember using this to approximate operations on early computers with limited math libraries. Isn't this why we use radians, because the approximation is more accurate for small values of x? Also, can't you, using the series for cos, sin, and e, together with complex numbers, derive Euler's formula, -e^(-iπ) = 1?
You can do that if you define exp(ix) as a power series.
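As a sketch of that approach: define e^(ix) by its power series, split it into even and odd powers, and the cosine and sine series fall out:

```latex
e^{ix} \;=\; \sum_{n=0}^{\infty} \frac{(ix)^n}{n!}
\;=\; \sum_{m=0}^{\infty} \frac{(-1)^m x^{2m}}{(2m)!}
\;+\; i\sum_{m=0}^{\infty} \frac{(-1)^m x^{2m+1}}{(2m+1)!}
\;=\; \cos x + i\sin x,
```

so e^(iπ) = -1, which is the same statement as -e^(-iπ) = 1 in the comment above.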
Can we just apply the Mean Value Theorem instead of the generalised Cauchy's Mean Value Theorem?
is this valid for large angles?
Hey Tom, I would suggest for a video try explaining or solving problems with Foo Fighters songs as background music. Would you like to do it someday?
youtube copyright might have something to say about that... (I do love the Foo fighters though!)
@@TomRocksMaths Hoping copyright does not bring the Channel down, it would be so cool!
@@TomRocksMaths "and if we extend this series to n = infinity (Everlong)..."
One version of Taylor's theorem that doesn't get talked about much is the multivariable one. I think books just assume that if you know the single-variable version then you can extend it to the multivariable domain but I don't think that's quite true...
The same principles still hold as long as you’re careful with those partial derivatives!
@@TomRocksMaths yup, it’s just tricky!
@@TomRocksMaths Isn't it that you take weighted average of the partials? Eg. 1/4 of xx, 1/2 of xy and 1/4 of yy?
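For reference (this isn't covered in the video), the second-order two-variable expansion around (a, b), with all partial derivatives evaluated at (a, b), is:

```latex
f(a+h,\, b+k) \;\approx\; f(a,b) + f_x\,h + f_y\,k
+ \tfrac{1}{2}\bigl(f_{xx}\,h^2 + 2 f_{xy}\,h k + f_{yy}\,k^2\bigr),
```

so the mixed partial is counted twice inside the factor of 1/2.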
Tom, can you please do a video on the Laplacian, please?
Tom, can you verify that the derivative of 1/(1+sin(x)) is -(cos(x)^2)/(1+sin(x))^2? Because when I calculate it manually and use a calculator, the numerator is not squared.
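Not speaking for Tom, but a quick symbolic check may help here; the sketch below uses sympy and is my own, not taken from the video. The squared cosine shows up when you differentiate cos(x)/(1 + sin(x)), i.e. the first derivative of ln(1 + sin x), rather than 1/(1 + sin(x)) on its own.

```python
# My own check of the two derivatives in question (not from the video).
from sympy import symbols, sin, cos, diff, simplify

x = symbols('x')

# Derivative of 1/(1 + sin x): the cosine in the numerator is NOT squared.
print(simplify(diff(1/(1 + sin(x)), x)))   # expected: -cos(x)/(sin(x) + 1)**2

# Derivative of cos x/(1 + sin x): the unsimplified form contains the squared cosine.
print(diff(cos(x)/(1 + sin(x)), x))        # expected: -sin(x)/(sin(x) + 1) - cos(x)**2/(sin(x) + 1)**2
```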
great stuff
Might be useful for students to memorize the easiest Maclaurin series off Wikipedia:
- sin, cos, sinh, cosh, arctan, arctanh
- e^x, ln(1-x), ln(1+x)
- 1/(1-x), 1/(1-x)^2, 1/(1-x)^3
There are common patterns between the Maclaurin series of some functions, with only slight differences such as adding/removing (-1)^n or adding/removing a factorial in the denominator; a few of the most common expansions are written out below.
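For convenience, a few of those standard expansions (each valid on its usual interval of convergence):

```latex
e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}, \qquad
\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!}, \qquad
\cos x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!},

\ln(1+x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} x^n}{n} \quad (-1 < x \le 1), \qquad
\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n \quad (|x| < 1).
```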
This is the explanation I needed in 2nd year. Now I just need someone to explain what those Epsilon Delta proofs were about.
That's easy. So many math YouTubers have made videos on epsilon-delta proofs and epsilon-delta worked examples. Just Google it!
beautiful handwriting
How would you prove that the error of the Taylor series goes to 0 as n goes to infinity?
Make a video on application of calculus
nice vid!
Nice sir
I was hoping for an actual proof of why the power series actually converges to the value of the function for any input in the domain (provided the function is continuous and differentiable infinitely many times).
Awesome 👍
Thanks 🤗
Hi Tom, I have a question on the Yang-Mills and Mass gap Millennium prize problem. So if somebody solves this problem will he get the Nobel Prize or the Fields medal??
Fields medal as there isn't a Nobel prize for maths unfortunately.
Could we say that in the limit, as n approaches infinity, any function f(x) equals the Taylor series of that function?
Edit: continuous function f; if f is not continuous at a point, then I think the Taylor series won't reach it, no matter how many terms it consists of
The explanations of Taylor polynomials are helpful, and well-thought-out. Unfortunately the title of the video is "Taylor's Theorem," which gets very short (even misleading) shrift in this presentation. Taylor's Theorem is much more specific than "the approximation gets closer to f(x) the more terms you add." I came here to learn and understand the theorem, and was disappointed to find that the video doesn't live up to its title. If the video were titled "Taylor approximations" or "Taylor polynomials", it would live up to its advertising, and would not mislead people who are looking for an explanation of Taylor's theorem.
BTW, the "FREE" Maple Learn worksheet doesn't deliver: even once you create an account, you can't view the worksheet (view limit reached for "this limited trial version").
Does that also mean that if, for two differentiable functions, all derivatives are equal at one point, then both functions are equal at every point? That is quite mind blowing, because it means that you can define every function through a single point.
Dunno if I understood it right, but if you mean to say that if two functions have the same derivative at a point, the functions are equal (and that's how we can define any function through the same point), that can't be true. If we consider a function f(x), then f(x) + a (where a is some constant) for any value of 'a' will have the same derivative. But, that doesn't mean they are the same function. Like, sinx+1 doesn't equal sinx+120.
For analytic functions, if you know all derivatives at a point, then that describes the function (within the interval of convergence). This is the definition of an analytic function: a function that has a power series representation. Most functions you see are analytic at least in some open interval.
(Note: all derivatives includes the 0th derivative, so they won't differ by a constant.)
But there are infinitely differentiable functions that are not analytic, and knowing all derivatives at a point doesn't distinguish them.
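The standard example of that last point is

```latex
f(x) =
\begin{cases}
e^{-1/x^{2}}, & x \neq 0,\\
0, & x = 0,
\end{cases}
```

which is infinitely differentiable with f^(n)(0) = 0 for every n, so its Maclaurin series is identically zero even though f is not.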
No, the expression that we set for the 'kth' derivative of our approximating polynomial at 'a' must hold true for any value of k from 0 to n. That is our rule.
What does “expanding around a point a” mean?
Probably means to define the polynomial "in the neighbourhood" of that point. In the sense that when we expand around "a", we make all values really close to "a" permissible. That's why the Taylor expansions Tom wrote for cos x and ln(1+sin x) only work for x close to a = 0.
machine gun kelly from another timeline
So by Cauchy's mean value theorem f(x) and Pn(x) are almost the same. Well, I know a version of Cauchy's mean value theorem which ends in a remainder term, and this remainder term is not always zero; for example, you cannot calculate the natural logarithm of 3, log(3), by expanding the function log(x+1) in a Taylor series around the point a = 0, so I don't know what you are teaching here.
Here I am watching this at midnight instead of preparing for the MAT in a week 😭😭😭
I just saw your video and liked it
Rather than using any assumptions, why not derive Taylor/Maclaurin from first principles,
thus justifying the polynomial?
Why not begin with the observation that, in theory, any F(X) equals F(a) plus the power integral of F^(1)(M_1)·(X-a)/1!, where F^(1)(M_1) denotes an implicit mean value of the first derivative.
Since F^(1)(M_1) is not explicit, we replace it with F^(1)(a), whose numerical value is explicit. But power integrating F^(1)(a) leaves an error equal to [F^(1)(M_1) - F^(1)(a)]·(X-a).
The difference between F^(1)(M_1) and F^(1)(a) is governed by the next higher derivative, i.e. F^(2).
This is precisely why we then reduce that error by taking the explicit numerical value F^(2)(a) and power integrating it twice, giving F^(2)(a)/2!·(X-a)^2 and creating the next term of the polynomial P_2(X).
A smaller error, [F^(2)(M_2) - F^(2)(a)]·(X-a)^2/2!, remains, which in turn is reduced by the next iteration of the same steps, so the error shrinks with each pass.
The n! in the denominator is part and parcel of the progressive power integration of F^(n)(a) over (X-a)^n.
Thus no novel assumptions are used in developing the Taylor polynomial, which incidentally uses the easiest of integrals, namely power integrals.
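One standard way to make this "power integration" argument precise is Taylor's theorem with the integral form of the remainder, obtained by repeated integration by parts of F(X) = F(a) + ∫ from a to X of F'(t) dt:

```latex
F(X) = \sum_{k=0}^{n} \frac{F^{(k)}(a)}{k!}\,(X-a)^k
     + \int_a^{X} \frac{F^{(n+1)}(t)}{n!}\,(X-t)^n \, dt .
```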
I don't mean to be a prick, but you forgot an "a_n" right at 4:30.
Such a beauty
What is the meaning of expanding a function?
It is not easy! The worst part is to calculate the derivatives.
youre like the anime professor of math professors
I’ll take it
🎉
🎉🎉
🎉🎉🎉
Taylor series (Taylors version)
(Taylor's Version)
After I watched it 4 times I got it
Test yourself with some exercises on Taylor's Theorem with this FREE worksheet in Maple Learn: learn.maplesoft.com/?d=CFBSHQOGDODGKJHTHOMHGFBTIKNTKIISOJNIGONSILHHPHEQFUERNSGQHHEKLFPKGRINBJJNJTESJNHPCOKKEQOIJTMRJNLUCNPO
Thank you prof.
In Asia you probably learn this at age 15 😔
Are u kidding me?? Really
Uhh nope. I'm Asian and this is the first time I'm learning this. Hasn't even been taught to me in college but I thought I'd learn it because why not?
@@VenkataB123 ye same
@@VenkataB123 I'm also from Asia (India), but I'm in 8th, and as far as I know there's no Taylor series in the further classes either... I've just heard of trigonometry and other theorems
@@janav5624 True. I'm completing my 12th this year and though we have a bit of trigonometry with calculus, we don't have Taylor series. So yeah, the person who wrote the original comment must be bluffing😂
For me, I've finished school
but my color is pink
Sir, btw, you don't look like a teacher or a prof; here in India it's, like, different
Hey, INELEC students, where are you?
Madhava series
Slaaaay
Machine Gun Kelly does math. (First thing that popped into my head. Superfluous comment...)
please stop writing downwards
how do u write upwards?