0:53 And that works only for r ∈ ℝ, x > 0. So you proved it for a certain f : ℝ⁺ → ℝ, x ↦ x^r, and you did not generalize it to g : ℝ → ℝ, x ↦ x^r.
Mathematics is just the perfect example of the infinite power and immutability of God. Everything in math is circular reasoning, and we do things just because they always have and always will work. How much more trust, then, can we place in the God of the Bible who promises that all things will contribute to the good of those who love Him, who have been called according to His purpose?
Blue Grovyle 1. Mathematics has no relation to any entities, let alone any deities of any religion such as the so-called God you talked about. 2. No, mathematics is not circular reasoning. Mathematics constitutes a set of formal interpreted theories, which contain a set of chosen axioms, and every theorem is derivable from these axioms based on truth-preserving inference rules, which is determined by the semantics of the language on which the axioms are formulated. 3. Yes, we do these things because they work, and as such, they are reliable. However, this does not warrant that we put any trust in deities, because it cannot be demonstrated that these deities are reliable, which is not the case with mathematics, for which you can prove its reliability. Therefore, under the burden of proof, we cannot assume God or the Bible are reliable, as they are not connected to mathematics - and in fact, statements in the Bible are often in direct contradiction with mathematics - and thus, we should not rely on them until proven otherwise. 4. Can you provide evidence or proof that God exists? If not, can you then provide a reason why we should simply have faith instead of obeying the burden of proof, which is the only reliable epistemic principle by which we can operate thus far?
Implicit differentiation is like a magic trick. It feels like we haven't done any real work and still got the answer.
well implicit differentiation just feels like a half-assed chain rule to me. half-assed because we don't do half the work right away ^^ (other than that it is a chain rule; we could solve for y in a separate line and take the derivative, but that would defeat the purpose of implicit differentiation. you only do half the work because you hope you can save quite a chunk of the work you have ahead of you, like figuring out the "inner function" algebraically)
@@Metalhammer1993 it's also half assed because the domain can only be _half_ of all real numbers, i.e. x>0, since that's the domain for ln(x)
Yes, all of these proofs only verify for positive real values of x. Proof of all real values requires more work.
@@williamestey7294 Probably works on negative numbers too if you allow the result to be complex valued.
Yes, that is correct. When using the generalized logarithm function defined for complex numbers, this proof can be shown to work over the entire complex plane.
Damn, that power rule proof was way easier than the proof with the binomial theorem and the definition of the derivative.
Neil Gupta , and that proof that uses the binomial theorem only proves it is true for positive integers
Meh; I still find the old binomial version more memorable. Maybe it’s to do with it not requiring anything other than the basic definition of a derivative; no trickery or complications needed, just straightforward variable-juggling.
And you really only need the first two terms of the expansion, since when you expand (x+h)^n, you can factor h^2 out of all the other terms, meaning they still have a factor of h after dividing by h, so you know they all will go to 0 with the limit and can toss them out without having to write all of them; it's not as messy or scary as it may seem at first blush.
But perhaps it’s mostly because of a really cool geometric intuition shown by 3blue1brown in his Essence of Calculus series: th-cam.com/video/S0_qX4VJhMQ/w-d-xo.html That analogy seems to make it easy for me to get a grasp on the binomial version, while this one doesn’t have such an easy representation. Heck, he offers a lot of geometric representations of other derivative properties which are way more intuitive and easy to understand than any way of juggling variables.
Then again, the binomial version only works for counting numbers, so I suppose we need this one anyways.
@@willsunnn Well, this method only proves this within a function's positive domain. So theoretically, it shouldn't have to work with stuff like -x²-1 at all. You would need to add a limit as c approaches infinity of y+c to the left side to be able to use these formulas for every function, but that ruins the log properties
I don't know if this is necessarily easier as you use one rule to prove the other rule rather than the binomial proof not relying on a more advanced derivative rule. And you usually learn the derivative of ln(f(x)) way after you learn the power rule.
@@yoavmor9002 This was my first thought, seeing this proof
i dont think i've used the quotient rule since high school calculus because it's usually easier to use the product rule with the denominator raised to the -1 power. this was a cool video though, i loved these derivations
Same bro
I am disappointed I’ve never thought of this but thanks for the tip
I always used this to prove the quotient rule, but I never considered using it as an actual method
@@fahrenheit2101 it works well. One less rule to remember and potentially mess up
I discovered that trick quite by mistake in high school because I thought that it was weird there are two different ways to look at a quotient and wanted to be sure they gave the same answer!
My calc professor was Chinese and when teaching the quotient rule, she would always say, "low-d-high, high-d-low" with a Cantonese accent. I never forgot, she was an awesome instructor!
One of my students remembered the quotient rule as “down d-up - up d-down all over down squared”
Also, with regards to the power-to-the-power rule, if you distribute, you will find that dy/dx = g*f^(g - 1)*f’ + ln f*f^g*g’. This is interesting because it is the sum of the ordinary power rule as applied to y if g is treated as constant and of the exponential rule as applied if f is treated as a constant.
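This decomposition is easy to sanity-check symbolically; a minimal sketch, assuming Python with sympy installed (the choices of f and g here are arbitrary examples, not from the video):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = x**2 + 1          # arbitrary example functions
g = sp.sin(x)

lhs = sp.diff(f**g, x)
# power-rule piece (g treated as constant) + exponential-rule piece (f treated as constant)
rhs = g * f**(g - 1) * sp.diff(f, x) + sp.ln(f) * f**g * sp.diff(g, x)
assert sp.simplify(lhs - rhs) == 0   # the two sides agree identically
```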
proofs using ln() are very helpful. a lot easier to understand than the normal limit proofs.
perhaps Blackpenredpen will explain Taylor polynomials.
It’s amazing that with just the chain rule and the derivatives of ln and of sin, one can differentiate almost every single function.
You only need e to the x to find the derivative of ln
For those concerned with the commentary that this only works if x^r is positive, or if f*g or f/g or f^g is positive. This is not true. There is no restriction on what f, g, and x^r can be valued as long as the input x is real-valued. The reason is that the natural logarithm function as well as the exponential function are both well-defined if their domains are the set of complex numbers. Complex-valued functions are allowed inside the limit definition of derivatives, because the limiting variable and the inputs of those functions in the limits are real-valued, which is what is required. So f, g, and x^r can be complex-valued, with r being complex-valued, as long as x is real-valued. Hence there is no fallacy in what BPRP has done in the video.
Fancy jacket 👌👌
1234Daan4321 thank you!!
Could you prove the same but using the definition of derivative?
Love your vids!!
bmw123ck yes. But just longer.
Yeees, but not so obvious and systematic as it is using logarithmic differentiation
Just realized something. The derivative of f^g is the sum of the derivative assuming g is constant (g*f^(g-1)*f’) and the derivative assuming f is a constant (f^g*ln(f)*g’)
That is the verbal way of remembering the derivative of a function raised to another function, as shown in calculus books.
Beautiful. Always wanted to know the proof, shame they don't show it in math B.
Teddy S what math B?
@@blackpenredpen I dont know if this person is Dutch, but in the Netherlands we have different math classes: A,B,C,D. A&C are focussed on Statistics and Probability, with algebra and simple differentiation. B is focussed on Calculus, Trigonometry and Geometry. D goes further than A,B and C with complex numbers, group theory and matrices and some other stuff.
In the case of the f(x)^g(x) derivative, you could write the final expression in a nicer way. If you distribute the f^g, you get f^g(gf'/f) + f^g(ln f)g', which is equal to gf^(g-1)f' + ln(f)f^g g'. I find this expression nicer because the first term resembles a simple power rule and the second term is similar to the original function. Also, the first term doesn't include g' and the second term doesn't include f', so the first one gives us information about f' and the second one about g'. I love this
(x^a)'=ax^(a-1). (a^y)'=ln(a)*a^y*y'. (x^y)'=yx^(y-1)*x'+ln(x)*x^y*y'.
This is like a gradient or something like that
_One dTwo plus Two dOne, that's the way we'll have some fun_
_Low dHigh minus High dLow, square the bottom and away we go_
Also, the integral by parts is _ultraviolet supervoodoo._
J.J. Shank Interesting, though I generally remember which functions go where in the quotient rule by doing a quick sanity check with x/1. And I never bothered to use a mnemonic for integration by parts, both because of the D-I method used by BPRP and elsewhere, but before even that, it’s pretty trivial to get it from the product rule.
Proof by logs is so easy it shouldn't be allowed lol.... technically these proofs constrain the functions (monotonically increasing, nonzero, etc) because logs blow up if you're not careful. Logs are great for sketching a proof before doing the heavy lifting. Fabulous video!
Encore!
Next, the chain rule please :D
You can’t really use ln(y) to prove the chain rule, since its derivative in terms of x depends on the chain rule; it would be circular reasoning.
It’s reasonably simple anyway: Let y = f(g(x)) and let u = g(x).
By implicit differentiation, dy/dx = dy/du * du/dx.
dy/du = f’(u) = f’(g(x))
du/dx = g’(x)
Thus dy/dx = f’(g(x)) * g’(x)
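A quick numeric sanity check of the chain rule statement above, as a sketch in plain Python (the choices of f, g, and the test point are arbitrary):

```python
import math

def num_deriv(fn, x, h=1e-6):
    # central-difference approximation of fn'(x)
    return (fn(x + h) - fn(x - h)) / (2 * h)

g = lambda t: t**3 + 2*t
composed = lambda t: math.sin(g(t))    # y = f(g(x)) with f = sin

x0 = 0.7
lhs = num_deriv(composed, x0)          # dy/dx computed directly
rhs = math.cos(g(x0)) * (3*x0**2 + 2)  # f'(g(x)) * g'(x)
assert abs(lhs - rhs) < 1e-6
```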
@@paulchapman8023 hello man, I was wondering, is the substitution with u and y necessary or is it just to write it in a cleaner way ? Trying to understand without much knowledge sorry
@@rainbobow_8125 chain rule is easy to prove using the product rule. Skip to about 4 min in: th-cam.com/video/5RyEpMHGg1Q/w-d-xo.html
Interestingly enough if you do d(x^x)/dx you get (x^x)(1 + ln(x)). This was the same thing discussed in one of the previous videos using a different method to explain something different but I believe he got to the same result which I think ties up stuff nicely.
i've heard that called the "generalized power rule." i remember looking it up a few years back for some CAS software because i had never learned how to differentiate a function raised to the power of another function. it's really nice to see where it comes from
fun exercise: try to differentiate e^(kx), a^x, x^n etc. using the last formula at 17:40 :)
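For anyone trying that exercise, one way to check the answers symbolically (a sketch assuming sympy is installed; `general_rule` is just an illustrative helper name encoding the formula):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
k = sp.symbols('k')
a, n = sp.symbols('a n', positive=True)

def general_rule(f, g):
    # (f^g)' = f^g * (g*f'/f + ln(f)*g'), the formula at 17:40
    return f**g * (g*sp.diff(f, x)/f + sp.ln(f)*sp.diff(g, x))

# e^(kx), a^x and x^n all fall out of the one formula
assert sp.simplify(general_rule(sp.E, k*x) - k*sp.exp(k*x)) == 0
assert sp.simplify(general_rule(a, x) - sp.ln(a)*a**x) == 0
assert sp.simplify(general_rule(x, n) - n*x**(n-1)) == 0
```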
ISN'T IT
NAMELY
Proving the power rule using e
We start with some algebra to get it into the right form:
1) y = x^r
2) ln(y) = ln(x^r)
3) ln(y) = r*ln(x)
3) e^(ln(y)) = e^(r*ln(x))
4) y = e^(r*ln(x))
And now we're ready for the only piece of calculus in this proof, using the e rule and the chain rule:
5) dy/dx = e^(r*ln(x)) * r/x
Some more algebra to finish it up:
6) dy/dx = e^(ln(x^r)) * r/x
7) dy/dx = x^r * r/x
And finally we arrive at the answer:
8) dy/dx = r * x^(r-1)
You're welcome ;)
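Step 5 really is the whole proof in one line, and it's easy to check numerically for a non-integer exponent; a sketch in plain Python (the values of r and x0 are arbitrary):

```python
import math

r = math.pi        # a non-integer exponent
x0 = 2.5
h = 1e-7

y = lambda t: math.exp(r * math.log(t))   # x^r rewritten as e^(r*ln x), step 4
num = (y(x0 + h) - y(x0 - h)) / (2 * h)   # numeric dy/dx
exact = r * x0**(r - 1)                   # the power rule, step 8
assert abs(num - exact) < 1e-4
```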
Similar thing can be done for the f^g rule. Why use implicit differentiation when the e method works?
Steps 2, 3, and 3 (you wrote it twice) were unnecessary
I really enjoyed this. Do you have any plans to prove the chain rule in a future video?
Patrick Conan I asked peyam for it already. Maybe we will work something out
3blue1brown
Of course the power rule in particular is fairly easy to show just with the limit definition of the derivative and some intuition about binomial expansion (the x^r term will cancel, and every term with h to a power greater than 1 in the original expansion will go to zero when you take the limit, so only one term remains) but using logs and implicit differentiation is a nice trick once you go through all the work of finding the derivative of the log.
the binomial proof is only valid for integer powers; this one is valid for all
I thought I'd share the mnemonics my first calculus teacher gave us for the product and quotient rules:
1 d 2 plus 2 d 1, calculus is so much fun
and
2 d 1 minus 1 d 2, draw a line, and square below
Why is the derivative of ln(x) video private?
Nice. One can check with either f or g as a constant or x and verify that it all works out.
The easiest way to prove the quotient rule is the use the product rule and make the denominator function to the power of -1. Proceed with the product rule and use the power rule for the -1
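That route is easy to verify symbolically; a minimal sketch assuming sympy (sin(x) and x²+1 are arbitrary choices, with a denominator that is nonzero everywhere):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)
g = x**2 + 1                            # nonzero everywhere

via_product = sp.diff(f * g**(-1), x)   # product rule + power rule on g^(-1)
quotient_rule = (sp.diff(f, x)*g - f*sp.diff(g, x)) / g**2
assert sp.simplify(via_product - quotient_rule) == 0
```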
I've got an interesting question.
When you have (-1)ⁿ in a series, it means that the signs of terms will be alternating (+-+-). Does there exist a similar formula that would make every 3rd (or nth for that matter) term have a different sign (++-++-)? Or maybe even follow some kind of pattern, like 2 positive and 3 negative, etc?
AtomiX omg! That's actually on my to-do list. I used to ask my students about finding a formula for 0,0,1.
Hint: use a trig function
blackpenredpen Thanks, I'll look into it :)
You could also use a rounding function like the floor function, eg (-1)^Floor[n/2] or a combination of them to make a pattern
AtomiX It's easy to generate patterns like this using sine/cosine. For example, "1-sqrt((sin(n*π/3))^2)*2/sqrt(3)" is equal to 0 at n=1, 2, 4, 5, 7, 8; and equal to 1 at n=0, 3, 6, 9. Of course, you can do it with -1 and 1 as well by using different constants.
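That sine trick can be checked directly; a plain Python sketch (`sign_pattern` is just an illustrative name):

```python
import math

def sign_pattern(n):
    # 1 when n is a multiple of 3, else 0, using the sine expression above
    return 1 - abs(math.sin(n * math.pi / 3)) * 2 / math.sqrt(3)

vals = [round(sign_pattern(n)) for n in range(10)]
assert vals == [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]
```

From there, something like 1 - 2*sign_pattern(n + 1) gives (up to floating-point noise) the ++-++- coefficient pattern asked about.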
I see, it's not as complicated as I thought it would be
What's a mathematician's favourite talk show host?
ln(DeGeneres).
Very simple and elegant!
Thanks.
New favorite proof, law of cosines or geometric summation are close seconds
Can we really take natural logarithm on both sides ? x^r can take negative values so we probably can't take logarithm ?
Юрій Ярош
I think you're right, you cannot take the logarithm if x^r is negative. But there is a simple way to prove that the formula is also right for negative values of x^r. Let's say the formula in the video is right for all x>0 in case r is odd (so x^r for x
Юрій Ярош
A similar method can be used for the other proofs, I'm just gonna show it on the product rule for the others it works the same way. If y
What stops you from taking the ln of a negative number?
ln(-R) = i*π + ln(R) with R ∈ ℝ, R > 0 (I am not sure what happens when R is not real)
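This identity for the principal branch is easy to confirm with Python's cmath; a sketch (R = 4 is an arbitrary positive real):

```python
import cmath
import math

R = 4.0                                # any real R > 0
lhs = cmath.log(-R)                    # principal branch of the complex log
rhs = complex(math.log(R), math.pi)    # ln(R) + i*pi
assert abs(lhs - rhs) < 1e-12
```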
@@happygimp0 and if we differentiate, the imaginary part becomes 0, so we don't have to care about negative numbers.
The last one should be called the REAL power rule
Looking sharp, professor Chow hahaha
Thanks.
Hey Blackpenredpen. I love your videos about calculus and I am currently stuck on an integral. I believe it involves the Zeta Function, which I could not find anywhere relating to integrals and solving them. Can you do a video on this integral please: integral from 0 to infinity of x^7/(1-e^x) dx. Thank you!
That was a really beautiful presentation.
Thanks, this really helped with my year 5 maths exam!
I really enjoy your proofs and derivations.
Blessed be the Most Merciful, the best channel..
So good. 😎
And i like the outro music.
LOVE YOUR VIDEOS MATE. I'm taking IB Math AA HL and damn your videos are fine.
Very cool explanation. I did not know it before, but now I understand how to get these rules. Thank you 👍
d/dx (x^x) thus is equal to x^x(ln x + 1)
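A quick numeric check of that x^x result; a plain Python sketch (the test point is arbitrary):

```python
import math

f = lambda t: t**t
x0 = 1.5
h = 1e-7

num = (f(x0 + h) - f(x0 - h)) / (2 * h)   # numeric derivative of x^x
exact = x0**x0 * (1 + math.log(x0))       # x^x * (1 + ln x)
assert abs(num - exact) < 1e-5
```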
Thanks!!!!! I finally know!!!!!
but u can only take natural log of positive numbers… so why does this apply to negative y?
What stops you from taking the ln of a negative number?
ln(-R) = i*π + ln(R) with R ∈ ℝ, R > 0 (I am not sure what happens when R is not real)
Product: One prime two plus two prime one, isn't mathematics fun?
Quotient: Low d high less high d low, over the denominator squared we'll go.
Came for the bonus, now back to my stupid test
let me ask you a question: why didn't you put the abs sign when you took the ln of both sides? Is that because of your earlier assumption that all the functions are nice? thanks
This looks GREAT!! Thanks
Hi , please can you help me with a question?? It is:
What is the sum of the following terms:
1+e^(-x)+e^(-2x)...
And over what range of x is the solution valid?
It's a simple geometric series
1+e^(-x)+e^(-2x)+... = 1+e^(-x)+(e^(-x))^2+... = 1/(1-e^(-x))
It's convergent when e^(-x) < 1, so x > 0
If we use a substitution, u=e^x, we can simplify the sum as 1 + 1/u + 1/u^2 +... This is an infinite geometric series, equal to 1/(1-1/u). Substituting back, we get 1/(1-e^(-x)). I'm guessing you mean over what domain of x the solution converges, which is when |e^(-x)| < 1, i.e. x > 0, which is the domain of the solution.
The sum is a geometric progression (GP), with first term a=1 [basically e^(0)] and common multiple r=e^(-x).
(HOW - 1st term is e^0, 2nd term is [e^(-x)], 3rd term is [e^(-x)]^2 = [e^(-2x)], and so on)
Now general formula for sum of a GP is S = a[1-(r^n)]/[1-r], where n is no of terms in the progression.
If n-->inf, then for the sum to be convergent (approach a finite number), we need |r| < 1, i.e. e^(-x) < 1, which holds for x > 0. The sum is then S = a/(1-r) = 1/(1-e^(-x)).
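The convergence claim is easy to confirm numerically; a plain Python sketch (x = 0.5 is an arbitrary point in the valid range x > 0):

```python
import math

x = 0.5
r = math.exp(-x)                          # common ratio, |r| < 1 for x > 0
partial = sum(r**k for k in range(200))   # 1 + e^(-x) + e^(-2x) + ...
closed = 1 / (1 - r)                      # closed form of the geometric series
assert abs(partial - closed) < 1e-10
```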
Blackpenredpen, can you go over more about complexifying an integral? I'm currently in calc 2 and I'm loving it so far so I want to be very good at integrals, do you know where I can find hard practice problems with answers/go over process ?
Ryan Lian u can check out my playlist of integral battles. th-cam.com/play/PLj7p5OoL6vGzuQqjZfMsAX7-R5fNji0cO.html
Wow that's good. I'd never seen those formulas proven that way before. .. Nice.
Great video with wonderful explanations :)
Love the enthusiasm ❤️
I can't stop watching your videos!! aaaaaaaaaaagh
Is it possible to do a similar thing for integration?
integration by parts can easily be proven via product rule. not sure about integration by substitution but I imagine it is easy to prove via chain rule
You can also prove the product rule with the binomial theorem, using it in the derivative limit definition, with (x+h)^n as the binomial. I'm from México, so sorry about my English
Magaña Drums u mean the power rule. Also that only works if n is positive whole number
Oh yes yes, the power rule, sorry
for the last one, you could have arranged the bracket to show it was product rule for the function g*ln(f), and then written it as (g*lnf)'
Could we have a little more insight into what is meant by a "nice" function? Does it mean bijective? Continuous? Differentiable? Or something else?
In this context, it means continuous, differentiable, with a single-valued output.
This is an easy way to calculate derivative rules, but I think it is not sufficient as a proof.
Because ln(x) is only defined for positive (>0) values of x. So if y(x) or f(x), g(x) is negative(
neat and well presented. love it!
There was a mnemonic a classmate taught me in college for the quotient rule: "Low D high minus high D low, square below"
But isn't the derivative of ln(x) based on the derivative of e^x which is based on its expansion, which requires the power rule?
HelloItsMe the derivative of e^x doesn't require the power rule, it only needs the definition of e and limits
HelloItsMe Derivative of ln(x) can be proved without e^x.
Lookup "Differential algebra". An exponential is a function where differentiating is the identity, e.g. dF=id(F) then F is an exponential. Since you only need the chain-rule to show what the derivative of the inverse function is, it is sufficient to have both this definition of an exponential and the chain-rule for the derivative of it's inverse function.
The derivative of any function (if it exists, of course) can be obtained from first principles. Knowledge of derivatives of other functions isn't required for that; knowledge of limits is.
Nice. So we can get the derivative of x^x using the general formula, giving us the same answer: x^x*(1+ln(x))
trucid2 yes
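A quick numerical sanity check of that formula, for anyone skeptical (my own sketch using a central difference; the function names are mine, not from the video):

```python
import math

def f(x):
    # f(x) = x^x for x > 0
    return x ** x

def formula(x):
    # claimed derivative: x^x * (1 + ln(x))
    return x ** x * (1 + math.log(x))

def numeric_derivative(func, x, h=1e-6):
    # central-difference approximation of func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

# Compare the claimed derivative with a numerical one at a few points
for x in (0.5, 1.0, 2.0, 3.0):
    assert abs(numeric_derivative(f, x) - formula(x)) < 1e-4
```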
You can't take ln() of every function.
The domain matters, my friend. You ignore the domain in a lot of videos.
Wooow ! But what if g or f is negative ??
This is just the innocent property of logarithms. :).
U can also prove differentiation of constant power of x
If we are taking the log of everything, don't they all need to be positive numbers or positive functions? So are these proofs legitimate?
Shivaji Garg Not really. If we use complex numbers, then we are allowed to use any real function or number as input.
Hey, isn't it circular reasoning to use the derivative of the natural log to prove the power rule?
blackpenredpen
Please do a video on differential logarithm, please.
Naimul Haq ?
Like what?
We know all of these derivatives, yet it feels one is quite elusive.
What is the derivative of x factorial?
f(x)=x! is only defined on the non-negative integers, so it doesn't really have a derivative, but if you do some fancy analytic continuation and a lot of maths, you end up with something called the gamma function (look it up - en.wikipedia.org/wiki/Gamma_function), which is essentially the same thing (Gamma(n)=(n-1)!), and that does indeed have a derivative.
search up the gamma function and the digamma function :D
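As a rough illustration (my own sketch, using Python's stdlib `math.gamma` and a finite difference, not anything from the video): the derivative of Γ at 1 should land near the negative Euler–Mascheroni constant, since Γ'(1) = -γ ≈ -0.57722.

```python
import math

def gamma_derivative(x, h=1e-6):
    # central-difference approximation of Gamma'(x)
    return (math.gamma(x + h) - math.gamma(x - h)) / (2 * h)

# Known value: Gamma'(1) = -gamma (Euler-Mascheroni constant)
EULER_MASCHERONI = 0.5772156649015329
assert abs(gamma_derivative(1.0) + EULER_MASCHERONI) < 1e-4
```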
Gee, someone knows in which video his friend Payam makes a tower of e's?
Why do people always say the power rule is only for real numbers? I saw no operation in this proof that was real number specific, so why can't it work with complex numbers?
It can and does work for complex numbers as well. People just say it is limited to real numbers, to keep it simple as a Calc 1 topic. There is a proof that it works for complex numbers as well, using the complex logarithm.
What if I have no g or f?
Differentiate the x^x with this power to the power rule... d/dx( x^x ) = x^x * ( 1 + ln(x) ) ?
Maybe these are all the useful {f(x), g(x)} rules there are?
0:53 And that only works for all r ∈ ℝ, x > 0. So you proved it for a certain
f : ℝ₊* → ℝ
x ↦ x^r
and you did not generalize it to
g : ℝ → ℝ
x ↦ x^r
the video about the derivative of ln(x) is private
I solved my existential crisis
Wonderful!
We have to use the CHEN LU!
I prefer dy/dx = f^(g-1)*(g*f'+g'*f*ln(f)) as the final form.
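Both forms come out of the same logarithmic-differentiation computation; for the record, the algebra is:

```latex
y = f^{g}
\;\Rightarrow\; \ln y = g \ln f
\;\Rightarrow\; \frac{y'}{y} = g'\ln f + g\,\frac{f'}{f}
\;\Rightarrow\; y' = f^{g}\left(g'\ln f + \frac{g f'}{f}\right)
             = f^{\,g-1}\left(g f' + g' f \ln f\right).
```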
You can prove the quotient rule using the power rule and the product rule, no need for ln
Awesomeee i love logarithms
Link doesn't work
Dapper jacket!!! Boii!!
Awesome!
An easier way to prove the power rule is with mathematical induction
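The induction step, assuming the product rule and the base case d/dx x = 1, is just:

```latex
\frac{d}{dx}x^{n+1}
  = \frac{d}{dx}\left(x \cdot x^{n}\right)
  = 1 \cdot x^{n} + x \cdot n x^{n-1}
  = (n+1)\,x^{n}.
```

Of course, induction only covers positive integer exponents; the log proof in the video handles any real exponent (for x > 0).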
Completion: The order of addition doesn't matter.
Handsome af and smart
Hello, can you try and integrate e^x*cos(x) without using integration by parts?
Mário Marques does complexifying the integral count? If so, I did that already.
blackpenredpen You mean converting the expression in the integral to a complex expression? Yes, thats how I want to solve it.
Mário Marques here th-cam.com/video/m0C70WQDHNg/w-d-xo.html
blackpenredpen Amazing, thanks
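For anyone curious, the "complexifying" trick referenced in this thread works because cos x = Re(e^{ix}):

```latex
\int e^{x}\cos x\,dx
  = \operatorname{Re}\int e^{(1+i)x}\,dx
  = \operatorname{Re}\frac{e^{(1+i)x}}{1+i}
  = \operatorname{Re}\frac{(1-i)\,e^{x}(\cos x + i\sin x)}{2}
  = \frac{e^{x}(\cos x + \sin x)}{2} + C.
```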
Mathematics is just the perfect example of the infinite power and immutability of God. Everything in math is circular reasoning, and we do things just because they always have and always will work. How much more trust, then, can we place in the God of the Bible who promises that all things will contribute to the good of those who love Him, who have been called according to His purpose?
Blue Grovyle 1. Mathematics has no relation to any entities, let alone any deities of any religion such as the so-called God you talked about.
2. No, mathematics is not circular reasoning. Mathematics constitutes a set of formal interpreted theories, which contain a set of chosen axioms, and every theorem is derivable from these axioms based on truth-preserving inference rules, which is determined by the semantics of the language on which the axioms are formulated.
3. Yes, we do these things because they work, and as such, they are reliable. However, this does not warrant that we put any trust in deities, because it cannot be demonstrated that these deities are reliable, which is not the case with mathematics, for which you can prove its reliability. Therefore, under the burden of proof, we cannot assume God or the Bible are reliable, as they are not connected to mathematics - and in fact, statements in the Bible are often in direct contradiction with mathematics - and thus, we should not rely on them until proven otherwise.
4. Can you provide evidence or proof that God exists? If not, can you then provide a reason why we should simply have faith instead of obeying the burden of proof, which is the only reliable epistemic principle by which we can operate thus far?
Prove, using differential equations and laplace transforms, that sin(ix) = isinh(x)
Cool.
Logarithms rules!
logarithms of negative values yayy
4:30
Prove the chain rule!!!!!!
Isn't it.
Can these derivations be extended into the regions x < 0, and/or y < 0? - I mean, without complexification!
Very good!
nice