17:41 I really appreciate the statement that finding the actual sum is out of reach for a calc 2 class. It explains why infinite sums are such a mystery to me even though I got straight A's in my undergrad calc classes!
Somehow, this reminds me of a hilarious comment on mathematicians and generalization by the unforgettable Raymond Smullyan, which appears in his puzzle book _The Lady or the tiger_ :
Mathematicians are very fond of generalizing! It is typical for one mathematician X to prove a theorem, and six months after the theorem is published, for mathematician Y to come along and say to himself, "Aha, a very nice theorem X has proved, but I can prove something even more general!" So he publishes a paper titled "A Generalization of X's Theorem."
Or Y might perhaps be a little more foxy and do the following: he first _privately_ generalizes X's theorem, and then he obtains a special case of his own generalization, and this special case appears so different from X's original theorem that Y is able to publish it as a new theorem.
Then, of course, another mathematician, Z, comes along who is haunted by the feeling that _somewhere_ there lies _something_ of an important nature common to both X's theorem and Y's theorem, and after much labor, he finds a common principle. Z then publishes a paper in which he states and proves this new general principle, and adds: "Both X's theorem and Y's theorem can be obtained as special cases of my theorem by the following arguments ... ,"
2:20 Now you HAVE to make a video about this! Fubini's theorem, the monotone convergence theorem, the dominated convergence theorem - the works!
Can it be done without also introducing Lebesgue measure theory etc? I think the measure theory is critical to understanding Fubini etc.
I would actually rewrite the double factorial in the last expression too. Notice that you can multiply the numerator and denominator by (2n)!! to "fill the gaps" in (2n+1)!!. Then you apply the same rewriting as before to the numerator. That leaves us with the sum over 2^n*(n!)²/(2n+1)!, which I think looks easier at first glance since it doesn't involve uncommon notation.
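A quick numeric sanity check of that rewritten form (just a sketch; the cutoff of 60 terms is an arbitrary choice of mine):

```python
from math import factorial, pi

# partial sum of 2^n * (n!)^2 / (2n+1)!, which should approach pi/2
s = sum(2**n * factorial(n)**2 / factorial(2 * n + 1) for n in range(60))
print(abs(s - pi / 2))  # tiny: the tail beyond 60 terms is negligible
```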
Yeah, it would be nice to see the proof of switching the order of summation and integration on the channel ;)
I agree!
You can do the exact same thing with the infinite alternating harmonic series, the infinite sum of (-1)^(n+1)/n, which equals the natural log of two. Using Euler's transformation, you can change it to the infinite sum of 1/(n * 2^n). I think that's an incredibly fascinating transformation.
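For what it's worth, this is easy to check numerically; a small sketch (the cutoff of 40 terms is arbitrary):

```python
import math

N = 40
# Euler-transformed series: sum 1/(n * 2^n) -> ln 2, geometrically fast
euler = sum(1 / (n * 2**n) for n in range(1, N + 1))
# original alternating harmonic series: sum (-1)^(n+1)/n -> ln 2, slowly
alt = sum((-1)**(n + 1) / n for n in range(1, N + 1))
print(abs(euler - math.log(2)), abs(alt - math.log(2)))
```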
15:51 The double factorial should not be confused with the factorial function iterated twice (sequence A000197 in the OEIS), which is written as (n!)!, not as n!!
18:18 Good Place To Stop
Euler's transformation looks very much like a "discrete convolution", and thus the result is probably somehow related to the Fourier transform of the original series?
It would be really interesting to explore in this direction! Fourier analysis gives a lot of insights and "tricks" for solving these kinds of problems (I mean, evaluating sums).
oh that is a BRILLIANT connection!! now you've got me curious too!!
Your I(n) can be again calculated by using the beta function very quickly.
first, x^2 = u, so
I(n) = (1/2) * int_0^1 u^(-1/2) * (1-u)^n du = (1/2) * Beta(1/2, n+1) = (1/2) * Gamma(1/2) * Gamma(n+1) / Gamma(n + 3/2) = (1/2) * sqrt(pi) * n! / ((n + 1/2) * Gamma(n + 1/2)) = (1/2) * sqrt(pi) * n! * 2^n / ((n + 1/2) * sqrt(pi) * (2n-1)!!) = 2^n * n! / ((2n+1) * (2n-1)!!) = (2n)!! / (2n+1)!!
Identities used:
int_0^1 u^(a-1) * (1-u)^(b-1) du = Beta(a, b) = Gamma(a)*Gamma(b)/Gamma(a + b)
Gamma(z + 1) = z*Gamma(z) = z!
Gamma(z + 1/2) = sqrt(pi)*(2z - 1)!!/2^z
Gamma(1/2) = sqrt(pi)
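Those identities are easy to spot-check with `math.gamma`; a quick sketch (here `double_factorial` is my own helper, using the convention that (-1)!! is the empty product, i.e. 1):

```python
import math

def double_factorial(n):
    # product n * (n-2) * ...; the empty product gives (-1)!! = 1
    return math.prod(range(n, 0, -2))

# check Gamma(z + 1/2) = sqrt(pi) * (2z-1)!! / 2^z for small integers z
for z in range(8):
    lhs = math.gamma(z + 0.5)
    rhs = math.sqrt(math.pi) * double_factorial(2 * z - 1) / 2**z
    assert abs(lhs - rhs) < 1e-9 * max(1.0, lhs)
```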
(2n+1)!! can also be rewritten in terms of the regular factorial: (2n+1)!! = (2n+1)!/(2n)!! = (2n+1)!/(2^n n!)
So the identity becomes pi/2 = sum 2^n (n!)^2 / (2n+1)! = sum 2^n / (C(2n,n) (2n+1)).
Notice that by the beta function, (n!)^2 / (2n+1)! = B(n+1, n+1) = int x^n (1-x)^n dx from x = 0 to 1, which leads to an alternative way of proving the 'crazier' identity.
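The beta-function step can be checked numerically with a crude midpoint rule (a sketch; the step count is an arbitrary choice):

```python
import math

def beta_int(n, steps=100_000):
    # midpoint rule for int_0^1 x^n (1-x)^n dx = B(n+1, n+1)
    h = 1.0 / steps
    return h * sum(((i + 0.5) * h)**n * (1 - (i + 0.5) * h)**n
                   for i in range(steps))

# compare against the closed form (n!)^2 / (2n+1)!
for n in range(1, 5):
    closed = math.factorial(n)**2 / math.factorial(2 * n + 1)
    assert abs(beta_int(n) - closed) < 1e-8
```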
If you want to keep going:
(2n+1)!! = (2n+1)!/(2n)!! = (2n+1)!/(2^n*n!)
So pi/2=sum of 2^n*(n!)^2/(2n+1)!
Which I think looks wilder, because a double factorial is (counterintuitively) less than a factorial, while a factorial squared *looks* dangerous
Some of the trickiest identities lie behind the alternating reciprocals of the odd numbers.
For the first identity, some care needs to be taken with the integral-series exchange, because the whole thing is not absolutely summable. So we are outside the scope of the usual theorems.
I played with some Python code:

def pi(n):
    # 2 * partial sum of sum_{k>=0} k!/(2k+1)!!; the term ratio
    # t_k / t_{k-1} = k / (2k+1) avoids computing any factorials
    s, x = 0.0, 1.0
    for k in range(1, n + 1):
        s += x
        x *= k / (2 * k + 1)
    return 2 * s
I think the accuracy is about 2^(-n), where n is the number of iterations, since the term ratio tends to 1/2.
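That error estimate is easy to probe empirically; a sketch (comparing against `math.pi`, so it is only meaningful while the error sits well above float precision):

```python
import math

def pi_approx(n):
    # n-term partial sum of pi = 2 * sum_{k>=0} k!/(2k+1)!!
    s, x = 0.0, 1.0
    for k in range(1, n + 1):
        s += x
        x *= k / (2 * k + 1)
    return 2 * s

# consecutive errors should shrink by roughly a factor of 2
ratio = abs(math.pi - pi_approx(21)) / abs(math.pi - pi_approx(20))
print(ratio)  # close to 1/2
```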
Could you do a video on Lagrange multipliers and intuition behind them?
At 13:00, the term x^2(1-x^2)^n at 0 and 1 is omitted due to it being zero - but is that also the case for n=0?
I think we can take the limit for n=0
@@khoozu7802 I'll have to check in more detail, but my gut tells me that this might make the interchanging of integral and infinite sum earlier invalid.
Brilliant as usual. If YouTube had existed in 1997, we would have benefited a lot and finished the path in mathematics.
Students of this era are very lucky.
Can we talk about how neatly he writes too oml
Interesting, I haven't seen this formula before but plugging the eta function definition into it seems to extend its definition to the left half of the complex plane
What a fascinating journey this was!!
Although complicated, it was clear all the way through!!
I think this series is a consequence of some Ramanujan series that involve hypergeometric series.
Viewer suggestion: the following functional equation is quite fun. Find all functions f:R->R that satisfy: f is continuous at zero, and: f(x+y)=f(x)+f(y)+xy*(x+y). I will comment a solution if asked. This is a problem from Engel’s “Problem Solving Strategies”.
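Not the solution itself, but a numeric sanity check that the natural candidate family f(x) = x^3/3 + cx satisfies the equation (a sketch; c = 2.0 is an arbitrary choice):

```python
import random

def f(x, c=2.0):
    # candidate: the cubic term produces exactly the xy(x+y) cross terms
    return x**3 / 3 + c * x

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(f(x + y) - (f(x) + f(y) + x * y * (x + y))) < 1e-6
```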
Interesting to see where these odd series expansions for Pi come from
So I messed around with the partial sums of both the regular series for pi as well as the accelerated one, and got some expected as well as some unexpected results.
n: 1
error using regular sum: 27.32395447351627%
error using accelerated sum: 36.33802276324186%
------------
n: 2
error using regular sum: 15.117363684322473%
error using accelerated sum: 15.117363684322488%
------------
n: 3
error using regular sum: 10.347427210380774%
error using accelerated sum: 6.6291000527547395%
------------
n: 4
error using regular sum: 7.841709142978686%
error using accelerated sum: 2.9912727820828446%
------------
n: 10
error using regular sum: 3.175237710923643%
error using accelerated sum: 0.032292025775455105%
------------
n: 50
error using regular sum: 0.6365561421825672%
error using accelerated sum: 2.8271597168564595e-14%
------------
n: 200
error using regular sum: 0.1591539484045989%
error using accelerated sum: 2.8271597168564595e-14%
------------
n: 1000
error using regular sum: 0.03183098066059948%
error using accelerated sum: 2.8271597168564595e-14%
------------
For the first term, the regular sum is closer to the actual value of pi/4 than the accelerated sum. At n=2 the partial sums are exactly the same, and after that the accelerated sum consistently beats the regular sum by a massive margin, just as Michael said, due to the quickly decreasing terms. However, the unexpected behavior is that after some point the percentage stays fixed at about 2.8x10^-14 %, which is odd for a series that is supposed to converge to the true value, especially an "accelerated" one. The regular sum does not display this behavior. Is there a mathematical reason for this, or is the value of pi stored in Python not that accurate, or is it some floating-point error? If someone is able to figure it out please reply. Here is my code:
import math

def double_factorial(n):
    return math.prod(range(n, 0, -2))

def regular_partial_sum(n):
    s = 0
    for i in range(n):
        s += (-1)**i / (2 * i + 1)
    return s

def accelerated_partial_sum(n):
    s = 0
    for i in range(n):
        s += math.factorial(i) / double_factorial(2 * i + 1)
    return 0.5 * s

def percentage_error(actual_value):
    expected_value = math.pi / 4
    return str(abs((actual_value - expected_value) / expected_value) * 100) + "%"

for n in [1, 2, 3, 4, 10, 50, 200, 1000]:
    print("n: " + str(n))
    print("error using regular sum: " + percentage_error(regular_partial_sum(n)))
    print("error using accelerated sum: " + percentage_error(accelerated_partial_sum(n)))
    print("------------")
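One way to test the floating-point hypothesis raised above is to redo the accelerated partial sums in exact rational arithmetic and compare against a hard-coded high-precision value of pi (a sketch; the digit string is typed in by hand, so verify it against a trusted source):

```python
from fractions import Fraction
from decimal import Decimal, getcontext

getcontext().prec = 60
# first ~59 decimal digits of pi, hard-coded
PI = Decimal("3.14159265358979323846264338327950288419716939937510582097494")

def exact_accelerated_sum(n):
    # (1/2) * sum_{i=0}^{n-1} i!/(2i+1)!!, exactly, via the
    # term ratio t_{i+1}/t_i = (i+1)/(2i+3)
    s, t = Fraction(0), Fraction(1)
    for i in range(n):
        s += t
        t *= Fraction(i + 1, 2 * i + 3)
    return s / 2

def exact_rel_error(n):
    s = exact_accelerated_sum(n)
    approx = Decimal(s.numerator) / Decimal(s.denominator)
    target = PI / 4
    return abs((approx - target) / target)

# The exact errors keep shrinking past n = 50, which would mean the
# ~2.8e-14 % plateau is double-precision rounding, not the series stalling.
print(exact_rel_error(40), exact_rel_error(50), exact_rel_error(60))
```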
Hi Sir. I am from India and I have a question related to calculus.
What is the Faà di Bruno formula for m functions on R^n?
This is actually really cute. You would be able to prove lots of useful identities like this.
Please produce this: take the integral of ln(cos x).
Can you give me a link to the proof of Euler's transformation? Thank you very much.
Pretty easy to find on TH-cam: th-cam.com/video/s873ihX7yuM/w-d-xo.html
pi/8 is the final value, not pi/2, right?
Notice that he multiplies by 2, as opposed to what you thought he did, i.e., divide by 2
Whoa! Am I mistaken but has there been a sudden jump up in video quality?
What do you mean?
@@NotoriousSRG im assuming they mean the new video fade transitions instead of the old jumpcuts, as can be seen at 4:03
@@jamscone_ Thank you. I edit about 4 of the channel’s videos a month, starting in Nov, and this is the most recent one i did. I am really flattered by your comments here. Feels good to to have one’s work recognized like this. ❤❤❤
@@NotoriousSRG thanks for your work on the channel, it makes a real difference !!
🥰🥰🥰🥰🥰🥰 thank you
We can make a further transformation in terms of the gamma function: we know that n! = Gamma(n+1) and (2n+1)!! = (2^(n+1)/sqrt(pi)) * Gamma(n + 3/2).
What is that trick of integrating the summation called? (please leave a like if you reply with the answer so I get a notification for it)
314k-242k (December 9, 2022) / 22 = 3.27k new subscribers / day. Not going to make it I'm afraid.
??????????????????
♥♥
FIRST
Not quite lol
3rd
Woah... This video was just uploaded 3 mins ago
Americans should stick to 2"x4" beams, 4"x4" timber studs and other construction materials which better fit their level of intelligence, and leave mathematics to others.
And who are you to act so arrogant?
I get why it's a useful thing to have a shorthand for sometimes, but I will never stop wishing that we had a notation where n!! was not different from (n!)!. Maybe n!₂ or something.
Well the factorial of the factorial never really arises naturally so I think the notation n!! is fine for the double factorial. I do understand where you're coming from though because most people will mistake n!! for (n!)! the first time they see the notation, but the same could be said for other notations such as sin^2(x), sin^(-1)(x), log(x), log^2(x), etc. - once you know about the notation you're fine