This video should be titled "Chock Full of Techniques You Will Never Think Of On Your Own".
And of course by "You", I mean "I"
@@jacemandt You can include me in the "we", too. ;)
@@jacemandt Don't you mean NO ONE, not even Ramanujan or me or any other math whiz? I don't see anyone, no matter how smart or trained, coming up with this. So isn't this pointless, contrived BS??
And how does he get that the limit as theta approaches zero of sin(n*theta)/sin(theta) equals n squared? With L'Hopital's rule? You wouldn't think so.
Something of a running theme in the Stack Exchange videos. Michael didn't think this up either, hence looking for solutions to a viewer-submitted problem online; but just verifying that this guy's approach is valid is a good way to start making sense of it, and, maybe equally important, he thought it was cool and wanted to show us.
This sum came up on one of my university problem sheets, but with a completely different approach. You first find the discrete Fourier transform of the sequence X_n = n, for n between 0 and N-1, and then use Parseval's theorem to get the desired result. This method uses some very fun techniques, but I do wonder if there's any more easily motivated solution...
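For anyone who wants to poke at that route numerically, here's a rough sketch in Python (the function name is mine, not from the problem sheet). It uses the fact that the DFT of x_n = n satisfies |X_k|^2 = N^2/(4 sin^2(k*pi/N)) for k != 0, so Parseval pins the csc^2 sum down:

import numpy as np

def csc2_sum_via_dft(N):
    x = np.arange(N)
    X = np.fft.fft(x)                        # X_k = sum_n n * exp(-2*pi*i*k*n/N)
    # Parseval sanity check: sum |x_n|^2 == (1/N) * sum |X_k|^2
    assert np.isclose(np.sum(x**2), np.sum(np.abs(X)**2) / N)
    # for k != 0, |X_k|^2 = N^2 / (4 sin^2(k*pi/N)), so solve for the csc^2 sum
    return (4 / N**2) * np.sum(np.abs(X[1:])**2)

for N in (5, 12, 101):
    print(N, csc2_sum_via_dft(N), (N**2 - 1) / 3)   # the two columns agree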
yup - wolfram mathematica? (never used it tho)
@@Alan-zf2tt Well interestingly, WolframAlpha gets the correct answer for general n, but doesn't give any working. Perhaps it samples the sum for a few values, and because it's a fairly simple closed form, can figure out that it fits that model? Or perhaps it's been asked enough times before that it's learnt the answer now? Would be intrigued to see the inner workings.
@@malignusvonbottershnike563 it would be good to know how Mathematica worked it out.
I wondered what would happen if AI was unleashed on math and how it might evolve solutions in its own AI way
@@malignusvonbottershnike563 For what it's worth, a big part of how Wolfram products solve certain integrals and PDEs is just by having been fed the literature, like Gradshteyn and Ryzhik
I think there is an easier way to do this exercise. It's still not easy, but at least you don't need the intuition to work straight with Chebyshev polynomials; you just develop and decompose the expression and work with simple polynomials such as P = z^n - 1 and P'/P. I know it sounds strange, but anyone who has already taken a partial fraction decomposition class could pick up these tricks fairly easily.
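If I'm reading that suggestion right, here's a hedged numerical sketch of the connection (Python, entirely my own check, not the full argument): the roots of P(z) = z^n - 1 are the n-th roots of unity zeta_k, P'(z)/P(z) = sum_k 1/(z - zeta_k), and |1 - zeta_k|^2 = 4 sin^2(k*pi/n), which is what ties the partial fractions back to the csc^2 sum.

import numpy as np

n = 9
k = np.arange(1, n)                      # skip the root at z = 1 (k = 0)
zeta = np.exp(2j * np.pi * k / n)        # remaining n-th roots of unity
lhs = np.abs(1 - zeta)**2                # |1 - zeta_k|^2
rhs = 4 * np.sin(np.pi * k / n)**2       # 4 sin^2(k*pi/n)
print(np.allclose(lhs, rhs))             # True

# so sum 1/|1 - zeta_k|^2 is a quarter of the csc^2 sum, i.e. (n^2 - 1)/12
print(np.sum(1 / lhs), (n**2 - 1) / 12)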
One of the best math videos I've ever seen.
Absolutely brilliant!!!
💯❤
I think the problem could be solved by studying the polynomial (X+i)^n-(X-i)^n, whose zeros are cot(k*π/n), 1≤k≤n-1. Vieta's formulas ( en.wikipedia.org/wiki/Vieta%27s_formulas ) could then lead us to a simple expression of the double sum cot(p*π/n)cot(q*π/n), which could lead us to the sum of cot^2(k*π/n) and thus the sum of csc^2(k*π/n). Haven't tried it yet, but I think it's feasible.
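A quick numerical check of that root claim (Python; my own sketch, n = 7 is just an example):

import numpy as np
from math import comb, pi

n = 7
# coefficients of (X + i)^n - (X - i)^n, highest power of X first
coeffs = [comb(n, j) * ((1j)**(n - j) - (-1j)**(n - j)) for j in range(n, -1, -1)]
roots = np.roots(coeffs[1:])             # the X^n coefficient vanishes, drop it
expected = np.array([1 / np.tan(k * pi / n) for k in range(1, n)])
print(np.allclose(np.sort(roots.real), np.sort(expected)))   # True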
I thought of rewriting each 1/sin^2(mπ/n) term as csc^2(mπ/n), and then remembered that csc^2 is (up to a sign) the derivative of cot(mπ/n)... is that where you got cotangent from too, like me? I'm not sure what you meant about using Vieta's formulas, can you clarify? Couldn't you instead rewrite cotangent as cosine/sine and then relate that to 1/sin^2... and then some terms cancel out?
@@leif1075 When I said that the sum of cot^2(k*π/n) could lead us to the sum of csc^2(k*π/n), I meant using csc^2(x)=cot^2(x)+1. Also, Vieta's formulas ( en.wikipedia.org/wiki/Vieta%27s_formulas ) are formulas that relate the coefficients of a polynomial to its zeros. For example, for the particular case of a quadratic polynomial ax^2+bx+c, whose zeros are x_1 and x_2, we have : x_1+x_2=-b/a and x_1*x_2=c/a.
19:28
Ah yes Tchebyshev polynomials! My favourite XD
I don't see why anyone would ever think of doing this, no matter how smart and trained you are, though. Do you, seriously?
@@leif1075 I was just making a joke haha. But indeed, even if you know about Tchebyshev polynomials it's still not an obvious thing to use them in this problem. I think that sometimes people use certain tricks and manipulate certain algebraic expressions so much, it becomes second nature and they recognise, or try to see them anytime they can.
similar to Navier's solution to plate bending
Actually, they are not similar. Here, we have a single sum with a sine squared in the denominator, but in Navier's solution we have two sums with e.g. sine of (m*pi*x/a) multiplied by sine of (n*pi*y/b) in the numerator.
In fact, by using Navier's solution, we define the plate's out-of-plane displacement w(x,y) as a double sum, as well as any external distributed load P(x,y); then, by putting all the terms on one side under one double sum, we can set the coefficients of the sine functions equal to zero to obtain the non-trivial solution, taking the boundary conditions into account.
1/sin^2(mπ/n) = 1/(1/2 - (1/2)cos(2mπ/n)) = 2/(1 - cos(2mπ/n)). This is of the form 2·(1/(1 - y_m)), which can be written as a geometric series in y_m. I don't know if this is helpful.
7:00 you didn't need the limits at all. All that did was help you dissociate the values 1 and -1 from the differentiation of p(x), which a "we'll set x to this value later" comment/notation (like, well, the limit of a function continuous at that value, I guess) would have done just fine.
WAIT, isn't that WRONG at 6:00? The numerator is clearly NOT the derivative of the denominator... the derivative of a cosine term would be a sine term, and the numerator is all cosine terms... there are no sine terms... so isn't this obviously wrong?
The cosine terms include no x's. You can treat x_m as a constant relative to x
@@minamagdy4126 What?? He literally says in the video he does a substitution x_m equals cos(mπ/n)... didn't you see that? So obviously x depends on cosine... so why are you saying something seemingly nonsensical? Am I missing something, and how do you not see exactly what I see?
@leif1075 x is an independent variable of the function. When taking the derivative with respect to x, we completely ignore all other variables, here the parameter m of x_m. This is because m is seen as completely independent of x, just as x is completely independent of m. This also means that x_m is completely independent of x, and is thus treated as a constant with respect to the derivative. The fact that both x and x_m carry the label x is an unfortunate coincidence that seems to have confused you.
x = πk/n solves (cot x + i)^n = (-1)^k / sin^n(x). Binomially expand the left side; the imaginary part has to be zero. This imaginary part is a polynomial in cot^2(x) (for n odd; if n is even there is an extra cot factor which is easily removed), with leading terms C(n,n-1)·cot^(n-1)(x) - C(n,n-3)·cot^(n-3)(x) + ... The solutions of this polynomial equation are exactly the values cot^2(πk/n), so the sum of all (n-1)/2 solutions is C(n,n-3)/C(n,n-1) = (n-1)(n-2)/3!. We want to sum over all n-1 values of cot^2, so multiply by 2 to get (n^2-3n+2)/3. The original question asks for the sum of csc^2, but csc^2 = cot^2 + 1, so the answer is (n^2-3n+2)/3 + (n-1) = (n^2-1)/3
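Nice. That closed form also survives a brute-force check (Python, my own quick sketch):

from math import sin, pi

def csc2_sum(n):
    return sum(1 / sin(m * pi / n)**2 for m in range(1, n))

for n in (3, 10, 57):
    print(n, csc2_sum(n), (n**2 - 1) / 3)   # columns agree up to rounding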
Brilliant🎉
Honestly followed until we broke out the T(n) functions. Would love a deep dive going over the theory of using the Chebyshev polynomials in this way. 🤯
"And that is a good place to stop." was the conclusion after finding a formula for the sum.
I think one should go one step further and show how the solution of the Basel problem follows from the sum.
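Agreed. For what it's worth, here is a hedged sketch of the usual argument (my own addition, not from the video): take odd n = 2N+1, so the arguments mπ/n for m = 1..N all lie in (0, π/2), where cot^2(x) < 1/x^2 < csc^2(x); summing over m and using the half-sum closed forms Σ cot^2 = N(2N-1)/3 and Σ csc^2 = 2N(N+1)/3 squeezes Σ 1/m^2 between two bounds that both tend to π^2/6. A numerical illustration in Python:

from math import sin, tan, pi

def basel_bounds(N):
    n = 2 * N + 1                 # odd, so m*pi/n stays below pi/2 for m <= N
    cot2 = sum(1 / tan(m * pi / n)**2 for m in range(1, N + 1))   # = N(2N-1)/3
    csc2 = sum(1 / sin(m * pi / n)**2 for m in range(1, N + 1))   # = 2N(N+1)/3
    # squeeze cot^2(x) < 1/x^2 < csc^2(x), summed over x = m*pi/n
    return (pi / n)**2 * cot2, (pi / n)**2 * csc2

for N in (10, 100, 10000):
    print(N, basel_bounds(N), pi**2 / 6)   # both bounds close in on pi^2/6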
After the P’(x)/P(x) step I couldn't go any further… lack of knowledge lmao
i absolutely LOVED this, sir
Michael, I have a question! Is there any function f: R to R, f(x) = g(x)/h(x), such that the inverses have this behaviour: f^-1(x) = g^-1(x)/h^-1(x)?
There might be a solution with power functions for g and h, since the inverse of a power function is again a power function, and also the quotient of two are again a power function
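I tried that ansatz quickly (my own sketch; the exponents a and b are mine, everything restricted to x > 0): with g(x) = x^a and h(x) = x^b, matching f^-1 = g^-1/h^-1 forces 1/(a-b) = 1/a - 1/b, i.e. a^2 - ab + b^2 = 0, which has no nonzero real solutions, so plain power functions don't seem to work unless I've slipped somewhere.

import sympy as sp

a, b = sp.symbols('a b', nonzero=True)
# g(x) = x**a, h(x) = x**b on x > 0, so f(x) = x**(a - b)
# want the inverse of f, x**(1/(a-b)), to equal g^{-1}(x)/h^{-1}(x) = x**(1/a - 1/b)
condition = sp.Eq(1 / (a - b), 1 / a - 1 / b)
print(sp.solve(condition, b))   # only complex solutions: b = a*(1 ± i*sqrt(3))/2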
Might as well call 1/h(x)=j(x)
f=gj, f^-1=g^-1 j^-1
id=f(f^-1)
=f(g^-1 j^-1)
=g(g^-1 j^-1) j(g^-1 j^-1)
So, I'm not sure it covers every case, but I guess you either have trivial solutions, so that the j^-1 inside the g thing doesn't affect anything and the g^-1 inside the j thing doesn't affect anything, or you have that g(g^-1 j^-1) = 1/j(g^-1 j^-1) = h(g^-1 j^-1)
So that probably implies g=h, or at least for most cases it does, so g/h, and therefore f, sort of becomes trivial.
But maybe there are more cases. I don't know.
What if we have a half multiplying inside the sin?
Why does the negative of the integral [from 1 to (1+e^(1/e))/2] k^(2k)^(2k)^((1/2)k)^((1/4)k)^((1/8)k)^...^((1/2^n)k) dk turn out to be the same as asking 'what is the earliest negative number that equals itself choose itself-minus-one'? It kind of scares me. How can an integral suggest that it is possible to choose from fewer options than 0?
Ridiculous!
It's not clear to me when you jump from showing that T_n'(x) = 0 when using x_m (makes sense and is well described) to saying that when you use x as a real number (instead of the form cos theta) you will get the given product. I think you need some words at this point in the video to link these steps up.
Something like: "because we are evaluating x at 1 or -1, this would be when cos theta produces such values, which would be when its derivative is zero".
How can you calculate this by yourself 🤔?
Wolfram Alpha can calculate this sum
After solving the recurrence relation I got a sum which Wolfram Alpha does not compute correctly
In fact Michael derived a formula for T_{n}(x) in one of his earlier videos
with a complex numbers approach (by comparing de Moivre's formula and the binomial expansion of Re(e^{inx})).
I tried different approaches - the recurrence relation and an ordinary differential equation (both are linear second order).
From the recurrence relation I got the coefficients of T_{n}(x) expressed in terms of a sum which I cannot calculate.
From the differential equation I got two subsequences of coefficients expressed in terms of a product,
then I merged these two subsequences into one.
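In case anyone wants to reproduce the recurrence route, here's a minimal sketch (Python with sympy; entirely my own code, not Michael's): T_0 = 1, T_1 = x, T_{k+1} = 2x·T_k - T_{k-1}, with a numerical spot check that T_5(cos t) = cos(5t).

import sympy as sp
from math import cos

x = sp.symbols('x')

def chebyshev_T(n):
    # T_0 = 1, T_1 = x, T_{k+1} = 2*x*T_k - T_{k-1}
    T_prev, T_curr = sp.Integer(1), x
    if n == 0:
        return T_prev
    for _ in range(n - 1):
        T_prev, T_curr = T_curr, sp.expand(2 * x * T_curr - T_prev)
    return T_curr

T5 = chebyshev_T(5)
print(T5)                                      # 16*x**5 - 20*x**3 + 5*x
t = 0.7
print(float(T5.subs(x, cos(t))), cos(5 * t))   # both ≈ -0.9365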
this is funnier (but less easy to understand) if you skip putting x into the expression and just write it as the derivative with respect to 1
Why is that funnier? And I don't see why writing it as x is necessary... it's needlessly convoluted, isn't it?
@@leif1075 I don't know about you, but taking the derivative of an expression with respect to a constant value within that expression makes _me_ laugh
A good bit of mental gymnastics 😮
WAIT, isn't that WRONG at 6:00? The numerator is clearly NOT the derivative of the denominator... the derivative of a cosine term would be a sine term, and the numerator is all cosine terms... there are no sine terms... so isn't this obviously wrong?
No, x_m is not a function of x, so those terms are constant
Why on earth write the second term as minus minus... why not just write it as plus??
And isn't it unclear to everyone else whether n is held constant or changes over the sum along with m? I don't see why anyone would think to write it as a limit like that. Why would they??
answer = 1, is it?
Write slowly and write clearly; it's better for you and for everyone.