🎓Become a Math Master With My Intro To Proofs Course! (FREE ON YouTube)
th-cam.com/video/3czgfHULZCs/w-d-xo.html
At first I was amazed that he can do backwards writing so neatly. Then realised he just flipped the video
I was amazed but then you ruined the magic for me!!
What????
I was just going to comment the same thing.
that is pretty obvious...
@@manasaprakash7125 sarcasm dude 😅
Mathematicians: Look at the integral of my dreams.
Physicists: Cool. But does that serve any purpose?
Mathematicians: NO, but look at it. It's so magical. ;p
truest thing i have heard
Next century physicist : hey guys, you will never believe what weird function I'm trying to integrate today
@@mathieuaurousseau100 - this century's pure mathematics is next century's applied mathematics, because of those meddling physicists.
😂So True!
@@BriTheMathGuy woah you saw my comment. Thanks bro you made my day 😊
Holy cow that’s the prettiest integral I have ever seen
I think so too!
👌
And he’s doing it backwards!
I would take a little twist on the improper integral by applying a Laplace transform, which matches the definition: F(s) = L{ f(t) } = integral from 0 to inf of f(t)·e^(-st) dt.
The screen inversion to get his writing the right way around totally blew my mind, to the point that I'm unable to focus on what he says.
Never have I understood "Sufficiently advanced math is indistinguishable from magic" more than this very moment.
Except the original quote was “Any sufficiently advanced technology is indistinguishable from magic”, from Arthur C. Clarke's book “Profiles of the Future: An Inquiry into the Limits of the Possible” (1962).
But I agree this integral is pretty much nightmare stuff if you haven't seen how to solve it at least once.
@@GreenCaulerpa Yes thank you for explaining the joke. You get an internet cookie. Congratulations.
@@tnk4me4 yummy, thanks for that cookie!
@@GreenCaulerpa nice quote!
What's most fascinating is the way he looks to be writing from right to left for us. It's surely inverted, but still... Thanks for the vid
2:40 dude nice thank you for being aware that you can’t just interchange infinite sums and integrals willy nilly.
He did not give an argument, though. He just mentioned "uniform convergence". But why would this sum converge uniformly? ln(x) has a singularity at 0, so I am not sure about uniform convergence on [0,1].
@@HeinrichHartmann Series for e^x converges absolutely
@@HeinrichHartmann I can try to fill in the details for anyone interested:
x log(x) is bounded on (0,1]: I will not do this here, but it is concave up, has a minimum, and its limit at 0 and its value at 1 are both 0. Therefore there's some closed interval containing all values of x log(x) for x in (0,1]. The power series of e^x converges uniformly on any closed subinterval of its interval of convergence (all of ℝ), so the series for e^(x log x) converges uniformly for x in (0,1].
@@markusdemedeiros8513 One could just say that x log(x) is continuous on (0,1] and can be extended continuously to [0,1] since it converges to 0 at 0. The extended function is bounded by the extreme value theorem, and thus x log(x) is bounded on (0,1]
I may be misspelling things a bit
@@markusdemedeiros8513 thanks, I appreciate that summary.
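For anyone who prefers to see the uniform convergence rather than prove it, here is a small numerical sketch (plain Python; the grid size and truncation points are arbitrary choices of mine):

```python
import math

# Partial sums of the series for e^(x*ln x) = x^x, truncated after N terms.
def partial_sum(x, N):
    return sum((x * math.log(x)) ** n / math.factorial(n) for n in range(N + 1))

# Worst-case (sup-norm) error over a grid on (0, 1]; it shrinks uniformly in x
# because |x*ln x| <= 1/e on (0, 1], so the tail is bounded independently of x.
xs = [i / 1000 for i in range(1, 1001)]
for N in (2, 4, 6, 8, 10):
    worst = max(abs(x ** x - partial_sum(x, N)) for x in xs)
    print(N, worst)
```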
That was NOT the result I was expecting form this. Absolutely beautiful
Glad you enjoyed it!
It is. But I still hate integrals :) I never had many issues with other mathematics (up to a master's in EE), but integrals always turn into these crappy little puzzles that apparently I'm just too dumb to solve.
we need more integrals like this, this is amazing
Damn, I got stuck watching this video and the integral of e^(-x^2) in a loop, because at the end of each video the guy says "click on the video on the screen" and it's an infinite loop :D
You've fallen into my trap!!
Oh no… I actually just arrived at this video from a different video, but I could end up in the same loop as well
Next step: make sure that every sequence of video links eventually leads to this specific loop. Reminds me of the Collatz Conjecture… 🤔
This was the cutest introduction to a solution I have ever seen, in addition to the handsomeness of the one who introduced it. 😅🤭 Bravo!
Nice result, but now you should explain what the value of the infinite sum is 🛡️
It's maybe a bit late, but the value is roughly 1.2912859970626636
@@johannes8144 Thank you! Did you compute that analytically or numerically?
@@zebran4 It’s not possible to compute the value analytically as of this moment.
@@zebran4 By analytically you mean in terms of "non trivial" functions/expressions? If so it's very unlikely this can be expressed like that, just as a gut feeling
@@user_2793 Yes. By trivial expressions too.
As an engineering student, my first instinct was to use Euler's method of approximation, cause "fuck that work" LOL
stupid approximateurs >:(
It’s ancient, but it works
You know you're an engineer when using π=3 does not seem like an approximation
@@chungus478, and a mathematics or physics student if it does.
uncivilized imbecile!
I have my term exams in few days and watching this is satisfying ❤️
Same here, good luck
@@tamazimuqeria6496 good luck
Best of luck all!!
How was it?
How did he interchange the summation and integral signs at 2:30? Please someone help me 😢 (btw I am a class 11 student and JEE aspirant)
0:17 Well, we don't _have_ to. The power rule gives x·x^(x-1) = x^x, the exponential rule gives ln(x)·x^x, so the total derivative is the sum: x^x + ln(x)·x^x.
That works for x^x and x^(-x). But does this work for any derivative of f(x)^f(x)? Or only those cases?
@@qq3088 It generally works. It doesn't have to be exponentiation, and the functions don't need to be the same. It's a general property of differentiation that is used extensively. In other words, every derivative of a function with multiple instances of x can be realized as the sum of all "partial derivatives" with respect to each instance of x.
@@EpicMathTime l never knew this!
@@EpicMathTime "every derivative of a function with multiple instances of x can be realized as the sum of all partial derivatives with respect to each instance of x", damn that looks like a powerful statement. Do you know a proof for this?
@@dawnstudios7813 The simplest way to see this is to replace each instance of x with a separate variable (say u, v, etc.) and take the total derivative with respect to t. Then set u = v = ... = t. This collapses the total derivative to the special case of the single-variable derivative.
This idea underpins differentiation very intimately. You're already doing it when you take any derivative, we just don't phrase it that way.
For example, let's take the derivative of sin(x)cos(x) using the statement you just quoted.
I'll treat the first instance of x as a constant, making sin(x) a "coefficient", so that 'partial derivative' is -sin(x)².
Now I'll treat the second instance of x as constant, and likewise, that 'partial derivative' is cos(x)².
Hence, the derivative is the sum of the "partials": cos(x)² - sin(x)².
Although I phrased it in this different way, what we did there is precisely the product rule. In other words, the product rule itself is a specific instance of doing the quoted statement.
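If anyone wants a quick symbolic sanity check of this, here is a minimal sympy sketch (u and v are placeholder names I introduced for the separate instances of x):

```python
import sympy as sp

t, u, v = sp.symbols('t u v', positive=True)

# sin(x)cos(x): write each instance of x as its own variable,
# sum the partial derivatives, then collapse u = v = t.
g = sp.sin(u) * sp.cos(v)
partials = (sp.diff(g, u) + sp.diff(g, v)).subs({u: t, v: t})
print(sp.simplify(partials - sp.diff(sp.sin(t) * sp.cos(t), t)))  # 0

# Same idea for x^x: base and exponent as separate instances.
h = u ** v
partials = (sp.diff(h, u) + sp.diff(h, v)).subs({u: t, v: t})
print(sp.simplify(partials - sp.diff(t ** t, t)))  # 0
```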
Love your content! You can really feel your love for the math
Glad you enjoy it!
blew my mind. Never seen summation and integrals after each other.
Pretty cool right?
@@BriTheMathGuy yeah but it also feels intimidating for someone who still has to pass his Calc 2.
You can do it though!
Wow, this was much better than i expected! Truly beautiful!
I really admire the way you explain, not in a hurry
The actually important justification for interchanging the sum and the integral is brushed aside like it's nothing. This took away some of the beauty of it.
I find this so pretty. Almost like discrete sum (over all integers) of sinx/x = pi and integral (-inf to +inf) of sinx/x also equals pi. Amazing and yet baffling.
Glass pane works really well. If you can dim the lights over your hand it will be much better.
Wow, that was sum-thing else; thank you so much for sharing!
Glad you enjoyed it!
Friggin high school maths still giving me headache. Good job
I think what is amazing is that the integral of x^x within the same limits gives the same summation but with a (-1)^n, hence having alternating plus and minus signs. So the integral in this video outputs a greater value than the integral of x^x within the same limits, which makes sense, because x^(-x) is bigger than x^x on the interval (0, 1).
the pause at 5:50 was so relatable haha struggling to do simple differentiation after doing many things that are a lot more complicated
It's even crazier how fast it converges. For the first 7 values of n you literally have n digits of precision; after that, the rate of precision keeps getting higher.
Care to share an example? I am admittedly too lazy to figure out the value of the sum and how fast it gets to these values.
@@captainhd9741 use desmos and input sum for sum and int for integral
@@jackweslycamacho8982 I prefer Wolfram but good idea!
@@captainhd9741
n = 1: 1
n = 2: 1.25
n = 3: 1.28703703703704
n = 4: 1.29094328703704
n = 5: 1.29126328703704
n = 6: 1.29128472050754
n = 7: 1.29128593477322
n = 8: 1.29128599437787
n = 9: 1.29128599695904
n = 10: 1.29128599705904
n = 11: 1.29128599706255
n = 12: 1.29128599706266
n = 13: 1.29128599706266
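For anyone who would rather reproduce these partial sums than take them on faith, here is a minimal mpmath sketch (the precision and cutoff are my own choices):

```python
from mpmath import mp, mpf, power, quad

mp.dps = 20  # working precision in digits

# Numerical value of the integral of x^(-x) over (0, 1), for comparison
integral = quad(lambda x: power(x, -x), [0, 1])

# Partial sums of 1^(-1) + 2^(-2) + ... + n^(-n)
s = mpf(0)
for n in range(1, 14):
    s += power(n, -n)
    print(n, s, integral - s)  # the gap shrinks like the next term, (n+1)^-(n+1)
```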
I am genuinely getting addicted to your videos !
Glad you like them!
I am a university student in Korea. I have always been interested in math, and I happened to find your channel while looking for related videos while preparing for a math test. I think there is a lot of fun and informative content here. I hope your channel keeps doing well, and I will continue to visit often. Thank you!
Do you guys learn calculus in high school in Korea?
@@limagabriel7 Yes, we do, but calculus with two or more variables, for example, is only covered properly in college.
Why are you named Apple Boss
That answer is beautiful.
Uniform convergence is not sufficient to interchange the limit and the integral here, because the integration interval is not a closed segment (ln is not defined at 0).
Thank you for sharing this beauty. Keep shining brother
You bet!
Wolphram Alpha says the final sum is approximately:
1.2912859970626635404072825905956005414986193682745223173100024451369445387652344555588170411294297089849950709248154305484104874192848641975791635559479136964969741568780207997291779482730090256492305507209666381284670120536857459787030012778941292882535517702223833753193457492599677796483008495491110669649755010519757429116210970215616695328976892427890058093908147880940367993055895352006337161104650946386068088649986065310218534124791597373052710686824652246770336860469870234201965831431339687388172956893553685179852142066626416543806122456994096635604388523996938130448401015323385569895478992261465970681807533429122890910049951364103584723741679660994037428872280908239472403012423375069665874314768350298347009659693019807122059415474239188849548892043147840373896935928327449373018601817579524681909135596506205768427008907326547137233834847185623248044173423385652705113744822086069838116970644789631554803110868684680780701057034230000954776628299270222642661822130291609344850492556799951212817650810621807347685511270748919272166418829000073661836619726956875357964537813752368262924072016883803114377731170
I'm here to comment just to make your video more popular
Thanks so much!
Got a similar problem in a calc 2 exam; I was very confused and thought it was unsolvable. Still processing how to get a numerical value for the solution. Very nice video!
For an integral like that, you don't get a closed-form numerical value.
Really nice result - I assume there is no closed form for the sum, but I was a bit surprised that you never touched on that topic at the end.
There is, it equals sin(pi) / gamma(pi/2)
@@assasin1992m What is sine doing here? 🤔
@@assasin1992m makes me wonder if there is a complex extension for z^(-z) integral
Isn't sin(pi) 0?
@@ha14mu yes, but the limit toward pi in this expression converges to a non zero result
I love the fact that a video about calculus was interrupted by an ad that talks about partials (dentures).
😂
That's because calculus is a subject you can really sink your teeth into!
And if anyone is thinking "That joke really bites", I beat you to it.
Chew on that one!
Wow, this is the kind of rollercoaster I enjoyed during lockdown, thanks math man
Glad to hear it!
I've just finished my Advanced Higher Mathematics course... just re-watching some of these videos for some good memories.
Great job!
Uniform convergence isn't the reason you can do the important early swap of sum and integral. The hypotheses are: if we write u_n for the function inside the sum (here (-x ln x)^n / n!), then we can use the theorem under the conditions that sum(u_n) converges (I believe not even necessarily uniformly), each integral(u_n) converges, and sum(integral(|u_n|)) converges. Not a lot of this has to do with uniform convergence.
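As a sanity check on those conditions for this particular series, here is a small mpmath sketch (the function name is mine); since -x·ln(x) ≥ 0 on (0,1), |u_n| is just u_n here:

```python
from mpmath import mp, mpf, quad, log, factorial

mp.dps = 20

# u_n(x) = (-x*ln x)^n / n!, which is >= 0 on (0, 1), so |u_n| = u_n.
def integral_abs_un(n):
    return quad(lambda x: (-x * log(x)) ** n / factorial(n), [0, 1])

# Each integral comes out to (n+1)^(-(n+1)); their sum is finite (~1.2913),
# which is the summability condition that licenses swapping sum and integral.
total = mpf(0)
for n in range(12):
    term = integral_abs_un(n)
    total += term
    print(n, term, mpf(n + 1) ** (-(n + 1)))
print("sum of the integrals of |u_n|:", total)
```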
*We can keep going on exploring & doing maths, cuz it only demands three qualities of our mind:*
1. *Curiosity to know*
2. *Using only knowledge i.e. No belief system*
3. (most important) *Focused mind to dig deep into the question*
I'm in my sophomore year, so I'll only really understand this once I start calculus, but I still love your content. I've always been ahead of the current math subject at my school, so I think watching this will also help a bit more. For now I'm studying analytic geometry, which is easy and I like it, and I'll get to calculus some time soon.
Even if you know all of these properties, there is so much knowledge that goes into applying them in ways that are helpful. Can't imagine figuring this out!
Does the final infinite sum converge? Awesome integral btw!
Thanks! and yes it most certainly does! (around 1.29 or so)
@@BriTheMathGuy is there an exact identity for what it converges to, or did you just get this by approximation?
@@sophiophile there is no closed form for it sadly so all you can do is solve it numerically.
The good news is the convergence is extremely rapid. The first ten terms of the sum give you the value of the integral to about 3 parts in a trillion.
@@tBagley43 almost all this kind of stuff has no closed form
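The "3 parts in a trillion" figure is easy to verify; a quick mpmath sketch (precision and cutoff are my own choices):

```python
from mpmath import mp, power, quad

mp.dps = 30

integral = quad(lambda x: power(x, -x), [0, 1])
ten_terms = sum(power(n, -n) for n in range(1, 11))

# Relative error of the 10-term partial sum: roughly 2.7e-12,
# i.e. about 3 parts in a trillion, dominated by the next term 11^(-11).
print((integral - ten_terms) / integral)
```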
What I don't understand is how mathematicians make such amazing leaps, such as the various substitutions, to get to the answer.
It's a lot of trial and error, looking at past results and seeing if there are parallels, and a lot of luck :)
I tried thinking about this in a different way. I began by viewing the original (improper) integral as something I will call L (i.e., the limiting sum for the improper integral). I take log(L) and then move the log operation inside the integration; I doubt this obeys all the rules for logarithmic operations on (improper?) integrals. So now I am integrating the function -x log(x) dx over the same bounds and calling this log(L). The indefinite integral works out to (x^2)/4 - (1/2)(x^2) log(x). Evaluating this at the limits gives 1/4 (the limit of the second term at the lower bound can be evaluated using rules for indeterminate forms and comes out to 0 from the right). Anyway, the upshot is that log(L) = 1/4, which makes the original integral e^(1/4), or approximately 1.28, which is close to the result from the derivation in the video but not identical. Why is this even close? I know something I've done must be wrong, probably because the integration must invoke the complex log function in some way, at least at the lower bound of integration.
Something that surprised me more than the continuous sum being equal to the discrete sum is the bounds of those sums.
The continuous sum of x^(-x) from 0 to 1 equals the discrete sum of n^(-n) from 1 to infinity... *SAY WHAT?!?!?*
This channel is amazing !!!!!!
Well done! This is really amazzzing !
7:08 moment of satisfaction
That is a very remarkable and beautiful result.
An effective channel. Thank you
Glad you think so!
I honestly think I'm more impressed by how good you are at writing backwards. LOL! Good video
He’s not writing backwards it’s just mirrored lol
I'm impressed by how well he can write mirrored then /jk
approximately 1.29
Amazing video!!
Extraordinary! I didn't see it coming.
Love the way you speak and write.
Thanks very much and thanks for watching!
It's interesting how he uses just a SMALL PART of the BOARD to explain such complex problems, whereas our teacher needs two full boards.
Good explanation!
Glad you think so!
I freakin love calculus. I thought this was gonna be really scary at first.
I was gonna discretize the domain and calculate the area by numerical methods.
Omg the twist at the end is quite a shocker
Great video, cool result. Thanks for this.
Glad you liked it!
0:45 Can you explain this in more detail? I don't really understand this step.
d/dx (x^(-x)) = -x^(-x) * (1 + ln(x)) (write x^(-x) = e^(-x ln(x)) and differentiate; the plain power rule alone doesn't apply, because the exponent depends on x).
Going the other way, no antiderivative of x^(-x) is known in terms of elementary functions, and despite what is sometimes claimed, the exponential integral doesn't give one either: d/dx Ei(-ln(x)) = 1/(x^2 ln(x)), not x^(-x).
That's why the definite integral is evaluated through the series instead.
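If you want a CAS to confirm that nothing elementary comes out, here is a minimal sympy sketch (exact behaviour may vary by version):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# sympy finds no elementary antiderivative and hands the integral back
# unevaluated (this may take a moment while it tries its algorithms)
print(sp.integrate(x ** (-x), x))                    # Integral(x**(-x), x)

# The definite integral is still perfectly computable numerically
print(sp.Integral(x ** (-x), (x, 0, 1)).evalf(15))   # ~1.29128599706266
```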
And am I the only one who wants to know how the backside screen works?
Maybe more like a sophomore's nightmare to some, I'd imagine
😂
Math is so beautiful!!
I have a question
Isn't the final summation supposed to converge to something like π/(something)?
The sum is approximately 1.29129.
I think you might be referring to a similar sum:
n goes from 1 to infinity, 1/n^2.
That sum is equal to pi^2/6.
A continuous sum becomes a discrete sum. Totally wish you extended the video by 1 minute to really nail that in for the younger audience that may be casually watching this fantastic puzzle
th-cam.com/video/lHi53QereHA/w-d-xo.html
3:18
Actually, from here we can pull (-1)^n/n! outside the integral, and the remaining integral from 0 to 1, ∫(x ln x)^n dx, becomes a variant of the gamma function: -(-1/(n+1))^(n+1) · Γ(n+1).
Γ(n+1) is also n!, so the final result is:
sum from n = 0 to ∞ of [ {(-1)^n/n!} · n! · (-1) · (-1)^(n+1) · (1/(n+1))^(n+1) ]
The n! factors cancel out and the exponents of -1 add up:
... [ (-1)^(2n+2) · (1/(n+1))^(n+1) ]
Since the exponent of -1 is always even (we are summing over whole numbers n), that factor is always +1, so we can remove it:
= sum from n = 0 to ∞ of (1/(n+1))^(n+1)
Shifting the bounds of the summation by +1 and subtracting 1 from the n's, we get:
sum from n = 1 to ∞ of (1/n)^n
Since 1/n = n^(-1),
Answer = sum from n = 1 to ∞ of n^(-n)
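The gamma step is the only place where something could silently go wrong, so here is a quick numerical spot check of the identity ∫₀¹ (x ln x)^n dx = (-1)^n · n!/(n+1)^(n+1) (mpmath; the loop bound is my own choice):

```python
from mpmath import mp, mpf, quad, log, factorial

mp.dps = 20

# Check: integral_0^1 (x*ln x)^n dx == (-1)^n * n! / (n+1)^(n+1)
for n in range(8):
    lhs = quad(lambda x, n=n: (x * log(x)) ** n, [0, 1])
    rhs = (-1) ** n * factorial(n) / mpf(n + 1) ** (n + 1)
    print(n, lhs, rhs)   # the two columns agree to working precision
```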
what always fascinates me is that he wrote the whole thing backwards. like how do you even do that
6:15 Why did you replace n plus 1 by v? It's a constant, so does it really matter? I mean, sure, you've got the summation, but with respect to the integration it's just a constant? Maybe I'm not smart enough yet to understand this.
It's ≈ 1.291286 (≈ 430/333)
Are you writing in mirror-image, or reflecting the entire video afterwards?
I always felt (if not knew) that integration from 0 to 1 is sum from 1 to infinity
I presumed the result should have been a numerical value, right?
Try integrating it out to infinity. The integral converges, but this method will not work there.
Definitely not MORE sophoMORE's dreams out there, or is it?!
haha that would make my day if I had the dream of me having a dream in which I was dreaming that I was dreaming in my dream, and in that dream I was doing this integral 😵😵
Remark: One may also justify the exchange of integration and summation via the monotone convergence theorem, by considering the integral to be a Lebesgue integral.
We can easily solve it by taking the natural log and applying integration by parts.
How does the result... make sense? I may have gotten lost at the gamma function, but isn't the sum of n^(-n) just number + number + ...? So the derivative will be... 0?
I would be unable to do it by myself without guidance
But the whole video was a beautiful journey where I was smiling at each new trick
Just disappointed it didn't arrive at some expression in terms of the usual functions
You made it so simple :)
Glad you think so!
Just do the transform y = -x and then you solve in dy! we already know the answer!
Whole video like...
Question. Which letter is next after letter A in alphabet ?
Answer. Which letter is before C in alphabet ?
Could never work that out myself, but it's fun to look at.
Awesome vid! Good job!
Thanks for the visit!
I have no idea what is going on and I'm thinking of studying applied maths....
That was beautiful - and scary!
So what is the value??
Figure it out
I'm dying to know how you are able to write backwards. Is the video just mirrored afterwards?
A solution that doesn't require substitutions or knowing the gamma function is to integrate (-ln x)^n · x^n between 0 and 1 by parts n times to find that it equals n!/(n+1)^(n+1), and the final result comes naturally.
Well that’s just how you prove the gamma function evaluated at n+1 is the same as n!
I don't know anything about calculus (I only took precalc my senior year of high school) and this is scaring me and I have no idea what's going on
THE SIMPLEST OF THAT FUNCTION IS CROSSING OUT X AS X/X=1
Dude, I graduated with my engineering degree, why am I still watching math videos? Beautiful vid btw
Isn't a definite integral supposed to have a numerical value? It is not clear to me how it became an infinite sum. Is that result with the infinite sum convergent or divergent?
How do you write mirror-inverted with your left hand?
Video editing :)