The n-dimensional integral does limit to 1:
Because we are integrating over a probability mass of one, we can think of this as an expectation. In particular, define Y = x_1 * x_2 * ... * x_n, where the x_i are iid uniform on [0,1], and think of Y as a random variable; the integral is then E(Y^Y). As n goes to infinity, Y converges in probability to 0. Because f(x) = x^x is continuous on [0,1] (taking f(0) = 1) and bounded, convergence in probability gives E(Y^Y) -> f(0) = 1.
To show Y converges in probability to 0, it's sufficient to see that Y is positive and that E(Y) converges to 0 (Markov's inequality). By independence, E(Y) = E(x_1) * E(x_2) * ... * E(x_n) = (1/2)^n, which limits to 0.
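A minimal Monte Carlo sanity check of this argument (just a sketch in Python, assuming numpy; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_y_to_the_y(n, samples=200_000):
    # Y = x_1 * ... * x_n with x_i iid Uniform(0,1); estimate E(Y^Y)
    y = rng.random((samples, n)).prod(axis=1)
    return (y ** y).mean()

for n in (1, 2, 5, 10, 20):
    print(n, mean_y_to_the_y(n))
# n = 1 gives ~0.7834 (the single integral); the estimates then climb toward 1
```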
pretty cool, thx!
wow that's a very cool trick. Never would have expected stochastics here 😂
After staring at the thumbnail, I decided the single integral was larger. The solution was a pleasant surprise.
Surprising! I assumed the xy one was smaller, since you're multiplying a number in (0,1) by another number in (0,1) which shrinks the product
Since x^x is larger than x on (0,1): while the base is getting smaller, the smaller exponent is making the value larger. So the two forces are pulling in opposite directions.
This would be clearer if you had used a dummy variable v, and then set:
x = v
y = u / v
and transformed from (x,y) to (u,v).
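If it helps, here is a tiny symbolic check of the Jacobian for that substitution (a sketch, assuming sympy is available):

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)
x, y = v, u / v  # the suggested substitution: x = v, y = u/v (so u = xy)
J = sp.Matrix([[x.diff(u), x.diff(v)],
               [y.diff(u), y.diff(v)]])
print(J.det())  # -1/v, so |det| = 1/v and dx dy = (1/v) du dv
```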
For the n-goes-to-infinity case, I guess the exponent goes to zero, as it's a product of infinitely many numbers less than one, which means we're just going to find the volume of the unit n-cube, which is obviously one.
The base also goes to 0 though, so it's a 0^0 form. I believe you're right that the limit is 1 in this case, though.
But you have infinitely many integrals up front. Doesn't that make this argument fall apart?
That's actually what I was thinking!
Great observation!
1:11 This was charitable: "now that we've got that maybe recalled ..."
It was new to me.
Another proof that the limit is 1.
It seems easier to work with the logarithm of the integrand, and since we expect the values to be close to 1 most of the time, a crude bound is enough. Write
f_n(x_1, ..., x_n) = x_1 x_2 ... x_n (log(x_1) + ... + log(x_n)),
so that (x_1 ... x_n)^(x_1 ... x_n) = exp(f_n). Since
exp(x) >= 1 + x,
we get
\int exp(f_n) >= \int 1 + \int f_n = 1 - \int (-f_n),
where each \int runs over the n-dimensional hypercube [0,1]^n.
On the left side we have our integral from the video. On the right side we already have 1, and we subtract the integral of a positive function. Let's show it is small:
\int (-f_n) = \int -x_1 x_2 ... x_n (log(x_1) + ... + log(x_n)) = \sum_{i=1}^n \int -log(x_i) x_1 ... x_n
Each integral in the sum is the same, so after renaming variables we can write the sum as one term:
... = n \int -log(x_1) x_1 x_2 ... x_n
Now -x_1 log(x_1) is positive and bounded by a constant (1/e, to be precise), and every other factor x_i integrates to 1/2, so even without name-dropping the Hölder inequality:
\int exp(f_n) >= 1 - (n/e) 2^{-(n-1)} -> 1.
Since the integrand is also at most 1 (because t log(t) <= 0 on (0,1]), the integral is squeezed to 1.
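For what it's worth, the subtracted term can be computed exactly: \int_0^1 -u log(u) du = 1/4, which sits below the crude 1/e bound. A quick check (a sketch, assuming numpy and scipy):

```python
import numpy as np
from scipy.integrate import quad

m = quad(lambda u: -u * np.log(u), 0, 1)[0]  # = 1/4, below the 1/e bound
for n in (2, 5, 10, 20):
    print(n,
          1 - (n / np.e) * 2.0 ** -(n - 1),  # the crude lower bound above
          1 - n * m * 2.0 ** -(n - 1))       # the exact value of 1 - \int(-f_n)
# both corrections vanish geometrically, so the integral is squeezed to 1
```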
No need for a double change of variables. Perform the single change of variable u = xy (for fixed x) to obtain
\int_{x=0}^1 (1/x) \int_{u=0}^x u^u du dx,
then perform the integration by parts with dv = dx/x.
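A quick numerical check that the forms agree (a sketch, assuming scipy; note 0^0 evaluates to 1 in floating point, matching the convention here):

```python
import numpy as np
from scipy.integrate import quad, dblquad

single = quad(lambda x: x**x, 0, 1)[0]
double = dblquad(lambda y, x: (x * y) ** (x * y), 0, 1, 0, 1)[0]
# parts: with v = ln(x) the boundary term vanishes, leaving -int_0^1 x^x ln(x) dx,
# which equals the single integral since int_0^1 x^x (1 + ln(x)) dx = [x^x]_0^1 = 0
parts = quad(lambda x: -(x**x) * np.log(x), 0, 1)[0]
print(single, double, parts)  # all three ≈ 0.78343
```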
But u^u is not monotonic on [0,1], so you cannot do such a substitution. You have to split the interval into [0, 1/e] and [1/e, 1] and integrate separately.
Just came here to say this. I'm glad I'm not going insane here, because I was looking at that and thinking "there is no way that's correct".
Monotonicity is not necessary for a change of variables. Look at the proof of the theorem, which uses the fundamental theorem of calculus.
Ok, let's split it then.
Let (1/e)^(1/e) = C.
And then, substituting t = u^u (so dt = (1 + ln u) u^u du), the result is
\int_0^{1/e} (1 + ln u) u^u du + \int_{1/e}^1 (1 + ln u) u^u du = \int_1^C dt + \int_C^1 dt = 0
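Numerically this checks out too (a sketch, assuming scipy): the two pieces come out to C - 1 and 1 - C and cancel.

```python
import numpy as np
from scipy.integrate import quad

f = lambda u: (1 + np.log(u)) * u**u
C = (1 / np.e) ** (1 / np.e)
left = quad(f, 0, 1 / np.e)[0]   # ≈ C - 1 (u^u falls from 1 to C)
right = quad(f, 1 / np.e, 1)[0]  # ≈ 1 - C (u^u climbs from C back to 1)
print(left, right, left + right)
```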
Actually, you can, as long as the t-variable integrand is unambiguous, which, in this case, it is. The general substitution theorem works in such cases, no problem.
You might be wondering how every integral isn't zero, then, if you can always make a substitution whose endpoints equal each other. You _can_ make this substitution, but in general you wouldn't be able to invert the integrand unambiguously in terms of the new variable, so that's what actually causes the problem, not the limits themselves.
Great explanation.
In general, I guess the integral is equal to \frac{1}{(n-1)!} \int_0^1 u^u (-\ln{u})^{n-1} du (the kernel (-\ln u)^{n-1}/(n-1)! is the density of a product of n independent uniforms), which goes to 1 as n goes to infinity.
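That closed form is easy to test numerically (a sketch, assuming scipy; for large n the kernel spikes near u = 0, so the substitution u = e^{-t} from a comment further down is numerically safer):

```python
from math import factorial, log
from scipy.integrate import quad

def J(n):
    # (1/(n-1)!) * \int_0^1 u^u (-ln u)^(n-1) du
    return quad(lambda u: u**u * (-log(u)) ** (n - 1) / factorial(n - 1),
                0, 1, limit=200)[0]

for n in (1, 2, 3, 5, 10):
    print(n, J(n))
# n = 1 and n = 2 both give ~0.7834 (the video's identity); then the values climb toward 1
```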
12:00
I can't quite show directly that it would indeed be greater than the original integral for n > 2, but here are my steps:
Change variables to
x_1 = u_1/(u_2 * ... * u_n)
x_2 = u_2, ..., x_n = u_n;
the Jacobian is then 1/(u_2 * ... * u_n).
The integral changes in this way:
J_n = \int_{[0,1]^n} (x_1 * ... * x_n)^{(x_1 * ... * x_n)} d^n x = \int_S u_1^{u_1} / (u_2 * ... * u_n) d^n u
A bit of thinking about parametrization so that x_1 is the "last" variable to integrate leads to these limits:
\int_S (...) = \int_0^1 du_1 u_1^{u_1} \int_{u_1}^1 du_2/u_2 \int_{u_1/u_2}^1 du_3/u_3 ... \int_{u_1/(u_2 * ... * u_{n-1})}^1 du_n/u_n
Changing variables once more (but in u_1 it's just relabeling) to ln(u_i) = y_i (pretty straightforward change)
\int_0^1 du_1 u_1^{u_1} \int_{y_1}^0 dy_2 \int_{y_1 - y_2}^0 dy_3 ... \int_{y_1 - y_2 - ... - y_{n-1}}^0 dy_n
One can see that this is the volume of something (something like a simplex, but not quite; mostly a pyramid?). Let's change the sign of the variables and define:
V_n(r) = \int_0^r dx_1 \int_0^{r - x_1} dx_2 ... \int_0^{r - x_1 - ... - x_{n-1}} dx_n
And we see that the original integral is:
J_n = \int_{[0,1]^n} (x_1 * ... * x_n)^{(x_1 * ... * x_n)} d^n x = \int_0^1 x^x V_{n-1}(-ln(x)) dx
It's easy to spot a recursive relation for the V's:
V_n(r) = \int_0^r V_{n-1}(r - u) du = \int_0^r V_{n-1}(u) du; V_1(r) = r
From this it's pretty easy to see that V_n(r) = r^n/n!
Then I don't really know how to proceed. These V_n(r)'s are not increasing in n if r is fixed, but the V_n(-ln x) look like an approximation of a delta function (they integrate to 1 over [0,1] and localize to a spike at x = 0). So in a sense J_n -> lim_{x->0} x^x = 1 as n -> inf.
Why there's a feeling that they are increasing: they "localize" the rise of the graph of x^x near x = 0. That would be pretty obvious if we had a limit of rectangular bumps for the delta function, but it may not be true for V_n(-ln x).
And I guess to actually calculate the limit explicitly one can try the method of steepest descent (after the change of variables u = -ln(x)) on
J_n = \int_0^{+inf} e^{-u(1 + e^{-u})} u^{n-1}/(n-1)! du
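Before attempting steepest descent, one can at least evaluate this form numerically (a sketch, assuming scipy; the exponent is assembled in log space to avoid overflow in t^{n-1}/(n-1)!):

```python
from math import exp, lgamma, log
import numpy as np
from scipy.integrate import quad

def J(n):
    # J_n = \int_0^inf exp(-t(1 + e^{-t})) t^{n-1}/(n-1)! dt, via t = -ln(x)
    f = lambda t: exp(-t * (1 + exp(-t)) + (n - 1) * log(t) - lgamma(n))
    return quad(f, 0, np.inf, limit=200)[0]

for n in (2, 10, 50, 100):
    print(n, J(n))
# ~0.7834, then rapidly approaching 1, consistent with the delta-function picture
```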
\int_0^1 x dx = 1/2, while \int_0^1 \int_0^1 xy dx dy = 1/4, and 1/2 > 1/4. So the single integral \int_0^1 x^x dx is larger.
According to Desmos, they're both equal.
My guess is that the x^x will be larger
Have not watched it yet but ... thinking a quarter to the quarter is greater than a (quarter to the quarter) all squared. But will it survive Michael's testing, analysis, and computational skills?
Start the video!
Which is larger: length, area, or volume? Hahaha
The volume equals the area ... like one is the derivative of the other
Anyway, another nice video, thanks 😊
BTW, the integral of x^x from 0 to 1 is a famous result called the Sophomore's Dream: en.wikipedia.org/wiki/Sophomore%27s_dream
So this result, that the double integral equals the single integral, should be called the Junior's Dream?!
What about this question, dear?
The indefinite integral of x^x.
Someone gave it to me to solve and I could not solve it.
If you solve it, it would be your ...
It is not an elementary function
@methatis3013 Was it proven?
@ThanksGodsYouAlive Pretty sure it is.
The sum from n = 1 to infinity of (-1)^(n-1)/n^n.
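(That alternating series is the value of the definite integral \int_0^1 x^x dx, i.e. the Sophomore's dream mentioned above. A two-line check, assuming scipy:)

```python
from scipy.integrate import quad

series = sum((-1) ** (n - 1) / n**n for n in range(1, 20))
print(series, quad(lambda x: x**x, 0, 1)[0])  # both ≈ 0.78343
```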
@minhnguyen1338 How is a constant a solution to an indefinite integral?
Funny 👕
Which is larger? Who knows? And... who cares?
It is a good question!
My answer is some do care, some do not care and some are just oblivious to the question, the answer and any intrigue attached to the topic.
And there is nothing wrong with that at all - it is perfectly fine and dandy.
Who cares about math questions on a math channel? So unexpected!