I like the way you shorten the pronunciation of the natural log to what sounds like "lawn"
Makes things so much more convenient!
my professor pronounces it the same way
At 3:05, why do we need to add the 1 at the start of the interval for x_k = 1 + k·delta x?
th-cam.com/video/aBAdEzlvsuE/w-d-xo.html
The region of integration is [1, N], so the sample points need to start from 1; that's why it's x_k = 1 + k·delta x rather than just k·delta x.
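In case it helps, here's a sketch of the Riemann-sum setup in my own notation (I'm assuming M subintervals of [1, N] with right endpoints, which I think is what the video does):

\[
\Delta x = \frac{N-1}{M}, \qquad x_k = 1 + k\,\Delta x, \qquad
\int_1^N \ln x \, dx \approx \sum_{k=1}^{M} \ln(x_k)\,\Delta x .
\]

Taking M = N - 1 gives \Delta x = 1 and x_k = 1 + k, so the sum becomes \sum_{k=1}^{N-1} \ln(1+k) = \ln(N!).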
I was expecting you to do it in terms of the gamma function.
Well! A joy for non-math people.
3:50, I did not understand this part
and I got N·ln(N) - N + 1 in the last integral.
He dropped the 1 because N is very large.
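For anyone stuck at 3:50, the integral is just integration by parts:

\[
\int_1^N \ln x \, dx = \bigl[\,x\ln x - x\,\bigr]_1^N = N\ln N - N + 1 \;\approx\; N\ln N - N \quad \text{for large } N.
\]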
The sum of ln(1+k) from k=1 is ln(2) + ln(3) + ln(4) + ..., while the sum of ln(k) from k=1 is ln(1) + ln(2) + ln(3) + ...; but since ln(1) = 0, that is also just ln(2) + ln(3) + ..., so the sum of ln(k) equals the sum of ln(1+k).
@@duckymomo7935 Don't teach people if you don't understand; see @Starxel's explanation. It's not about N being very large or not, it's that ln(1) = 0.
I'm having trouble justifying the choice of M and N to be approximately equal... it seems like the summation couldn't converge to the area if you force the delta x's to be as big as 1 (don't you need delta x -> 0 as M -> infinity?).
Always a nice thing to see a new video from you! Any differential geometry coming soon?
Thank you! Hopefully in the next couple of weeks or so!
WOW I derived this just 3 days ago! Let's see how you do it in this video
Edit:
Ahh, I did it with the saddle point method, where you Taylor expand n·ln(x) - x around its maximum x_0 = n and substitute that into n! (written as a gamma function). Then it is easy going!
Oh boy, looks like you just hinted at the alternative derivation I'll be providing later. It'll involve the Laplace method, like you mentioned!
I would like to see this proof of yours, eule franz.
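A rough sketch of that saddle-point (Laplace) route, assuming the standard Gamma-function integral for n!:

\[
n! = \int_0^\infty x^{n} e^{-x}\,dx = \int_0^\infty e^{\,n\ln x - x}\,dx,
\qquad
n\ln x - x \;\approx\; n\ln n - n - \frac{(x-n)^2}{2n} \quad \text{near } x = n,
\]
so
\[
n! \;\approx\; e^{\,n\ln n - n}\int_{-\infty}^{\infty} e^{-t^{2}/(2n)}\,dt = \sqrt{2\pi n}\,\Bigl(\frac{n}{e}\Bigr)^{n}.
\]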
Hi,
At 3:55, for k = 1, how is the summation of ln(1+k) ~ the summation of ln(k)?
Won't it be ln(1+1), i.e. ln(2)?
I think x_k should be 1 + (k - 1)·delta x, since x_1 = 1, and it's not that the summation of ln(1+k) ~ the summation of ln(k).
The sums are equal because ln(1) = 0, so ln(2) is the first non-zero term in either case.
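Written out, the re-indexing at that step is just:

\[
\sum_{k=1}^{N-1}\ln(1+k) = \ln 2 + \ln 3 + \cdots + \ln N = \sum_{k=1}^{N}\ln k = \ln(N!),
\]

since the k = 1 term of the sum on the right is ln(1) = 0.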
What did you use to write out the derivation? Which application?
no square root of 2 pi n?
I'm a little bit confused with the integration of ln x from 1 to N. It is N ln N - N +1, isn't it? I'm assuming that you've dropped the 1 because you're considering a huge N.
Yes you're right, but I dropped the 1 because N is large. I'll make that addition to the description; thank you for mentioning!
Me: *clicks on the first video I see
This dude: “Greetings students a-“
Me: Wait a minute
Awesome video! Thank you!
You should do the asymptotic formula for n! as well, now that is epic.
Thanks for this presentation, but could you tell me which program you used for it?
Thanks!
Nice!
You've got to bound the error though
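If you want explicit bounds, Robbins' refinement of Stirling (quoting from memory, so double-check) sandwiches n! as

\[
\sqrt{2\pi n}\,\Bigl(\frac{n}{e}\Bigr)^{n} e^{1/(12n+1)} \;<\; n! \;<\; \sqrt{2\pi n}\,\Bigl(\frac{n}{e}\Bigr)^{n} e^{1/(12n)},
\]

so the relative error of the full formula is O(1/n).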
nice handwriting
awesome video
Very annoying that Stirling is not converging. In fact, the larger the n, the larger the absolute error.
It's the opposite of what you said
@@jinjunliu2401 The error quotient goes to zero, yes. The absolute difference does not; it grows.
@@ShroomLab Oooh like that, sorry I misunderstood your first comment
It converges very quickly if you use the full version of the approximation. n! ~ sqrt(2*pi*n) * (n/e)^n
The ratio of these functions approaches 1 at frightening speed!
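A quick numerical check (just a small Python script I threw together, not from the video) shows both effects at once: the ratio tends to 1 quickly, while the absolute difference keeps growing:

import math

# Compare n! with the full Stirling formula sqrt(2*pi*n) * (n/e)^n:
# the ratio tends to 1 fast, while the absolute difference keeps growing.
for n in (5, 10, 20, 50, 100):
    exact = math.factorial(n)
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    print(f"n={n:4d}  ratio={exact / stirling:.8f}  abs_error={exact - stirling:.3e}")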
What is Stirling's formula used for?
It can be used to approximate very large factorials, like he says in the video.
In some situations, such as power series, factorials are quite difficult to work with; Stirling's approximation can be used to turn factorials into much more cooperative exponential terms (namely, some number times (n/e)^n).
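A small worked example of that (using the fuller form with the sqrt(2*pi*n) factor): the central binomial coefficient simplifies to

\[
\binom{2n}{n} = \frac{(2n)!}{(n!)^{2}}
\;\approx\; \frac{\sqrt{4\pi n}\,(2n/e)^{2n}}{\bigl(\sqrt{2\pi n}\,(n/e)^{n}\bigr)^{2}}
= \frac{4^{n}}{\sqrt{\pi n}},
\]

which is far easier to handle than the factorials themselves.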
You can do this via the gamma function, but this method is much faster.
This is not Stirling's formula; check the wiki.
The wiki page gives this as a simple version of the formula. It would take quite a bit longer than five minutes to reach the full formula haha