🎓Become a Math Master With My Intro To Proofs Course! (FREE ON YouTube)
th-cam.com/video/3czgfHULZCs/w-d-xo.html
just switch over to physics and it becomes always true
Of course, also the sum of all naturals equals -1/12
This except unironically. I was taught last year that there are cases where they are not interchangeable, but that we will never consider them. I still have no idea what determines whether interchanging is allowed or not
@@Assault_Butter_Knife The general idea is that if it can happen in physics, then it can't be pathological. Usually, when you meet an infinite sum combined with an integral in physics, you can give a physical interpretation of what that infinite sum and that integral represent, and there should be no "natural" difference in exchanging the terms: you simply *assume* that an equation faithful enough to a certain physical process just won't give any problems of this kind.
This mathematical sloppiness ultimately led to tons of problems in quantum field theory.
@@CiuccioeCorraz exactly right. In fact you see that operator exchange all the time in PDEs when building the solution (first example that comes to mind), and applying physical properties kinda saves the day.
During the procedure there comes a time when you need to switch integral and sum, and you'd usually justify it with something like "well yeah, this function is a temperature so it must be bounded"... But as you correctly said, this kind of reasoning doesn't work everywhere -- I'm not even sure it works in all of classical mechanics, but that idk, I'm just an algebraist :p
Nooooooooooooooooooooooooooooooo
Summation: Integral, you can interchange with me anytime, anywhere!
Infinity: I am about to ruin this man's whole career
These are facts ^^
Lol, I was thinking about this for a while, but didn't really know much so couldn't resolve it. Glad to see you talking about it and explaining it so well!
Thanks for watching!
Thank you.
Easy on the eyes, the ears and the mind.
You are welcome!
Thanks for this video, keep up the great content.
Thanks and thanks for watching!
Why hello
The limit at 3:00 is actually 2^{-x} - lim_{n→∞} n 2^{-nx} = 2^{-x} - DiracDelta(x)/ln 2. Once the limit has been taken in the sense of distributions, the sum sign and the integral sign may be interchanged.
Hi Bri! This has been a great video for me. I have always been confused about this since I did a course on Lebesgue and Daniell integration many moons ago.
What other math facts do you take for granted?
meh
Hmm, the Jordan curve theorem ?
@@goodplacetostop2973 hii
The fact that the Cartesian product of any family of nonempty sets is nonempty.
Crossing out differentials in fractions
Of course they're equal, that's how we do Borel summation!😁
You used the absolute convergence theorem, which is fine, but you also could have used the uniform convergence of f_n. If so, the limit as n → ∞ of the sum of f_n does not depend on x, thus the interchange of limits is allowed, and since an integral is nothing else than a limit, you are allowed to do this. Also, uniform convergence implies absolute convergence. I think this is a more beautiful approach; great video, though!
Thanks! Very important subject to talk about. Will help me with my teaching.
I'm sharing the video with my Colleagues 🙂
Thanks for sharing!!
Excellent presentation! Wow!!
Thank you! Have a great day!
nice 👌 thanks 🙃
... so many convergence theorems
Welcome!
Discovered your channel and your fantastic math examples only few days ago, and I love them! Thank you!
Indeed 👍
Wow, this was sum very interesting information. I know that I have a long way to go in my studies, but you make learning so much fun. Thanks for sharing!
Glad it was helpful! Best of luck!
It was sum very interesting information indeed, I hope you can integrate it into your studies!
@@WindsorMason I see what you did there, and it was sigmagnificent!
See, I always would use "sum of integral" and "integral of sum" interchangeably, but that was only if the sum was convergent.
5:40 also called d'Alembert criterion
Wow! Awesome interesting math fact! Love this! The YouTube algorithm did a great job recommending this to me
Great explanation!
Glad you think so!
I love this kind of math
I liked the Emphasis And importance of the role the ratio of convergence plays. Thx
Thanks for watching!
U make such good math videos bringing out the depths and all the facts . thanks man
Glad you like them!
I had been wondering about this for a bit thank you for the great video
You bet! Thanks for watching!
This is so helpful! Thanks!
This is exactly what I'm seeing in college right now. Any tips?
I do wonder, for that integral you computed, what happens if you take the limit as the lower bound approaches 0 from the positive direction?
Both of them, for any arbitrary \epsilon > 0, yield 2^{-\epsilon}/\ln 2, so in the \epsilon -> 0 limit they yield the same value. But this is a further limit you're taking, right? So: 1. Limit on the sum to make it a series. 2. Limit on the upper bound of the integral for it to go to infinity. 3. Limit on the lower bound of the integral (\epsilon -> 0). Now, a mathematician will very well ask: is taking the limits 1 then 2 then 3 the same as 1 then 3 then 2? The same as 2 then 3 then 1? The same as 2 then 1 then 3? ... etc.
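A quick numeric check of this (assuming the video's example terms are f_n(x) = n·2^{-nx} - (n+1)·2^{-(n+1)x}, as reconstructed from the comments in this thread): for any lower bound eps > 0, summing the integrals and integrating the sum both give 2^{-eps}/ln 2, but at eps = 0 they split into 0 versus 1/ln 2.

```python
import math

LN2 = math.log(2)

# Assumed terms: f_n(x) = n*2^(-n*x) - (n+1)*2^(-(n+1)*x)
def term_integral(n, eps):
    # closed form of the integral of f_n over [eps, infinity)
    return (2 ** (-n * eps) - 2 ** (-(n + 1) * eps)) / LN2

def sum_of_integrals(eps, terms=10000):
    # integrate each term first, then sum
    return sum(term_integral(n, eps) for n in range(1, terms + 1))

def integral_of_sum(eps):
    # the pointwise sum telescopes to 2^(-x) for x > 0;
    # closed form of its integral over [eps, infinity)
    return 2 ** (-eps) / LN2

for eps in (0.1, 0.01, 0.0):
    print(eps, sum_of_integrals(eps), integral_of_sum(eps))
```

For every positive eps the two procedures match, so the whole discrepancy lives in the order in which the eps → 0 limit is taken relative to the others.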
You're lookin kinda jacked bro.
Nice Content.... Great inspiration....I really love mathematics....
Glad you enjoy it!
Wish somebody had taught me like this during college
3:51
POV: you had an exam at school where your teacher marked you wrong on a true/false question about whether 0*infinity is 0
And the next class they tell you it is correct, but your grade stays the same
😂😂😂😂
Excellent video! Theoretical and rigorous maths, so rare on YouTube...
Glad you think so!
Thanks a lot for the video. Can you do a proof? That would be very interesting, too.
(Just to make the context clear) I watched this video IMMEDIATELY after your video on 0 * ♾, and I think you have (intentionally or by accident) given the impression that 0 * ♾ = 0, by using the discrete summation.
(In case someone reading this comment gets confused, here's how)
The discrete summation says to start from n = 1 and go up all the way to n = ♾, and every time you do it, take a 0 and add it. What I mean is that for n = 1 you take a 0. Now you go to n = 2 and take another 0, another one for n = 3, and so on till n = ♾.
Now we know that a + a + a + a + ...... (b times) = a * b. Applying the same logic to the above scenario, the summation can be written as ♾ * 0 (or 0 * ♾), which was evaluated as 0 (at 3:48)
how can the 0 (being of measure 0 in the integral) have such wild effects on this? We're always told that one single point doesn't influence the value of the integral. I think here we may have an interesting case of 0 x infty, as the dx -> 0 around 0 but the sum diverges...
It's true that changing the value of the function at one point wouldn't change the value of the integral - but here, the problem is precisely that the function *isn't even defined* at x = 0: it works out to be the sum of -1 from n=1 to infinity, which of course diverges.
Thank you for helping me to solve riemann hypothesis
Happy to help!
This problem was also addressed in real analysis by Prof. Terence Tao
physicists: that sign can't stop me because i can't read
What if the lower bound of the integration is a and we take the limit as a approaches 0+? We would be in the region of convergence.
I came to ask the same thing!
Taking the limit as a approaches 0+ will give the correct result. But if a is exactly 0 you get different results as said in the video. This is to be expected if the function absolutely converges for any x>0 but not for 0, there's a discontinuity there!
@@Nickesponja yeah, but the integral over the open interval should be the same as the closed one. The single point has a Lebesgue measure of zero.
@@victorscarpes That's not really true. This is where uniform convergence matters.
is this because you're also interchanging the limit? and if you put the limit outside the function you would get a non 0 number?, Bri if I'm wrong, pls tell me if so and why so
EDIT: I did the work on your example and that is indeed correct, it wasn't the order of summation that mattered, it was the order of the limits that did.
Switching the limits is really the heart of the matter indeed! (We can think of the infinite sum as the limit of a finite sum)
Yeah, switching "sums" is really a red herring here; it's all about limits of functions. You have the sum inside the integral, but the limit of the sum is outside the integral. It's a question of whether you can pass the limit into the integral.
@@MK-13337 Correct. In fact, all of these theorems are a consequence of the Moore-Osgood theorem, which is the theorem that tells you when limits can be interchanged, and it ultimately requires uniform convergence.
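A minimal sketch of exactly that failure, using the spike g_n(x) = n·2^{-nx} (an assumed stand-in consistent with the terms discussed in this thread): every g_n integrates to 1/ln 2 over (0, ∞), yet the pointwise limit is 0 for each x > 0 — the convergence is not uniform near 0, so the limit cannot be passed inside the integral.

```python
import math

LN2 = math.log(2)

def g(n, x):
    # g_n(x) = n * 2^(-n*x): a spike that narrows toward x = 0 as n grows
    return n * 2 ** (-n * x)

def integral_g(n, b=60.0, steps=200000):
    # midpoint rule for the integral of g_n over (0, b); the exact value
    # is (1 - 2^(-n*b)) / ln 2, essentially 1/ln 2 for large b
    h = b / steps
    return sum(g(n, (i + 0.5) * h) for i in range(steps)) * h

for n in (1, 10, 100):
    print(n, integral_g(n))   # stays near 1/ln 2 ~ 1.4427 for every n
print(g(10 ** 6, 0.5))        # but pointwise, g_n(x) -> 0 for each x > 0
```

So the limit of the integrals is 1/ln 2 while the integral of the limit is 0 — the same mechanism behind the video's example.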
I've never thought that these two symbols could be together
Thanks for watching and have a great day!
You can always change the order of integration/summation, but it definitely will not help in all cases
great videos!!!!!
Thanks so much!
welcome
Why are you allowed to remove the 'n' from the expression n2^{-nx} when taking the limit of the partial sum at (2:55)?
I don't understand adding a bunch of zeros, because we will have infinite zeros, which is 0 × ∞, which is an indeterminate form
..beautiful !!
6:44 How did you get -1?
Actually, you explained it mathematically. I am in high school in India, and I actually hoped that you would give a more visual kind of proof, like what it actually means to perform a continuous sum of a discrete sum or the other way around... hoping you will take this up in a video or guide me on how to visualise it
I have a general question about derivatives: in physics, one often encounters the claim that neglecting second-order differentials (like 'dx * dy') is fine because they are small compared to the first-order terms. Thus, terms are 'pruned' at first order, which makes them easily applicable to integration. While I see the validity of this argument, is there a rigorous way to prove that this is mathematically sound?
Yes, in essence you work it through with epsilons and then observe that for sufficiently small epsilons, the second- and higher-order terms vanish sufficiently fast.
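A hypothetical numeric illustration of that epsilon argument, using the area A(x, y) = x·y: the exact increment splits into the first-order terms plus the product h·k, and the relative size of h·k dies off linearly as the steps shrink.

```python
# Increment of A(x, y) = x*y: (x+h)(y+k) - x*y = y*h + x*k + h*k.
# The ratio of the second-order piece h*k to the whole increment -> 0.
x, y = 3.0, 5.0
for h in (1e-1, 1e-3, 1e-5):
    k = 2 * h                              # let dy shrink in proportion to dx
    exact = (x + h) * (y + k) - x * y
    first_order = y * h + x * k
    ratio = (exact - first_order) / exact  # relative size of the h*k term
    print(h, ratio)
```

This is the computational shadow of the epsilon proof: the pruned terms are not just small, they are small *relative to* the terms kept, which is what justifies dropping them before integrating.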
@@amritlohia8240 👍 great. Is there a website I could see how this is done?
4:07 you have one discrete summation too few, unless you mean that your f_n is a discrete sum itself. I think these were badly made examples, when you could just have given an easy, intuitive video on this topic in under 3 minutes.
Another method I learned for when we can interchange summation and integral, apart from the uniform convergence method. Thanks
Excellent!
How can we be sure that by adding more and more blocks we get the final area? Why? Because the area between the rectangles and the curve is infinitesimally small. So what? There is an infinite amount of these small areas. How can we assume their sum is finite? OK, because they are very tiny... so what? 1/n is still very tiny, yet the sum diverges... we neglect a sum of very tiny areas, but there are a lot of cases that diverge
It's true for convergent or finite functions
Here's a hard one:
Is there any expression that's equal to an over expression but can be mathematically converted into it?
"Count the entire distance from my feet to my head and everything in between....." I see what you did there...
😅
New subscriber! Amazing concept understanding! I learned something new today!
Great to hear! Thanks!
What if we say lim(x->0+) instead of taking 1?
bro started from the real basics 💀💀
Very good
But how do we know that the sum of absolute values that we take as g(x) not only converges for all necessary x but is also integrable? Or is our sum being absolutely convergent for all necessary x sufficient to justify interchanging integration and summation?
What if instead of an integral and a sum we have two infinite summations? This is a case where I have no idea when I can interchange order of summation.
It is a similar theorem.
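A standard counterexample for the two-sums case (a textbook example, not from the video): put 1 on the diagonal and -1 just below it. Summing each row first gives 1; summing each column first gives 0. Fubini/Tonelli for double series fails here because the sum of absolute values diverges.

```python
def a(m, n):
    # 1 on the diagonal, -1 just below it, 0 elsewhere
    if m == n:
        return 1
    if m == n + 1:
        return -1
    return 0

M = 200
# sum over n first: row 1 sums to 1, every later row to 1 - 1 = 0
rows_first = sum(sum(a(m, n) for n in range(1, 2 * M + 1)) for m in range(1, M + 1))
# sum over m first: every column sums to 1 - 1 = 0
cols_first = sum(sum(a(m, n) for m in range(1, 2 * M + 1)) for n in range(1, M + 1))
print(rows_first, cols_first)  # the two iterated sums disagree
```

The inner sums are taken out to 2M so each one has already settled to its exact value; only the order of the two summations differs.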
Is there anything like integration... but for infinite multiplication... or in other words, can we perform any operation infinitely on the elements of a set which is closed under that operation?
I'm pretty sure there is, I think it's called "the product integral"
There is a multiplicative integral, and it is completely analogous to the integral in its definition. Also, it actually can be expressed in terms of the regular integral. The product integral of f is equal to exp of the integral of ln(f).
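A small sketch of that identity (the function and step count here are just illustration choices): approximate the product integral of f(x) = x over [1, 2] by a finite product Π f(x_i)^Δx and compare with exp of the integral of ln x, which works out to 4/e.

```python
import math

def product_integral(f, a, b, steps=20000):
    # finite product of f(x_i)^(Δx): the multiplicative analogue
    # of a Riemann sum, evaluated at midpoints
    h = (b - a) / steps
    prod = 1.0
    for i in range(steps):
        prod *= f(a + (i + 0.5) * h) ** h
    return prod

f = lambda x: x
approx = product_integral(f, 1.0, 2.0)
exact = math.exp(2 * math.log(2) - 1)   # exp(integral of ln x on [1,2]) = 4/e
print(approx, exact)
```

Taking logs turns each factor f(x_i)^Δx into Δx·ln f(x_i), i.e. a Riemann sum for the integral of ln f, which is exactly why the product integral equals exp(∫ ln f).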
I think this is related to the geometric derivative and integral
Very interesting
Glad you think so!
which level of calculus is good for game development and software development
Fantastic explanation, but I think the infinite sum of zeros is undetermined.
No, it is definitely 0. This is because the sequence of partial sums is the zero sequence, and the zero sequence converges to 0.
In the next video, we will examine the zero formula: \dfrac{ (2n-1) ^ { 2 } }{ 3 } +1 = 0
Isn't this one of the first theorems you do in calculus 2?
You are truly creative, my friend
Have a wonderful day!
@@BriTheMathGuy More than wonderful, my friend
It's not the ratio test but the d'Alembert criterion 😎
In physics everything commutes xD LOL
😎
The big question is how tall are you ?
Well, I am right now at uniform convergence
but your sum doesn't converge in the first place, so your example gives no insight into why the order matters
The real problem here is switching the order of limits
so, can you state it as a summary?
Absolute convergence of the sum implies switching is okay 👌
@@BriTheMathGuy and if there is no absolute convergence, does it imply switching is nor okay?
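To put a number on the summary with a case where the hypothesis does hold (an assumed example, not the one in the video): take f_n(x) = 2^{-nx} on [1, ∞). Everything is positive, the sum of the integrals converges absolutely, and both orders give exactly 1.

```python
import math

LN2 = math.log(2)

# term by term: the integral of 2^(-n*x) over [1, inf) is 2^(-n) / (n * ln 2)
sum_of_integrals = sum(2 ** (-n) / (n * LN2) for n in range(1, 200))

# summed first: sum_{n>=1} 2^(-n*x) = 1 / (2^x - 1); integrate on [1, inf)
def integral_of_sum(b=60.0, steps=200000):
    # midpoint rule; the tail beyond b decays like 2^(-x) and is negligible
    h = (b - 1.0) / steps
    return sum(1.0 / (2 ** (1.0 + (i + 0.5) * h) - 1.0) for i in range(steps)) * h

print(sum_of_integrals, integral_of_sum())  # both approach 1
```

When absolute convergence fails (as in the video's example), no such agreement is guaranteed; the theorem gives a sufficient condition, not a necessary one.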
i know, this actually
Glad to hear it!
@@BriTheMathGuy :)
3:52 yes but there is an infinite number of those zeros. So I'm not sure if we can take the result of adding an infinite number of zeroes as zero like we take it in the finite case. Care to explain?
Typically we define the sum of an infinite series to be the limit of the partial sums. Since each partial sum of a sequence of 0's is 0, the limit of those partial sums is also 0 (the limit of a constant is that constant). Thus we can conclude the overall infinite sum of 0s is 0.
@@BriTheMathGuy Thanks for replying. However, it leads to some paradoxical results consider the infinite nested radical √(0+√(0+√(0+√(0+...))...)). If we take the meaning of the value as the limit of partial sums as you say, the answer is 0.
However, if we let it equal to some 'x'. Then squaring both sides we get, 0 + x = x^2 => x =1. So which is it 0 or 1?
@@kabsantoor3251 x=0 is also a solution to that equation, and definitely the correct solution
@@Nickesponja Yes, but why not 1? I mean, why is 1 to be discarded if the solution yields it as a valid answer? It's not like we introduced it during the solution process.
@@Stefan-ls3pb Thanks for making it very clear! However, you say that we must first prove that limit exists. So how do we do that? Also " 1 does not fulfill epsilon -delta " begs the question. So that argument is kinda vacant.
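One way to make the "prove the limit exists" step concrete (a sketch, not a substitute for the epsilon-delta proof): the nested radical means the limit of its finite truncations, and every truncation of √(0+√(0+...)) is exactly 0, so the limit exists and equals 0.

```python
import math

def truncation(depth):
    # build the nested radical sqrt(0 + sqrt(0 + ...)) from the inside out
    v = 0.0
    for _ in range(depth):
        v = math.sqrt(0.0 + v)
    return v

print([truncation(d) for d in (1, 5, 50)])  # every truncation is 0.0
# x = 1 is also a fixed point of t -> sqrt(0 + t), but the truncations
# start at 0 and never leave it, so the limit definition picks 0.
```

The equation x² = 0 + x only says that *if* the expression has a value, it must be a fixed point; it cannot tell you which fixed point the truncations actually converge to.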
Amazing! So I have a question: how do you integrate e^(cos(x)) from 0 to pi? I hope my English works, because I am not a native speaker.
You can't do it in terms of elementary functions - you would have to use Bessel functions. Refer to math.stackexchange.com/questions/2468863/what-is-the-integral-of-e-cos-x.
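For the curious, the identity behind that answer is the standard integral representation of the modified Bessel function: the integral of e^(cos x) over [0, π] equals π·I₀(1), which you can check numerically against the power series I₀(1) = Σ_k (1/4)^k/(k!)².

```python
import math

# I_0(1) from its power series: sum over k of (1/4)^k / (k!)^2
i0 = sum(0.25 ** k / math.factorial(k) ** 2 for k in range(20))

# midpoint rule for the integral of e^(cos x) over [0, pi]
steps = 100000
h = math.pi / steps
integral = sum(math.exp(math.cos((i + 0.5) * h)) for i in range(steps)) * h

print(integral, math.pi * i0)  # both approximately 3.977
```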
@@amritlohia8240 thanks for your work 🤓
one question, why would you change the lower bound to 1 and not, say, 0+?
That would also work, yes. I guess he just chose 1 as an easy example
The discrepancy has to do with the different grouping of the terms in the infinite sum. The integration, it seems to me, is only a distraction.
I know I am the best mathematician if someone tests what I'm adding for its validity for me. I didn't fail functional analysis 586, I freaking gave up... so, like, auditing the course. Worse than failing. I couldn't understand the math lingo fast: upside-down A, circle dot, circle, ||a||, flipped epsilon + 1 million more symbols.
It seemed that there was a different convergence theorem for every other summation:
dominated, weak convergence, weak law of convergence... Chebyshev, Cauchy 1, Cauchy 2, Cauchy 3... I can't remember how many there are.
It's more complicated for me
I think what you stated is wrong. The dominating function g must also be integrable, so not only must the series of functions converge absolutely for every x, it must also be integrable. The problem at the end is not that the series doesn't converge absolutely at x=0, because the dominated convergence theorem is stated in terms of almost every point, so the value of the function g(x) = sum_{n=1}^inf abs(f_n(x)) at x=0 doesn't matter; the problem is that, although the series of functions you are working with is absolutely convergent on (0,+inf), it is not integrable.
I believe I stated g must be integrable :)
@@BriTheMathGuy But not when you stated the theorem in terms of the absolutely convergent series. And what I'm trying to say is that the problem at the end is not that the series of functions doesn't converge absolutely when x=0; it is that the function g(x) = sum_{n=1}^infty abs(f_n(x)) is not integrable (the integral on (0, infty) diverges). If it were integrable, then we could use it as a bound in the dominated convergence theorem, because we could assign any value to g at x=0 and it would still be a dominating function and integrable. So, it could happen in another example that some series of functions is not absolutely convergent at x=0, but we could still apply the dominated convergence theorem in the terms of absolute convergence you stated. Have I explained it better this time? *By f_n(x) I mean the sequence of functions you take as an example in the video. (And sorry about my English :/)
This makes sense. I wasn't convinced by the argument in the video
Nnno.... an infinite sum of things approaching 0 is (usually) not 0
As a physicist I'm triggered at the pairing between that thumbnail and that title. /s.
Fair enough 😄
cool!
Glad you thought so!
Abroad 😂 it is Calc 1, 2, 3, but in India all of them are in one year, just some 3 chapters
How's it going?
Good :)
Ok now sum from epsilon with epsilon arbitrarily close to 0
ok, but how tall are you?
Amazing
Thanks! Have a great day!
There is a mistake at 3:33: you take integrals of the parts of each term, not of each term.
🔥🔥
Nice good
Thanks!
Just one thing: shouldn't it be 2 to the power minus x greater than minus 1 and less than positive 1, and so the last answer -x less than zero?
👏🏽👏🏽👏🏽👏🏽👏🏽
Have a wonderful day!
5:46 Twitter user
(I'm a Calc student and these videos are so much more interesting than when I didn't know all the Calc stuff)
👍👍👍👍👍👍
What is really going on is that the explanation that integrals are continuous sums is just inaccurate. While the explanation does build an intuition that allows learners to move forward in satisfaction without asking too many questions, when you look at the definition, it becomes evident how inaccurate this explanation actually is. This also happens with series. We tend to describe series as just being summation, but with infinitely many quantities, and again, this is inaccurate. While sums do play a role in how series and integrals are defined, there is an ingredient more essential than their being sums, and that is that they are limits. In fact, if a learner is already familiar with limits, then I think a healthier intuition to build is to explain integrals not as continuous sums and series not as infinite sums, but rather, as limits of continuous functions and of sequences, respectively.
Once you understand that series and integrals are limits, things such as the Riemann rearrangement theorem and the fact that integrals and series cannot always be interchanged become rather intuitive. This is because we no longer think of it in terms of sums, but in terms of limits. And there is no obvious reason to expect an interchange of different limit operations to leave a result unchanged.
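The Riemann rearrangement theorem mentioned above can even be watched in action: greedily reorder the alternating harmonic series 1 - 1/2 + 1/3 - ... (which in its usual order sums to ln 2) to hit any target you like. The greedy scheme below is a sketch of the standard proof idea.

```python
import math

def rearrange_alternating_harmonic(target, terms=100000):
    # Greedy rearrangement of 1 - 1/2 + 1/3 - 1/4 + ...:
    # take unused positive terms while below target, negatives while above.
    pos, neg = 1, 2          # next odd (positive) and even (negative) denominators
    total = 0.0
    for _ in range(terms):
        if total <= target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total

print(rearrange_alternating_harmonic(0.2))   # approximately 0.2
print(rearrange_alternating_harmonic(2.0))   # same terms, reordered: approx. 2.0
print(math.log(2))                           # the usual order gives ln 2
```

Viewed as limits, this is unsurprising: reordering the terms changes the sequence of partial sums, hence potentially its limit, which is exactly the point of the comment above.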