Go to LEM.MA/LA for videos, exercises, and to ask us questions directly.
Dear Prof Grinfeld, after this lecture I felt that you were "pulling a fast one on us" when it came to the decomposition of x²+x. When dealing with vectors, out of all the possible inner products we chose the dot product based on geometric arguments (i.e. the cos(α) did what we needed it to do). On the other hand, when it came to polynomials you just presented one possible inner product, and hence we obtained "orthogonal" polynomials, but these were explicitly tied to the inner product you chose, ∫_{-1}^{+1} p(x)q(x) dx. Here is where I felt a little short-changed: could you comment on other sets of orthogonal polynomials that one could get using a different inner product, and then explain how mathematicians choose among the different orthogonal bases? Thank you, George
You are hitting the nail right on the head, George!
First, your initial point: that the geometric inner product was "natural" while the polynomial inner product is "arbitrary". That's exactly right! We saw how great the natural dot product was and then, by extracting its three governing properties (commutativity, distributivity, and positive definiteness), generalized it to arbitrary inner products. Yes, the inner product I used in this example is quite arbitrary, as are all of them, by definition. Similarly, most of the bases I used in the examples earlier in the course are arbitrary, again by definition. It is the specific problem that dictates the choice of bases (earlier) and inner products (now).
When you watch the video that explains Gaussian quadrature, you will see that this particular inner product is natural for that problem. If the limits of integration changed, the inner product would change accordingly. If we were dealing with functions on the unit disc, then ∫_0^1 r·p(r)·q(r) dr would be more natural. Chebyshev polynomials make a different choice for other reasons. And so on.
Please let me know if this is helpful.
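For anyone who wants to see this concretely, here is a minimal sketch (assuming SciPy is available; the polynomials are the standard Legendre polynomials) checking orthogonality under the inner product ⟨p, q⟩ = ∫_{-1}^{+1} p(x)q(x) dx discussed above:

```python
# Minimal sketch: the inner product <p, q> = integral of p(x)*q(x) over [-1, 1].
from scipy.integrate import quad

def ip(p, q):
    return quad(lambda x: p(x) * q(x), -1, 1)[0]

P1 = lambda x: x                      # first few Legendre polynomials
P2 = lambda x: (3 * x**2 - 1) / 2
P3 = lambda x: (5 * x**3 - 3 * x) / 2

print(ip(P1, P2), ip(P1, P3), ip(P2, P3))   # all ~0: pairwise orthogonal
print(ip(lambda x: x**2, lambda x: x**4))   # 2/7: the monomials x^2 and x^4 are not orthogonal
```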
The single greatest thing I learned from mathematics is knowing when I don't fully understand something (i.e. most of the time). I'm glad my question was not completely off the mark, and I will now try to gain a more profound insight into how one can use different inner products to best adapt to the problem at hand. I will obviously start with your explanation of Gaussian quadrature. Большое спасибо (thank you very much)!
На здоровье! (You're welcome!)
I have 3 questions.
1. I can see why the monomials are a terrible basis in this inner product space, but is there an inner product space where the monomials do form an orthogonal basis? It would probably be a useful inner product for studying Taylor series and analytic functions.
2. What is the span of the set of all Legendre polynomials? Is it the set of all analytic functions just like the monomials which build Taylor series?
3. I wouldn't think the Gram-Schmidt process could change the span of the basis vectors, but is it possible when you have an infinite-dimensional vector space?
All excellent questions that warrant separate discussions.
1. Yes. In any space, take any basis b1, b2, b3, and *define* the inner product to be such that the basis is orthonormal. This is a valid definition and it determines a unique inner product. For example, what is the inner product of u = u1·b1 + u2·b2 + u3·b3 and v = v1·b1 + v2·b2 + v3·b3? (See the short sketch after #3.)
2. You always need to be careful when discussing spans in infinite-dimensional spaces with infinite linear combinations. But I think in this case it is safe to say that the span is the same as for the monomials, since any monomial can be expressed in terms of a finite number of Legendre polynomials (the sketch after #3 decomposes x^2 this way).
3. This question is essentially a generalization of #2, so I would give the same answer for the same reason.
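A short sketch of answers #1 and #2 (assuming NumPy is available; the coordinate values are made up for illustration): declaring a basis orthonormal makes the inner product the dot product of coordinate vectors, and NumPy's poly2leg shows a monomial being rewritten as a finite combination of Legendre polynomials.

```python
import numpy as np
from numpy.polynomial import legendre

# 1. If b1, b2, b3 are declared orthonormal, then for u = u1*b1 + u2*b2 + u3*b3 and
#    v = v1*b1 + v2*b2 + v3*b3 the induced inner product is <u, v> = u1*v1 + u2*v2 + u3*v3.
u_coords = np.array([1.0, 2.0, 3.0])    # hypothetical coordinates of u
v_coords = np.array([4.0, -1.0, 0.5])   # hypothetical coordinates of v
print(np.dot(u_coords, v_coords))       # 3.5, the inner product induced by that declaration

# 2. Any monomial is a finite combination of Legendre polynomials; poly2leg converts
#    power-basis coefficients (constant, x, x^2, ...) to Legendre coefficients.
print(legendre.poly2leg([0, 0, 1]))     # x^2 = (1/3)*P0 + (2/3)*P2  ->  [0.333..., 0, 0.666...]
```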
Amazing work! I really got the intuition behind numerical analysis thanks to your lectures, but I have one question: why do we limit the concept of orthogonality to the interval [-1, 1]?
Around the 3:30 mark, when you give the "magnitude of error" argument to justify why one basis is better than the other, I could not help but think about the concept of continuity.
It seems that for the first basis, where a small error sends [0, 2] to [2.5, 0], the "function" that maps a measurement error to the vector's representation in the basis would be "less continuous" than that of the second, orthogonal basis, since for the first basis a slight change in the input causes vast changes in the output.
I'm also tempted to say that the first "function" would not be "continuous" at all, since small changes can swap two zero components in the output representation.
To my intuition, continuity in this context should keep the zero components of the vectors where they are, not swap them as in the example.
I'm not sure if "continuity" is the correct terminology to communicate what I'm trying to say, but this notion just struck me while you were explaining it!
Thanks for your videos, they are awesome!
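If it helps to quantify this intuition, here is a small sketch (the numbers are hypothetical, not the ones from the video, and NumPy is assumed) of the sensitivity being described: the same tiny perturbation of a vector barely moves its coordinates in an orthogonal basis but moves them violently in a nearly parallel one.

```python
import numpy as np

v  = np.array([1.0, 1.0])
dv = np.array([0.0, 0.01])                                     # a small perturbation of v

nearly_parallel = np.column_stack(([1.0, 0.0], [1.0, 0.01]))   # basis vectors almost aligned
orthogonal      = np.column_stack(([1.0, 0.0], [0.0, 1.0]))    # orthogonal basis vectors

for B in (nearly_parallel, orthogonal):
    c0 = np.linalg.solve(B, v)        # coordinates of v in the basis B
    c1 = np.linalg.solve(B, v + dv)   # coordinates after the perturbation
    print(c1 - c0)                    # [-1, 1] for the nearly parallel basis, [0, 0.01] for the orthogonal one
```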
This was so insightful! If I ever construct basis functions (Legendre or something else), this will be an additional reason to perform a Gram-Schmidt process 👏🏽 Funny how I only have a high school degree but I feel like I'm learning sooo much because of educators like you, on YouTube 😭🙏🏽🎊
Thanks, that means a lot!
Never thought of the approximation perspective! Thanks
Dear Prof Grinfeld, this was an amazing insight for me in understanding why orthogonal matrices are well-conditioned! Thanks.
One quick question, if you can help: why did you say that you should have said "x^7 and x^9" instead of "x^7 and x^8"? Just because x^7 and x^9 are very similar also in the interval (-1, 0), or for some deeper reason?
I believe this was simply because, when considered over the larger range of -1 to +1 (instead of 0 to +1 as in his hand-drawn diagram), x^7 and x^8 diverge drastically over the negative x values (x^7 goes down and x^8 goes up) :)
@@samwhite4284 thanks!
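A numerical check backs this up (a sketch assuming SciPy, with the cosine of the angle computed using the ∫_{-1}^{+1} inner product from the video): on [-1, 1] the pair x^7, x^8 turns out to be orthogonal, while x^7 and x^9 are nearly parallel.

```python
import numpy as np
from scipy.integrate import quad

def ip(f, g):
    # inner product <f, g> = integral of f(x)*g(x) over [-1, 1]
    return quad(lambda x: f(x) * g(x), -1, 1)[0]

def cos_angle(f, g):
    return ip(f, g) / np.sqrt(ip(f, f) * ip(g, g))

x7, x8, x9 = (lambda x: x**7), (lambda x: x**8), (lambda x: x**9)

print(cos_angle(x7, x8))   # ~0: odd times even integrand, so x^7 and x^8 are orthogonal on [-1, 1]
print(cos_angle(x7, x9))   # ~0.993: x^7 and x^9 are nearly parallel in this inner product
```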
With an orthogonal basis, it is easier to find the coefficients of linear combinations.
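To make that concrete, here is a short sketch (assuming SymPy is available) that recovers the coefficients of x² + x, the decomposition mentioned in this thread, in the Legendre basis; orthogonality is what lets each coefficient come from a single independent integral.

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 + x
P0, P1, P2 = sp.Integer(1), x, (3*x**2 - 1)/2    # first three Legendre polynomials

def coeff(f, P):
    # With an orthogonal basis, each coefficient is <f, P> / <P, P>, independently of the others.
    return sp.integrate(f * P, (x, -1, 1)) / sp.integrate(P * P, (x, -1, 1))

print([coeff(f, P) for P in (P0, P1, P2)])   # [1/3, 1, 2/3], i.e. x^2 + x = (1/3)P0 + P1 + (2/3)P2
```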
What troubles me is that even though {1, x, x^2, ...} is a terrible basis, we still have to specify Legendre polynomials in terms of that basis. So, how do we know we can avoid some computer precision problems with Legendre polynomials if we're going to run into precision problems defining them in the first place?
Hi Alex, that's a good point. There does seem to be a logical flaw there somewhere. I can't quite put my finger on the reason why, but I think that Legendre polynomials are "clean". Perhaps it's because we did the decomposition symbolically rather than in the context of limited precision.
@@MathTheBeautiful Thank you for your reply! Also, as long as I'm thanking you for things, I should thank you for your Gaussian Quadrature videos. I'm working on a project for my master's thesis, and I have a 3D integral that I need to calculate repeatedly, so I need it to be fast. I figured that learning how Gaussian Quadrature works would at least be a first step in figuring how to choose my methods, and your videos did the trick. So, thanks!
Wouldn't a computer program store the polynomial as [c0, c1, c2, ..., cn], where cn is the coefficient of the nth Legendre polynomial, rather than storing the coefficients for each power?
@@matthewgraham790 Sure, you can store and manipulate the polynomial that way, but what I meant was when you want to evaluate the polynomial at some specific value. I'm finding that Python/NumPy/SciPy has a lot of functionality that makes it easy to deal with many different flavors of orthogonal polynomials, but I suspect that underneath, when you ask it to evaluate a polynomial, it falls back to 1, x, x^2, ... and probably uses something like Horner's method.
@@alexcwagner Isn't the only precision problem with 1, x, x^2, ... in the representation of polynomials in a given basis? It only comes up when trying to represent a polynomial in algebraic form as a linear combination of other polynomials in algebraic form. At no point does the polynomial need to be evaluated at any given x, and once the representation in a given basis has been found, the precision problem ceases to exist.
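For what it's worth, here is a small sketch (assuming NumPy) of evaluating a Legendre series without converting back to 1, x, x², ...: numpy.polynomial.legendre.legval evaluates c0·P0 + c1·P1 + ... directly through a three-term recurrence (Clenshaw's algorithm, per the NumPy documentation), so Horner's method on the power basis is not needed.

```python
import numpy as np
from numpy.polynomial import legendre

coeffs = [1/3, 1.0, 2/3]                                     # x^2 + x expressed in Legendre coefficients
print(legendre.legval(0.5, coeffs))                          # 0.75, i.e. (0.5)^2 + 0.5
print(legendre.legval(np.array([-1.0, 0.0, 1.0]), coeffs))   # [0. 0. 2.]
```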
Are two functions said to be orthogonal if their points of intersection lie in the interval over which the orthogonality condition is defined?
Actually, no. The answer is in the playlist bit.ly/InnerProducts
Oh okay, yeah, there are many types of orthogonal polynomials. It just depends on your inner product definition.
Exactly!
great video! thank you!
Do you have a png of the image in your final note available somewhere for download? Does the scaling you used imply that the ones in this chart are orthonormal?
Yes, I'll provide the PNG. They are not orthonormal: they are scaled so that p_n(1) = 1
Thanks!
Thank you for this Great lecture.
Stable and linear: under small perturbations, small errors turn into linear functions of those small errors. "First-order analysis", in physics speak.
2:50
Amazing! Many thanks!
Yeah, I thought it was pretty good!
The graphical representation of the polynomial functions has nothing to do with the vector space that they create. So IMHO your argument at 4:45 is moot. Actually, the set {1, x, x^2} is the standard basis of the three-dimensional polynomial vector space and as such also orthonormal. I guess that makes it the perfect basis?
Orthonormal with respect to what inner product? B is certainly not orthonormal with respect to the inner product discussed in the video.
Looks like you are right. I admit I haven't done the calculus; however, I have read before that the monomial set is the standard basis for the vector space of polynomials. I found two references:
"By definition, the standard basis is a sequence of orthogonal unit vectors. In other words, it is an ordered and orthonormal basis.. There is a standard basis also for the ring of polynomials in n indeterminates over a field, namely the monomials."
en.wikipedia.org/wiki/Standard_basis
"In P2, where P2 is the set of all polynomials of degree at most 2, {1, x, x^2} is the standard basis."
en.wikipedia.org/wiki/Basis_(linear_algebra)
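A quick check of the reply above (assuming SymPy is available): with respect to the inner product used in the video, even 1 and x² fail to be orthogonal, so "standard basis" and "orthonormal basis" only coincide once a coordinate-wise inner product is also chosen.

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(1 * x**2, (x, -1, 1)))   # 2/3, not 0: 1 and x^2 are not orthogonal under this inner product
print(sp.integrate(x * x, (x, -1, 1)))      # 2/3, not 1: x is not even a unit vector here
```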
Ahhh yes, Scientific Computing...where it all comes together.
Professor! This is really important.
What is that joke?
Unbelievable!
What a clickbait title!
I appreciate the Trump joke.