Awesome video, Sir! Thank you! I implemented the polynomial technique a few weeks ago using SymPy (Symbolic Python). In fact, I have two versions of it: the first uses Vandermonde polynomials to interpolate the stencil points, and the other uses Lagrange polynomials. The Lagrange version is twice as fast as the Vandermonde version, since it's just for loops and completely skips the matrix inversion. I have my functions print the coefficients as rational numbers instead of floating-point numbers, though floats are also an option.
Still, your lecture is very insightful, sir, especially 1) how this polynomial technique can be done completely numerically, without the h's; and 2) how there are constant multipliers for the higher-order derivative approximations, such as 2 for second derivatives. I actually discovered this myself when I substituted the Taylor series expansions into the finite-difference formulas and found that, on the right-hand side of the equation, the coefficient of the derivative term I was solving for is non-unity, e.g. 1/2, 1/6, etc. So I have SymPy extract that coefficient and divide both sides of the equation by it 😁. From your lecture, though, I now realize those multipliers can be obtained directly from factorials, taking into account the desired derivative order.
Again, thank you very much, sir. Great, insightful lectures as usual!
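For anyone curious, here is a minimal sketch of the two versions (the function names are my own, and the returned coefficients are for unit spacing, so they still need to be divided by h^d):

```python
import sympy as sp

def fd_coeffs_vandermonde(stencil, d):
    """Coefficients for the d-th derivative on the given stencil offsets,
    by solving the Vandermonde (moment-matching) system."""
    n = len(stencil)
    M = sp.Matrix(n, n, lambda k, j: sp.Integer(stencil[j])**k)
    b = sp.Matrix(n, 1, lambda k, _: sp.factorial(d) if k == d else 0)
    return M.solve(b)  # exact rationals, never floats

def fd_coeffs_lagrange(stencil, d):
    """Same coefficients via the d-th derivative of each Lagrange basis
    polynomial at x = 0 -- just loops, no matrix inversion."""
    x = sp.symbols('x')
    return [sp.diff(sp.prod((x - sk) / (sj - sk)
                            for sk in stencil if sk != sj), x, d).subs(x, 0)
            for sj in stencil]

print(fd_coeffs_vandermonde([-1, 0, 1], 2).T)  # Matrix([[1, -2, 1]])
print(fd_coeffs_lagrange([-1, 0, 1], 2))       # [1, -2, 1]
```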
This is great to hear! Sounds like you are doing awesome stuff!
To get the finite-difference approximations of the derivatives, we can multiply by a diagonal matrix of the factorials 1! to i!; then, if the differential equation is linear, we multiply by a vector of the coefficients of the function and its derivatives.
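If I am reading this right, a rough SymPy sketch of that idea (my own construction, so the details are a guess): invert the Vandermonde matrix of the stencil to map samples to polynomial coefficients, scale row i by i! to get the i-th derivative stencil, then collapse the rows with the ODE's coefficient vector:

```python
import sympy as sp

stencil = [-1, 0, 1]
n = len(stencil)
V = sp.Matrix(n, n, lambda j, k: sp.Integer(stencil[j])**k)  # V[j, k] = s_j**k

# diag(0!, 1!, ..., (n-1)!) times V^-1: row i maps the samples to an
# approximation of f^(i)(0) (per unit spacing; row i still carries 1/h^i).
D = sp.diag(*[sp.factorial(i) for i in range(n)]) * V.inv()
print(D)  # row 2 is [1, -2, 1], the familiar second-derivative stencil

# For a linear ODE, e.g. f'' + 3*f' + 2*f = 0, one row vector of the
# equation's coefficients collapses D into a single discrete operator:
a = sp.Matrix([[2, 3, 1]])  # coefficients of f, f', f''
print(a * D)
```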
Is there a reason why the sum of the finite-difference coefficients equals 0?
Great observation. I think this is the case only for uniform and symmetric finite differences. If the terms from which the finite differences are calculated are not uniformly spaced or not symmetric about the point where the derivative is being estimated, this is no longer the case.
Why is it the case for uniform and symmetric finite differences? I don't know! I will have to think about it.
@@empossible1577 Ah yeah, it's not necessarily always true for non-uniformly spaced points. My intuition is that it probably has something to do with symmetry and uniform spacing, since the coefficients depend only on the location of x.
If you play around with this calculator I found, we still get a sum of 0 even with a non-uniform distribution, which is interesting:
web.media.mit.edu/~crtaylor/calculator.html
At any rate, thanks for your video. It was helpful for my computational fluid dynamics coursework.
Edit: I checked my math again and I think I made a mistake. I'm actually having trouble finding a case where the sum is non-zero! Linear algebra and hard proofs aren't my thing, but I would like to see a pure maths guy prove/disprove this.
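For what it's worth, a quick symbolic check (my own, using the same Vandermonde construction sketched above) keeps returning a zero sum for random non-uniform stencils too. One possible explanation: the k = 0 row of the moment-matching system reads Σⱼ cⱼ = 0!·δ(0, d) = 0 whenever the derivative order d ≥ 1, so the coefficients must annihilate constant functions regardless of the spacing.

```python
import random
import sympy as sp

# Random non-uniform stencils, random derivative order d >= 1: the
# coefficient sum comes out exactly 0 every time, since the k = 0
# equation of the system already forces sum(c) = 0 whenever d >= 1.
for trial in range(5):
    pts = sorted(random.sample(range(-10, 11), 5))  # distinct, non-uniform
    d = random.randint(1, 4)
    n = len(pts)
    M = sp.Matrix(n, n, lambda k, j: sp.Integer(pts[j])**k)
    b = sp.Matrix(n, 1, lambda k, _: sp.factorial(d) if k == d else 0)
    print(pts, d, sum(M.solve(b)))  # the sum prints as 0
```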
Is there a reference for using the same method on a biased mesh?
Essentially, how does one build a more general transfer matrix than the 1/h matrix?
The method is the same. I should add an example to illustrate this. That is now on my to-do list!
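In the meantime, here is a guess at what such an example might look like (the same moment-matching solve as above, with the stencil offsets taken as the actual signed distances on the biased mesh):

```python
import sympy as sp

h = sp.symbols('h', positive=True)
offsets = [-h, 0, 2*h]  # biased stencil: near neighbor at -h, far one at +2h
d = 1                   # first derivative
n = len(offsets)
M = sp.Matrix(n, n, lambda k, j: offsets[j]**k)
b = sp.Matrix(n, 1, lambda k, _: sp.factorial(d) if k == d else 0)
c = M.solve(b)
print(sp.simplify(c.T))  # Matrix([[-2/(3*h), 1/(2*h), 1/(6*h)]])
```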