Numerical Differentiation: Second Derivatives and Differentiating Data

  • Published 19 Sep 2024

Comments • 35

  • @lgl_137noname6
    @lgl_137noname6 1 year ago +16

    First Homework @ 2:22
    Second Homework @ 3:55
    Third Homework @ 6:37

    • @corey1222
      @corey1222 1 year ago +2

      Fourth Homework @ 29:52

    • @rizkamilandgamilenio9806
      @rizkamilandgamilenio9806 1 year ago

      Do you know where the solution for the first homework is?

  • @jamesrav
    @jamesrav 1 year ago +7

    Better lectures than my professors in college (eons ago), and for free! Admittedly, textbooks are good for this type of stuff, since it's pretty straightforward math, but it's nice to hear it explained out loud.

  • @clarenceroopnarine6180
    @clarenceroopnarine6180 1 year ago +2

    I appreciate your genuine effort to make these lectures as clear and understandable as possible. They are excellent! Well done!

  • @thomasjefferson6225
    @thomasjefferson6225 11 months ago

    Hot damn. I watched this video because I didn't understand my lecture. Thanks Steve, you helped a lot.
    I went to look at this week's voluntary assignment, since in Europe we don't get graded homework.
    The second question, after doing the central approximation for the 1st and 2nd derivative, is multiplying delta out by 2!!!!!!!!!!!!!!!
    Hilarious!
    Thanks a million Steve, you're helping this adult learner more than you can imagine.

  • @smallwang
    @smallwang 1 year ago +3

    Reminder about the paper~

  • @byronreams7307
    @byronreams7307 1 year ago

    Around 33:00, Steve talks about the error at the endpoints. Specifically, the error for the derivative of sin(x) was small there because the leading error term of the one-sided schemes is proportional to the second derivative, -sin(x), which is zero at 0 and pi. To reinforce that, you can change the original function f from sin(x) to cos(x), and later change the 'true derivative' plot to np.sin(-x)...minus because the derivative of cos(x) is -sin(x), which is the same as sin(-x). It clearly illustrates that the backward and forward differences used at the endpoints are not as accurate: the computed derivative points at each end deviate significantly from the true derivative.
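
The endpoint experiment described above can be sketched in a few lines; this is a minimal version, with the grid size and variable names being my own choices:

```python
import numpy as np

# Differentiate f(x) = cos(x): central differences in the interior,
# one-sided (forward/backward) differences at the two endpoints.
n = 50
x = np.linspace(0, np.pi, n)
dx = x[1] - x[0]
f = np.cos(x)

dfdx = np.zeros(n)
dfdx[1:-1] = (f[2:] - f[:-2]) / (2 * dx)  # central,  O(dx^2)
dfdx[0] = (f[1] - f[0]) / dx              # forward,  O(dx)
dfdx[-1] = (f[-1] - f[-2]) / dx           # backward, O(dx)

# True derivative of cos(x) is -sin(x), i.e. sin(-x).
err = np.abs(dfdx - np.sin(-x))
print(err[0], err[1:-1].max(), err[-1])
```

The two endpoint errors come out well over ten times larger than the worst interior error, which is exactly the deviation visible in the plot.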

  • @joaopedrorocha4790
    @joaopedrorocha4790 12 days ago

    Have some experience with the telescope thing ... beyond the discrete timing of the pictures themselves, I would add the intervals due to clouds coming in, due to turbulence in the sky, due to instruments going nuts and you troubleshooting them, due to you falling asleep, due to you realizing you've fallen asleep and going to get some coffee, you going to the bathroom, instrument re-calibration pauses, and so on ... hehehe
    It's great to know that the schemes work without modification for unevenly spaced points!!!

  • @woowooNeedsFaith
    @woowooNeedsFaith 1 year ago +4

    Links for 25:33, please? I also wonder why most of the videos in this playlist are marked as unlisted. Those videos hardly get views; apparently unlisted videos won't be recommended.

  • @ash9788
    @ash9788 1 year ago +1

    25:25
    The papers you wanted to put the link to here.

    • @Eigensteve
      @Eigensteve 1 year ago

      Thanks for the reminder!! www.pnas.org/doi/10.1073/pnas.1814058116

  • @klave8511
    @klave8511 1 year ago +1

    Is the forward or backward difference error really an error, or is it just a time-shifted (nearly) exact value? In other words, will a small time shift reduce the error over the whole data set?
    Measured data (Y axis) will have discrete (integer) values, which puts a limit on the required error. Computed data (e.g. FEA) can obviously use floats, so simulations will be more concerned with the theoretical error.

    • @naturallyinterested7569
      @naturallyinterested7569 1 year ago

      No, this really is error. Here we are just talking about any function of one variable: it could be x, y, z, w, g, q, etc., and also t. The error is in df/dx, so it is (if we see error naively as a "spread" of values) a "spread" of the derivative, so the real derivative would be "somewhere close" to our calculated value.
      Now, in the case of a function of time (where t is time), let's say f(t) is a position; then this error would be in the velocity (df/dt), which would lead to a spread in the future position (which depends on the current velocity we just calculated with error), so it's uncertainty. The future velocity/position we get from a simulation might be off to a certain degree depending on this error, so an error of higher order (as in O(dt^n)) is better, since increasing our time resolution can "fix" it and we can have great confidence in our result.
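
The order-of-accuracy point above is easy to check numerically: halving the step should shrink an O(dt) error by about 2x and an O(dt^2) error by about 4x. A small sketch, where the function and evaluation point are my own toy choices:

```python
import numpy as np

def forward(f, t, dt):
    # Forward difference, first-order accurate: error ~ O(dt)
    return (f(t + dt) - f(t)) / dt

def central(f, t, dt):
    # Central difference, second-order accurate: error ~ O(dt^2)
    return (f(t + dt) - f(t - dt)) / (2 * dt)

f, t0, true = np.exp, 1.0, np.exp(1.0)  # exp is its own derivative
e_fwd = [abs(forward(f, t0, dt) - true) for dt in (0.1, 0.05)]
e_cen = [abs(central(f, t0, dt) - true) for dt in (0.1, 0.05)]
print(e_fwd[0] / e_fwd[1])  # close to 2 (first order)
print(e_cen[0] / e_cen[1])  # close to 4 (second order)
```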

  • @milos_radovanovic
    @milos_radovanovic 1 year ago

    Could you analyze in a video the error behavior of first-order numerical differentiation techniques for analytic functions, compared with the "Complex Step Differentiation" showcased on MathWorks Blogs?

  • @mohammedhassan5571
    @mohammedhassan5571 1 year ago

    Excellent lecture.
    Excuse me, I have a question: why have you not explained in any video how to solve non-linear systems of PDEs? I want to learn about them and model some complex phenomena of heat and mass transfer. I really want to know about the famous solution algorithms SIMPLE, SIMPLER, Coupled, and PISO.

    • @rushabhyeshwante
      @rushabhyeshwante 1 year ago

      For learning SIMPLE there's a YouTube channel named Fluid Mechanics 101 by Aidan Wimshurst.

  • @alyssonrpg
    @alyssonrpg 1 year ago

    Have you written a book (or do you have one in mind to recommend) that covers this subject, as a complementary source? :)

    • @enisten
      @enisten 1 year ago +2

      He has a book called "Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control", but according to the contents you can check out on Amazon, it doesn't specifically cover this subject. It covers more advanced subjects.

    • @MrHaggyy
      @MrHaggyy 1 year ago +2

      He has not, but you might wanna pick up a book about numerical methods or numerical analysis if you want the deep dive into the math. Springer and Hugendubble are two publishers with books named that way.
      Numerical Python is a book that takes a more applied approach in Python. But once you can code this stuff up in Python, you can code it up in basically any language you want.

  • @jessicapriscilacerqueiraba3493
    @jessicapriscilacerqueiraba3493 7 months ago

    thanks

  • @elwood.downey
    @elwood.downey 1 year ago

    What if the data samples are not taken at regular intervals?

    • @MrHaggyy
      @MrHaggyy 1 year ago

      XD shoot the one collecting the data. Jokes aside, there are reasons to store at non-fixed timepoints. You get a different timestamp delta t for forward or backward differences, but they work the same. If you compute any central difference or higher-order difference, you get different delta t's between timestamps in each calculation. So instead of 2x dt you will get sums of dt's, like dt_-1 (from -1 to 0) + dt_1 (from 0 to 1).
      You might wanna take a line or a square (something simple) and do it by hand the first time, alongside the Taylor expansion, before you code it up.
      Keep in mind that your error will vary over the dataset, so you might want to assume the worst error for all datapoints, or calculate the error alongside the derivative.
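
The uneven-spacing scheme described above, with the sum of the two local spacings in the denominator, can be sketched like this (the stretched grid and test function are my own toy choices):

```python
import numpy as np

# Unevenly spaced grid on [0, pi]; f = sin(x), true derivative cos(x).
x = np.pi * np.linspace(0.0, 1.0, 40)**2
f = np.sin(x)

dfdx = np.empty_like(f)
# Central difference with denominator dt_-1 + dt_1 = x[i+1] - x[i-1]
dfdx[1:-1] = (f[2:] - f[:-2]) / (x[2:] - x[:-2])
dfdx[0] = (f[1] - f[0]) / (x[1] - x[0])       # forward at the left edge
dfdx[-1] = (f[-1] - f[-2]) / (x[-1] - x[-2])  # backward at the right edge

# np.gradient also accepts a coordinate array and uses second-order
# weights on non-uniform grids, a handy cross-check.
check = np.gradient(f, x)
print(np.max(np.abs(dfdx - np.cos(x))))
```

As the comment warns, the error varies across the dataset: it grows with the local spacing and with the imbalance between the two neighboring gaps.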

  • @NoamWhy
    @NoamWhy 1 year ago +1

    Call me lazy, but I just take a polynomial regression of N neighboring points, and the coefficients of this polynomial give me the first, second, third, ... derivatives of the function at the origin. Done! - You're welcome 🙂

    • @aarontoderash6028
      @aarontoderash6028 1 year ago

      Thank you so much for this comment.

    • @NoamWhy
      @NoamWhy 1 year ago

      @@aarontoderash6028 Any time!
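
The polynomial-regression trick above can be sketched with np.polyfit; the window size, degree, and test function are my own illustrative choices:

```python
import numpy as np

# Fit a degree-4 polynomial to 11 samples centered on the point of
# interest, then read derivatives off the fitted polynomial at 0.
dx = 0.05
x = np.arange(-5, 6) * dx  # 11 points centered at the origin
f = np.sin(x)

coeffs = np.polyfit(x, f, deg=4)  # least-squares fit, highest power first
p = np.poly1d(coeffs)
d1 = p.deriv(1)(0.0)  # first derivative at the center, ~cos(0) = 1
d2 = p.deriv(2)(0.0)  # second derivative at the center, ~-sin(0) = 0
print(d1, d2)
```

This is essentially what a Savitzky-Golay filter does at every point of a signal; scipy.signal.savgol_filter exposes it directly, including a deriv argument.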

  • @chensong254
    @chensong254 1 year ago

    At 32:12, I think the reason why there is a larger error in the middle is that there is a bug. The central difference scheme should actually be dfdx[i] = (f[i + 1] - 2 * f[i] + f[i - 1]) / (x[i + 1] - x[i - 1])

    • @woowooNeedsFaith
      @woowooNeedsFaith 1 year ago +2

      I think you are confusing it with the 2nd-derivative central difference formula at 10:43. (I had a similar false memory/confusion.) The code example at 26:27 is for the first-derivative central difference (as the code title says). And the comment at 32:08 originates from the previous lecture (Numerical Differentiation with Finite Difference Derivatives) in this playlist.

    • @chensong254
      @chensong254 1 year ago

      @@woowooNeedsFaith Thank you for the clarification!

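For reference, the two central-difference formulas being mixed up in this thread, side by side on a toy grid of my own:

```python
import numpy as np

x = np.linspace(0, np.pi, 201)
dx = x[1] - x[0]
f = np.sin(x)

# First-derivative central difference: (f[i+1] - f[i-1]) / (2*dx)
dfdx = (f[2:] - f[:-2]) / (2 * dx)

# Second-derivative central difference: (f[i+1] - 2*f[i] + f[i-1]) / dx**2
d2fdx2 = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2

# Both are O(dx^2): compare against cos(x) and -sin(x) on the interior.
print(np.max(np.abs(dfdx - np.cos(x[1:-1]))))
print(np.max(np.abs(d2fdx2 + np.sin(x[1:-1]))))
```
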
  • @marco_burderi
    @marco_burderi 11 months ago

    Thank you so much for the amazing work that you do. It's just fantastic.

    • @Eigensteve
      @Eigensteve 10 months ago

      Glad you enjoy it!

  • @milos_radovanovic
    @milos_radovanovic 1 year ago

    Am I permabanned from commenting on your videos? YT keeps deleting my comments and questions.

  • @arturoeugster7228
    @arturoeugster7228 1 year ago

    A far better way to differentiate numerically, evading the huge errors due to high-frequency data inaccuracy, is to use an FFT, ignore the higher terms, multiply by the indices, and do an IFFT: a reliable, fast way to avoid noise amplification. (FFT is the fast Fourier transform.)
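
A minimal sketch of the FFT approach described above, assuming a periodic signal on a uniform grid (the cutoff, grid, and test function are my own choices):

```python
import numpy as np

n = 128
L = 2 * np.pi
x = np.arange(n) * L / n  # uniform periodic grid
f = np.sin(3 * x)

fhat = np.fft.fft(f)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
fhat[np.abs(k) > 10] = 0                    # drop high modes: crude noise suppression
dfdx = np.real(np.fft.ifft(1j * k * fhat))  # multiply each mode by i*k

print(np.max(np.abs(dfdx - 3 * np.cos(3 * x))))  # spectrally accurate
```

The caveat: this assumes periodicity; on non-periodic data the jump at the boundary bleeds Gibbs oscillations into the derivative.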

  • @ECHO2LIGHT
    @ECHO2LIGHT 4 months ago

    Using t instead of x is confusing in the handwriting.
