Ito’s lemma, also known as Ito’s formula or the stochastic chain rule: Proof

  • Published 18 Dec 2024

Comments • 37

  • @xinnywillwin  5 years ago  +9

    I'm so glad that I found this series! you are awesome!!!!

    • @quantpie  5 years ago  +1

      thanks!

  • @justbecause9613  2 years ago  +1

    Thank you, this was incredibly intuitive to understand and feels like a fundamental step in understanding more complex derivative functions

    • @quantpie  2 years ago

      You're welcome! Glad you found it helpful, thank you!

  • @fondueeundof3351  2 years ago  +1

    8:52: Why does the expectation grow slower than n^2?

  • @Gilloup  2 years ago

    Fondue Eundof is right: at 09:00 your sum of the squared second derivatives explodes. You have to divide it by n; as a result, (t/n)² becomes t²/n, which still goes to zero. Then you have to discuss how the average of the squared second derivatives can converge in mean (not mean square). By the LLN it converges only in probability, so you need additional assumptions (e.g. Lebesgue-Vitali, Riesz-Scheffé) to get convergence in mean too. But I don't remember that Ito's lemma requires such assumptions in order to hold.
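
    A quick numerical sketch of the step under discussion (illustrative, not from the video): the reason the second-order term survives at all is that the quadratic variation of a Brownian path, sum((ΔB)²) over a partition of [0, t], concentrates around t as the partition is refined. The function name and parameters below are made up for the demo.

    ```python
    import numpy as np

    def quadratic_variation(t=1.0, n=1_000_000, seed=0):
        """Simulate one Brownian path on [0, t] with n steps and
        return the realised quadratic variation sum((dB)^2)."""
        rng = np.random.default_rng(seed)
        # Brownian increments over a uniform partition: dB ~ N(0, t/n)
        dB = rng.normal(0.0, np.sqrt(t / n), size=n)
        return np.sum(dB ** 2)

    # As n grows, sum((dB)^2) concentrates around t (here t = 1):
    # its mean is exactly t and its standard deviation is sqrt(2) * t / sqrt(n).
    print(quadratic_variation())
    ```

    This is the mean-square convergence result the proof leans on; a single large-n sample already lands within a fraction of a percent of t.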

  • @lividpudding8565  4 years ago  +2

    Love your channel! Great stuff

    • @quantpie  4 years ago  +1

      thanks!!

  • @sengichin3250  4 years ago  +2

    I kind of don't understand the conditional expectation used in the tower property of expectations. When conditioning on F_{t_{k-1}}, so that t_{k-1} is fixed inside that expectation, shouldn't terms with subscript k be extracted outside instead of terms with subscript j?

    • @sengichin3250  4 years ago  +2

      What is F_{t_{k-1}} anyway?

    • @quantpie  4 years ago

      This represents the filtration: think of it as the information that is available when we are at time t_{k-1}. Could you reference the time in the video where the terms are extracted, please? There are a lot of steps and I might end up not answering your question.

    • @MrConanalexander  3 years ago

      @@quantpie I don't understand why we introduce the iterated expectation value in the first place. Can't we proceed without the iterated expectation value and instead use E[XY]=E[X]E[Y] for independent X, Y to separate B_{t_k} - B_{t_{k-1}}-type terms from f''(B_{t_{k-1}})-type terms?
      (P.S. These are really great videos that many need but are hard to find, btw. Hope you continue this amazing work!)
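
    A small Monte Carlo sketch of the point at issue in this thread (illustrative, not from the video): for s < t, the increment B_t - B_s is independent of B_s, so expectations of products like f''(B_s)(B_t - B_s)² factor exactly as E[XY] = E[X]E[Y] for independent X, Y. The choice of cos as a stand-in for f'' is arbitrary; any integrable function works.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    s, t, n = 0.5, 1.0, 2_000_000

    # B_s ~ N(0, s); the increment B_t - B_s ~ N(0, t - s) is independent of B_s.
    B_s = rng.normal(0.0, np.sqrt(s), size=n)
    inc = rng.normal(0.0, np.sqrt(t - s), size=n)

    g = np.cos(B_s)                  # stand-in for f''(B_s)
    lhs = np.mean(g * inc ** 2)      # estimates E[f''(B_s) (B_t - B_s)^2]
    rhs = np.mean(g) * (t - s)       # estimates E[f''(B_s)] * E[(B_t - B_s)^2]
    print(lhs, rhs)                  # the two estimates agree up to Monte Carlo error
    ```

    The tower property gives the same factorisation conditionally, which is why the proof can pull the f'' terms out; the direct independence argument works here too, as the reply below confirms for this context.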

  • @CONGKAMAEL  5 years ago  +3

    Very clear derivation! Thanks!

    • @quantpie  5 years ago  +1

      Thanks @Cong!

  • @mikhailkudinov  3 years ago  +1

    At 1:39, why do we write dX_t = X_t - X_0? If dX_t is a differential, then this equality is not exact.

    • @quantpie  3 years ago

      Many thanks for the question! It is because the change in X, assuming we know its start and end values, would just be the difference between the two.
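
    In symbols, the reply above is a telescoping sum: for any partition 0 = t_0 < t_1 < ... < t_n = t,

    ```latex
    \int_0^t dX_s
      \;=\; \sum_{k=1}^{n} \left( X_{t_k} - X_{t_{k-1}} \right)
      \;=\; X_t - X_0 ,
    ```

    every intermediate X_{t_k} cancels, so the identity holds exactly for every n and no limiting argument is needed. It is only when a function f(X_s) multiplies the increments that the choice of evaluation point matters and the limit becomes delicate.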

  • @NathanCrock  4 years ago

    I want to share another perspective I'm grappling with, for whenever the time comes to make that video about stochastic differentials. Please disregard this if I am becoming obnoxious. At 12:31 in your quadratic variation video, you define B_kt/n - B_(k-1)t/n = delta B_nk. Then, as you said, at 5:36 in your Ito versus Stratonovich video you define dB_s to have meaning in the definition of the integral, which I interpret (perhaps incorrectly) as the limiting case of B_kt/n - B_(k-1)t/n as n -> infinity. So I see how delta B -> dB. Here, however, at 1:34, you define dX_t = X_t - X_0. This seems a bit more like a delta X_t notion, not a dX_t. I think I still struggle with the role of the limit in these definitions. When is something a delta X vs dX, or are they the same?

    • @quantpie  4 years ago  +1

      Hello, sorry I missed this question! Great question as always, and thanks for cross-referencing! You can think of this as the integral version of the chain rule (and Ito's as the stochastic version of the deterministic chain rule): when you have the variable itself, you don't need the chain rule, but when you consider a function of the variable, you use the chain rule. So when we have just dX_t in terms of X_t, integration means adding the small increments, and if we know the values x_t and x_0, we know the total change over 0 to t, which is just x_t - x_0. But when you are integrating a function of X with respect to X, then we need the finer details of Riemann, Stieltjes, Ito, Stratonovich, etc. Hope this helps!

    • @NathanCrock  4 years ago  +1

      @@quantpie I see. I think I am finally starting to understand. I spent so much time trying to understand exactly how the stochastic differential was defined, but it seems like it doesn't really have a definition, it is more like a notational shorthand for referencing the integral equation without the integrals. I don't have it concrete yet, but is this the right way to think about it?

    • @quantpie  4 years ago  +1

      correct!
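
    One way to pin down the distinction this thread circles around: ΔB_{n,k} = B_{kt/n} - B_{(k-1)t/n} is an honest random variable for each finite n, while dB_s only acquires meaning inside the limit defining the Ito integral,

    ```latex
    \int_0^t f(B_s)\, dB_s
      \;=\; \operatorname*{l.i.m.}_{n \to \infty}
            \sum_{k=1}^{n} f\!\left(B_{(k-1)t/n}\right)
            \left( B_{kt/n} - B_{(k-1)t/n} \right),
    ```

    where l.i.m. denotes the limit in mean square and the integrand is evaluated at the left endpoint (the Ito convention). A "stochastic differential" equation dX_t = a\,dt + b\,dB_t is then exactly the notational shorthand for the corresponding integral equation, as the exchange above concludes.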

  • @jaredf2067  4 years ago  +1

    It's Christmas time again... I am listening to Christmas in LA while looking at this vid. The voice happens to be on beat most of the time!

    • @quantpie  4 years ago

      haha thanks!!

  • @samuelblattner7733  4 years ago

    The proof involves showing mean-square convergence and convergence in probability. Is there a "precise" way to formulate the result: X_t - X_0 converges to the RHS in probability, or something like that?

    • @quantpie  4 years ago  +1

      Thanks Samuel for the question! May I check if you mean something along the lines of what we discuss at 17:05 in this video: th-cam.com/video/CuOyRX0pZ9M/w-d-xo.html

    • @samuelblattner7733  4 years ago  +1

      @@quantpie Hi, yes, sorry for the unclear questions. I just want to make sure I'm thinking about things correctly.
      Ito's formula gives an expression for dX_t as the sum of a first-order and a second-order term. The first-order term is just Ito's integral in the limit (using mean-squared convergence), and the second order term is also an integral in the limit (using convergence in probability). So dX_t is the sum of two limits, but under different modes of convergence. But since mean-squared convergence implies convergence in probability, the first-order term also converges in probability, which implies the whole expression does. So what I'm asking is whether the completely formal statement of Ito's formula is in fact a limit statement using convergence in probability as the mode of convergence? I'm using standard calculus as the comparison. In standard calculus, we write df = f'(x)dx, but this is really a statement about limits: it is saying f(x')-f(x) = f'(x)(x'-x) + O(x'-x).

    • @quantpie  4 years ago  +1

      @@samuelblattner7733 yes that's correct in this context, though one can prove the same result in the stronger sense. Thanks for the detailed explanation!
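
    Written out, the limit statement Samuel describes reads (for f twice continuously differentiable, with suitable integrability):

    ```latex
    f(B_t) - f(B_0)
      \;=\; \int_0^t f'(B_s)\, dB_s
            \;+\; \frac{1}{2} \int_0^t f''(B_s)\, ds ,
    ```

    where the first integral is the mean-square limit of its left-endpoint Riemann sums and the second is the limit in probability of \sum_k f''(B_{t_{k-1}}) (B_{t_k} - B_{t_{k-1}})^2. Since mean-square convergence implies convergence in probability, the whole identity can be read as a single limit statement in probability, exactly as the exchange above concludes.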

  • @69erthx1138  3 years ago  +1

    Don't even attempt to model the volatility of real markets without dipping into the basics of the Ito process first. Lots of rigor in this video, excellent work. The richness of this stochastic model is superb.

    • @quantpie  3 years ago

      very true! thank you!

  • @mhadhbihazem6485  2 years ago  +1

    The proof for the case of two stochastic variables, please

    • @quantpie  2 years ago

      thanks! sure!

  • @mainjordan7270  4 years ago  +3

    Great videos. I feel your voice sounds a bit like a machine voice; hope it could be more natural. Thank you so much.

    • @quantpie  4 years ago  +2

      many thanks, we have been changing the voices over time, hope the voice quality in the recent videos is better!

  • @wowZhenek  6 years ago  +1

    Do Girsanov's theorem, change of measure, and some BSM calculations using the change of measure

    • @quantpie  6 years ago

      Yes two videos away!

    • @quantpie  6 years ago  +5

      Change of measure and Girsanov's theorem are up! Applications to follow

  • @kidbornbrat1812  4 years ago  +1

    This voice keeps us away from your videos

    • @quantpie  4 years ago

      Thanks for the feedback! Much appreciated!!