derivative of x to a matrix power.

  • Published 28 Sep 2024

Comments • 48

  • @MichaelPennMath · 1 year ago

    🌟🌟To try everything Brilliant has to offer-free-for a full 30 days, visit brilliant.org/michaelpenn. The first 200 of you will get 20% off Brilliant's annual premium subscription.🌟🌟

  • @mesomatics1571 · 1 year ago +74

    The last case can be written as Ax^(A-I), like the other cases (see the quick numerical check after this thread).

    • @chrisdaley2852 · 1 year ago +11

      Yeah, I was looking at it like "what would you naturally expect here?" and calculated Ax^(A-I) and it was right. Surprised Michael didn't pick up on that.

    • @esajpsasipes2822 · 1 year ago +7

      I was waiting the entire video to see if the pattern holds, glad it does.
      Can we conclude, based on these cases, that the derivative of x^A is always Ax^(A-I) for any matrix A?
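
The observation in this thread is easy to check numerically. Below is a minimal sketch (assuming NumPy and SciPy are available, and taking x^A to mean exp(A·ln x), as in the video's definition) that compares a finite-difference derivative with A x^(A-I) for a non-diagonalizable Jordan block:

```python
import numpy as np
from scipy.linalg import expm

# A non-diagonalizable matrix: a 2x2 Jordan block with eigenvalue 3.
A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)

def x_pow(M, x):
    """x^M defined as exp(M * ln x), following the video's definition."""
    return expm(M * np.log(x))

x, h = 2.0, 1e-6
# Central finite difference of x -> x^A.
numeric = (x_pow(A, x + h) - x_pow(A, x - h)) / (2 * h)
# Conjectured closed form: A x^(A - I).
closed_form = A @ x_pow(A - I, x)

print(np.allclose(numeric, closed_form, atol=1e-6))  # expected: True
```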

  • @Circuito28 · 1 year ago +2

    It would be amazing if you'd continue the differential forms series; you didn't get to explain the generalised Stokes theorem, and that series helped me a lot. Guys, like this comment to make it visible to Dr. Penn!

  • @romajimamulo · 1 year ago +6

    Your editor seems to be leaving in a lot more cases where you say the wrong thing and then jump-cutting to you correcting yourself, rather than cutting the mistake out.

  • @goodplacetostop2973 · 1 year ago +14

    19:26 Follow-up question
    19:41 Good Place To Stop

  • @schweinmachtbree1013 · 1 year ago

    Very cool!

  • @digxx · 1 year ago

    How do you prove that log(x^A) = A*log(x)? I don't think that's a priori clear for matrices.

    • @Jeity_ · 1 year ago +7

      I think he's just defining x^A when A is a matrix. It's clear how to define e^A thanks to the power series, and you can exploit this definition since, for real numbers, x^a = e^(a*log(x)). By analogy, you can define x^A to be e^(A*log(x)).
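
To make that reply concrete, here is a small sketch (assuming NumPy/SciPy) of the definition x^A := exp(A·log x), checked against the ordinary scalar power in the 1x1 case and against entrywise powers for a diagonal matrix:

```python
import numpy as np
from scipy.linalg import expm

def x_pow(M, x):
    """Define x^M := exp(M * ln x), mirroring x^a = e^(a ln x) for scalars."""
    return expm(M * np.log(x))

x = 2.5

# 1x1 case: should reduce to the ordinary scalar power x^a.
a = 1.7
print(np.allclose(x_pow(np.array([[a]]), x), x**a))      # True

# Diagonal case: x^D should just raise each diagonal entry to its power.
D = np.diag([1.0, 2.0, -0.5])
print(np.allclose(x_pow(D, x), np.diag(x**np.diag(D))))  # True
```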

  • @christianwolinski3787 · 1 year ago +2

    ln(x)*lambda = ln(x^lambda)? How about x = 1/2 + 3*I*Pi, lambda = 1/4 + 5*I*Pi?

    • @megauser8512 · 1 year ago +1

      Wolfram Alpha says that the real parts of those two answers are the same but the imaginary parts are different, so they are not equal. That's because one or both of x and lambda are complex in your example; I assume Michael was taking both x and lambda to be real.
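
The branch-cut behaviour described above can be reproduced directly; here is a short sketch using Python's cmath with the numbers from the comment (the two sides differ by an integer multiple of 2πi, so the real parts agree while the imaginary parts do not):

```python
import cmath

x = 0.5 + 3j * cmath.pi
lam = 0.25 + 5j * cmath.pi

lhs = cmath.log(x**lam)        # ln(x^lambda), principal branch
rhs = lam * cmath.log(x)       # lambda * ln(x)

print(lhs)
print(rhs)
# The discrepancy is a whole number of 2*pi*i jumps from the branch cut:
print((lhs - rhs) / (2j * cmath.pi))   # approximately an integer (here about -6)
```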

  • @emre_galois · 1 year ago +4

    it feels like magic

    • @eduardomalacarne9024 · 1 year ago +2

      It's half-about

    • @kayragunes5050 · 1 year ago +1

      Mathematics is a beautiful thing

    • @Alan-zf2tt · 1 year ago

      In a sense it is, really: there is a start point and a hoped-for, nay intended, end point, sooo... how to make it fit?
      In a sense it is a bit like epsilon-delta-N-n stuff.
      We know the start, sense the end point, then fudge about in the middle to justify both?
      And maybe (?) (!) that is the job of abstract algebra: to show why something works and why it should work most of the time, what pitfalls there are, and to identify real or imaginary humongous monsters.
      For example, what effect do chaos theory and minor perturbations have on matrix calculations?

    • @skilz8098 · 1 year ago +2

      @@Alan-zf2tt Well, consider the fact that all scalar values are 1x1 matrices and start from there, since a 1x1 matrix is still a square matrix. Food for thought!

  • @birefringent2851 · 1 year ago +25

    I would do d/dx x^A = d/dx exp(A log x), and then, using the well-known formula for the derivative of a matrix exponential, d/dt exp(A t) = A exp(A t), you get d/dx x^A = (1/x) A exp(A log x) = (1/x) A x^A = A x^{A-I}
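
A brief sketch (assuming NumPy/SciPy) of the two ingredients in that argument: the identity d/dt exp(At) = A exp(At), checked by finite differences, and the chain-rule step with t = log x:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [0.5, -1.0]])
h = 1e-6

# Ingredient 1: d/dt exp(A t) = A exp(A t), checked at t = 0.7.
t = 0.7
lhs = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)
print(np.allclose(lhs, A @ expm(A * t), atol=1e-6))   # True

# Ingredient 2: with t = log x, d/dx exp(A log x) = (1/x) A exp(A log x).
x = 2.0
lhs = (expm(A * np.log(x + h)) - expm(A * np.log(x - h))) / (2 * h)
print(np.allclose(lhs, (1 / x) * A @ expm(A * np.log(x)), atol=1e-6))  # True
```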

  • @kiranduggirala2786 · 1 year ago +22

    Is that last matrix for the non-diagonalizable case not just Ax^(A-I) as well? I did the matrix multiplication in my head, so it might be wrong, but it seemed to check out.

    • @mickodillon1480 · 1 year ago +7

      Good point

    • @Jono98806 · 1 year ago +2

      Interesting. If true then that would correspond nicely to the usual power rule, except that the exponent is a matrix rather than a number.

    • @travisnell6849 · 1 year ago +7

      @@Jono98806 Yup. Another commenter has pointed out the bigger idea: if you show that (a) the derivative exists, is well defined, and depends continuously on the matrix (this is not difficult from the definition of the derivative), and (b) on a dense set of matrices (the diagonalizable ones) it has the formula d/dx(x^A) = Ax^(A-I), then the formula works everywhere.

  • @NutziHD · 1 year ago +13

    The derivative of x^A is straightforward if A is diagonalizable (as you showed). Since d/dx x^A is continuous and almost all matrices are diagonalizable (over C), this proves d/dx x^A = A x^{A-I} for all matrices A (in arbitrary dimension).

    • @Idk-hp3oo · 1 year ago +1

      It turns out that not all matrices are diagonalizable, even over C, since the eigenspaces may be degenerate, e.g. a one-dimensional eigenspace for a two-fold eigenvalue.

    • @panadrame3928 · 1 year ago +5

      @@Idk-hp3oo "almost all" means the subset of diagonalizable matrices is dense in the set of all matrices. Therefore, by the continuity argument for d/dx, the conclusion follows.

    • @chuckaway6580 · 1 year ago

      This is where my mind went first.

    • @craftexx15 · 1 year ago

      @@panadrame3928 Is continuity enough here, or would you need an identity principle for holomorphic functions?

    • @jelinlikeafelin · 1 year ago

      How do you know d/dx x^A is continuous with respect to A? It seems true but non-trivial to prove.
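
One way to see the density argument at work numerically (a sketch assuming NumPy/SciPy, illustrative rather than a proof of continuity): perturb a Jordan block so it becomes diagonalizable, evaluate A x^(A-I) through an eigendecomposition, and watch the result approach the value computed directly for the Jordan block.

```python
import numpy as np
from scipy.linalg import expm

x = 2.0
I = np.eye(2)

def deriv_formula(A):
    """A x^(A - I), with x^M := exp(M ln x)."""
    return A @ expm((A - I) * np.log(x))

# Non-diagonalizable target: a Jordan block.
J = np.array([[3.0, 1.0],
              [0.0, 3.0]])
target = deriv_formula(J)

for eps in [1e-1, 1e-3, 1e-5]:
    # Split the eigenvalue: J_eps has distinct eigenvalues, so it is diagonalizable.
    J_eps = np.array([[3.0, 1.0],
                      [0.0, 3.0 + eps]])
    # Evaluate the formula through an explicit eigendecomposition,
    # along the lines of the diagonalizable case: x^M = P diag(x^lambda_i) P^{-1}.
    vals, P = np.linalg.eig(J_eps - I)
    x_pow_diag = P @ np.diag(x**vals) @ np.linalg.inv(P)
    print(eps, np.max(np.abs(J_eps @ x_pow_diag - target)))  # shrinks with eps
```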

  • @kono152 · 1 year ago +8

    loving all these linear algebra vids recently!

  • @hgnb1001 · 1 year ago +2

    Very nice, linear algebra + calculus = math porn.

    • @yanntal954 · 1 year ago

      Or in other words, calc 3.

  • @Keithfert490 · 1 year ago +6

    There's no need to split into cases as long as you differentiate term by term in the power-series definition of the exponential (below, d denotes the derivative with respect to x; a partial-sum check follows after this thread):
    d(x^A) = d sum_{n=0}^infty A^n [ln(x)]^n / n!
    d(x^A) = sum_{n=1}^infty A^n n [ln(x)]^(n-1) / [n! x]
    d(x^A) = (A/x) sum_{n=1}^infty A^(n-1) [ln(x)]^(n-1) / (n-1)!
    d(x^A) = (A/x) sum_{n=0}^infty A^n [ln(x)]^n / n!    (shifting the summation index down by one)
    d(x^A) = (A/x) x^A
    d(x^A) = A x^(A-I)

    • @gustavinho1986 · 1 year ago +1

      This is how I did it too

    • @ranshen1486 · 1 year ago +2

      Only if we can justify interchanging the differentiation with the infinite sum.

    • @Keithfert490 · 1 year ago +3

      @@ranshen1486 But you can justify it: the series and its termwise derivative both converge absolutely and locally uniformly, so the interchange is fine.
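
Here is the partial-sum check referred to above (a sketch assuming NumPy/SciPy): sum the termwise-differentiated series to a finite order and compare with A x^(A-I).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])
I = np.eye(2)
x = 1.5

# Partial sum of d/dx sum_n A^n (ln x)^n / n!  =  (A/x) sum_n A^n (ln x)^n / n!
N = 30
partial = np.zeros((2, 2))
term = np.eye(2)            # A^n (ln x)^n / n!, starting at n = 0
for n in range(N):
    partial += term
    term = term @ A * np.log(x) / (n + 1)
partial = (A / x) @ partial

closed_form = A @ expm((A - I) * np.log(x))
print(np.allclose(partial, closed_form))   # True
```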

  • @s4623 · 1 year ago +3

    Have you been assuming that the eigenvalues are not equal to 0 in the diagonalizable cases? Those don't seem to follow the same power rule.

    • @burk314 · 1 year ago +1

      What do you mean? Using the power rule for the exponent 0 we get d/dx x^0 = 0·x^(-1) = 0, which is correct.

  • @demenion3521 · 1 year ago +1

    For a diagonal matrix A you can just use that f(A) is the diagonal matrix obtained by applying f to each diagonal entry.
    You can also use the Cayley-Hamilton theorem again, in the form I mentioned on your last video, to calculate matrix functions very easily.
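
One common Cayley-Hamilton-style shortcut (a sketch with NumPy/SciPy; not necessarily the exact form from the earlier video): for a 2x2 matrix with distinct eigenvalues, Cayley-Hamilton lets any convergent power series in A collapse to a·I + b·A, with a and b fixed by matching the scalar function on the eigenvalues.

```python
import numpy as np
from scipy.linalg import expm

x = 2.0
A = np.array([[1.0, 2.0],
              [3.0, 0.0]])      # 2x2 with distinct eigenvalues (3 and -2)

# By Cayley-Hamilton, x^A = a*I + b*A where a + b*lam = x^lam for each eigenvalue.
lam = np.linalg.eigvals(A)
b = (x**lam[0] - x**lam[1]) / (lam[0] - lam[1])
a = x**lam[0] - b * lam[0]
shortcut = a * np.eye(2) + b * A

print(np.allclose(shortcut, expm(A * np.log(x))))   # True
```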

  • @vectorshift401 · 3 months ago

    This guy sucks. He has criss-crossing lines and fragmented derivation paths going from one "step" to the next in his "jump about" explanation of what he's doing. AND he stands in front of the board while he writes out and "explains" what he's writing down.

  • @tharunsankar4926 · 1 year ago

    What is the significance of a matrix power? What does it mean geometrically?

  • @mickodillon1480 · 1 year ago +2

    Great video

  • @iithomepatnamanojsir · 1 year ago

    Very interesting sir

  • @paosusuu · 1 year ago

    I love it when he casually says "so that's pretty cool" about stuff I'd loudly claim to be unbelievably beautiful

  • @Sup3rdud4 · 1 year ago

    I think the way I would write the last matrix is
    (λ/x)*(x^(A-λI))

  • @bradfordtaylor3530 · 1 year ago +2

    What is the practical use of this? How does the need for it come about?

    • @nadavslotky · 1 year ago +4

      I think you're on the wrong channel, dude

  • @stanleysimon6723 · 1 year ago +2

    Bro, you are much more boring than a book in which the theory is complicated. Your explanations are one-sided and go over my head.