Matrix to a matrix

  • Published Dec 19, 2024

Comments • 123

  • @xBoBox333
    @xBoBox333 5 years ago +86

    2:20 best dog i've ever seen

    • @blackpenredpen
      @blackpenredpen 5 years ago +7

      xBoBox333
      Totally agree!!!

    • @mastershooter64
      @mastershooter64 2 years ago +1

      almost like a topologist's dog

  • @blackpenredpen
    @blackpenredpen 5 years ago +46

    Next: do A^A^A

    • @johannesh7610
      @johannesh7610 5 years ago +8

      A^^infinity (A^A^...)

    • @ericthegreat7805
      @ericthegreat7805 5 years ago +3

      Ath super square root of A

    • @haoli9220
      @haoli9220 3 years ago +1

      Oh does it create fractals

  • @christopherthomas6124
    @christopherthomas6124 5 years ago +21

    Dr Peyam, this video really got my interest and I wanted to work out a few examples for myself. After a couple of hours of reading, it turns out that if you have a matrix X and a matrix Y, where X is non-singular and normal and Y is complex, then X^Y is defined as X^Y = e^(log(X)*Y), which is a right exponential because we aren't assuming X and Y commute. This means if you do e^(Y*log(X)) you actually get the left exponential. I have been teaching myself operations in quaternion space and this fits the same pattern as operations in Q!!! Really cool stuff

    • @drpeyam
      @drpeyam  5 years ago +5

      Yes, in general those are 2 different things

    • @christopherthomas6124
      @christopherthomas6124 5 years ago

      @@drpeyam so do you have any idea of any useful applications for this?

    • @axelnils
      @axelnils 5 years ago +7

      Christopher Thomas Applications? What are you, an engineer lol

    • @rodrigorodders7173
      @rodrigorodders7173 4 years ago +2

      @axelnils one does not simply ask a pure mathematician for applications
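
The left/right distinction discussed in this thread is easy to see numerically; here is a minimal sketch (the example matrices X and B are my own, not from the video), assuming scipy's `expm`/`logm`:

```python
# For non-commuting log(X) and B, the "right" exponential e^(log(X) B)
# and the "left" exponential e^(B log(X)) genuinely differ.
import numpy as np
from scipy.linalg import expm, logm

X = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # nonsingular, positive eigenvalues, so logm is real
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

right = expm(logm(X) @ B)    # right exponential: e^(log(X) B)
left = expm(B @ logm(X))     # left exponential:  e^(B log(X))

print(np.allclose(right, left))   # False for this non-commuting pair
```

When log(X) and B commute, the two products coincide and the ambiguity disappears.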

  • @blackpenredpen
    @blackpenredpen 5 years ago +16

    Can wolframalpha do this?

    • @drpeyam
      @drpeyam  5 years ago +7

      HA! Probably not

    • @nuklearboysymbiote
      @nuklearboysymbiote 5 years ago +6

      Trust the math

    • @arindamsaha5589
      @arindamsaha5589 3 years ago +2

      Two great mathematicians speaking !!!!...Extraordinary!

  • @shambosaha9727
    @shambosaha9727 5 years ago +32

    Dr πm: Attempts to draw a dog
    Meme artist: Am I a joke to you?
    Dr πm: Yes, I am a mathematician

  • @Ricocossa1
    @Ricocossa1 5 years ago +4

    I think the notation A^B would be acceptable only if A commutes with B, otherwise you have to give more information and write e^(ln(A)B). Also A has to be strictly positive definite. It's an interesting idea. I wonder if it has some applications like sqrt(A) does.

  • @Pedritox0953
    @Pedritox0953 4 years ago +2

    Always amazes me, this kind of video!! Thanks Professor!!

  • @AirAdventurer194
    @AirAdventurer194 4 years ago +2

    Maybe it should only be defined if ln(A) and B commute?

  • @AaronRotenberg
    @AaronRotenberg 5 years ago +9

    Dr. Peyam, what can we do with the ability to raise a matrix to the power of another matrix? There are lots of applications for the e^A matrix exponential (e.g. systems of differential equations), but are there any interesting applications for the A^B matrix-matrix exponential?

    • @drpeyam
      @drpeyam  5 years ago +6

      I’m not quite sure actually!

  • @kingk.crimson6633
    @kingk.crimson6633 5 years ago +6

    Great video. I was caught off guard by the diagonalization to calculate ln(A), I'd only ever seen that in the context of raising A to a power. Really cool!
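
The diagonalization route for ln(A) mentioned here can be spelled out in a few lines; a minimal sketch (the example matrix is mine), assuming a diagonalizable A with positive eigenvalues:

```python
# If A = P D P^{-1} with positive eigenvalues, then log(A) = P log(D) P^{-1},
# where log(D) is just the entrywise log of the diagonal.
import numpy as np
from scipy.linalg import logm

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])              # eigenvalues 5 and 2, both positive

evals, P = np.linalg.eig(A)             # A = P diag(evals) P^{-1}
logA = P @ np.diag(np.log(evals)) @ np.linalg.inv(P)

print(np.allclose(logA, logm(A)))       # agrees with scipy's matrix logarithm
```

The same recipe (apply the scalar function to the eigenvalues) is exactly what the video uses to raise A to a power.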

  • @shiina_mahiru_9067
    @shiina_mahiru_9067 5 years ago +37

    Next: do the derivative of a matrix wrt a matrix

    • @quantumsoul3495
      @quantumsoul3495 5 years ago +2

      So good

    • @TheReligiousAtheists
      @TheReligiousAtheists 5 years ago +14

      The matrix-th derivative of a matrix

    • @TheReligiousAtheists
      @TheReligiousAtheists 5 years ago

      @Sebastian Henkins Isn't a fractional matrix also just another matrix? 😂

    • @yadinandyanay
      @yadinandyanay 5 years ago

      sadly that's a tensor bro

    • @yadinandyanay
      @yadinandyanay 5 years ago

      Because it is very hard to explain comprehensively or write on a 2D surface

  • @noahtaul
    @noahtaul 5 years ago +2

    Hello Dr Peyam, at 11:02 could you explain how that first number in the bottom row becomes negative and the second number becomes positive? Thanks!

    • @drpeyam
      @drpeyam  5 years ago

      Arithmetic error

  • @smortemm2438
    @smortemm2438 2 years ago +2

    this man is so enthusiastic about this stuff, it truly makes me smile

  • @Lance.2451
    @Lance.2451 5 years ago +10

    Dr, could you make a video on something like taking the derivative or an integral of a function that is just x^(a matrix)? Really interested to see the result

    • @drpeyam
      @drpeyam  5 years ago +4

      That is interesting!

    • @Lance.2451
      @Lance.2451 5 years ago +1

      @Hyeronimus I'm trying to figure out: if we treat the 1 that is subtracted as the determinant, then maybe there could exist an entire space of potential "1" matrices, and how that could change the derivative, and how something like that could be applied

  • @poutineausyropderable7108
    @poutineausyropderable7108 5 years ago +24

    Here's a nice idea: what about trig identities for matrices? Like sin^2(x) + cos^2(x) = 1. Would it give the identity matrix?

    • @anderrafaellinaresrojas3772
      @anderrafaellinaresrojas3772 4 years ago +2

      yes

    • @skylardeslypere9909
      @skylardeslypere9909 3 years ago

      I believe so, since the reason you can apply functions to diagonal matrices is just to do with the fact that we can write these functions as power series.
      The result sin²(x)+cos²(x)=1 follows from these power series definitions.

    • @skylardeslypere9909
      @skylardeslypere9909 3 years ago +1

      In particular, we have:
      sin(A)= [exp(iA)-exp(-iA)] / 2i
      cos(A)=[exp(iA)+exp(-iA)] / 2
      Notice, if you square these, you will eventually get
      sin²A+cos²A = [exp(iA)exp(-iA)+exp(-iA)exp(iA)]/2.
      This will cancel out to the identity matrix.
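
The identity derived above can be checked numerically; a quick sketch (the test matrix is mine), assuming scipy's `sinm`/`cosm`, which are built from the same exponential formulas:

```python
# Pythagorean identity for matrix trig functions: sin^2(A) + cos^2(A) = I,
# which holds because A always commutes with itself.
import numpy as np
from scipy.linalg import sinm, cosm

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

S, C = sinm(A), cosm(A)
print(np.allclose(S @ S + C @ C, np.eye(2)))   # True: sin^2(A) + cos^2(A) = I
```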

  • @Royvan7
    @Royvan7 5 years ago +5

    I wonder if you could define differentiation of a variable matrix, such as d[A^B]/dA = ???

    • @drpeyam
      @drpeyam  5 years ago +4

      That would be cool!

    • @thedoublehelix5661
      @thedoublehelix5661 5 years ago +1

      It can be like the directional derivative in multivariable calculus.

    • @112BALAGE112
      @112BALAGE112 5 years ago

      You're gonna love en.wikipedia.org/wiki/Matrix_calculus.

    • @Royvan7
      @Royvan7 5 years ago

      @@112BALAGE112 nice, thx. I actually think I have seen the vector derivative notation before. Didn't think of extending it to matrices though.

  • @cbbuntz
    @cbbuntz 3 years ago

    Fun trick about square roots of matrices. Let A be a matrix of column vectors and B be the Gram matrix A'*A. Take the inverse square root of B and then compute C = A*B, then C will be an orthonormal transformation of A since (A*(A'*A)^(-1/2))' * A*(A'*A)^(-1/2) = A'*A*(A'*A)^(-2/2) = identity matrix
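
This orthonormalization trick can be sketched directly (the random matrix A is mine), assuming scipy's `fractional_matrix_power` for the inverse square root:

```python
# C = A (A'A)^{-1/2} satisfies C'C = (A'A)^{-1/2} (A'A) (A'A)^{-1/2} = I,
# since (A'A)^{-1/2} is symmetric and commutes with A'A.
import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))                 # full column rank (generically)

B = fractional_matrix_power(A.T @ A, -0.5)      # inverse square root of the Gram matrix
C = A @ B

print(np.allclose(C.T @ C, np.eye(3)))          # columns of C are orthonormal
```

This C is in fact the orthogonal factor of the polar decomposition of A.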

  • @dr.rahulgupta7573
    @dr.rahulgupta7573 4 years ago

    Excellent presentation of the topics. Thanks. DrRahul, Rohtak, Haryana, India

  • @emmepombar3328
    @emmepombar3328 5 years ago +1

    10 seconds in the video. Yepp, that is already great. Take my like.

  • @virat.chauhan
    @virat.chauhan 5 years ago +1

    Dr π m ....this is really amazing...

  • @dr.rahulgupta7573
    @dr.rahulgupta7573 4 years ago

    Excellent presentation, Sir. Thanks. DrRahul, Rohtak, Haryana, India

  • @abhishekpadhi6014
    @abhishekpadhi6014 5 years ago +2

    Got a test tomorrow and I so don't need to watch this, but I'm gonna

  • @foreachepsilon
    @foreachepsilon 5 years ago

    There seems to be some arithmetic error at 11:02 with entry (2,1)… the sign is negative but looks like it should be a positive number from the right hand side of the equation. Same with entry (2,2)

  • @sam-kx3ty
    @sam-kx3ty 4 years ago

    Thanks for this video.

  • @socraticmathtutor1869
    @socraticmathtutor1869 3 years ago

    Cool video. Good to keep in mind though that matrix exponentiation is non-injective. E.g. If Id = ((1,0), (0,1)) is the identity matrix and J = ((0,-1),(1,0)) is the encoding of the imaginary unit i as a matrix, then exp(0) = Id and exp(2 * pi * J) = Id. This tells us that log(Id) has more than one element. So in general, log(B) should be regarded as a set of matrices; namely, the set of all matrices A for which exp(A) = B. Therefore, to do the math properly, many of these equals signs in the video need to be replaced by set-theoretic comparisons (e.g. "is a subset of", "includes", "shares an element with" etc.) to account for this added complexity. Note also that matrix exponentiation is non-surjective. In particular, it's easy to show that exp(A) is always invertible for any matrix A; this follows from Jacobi's identity; it can also be proved by observing that A and -A commute, and hence that exp(A) * exp(-A) = exp(A - A) = exp(0) = Id. As a consequence, we deduce that for a non-invertible matrix B, the set of matrices denoted by log(B) is always empty. So matrix logarithms are sometimes empty. Anyway....... really enjoyed the video.
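
The non-injectivity claim above is easy to verify numerically, assuming scipy's `expm`:

```python
# With J the 2x2 encoding of the imaginary unit i, both the zero matrix
# and 2*pi*J are logarithms of the identity, so log(Id) is not unique.
import numpy as np
from scipy.linalg import expm

Id = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(np.allclose(expm(np.zeros((2, 2))), Id))  # exp(0) = Id
print(np.allclose(expm(2 * np.pi * J), Id))     # exp(2*pi*J) = Id as well
```

This mirrors the scalar fact that e^0 = e^(2*pi*i) = 1: matrix logarithms inherit the branch ambiguity of the complex logarithm.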

  • @remlatzargonix1329
    @remlatzargonix1329 5 years ago +3

    Great intro!

  • @marcellomarianetti1770
    @marcellomarianetti1770 4 years ago

    I think there is something wrong around 11:20 when you calculate with Wolfram Alpha: how can -2 + 2^12 be equal to -4094, and -10 - 2^11 = 2058? It looks like you inverted the signs, or am I missing something?

  • @박태석-x3p
    @박태석-x3p 4 years ago +1

    Hello Dr Peyam,
    I have a question.
    Can we say that e^(A) = e^(B)
    implies A = B?
    I want to prove this but it is very, very difficult for me

    • @drpeyam
      @drpeyam  4 years ago +1

      That’s a really interesting question! In the diagonalizable case you can prove it directly. My guess is that it’s still true in general and I think for this you just use the Jordan form

    • @박태석-x3p
      @박태석-x3p 4 years ago +1

      @@drpeyam I found that e^X = I for
      X = [ 2πi    0
              0  -2πi ]
      I think if the eigenvalues are real, then only X = 0 works

  • @NAMEhzj
    @NAMEhzj 5 years ago

    Hey Dr. Peyam, nice video!
    At 4:07 you say "there is no good definition of ln here", but there is, right? It's the inverse of the exponential, so e^ln(A) = A. And when e^A = P e^D P^-1, then by that, your definition is actually exactly the one you want, isn't it?

    • @drpeyam
      @drpeyam  5 years ago +1

      It’s good if A is diagonalizable, but not every matrix is

    • @NAMEhzj
      @NAMEhzj 5 years ago

      @@drpeyam right okay^^

  • @xzy7196
    @xzy7196 4 years ago +1

    The triple integral of a (matrix^i)/(x!^e).

  • @sayanmaji2845
    @sayanmaji2845 5 years ago +1

    Thanks

  • @61rmd1
    @61rmd1 3 years ago

    So, what's the correct answer? The computation with e^((ln A)B) or with e^(B(ln A))? Or are both incorrect? Sorry, but I don't see it...

    • @drpeyam
      @drpeyam  3 years ago

      Either answer is correct depending on your convention

  • @ankurmazumder5590
    @ankurmazumder5590 5 years ago

    Sir, does the algorithm work only for diagonalizable matrices? Is there a way to go about it for non-diagonalizable matrices?

    • @drpeyam
      @drpeyam  5 years ago

      Jordan form

  • @caldersheagren
    @caldersheagren 5 years ago +1

    Can you do a video on exponential of a direct sum or tensor product of matrices?

    • @miro.s
      @miro.s 4 years ago

      It would be interesting to compute with exponentials of higher-order tensors

  • @haoli9220
    @haoli9220 3 years ago

    Great! But is there a way to do it when it is non-diagonalizable?

    • @drpeyam
      @drpeyam  3 years ago +1

      Omg hi Hao! In that case use Jordan form I think!

    • @haoli9220
      @haoli9220 3 years ago

      @@drpeyam thanks! I might explore this

  • @MagicGonads
    @MagicGonads 5 years ago

    If you diagonalise A first and then apply e^(ln(lambda)*B) for each eigenvalue lambda in the diagonalised matrix, then the ambiguity about the order doesn't appear (as long as the logarithm of the eigenvalues commutes with the other scalars). Which order ends up being correct? My guess is that this will give yet a third different answer, but I'm not sure.
    This means you don't have to take the logarithm of a matrix; you just have to use some polynomial expansion of e^(kB) for many values

    • @MagicGonads
      @MagicGonads 5 years ago

      Or you could diagonalise B (if it is diagonalisable) so you don't have to approximate the infinite series and it would become a bunch of e^kj for every k the logarithm of an eigenvalue of A and every j the logarithm of an eigenvalue of B
      I think actually this would output a matrix with matrix values so an nxn matrix of mxm matrices of the original scalars of the problem, which would be isomorphic to an nmxnm matrix (where A is nxn and B is mxm)

    • @MagicGonads
      @MagicGonads 5 years ago

      If A is (nxn) and = PDP' , B is (mxm) and = QEQ'
      then A^B = P(D^B)P' = P(D^(QEQ'))P'
      if we assume that any function of a diagonal matrix is componentwise on the diagonal (this is at least true for matrix valued polynomials and so would it work for exp if we use the infinite series)
      then every diagonal component of D^B is some e^kB = Q(e^kE)Q' where k is the logarithm of a diagonal component of D, and each diagonal component of each e^kE is some e^kj where j is the logarithm of a diagonal component of E.
      Then you can construct an isomorphism between this P(D^(QEQ'))P' matrix and some nmxnm matrix (such as converting it into a block matrix), and this is what you could call A^B

    • @MagicGonads
      @MagicGonads 4 years ago

      So I did the working for my method to see how it behaves when compared to complex exponentiation
      It took me a while haha
      if a and b are complex numbers and a* or b* are the conjugates of said complex numbers (and a^b* is a to the conjugate of b not the conjugate of a to the b, to make what follows easier to type out)
      where A and B are the matrices which represent a and b respectively
      a^b = a^b
      A^B = 1/4 (
      [ a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b* , ia^b + ia^b* - ia*^b - ia*^b* , - a^b + a^b* + a*^b - a*^b* ;
      - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* , a^b - a^b* - a*^b + a*^b* , ia^b + ia^b* - ia*^b - ia*^b* ;
      - ia^b - ia^b* + ia*^b + ia*^b* , a^b - a^b* - a*^b + a*^b* , a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b* ;
      - a^b + a^b* + a*^b - a*^b* , - ia^b - ia^b* + ia*^b + ia*^b* , - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* ]
      )
      which certainly doesn't look like it's equal to a^b but perhaps using the same sort of way we classify [x , -y ; y , x] as x + iy, this can be found to be equivalent to a^b ?
      It looks cool I guess
      I also would like to know (but would dread to do the working out) if this matrix is commutative
      the matrix has a kind of symmetry like for each component if you compared it to the transpose of the matrix's component to find the sign difference
      [+ - - +;
      - + + -;
      - + + -;
      + - - +]
      compared to how the complex numbers work
      [+ -;
      - +]
      it has like this cell placed into each quadrant applying this rule to itself

    • @MagicGonads
      @MagicGonads 4 years ago

      YES! as it turns out using that symmetry/equivalence one can reduce it to a 1d number which = a^b
      the equivalence notion is that [x , -y; y, x] = x + yi
      this is the way we will contract the 4x4 matrix into a 2x2 matrix and then the 2x2 matrix into a 1x1 matrix
      A^B = 1/4 (
      [ a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b* , ia^b + ia^b* - ia*^b - ia*^b* , - a^b + a^b* + a*^b - a*^b* ;
      - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* , a^b - a^b* - a*^b + a*^b* , ia^b + ia^b* - ia*^b - ia*^b* ;
      - ia^b - ia^b* + ia*^b + ia*^b* , a^b - a^b* - a*^b + a*^b* , a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b* ;
      - a^b + a^b* + a*^b - a*^b* , - ia^b - ia^b* + ia*^b + ia*^b* , - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* ]
      ) (as given in my previous reply)
      we notice that there are 2x2 matrices in each quadrant.
      If we label the top left quadrant as matrix X and the bottom left quadrant matrix Y
      A^B = 1/4 (
      [ X , -Y ;
      Y , X ] )
      where
      X =
      [ a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b*;
      - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* ]
      Y =
      [- ia^b - ia^b* + ia*^b + ia*^b* , a^b - a^b* - a*^b + a*^b*;
      - a^b + a^b* + a*^b - a*^b* , - ia^b - ia^b* + ia*^b + ia*^b*]
      then A^B = 1/4( X + Yi )
      -i = 1/i so
      Yi =
      [a^b + a^b* - a*^b - a*^b* , ia^b - ia^b* - ia*^b + ia*^b*;
      - ia^b + ia^b* + ia*^b - ia*^b*, a^b + a^b* - a*^b - a*^b*]
      so A^B = 1/4 (
      [ a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b*;
      - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* ]
      +
      [a^b + a^b* - a*^b - a*^b* , ia^b - ia^b* - ia*^b + ia*^b*;
      - ia^b + ia^b* + ia*^b - ia*^b*, a^b + a^b* - a*^b - a*^b*]
      )
      = 1/4 (
      [ 2a^b + 2a^b* , 2ia^b - 2ia^b*;
      -2ia^b +2ia^b* , 2a^b + 2a^b* ]
      )
      then applying that equivalence again
      A^B = 1/4 (2a^b + 2a^b* - (2ia^b - 2ia^b*)i) = 1/4 (2a^b + 2a^b* + 2a^b - 2a^b*)
      A^B = 1/4 (4a^b)
      A^B = a^b

  • @edwardhuff4727
    @edwardhuff4727 5 years ago

    Finally getting around to watching... At the very start, IIRC bprp would throw that dried-out marker across the room and take a new one. I'd appreciate it. Old eyes and all... Edit: I see you did replace it. Yay!!!

  • @neilgerace355
    @neilgerace355 5 years ago

    The other PDP-1:
    "DEC PDP display" infolab.stanford.edu/pub/voy/museum/pictures/display/3-3.htm

  • @stevenbanton5073
    @stevenbanton5073 5 years ago +1

    It is interesting to research how we can define a matrix to the power of a matrix without losing nice properties (e.g. 14:05). I suppose the method in the video implies that both matrices are diagonalizable; for other matrices we can't define it in such a way (or is there some trick?)
    I have an idea about 2x2 matrices. In fact there is an isomorphism between complex numbers and 2x2 matrices of a certain form (math.stackexchange.com/questions/180849/why-is-the-complex-number-z-abi-equivalent-to-the-matrix-form-left-begins). For those 2x2 matrices we can easily define the power operation, because we can easily calculate a complex number to the power of a complex number. All the nice properties hold automatically due to the isomorphism. Then maybe we can (or can't - research needed) extend it to a wider class, or all 2x2 matrices, by some kind of decomposition into the complex-number-isomorphic class.

    • @vicktorioalhakim3666
      @vicktorioalhakim3666 5 years ago

      The nice property you refer to @14:05 is not that nice: it requires that B and C commute. For generalized functions on non-diagonalizable matrices, see en.wikipedia.org/wiki/Jordan_normal_form. For more information about the link between 2x2 matrices and complex numbers, see Lie groups. It's nothing new, and all of the ideas you discuss here have been developed for decades; just open any serious analysis book.

    • @stevenbanton5073
      @stevenbanton5073 5 years ago

      @@vicktorioalhakim3666
      I refer not to the "nice property" but to "losing the nice property", due to the fact that it requires commuting matrices (a loss of generality).
      I'm not claiming there is something new. I am trying to say there are other ways of defining that operation. How about a trivial one - calculating powers of corresponding elements? ( C{i,j} = A{i,j}^(B{i,j}) )
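
The complex-number route suggested in this thread can be sanity-checked (my own sketch, using the principal branch everywhere): encode z = a + bi as [[a, -b], [b, a]]; the matrix power computed via `expm`/`logm` then agrees with the complex power z**w.

```python
# The map z -> [[Re z, -Im z], [Im z, Re z]] is an algebra isomorphism,
# so matrix exp/log on these matrices mirrors complex exp/log.
import numpy as np
from scipy.linalg import expm, logm

def to_mat(z):
    """Represent the complex number z as a real 2x2 matrix."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 1 + 2j, 0.5 - 1j
Zw = expm(logm(to_mat(z)) @ to_mat(w))   # matrix version of z^w

print(np.allclose(Zw, to_mat(z ** w)))   # matches the principal complex power
```

Because these matrices all commute with each other, the left/right ordering ambiguity disappears on this subclass, exactly as the comment hopes.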

  • @nandakumarcheiro
    @nandakumarcheiro 5 years ago

    Matrices as a form of piezoelectric diagonalisation, with one powered by another matrix: are we on the verge of producing a new crystal, an exponential crystal?

  • @tracyh5751
    @tracyh5751 5 years ago +1

    You should find conditions under which equality occurs. :D At least in small dimensions

  • @hammadansari6309
    @hammadansari6309 4 years ago

    How can there be two different answers for the same thing?

  • @Ensivion
    @Ensivion 5 years ago

    This is a weird mathematical statement, because in both cases I think you're applying a function to B and a separate function to A: exponentiating A, and taking the equivalent anti-exponentiation, the logarithmic function, of B. This second function is equivalent to e^B where instead of a constant you have a matrix. It isn't clear which one happens first.

  • @วิระพิทักษ์ถิร
    @วิระพิทักษ์ถิร 1 year ago

    Dear professor, this topic is very hard for me.

  • @mosab643
    @mosab643 4 years ago

    Am I losing my sight?

  • @paulkohl9267
    @paulkohl9267 5 years ago

    From dimensionality and completeness I would have expected an n x n matrix brought to an m x m matrix to be a set of nm x nm order matrices that have no ambiguity in definition. Am I expecting too much? The approach in the video seems like a "naive" use of the function-of-a-matrix idea using diagonalization... Seems like there should be another approach that is more high-brow, as it were: monodromic evaluation of the natural logarithm and some operator-ordering principle straight from complex analysis (assuming C for algebraic completeness, but any algebraically complete field should also be allowed). Just a thought.

    • @paulkohl9267
      @paulkohl9267 5 years ago +1

      Dr Peyam, thanks for the heart, love your videos! The answer to my own query occurred to me just this morning: (e^A)^B = e^(A * B), where * is a tensor product. Then you would have the requisite dimensions I described. Operator ordering is still an issue. Thanks for the heart.

    • @MagicGonads
      @MagicGonads 5 years ago +1

      @@paulkohl9267 I think if you diagonalise both A and B then you get something isomorphic to an nmxnm matrix where each value is a scalar multiplied by exp of the product of two scalars from the original domains. And this would not depend on operator ordering as long as the scalars are commutative

    • @MagicGonads
      @MagicGonads 5 years ago +1

      If A is (nxn) and = PDP' , B is (mxm) and = QEQ'
      then A^B = P(D^B)P' = P(D^(QEQ'))P'
      if we assume that any function of a diagonal matrix is componentwise on the diagonal (this is at least true for matrix valued polynomials and so it would work for exp if we use the infinite series)
      then every diagonal component of D^B is some e^kB = Q(e^kE)Q' where k is the logarithm of a diagonal component of D, and each diagonal component of each e^kE is some e^kj where j is the logarithm of a diagonal component of E.
      Then you can construct an isomorphism between this P(D^(QEQ'))P' matrix and some nmxnm matrix (such as converting it into a block matrix), and this is what you could call A^B

    • @paulkohl9267
      @paulkohl9267 5 years ago +2

      @@MagicGonads, thanks for the heuristic argument showing why an nxn brought to an mxm is going to be nm x nm, but I think your calculations are off. The idea to just take the entries in A as a base to bring them to the power of each will give an nm x nm matrix, but it does not capture the other structure associated with matrices. For instance, complex numbers brought to other complex numbers are multivalued, why shouldn't matrices be as well when they can include complex numbers as entries? Does not make sense.
      Given that A is nxn, B is mxm and both are diagonalizable,
      A^B = e^(B * ln A) or e^(ln A * B)
      where * is a tensor product, which does not commute. Using your diagonalizations,
      ln A = P (ln D) P',
      but then the natural log of D is multivalued with n monodromic degrees of freedom. If K is a diagonal matrix with integers on the diagonal and zero off-diagonal, then
      ln D = [ ln D_ii ] + 2 pi sqrt(-1) K
      where [ ln D_ii ] is diagonal matrix with zero off-diagonal and ln of D's diagonal entries in the same spot. This completely characterizes the solutions to A^B. What is the meaning or application? Who knows.

    • @MagicGonads
      @MagicGonads 5 years ago +1

      @@paulkohl9267 I don't see why we should assume that any property of e is preserved for matrices aside from it being the inverse of any branch of ln. eg e^(ln(A)) = A yes, so A^B = e^(ln(A))^B but that doesn't mean that we can bring that power down into the power of e, I don't think power rules necessarily apply here, so that's why I take this method. Though are you saying that the new power rule should be that you have to use the tensor product to bring it down?
      Also I prefer mine still since AB is a regular matrix product and that is what we use for the polynomials in the infinite series representation that defines exp(A) not something that isn't closed like the tensor product (consider how would you add two matrices of completely different dimension? that is what you would get if you use the tensor product for the definition of exp(A)). But I don't know much about the tensor product, both methods should reduce to regular exponentiation when both A and B are 1x1.
      And these matrices would still be multivalued, because whenever you take the ln of a complex eigenvalue you can take all the branches of those logarithms into account, which means for every complex eigenvalue of either A or B you have an extra countable degree of solutions to A^B
      Recall I said that k and j are *logarithms* of the diagonal components of D and E respectively, which are the eigenvalues of A and B. So in fact for every element on the diagonal in A^B, if all eigenvalues of A and B are complex, gives you a bunch of e to the products of complex logarithms, which makes the results of the branches non-trivial.

  • @Flanlaina
    @Flanlaina 4 years ago

    Challenge:
    1. Sine of a matrix?
    2. What happens if we take the square root of a matrix with negative eigenvalues?

    • @drpeyam
      @drpeyam  4 years ago +1

      1) Already done (with cosine)
      2) Imaginary numbers

    • @Flanlaina
      @Flanlaina 4 years ago

      What if we take the square root of a matrix with complex eigenvalues?

    • @aaronsmith6632
      @aaronsmith6632 4 years ago

      @@Flanlaina You'll get complex values in the resulting matrix.

  • @quantumsoul3495
    @quantumsoul3495 5 years ago +1

    Nice dog !

  • @blackpenredpen
    @blackpenredpen 5 years ago

    found it: knowyourmeme.com/memes/if-a-dog-wore-pants

    • @drpeyam
      @drpeyam  5 years ago

      Hahaha, that’s the one!!!

  • @kma6881
    @kma6881 3 years ago

    A complex matrix raised to a complex matrix, maybe?

    • @drpeyam
      @drpeyam  3 years ago +1

      I mean same strategy :)

  • @chimetimepaprika
    @chimetimepaprika 2 years ago

    Silly Rabbit! [ma]trix are for kids.

  • @Flanlaina
    @Flanlaina 4 years ago

    Next: do tan(A)

    • @drpeyam
      @drpeyam  4 years ago

      I did arctan(A)

    • @linhcoi3847
      @linhcoi3847 4 years ago

      Just do sin(A)/cos(A)

  • @scorch25able
    @scorch25able 3 years ago

    Crazy thought: matrix root of a matrix lol

    • @drpeyam
      @drpeyam  3 years ago

      Wow that would be cool

  • @rolandd2804
    @rolandd2804 4 years ago

    I like that shit :D

  • @euler7586
    @euler7586 5 years ago

    1/8

  • @allaincumming6313
    @allaincumming6313 5 years ago

    Hell yeah

  • @housamkak646
    @housamkak646 5 years ago +1

    First oneee

    • @drpeyam
      @drpeyam  5 years ago

      Congrats!

  • @KANA-rd8bz
    @KANA-rd8bz 11 months ago

    xd

    • @KANA-rd8bz
      @KANA-rd8bz 11 months ago +1

      love ya man

  • @renardtahar4432
    @renardtahar4432 3 years ago

    hhhhhhhhhhhhhhhhhhhhhhhhhhh good

  • @stevenwilson5556
    @stevenwilson5556 4 years ago

    badly behaved.. lol.. "Bad Matrix!!"