I stumbled upon this 25 years ago in a dissertation and was puzzled by what a matrix as an exponent could mean. Unfortunately there was no YouTube back then to give me the answer. Thanks for clearing up another impediment before I die.
Thank the math gods for YouTube!
Shorter of breath and one proof closer to death.
now you can die in peace brother 😸 /j
Exponential matrices are useful in solving relativistic wave equations. (Dirac equation for Hydrogen atom).
They actually occur in any textbook on ordinary differential equations that deals with systems of linear ode's
I speak very little English, but I understood most of the explanation, seriously thank you very much from Bolivia
In general, you can always find a polynomial cancelled by your matrix A (the characteristic polynomial, or the minimal polynomial if you're lucky), and then you can do 2 things to make the computation of the matrix powers easier:
1) naive: use that to get a recursion relation between A^n and smaller powers of A, which you have already computed
2) smart: for any integer p, compute the Euclidean division of X^p by said polynomial: when replacing X by A, the term which is a multiple of said polynomial just cancels out, and you are left with a simple polynomial in A of degree < N, where N is the size of your matrix. This works nicely if you have an explicit expression for your matrix A and the polynomial it cancels (see the sketch below).
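A minimal sketch of approach 2) in Python/NumPy for the 2x2 case (the matrix and the power p are just illustrative; Cayley-Hamilton gives A^2 = tr(A)*A - det(A)*I, so the remainder of X^p always has the form a*X + b):

```python
import numpy as np

# Illustrative 2x2 matrix: by Cayley-Hamilton, A^2 = tr(A)*A - det(A)*I,
# so X^p mod the characteristic polynomial is a*X + b, and A^p = a*A + b*I.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
tr, det = np.trace(A), np.linalg.det(A)

a, b = 0.0, 1.0          # X^0 = 0*X + 1
p = 10
for _ in range(p):
    # multiply (a*X + b) by X, then reduce: a*X^2 + b*X = (a*tr + b)*X - a*det
    a, b = a * tr + b, -a * det

Ap = a * A + b * np.eye(2)
print(np.allclose(Ap, np.linalg.matrix_power(A, p)))  # True
```

(A real implementation would reduce X^p by square-and-multiply instead of p single steps, but the idea is the same.)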
Love the linear algebra. Keep it up Dr. P.
Also Dr. P, you should definitely do a video on general matrix exponentiation. I would find such a video very satisfying.
sir you are a gem❤
Hi Dr. Peyam! I started my Master's degree last Tuesday, and because of it I had my eyes on papers and books for the last four days trying to understand this kind of exercise. I found a way to solve it, but I wasn't completely sure about my solution until I found your video! Your video confirms all my conclusions about this topic, thank you very much!
Thank you!!!!
Extraordinary teaching, sir; I liked it very much.
This is so cool, thanks! This is an idea I stumbled upon in college, and it's neat to see that it has real application in mathematics.
You don't need it to be diagonalizable, though, since you can always raise it to a power by hand; it's just really hard to calculate without eigenvectors.
Using Jordan blocks can make it easier too.
i dont understand how you would find A^n directly, since the direct method depends on knowing what n is
You can't do it because it's an infinite series. For example, how would you find exp([1 2; 3 4])? There's no way that you can multiply this matrix infinitely many times by itself; the series would be divergent, unless you want your matrix to become [inf inf; inf inf].
@@nathanisbored I'm replying even though it's two years too late. In the formula A^N = P D^N P^-1, the D matrix is diagonal, so raising it to the Nth power merely means raising its diagonal entries to the Nth power. So D^N is simple to calculate, and you just stick it between P and P^-1 to get A^N.
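A quick NumPy sketch of that formula (the matrix here is just an example and is assumed diagonalizable):

```python
import numpy as np

# A^N = P D^N P^{-1}: only the diagonal entries get raised to the Nth power.
A = np.array([[3.0, 2.0],
              [1.0, 4.0]])
eigvals, P = np.linalg.eig(A)      # columns of P are the eigenvectors
N = 5
DN = np.diag(eigvals ** N)         # D^N, computed entrywise
AN = P @ DN @ np.linalg.inv(P)
print(np.allclose(AN, np.linalg.matrix_power(A, N)))  # True
```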
This is an absolute GOD TIER explanation. Thank you
I forgot about this, then I found your video after getting confused about how to solve for the matrix power of an exponential in a quantum computing function. I love your channel, keep up the great work!
Very nice presentation. You might want to do something about a minor error at 8:07, where you said (correctly) "e^A = P e^D P^-1" but wrote "e^A = P e^D e^-1".
pee xd
Thank you so much professor, this helped me a lot.
I love this enthusiasm for maths, you earned a sub sir. Keep up the good work.
That was a fantastic explanation....Dr .P !!
Soon we will have matrix derivatives on this channel
Hahaha, great idea!!!
Amazing video.. Thank you so much Dr. Peyam
Thank you so much! from a Tel-Aviv University mechanical engineering student!
Hi! Open University Computer Science student
this was a great video to help me in times of need, mainly right before the exam; thank you for breaking everything down so clearly.
I remember that these matrix exponentials were very useful in computer science for modeling dynamic systems with one or more feedback loops. I wonder where I can find that mysterious Feigenbaum constant in here.
Thank you so much
Internet is a blessing
Thank you very much! I am always a big fan of your YouTube teaching videos. Here, at 10:50, I am puzzled. You wrote x(t) = e^(At) * c; however, my textbook says x(t) = e^(At) * x(0), where x(0) = P*c.
The final answer should be x=c1*e^t*(5,1) + c2*e^(5t)*(1,1)
Again, thank you so much for teaching us! I greatly appreciate it.
I mean c is an arbitrary constant, so if c is arbitrary, so is P*c. The second formula is a bit more specific
Man, you answer all the questions that I've been looking for in books and, as always, I never find them there.
Your videos are really great ! Thank you sir ❤️
In general you can use the Jordan normal form.
Thanks for clearing this up; keep up the great work, mate.
Dr Peyam, we love your videos and they are quite informative and good refresher. I have one suggestion, when you refer to some previous video for some concept, you can add the link in info button or in the description.
Thank you for the lovely content.
What a great video! Thank you for the wonderful explanation :)
7:42 ... but why?? why does this make sense to do, why is it "allowed"
Use the formula for exponentials.
Suppose D is a diagonal matrix. Then e^D = ∑[n=0,∞] D^n/n!.
Remember that sums of matrices are computed component-wise, so when doing an infinite series of matrices, we are really doing an infinite series of each component of these matrices.
Let's take the sum of the (i,j)-entries of all of the matrices where i≠j. But for any n, the (i,j)-entry of D^n is 0. So we have that the sum of the (i,j)-entries is ∑[n=0,∞] 0/n! = 0.
Now, for i=j, suppose d is the (i,i)-entry of D. Then the (i,i)-entry of D^n is d^n. So the sum of the (i,i)-entries of these matrices is ∑[n=0,∞] d^n/n! = e^d.
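A two-line check of this claim in Python/SciPy (the diagonal entries 1 and 5 are chosen arbitrarily):

```python
import numpy as np
from scipy.linalg import expm

# For diagonal D, e^D just exponentiates the diagonal entries.
D = np.diag([1.0, 5.0])
print(np.allclose(expm(D), np.diag(np.exp([1.0, 5.0]))))  # True
```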
Thank you so much. I understand everything you explained.
Great Explanation!
Your sweet video made my day ♥
Straight to the point. Thanks.
How do you define the factorial of a matrix, A!?
I would say A_5! = A_5 * A_4 * A_3 * A_2 * I, where I is the identity matrix and A_n has a determinant 1 lower than A_(n-1).
Or use Gamma(A_5), something like that. But what interests me more: is there a quick rule for how the result changes if you permute the factors of a matrix product?
Result1 = A*B*C*D*E*F*G --> Result2 = F*A*G*D*B*E*C, without recalculating the entire thing. The determinants of the results have to be the same, I think.
Not sure :) I mean what is even f(x) !
@@drpeyam Greetings from Germany, by the way. My English is not the best.
@@Handelsbilanzdefizit pretty sure you could use the gamma function. not certain how you would go about calculating the integral tho.
as for the rearranging thing: yep, the determinants of result 1 and 2 have to be identical. the only rearranging rule i know is A*B = C => B^t*A^t = C^t (^t is transpose)
let's see...
A*B*C*D*E*F*G = R
G^t*[A*B*C*D*E*F]^t = R^t
G^t*[D*E*F]^t*[A*B*C]^t = R^t
G^t*F^t*[D*E]^t*C^t*[A*B]^t = R^t
G^t*F^t*E^t*D^t*C^t*B^t*A^t = R^t
huh... so you can just reverse the order no matter how many factors. don't think this will be useful for any permutation other than just the reverse transpose, though.
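A quick numeric spot-check of the reversal rule with random matrices (an illustration, not a proof):

```python
import numpy as np

# (A B C)^T = C^T B^T A^T: the transpose reverses the order of the factors.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
print(np.allclose((A @ B @ C).T, C.T @ B.T @ A.T))  # True
```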
thx for the video 😀really simple and exact examples !!
Please tell me how P and P^(-1) act as scalars with respect to the sum.
I believe it’s that they aren’t affected by N, so by the properties of a series, you can take them out. Constant factor and whatnot.
Nice. Gonna catch up to this whole linear alg series soon.
I just use Laplace transform. We use this all the time in Control Systems.
Hi Dr Peyam: Thank you for the helpful video on how to compute matrix exponentials. I need your help on how to find a matrix D such that e^D = C (a given square matrix).
D = ln(C) so take ln of all the eigenvalues
@@drpeyam: Thank you for the prompt reply, dear Dr. I have tried it for D = (a, b; c, d), i.e. d11 = a, d12 = b, d21 = c and d22 = d, where C = (-1, 0; 0, -4) is the diagonal matrix with c11 = -1, c12 = 0, c21 = 0 and c22 = -4. If I use your method I can't find such a matrix, since ln(-1) is not defined, but I have seen the question stated as 'find a matrix that satisfies this'. Thanks.
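A minimal sketch of the eigenvalue-log recipe above, assuming C is diagonalizable with positive eigenvalues (which is exactly where the ln(-1) problem in this thread comes from):

```python
import numpy as np
from scipy.linalg import expm

# If C = P diag(lam) P^{-1} with lam > 0, then D = P diag(ln lam) P^{-1}
# satisfies e^D = C.
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 1 and 3, both positive
lam, P = np.linalg.eig(C)
D = P @ np.diag(np.log(lam)) @ np.linalg.inv(P)
print(np.allclose(expm(D), C))      # True
```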
I need your help! Please integrate e^x/x with no limits or domains like 0,inf,etc.
Not possible, I think
That's why I need your help because impossible itself says I'm possible
If limits are there we can use Simpson's rule etc.; otherwise it is not possible.
With your presentations everything is so easy! Is there any video on the Jordan form?
Yeah
Check out my video on the Jordan Canonical Form
Thanks Doctor ... it’s perfect 👌
thank you for the video! It really helped me :)
I like ur passion
omfg this is insane, tyssm sir
here's a question that came to my mind:
set β := { [1, 0; 0, 0], [0, 1; 0, 0], [0, 0; 1, 0], [0, 0; 0, 1] } (*) of 2x2 matrices is a basis for the linear space of 2x2 matrices.
i've checked that the set
exp(β) := { exp(B) | B in β } = { [e, 0; 0, 1], [1, 1; 0, 1], [1, 0; 1, 1], [1, 0; 0, e] } is a basis for this space as well.
is that true for any basis of the linear space of NxN matrices?
(*) here: a comma means 'in the same row', and a semicolon means 'in the next row'.
I have an exam tomorrow and stumbled upon an example: calculate e^(At) if A = [[3,0],[2,4]]. I've read that if the matrix is upper triangular or already diagonal, we only raise e to the power of the diagonal entries and differentiate (d/dt) in the non-diagonal places. That would make this e^(At) = [[e^(3t), 0], [2t·e^(2t), e^(4t)]]. However, I haven't found the corresponding rule for a lower triangular matrix such as the one in this example; is it the same as for upper triangular matrices?
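One way to sanity-check a claimed closed form for e^(At) is to compare it against scipy's expm at a specific t (a numerical spot-check, not a derivation; the value t = 0.5 is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 0.0],
              [2.0, 4.0]])
t = 0.5
claimed = np.array([[np.exp(3*t), 0.0],
                    [2*t*np.exp(2*t), np.exp(4*t)]])
# If this prints False, the claimed rule does not hold for this matrix.
print(np.allclose(expm(A*t), claimed))
```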
I have heard that diagonalizable matrices are dense in the matrices. Could you use continuity of the matrix exponential and this fact to calculate the matrix exponential of a non-diagonalizable matrix?
Well, except the P in PDP^-1 might change as well! The Jordan form is much better in those situations
Thank you so much! Great video as always!!
thanks mate u helped a lot
Thanks. How can we compute A^(1.5)?
hi, can we replace the matrix A by its trace to solve e to the power A when A is a Pauli matrix?
Taylor series are wonderful!
I didn't get why e^D is what you showed :( I feel like you used what we were trying to work out; the idea was to learn how to compute e^(matrix), but you just showed the answer for e^D.
Yeah he said he wasn't gonna explain it I thought that was the point
I know. I feel I need to know why. 😩
It is time to introduce Dirac notation, and the Einstein-Riemann notation.
Make videos on matrix derivatives
I'm still trying to wrap my head around the concept. I know this is used for solving differential equations and I'm trying to figure out what it actually means in an intuitive way, but it's not intuitive. If you plug in a sequence of integers on a superdiagonal or subdiagonal and the rest is zero, you get a triangular matrix of binomial coefficients. How? I'm guessing it's probably something about the factorials in the power series reducing to n choose k, but I don't get how it works.
Another interesting property: if your matrix is skew-symmetric, its exponential is orthogonal. I tried sticking the same sequence on the superdiagonal and a negative copy on the subdiagonal, and I got something that looked pretty random. Then I tried A'*A and got back the identity (that's MATLAB for A-transpose times A). I tried a bunch of different sequences, and it seemed like as long as there was nothing on the main diagonal and everything else was anti-symmetric, the resulting matrix was even orthonormal.
I'm having a hard time seeing a pattern in what was happening.
I'm probably gonna try some even weirder stuff. Like, what does the gamma function do to a matrix? Hell, maybe I'll even try some Bessel functions just because I can.
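The skew-symmetric observation is easy to check numerically (it follows from expm(A)^T = expm(A^T) = expm(-A) = expm(A)^(-1)); a sketch with a random skew-symmetric matrix:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M - M.T                              # A^T = -A: skew-symmetric
Q = expm(A)
print(np.allclose(Q.T @ Q, np.eye(4)))   # True: Q is orthogonal
```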
Thank you for your video! I was wondering if you could explain why the answer you get in your video doesn't seem to match the one calculated by WolframAlpha?
Believe in the math, not wolframalpha 😁 I’m definitely right here
@@drpeyam So WolframAlpha is wrong about something!? :o I'll submit a ticket and lyk if they say something interesting!
Hahaha
thank you doctor Pe^De^-1
Awwww
why not use the Cayley-Hamilton theorem to do it instead???
If all you need is a power series of a matrix to use PDP^-1 (and the fact that it's diagonalizable), would it be possible to find 1/(1-A) for a matrix, since f(x) = 1/(1-x) is represented by the sum from 0 to inf of x^n for |x| < 1?
Yes, of course!
Imagine, in particular, how easy it would be to find that inverse if A^k = 0 for some k
Nice result!
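A small sketch of the nilpotent case (A is strictly upper triangular, so A^3 = 0 and the series terminates):

```python
import numpy as np

# If A^k = 0, then (I - A)^{-1} = I + A + ... + A^{k-1} exactly.
A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])
series = np.eye(3) + A + A @ A
print(np.allclose(series, np.linalg.inv(np.eye(3) - A)))  # True
```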
This possibly means you can find sines and cosines of matrices? With presumably a similar argument.
Yes and there’s a video on that actually
Could you do the logarithm of a matrix? Or the log of a matrix with a matrix base?
Yeah, probably, at least log(1+A) using power series, and you can get log_A (B) using ln(A) (ln(B))^-1
@@drpeyam Matrix factorial? Maybe we could approximate it using sterlings formula anyway.
hOREP Stirling’s
@@dougr.2398 If you want to go grammar Nazi on me, please ensure that you use a full stop at the end of your sentence. Thanks.
Thanks for this nice explanation, but in the general case not all matrices are diagonalizable. What should we do then?! Use the Jordan form? But calculating the powers of J is not that easy. Linear control systems, and in general dynamic systems that can be described by state-space equations as you mentioned in the video, can use this trick if the problem of a non-diagonalizable system matrix is solved.
Yep Jordan form
@@drpeyam Thank you! And nice reply speed 😅👌👍
What about A^A dr peyam?
I did a video on A^B
@@drpeyam you gave me a heart at 4:20 my life is complete
Can you explain gamma matrix functions
Now do a triple integral of a matrix.
I'm here from BPRP (BlackPen_RedPen)... I'm a programmer and this explanation was stunning
Thank you, great explanation, but I don't understand why you multiply by the c vector at the end; I guess the solution of x'(t) = A x(t) is just x(t) = x0 exp(At). Hmm... ok, x(t) is not a scalar function but a vector function.
x0 is your C, except x0 exp(At) doesn't make sense matrix-multiplication-wise, so you do the opposite.
@@drpeyam Thanks, yes x0 goes on the right. By the way there is a very important instance of this problem: the Schrödinger equation :-)
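A tiny sketch of this usage in SciPy (the matrix, initial condition, and t are just illustrative):

```python
import numpy as np
from scipy.linalg import expm

# x'(t) = A x(t), x(0) = x0  has solution  x(t) = expm(A*t) @ x0;
# the exponential goes on the left so the shapes line up.
A = np.array([[3.0, 2.0],
              [1.0, 4.0]])
x0 = np.array([1.0, 0.0])
t = 0.25
print(expm(A * t) @ x0)
```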
I don't get why the exponential of a diagonal matrix can be computed by exponentiating the entries of that diagonal matrix, yet the exponential of an ordinary matrix cannot be computed from the exponentials of its entries; in other words, why do we have to diagonalize it?
Basically because in order to take A^2 you don’t just take the squares of its components, but this is true for diagonal matrices. Matrix multiplication is weird
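A two-line illustration of that point (matrices chosen arbitrarily):

```python
import numpy as np

D = np.diag([2.0, 3.0])
print(np.allclose(D @ D, np.diag([4.0, 9.0])))  # True: diagonal squares entrywise
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.allclose(A @ A, A * A))                # False: matrix square != entrywise
```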
amazing video. one question though: is X' = AX the matrix form of a differential equation? because it looks identical to dx/dt = ax.
Yeah
@@drpeyam cool, so the entries of X are functions and the entries of X' are their respective derivatives, correct?
Yeah
I first saw this in Quantum Mechanics in the context of the time-evolution operator:
en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics)
god bless you so much
8:07 e^-1?
Ofc P^(-1) :)
How did you post a month before?
oh my god ~!! this is.... mathematics. i got it !!!!!
Amazing!
Thanks
THANKYOU
What about a B^A?
e^(ln(B) A)
thank you!
5:37 It reads POOP lmao 😂
(I'm going to fail exams)
A year ago I didn't think I would use this...
So nice
So clearly!!!
The Tabriz crew, much love!
thank you, thank you
Thank you!!!!!!!!!
Why is e^D at 7:45 [e 0; 0 e^5]? I don't know why, but in my mind it should be [e 1; 1 e^5].
Wait that's -not- illegal
Ummm... the e^x series that he wrote is the Maclaurin series... It is only true in the vicinity of zero, not for all values... It is a common mistake.
@profefernandoo4400 that's wrong; don't take my word for it, you can use any convergence test or ask the internet.
golden
❤❤❤❤❤.
60 fps is orgasmic.
Haha 5:35 looks like POOP^-1
sebmata did you know the transpose of POOP is POOP? cool!
Moayd Sparklug POOP^T is
People
Order
Our
Patties
greetings from Karel
A = PDP = PewDiePie?
Hahahahaha
Are you what? 9 years old?
so does that make pew the inverse of pie
wtf