Hey everyone, thanks for watching this video! If you have any questions or comments that you'd like me to see, please leave them under this comment so that I get notified and can respond. Cheers!
Hi Brian, Could you please share the code?
@@ahmadalghooneh2105 I knew someone would ask and I should have been prepared. Yes I can ... but the code was written quickly to get this video out, so it is horribly written. I'll get it up on GitHub shortly and let you know.
Brian Douglas, thanks so much Brian, no matter the order. I just wanted to know how you managed to make the plots look so cool.
A huge fan of yours,
Regards
Best presentation. Deep knowledge and great experience come through in your video.
It's good to see you on MATLAB's channel! Loved your videos all along
As usual, Brian Douglas creates another great video! I especially like the easy to understand example at 2:46 😉.
Two Legends! I like both this video and your video on LQR control. Your video helped us implement our project for Robot Control in grad school!
@@speedracer1702 Been more than 2 years, but here goes, haha. I'm in an undergrad robotics class and we're mainly dealing with PID. Anything cool you can share about LQR control for your robot? What did it do, and why did you go with the LQR method? Anything, really. Very curious to learn more, thanks!
"That's how you rotate to pick up a cow" is a sentence I never thought I'd hear.
Hey Brian, what an absolutely mind-boggling video. I can now literally explain to my professor what LQR control is all about, thanks to you. Just brilliant! Who says only actors can be superstars? You are one. Thanks a lot.
Your dedication to the animation is something to behold! You even animated the magnitude of the thrusters 😂
I've got a test in two days and your new videos on the topic are like a gift from god 👏🏼
This is the best explanation of LQR I have ever seen. Thank you Brian. :)
Videos like these could have universities closing their doors and leave them with only certification or distance learning. I mean, who really needs professors' lectures after this? :) I have been familiar with pole placement and LQR for years, but I really wanted to find a video explaining them in simple words for my own sake. The whole series is brilliant; I am leaving this comment of gratitude on this one in particular because it is the one I stumbled on first. "Great job" is an understatement here.
Brian, you can't make things look so easy man. Dope work.
Thank you!
Easy and excellent explanation! Thanks a lot :)
🤗
This is the funniest video I've ever seen about something related to engineering
I'd like to thank Brian for that fantastic take on the LQR explanation. I had understood the math behind it, but I hadn't gotten the intuition I should have while developing my LQR model, and now I feel really ready to give it a shot.
I'm studying control and we have a state-space module, which was easy at first but gradually got harder to understand. Your videos really helped me a lot, since you made it really simple and even covered different methods, but what helped me the most was showing how they can be applied to different systems. Now I think state-space is actually really cool, and I would like to learn more about it and even apply it in my personal projects. Thank you so much.
I really appreciate your comment! That's great to hear :)
Holy shit, my professor went through all the mathematics in a way where I couldn't even track the concepts. Thank you for this video
You are just the best tutor on YouTube.. thanks ..
I appreciate that!
The issue I come across is choosing "optimal" values for Q and R. I prefer pole placement for SISO systems: it is easy to determine the response and effort. I usually want to place the poles on the negative real axis, and I can move them to the left (more negative) until I hit one of two limits. One is output saturation, as mentioned; more often, though, the problem is feedback resolution. Placing the closed-loop poles on the negative real axis does not guarantee that there will be no overshoot: the closed-loop zeros can cause overshoot if they are closer to the origin than the closed-loop poles, so sometimes the closed-loop zeros must be placed too. LQR has an advantage for MIMO systems. If the Q matrix is chosen correctly, the closed-loop zeros will be close to the closed-loop poles, almost canceling them out and improving performance.
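For anyone who wants to play with the pole-placement trade-off described above, here is a minimal sketch in Python rather than MATLAB (the SISO plant is made up for illustration, and scipy's `place_poles` stands in for MATLAB's `place` command):

```python
# Illustrative SISO plant (made-up numbers), x' = A x + B u
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [0.0, -1.0]])
B = np.array([[0.0],
              [1.0]])

# Place both closed-loop poles on the negative real axis, as described above.
result = place_poles(A, B, [-2.0, -3.0])
K = result.gain_matrix

print(K)
print(np.linalg.eigvals(A - B @ K))  # lands at the requested poles, -2 and -3
```

Moving the poles further left (say, to -8 and -9) gives a faster response but a larger gain, which is where the saturation and feedback-resolution limits show up.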
Brilliant class. I wish I had learned control systems from a professor like you. Thanks.
I haven't finished the video yet but I found exactly what I was looking for. Thank you for sharing this in such an intuitive way to understand what I am doing haha. Keep shining :D
Wow optimal control is amazing!! Way cooler than PID
I knew it was brian the moment this video started, you are truly an amazing educator, some of your videos built my foundation in control!
Excellent introduction to LQR. Thank you Brian
You mentioning Lum and Brunton feels like watching Spiderman in Multiverse 😂
What's crazy to me is that Lum and Brunton both live within 20 minutes of me (what are the chances!) and I get to hang out with them occasionally.
Crazy ! @@BrianBDouglas
Best video I have ever seen on YouTube
Amazing explanation! Better than 1 year of college!
This is definitely the best and simplest explanation I've seen so far
Excellent illustration
I wish Brian would create more tutorials... he is the best.
Dear Brian,
It is one of the best explanations. Thank you for your time and effort.
Nice video...helps a lot to get a clear explanation of optimal control and how to actually use it in a realistic manner.
Thank you so much! Your video helped me a lot while I'm studying MPC
Really good video, content and form, both great! Very enjoyable
I wish I could present this way
This is awesome! It's much easier to understand. Thank you!
Great to hear! I also have a video on the Algebraic Riccati Equation which might help as well: th-cam.com/video/ZktL3YjTbB4/w-d-xo.htmlsi=MWLW8nn0S9zPjjCG
Thank you! IFSP (Control and Automation Engineering), São Paulo, Brazil!
Wowww. Man I am just amazed. Really been following your videos for a long time. You are awesome!!!!
OMG I think I am gonna ace my Controls class now!
watched all 4 parts.. keep up the good work ;)!
Thanks Man !
Your videos are better than a whole academic education!! Please create more videos
Thank you Brian !!! Wonderful explanation !!
Thank you so much. Your lecture really helped me understand the control system basics.
Awesome video. Good starting point and gives a good essence of the subject matter.
Glad you liked it!
Wow perfectly explained
Great video! Glad for another reference for LQR intuition and usage!
Great and simple explanation. Thank you. I hope I will be that good and make great videos too.
Wonderful explanation sir, thank you.
UW AE511 brought me here. Great video, will be checking out more in the future.
This series of videos is amazing and well described. Cheers.
What a wonderful lecture. Thank you so much
This is a really, really good video for understanding the basics of LQR... thank you so much, sir
What an excellent explanation
Awesome! Glad you liked it.
Excellent exposition. Good effort. Well done
Good, easy-to-understand video. Shout out to Professor Lum.
It was really nice to see your videos; they were quite helpful as a refresher and also helped me better understand what I studied some years ago. Many thanks!
Sir, I have just watched this video lecture. Thank you for explaining it in simple terms. I have a question: why should the state be zero for an observer or other design? Can you please explain?
Wow! This is exactly what I needed to know to develop interest in this. Cool animation model btw
Why do professors avoid this? They explain everything in an abstract way. Thank you, Brian. You are great.
If someone is having problems finding the scaling term, try using Simulink to see the step response of the system. The MATLAB and Simulink responses are very different. This might be my mistake in mishandling MATLAB, but if anyone is trying to implement this in a real system and nothing works, it might be because of this. To give you some context, the step response in MATLAB gave me a scaling term of around 3000, while with Simulink I get a gain of around 1.6, and with that my system works perfectly.
Excellent explanation
Great frame at 3:45: "you don't have infinite money to maximize your performance, and you don't have unlimited time to minimize your expenses."
@Brian Douglas I love you for making these videos!!
Thank you for the explanation ! You made it very easy to understand :)
Why, in LQR, do we not need to introduce Kr for the reference error anymore?
Thank you so much for this video, it's so straightforward and easy to understand! :)
really awesome explanation, thank you!
Hoping for lessons on sliding mode control and its applications in the coming days.
such a great video, I like all your other videos on control theory as well, thank you!
thank you sir for the great explanation
As always, great video, Brian. I have a question regarding LQR: what is the difference between LQR and MPC, both practically and mathematically?
Where can I find the foundation for why I can apply the gain matrix to the state error?
min: 1:35
I was looking for that too, and I did not find anything. If you find any references, please share them with us.
THAT'S SO HELPFUL!!!!
thnx bro.....you are the best
Fantastic video!
Awesome as usual !
What an amazing explanation!
You could use low gain feedback at the end of the video
You are the best!
You saved my credit, thanks a lot
Amazing video! Thanks
Great video as always Brian!
Hey Brian, would it be possible to continue this spacecraft example but using MPC? I'd appreciate it greatly. Love.
Excellent video as always, but why leave out the part about solving the algebraic Riccati equation to find the optimal solution to the cost function, since it's just matrix math and doesn't require the complex derivations?
In retrospect that probably would have been better. I didn’t for two reasons though: 1) I usually shoot for 12-15 minute videos and this was getting long and 2) I want to do another video in the future just on this topic. Thanks for the comment.
@@BrianBDouglas Oh, and I just want to say thank you for your videos both here and on your channel. I'm in my 3rd year of a robotics engineering course, and I don't know if I would have made it through my 1st year without your videos.
@@21stcenturycotyk 👍
UW AE511: Great video, thank you!
Well said👏
Great video with great explanations.
Thank you. A big fan of your control lessons.
Where is the cow in the code on GitHub?
perfect explanation!
Very good video. Thank you so much for your work.
Brian, this video is only for regulation, and the controller will push all the system states toward zero. What if you want your states to be pushed toward a particular desired point?
I guess the cost function would be different then; it would be integral(e.T*Q*e + u.T*R*u), right? But how do you implement that in MATLAB using the lqr command?
@@gustavstreicher4867 Hi, thank you for taking the time to answer my question. You see, you can actually do this when you are constructing the cost function and optimizer with any numerical optimizer, but when you are using MATLAB there is a problem, because MATLAB's lqr command expects the A, B, Q, and R matrices, and you don't have the dynamics of the error in the form edot = A*e + B*u. That's my problem.
@@ahmadalghooneh2105 Hey, I just noticed you actually implied the same thing in your cost function by using e. Sorry about that. I think it doesn't really matter what values you want the states to go to, because the lqr function essentially produces a regulator solution that implies zeta and wn values. The same damping and natural frequency are present in the natural response as well as in tracking a step input, for example. So the same K solution will work for whatever state reference you are tracking.
@@gustavstreicher4867 Yes, dear Gustav, what you say is absolutely true: the K works for tracking any reference. Yet my problem was how to include the error in the cost function, and I found the solution: you can use the servo control type 1 construction for the state space; you can find it in Ogata.
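For anyone finding this thread later, here is a rough sketch of that servo type 1 (integral-augmented) construction, in Python rather than MATLAB; the double-integrator plant and the weights are made up for illustration, and scipy's `solve_continuous_are` does the Riccati step that `lqr` performs internally:

```python
# Sketch: LQR with an integral (servo type 1) augmentation so that the
# tracking error appears in the cost function. Plant and weights are
# illustrative, not from the video.
import numpy as np
from scipy.linalg import solve_continuous_are

# Plant: x1 = position, x2 = velocity, u = force (unit mass)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])  # we track the position output

# Augment with the integral of the tracking error e = r - C x:
# z' = r - C x, so the augmented state is [x; z].
A_aug = np.block([[A, np.zeros((2, 1))],
                  [-C, np.zeros((1, 1))]])
B_aug = np.vstack([B, np.zeros((1, 1))])

Q = np.diag([10.0, 1.0, 100.0])  # weight the error integral heavily
R = np.array([[1.0]])

# Solve the continuous algebraic Riccati equation and recover the gain.
P = solve_continuous_are(A_aug, B_aug, Q, R)
K_aug = np.linalg.inv(R) @ B_aug.T @ P

K, Ki = K_aug[:, :2], K_aug[:, 2:]  # state-feedback and integral gains
print(K, Ki)
```

Feeding the augmented A and B to the Riccati solver is the whole trick: the integrator state z accumulates the tracking error, so the optimizer penalizes the error through Q without ever needing the error dynamics in the form edot = A*e + B*u.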
Amazing explanation, thank you!
The MATLAB code doesn't work for me. I get the following error:
Unrecognized function or variable 'rot'.
Same here...
@BrianBDouglas Great work, thank you very much! Question: did I misunderstand this, or did you not have to calculate Kr here because Kr = 1 works fine for this example? In other cases do we also need to calculate Kr?
This example doesn't need a Kr term since we're trying to drive all of the states to 0 (a regulator). If this were a tracking problem (with non-zero states as our goal), then a Kr term could be needed.
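A rough sketch of what calculating that Kr could look like for a SISO tracking setup, in Python with illustrative numbers (not the spacecraft model from the video):

```python
# Sketch: a reference-scaling (precompensator) gain Kr for steady-state
# tracking. The plant and weights are made up for illustration.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])  # tracked output y = C x

# LQR gain via the algebraic Riccati equation (the step lqr() performs)
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

# With u = Kr*r - K*x, the steady state satisfies 0 = (A - B K) x + B Kr r.
# Forcing y = r at steady state yields the precompensator gain:
Kr = (-1.0 / (C @ np.linalg.inv(A - B @ K) @ B)).item()
print(Kr)
```

For the pure regulator in the video, r = 0, so the Kr*r term drops out entirely and Kr never needs to be computed.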
Hi Brian, thanks for your lecture on LQR. You and Chris have done a great job on this. My question is about the off-diagonal entries that keep my matrix positive semidefinite (or positive definite in the case of R). How can I choose the entries of an nxn matrix for n >= 3? It's easier for 2x2 to just pick a number and repeat it, just like zero if I am using an identity matrix, but n > 3 is my question. Secondly, could you please also do a video on MPC?
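On picking the diagonal entries for bigger matrices: one common heuristic is Bryson's rule, where each diagonal entry of Q and R is one over the square of the largest acceptable value of the corresponding state or input, with the off-diagonals left at zero (which keeps the matrices diagonal and positive definite). A Python sketch with invented limits:

```python
# Sketch of Bryson's rule: diagonal entries are 1 / (max acceptable value)^2.
# The limits below are made up for illustration.
import numpy as np

x_max = np.array([0.5, 2.0, 0.1, 1.0])  # max acceptable deviation per state (n = 4)
u_max = np.array([10.0, 5.0])           # max acceptable effort per actuator

Q = np.diag(1.0 / x_max**2)
R = np.diag(1.0 / u_max**2)
print(Q)
print(R)
```

This scales to any n and is usually just a starting point before iterating on the weights.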
Hi, can you explain how the angular acceleration depends on the angle by a factor of 0.01? I was thinking the dynamics should be something like T = I*(angle)'' with no angle terms in it. Thanks for such an amazing video, btw
I would like to know this as well, did you figure this out? :-)
Great job, Thank you Brian!
It's really interesting and the explanations are really nice. I didn't find the R_tuning.m file; can you tell me where I can find it? Thank you so much
Where is the cow 🐮? Is there any way to get the code for that as part of the animation?
Thanks for the awesome video!!
Ha! The cow was a bit of video editing trickery so it's not in the code. But the rest of it does exist in the visualization if you check out the code in the description.
@@BrianBDouglas haha, well it was awesome! Little stuff like that makes it way more fun; I was going around showing my colleagues!
Your videos are definitely helping me with OJT. Let me know if you're ever interested in visiting NASA AFRC and maybe giving a talk to our new professionals group, and I'll try to coordinate something!
Just brilliant!
Forever in your debt. I have a question about timestamp 6:49. I'm trying to make sense of the relationship between the integral equation and its meaning, which is {the cumulative error of the states plus the cumulative effort of the actuator}. I don't understand how the first part is equivalent to the cumulative error of the states. Are you assuming that the state vector should be at a reference of 0? That doesn't make much sense to me. Intuitively, I would expect the error of a state at a given time to be calculated as: the steady state required by the reference minus the current state.
That's exactly right! This is a regulator, which means it's trying to drive the system states to 0. Like you said, if you want this to follow a reference, you can redefine the origin of your states by subtracting out the reference, but this adds in some steady-state error, which is usually countered with a feedforward term or an integrator. Look up the LQI function in MATLAB to see what I mean.
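A quick Python sketch of that origin shift, with a made-up double-integrator plant (for this particular plant the shifted reference is an equilibrium, so no feedforward is needed and the steady-state error goes to zero):

```python
# Sketch: tracking by shifting the origin, u = -K (x - x_ref).
# Plant and weights are illustrative, not from the video.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Regulator gain from the algebraic Riccati equation
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

x = np.array([[0.0], [0.0]])
x_ref = np.array([[1.0], [0.0]])  # step to position 1, zero velocity
dt = 0.001
for _ in range(20000):            # 20 s of Euler integration
    u = -K @ (x - x_ref)
    x = x + dt * (A @ x + B @ u)

print(x.ravel())                  # settles near [1, 0]
```

For plants where A @ x_ref is not zero, the same loop would show the residual steady-state error that the feedforward term or integrator has to remove.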
Hey Brian, can you help me with this problem? I am working on a 3-omni-wheeled mobile robot. In the robot frame, Xr' = A*Xr + B*U, where Xr = [xr', yr', theta'], is a linear state-space model. But when I use global coordinates with the rotation matrix R(theta), that is, Xr = R(theta)*Xw, then Xr' = R(theta)'*Xw + R(theta)*Xw'. It becomes a nonlinear model. How can I use an LQR controller to control this robot for trajectory tracking? Thanks!!!
Great video! Thank you so much 👍