When you are doing the reduction and leaving out some states from your new model, are you assuming that those left out states are already stable? My understanding is that when your system is fully controllable, you can arbitrarily place the eigenvalues. But if you have unstable states that get reduced, are you leaving out information about the stability of your system?
Great question. That is absolutely right. You would need those states that you are truncating to be stable. When we talk about how to find these states in the next lectures, we will look at the controllability and observability Gramians. These won't even be finite if there are unstable states. There are extensions to balanced truncation for systems with unstable states, but that is a bit more advanced.
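To make the Gramian point concrete, here is a minimal Python sketch of the stability check and the Gramian computation. The system matrices are made up for illustration, not taken from the lecture:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable 3-state system (all eigenvalues in the left half-plane),
# chosen only to illustrate the point made above.
A = np.array([[-1.0,  0.0,   0.0],
              [ 0.0, -2.0,   0.0],
              [ 0.0,  0.0, -10.0]])
B = np.array([[1.0], [1.0], [0.1]])
C = np.array([[1.0, 1.0, 0.1]])

# The Gramians are only finite if A is stable, which is why unstable
# states cannot simply be truncated.
assert np.all(np.linalg.eigvals(A).real < 0), "A must be stable"

# Controllability Gramian: A Wc + Wc A^T + B B^T = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian: A^T Wo + Wo A + C^T C = 0
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values rank states by how controllable AND observable
# they are; states with tiny values are candidates for truncation.
hsv = np.sqrt(np.linalg.eigvals(Wc @ Wo).real)
print(np.sort(hsv)[::-1])
```

If A had an eigenvalue in the right half-plane, the Lyapunov equations above would have no positive-definite solution, which is the formal version of "the Gramians won't even be finite."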
In pure ML problems, dimensionality reduction is often addressed. While watching this video the Principal Component Analysis (PCA) method came to mind. It projects the dimensions into a space that maximizes variance. Could something like that be useful in these scenarios? Excellent video series!
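For readers who haven't seen it, a minimal sketch of the PCA idea the comment describes, using synthetic data and the SVD (this is generic PCA, not the method from the video):

```python
import numpy as np

# Minimal PCA sketch: project centered data onto the directions of
# maximum variance, found via the SVD of the data matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 5))  # toy data

Xc = X - X.mean(axis=0)                 # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
r = 2                                   # keep the top-2 principal components
Z = Xc @ Vt[:r].T                       # reduced-dimension representation
print(Z.shape)                          # (100, 2)
```

The connection is real: applying PCA to snapshots of a simulation is essentially Proper Orthogonal Decomposition (POD), a standard model reduction technique, though unlike balanced truncation it ignores the input-output (controllability/observability) structure.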
You teach incredibly well 😍😍😍😍😍
At 5:15, y in the reduced system changes to \tilde{y}, but its dimensions do not change; u is the same in both cases.
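That dimension claim is easy to verify numerically. A sketch with a Galerkin-style projection (the matrix names and values here are placeholders, not the video's):

```python
import numpy as np

# Dimension check for a reduced-order model x ≈ Psi @ x_r:
# the reduced output y_r = (C @ Psi) @ x_r keeps the same number of rows
# as y, and the input u is untouched.
n, r, q, p = 6, 2, 1, 1        # full states, reduced states, outputs, inputs
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((q, n))
Psi = np.linalg.qr(rng.standard_normal((n, r)))[0]  # orthonormal basis, n x r

Ar = Psi.T @ A @ Psi   # r x r
Br = Psi.T @ B         # r x p  -> u unchanged
Cr = C @ Psi           # q x r  -> y_r has the same q rows as y
print(Ar.shape, Br.shape, Cr.shape)  # (2, 2) (2, 1) (1, 2)
```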
Thanks a lot for your amazing videos!
❤
Very nice explanation, dear Sir ❤️
Sir, can you recommend any book on order reduction?
Hey, I have an urgent project on model order reduction for bilinear systems. Can you help me find a source where I can read about it?
How did you record this video? Do you write backwards on a glass pane in front of you? Or are you left-handed and flip the video horizontally afterwards?
I assume he writes normally, but then the video is flipped.
As evidence, look: his hair parts to the right in this video. But if you look at a non-flipped video or picture of him, it parts to the left.
Hi, shouldn't the C matrix at 11:52 be the same size as the square A matrix, so that it's [1 0; 0 10^-10]?
No, it should not be tied to the size of A. IMO, the size of C depends on both the number of outputs y and the number of states x. If y is a scalar, then C should be 1 × 2 (with 2 states x).
@jyxbb0825 Yes, you are correct: C must be of size q × n, where q is the number of outputs and n is the number of states, while A is n × n.
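A tiny worked example of the dimension rule just stated. The matrix values are placeholders, not the ones from the video:

```python
import numpy as np

# With n = 2 states and a scalar output (q = 1), C is 1 x 2,
# not 2 x 2 like A.
A = np.array([[-1.0, 0.0],
              [ 0.0, -2.0]])       # n x n = 2 x 2
C = np.array([[1.0, 1e-10]])       # q x n = 1 x 2
x = np.array([[1.0], [1.0]])       # state, n x 1

y = C @ x                          # output, q x 1 = 1 x 1 (a scalar)
print(y.shape)                     # (1, 1)
```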
top