Once again you are providing an excellent high level overview in very simple terms (for the mathematically trained). Really enjoy your videos.
I'm currently studying a Phd and your work and videos really inspire. Thanks for this great video series!
I have never come across this...what a powerful idea! You are doing a great job covering it. Thx
2 months ago how?
Today is a GREAT day! Professor Brunton uploaded a new vid :)
You are a legend, Steve. I don't care about this method, but I still watch because you make it interesting and understandable.
Thank you for all your videos. This is how all professors should give a lecture at university.
I've been reading up on the compressive sensing literature (e.g., by Richard Baraniuk et al.), and it is really hard to follow for a math lightweight like me. Your explanations are so much clearer. Looking forward to seeing more in this series.
Thank you so much for making this. This helped me a lot. Please make more videos on compressive sensing. 😊
Good morning Professor,
Thank you for the nice videos.
I have one question though: why do we want "the sparsest" s to be our solution? Shouldn't we just look for the "right" s? How can we claim that the sparsest s is the right one?
Thank you,
Matteo
Great question. We want the sparsest vector because we have the observation that signals in nature are almost always very sparse. So solving for the sparsest vector is often a proxy for solving for the "natural" vector. This is extremely peculiar, and not at all obvious at first.
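To illustrate that reply with a small sketch of my own (not from the lecture; the matrix sizes and sparsity level below are made-up assumptions): with a random Gaussian Θ and a truly sparse s, solving min ||s||₁ subject to Θs = y (recast as a linear program via s = u − v with u, v ≥ 0) typically picks out the sparse "natural" vector from the infinitely many solutions of the underdetermined system.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 100, 40, 3                      # ambient dim, measurements, sparsity (illustrative)
Theta = rng.standard_normal((m, n))       # random Gaussian measurement matrix
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Theta @ s_true                        # underdetermined: m < n, so many s satisfy Theta s = y

# Basis pursuit: min ||s||_1  s.t.  Theta s = y,
# rewritten as a linear program with s = u - v and u, v >= 0.
c = np.ones(2 * n)                        # minimize sum(u) + sum(v) = ||s||_1 at the optimum
A_eq = np.hstack([Theta, -Theta])         # Theta (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
s_hat = res.x[:n] - res.x[n:]

print(f"max recovery error: {np.max(np.abs(s_hat - s_true)):.2e}")
```

With these (random but seeded) numbers the minimum-L1 solution coincides with the sparse vector that generated the data, even though least-squares would return a dense one.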
What is the reason for adding the penalty term λ||s||₁?
1:45 do you mean underdetermined or undetermined?
Underdetermined... I wrote it wrong on the board.
@Steve Brunton thanks! Great lecture!
Love it! Why Does anyone need Netflix :)
Great!!! Can't wait for the next lecture.
Thanks a lot! Can't wait for the next lecture! Greetings from México 🙂
Hello, thank you for this great lecture.
I have a question: in the equation y = C·x, isn't it possible that there are many x's that give the same y?
Thank you in advance.
Is the sparsest-s constraint sufficient to determine the desired x?
Hi Professor, I have a question about the measurement matrix C.
I see in the literature that most seem to portray it as a dense random matrix, not a spiky one on each row as you show here.
So I guess you show C as a spiky matrix just because it is incoherent with the Fourier basis, so it would function as well as a dense random matrix?
I think I was originally confused when I saw you (in one of the previous videos) taking random data points in the time domain for superposed sine waves, rather than taking random "combinations" of all data points in the time domain (which a random matrix would do). So, hopefully my above understanding is correct.
Thanks for all the videos!
Does x here represent the compressed version of the original image, since we are inferring the "active" Fourier coefficients? So it shouldn't be the high-resolution image. Am I correct?
Love this topic series
Hi Professor, I had a question regarding the equivalent formulation. Could we also reformulate the original convex problem as minimization of the L2 norm ||Θs − y||₂ subject to a constraint on the L1 norm ||s||₁?
Is the sparsest solution to the underdetermined problem unique?
Love this guy!
Is this somehow connected to LASSO and ridge regression? Love your videos! Thanks a lot!
yea, lasso is analogous to L1 regularization, which encourages sparse answers stats.stackexchange.com/questions/200416/is-regression-with-l1-regularization-the-same-as-lasso-and-with-l2-regularizati
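To make that connection concrete, here's a small sketch of my own (not from the video or the linked answer) comparing scikit-learn's Lasso (L1 penalty) and Ridge (L2 penalty) on data where only a few features matter; the L1 fit drives most coefficients exactly to zero, while the L2 fit only shrinks them. The sizes, seed, and alpha values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]               # only 3 of 20 features actually matter
y = X @ beta + 0.01 * rng.standard_normal(50)

lasso = Lasso(alpha=0.1).fit(X, y)        # L1 penalty -> sparse coefficients
ridge = Ridge(alpha=0.1).fit(X, y)        # L2 penalty -> small but dense coefficients

lasso_zeros = int((lasso.coef_ == 0).sum())
ridge_zeros = int((ridge.coef_ == 0).sum())
print(f"lasso zero coefficients: {lasso_zeros}")   # most irrelevant features zeroed out
print(f"ridge zero coefficients: {ridge_zeros}")   # ridge shrinks but rarely hits exact zero
```

This is the same L1-promotes-sparsity mechanism the lecture uses, just in a regression setting rather than signal recovery.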
But what if we have s and we want to find the right Fourier basis?
Hey, please add the code for image compression using DCT, FFT, and wavelets.
Do your homework
Sir, could you make a video on using the Kalman filter with C++ or Python?
Extremely relevant insights into the topic, professor. Thank you for discussing these.
Thanks for this video.
I appreciate you making videos on this topic for the not-so-bright people like me. I'm fine with utilizing math, but when that math is presented without context it drains all life out of me.
Well done! (Arabic: "bless your hands")
Thank you very much for your complete and helpful explanation. If possible, I would like to have your email and ask you some questions.
I can't work out the transformation that he is using to write on the board 🤣. He's writing in reverse, right?!! :-O
If I'm not mistaken, it's mirror-flipped and he's actually writing with his left hand.