The clarity of this man's thinking is astounding.
AND he's writing BACKWARDS ! !
Exceptional presentation clarity
@@dr.inkwell1070 Most likely writing forwards.
Imagine setting up a piece of glass between you and the camera. When recording, you write normally on the piece of glass. Then, in editing, you mirror the entire video.
That’s how this “writing backwards” trick is normally done
Clarity of voice also.
@@dr.inkwell1070 He is actually writing forwards. th-cam.com/video/wCOuu0-o5YI/w-d-xo.html
After watching so many videos and reading so much material, I finally understood what the inner product of two functions is and how it maps to the inner product of vectors. Thank you!
You make great connections all over the place. So clear and articulate. Absolutely love your lectures. The insights are as good as 3b1b's, and there are more videos. Thank you for these, wish I could do my ML PhD under you :p
At last! Someone who takes the time to explain things so clearly that (I assume) even a kindergarten class could get it! Fantastic!!!
Thanks for watching!
This is the first time I've binge watched scientific content :) Great videos, thanks so much for it and making it free. That's really the spirit of science!
Sir your videos are just amazing... Too good...
- your youtube student from India
Such a clear and concise explanation. I can't seem to get enough of your material.
Extremely well explained. These videos get better and better as I am going through the playlist.
That was very clear. Great way of explaining inner products of functions. Kudos.
I'm pretty slow when it comes to math because I think I need a lot of practice and refreshing on past math concepts.
In this video he talks about Riemann sums and the inner product. I used these other videos to remind me of those two concepts:
Cross and Dot product:
th-cam.com/video/h0NJK4mEIJU/w-d-xo.html
Riemann sums:
th-cam.com/video/CXCtqBlEZ7g/w-d-xo.html
th-cam.com/video/YTKQswb60Pw/w-d-xo.html
I watched this video 3 times and realized he is actually explaining things really well, I just learn at my own personal pace!
Thank you Steve!
This was great! The write-up on Wikipedia went over my head as usual... so I struggled with the intuition for this fundamental concept for a long time... it's foundational to modern physics. Thanks!
It's awesome to see how you derive these abstract concepts from first principles! I had a similar picture in mind, but no proof, it's cool to see that it's true, thank you Sir.
Best explanation I've ever come across
Thanks!
This deserves more views! You make some of the best videos on these topics.
And I just noticed something: are you writing normally on the glass and then flipping it in editing? I think so.
Thanks for this. I am actually gaining something from this and I am neither a mathematician nor a physicist.
You look like Harrison Wells from The flash
This person knows his metal very well, but more than that, he knows how to melt it and make it flow.
Good video and explanation. I currently struggle to understand Fisher information; something tells me that your videos will be very useful for getting the basic concepts right.
Underscoring so many of the comments here, I took functional analysis in university - and basically didn't understand it. Had the lecturer explained it as plainly as this (or if I had the opportunity to watch this video back then!) - everything would have been different.
Thank you very much for these fantastic lectures. You are an incredible teacher. May I suggest adding the lecture number to the titles? It just helps with navigating the content. Thank you very much once again.
I've never in my life seen such great videos about math concepts. At this point, I'm wondering why I am paying for university, since my teachers can't explain the theory like that.
Schools and universities are no place to learn
What he said: simply brilliant equals brilliantly simple.
Thanks!
Thank you so much for this video! I wish I was told this a long time ago!
You are so welcome!
Awesome! Please don’t confuse the FUNCTION f and its FUNCTIONAL VALUE f(x), which might be a real or complex number. For instance, the inner product is not defined for f(x) and g(x) but for f and g.
Instructors’ inconsistent use of mathematical notation really made life unnecessarily difficult for me when I was an undergrad.
Suggestion: This fine series would be - and this just in terms of titular information - enhanced if the various episodes were *numbered*. This would help those of us who are not entirely amongst the cognoscenti to recognize which lecture we should tackle in which order.
Thanks for the great suggestion. These are currently in playlists where they are ordered. Not exactly sure how to make it so that the correct playlist is obvious if you just click on the video though...
That’s really interesting. You derived everything from first principles! However, I wanted to ask why we take the complex conjugate. Why not just the regular?
Why do you multiply the inner product between the data vectors by delta x? It seems just like a trick to make the expression an integral in the limit.
I have been reading Thomas Garrity’s “All the Mathematics You Missed”, which is supposedly “light” to read, and he made the topic of inner products so dry and opaque, yet in this video it’s so logical and easy to follow. THANK YOU, Steve, u r a real G 💯, valid as FUCK 🔥
Something that is not clear, if you ask me, is why multiplying the summation by Δx normalizes the expression. Great videos by the way, thanks for sharing knowledge. Best regards from Brazil
I also am not entirely sure about how that step works.
In the lecture it was mentioned that as the number of discrete points increases, the number of terms in the summation Σ f(x_k)·g(x_k) increases. This increases the magnitude of the summation. So to keep this in check, we multiply each term f(x_k)·g(x_k) by Δx.
To understand this, take the example from the lecture of doubling the number of discrete points. If the number of discrete points increases, then Δx decreases; this ensures that increasing the number of discrete points doesn't increase the sum, because the larger number of terms is compensated by the smaller contribution of each term.
You can also reason it out by considering a decrease in the number of discrete points of the function.
@Steve Brunton Correct me if my reasoning is incorrect.
@@akhileshprasad1441
Thank you, this all makes sense but I suppose what confused me was writing Δx on left hand side of the equation as well.
@@akhileshprasad1441 Flawless man, thanks a lot.
@@AxanLderE Me too on this point. As Δx goes to 0, the right side becomes an integral, but why and how does the Δx on the left side disappear?
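A quick numerical sketch of the point made in this thread (my own hypothetical example in Python/NumPy, not code from the lecture): as the number of sample points grows, the raw dot product of the sampled functions keeps growing, while the Δx-weighted version settles down toward the integral of f(x)·g(x).

```python
import numpy as np

# Example functions chosen only for illustration
f = lambda x: np.sin(x)
g = lambda x: np.exp(-x)
a, b = 0.0, 2.0

for n in (10, 100, 1000, 10000):
    x = np.linspace(a, b, n)
    dx = (b - a) / (n - 1)
    raw = np.dot(f(x), g(x))   # grows roughly like n
    weighted = raw * dx        # approaches the integral of f(x)*g(x) over [a, b]
    print(n, raw, weighted)
```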
Hi Steve, thanks for the great lectures! Can you share how these lectures are made? I'm a faculty at WWU in Bellingham and with all the courses switching to online due to the virus, I'd really like to do something like how your lectures are set up.
The lectures are great. I would like to know how these lectures are made.
Thanks
We have a studio with a large piece of construction glass, a camera, and some hardware+software that mixes and flips the images. It is called a "lightboard".
@@Eigensteve Thank you Steve.
I understood nothing but well done! nice video
Great Job, Steve
Why must the inner product of a complex function f include the complex conjugate of g and not just g? Thank you so much for the great content! I actually cited your work on SVDs for my thesis a few years back! Really great stuff!!
The delta x at 6:54 feels wrong. Taking the limit as delta x goes to 0 means the LHS should also go to zero, I would think?
What I think: as you keep increasing the number of data points (sample points), Δx decreases, but the sum Σ f(x_k)·g(x_k) also increases. Thus, one term increases and another decreases, so their product makes the final inner product definition stable and prevents it from blowing up as we increase the sample points.
Does that make sense?
I am very curious about how you make this video, with you facing the audience with a screen in front of you. Is it a huge transparent screen? So cool
Very well explained! Thank you very much!
Great lectures!!
Q. Can't we take f^T g ? I believe this will make a difference if f is a complex valued function?
Thanks
Really good question. Yes, for complex inner products (Hermitian inner products), it does matter what order you compute this in. So ⟨f, g⟩ = conj(⟨g, f⟩), where conj(.) is the complex conjugate.
Steve Brunton Thank you!
Is f^T g = g^T f true? Do they commute? Because the textbook that I use says ⟨f|g⟩ = f^T g
@@TheKnowledgeInstitute
Yeah right, it should have been f^T g. I think that is where he made a slight mistake, because ⟨f, g⟩ represents f belonging to a dual space while g belongs to the original vector space. And since f belongs to the dual, it should have been f^T. But apart from that, the sense of why the discrete sum is being replaced by an integral is completely understood.
@@nileshpandey8032 for the deeper meaning of this noncommutative inner product see Alain Connes, "Music of Shapes" lecture or his Alain Connes - Temps et aléa du quantique Apr 14, 2015 "If you take very seriously that the origin of variability doesn't come from the passing of time but comes simply from the formalism of quantum mechanics, from the Hilbert Space. Then it's absolutely vital that you can relate to ordinary time and time evolution as we know. What I'm saying is...this solution depends on the fact that you have a subsystem, it depends on the factorization... [noncommutative]....
"You need to understand intuitively...this is the most difficult to explain...that this time evolution is UNavoidable....You can not suppress it... It's not an inner automorphism. It has the amazing property....it's in the center of the group of inner automorphisms. Any other automorphisms of the algebra will commute with it. It's canonical. It doesn't depend on any choice. ...
"You take a system that you repeat... it repeats everywhere ...to infinity...it's repetition that allows you to see time evolution...it's the factorizations which are infinite repetitions that give you this time evolution... Otherwise you wouldn't see it. ...Hilbert space and Hilbert Space operators KNOW and know a lot more than we think....the passing of time is due to our partial knowledge, because we don't know the full system....
"What are the observables for gravitation? Who can we say where we are? The answer is spectral [frequency]. ...It's not enough to know the spectral operators... ...Two noncommutative shapes that are Isospectral [i.e. both Perfect Fifth]...They have the same spectrum but they do not have the same second invariant. [Second-order tensors may be described in terms of shape and orientation.]
"You find three types of notes of the spectrum. Integers plus 1/4 [Perfect Fourth], Integers plus 1/2 [Perfect Fifth] and Integers in the square of the spectrum [Octave] there are three kinds of NOTES. When you look at the possible chords - this is like the piano in which you can play...because they are three kinds of notes. The chords of two notes are possible for some shapes [Perfect Fifth] but not other shapes [Perfect Fourth]. The point is spectral, given by correlations between the eigenvalues (frequencies) of the Dirac operator.
"There are factorizations with infinite degrees of freedom, that they generate, their own time; and this is a partial knowledge and of course it's related to thermodynamics and temperature and all that. ...The time evolution would not be the same if you changed T to Minus T. If it were the same when you changed T to Minus T then it would be trivial.
"What I want is to transmit a mathematical fact...it's extremely striking. It suggests a philosophical fact ...which is that the fundamental variability is quantum other than the passing of time.
"It's unique up to inner automorphisms; it means you have the flexibility to change it locally. So you can be locally out of equilibrium. You can have a pure density matrix which on the subalgebra in the factorization you see something which is not pure of course. Factor 4 x 4 matrices as 2 x 2 matrix times 2 x 2 matrix. Now take a vector, pure in four dimensional Hilbert Space... And then you do the inner transfer product of the four dimensional Hilbert space. ...That will be a factorization of Type III. ...A corresponding vector and you just repeat it. That's enough to get the time evolution....The time evolution is in the Subparts, it's not in the full thing.
"By the way, I should say, of course this was the motivation for why I spent many years studying noncommutative geometry...
It's interesting that the bra-ket notation Brunton is using is conjugated on the opposite variable compared to the bra-ket convention in quantum mechanics.
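A tiny numeric check of the conjugation point discussed above (a hypothetical NumPy sketch, not from the video): with the convention ⟨f, g⟩ = Σ f_k·conj(g_k), swapping the arguments gives the complex conjugate, while the plain transpose f^T g is in general a different number.

```python
import numpy as np

f = np.array([1 + 2j, 3 - 1j])
g = np.array([2 - 1j, 1j])

# np.vdot conjugates its FIRST argument, so vdot(g, f) = sum(conj(g) * f) = <f, g>
fg = np.vdot(g, f)
gf = np.vdot(f, g)   # <g, f> = sum(conj(f) * g)

print(np.isclose(fg, np.conj(gf)))   # True: <f, g> = conj(<g, f>)
print(np.isclose(np.dot(f, g), fg))  # False in general: f^T g drops the conjugate
```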
Hello Professor,
I want to ask a different question. When you were talking about the dot product, you said that the inner product indicates how similar the two functions are. This reminds me of the autocorrelation operation. Can we say that the inner product and autocorrelation operations are analogous? I mean that the inner product is a measure of similarity in deterministic cases, while autocorrelation is a measure in stochastic cases. Associating different concepts makes them easier to understand and memorize. That's why I'm asking this question.
Great question. The autocorrelation function may be thought of as generated by continuously computing the dot product between a function and a time-shifted version of itself. For no time-shift (or time-shift = 0) the autocorrelation is "1" because the function has an inner product of 1 with itself. Then the correlation (i.e. the inner product) may change when the function is shifted. So the inner product is a measure of correlation.
@@Eigensteve Thank you Professor. Your videos are great!
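A small sketch of the reply above (my own hypothetical example in NumPy): autocorrelation computed as the inner product of a normalized signal with shifted copies of itself.

```python
import numpy as np

n = 1000
t = np.linspace(0, 10, n, endpoint=False)   # 10 periods of a sine, 100 samples per period
f = np.sin(2 * np.pi * t)
f = f / np.linalg.norm(f)                   # normalize so the zero-shift value is 1

def autocorr(f, shift):
    # circular shift (np.roll) is used just to keep the sketch short
    return np.dot(f, np.roll(f, shift))

print(autocorr(f, 0))    # 1.0: the function has inner product 1 with itself after normalizing
print(autocorr(f, 25))   # ~0: a quarter-period shift of a sine is (nearly) orthogonal
print(autocorr(f, 50))   # ~-1: a half-period shift flips the sign
```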
This is Marvel-studio-level magic, excellent...
So the main lesson is that every function can be represented as a vector living in a Hilbert space. Therefore we can define the inner product through these infinite-dimensional vectors. Am I right?
Is there any connection between inner products and convolutions?
Good question -- yes, I think of convolution holding one function fixed and taking the inner product with another function that is sliding across it.
I think it is connected. Google Christopher Hail; he has a Real Analysis course at Georgia Tech that makes that connection. It is something like ⟨f, Tg⟩, where T is a translation operator. The same connection can be made with the autocorrelation. I don't think it is possible to create a Hilbert space where the convolution would be the inner product used (I may be wrong, but it does not seem to satisfy the inner product rules).
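A small numeric sketch of that sliding-inner-product picture (my own hypothetical example with short arrays, not from the lecture): correlation is the dot product of g with each window of f, and convolution is the same thing with g flipped.

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
g = np.array([1.0, 0.5, 0.25])

# Inner product of g with each length-3 window of f, i.e. g translated along f
sliding = np.array([np.dot(f[k:k + len(g)], g) for k in range(len(f) - len(g) + 1)])

print(sliding)                                # [2.75 4.5  6.25]
print(np.correlate(f, g, mode="valid"))       # same numbers: cross-correlation
print(np.convolve(f, g[::-1], mode="valid"))  # convolving with the flipped g gives the same result
```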
This is awesome 👍
Thanks for your video!
Wait so if I have two audio samples, can I just take the inner product to see how similar they are? (Sorry if this is a dumb question, I'm new to this stuff)
In principle yes, but this isn't practical for long samples. So you would just want to take the inner product of a short segment. Even this won't be robust, as even a small phase shift will cause them to have a small inner product. I'd probably FFT a short sample and then take the inner product of the power spectrum. (essentially see how similar the spectrograms are)
@@Eigensteve oh cool, good idea :)
@@woulg I think if you are using it in that way, it is called cosine similarity (you should normalize with respect to the amplitude).
@@yacinebelhadj9749 ok awesome, thank you for these responses.
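A minimal sketch of the approach suggested in this thread (hypothetical synthetic tones in NumPy, not real audio): FFT short segments and take the normalized inner product of their power spectra, which is insensitive to a phase shift but picks up differences in frequency content.

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.25, 1 / fs)            # 0.25-second segments
a = np.sin(2 * np.pi * 440 * t)           # 440 Hz tone
b = np.sin(2 * np.pi * 440 * t + 1.0)     # same tone, phase-shifted
c = np.sin(2 * np.pi * 880 * t)           # different tone

def spectrum_similarity(x, y):
    # cosine similarity of the power spectra
    px = np.abs(np.fft.rfft(x)) ** 2
    py = np.abs(np.fft.rfft(y)) ** 2
    return np.dot(px, py) / (np.linalg.norm(px) * np.linalg.norm(py))

print(spectrum_similarity(a, b))   # ~1: the phase shift barely changes the power spectrum
print(spectrum_similarity(a, c))   # ~0: different frequency content
```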
Why does the professor emphasize "Hilbert space"? Are other spaces still fine for the inner product?
Thank youuuu
I don't understand one point. Let's assume that two signals are identical and their inner product is a relatively large number. If we then take these two identical signals but change one of them at some point x by adding some positive value, does the inner product become greater than in the previous case, even though they are no longer identical signals? The inner product metric seems useless here.
Amazing!
Thanks!
Beautiful
7:30 Should the delta x on the LHS be deleted? Someone help me please ^^.
right!
nice lecture sir
I can't tell if I'm subscribed to you or not, but this makes a lot of sense from what i know of linear algebra on 1blue3brown or 1brown3blue. 😊
How does ⟨f(x), g(x)⟩ equal ⟨f, g⟩·(delta x)?
Doesn't the inner product of the two functions only reflect how "close" the two functions are if each has an integral value of zero over the given range? As drawn, with both functions having positive values across the range, increasing the value of either will increase the inner product, regardless of divergence between the two.
Hi Dr. Brunton,
Do you actually write in the reverse order as is apparent from the video?
He's writing normally on glass in front of him and the video is flipped. There's a reason 90% of people in videos like this appear to be left-handed, rather than the usual 10%.
@@ruby_wired he's actually left handed but appears right handed in the video. So I guess the video recording is laterally flipped. :)
Amazing
@ 7:20. Surely to normalise the equation by Δx you would divide the RHS of the equation by Δx, rather than multiplying on both sides, which effectively does nothing?
If we divided by Δx instead, then as the sample points increase, the sum Σ f(x_k)·g(x_k) increases while the denominator Δx decreases, so the total would keep growing as the sample points increase. The point here is to make the sum (the integral in the limiting case) stable even as the sample points go to infinity (like mapping infinitely many terms to a finite value).
I was confused at first, because I've been learning Dirac notation and the operands of the inner product seem flipped to me (i.e. I would have thought that f must be the adjoint, not g). Is this another common form of notation?
Yes, in physics the first term usually belongs to H^*, but it is the same thing.
I only had an understanding of this because I previously saw a video that presented the linear algebra necessary to see where this derivation comes from. The process is clear, but where the process comes from is not very clear.
I didn't get the logic behind normalizing the inner product with Δx. Why is it a problem?
I think it was 'poetic license' to show that the definition of the inner product in continuous space matches the definition in discrete space. It is nice to see it that way, but there are more reasons behind all this. These sums can become infinite series (because the interval Δx goes to zero and/or n goes to infinity, and f and g themselves can be infinite series instead of sampled delta functions), and we need to know when they converge so that the problem becomes well defined. Note that the integral above is not defined for all f(x) and g(x) either. So imo there is no single reason behind it: these are different inner products defined on different spaces. But it is kind of nice to induce some intuition, again imo.
Initially, he takes n samples from each function, and multiplies them together using the dot product. He knows that his goal is to get the dot product to converge toward some value as he takes more and more samples. But he has a problem - if he increases the number of samples, he increases the value of the dot product.
As an example, say that you had two simple functions that were just flat lines, f(x) = 2 and g(x) = 4. If you were to sample them both four times over some range between x=a and x=b, you would get vectors [2, 2, 2, 2] and [4, 4, 4, 4], and the dot product would be (2*4 + 2*4 + 2*4 + 2*4) = 32. If you were to double the number of samples in that range, your vectors would be [2, 2, 2, 2, 2, 2, 2, 2] and [4, 4, 4, 4, 4, 4, 4, 4], and the dot product would be (2*4 + 2*4 + 2*4 + 2*4 + 2*4 + 2*4 + 2*4 + 2*4) = 64. The more samples you took, the larger your dot product would be, and instead of converging to some specific value, your dot product would go to infinity.
So to compensate for this, he multiplies the dot product by dx, which goes to zero as he takes more and more samples out of the interval between a and b, balancing out the dot product's tendency to get larger and larger.
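To complete that worked example with the Δx factor (a quick check of the arithmetic, assuming Δx = (b-a)/n for n samples): with 4 samples, Δx = (b-a)/4, so the weighted sum is 32·(b-a)/4 = 8(b-a); with 8 samples, Δx = (b-a)/8, so the weighted sum is 64·(b-a)/8 = 8(b-a) again. Both match the integral of 2·4 over [a, b], which is exactly the stabilizing effect the Δx provides.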
Does the book only come in MATLAB?
The book is currently printed with Matlab code, but we have Matlab and Python on our website databookuw.com. There are talks to print a second edition of the book with Python code.
@@Eigensteve Thank you for the fast reply professor Brunton, I love your videos they are really helping in my mathematics of machine learning class, especially in quarantine.
At 1:43, why does g need to be conjugated?
Thanks Steve
So we are dealing with an infinite-dimensional Hilbert-space, right?
Yes
this is tight
Thank you :)
ha!...we meet again Hilbert Space...
Yeah, yeah... But where's the "Hilbert space" part coming into play? I'm a little lost
What if the domain of the two functions are different?
? Then they don't belong to the same inner product space, so their inner product isn't even defined.
mathematicians formalize.
engineers simplify.
For some reason I watch these videos and things are just going right over my head. Maybe I should take more higher level math. The level of abstraction that is being done with these things can be difficult to understand at times. More than likely it is my fault just not knowing enough base concepts.
This is basically an undergrad college level course in Fourier. The background is linear algebra and real (and preferably complex) analysis. It's really surprising and wonderful to find such substantial, legit science on TH-cam. Like, you usually have to pay for something this good. And he's a great lecturer.
@@andrewgibson7797 is absolutely right -- the background is essentially a solid linear algebra course and some intro to complex variables. Usually this class comes after I teach linear algebra and differential equations together, so that you have a gut feeling for what eigenvalues and eigenvectors mean.
@@Eigensteve Sure, but I suppose functional analysis and Hilbert spaces are not taught to undergrads, at least not in engineering.
Do you have to write in reverse? 🤣 great vid btw
Hilbert space?
Great! Thanx! 😂
🤟💙
THANK YOU. GOD DAMN MY TEACHERS SUCK
Noice