CORRECTIONS / CLARIFICATIONS: Here I describe "the inner product" as an operation between two functions. This is kind of an abuse of nomenclature. The term "inner product" should actually be used to describe any operation that takes in two objects and returns a scalar quantity and obeys a set of properties (linearity, conjugate symmetry, positive definiteness); for example, the inner product on vectors in R^n is the familiar dot product from linear algebra. Indeed, you can view a real-valued function as a vector with an uncountably infinite number of entries! You can see lots of cool relationships between linear algebra and fourier analysis like this (most notably, the sense in which the FT is a change of basis, where the basis is a set of complex exponentials), but that should be its own video. Just keep in mind that the operation here is more accurately described as "the inner product defined over real-valued functions" than just "the inner product." (thanks to Kevin Lu on the SoME1 discord for pointing this out!)
Also I refer to the low A on a guitar interchangeably as "A1" and "A2." This is because I keep forgetting the fact that guitars are tuned one octave lower than written, despite having played guitar for, like, my whole life. The actual name of the note is A2 (but I was right about it being 110Hz, so the arguments still stand).
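For concreteness, here is a minimal numpy sketch of that function-as-vector view: sample two real-valued functions finely, and their (scaled) dot product approximates the integral that defines the inner product over real-valued functions. The test functions and the interval [0, 1] are my own choices for illustration.

```python
import numpy as np

# Sample two real-valued functions on [0, 1] as long vectors.
t = np.linspace(0, 1, 100_000, endpoint=False)
dt = t[1] - t[0]

f = np.sin(2 * np.pi * 3 * t)
g = np.sin(2 * np.pi * 3 * t + 0.5)

# The scaled dot product approximates the integral of f(t)*g(t) over [0, 1].
inner_product = np.dot(f, g) * dt
print(inner_product)   # ~0.5*cos(0.5) ≈ 0.439 for these test functions
```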
That's exactly how I finally got an intuition for Fourier and all the other Fourier-related transforms, through linear algebra: a Fourier transform is simply a change of basis via the inner product. So nice there's now a video which uses the inner product to explain and visualize Fourier transforms in a very accessible way, great for getting an intuition for it
7:00 - 7:27 the most universal saying for computer scientists in their math classes “you can see it just by eyeballing”
I've been trying to spread the gospel, but the sum of angles formula is in my opinion one of the WORST ways to prove the Harmonic Addition Theorem! I've come up with what I think is a very elegant way. Consider the vectors (a, b) and (sin theta, cos theta). Then, on one hand their dot product is a*sin(theta) + b*cos(theta), but on the other hand, it is the product of their magnitudes times the cosine of the angle between them, which is sqrt(a^2+b^2)*cos(theta - arctan(a/b)) = sqrt(a^2+b^2)*sin(theta + arctan(b/a)), which is what we were looking for!
this is indeed a lot nicer! i thought the sum-of-angles derivation was pretty clunky, but didn't put in the time to figure out a better one, since that's the one that was cited most places i could find info on the harmonic addition theorem. wolfram alpha oughta switch to explaining it this way instead...
This kinda depends on what sort of thing you take for granted. Like, for this you take for granted that that second identity holds, which may not be obvious to everybody
We did it because we had to find the max of (a cos x + b sin x), taking a and b as the sides of a right-angled triangle and writing everything in terms of the hypotenuse and sin p, cos p, getting h sin(p + x).
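A quick numeric check of the dot-product derivation above, with hypothetical values of a and b (arctan2 is used so the formula also behaves when a is negative):

```python
import numpy as np

a, b = 3.0, 4.0                      # arbitrary coefficients for illustration
theta = np.linspace(0, 2 * np.pi, 1000)

lhs = a * np.sin(theta) + b * np.cos(theta)
rhs = np.sqrt(a**2 + b**2) * np.sin(theta + np.arctan2(b, a))

print(np.max(np.abs(lhs - rhs)))     # ~1e-15: the two sides agree everywhere
```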
This is amazing, one of my favorite submissions thus far. it really resonated with me :D
I see what you did there
This is such a fantastic video! You are wonderfully lucid, and a very talented expositor. I thought all your visuals were well motivated, and you constantly grounded the abstract mathematics with the physical context of our relevant problem.
As an electronics R&D engineer, programmer, and musician, this is the best explanation of the Fourier Transform I've ever seen.
very cool video! as someone who recently tried (and failed miserably) to learn about the fourier transform from my grandfather's old textbook (I got lost pretty quickly once it started dealing with complex numbers), I love that this makes things easy to understand without any complex numbers, and that the complex numbers are just letting you describe the output in the complex plane as a single number instead of as two separate values, and making it a bit easier to integrate (because e^x is the nicest possible thing to do calculus with).
And describing that single complex value in polar coordinates gives you the amplitude and phase (as an angle on an honest to god circle), which are more readable and physically meaningful than a and b values.
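A small sketch of that idea (the a and b values here are made up): packaging the two components as one complex number means np.abs and np.angle hand you the amplitude and phase directly. Sign and ordering conventions vary between Fourier transform definitions, so treat this particular packaging as illustrative.

```python
import numpy as np

a, b = 0.6, 0.8                      # cosine and sine components (hypothetical)
z = a + 1j * b                       # package the pair as a single complex number

amplitude = np.abs(z)                # sqrt(a^2 + b^2)
phase = np.angle(z)                  # arctan2(b, a), an angle on an actual circle

x = np.linspace(0, 2 * np.pi, 1000)
# a*cos(x) + b*sin(x) is the same wave as amplitude*cos(x - phase)
print(np.allclose(a * np.cos(x) + b * np.sin(x), amplitude * np.cos(x - phase)))
```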
Absolutely brilliant video. Found this from the 3B1B summer exposition playlist and it was super illuminating, even with the handwaviness of the math at parts. :) As a scientist and (fairly decent) amateur pianist with an undergrad degree in math and hours of obsessive reading about stuff like this, I've never seen the Fourier transform explained or "derived" like this. It's amazing relearning things from a new basis, and I hope you continue making videos after defending.
12:56 OMG I cannot believe i guessed it perfectly 😱😱😱
just a half second before you disclosed it, that clicked in my mind 🤩🤩🤩
this was really cool! we've never seen the fourier transform explained this way but it makes a lot of sense!!!
also the text onscreen at 16:35 absolutely warmed our heart thank you
(Regarding 16:35, I had been procrastinating on eating and was getting quite hungry, then laughed at that text and went to get food 😅)
Excellent explanation. I've only recently become interested in math but have had a lifelong passion for synthesizers. Never thought I would get into math in my 40s.
This gives students of sound synthesis a good introduction to the FT and related concepts. Please make more videos like this. 😁
Thank you for this explanation. Especially for getting past the Harmonic Addition Theorem, which it is now clear was the point where my own naive adventures came crashing into a frustrating wall.
Sweet, sweet fruit picked ripe from the tree of procrastination :) Thank you so much for sharing it! Hope your dissertation is going (went?) well. This explanation traces through my exact sequence of questions about how FT works. Like, at one point I was thinking “Yeah, but how the f does all this work when the two frequencies are equal but the signals are out of phase?” and bam, you went right into that on the very next beat.
I enjoyed following your explanation. The visuals on the outcome of the correlation integrals were honestly worth the effort. Seeing the maths is one thing, but understanding the meaning behind the math is the real challenge. Thanks for sharing this insight. I will recommend your video to whoever is interested in Fourier transforms. Keep up the great work.
That's my new favorite video on the Fourier transform!
This is the best Fourier transform explanation video I've ever seen. A fantastic video and definitely my absolute favourite of all the SoME1 submissions I've seen (and I've seen quite a lot of them).
Thank you so much for making it
One of the best Fourier transform tutorials in the world!!
I really enjoyed this more than I would have thought. To me, FT's were just some tool that I "learned" in uni to switch from time domain to frequency domain. But from the moment you said you were going to explain it with trig and the inner product, my eyes were glued to the screen. I wish I had taken the time earlier to really see these "tools" from different perspectives as it were, it's super entertaining. Thank you!
Wonderful video! I love that you took a complicated real-world example, and used it to work the abstract math to a useful conclusion. I look forward to more from you, once you’ve earned your doctorate.
When you introduced the harmonic addition (not that I'd seen it before), I knew that this was where the complex numbers were hiding. 😃. Very neat.
And I do appreciate that you went back to Fourier himself, explaining that he developed the method without complex numbers originally.
I hope to see more content from you, once you have finished your dissertation.
amazing video, thanks a lot!
I give a virtual bow for the ability to explain and the effort that went into this video.
Thank you very much for procrastinating for us, it did it for me, you're the best.
This is so delightful, I hope you will film more in the future!!
Thank you so much for this video! I work with other transforms such as PODs, and we were taught something similar for the Fourier transform, but your video really helped me put all the pieces together again in my head.
Awesome video, I had never thought about how this could work.
4:50 Strictly speaking, it's the covariance, not the correlation. But correlation is just covariance after normalization, so yeah, it's basically the correlation.
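A small sketch of that distinction with made-up signals: averaging the product of the mean-removed signals gives the covariance, and dividing by the standard deviations normalizes it into the correlation coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)
y = np.sin(2 * np.pi * 5 * t + 0.3) + 0.1 * rng.standard_normal(t.size)

covariance = np.mean((x - x.mean()) * (y - y.mean()))
correlation = covariance / (x.std() * y.std())   # normalized to [-1, 1]

print(covariance, correlation)
```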
great job! the right dose of rigour and intuition, experimentation and demonstration :-)
Well, I wish teachers explained it like that in electrical engineering classes. It brings up a problem to be solved and derives the solution, rather than starting with long, hard-to-understand math equations and then showing their use.
Yessss! As a programming competition enthusiast, this is how I think of Fourier transforms: It's really all about inner products!
The intuitive reason that the convolution formula works is that there is a bunch of implicit inner product fuckery going on, nothing to do with complex numbers and trigonometry
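One way to see that connection concretely (a sketch with random data): every output sample of a convolution is just an inner product of the signal with a shifted, flipped copy of the kernel.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(16)   # signal
k = rng.standard_normal(4)    # kernel

# Each convolution output is the inner product of a window of x with the flipped kernel.
manual = np.array([np.dot(x[i:i + k.size], k[::-1])
                   for i in range(x.size - k.size + 1)])

print(np.allclose(manual, np.convolve(x, k, mode='valid')))   # True
```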
insanely helpful!! Thank you!
I immediately thought of 3B1B's Abstract Vector Spaces video when you brought up the inner product. I can definitely see how this can extend beyond just waves to arbitrary vector spaces. In a sense, sin(theta) and cos(theta) are acting like basis vectors, and taking the inner product with them is acting like a vector projection of the waveform onto those bases. It's just that in this case, sin(theta) represents the basis i and cos(theta) represents the basis 1. e^(i*theta) is just a more compact form of writing that particular pair of bases.
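A rough sketch of that projection picture in Python: project a made-up 110 Hz tone onto the sine and cosine "basis functions" at a test frequency. The factor of 2 and the specific signal are my own illustrative choices, not anything from the video.

```python
import numpy as np

fs = 44100                                          # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
signal = 0.7 * np.sin(2 * np.pi * 110 * t + 0.9)    # hypothetical 110 Hz tone

f = 110                                             # test frequency
a = 2 * np.mean(signal * np.sin(2 * np.pi * f * t)) # projection onto sin
b = 2 * np.mean(signal * np.cos(2 * np.pi * f * t)) # projection onto cos

print(a, b, np.hypot(a, b))   # the pair recovers the amplitude ~0.7
```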
Simply THE best!
Exactly the way i would have explained it, both to musicians AND electrical engineers.
Just one thing... if the frequency doesn't match exactly, even if it's off by just a little bit, it will give an inner product of zero. So, how do you _find_ the frequencies in your trials? You might try 400Hz, 401Hz, 402Hz, etc. but the real signal is 401.53987647... so you missed it. I expected a chapter where you go over this.
My thought (I don't know if this is right): in a real-world application you are not summing two abstract functions but a finite amount of time sampling. If that window is smaller than the beat frequency, they are not running for long enough to cancel out, and will show some correlation. The alignment will drift a bit over the interval being integrated over, but will not drift enough to reverse the sum.
Of course, that means that noise and other nearby frequencies don't cancel completely either.
I was wondering if anyone would catch me on this! Your thought is completely correct. When you do this over a finite time interval, the rule of "if the frequency is not exactly the same you get an inner product of zero" doesn't hold - there is a little region of uncertainty around the "correct" frequency where you will still get a positive result. The amount of uncertainty is proportional to the period of the wave and inversely proportional to the length of the signal. This is very well described in 3b1b's video on "the more general uncertainty principle."
For this video, I always run these simulations on signals of at least a second in length, and the lowest frequency I look at is still 100Hz, so the "frequency must be the same" rule holds to within a small margin of error, but if I were looking at, e.g. low bass notes sampled for a fraction of a second, it would be much more obviously imprecise. And if i were looking at a full minute of a high-frequency wave, then, yes, I'd need to pinpoint the frequency to many decimal places to get a result.
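A quick way to see this numerically (a sketch; the probe/signal frequencies, window lengths, and sample rate are arbitrary choices): over a short window the 401 Hz probe still correlates strongly with a 401.5 Hz signal, while over a long window the mismatch averages out.

```python
import numpy as np

def correlation(f_signal, f_probe, duration, fs=44100):
    """Mean of the product of two sines over a finite window."""
    t = np.arange(0, duration, 1 / fs)
    return np.mean(np.sin(2 * np.pi * f_signal * t) *
                   np.sin(2 * np.pi * f_probe * t))

print(correlation(401.5, 401.0, duration=0.05))   # ~0.49: still strongly correlated
print(correlation(401.5, 401.0, duration=5.0))    # ~0: the mismatch has averaged out
```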
You can view a finite signal length as multiplication with a rectangular window.
Multiplication in the time domain translates to a convolution in the frequency domain,
which means every ideal frequency peak of the signal gets convolved with (replaced by a copy of) the Fourier transform of the window - a sinc for a rectangular window - and so will spread (and oscillate) over a range of neighboring frequencies.
By choosing another window function or window width you can control how the energy spreads, or ideally stays confined close to the expected frequency.
Since the Fourier transform is linear, such a window essentially prioritizes the center of the signal over the edges, where the signal may not join up periodically - and periodicity is what the Fourier transform expects.
Yet you will get optimal results if you choose a portion of the signal where all harmonics (so mainly the root) fit perfectly in the window.
Discrete time and sampling is another factor, but not that important here.
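A rough numpy illustration of that leakage tradeoff (the tone, window length, and offset are arbitrary): a tone that doesn't fit the window exactly leaks much less far from its peak under a Hann window than under the implicit rectangular one.

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 440.7 * t)        # a tone that doesn't fit the window exactly

rect = np.abs(np.fft.rfft(x))                          # rectangular (no) window
hann = np.abs(np.fft.rfft(x * np.hanning(x.size)))     # Hann window

freqs = np.fft.rfftfreq(x.size, 1 / fs)
far = np.argmin(np.abs(freqs - 460.7))   # a bin ~20 Hz away from the tone

# Leakage relative to each spectrum's own peak: the Hann case is far smaller.
print(rect[far] / rect.max(), hann[far] / hann.max())
```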
Really interesting - I've never heard the inner product described as measuring the correlation between two functions.
I must remember this for the future - thank you!
Nice :) It's an alternative approach to pointwise multiplying the signal with a wavelet (a sine wave pointwise multiplied by a normal curve). It's asking "how equal" rather than "how much" the two are. I came up with a third way as well, based on how the ear does this.
Excellent video!!
So far my top 1 SoME1 video :D
Cool explanation about phase randomness. Lots of sources skip that part
There's a video from a very famous educational youtuber (at uni level) in the spanish speaking community that presents the Fourier Transform in this way (with the inner product).
Though I can understand why you would not have noticed since it's in another language xd
Here's the video from Traductor th-cam.com/video/Ls-NAlJTkeI/w-d-xo.html
man this is just so cool
Side note on biophysics and musical ear, and their relationship with this math explanation:
Mathematically: The Fourier transform definition includes an integral over time from -inf to +inf. This implies that one would need to listen to two notes forever in order to be sure they don't eventually differ (7:47 "...eventually they are going to get out of phase with one another...")
Biophysical approach: The inner ear has tiny filaments with different lengths, and they vibrate in sympathy if the sound contains a wave with the same length as theirs. If the frequency is not exactly the same they will start vibrating but will eventually "get out of phase" and stop vibrating, and then come back into phase and start vibrating again. This is why, if two strings on a guitar are slightly out of tune with each other and you play the same note on both, you hear a pulsating tone.
I would argue that the fact that slightly out of tune tones shift in and out of phase with each other, and therefore alternately interfere constructively and destructively, has nothing to do with the anatomy of the human ear. The interference would happen anywhere. You can even see it if you record it with Audacity. You can even see it if you just plot two sine functions. There is also a trig identity explaining exactly this.
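That trig identity, checked numerically with a made-up detuning: the sum of two slightly detuned sines is a single tone at the average frequency whose loudness pulses at the difference frequency.

```python
import numpy as np

t = np.linspace(0, 2, 88200, endpoint=False)
s = np.sin(2 * np.pi * 110.0 * t) + np.sin(2 * np.pi * 111.5 * t)

# sin(x) + sin(y) = 2 * sin((x+y)/2) * cos((x-y)/2):
# a 110.75 Hz tone whose amplitude envelope pulses 1.5 times per second.
beat = 2 * np.sin(2 * np.pi * 110.75 * t) * np.cos(2 * np.pi * 0.75 * t)

print(np.allclose(s, beat))   # True
```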
Great video!
Amazing video thanks!
to fix the phase problem, can't we shift the guitar sample by π/4 or π/2 radians and check for the presence of the sine wave?
Shame, I wanted to do this video but couldn’t figure out how to use manim soon enough for the competition lol- good job tho
🙏 Sir, please make a video on how the Fourier transform is a special case of the Laplace transform!
Thanks a lot for this perspective!
The video was well done, but I wonder if all that effort to circumvent complex exponentials is really simplifying things and helping the understanding. The Fourier Transform (FT) is actually the best way to learn about correlation and phasors as test functions to represent idealistic oscillations.
The correlation (cross-correlation) not only works for sine waves but for any two signals, and gives you the best fit in the sense of linear least-squares regression.
It's important to understand that the FT expects the signal to be periodic - otherwise you will find frequencies needed to represent the irregularities and transitions at the edges (probably including a DC term).
Sine and cosine are merely projections of the complex exponential function and not the other way around.
You would also need a complex signal to represent all possible signals, where amplitudes, phases and frequencies might change over time.
A real signal is just a special case, where positive and negative frequencies are symmetric and phase is often disregarded anyways as it's pretty ambiguous in most cases.
For some types of analysis you can make use of the phase component though, like with the "analytic signal" or "reassignment method".
this is all very true, and things i'd hoped to include discussion of before i realized how long the video was taking me (i hope to do a video on windowing, for example, if i have time in the future).
i will say: i learned about the FT with complex exponentials first, as everyone does, but it never really 'clicked' for me until i started thinking of the exponential as a sine-cosine pair, and only then did i see the obvious connections with wavelets / discrete cosine transforms / other projection-based signal analysis things like matching pursuit. i grant that it is not the most mathematically elegant way, but it appeals to me in a musical sense, which is why i focused on it; after all, musicians are always talking about sine tones and rarely talking about complex exponentials! ;)
@@watergunwatergunskyattack6846 For me it clicked when I actually understood that every oscillation is really a rotation and therefore a 2D process. That makes exponentials, which represent a rotation, the easier / better-fitting tool than working with projections.
How do you rotate a projection? How do you explain adding sine waves (same freq.) with phase differences? It all makes sense if you have the signal as a 2D vector in the first place. Where else are magnitude and phase (2 dimensions) in the result coming from? And so understanding negative frequencies, the analytic (complex) signal and arithmetic with complex exponentials helps in simplifying various aspects of the FT.
It's OK to work with the projections as well, as they are obviously the real and imaginary components, but sin/cos are not the fundamental forms. What's easier to understand (what comes first): a circle or a sine wave?
I don't want to critique or argue for no reason. I also want people to understand these things faster and better, as I spent too much time figuring out the details myself; the FT is often taught confusingly, and important concepts and relations often go unmentioned!
I thought you could use the phase to figure out the "true" frequencies by taking the difference over successive frames or something?
@@woulg You're referring to "instantaneous frequency", which you e.g. get with the "reassignment method" using the phase spectrum. Yet, phases are generally important to fully specify a sine wave (or complex exponential), like to reconstruct the time series from the frequency spectrum. Also it's interesting to understand how sines (e.g. of same freq.) mix, when they have different phases. Or to understand negative frequencies, where phase progresses in the opposite direction.
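A minimal sketch of that phase-difference idea, using the analytic signal from scipy (the test frequency and sample rate are made up): the unwrapped phase of the analytic signal, differenced per sample, gives an instantaneous-frequency estimate far finer than the FFT bin spacing.

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 401.53 * t)             # "true" frequency we want to recover

analytic = hilbert(x)                          # complex (analytic) signal
phase = np.unwrap(np.angle(analytic))          # phase without 2*pi jumps
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # phase change per sample -> Hz

print(np.median(inst_freq))                    # ~401.53
```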
Love your nails lmao. The video was also good.
Thank you
I used Ernie Ball Super Slinky number 9s for my string theories.
👍
I just want a real time continuous wavelet transform 😭
So good
this video slaps
thank you!
Yes, I heard about the Fourier transform through music, but there's not much content that really applies it to sound
What do they do in a doctorate of music technology? I too love math and music equally and have some basic DSP knowledge
Holy shit this is fire
nice bruh
Love it
23:52 That's a lot of noise 🙃
Just play a synthesizer and shift between waves to go through this.
Very noice
Dude damn you literally musically bottom-upped the DFT
Wait
Fruityloops4life.
I love your nails!
Do folk actually believe this compilation of misunderstandings?
Such as?
Thank you!