Great stuff, keep it coming!
More to come! 😉
Thanks, indeed a nice video. Quick question: The derivative operator should be set to 0 at the Nyquist frequency for all odd derivatives, right? In this problem, the amplitude itself is zero there, so it does not matter.
[Note: I confused the zero/mean mode and the Nyquist mode in my first reply. For anyone reading, please scroll down in the thread.]
Hi,
thanks for the kind words and the question. 😊
By creating the wavenumber grid using "np.fft.rfftfreq" and "np.fft.fftfreq", the derivative operator is already zero at the Nyquist frequency. You can also check this by indexing it at [0, 0]. This actually does not keep the "mean energy" but sets it to zero (which is what we want, because the derivative of a constant offset is zero). If we wanted to retain the "mean energy", we would set it to (real) 1.0. Do you agree? I think by amplitude you meant "mean energy" or constant offset.
The way I interpret the DFT/FFT, the amplitude of each mode is contained in the Fourier coefficients (and they change according to the derivative operator).
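Here is a minimal sketch of what I mean (N, L, and the 2D layout are just assumptions for illustration, not the video's exact code):

```python
import numpy as np

N = 8    # number of grid points per axis (assumed)
L = 1.0  # domain length (assumed)

# Wavenumber grid: fftfreq along the first axis,
# rfftfreq along the last axis (real-valued FFT).
kx = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
ky = np.fft.rfftfreq(N, d=L / N) * 2 * np.pi
KX, KY = np.meshgrid(kx, ky, indexing="ij")

# First-derivative operator in x: multiplication by i*kx.
derivative_operator_x = 1j * KX

# The [0, 0] entry is the zero/mean mode; it is zero by construction,
# reflecting that the derivative of a constant offset is zero.
print(derivative_operator_x[0, 0])  # 0j
```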
@@MachineLearningSimulation I think [0, 0] is the DC mode, not the Nyquist frequency, which occurs at 2*pi*(N/2) for either kx or ky.
Sorry, I misunderstood you in my initial reply. You referred to the Nyquist mode, not the zero/mean mode. Again, sorry for the confusion.
I am curious: Why does the Nyquist mode need to be set to zero?
@@MachineLearningSimulation Your f is sampled only at discrete x_n. Interpolating it on continuous x will have the Nyquist contribution proportional to cos(pi * N * x / L), whose odd derivatives are proportional to sin(pi * N * x / L), which is always zero at the sampled points x_n. Please ref: math.mit.edu/~stevenj/fft-deriv.pdf
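A quick numerical check of that claim (N and L are arbitrary choices here):

```python
import numpy as np

# The odd-derivative factor sin(pi * N * x / L) vanishes
# at every sample point x_n = n * L / N.
N, L = 8, 1.0
x_n = np.arange(N) * L / N
print(np.sin(np.pi * N * x_n / L))  # ~ all zeros (up to roundoff)
```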
Thanks for the resource 👍. I was partly aware of it while creating the video, but maybe I interpreted it incorrectly.
As far as I understand, this should not be a problem for differentiating real signals. Let's say N=6, and I want to sample cos(3 * 2*pi/L * x). This will give a contribution to the Nyquist mode [at unscaled wavenumber -3] of N + 0i (N in the real part of the complex number, and zero in the imaginary part). Then, there are three scenarios (see the sketch after this list):
1. I want to obtain the first derivative: I would multiply with 1j * (-3), turning the coefficient into 0 - 3N j. In an inverse transform, the imaginary Nyquist component would result in an imaginary cosine signal (precisely: -3j * cos(3 * 2*pi/L * x)). However, in the video, we zero out any imaginary components (by only taking the real part of the result array), so essentially there is only a zero real signal, which is fine because the analytical derivative of the cosine at the Nyquist mode would be seen as a zero signal anyway.
2. I want to obtain the second derivative efficiently: I would multiply with (1j * (-3))^2 = -9. Hence, I would just scale the real component. Transforming back would give the correct derivative signal of -9 cos(3 * 2*pi/L * x).
3. I want to obtain the second derivative by applying the former first-derivative operator twice: As mentioned in the linked resource, this would, incorrectly, also give a zero signal. [If one chose not to discard the imaginary part of the inversely transformed signal, however, one could still get the right derivative.] This can be problematic, but the effect can also be very small, because a signal with a Nyquist component likely also has higher modes that, with their aliases, produce issues anyway.
In conclusion, maybe I got it wrong, but my impression was that this was not too big of an issue when differentiating **real** signals **once**. It is an interesting edge case, and I want to dedicate a video to it. Please let me know if I got something wrong; I would be highly interested in clearing up a potential misunderstanding 😊.
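If it helps, here is a minimal runnable sketch of the three scenarios (using the assumed N = 6 and L = 2*pi from above):

```python
import numpy as np

# Pure Nyquist-mode signal on a grid of N = 6 points (assumed setup).
N, L = 6, 2 * np.pi
x = np.arange(N) * L / N
f = np.cos(3 * 2 * np.pi / L * x)

f_hat = np.fft.fft(f)                        # Nyquist slot holds N + 0j
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # [0, 1, 2, -3, -2, -1]

# 1. First derivative: taking the real part gives an all-zero signal,
#    matching the sampled analytical derivative -3*sin(3x) = 0 at x_n.
df = np.real(np.fft.ifft(1j * k * f_hat))
print(df)                                    # ~ all zeros

# 2. Second derivative via (i*k)^2 in one go: correct result.
d2f = np.real(np.fft.ifft((1j * k) ** 2 * f_hat))
print(d2f)                                   # ~ -9 * cos(3x)

# 3. Second derivative by applying the first-derivative operator twice,
#    discarding the imaginary part in between: incorrectly zero.
d2f_twice = np.real(np.fft.ifft(1j * k * np.fft.fft(df)))
print(d2f_twice)                             # ~ all zeros
```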
Hi, good video. To my understanding, spectral derivatives work fine for periodic BCs. However, any non-periodic function would be subject to the Gibbs phenomenon, which kills the derivative. Do you have any ideas/tips to extend spectral derivatives to arbitrary functions?
Thanks for the kind comment. 😊
Yes, Fourier-spectral derivatives require periodic boundary conditions; violating this assumption likely kills the quality of the derivative and removes many of the nice properties of the FFT.
For this video, I used "spectral" synonymously with "Fourier-spectral". More generally speaking, it refers to using domain-wide ansatz functions. For instance, to handle Dirichlet boundaries, one can use Chebyshev spectral methods. Those also admit a somewhat efficient spectral derivative via the FFT (check Trefethen's "Spectral Methods in MATLAB" for more details).
However, Fourier-spectral methods have a striking advantage over other spectral methods: with them, the derivative operator diagonalizes in Fourier space. AFAIK, they are the only spectral method with that capability that can also be easily used with the FFT. That makes them so useful for solving PDEs, of course only if you can assume periodic boundaries.
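As a small illustration of why that diagonalization is so convenient for PDEs, here is a sketch (the grid and right-hand side are just assumptions for illustration) of solving the periodic Poisson problem u'' = f in a single FFT round trip:

```python
import numpy as np

# Solve u'' = f with periodic BCs; rhs chosen so that u = sin(2x).
N, L = 64, 2 * np.pi
x = np.arange(N) * L / N
f = -4.0 * np.sin(2 * x)

k = np.fft.rfftfreq(N, d=L / N) * 2 * np.pi
f_hat = np.fft.rfft(f)

# Because the second derivative is diagonal in Fourier space,
# "inverting" it is just a division by (i*k)^2, mode by mode.
u_hat = np.zeros_like(f_hat)
u_hat[1:] = f_hat[1:] / (1j * k[1:]) ** 2
# The k = 0 (mean) mode stays zero: the periodic Poisson problem
# only determines u up to an additive constant.

u = np.fft.irfft(u_hat, n=N)
print(np.max(np.abs(u - np.sin(2 * x))))  # ~ machine precision
```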
Great video, very nice content.
Thanks for visiting 😊 appreciate the kind comment.
Very good video, thank you!
You are welcome! 😊