Sound Synthesis via Discrete Summation Formulas: Unexplored Terrain (sounds kind of like FM)
- Published June 24, 2024
- J.A. Moorer, The Synthesis of Complex Audio Spectra by Means of Discrete Summation Formulae: ccrma.stanford.edu/files/pape...
Moppelsynth VSTs: www.verklagekasper.de/synths
Support this channel via a special purpose donation to the Georgia Tech Foundation (GTF210000920), earmarked for my work: • Support This Channel v...
Octave code (spectrogram plotting may require tweaking to work in MATLAB):
pkg load signal        % for specgram
fs = 48000;            % sample rate in Hz
tt = 0:1/fs:10;        % 10 seconds of time samples
fc = 110;              % carrier frequency in Hz
fm = 110;              % modulator (sideband spacing) frequency in Hz
theta = 2*pi*fc*tt;    % carrier phase
betta = 2*pi*fm*tt;    % modulator phase
%use these lines for attack/decay envelope
%a = linspace(0,0.9,floor(length(tt)/2));
%a = [a, 0.99, a(end:-1:1)];
%use this line for pure decay envelope
a = linspace(0.9,0,length(tt));
% First xx formula is for upper sidebands only, second is for double sidebands
%xx = (sin(theta) - a .* sin(theta-betta)) ./ (1 + a .^ 2 - 2*a .* cos(betta));
xx = (1 - a .^ 2) .* sin(theta-betta) ./ (1 + a .^ 2 - 2*a .* cos(betta));
figure(1);
specgram(xx,2*4096,fs);
figure(2)
plot(tt,xx);
soundsc(xx,fs)
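For anyone following along without Octave, here's a quick sanity check in Python/NumPy (my own sketch, not from the video; the parameter values are arbitrary) that the two closed-form xx expressions above really do equal sums of exponentially decaying sidebands:

```python
import numpy as np

a = 0.7        # sideband decay factor (needs |a| < 1 to converge)
theta = 1.3    # carrier phase, i.e. 2*pi*fc*t at some instant
betta = 0.9    # modulator phase, i.e. 2*pi*fm*t at some instant

D = 1 + a**2 - 2*a*np.cos(betta)   # shared denominator

# Upper sidebands only: sum_{k>=0} a^k * sin(theta + k*betta)
usb_closed = (np.sin(theta) - a*np.sin(theta - betta)) / D
usb_direct = sum(a**k * np.sin(theta + k*betta) for k in range(200))

# Double sidebands: sum over all integers k of a^|k| * sin((theta - betta) + k*betta)
dsb_closed = (1 - a**2) * np.sin(theta - betta) / D
dsb_direct = sum(a**abs(k) * np.sin(theta - betta + k*betta)
                 for k in range(-200, 201))

assert abs(usb_closed - usb_direct) < 1e-9
assert abs(dsb_closed - dsb_direct) < 1e-9
```

Truncating at 200 terms is harmless here, since a^200 is negligible for a = 0.7.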
0:00 -- Introduction
2:38 -- MATLAB/Octave code
5:12 -- Upper side bands
6:57 -- Division by "zero" conjecture
7:28 -- Percussive envelopes
9:28 -- Inharmonics (USB)
11:12 -- Double side band variation
13:23 -- Cancellation!
14:15 -- Inharmonics (DSB)
15:05 -- Beat patterns
16:32 -- Bandlimited formulas
18:29 -- Moppelsynths
omfg, what did youtube just interrupt my week with
I HAVE COME FOR YOUR PRODUCTIVITY
Hello! As a guy who makes electronic music, I can say the sound at 10:18 is still really usable. In fact, a lot of electronic musicians use sample-rate and bit-depth reduction as musical effects to make harsh inharmonic tones. Or some people just mix a bit of white noise into a square or saw wave. Or people who program additive synths intentionally make some of their partials inharmonic. So, honestly, I think the f_m=55 or 220*sqrt(2) examples sound really cool! Keep posting on this channel; we really need more university professors who are happy to make a quick 10-minute video to cover a fun topic!
I found this paper several years ago through Noise Engineering. Their Loquelic Iteritas module (still my favorite oscillator) has an SS mode that was inspired by this paper. They discuss some of the modifications they made to the formula in the user manual.
‘Love Math’ + ‘New Synth Novice’ + ‘Dr. Aaron Lanterman’ = WOW‼️
I found this DSF video to be well thought out, punctuated with demos along the way, and the testing of parameter changes was excellent. Many of us, myself included, are more comfortable with the “try it … you’ll like it” method of learning … besides, I just love to tinker.
Thanks for the excellent YouTube video on DSF, Dr. Lanterman!
17:09 An interesting aspect of the N parameter is that you can use it as a brickwall filter, not just set near or below Nyquist to avoid aliases (though that's great!), but simply as a filter in its own right. FM (and its equivalents) can get way too trebly at high modulation levels, so it'd be interesting to play with heavier modulation while capping the harmonics at 10 or 20.
Nice video!
I've been using discrete summation formulas in my soft synth for a number of years; they're pretty handy, especially since you can limit the range of summation. I think I derived the formula for
sum[k=0 to N] exp(z + k*w)
where z and w are complex numbers, which gives you a bit of extra freedom.
As someone said, you can use it to get a bandlimited (and, if you want, somewhat lowpass-filtered) pulse train, and integrate the result for a bandlimited sawtooth wave.
I got an interesting harp-like multi-pluck sound by increasing N one step at a time at regular intervals.
I also did some FM+ experiments where you replace the sin function in FM synthesis with one of these functions, and got some interesting results.
I also think that if you do the double sided sideband summation using the complex version, not everything will cancel. I guess the same would hold if you did the summation using cosine terms instead, but I think the complex version is easier to deal with.
Another nice thing about the complex version is that the imaginary part is the Hilbert transform of the real part (or possibly vice versa).
I made a complex feedback comb filter once (the samples in the delay line were complex, as was the feedback coefficient). At least when fed with the output of the complex version of the summation formula, I could get the resonance peaks to move around along the frequency axis by changing the phase of the feedback coefficient.
Toivo, can you share your derivation and perhaps some tips on using your formula?
@@bobvines00 Sure:
Ok, we're using the sum
S = sum[k=0 to N] exp(z + k*w) = exp(z) * sum[k=0 to N] exp(w)^k
First note that if z = ib, then
exp(z) = exp(ib) = cos(b) + i*sin(b)
so you can use the real part to get a cosine or the imaginary part to get a sine.
Now, let w = c + id, then I can write the sum as
S = sum[k=0 to N] exp(i(b + k*d)) * exp(c)^k
Let's say that b = 2pi*f*t and d = 2pi*g*t + p. Then
S = sum[k=0 to N] exp(i(2pi*f*t + k*(2pi*g*t + p))) * exp(c)^k
= sum[k=0 to N] exp(i(2pi*(f+k*g)*t + k*p)) * exp(c)^k
so we get a sum of cosines/sines (depending on if we take the real or imaginary part) with frequency f+k*g, with exponentially decreasing amplitude if c < 0.
The sum starts at frequency f and ends at f + N*g. The variable p can be used to add a phase offset from one sine to the next.
I hope that explains the basic usage.
Derivation:
We exploit that it's a geometric sum. By multiplying by 1 - exp(w), the first term in the sum times -exp(w) cancels with the second term times 1, etc, which gives
(1 - exp(w))*S = exp(z) * (1 - exp(w)^(N+1))
==> S = exp(z) * (1 - exp(w)^(N+1)) / (1 - exp(w))
You can see that if N=0, you get back S = exp(z), as expected.
If N goes to infinity and real(w) < 0, exp(w)^(N+1) goes to zero. But the case with finite N is really useful, you can choose N so that f + N*g is below the Nyquist frequency to avoid aliasing, but if f + (N+1)*g is above then you're not missing out on any audible frequencies.
Hope that helps.
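Toivo's derivation is easy to verify numerically. Here's a minimal Python/NumPy sketch (my own; the parameter values are made up) comparing the direct sum against the geometric closed form:

```python
import numpy as np

# Toivo's complex DSF: S = sum_{k=0}^{N} exp(z + k*w)
N = 16
f, g, p, c = 220.0, 110.0, 0.3, -0.2  # start freq, spacing, phase step, decay (c < 0)
t = 0.01                              # one time instant, in seconds

z = 1j * (2 * np.pi * f * t)          # z = i*b with b = 2*pi*f*t
w = c + 1j * (2 * np.pi * g * t + p)  # w = c + i*d with d = 2*pi*g*t + p

# Direct summation
S_direct = sum(np.exp(z + k * w) for k in range(N + 1))

# Geometric-series closed form: S = exp(z) * (1 - exp(w)^(N+1)) / (1 - exp(w))
S_closed = np.exp(z) * (1 - np.exp(w)**(N + 1)) / (1 - np.exp(w))

assert abs(S_direct - S_closed) < 1e-9
```

The imaginary part of S is the sum of sines at frequencies f + k*g, each with amplitude exp(c)^k and phase offset k*p.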
Super interesting, thanks for sharing! I'm pleased you're using Octave for our sake. The MATLAB license cost shocked me; I could take one course at Georgia Tech and get online access with all the toolboxes for less.
At the lab I worked at, nobody paid for it, if you get what I mean 😆
Such a cool video!!! I love sound synthesis, and this video was a full meal. First, a paper from Moorer himself, the guy who started my interest in sound synthesis 4 years ago. Then I learn about a new synthesis method. Then I find out you can very simply simulate discrete synthesis methods in Octave? Wow, I learned so much, I'll need the rest of the week to explore all these new ideas. Thanks for the content!!
Some additive synths, and especially complex oscillators, use DSFs to make the implementation more efficient than needing tons of individual oscillators to cover the harmonic series.
One thing I've used DSFs for is to generate a bandlimited pulse train, sometimes known as a "buzz" oscillator, which can then be turned into a sawtooth for example. I think the Csound buzz oscillators also use this method.
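For concreteness, here's a small Python/NumPy sketch of the bandlimited pulse train idea (my own example, not taken from Csound; its buzz opcode's internals may differ). The finite cosine sum has a Dirichlet-kernel closed form, so the whole harmonic stack costs one sine, one division, and a couple of multiplies per sample:

```python
import numpy as np

fs = 48000.0                     # sample rate (Hz)
f0 = 440.0                       # fundamental (Hz)
N = int((fs / 2) // f0)          # highest harmonic that stays below Nyquist

n = np.arange(1, 481)            # skip n=0 so the closed form's denominator is nonzero
x = 2 * np.pi * f0 * n / fs      # fundamental phase at each sample

# Bandlimited pulse train ("buzz"): sum_{k=1}^{N} cos(k*x),
# via the Dirichlet-kernel identity  sum = sin((N+1/2)x) / (2 sin(x/2)) - 1/2
buzz_closed = np.sin((N + 0.5) * x) / (2 * np.sin(x / 2)) - 0.5
buzz_direct = sum(np.cos(k * x) for k in range(1, N + 1))

assert np.allclose(buzz_closed, buzz_direct)
```

Leaky-integrating the pulse train (after removing its DC offset) then gives a bandlimited sawtooth.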
A cool thing about the Moppelsynths is that they do the math using complex numbers, making it easy to get a stereo sound (put the real part on the left channel, the imaginary part on the right).
I wonder if instead of doing pure left and right signals, what you'd get by using the components as L+R, L-R stereo
I'm gonna have the coolest wavetables on the block when I learn how to get Octave going.
I just subscribed to your channel -- great music!
@@Lantertronics thank you so much! I’m so glad you liked the music!
Thank you for your time and knowledge Professor.
these vids are so high quality caught in 4k🔥
Neat!
You'll have fewer denormal artifacts if you choose relatively prime parameters.
Your first sound freaked my cat out :-D
Meow! :)
This blew my whole world wide open. Synthesis really is the Wild West.
Thanks, Prof. Aaron 👍
Fascinating! Very clear explanations, thank you. Also those plugins are interesting, I will definitely keep Sonitarium in my arsenal as there are some things it approaches in a different way compared to the additive synths I use like Harmor.
The artifacts are caused by soundsc() defaulting to 8 bits. Change the line to 'soundsc(xx,fs,16)' for 16-bit playback.
Wow, there's a bit of CCRMA history I never knew.
Just add a lowpass filter at the end of the chain and this is completely usable. It could be very fun to construct a synth like this.
Hi, Aaron. I'd love to hear more about your take on the Buchla implementation you mentioned, though I did watch your videos on Buchla topics.
The 700 is my fantasy instrument (I'm a 200e user, so I'm already living the dream, tbh). The id700 app is amazing, and I would love to hear your take on it.
Definitely going to set this up in Max/MSP and do some funky modulation. Great vid!
The thing with this function is that a "Weierstrass function" is very spiky even for some numbers less than 1. So those might be the "artifacts" you're hearing.
Very cool stuff. Thanks again Aaron.
It sounds like the artifacts are, like you said, basically quantization errors. I bet there are smart people who could reformulate those expressions to minimise that ;)
You tried a dissonant tritone with sqrt(2), but why not a natural consonant third (5/4), fourth (4/3) and fifth (3/2) ?
Absolutely wonderful! Doesn't the Daisy Seed square wave generation use something like this for bandlimited square waves? I would think that's the most useful application. This is great for basic waveforms, since wavetable synthesis requires a lot of processing if you want to be careful about aliasing.
Ah, yeah, indeed this is related to the "band limited impulse train" stuff, I think...
@@Lantertronics Indeed it is, and I think it has even been used in the 90's/00's in Csound and elsewhere.
AFAIR the major pain of using this for generating sawtooth and square was handling the pitch changes without discontinuities/clicks. That would come from both varying the limit on the summation formula, and also varying the lowpass filter that makes a sawtooth out of a pulse train.
@@kfoltman surely it's not that difficult to calculate phase offsets for this? (You can use theta as a phase and beta as the time-variable, or simply transform theta=theta0 + phase?).
Unfortunately you can't get true saw or square this way because the amplitude roll-off of their harmonics goes like 1/k rather than a^k.
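That 1/k roll-off is a standard Fourier-series fact; here's a quick numerical confirmation in Python/NumPy (my own sketch) that a sawtooth's harmonic amplitudes fall like 1/k, in contrast to the geometric a^k decay of the DSF sidebands:

```python
import numpy as np

# Rising sawtooth x(t) = t/pi on (-pi, pi): its Fourier sine coefficients are
# b_k = 2*(-1)^(k+1) / (k*pi), magnitudes falling like 1/k,
# whereas the DSF sidebands fall off geometrically like a^k.
t = np.linspace(-np.pi, np.pi, 200001)
x = t / np.pi
dt = t[1] - t[0]

for k in (1, 2, 3, 10):
    b_k = np.sum(x * np.sin(k * t)) * dt / np.pi   # numerical projection onto sin(k*t)
    assert np.isclose(b_k, 2 * (-1)**(k + 1) / (k * np.pi), atol=1e-4)
```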
Thinking about this, the REAL disappointment is that unlike FM and its equivalent, PM, I don't see how this would expand to multiple modulators or multiple carriers with ease. And while the formula isn't complicated, the PM equivalent of FM really IS computationally simple: just sin(theta + a*sin(beta)), no? You need to scale the multiplier a differently than in FM, but I think you just multiply it by the frequency ratio of modulator to carrier? (Haven't looked in 10 years.)
This vaguely reminds me of IL’s Harmor synth. I am still shocked at the computational efficiency and control Harmor gives you, maybe you could do a video looking into that synth? They’re definitely doing something very very clever
Yeah Harmor is still a top synth, almost 15 years after its release!
These sounds are super cool. I’m going to see if I can get this to come out of a reasonably fast microcontroller- I know the operations are “historically expensive” but with something like Raspberry Pi Pico running at 125MHz I wonder if there are enough cycles to at least get to a decent sample rate, even if shy of 40kHz. Also curious about real-time manipulation of the parameters, and maybe taking Fc from MIDI or V/Oct.
I would expect you could - even a Pico is ridiculously fast compared to the hardware they had available in the 70s/80s when a lot of synthesis concepts like FM were born. Improvements like unrolling loops, using interpolated sine tables, and replacing divisions with shifting/subtraction could be done as needed, but today's compilers are good enough where they'll usually do stuff like this behind the scenes for you. Don't just jump into doing your own optimizations until you've looked at the emitted assembly code to see if it can actually be improved on. More than once I've been pleasantly surprised at how clever the compiler was in coming up with something efficient.
Interesting material. I'm gonna have a look a the paper and tinker with it. Thanks!
Btw: those upper harmonics remind me of oscillator sync where the synced voice is offset a few cents or pitch modulated. (:
13:30 That patch should be called: 4′33″.
Hello professor, hope you are doing well.
I came here for a request: if it might be an interesting video idea for you, it would really be cool to see a circuit analysis of the Seymour Duncan Pickup Booster, more specifically version 2.
I really don't grasp what happens in that pedal and can't find anyone online explaining it.
The two things I understand are 1) the pickup resonance switch, which lowers the resonant peak of the pickups by adding a capacitor in parallel to ground, and 2) the first pair of transistors, which is a differential pair.
There are another three transistors of which I honestly don't understand the function at all, a pair of them seems to be some sort of push-pull configuration.
I think there's a lot to go on about in that pedal, let me know if you think it's cool enough for a video!
That's cool. I'm designing a base PCB for the Fubarino; then it becomes easier to play with DACs.
Now I'm waiting on a PCB I made with a class AB amp that's a bit different than usual, and doing the PSU part of the PCB.
I got a bunch of NOS "West" German made 4700uF capacitors; it seems an amp needs at least 10000uF.
Bravo!
nice!
i love this
This gets brought up in Computer Music, 2nd Edition, by Dodge and Jerse... I've always been curious about it and it feels underexplored. For the aliasing, they say that the method theoretically produces an infinite number of harmonics, so "the maximum value of a should be chosen such that the components falling above Nyquist have insignificant amplitudes". They propose multiplying by a scaling factor after the fact that is inversely proportional to a.
EDIT: oops... you talk about this lol.. I should have watched the whole video first
Hello Aaron, I wonder if the Nonlinear Labs C15 synthesizer is based on this type of algorithm? It uses two sine waves and waveshapers to cross-modulate or combine.
Can this be adapted to utilise the Walsh Hadamard Transform (a sort of digital Fourier Transform with 1 and -1 instead of sine waves, no complex numbers, a sort of a bit stream approach)? Has anyone looked into that for synthesis?
Maybe? It's not obvious to me how to adapt it. In general attempts I've seen to use Walsh functions have been kind of tricky, since the control of the coefficients winds up being really counterintuitive. Your first term is a square wave with lots of harmonics, and then you need to add terms to tame it.
If you’re getting aliasing, wouldn’t the best mitigation be just to put a low-pass filter on the waveform before sending it to your sound card? With a corner frequency of 20kHz or so. Relying on k’s maximum being low enough might not cut it if you want to play a really high note.
I think you can get bad behavior when the denominator is very near zero because significant truncation error can occur there. Fortunately it looks like you're pretty safe from it as long as a isn't too close to 1.
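The danger zone is easy to quantify: the shared denominator 1 + a^2 - 2*a*cos(betta) bottoms out at (1-a)^2 wherever cos(betta) = 1, so it shrinks quadratically as a approaches 1. A quick Python/NumPy check (my own sketch):

```python
import numpy as np

betta = np.linspace(0, 2 * np.pi, 100001)  # one full cycle of modulator phase

for a in (0.9, 0.99, 0.999):
    D = 1 + a**2 - 2 * a * np.cos(betta)
    # The minimum over betta is (1 - a)^2, reached where cos(betta) = 1;
    # at a = 0.999 that's 1e-6, greatly amplifying any rounding error in the numerator.
    assert np.isclose(D.min(), (1 - a)**2)
```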
I bet these formulas can be entered into Serums wavetable editor…
Could the aliasing be unavoidable because you're basically doing an infinite summation, meaning you will always add partials higher than half the sampling frequency, no matter what parameters you choose?
Oh, yeah, technically speaking some of it will always be unavoidable in the formulas I'm using, just like in regular Yamaha-style FM synthesis. You hope that the partials decay fast enough that the aliasing isn't objectionable. My suspicion here is there's also some numerical weirdness going on with the division that's making these effects sound more objectionable here... but I don't really know for sure.
Is it feasible to go from MATLAB -> C++ -> HDL -> Xilinx for audio applications? Like designing a synth?
MathWorks has all sorts of tools for going from MATLAB to FPGA, but for real-time synthesis you'd really want to code it very differently than I've coded it here (I've done it all with bulk vector operations). You'd probably be better off targeting the Xilinx directly, or writing C++ for something like an STM32.
So is this like the inverse of a Fourier transform? Build up discrete sine waves instead of breaking down into discrete sine waves?
Has Moorer already released the Deep Note source code?
here's a problem for your students: DSF lets you set the distance between subsequent partials to be any positive constant and can be implemented in analog. but what technique that can be implemented in analog lets you place partials in either increasing, or decreasing distances?
feel free to reply if you know of one :D
Single side-band mixers? You just need some adders and multipliers (VCAs).
ring modulation?
@@lqr824 Nope! Ring modulation only creates ai+bk and ai-bk, but to get a roughly harmonic series you have to use a pure sine tone for B, meaning you end up with ai+b0 and ai-b0, and then you filter out only the one sideband. That's called frequency shifting. And after frequency shifting by some constant c=b0 you have a0+c, a1+c, a2+c, etc., but the distances between the frequencies of those partials are still the same as the distances between a0, a1, a2, and so on.
@@cheater00 Right, so a0+c, a1+c, a2+c are no longer multiples of each other. They're flat or sharp of where they should be, right? If you state the resulting frequencies as multiples, this multiple will be ascending or descending, which is what I thought you wanted. Maybe you meant to compare the frequencies' linear offsets, not exponential offsets? If so then I agree ring modulation isn't the answer. (But for me frequencies are always considered in multiples not additions.) What is the answer to your puzzle? I can't think of it.
@@lqr824 Yes, what frequency shifting does can already be reached with DSF. Ultimately DSF and frequency shifting just sound like the same tone amplitude modulated by a sine LFO, and that doesn't really add to the timbre until you push DSF or frequency shifting to its limit. What you want is monotonically increasing or decreasing frequency deltas, not pitch deltas. Then the timbre truly sounds like a different sound, one that did not come out of a harmonic oscillator. No physical instruments are perfectly harmonic; they all exhibit that effect, and our ears can tell the difference.
10:40 taco bell
what the hell
Love this, except for the MATLAB code. Even 15 years ago, scientists and engineers were dumping MATLAB for Python (with SciPy, Pandas, Matplotlib, etc.). Some people still use COBOL, I suppose, but it's not exactly helping to keep it in circulation.
For large-scale programming projects, Python is a better choice; MATLAB evolved from an interactive wrapper for FORTRAN array routines and was never really properly designed as a language. But for things like this, I find MATLAB to be a much better fit (while also being accessible without paying silly $$$ because of Octave.) It has everything I want "out of the box" without having to think about installing and importing this library or that library, and more importantly, its syntax is tightly tied to the problem domain it targets.
I could write this code much faster in MATLAB than in Python, and I think the resulting MATLAB is easier to read than the equivalent Python. To a certain extent, of course, this is a skill issue; the ratio of hours I've spent in MATLAB vs. the hours I've spent in Python approaches infinity. But being able to write a = [2 3 4; 5 6 7] is much more convenient than having to write np.array([[2, 3, 4], [5, 6, 7]]), and I prefer writing A * B * C for matrix multiplication over something like np.matmul(np.matmul(A, B), C) or even A @ B @ C. But then I seem to recall * is matrix multiplication for NumPy *matrices*, which are different from NumPy *arrays*, or something. So I can think about all of this, or I could just write MATLAB code and finish the video. ;)
We've occasionally looked at switching our ECE2026: Introduction to Signal Processing course at Georgia Tech from MATLAB to Python, and we've always concluded that it's just much cleaner to express things in MATLAB.
There's a new language called Julia that looks very interesting to me; it's kind of like what MATLAB could have been if it had been thought through properly as a language from the get-go.
@@Lantertronics I’m pretty sure NumPy and Pandas offer operator overloading for matrix math, but I take your point and there is definitely something to be said for simplicity and familiarity. It took me years to stop writing scripts in Perl which has practically unreadable syntax to begin with.
I’m only 10 minutes in but I think the artifacts sound like the bit-depth is too low.
🤍