My introduction to the notion of linearity in electronic circuits came in the 70s at college and then in training to be a production/repair technician. I was already a guitarist turned bassist by this time and it was very interesting to learn how my amplifiers worked. They showed us the way that amplifier circuits vary their in/out ratio at different levels and at different frequencies. Whether solid state or valve based, there would always be a reactive element to the responses of amplification stages. A particular highlight of these lessons outlined the difficulties involved in getting a linear response out of valve-based hi-fi. Winding output and coupling transformers, with their coils and inductances, to give linearity was/is a major headache and the largest part of the cost of making them. Finding the straightest part of the response 'Curve' is the goal. Then we musicians do our damndest to lose that 'fidelity' for the most part in one way or another.
It's not only the gain stage of ProQ that's linear - all the static EQ curves are too. Mathematically, what we call a linear process (more formally a linear time-invariant system, or LTI system) is a process that can be reversed by running the inverse of the process on the already processed signal to get back the unmodified signal. In other words, linear processes are non-destructive! So in the case of ProQ (and any other ordinary digital EQ) you can boost 1kHz by x dB, then place a second instance right after that cuts the same frequency by the same amount. Since this type of filtering is built on LTI code, the resulting signal is identical to what you fed the first instance, bar possible _very_ low-level differences introduced if the algorithm were to use some kind of (non-linear) approximation to save CPU power (or sloppy coding).

Now, try the same thing with, for example, Mwaveshaper by Melda, which is a classic transfer curve shaper. Bend the transfer curve in a convex manner so that medium intensity signals get amplified by a factor of 1.5, for example. Place another instance of Mwaveshaper behind it that attenuates medium intensity signals by multiplying with 0.666... If Mwaveshaper were linear we would expect to get back to square one, but the resulting signal is anything but identical to the one you had from the start. What causes this is non-linear expressions such as exponentials in the code. They are inherently non-linear. The problem becomes even more unsolvable if you introduce attack and release times such as in a common compressor - in that case the transfer function isn't even time invariant (in other words, the process parameters change depending on time and on what signal came before the signal you are currently processing).
Hi, I respectfully but firmly disagree with your definition of an LTI system. A linear time-invariant system is just what the name says: linear (i.e. f(a+b) = f(a) + f(b), like an EQ and not like a distortion), and time-invariant (delaying the input just delays the output, like an EQ or a distortion and not like a tremolo effect). However it may very well be non-invertible: consider a notch filter, for instance. Filtering a pure sine wave at the frequency of the notch gives a result of zero, therefore whatever filtering you put afterwards you will not be able to recover the original signal. On the other hand, some nonlinear processes are perfectly invertible. If you take the cube root of a signal, the output will not be proportional to the input (hence nonlinear). But if you then take the cube power of the result, you will perfectly reconstruct the original signal. (As a matter of fact this idea is frequently used in telecommunications)
@@alexthi I agree fully with what you said. Your examples put the finger on cases that go against what I said. It's not as simple as what I wrote, and I may have gotten a few details wrong. I am by no means an expert. (on my third year in EE, in the middle of the first course in DSP)
A nonlinearity of x^2 for example can be negated by x^(1/2) since x^(2*1/2) = x, which is linear, so contrary to your comment, nonlinear processes CAN be reversed by using the inverse process. A better (and simpler) definition of a nonlinear process is that a nonlinearity acting on the sum of two signals is not necessarily the same as the sum of the signals with the nonlinearity applied individually.
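To put numbers on the invertible-but-nonlinear idea from the replies above, here is a tiny numpy sketch of the cube-root example (my own toy, not from the video): the cube root is perfectly undone by cubing, yet it fails the additivity test, so invertibility and linearity really are independent properties.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(1000), rng.standard_normal(1000)

cuberoot = np.cbrt  # an invertible but nonlinear "waveshaper"

# Invertible: cubing the output recovers the input (to float precision)...
print(np.allclose(cuberoot(a) ** 3, a))                          # True
# ...yet nonlinear: it fails additivity, f(a+b) != f(a) + f(b)
print(np.allclose(cuberoot(a + b), cuberoot(a) + cuberoot(b)))   # False
```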
As a developer and engineer, this is extremely interesting and makes perfect sense, but wouldn't a zero difference require an effectively infinite noise-floor? I guess that's basically what we have in a 32bit float environment anyways, but I'm curious as to whether this would actually be non-destructive in a laboratory setting, or if it's just non-destructive to our ears...
@@eugenefullstack7613 You can do the old phase reverse trick, then check the level of the summing bus. I don't think bit depth influences the cancellation as long as it stays constant. Anyhow, you should get total cancellation.
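For anyone who wants to try that null test numerically, here is a minimal sketch (a toy one-zero filter standing in for an EQ move, not any particular plugin): apply a linear filter, then its exact inverse, and the residual sits down at the arithmetic rounding floor rather than anywhere near audibility.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
x = rng.standard_normal(48000)        # one second of noise as the test signal

b = [1.0, -0.5]                       # toy one-zero "boost" (minimum phase, so invertible)
boosted = lfilter(b, [1.0], x)        # apply the filter...
undone = lfilter([1.0], b, boosted)   # ...then its exact inverse (a one-pole IIR)

residual = undone - x                 # the classic polarity-flip null test
print(20 * np.log10(np.max(np.abs(residual)) / np.max(np.abs(x))))  # hundreds of dB down
```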
The mighty myths: gain = distortion, tremolo = vibrato & linear = whatever the last "plugin" flogger suggests. Informative and the odd clever dig to make it amusing. Thanks.
Fascinating! I hope this is leading to a gain structure masterclass! Track level, daw input level, plug-in strip in/out, plug in fader, daw fader, send levels, bus levels, sub groups, vca, master bus plugins, master fader. Linear/non linear. So much potential to help/harm a mix just at the gain stage/fader volume level. Wisdom and guidance please. Cheers!
Profoundly educative, as always, Dan! I love how you educate the guitarists as well ;). Being an engineer AND a guitarist, you got it off my chest as well :) Excellent video, thank you.
I think this could use a bit of expansion on intermodulation. A linear function follows the rule that f(a*x+b*y)=a*f(x)+b*f(y) and your change in input level only shows that f(a*x)=a*f(x). It's part of being linear, but not all of it. The other part is that a linear plugin on a bus receiving two inputs produces the same output as both inputs going through instances of the plugin and then summed. Distortion-free nonlinearities occur, when the relation above holds for x and y being levels of signal at a particular point in time, but not for x and y being the entire input function.
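A quick numerical sketch of that distinction (a made-up "auto-gain" that rescales each buffer by its own crest factor - my own toy, not any real plugin): it applies one static gain per buffer, so it adds no harmonics and passes the level-scaling check, yet it still fails superposition.

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.standard_normal(4096), rng.standard_normal(4096)

def auto_gain(sig):
    # scale the whole buffer by its own crest factor: a constant gain, so no added harmonics
    return sig * np.max(np.abs(sig)) / np.sqrt(np.mean(sig ** 2))

# Passes the level test f(a*x) = a*f(x): doubling the input exactly doubles the output...
print(np.allclose(auto_gain(2 * x), 2 * auto_gain(x)))             # True
# ...but fails the two-input test: processing the bus != summing the processed tracks
print(np.allclose(auto_gain(x + y), auto_gain(x) + auto_gain(y)))  # False
```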
yes, and also intermodulation distortion has been glossed over when discussing distortion produced by analog circuits, for the sake of brevity I'm sure. Do you have an example of the latter situation?
Totally agree. It misses a big part of the picture. There are also a bunch of other processing that are nonlinear, basically every sort of dynamic plugin is intrinsically nonlinear, compressor, gates, etc. I'm thinking that spectral effect (autotune, vocoder, etc) may actually also be nonlinear.
@@lucianodebenedictis6014 I think the classic example of intermodulation using analog equipment remains an overdriven electric guitar playing power chords. On first approximation you get the sum and difference tones (and neatly enough the sum and difference frequencies of upper partials are harmonics of the difference tone resulting from the fundamentals). You have the root of the power chord and the note a 5th up, which is (ignoring finer details of tempered tuning) 1.5 times the frequency of the root. So the difference tone sits at 1.5-1=.5 times the frequency of the fundamental, which is an octave below. So playing power chords through an overdriven amp adds additional frequency content below the chord, making it sound, well, powerful. With the inversion of the power chord (smoke on the water comes to mind), you get a perfect 4th, which is 4/3 of the root, leading to a difference tone at 1/4 the root or two octaves down. Pretty much any other interval gives you some pretty wild additional tones (often outside of the western 12-tone scale). As soon as you add a third note you also get another level of intermodulation and now you are looking at the combinations |af+bg+ch|, with a,b,c in {-1,0,1} and f,g and h the respective fundamentals.
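Here is a rough numpy illustration of those difference tones (an asymmetric tanh waveshaper as a crude stand-in for an overdriven amp - purely a toy, not a model of anything): a 200 Hz root plus a 300 Hz fifth produces energy at 100 Hz, an octave below the root, alongside the other intermod products.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
chord = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 300 * t)   # root + perfect fifth

def amp(s):
    return np.tanh(2.0 * s + 0.3 * s * s)   # asymmetric clipping: odd- and even-order products

spec = np.abs(np.fft.rfft(amp(chord) * np.hanning(sr)))
freqs = np.fft.rfftfreq(sr, 1 / sr)
for f in (100, 200, 300, 400, 500):         # 100 Hz = 300-200 (and 2*200-300), 500 Hz = 300+200
    level = 20 * np.log10(spec[np.argmin(np.abs(freqs - f))] / spec.max())
    print(f"{f:4d} Hz: {level:6.1f} dB rel. to the strongest component")
```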
@@yass8483 Dynamic processors are all nonlinear if you take signals over time as an input. They are linear (at least if they don't model analog gear) at any given point in time. A compressor (and gates, expanders etc. have the same property) basically switches between two linear transfer functions. If it's soft knee it goes through a series of linear combinations of linear transfer functions, which are also linear. So for each individual sample it is linear, for a longer sequence of samples it isn't. If you shorten the attack and decay times, you can - as Dan does - view this as creating a non-linear transfer function. Alternatively you can view this as the same linear transfer functions as before, but the switching starts to happen at audible rates, which then introduces additional frequency content not as distortion, but as Amplitude modulation sidebands - the maths work out to the same result. Spectral processing doesn't have to be non-linear. The FFT is a linear transformation, so it will depend on what you do in the conjugate space that determines whether it is linear or not. I'm pretty sure vocoders are linear, but I know that the pitch detection for autotune, melodyne etc. is non-linear. The pitch shifting itself is linear. I don't think any of these offer side-chaining (i.e. generating the control signal from another source than the audio you are processing), but if they did given the same SC the processing would be linear, but the SC input would act in a non-linear fashion. (Edit: That would also be a good reason to view compressors as AM modulation rather than as a non-linear transfer function. If you have very short attack and release times and use a SC, the additional frequencies in your result will appear as AM modulation with the frequencies of the SC, not as harmonics of the signal).
@@simongunkel7457 What do you mean by SC? But you are right, FFT is indeed a linear operation, and it's also reversible. For pitch shifting, I just can't see how the process of changing the frequency could ever be linear, or what function could linearly affect a sum of sine functions in such a way as to produce sine functions of different frequency, but maybe there is something that I am missing. I am not quite clear on which operation done in Fourier space would result in a nonlinear operation in the time domain, but I am no signal theory expert, so you tell me ;) And yes, also right, vocoders are indeed also linear, as they are just modulating levels of different partials independently of one another, according to a control source; applying one to a sum of two signals should be equivalent to applying it independently to each and summing after.
Wow, the atmosphere of the video with this music is like you're diving into a very deep hole of secrets or conspiracy theories :) I've got so many insights from it. I always knew that Pro-Q3 is a clean EQ, but I haven't realized, HOW clean it is. So, FF Volcano 3 is also non-linear, right?
THANK YOU -- the fact that guitarists (and guitar amp manufacturers) refer to distorting/overdriving amplifiers as "high gain" has always vexed me. It's really "failed gain." Now, failed gain can sound AWESOME, but it's not "high gain."
Brilliant.. well explained, and from someone who's obviously been involved in making records for many years, unlike the video channel you're referring to. I too experienced it popping up on my YT feed, only to be bemused by its strange regurgitation of bite-size marketing mixed with tutorial quotes from someone who has yet to make a record.
Hi, I don't know why the YT algorithm didn't show me this channel before lol. This is the most useful channel about audio that I've found on the internet. I'm enjoying watching all your videos. New sub here, and I'll leave a like on every video. Greetings from Argentina.
Hey Dan, myself and for sure many other people would appreciate an in-depth series about PD and the correct usage and interpretation of its curves, demonstrated with real-world use cases. In fact there is hardly anything useful to find on YT about it. This would be a great help imo. Any plans? Thank you.
It is great to see my major cross paths with audio engineering. I'm studying mechanical engineering, and this semester I'm in an experimental engineering class where the topic was doing measurements and experiments and reporting them, basically with all the needed terminology. In the error and measurement lesson the professor told us about linearity, and the exact graph at 7:29 was used to show non-linearity. It was mentioned that in real systems truly linear behaviour is only approximately reached - quoted exactly from the lecture slides.
sadly I know the channel he is talking about. thank you very much for not saying his name. He kinda has a point here and there and means well. But your video makes a lot of sense and it really clarified some stuff for me as well. Have a great day dear sir.
lol, I just watched the video you're responding to this week.. you're a gentleman for not pointing out who he is, but at the same time it's important for the youtuber to acknowledge the erroneous information he posted.. and obviously take down the video, as it's wrong info and will mislead people into thinking non-linearity is simply an effect regarding eq curves.
Excellent presentation on a very complex topic. The style and content are ideal for my interests and limited attention span. I often listen to instructional videos at 1.25 speed, but Dan's presentations are so fast and concise that I watch at 1.0 to keep up with the rapid flow of info. This vid packs a lot into a short presentation, so I enjoyed watching it twice.
I'm curious about UAD LUNA. It may be that they simulate the analog summing in a nonlinear way, meaning something that can't be achieved with plugins on individual tracks and/or on the master track! In an ordinary DAW the summing is always linear regardless of what saturation plugins are used.
It may also be useful to some to understand 2 tests for a system's linearity: the superposition and homogeneity principles. Superposition principle is: if you first add (mix) 2 signals together and then process them through the system, you'd get the same result as if you process them individually through 2 copies of that system and then add them together at the end. Homogeneity principle is: if you apply gain to a signal and then process it through the system, you'd get the same result as if you first process the signal through the system and then apply that gain. If a system is linear, both of these principles will apply. Effects like compression and distortion fail both of these tests and are thus nonlinear.
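Those two tests are easy to run on any black-box process. A minimal sketch (toy processes of my own, not any specific plugin): a plain gain passes both tests, a hard clipper fails both.

```python
import numpy as np

rng = np.random.default_rng(3)
x, y, k = rng.standard_normal(1000), rng.standard_normal(1000), 0.5

def superposition_holds(f):
    return np.allclose(f(x + y), f(x) + f(y))

def homogeneity_holds(f):
    return np.allclose(f(k * x), k * f(x))

gain = lambda s: 0.7 * s                  # just a volume change: linear
clip = lambda s: np.clip(s, -0.5, 0.5)    # hard clipping: nonlinear

for name, fx in (("gain", gain), ("clip", clip)):
    print(name, superposition_holds(fx), homogeneity_holds(fx))
# gain True True / clip False False
```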
one of the best Channels out here! Please explain the Plugin Doctor VST in detail. That would help a lot. To be honest... I don't know what it shows me. Sorry for my bad English
WOW, first time I've heard (and seen) such a good explanation of this. The Hammerstein analysis is a mind-blowing visualization for me!!!! How cool is that!! Now I see very clearly why some saturations are more "fat" and others more "crisp". An even cooler way would be to visualize that with an added time axis, like a waterfall, showing how a plugin reacts harmonically over time. Since no real-life sound is static, maybe it would show how a plugin reacts to a transient, for example? Different harmonics appear differently depending on the input energy - sometimes with resonance, like from transformer hysteresis. On percussive material like drums, a kind of "sweet spot" is reached when only the transients get saturated, leaving most of the body of the sound untouched, making gentle saturation a nice (and surprisingly transparent) transient control.... For now, Plugin Doctor is also buggy and laggy on my almost-new processor... It might have to do with the software....
I wasn't gonna say anything but yeah, he's referring to me. I'm happy to admit when my explanation of certain things isn't great at times, which in some parts can be interpreted certain ways, and sometimes you get stuff wrong. In my defense, I asked Dan to explain to me where he felt I went wrong prior to this video, but Dan took the route of doing it as video content instead, which is absolutely fair game. It's youtube at the end of the day. However I don't know if Dan's saying everything I said in the video was wrong, as that's actually quite far from the truth - a lot of what I said was echoed in this video.

At the very start I talked about how linear is mathematically precise and how what you put in is what you get out, and I used fabfilter, waves f6 etc as examples of that, where a non linear system like in analog isn't. The signal is impacted by being passed through the components of the gear. This can cause noise, harmonics and other non linear behaviour which is random. In regards to the curves, I talked about how plugins are essentially snapshots of non linear behaviour. That's why I talked about the tolerances the TMT adds, and how convolution gives you impulse responses of the gear which will show a static snapshot of non linear behaviour.

I can see the curve element being picked apart, as it could be seen as calling an eq curve of one plugin non linear compared to another plugin which could probably match it if you used tons of different bands. So in a sense I call the fabfilter eq linear, but in reality it has enough bands to match another curve I deem as non linear. So logically the terminology is flawed, but it's all about context. I made a video showing what kind of things to look for when comparing analog emulations. I used the term non linear when describing certain plugins whose frequency responses, just running through the plugin, weren't completely flat and linear. I know what I meant, and maybe I shouldn't have edited so much stuff out of the video, but I knew I had the next video planned the week after, which was actually showing real analog curves compared to plugin curves. There I show real analog non linearities caused by a non linear system. I went on about how plugins are static representations of this and showed that in the Bertom curve analyser. I talked about the advantages of plugins and how the actual linearity of plugins can be a benefit, as you have a consistent result that isn't random.

I mean, can it not be argued that everything in plugins in theory isn't non linear, because it's coded to act in a predictable manner? It's not random like analog. A plugin isn't going to sound different from day to day. It's not going to be affected by wear, age, humidity etc. We can say that fabfilter is non linear when in dynamic mode, or when a plugin makes thd, but surely there is still an element of linearity due to the fact that no matter how many times you load that preset it'll give the same result. It doesn't have the randomness of non linear analog behaviour, where its frequency response, attack envelopes, noise, thd etc etc can be slightly altered every time you switch it on.

I'm happy to eat humble pie, but I'm also more interested in diving deeper into all this stuff and defending myself where I feel I made a lot of good points. I've taken this like everything I said was wrong, where in reality I know it all wasn't. A lot of what Dan said, I said. Some parts we differed.
Maybe certain elements I didn't explain well, and maybe certain bits I just didn't get right, but in the end I think it's all about sharing and learning more. Don't get me wrong, I hold Dan in the highest regard. I genuinely respect his knowledge and experience, which I know is greater than mine, but just because Dan Worrall said Paul Third was wrong doesn't mean you should view that as bible and that everything I've ever said or done is now suspect. I'll continue to learn and I'm happy for others to learn from any mistakes I make, but in the end context is very important. Many can argue over what the term non linearity actually means, as it has different contexts and audio is a massive melting pot of information. I'll probably get trolled a bit for standing up for myself, but if it means I and many others learn from the experience then so be it.
I‘m glad he made the video, that way we all have a chance to learn something. Back in school I was taught the difference between linear and non-linear distortion, the former being frequency dependent and „correctable“ by means of eq, the latter being an „irreparable“ change in dynamics, which naturally includes harmonic distortion and other artifacts, such as aliasing in converters. I personally would love it if more of your videos focused on how you use your favourite processors and why. I also think it would be interesting if you decided to learn dsp yourself and document your journey. Personally, learning a little dsp myself and working with the amazing Sonible team has helped me get so much more out of my go-to effects. Cheers :)
@@theCloneman5 I just don't have the time haha. I'm genuinely just picking up scraps as I go along from other engineers and experimenting whenever I have time. Between my day job and youtube I'm doing like 60-70 hours a week, so to find time to learn about that and get experience somehow, I'd probably need to give up my job 😂 It's something I plan on doing down the road for better understanding, but I genuinely just don't have the time. I'm actually contemplating going down to just one video a week and ditching the autism channel altogether, cause I just don't have enough time. I'd use the Bernard's Watch reference but I very much doubt you got that in Germany haha
Technical terms aside, let's not lose sight here. You are both showing that some companies make marketing claims that are simply not true. Saying „this tool behaves like the analogue gear“ when in truth it's just 100% digital is the real problem here. Yes, using technical terms right is nice, but it's also not the end of the world. It's important that people understand the meaning, and that was perfectly clear.
Paul your video still rocks! And I agree with you. A lot of it was and still is right. It’s a matter of context and terminology I guess. Maybe experience. Keep the good work!
@@FlorianRachor1 yeah, I do think there has been a clash of terminologies here. I maybe take the description of non linear too literally (which, being autistic, isn't a surprise 😂), as in my head I deem non linear behaviour to be the changes to the signal path caused by the components of the gear, as it has no relation to the input. Just running through the circuitry alters the output in a non mathematical fashion, as it creates random behaviour that can change the output almost every time you use it. By nature it's inconsistent and can change more over time, and I see that as non linearity. Perhaps my terminology is wrong, but I speak to some engineers who agree with me, and some who agree with Dan and see non linear only as distortion.

But you're right, I thought me and Dan were on the same page in terms of trying to show plugins under the hood and debunk some marketing BS, but unfortunately he took the approach of labelling me with the tag of all the other youtubers, which I feel is very harsh as I always have the best intentions in everything I do. I don't claim to be anything I'm not. I don't try and sell mixing courses. I just experiment with stuff, show my results and then continue to learn from the experience. I share every bit of learning I have and use others to adapt that and work harder to learn more. Personally I feel we could have worked together on this, and it would have been a good way for Dan to share his experience and perspective with someone who is willing to learn more. I could have explained where my perspective came from, and it could have been a good experience and also good content for all you guys, to show that we do all this for your benefit. Whereas what's happened is I've been left trying to defend myself against one of the most respected audio youtubers. It could have been done differently, but I just have to dust myself off and see what learning I can take from the experience. I think I can say for sure the idea I had in my little autistic daydreams, that me and Dan would join forces one day, has probably been quashed now.. Ach well.. Life's a bitch haha
I haven't produced anything in my life but I could watch these videos all day. You can just tell from the delivery, the detailed and logical explanations and demos that this dude knows his shit at a special level. Not to mention the video audio sounds better-mixed than most of the music I listen to.
Thank you. Checking in as a hobby experimental bleep bloop sound maker, and I absolutely looooove too much distortion on everything. Epiano tones through a pocket metal pedal kind of ridiculousness. I appreciate learning more about what's going on there. Cheers!
Someone once said "talking about nonlinear systems is like talking about non-elephant zoology", which I think points out an interesting fact about nonlinearity: Almost everything is non-linear - something being linear is the very rare (but important) exception. Although in audio we often assume nonlinearity to mean 'adding harmonics' or 'saturating', this is not generally true: "Non-linear" just means what it says: something is "not linear" - but whatever else it might be, we in fact don't have a clue! A system where I put a Mozart sonata in and get a Beatles song out is also a nonlinear system, for example.
Well put. In addition, non-linear systems only affect the harmonics of periodic signals (sine waves, triangle waves, sounds of wind instruments, guitars to a degree, etc.) and only when the systems are time-invariant. With more complex signals all bets are off and talking about harmonics is kind of pointless. What to expect of a non-linear effect is best illustrated by this example: Applying compression to individual tracks first and then mixing them gives you different results from applying compression to their mix afterwards. It will just sound different. You can't say anything more specific about general non-linearity than that. By the way, a system that transforms a Mozart sonata to a Beatles song can be perfectly linear-- in theory. You could calculate the deconvolution kernel of the sonata and convolve the Beatles song with it to get a filter. This filter would be perfectly linear and would turn the Mozart sonata into the Beatles song. It would be infinite in time, though, and the least amount of noise on the input would totally break it, so it's not really possible in reality. A system that simply identifies the sonata and outputs whatever in return is much more plausible, and, of course, not linear.
This was a great explanation. Also the track used for this is super cool - I really enjoyed the synth bass sound having some nice saturation! Would love to hear the track by itself or even have a breakdown of it. Love your videos!
The simplest definition of nonlinear would be f(a+b) not equal to f(a)+f(b), where a and b are 2 signals and f is the processing applied (eg distortion). This means that applying the processing to the sum of signals (eg a bus group), or independently to each signal and then summing afterwards, will lead to different results. That will of course be the case with distortion, but also, and I wish you had mentioned it, compression! That would have been a better way to explain nonlinearities, especially since I think most people will understand that applying compression to individual channels clearly leads to different results than applying compression to your final mix.
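That is easy to hear and also easy to verify numerically. A rough sketch with a toy static compressor (no attack or release, just a hard-knee curve, not modelled on any real unit): compressing two sources separately and then mixing is clearly not the same as compressing their mix.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
drums = np.sign(np.sin(2 * np.pi * 2 * t)) * np.exp(-5 * (t % 0.5))   # spiky, bursty source
pad   = 0.6 * np.sin(2 * np.pi * 110 * t)                             # steady source

def compress(sig, thresh=0.5, ratio=4.0):
    # toy hard-knee compressor: everything above the threshold is scaled down by the ratio
    over = np.maximum(np.abs(sig) - thresh, 0.0)
    return np.sign(sig) * (np.minimum(np.abs(sig), thresh) + over / ratio)

tracks_then_sum = compress(drums) + compress(pad)   # compress each channel, then mix
sum_then_bus    = compress(drums + pad)             # mix first, then compress the bus

print(np.max(np.abs(tracks_then_sum - sum_then_bus)))   # clearly nonzero: the order matters
```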
Mostly though it's people who know better, choosing not to care. Partly this is trolling of non-guitarists, similar to how astronomers call every element past helium a "metal".
The toughest thing about learning sound engineering today is getting bombarded by 20 different ways to do a thing and everyone screaming at you that they're right and everyone else has no idea what they're talking about. Oh, that and finding a job in anything other than bland disposable electronic music
I just need to get this off my chest - all these “youtube producers” (or instagram, or tiktok) cater to beginners trying to do EDM because.. well, it’s a popular genre now and because it doesn’t actually require any instrument skills to make. That’s great, just imagine what would have happened if Mozart’s genius were to be hindered by not owning a piano. But this also means there’s a huge wave of kids who think music production is as easy as following templates. And what you get is this insane amount of “educators” profiting off the immaturity and lack of genuine experience beginners have - “this sample pack will make your productions sound professional” should sound too familiar to you all. Thanks for coming to my TED talk lol
I love the way you produce your videos. The music behind this one, as with many others of yours, really kept the momentum going and held interest while you were explaining the topic. Also, I appreciated the video distortion on the closing title. Nice touch :)
There was something seriously wrong with your voice processing in this video. There were a few very noticeable parts, but it was spread almost everywhere. Some kind of distortion or glitching. Could've been a planned effect considering the topics of the video, but it didn't really sound intentional. One example starts at 7:45.
1. Your videos are the best. 2. I tried and failed to find documentation or videos explaining the Hammerstein tab in Plugin Doctor. Thank you! This is a godsend, as I've been doing lots of comparisons of different saturation plugins, and this tab is much more informative than the harmonics tab! 3. You say that using the "linear analysis" tab on a non-linear plugin "breaks the tool". So when I load a saturation plugin, most often (but not always!) a naive read of that tab would seem to indicate that the saturation plugin EQs the sound (often quite a lot, according to a naive read). Do you mean that this EQing is not actually happening -- but that the added harmonics will "confuse" Plugin Doctor's "linear analysis" into displaying something that is misleading? I'd love to understand that more...
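I can't speak for Dan, but one way to see why a single "linear" curve is misleading for a saturator: the gain such a measurement reports depends entirely on the test level. A toy sketch with a plain tanh stage (no filtering anywhere in it, and not what any particular plugin does):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 1000 * t)     # 1 kHz test tone, exactly on an FFT bin

drive = lambda s: np.tanh(2.0 * s)      # pure waveshaping: there is no EQ in here at all

for level_db in (-40, -20, -6, 0, 6):
    x = sine * 10 ** (level_db / 20)
    gain_db = 20 * np.log10(np.abs(np.fft.rfft(drive(x)))[1000] /
                            np.abs(np.fft.rfft(x))[1000])
    print(f"{level_db:+3d} dB in -> apparent gain {gain_db:+6.2f} dB")
# quiet signals read roughly +6 dB, loud ones much less: there is no single curve to draw
```

So the curve such an analysis draws for a saturator is best read as a snapshot at one drive level, not as a fixed EQ the plugin applies - at least that's how I understand it.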
Dan, an electro-acoustic phenomenon that has always eluded me is what makes mono signals sound more "3D." Tube microphones, for example, can sound like they have more "depth" than their FET peers. Lots of gear forums discuss this effect as "mojo" or "analog magic." But clearly there is a measurable and replicable audio phenomenon happening. Can you explain?
@@charlieharding8134 It's not all tube distortion, that's for sure - those cheap TLA audio preamps do the opposite! I once got the opportunity to take impulse responses of a Focusrite preamp that was giving me the '3d', but the convolution didn't have the 3d in it, even though the fft was identical. So it must be dynamic in nature. I used to think it was a trait of high end gear, but by the same token people recording through really high end gear have great rooms too. So I wonder if it's just certain psychoacoustically sympathetic distortion bringing those good, balanced ERs up in the recording; they wouldn't be distinctly identifiable because good ERs would be in the Haas range. And we know from Bob Katz that any Haas effect biases our brains towards ambience.
As a guitarist there are blurred lines when you try to transfer the jargon over to the context of mixing. The tremolo/vibrato thing is completely skewed, so much so that the piece of hardware many guitarists use as a bridge is still commonly mislabelled as a tremolo. It seems like guitarists are still misinformed from the times when a guitar amp didn't even have an independent gain control. The preamp and power amp were tied together, and the volume knob on those amps would simultaneously increase the gain in both sections, so you would have to make the amp really loud if you wanted to introduce overdrive into your sound. This only really changed once amps like Marshall's JCM800 separated them and a pre-amp control was introduced, which allowed you to dial in all the overdrive you wanted whilst controlling the overall volume with the power amp level. Thanks for another great video Dan, they're always super informative!
I think you've made some assumptions about early amp designs that aren't quite correct. Early amps (such as, say, tweed Fenders like the Champ and Princeton) do have independent gain for the preamp and poweramp; it just happens that the gain is fixed for some portion of the circuit. Which portion depends on the circuit. If you look at the Fender 5C1 Champ circuit (below), you'll see the pot sits between the 6SJ7 pentode preamp tube and the 6V6 poweramp tube (it's the 1 Meg variable resistor connected directly to the grid of the 6V6), whereas the preamp has a fixed grid resistor setting the input gain. The Fender 5F1 Champ (also below) introduced the classic configuration of using a dual triode (the 12AX7) for the preamp, with one of the triodes acting as an input buffer and the second controlling the gain. In this configuration, the 6V6GT poweramp tube has a fixed gain, and the volume pot (a 1 Meg variable resistor this time) is installed on the grid of the second half of the preamp tube, controlling the preamp gain. Earlier 5C1 circuit diagram: i2.wp.com/myfenderchamp.files.wordpress.com/2010/03/champ_5c1.png The iconic 5F1: i.pinimg.com/originals/a3/a7/44/a3a744c0165a368c2b3d4f2456df06e3.gif I chose the Champ circuits because they are the simplest and have exactly one pot, which simplifies examining the diagrams.
True about information today. This is one of the few YouTube channels I'm watching. Virgil Donati (one of the 7 drummers who auditioned for Dream Theater) said in an interview that the hardest part of getting to where he is today as a drummer was weeding out the infinite bad info floating around.
Hi Dan, thank you for these videos! You manage to explain complex topics in a way that’s easy to understand for people who didn’t receive a full audio engineer education. Greatly enjoyed watching these. :) Can I add one request? For one of your next videos, I’d love your voice analyser to be The Snail by Ircam Labs. :D
Yes, I had the same feeling that matters were not being represented correctly, at probably the same recent video 😉. Thanks for explaining and clearing it up!
Analog gear creates it as well, but it gets filtered out in the converters. Digital distortion doesn't filter it out (unless you use oversampling) and creates aliasing artifacts.
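A rough sketch of that, for anyone who wants to see it in numbers (a cubic waveshaper as the "digital distortion", and scipy's resampler standing in for a plugin's internal oversampling - my own toy, not any product):

```python
import numpy as np
from scipy.signal import resample_poly

sr = 48000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 15000 * t)            # 15 kHz sine

drive = lambda s: s - s ** 3 / 3             # cubic shaper: creates a 3rd harmonic at 45 kHz

def level_at(sig, freq):
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    bins = np.fft.rfftfreq(len(sig), 1 / sr)
    return 20 * np.log10(spec[np.argmin(np.abs(bins - freq))] / spec.max())

naive = drive(x)                                              # 45 kHz folds back to 48-45 = 3 kHz
over4 = resample_poly(drive(resample_poly(x, 4, 1)), 1, 4)    # distort at 192 kHz, filter, come back

print("3 kHz alias, no oversampling :", round(level_at(naive, 3000), 1), "dB")
print("3 kHz alias, 4x oversampling :", round(level_at(over4, 3000), 1), "dB")
```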
I took a look at that Lindell 80 series plugin in my own Plugin Doctor recently and found something odd that isn't appearing in your video...in mine, the plugin inserts an antialiasing filter at 20kHz, regardless of the oversampling settings or project sample rate. I'm very surprised to see that it's not doing this on yours.
The YouTube university of Dan Worrall has been better for me than my year of music tech college... Now I have a job in the industry, and I found learning the hard way to be much more effective than any classroom 😂
Thanks Dan! I've seen so many wrong uses of "non-linear" as a description I was starting to doubt my own understanding... the internet has become an out of control mega game of "broken telephone"! I think you're on to something with the myth busting... we NEED a voice of authority to explain stuff properly and comprehensively.
A lot of people new to math get confused about the meaning of linear. Linearity is two properties, *additivity* and *homogeneity*. Suppose I take a signal x and feed it into a filter f, giving me the output f(x). Suppose I have another signal y, and likewise I can feed that into the filter with the output f(y). Suppose further that I can scale either signal's amplitude by A and B respectively. The filter is linear if and only if f(Ax + By) = Af(x) + Bf(y), where f(Ax) = Af(x) is the property called *homogeneity* and f(x+y) = f(x) + f(y) is the property called *additivity*. To put this all in terms of a DAW, suppose you have two tracks with samples playing, 1 and 2. You put the same plugin with the same settings on each track, then increase the gain of each track by 3dB, respectively. That's the right side of the equation. This will sound identical to sending both tracks to a single send channel with the same plugin with an input gain of +3dB (the left side of the equation) *if and only if* the plugin is linear.
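A sketch of exactly that two-track experiment (a little FIR filter and a tanh clipper as the stand-in "plugins" - toys of my own, not specific products):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(5)
track1, track2 = rng.standard_normal(48000), rng.standard_normal(48000)
A = 10 ** (3 / 20)                                   # +3 dB as a linear factor

eq  = lambda s: lfilter([0.25, 0.5, 0.25], [1.0], s) # small FIR smoother: linear
sat = lambda s: np.tanh(s)                           # soft clipper: nonlinear

for name, fx in (("eq ", eq), ("sat", sat)):
    per_track = fx(A * track1) + fx(A * track2)      # plugin on each track, each pushed +3 dB
    on_bus    = fx(A * (track1 + track2))            # both tracks into one bus plugin at +3 dB
    print(name, np.allclose(per_track, on_bus))      # eq: True, sat: False
```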
About gain vs volume... it's worth noting that gain is on a dB scale, but volume is sometimes a percentage, where 0 brings silence. With a gain of 0 (dB) you can still have a signal hot enough to pass.
That's not really relevant. In fact gain expressed as a decibel value will have to be converted to a linear amplitude factor for the actual gain change. Eg +6dB gain means multiplying your voltages or sample values by 2 (approx). -6dB means multiplying by 0.5.
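In code form (amplitude convention; just a restatement of the numbers above):

```python
import numpy as np

db_to_factor = lambda db: 10 ** (db / 20)     # amplitude (voltage / sample value) convention

print(db_to_factor(6.0))                      # ~1.995: +6 dB is "roughly times two"
print(db_to_factor(-6.0))                     # ~0.501
print(20 * np.log10(2.0))                     # ~6.02 dB: the exact figure behind the rule of thumb
```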
@@DanWorrall that was more an argument on the 'Gain = Volume' statement than on the non linearity aspect of things, as Volume can sometimes refer to a multiplier where 0 would lead to no sound and 1 (100%) to unaltered sound. Am I confused by the terminology here ?
That Wikipedia reading is going straight into the new Boards of Canada song
Or HAB 3
Didn't expect to find you here, but I guess there is always something new to learn :D
"We dont just listen to arrangements of pure sine waves"
Laughs in Fourier
"So what I told you was true... from a certain point of view."
Given the band-limitations of human hearing we couldn't hear the arrangement of pure sines, because we perceive lower frequency content as rhythm and even lower frequency content as changes in sections of music, etc. On the other hand, we can more accurately detect both rhythm and pitch than the Fourier transform implies - the reason being that the Fourier transform is linear, but our hearing apparatus is not. It's unclear whether anybody has ever listened to a pure sine, for that reason: our ears add harmonics to a pure sine if it reaches our ear, and while our brains process this sensory data, whether they accurately compensate so that you perceive a pure sine isn't that clear. What you hear is probably as free of harmonics as you are physically capable of hearing, but a loud enough sine with a frequency below human hearing can still produce some sort of perception through the harmonics introduced by ear anatomy.
What he said before is that harmonics are what make instruments sound interesting. That's why I think he meant "we don't just listen to arrangements of _fundamental_ sine waves." If we did, we would be living in an 80s electronic music nightmare. So I think he's pointing out the reason why A2 in the cello sounds different from A2 in an electric guitar, and that, with some saturation, we can enhance those differences and make things sound more interesting.
We also listen to cosines, so what he's said is true.
@@lambd01d We're incapable of hearing phase, so arguably "nah". On the other hand we are good at hearing phase differences, so "yay".
I'm a guitar player and just wanna thank you for this, especially the vibrato/tremolo bit! I thought I was the only one bothered by this.
You're far from alone!
Wow, Martin Miller. If you, reader of this comment, haven't seen his live cover of Pink Floyd, I urge you to - it's great!
And let's not start about guitar compressors... 🤦♂️🤦♂️🤦♂️
Nice to see you there! Love your teaching attitude, keep pushing! Wouldn't an examination of today's guitar amp simulators and processors be a great thing? Maybe you can collaborate with Dan on this? Or just ask him to do it)
Same here
I too come from a Live Audio background, something I’ve been doing for 30 years now, but I really began it all at University with a music degree and a specialty in Electro Acoustic composition. As a kid who always had audio engineering in my head, along with drum machines, sequencers, and a 4 track recorder through my teens, I had a moment when I applied to a renowned Recording Engineering school the same year I applied to music school. As much as I made abstract and very non traditional music I still saw myself more an engineer than a musician. Lo and behold I was rejected from engineering school and accepted to a music program. In my second year I signed up for this electronic composition program, and no, it wasn’t hip hop or dance music we learned about. What we had was access to a ‘studio’, which was an 8 track 1/2” recording machine, a couple of 2 track tape machines, a very very early sampler, a couple of ARP 2600’s and similar bit and bobs from the 70’s and 80’s. Our ‘engineering’ class? Exactly 2 hours (here’s a patch bay, here’s how you feed tape into the machine, here’s how you splice the tape, etc), then it was GO. Mess stuff up, make art, if you dare.
I spent three years like that, making 50’ tape loops that stretched around the room, just because why not. But what I primarily learned from my professor was how to listen to sounds a certain way. Density, dynamics, harmony, dissonance, cohesion, in very abstract terms how to MIX.
Eventually I moved to live audio, toured the world, and for a long time faked all the technique stuff while quietly learning and absorbing the ‘correct’ way to do things. I applied this to the studio when I could get into one, and to my live tours. At some point I could (almost) talk about correct techniques. But I’ll never forget going to do a session at the famous BBC Maida Vale studio, and watched in awe when the engineer pulled together an album ready mix of the band I’d done 150 shows with in about 20 min. That was a moment of clarity for me. There IS value in true understanding of audio engineering as a technical craft. Since then I’ve strived to top up what I cobbled together till then and really understand what is going on.
Suffice to say, Dan, I certainly relate to your past as a self-taught live engineer, but more than anything your way of filling in the gaps for me has been nothing short of incredible. Thanks for the videos and keep up the great work!
The intro segment is just too true. Too many so called "youtube producers" speak with full confidence about different subjects with clickbait titles ("The trick that No mAsTeRIng eNGineEr wIlL tEll yOu") when in reality it's not very accurate, or not true at all.
Thank you for this. You're among the few I trust.
so, do you know what non linearity means ?
so, do you know what non linearity means ?
@@Renaxelo do you know what an echo is?
@@Renaxelo do you know what an echo is?
@@Renaxelo Thanks to Dan's video, I have a better understanding of it.
Dan, I wish you were my teacher when I started audio engineering man. Grateful for doing this and for having this channel.
Yep. Gold.
"Like trying to weigh something using a tape measure." If this wasn't in a Douglas Adams book, it should have been.
You totally had me when explaining "vibrato" and "tremolo", with that little kick at Leo Fender (the "trem bar") - I hope you do get, in some sense, what a well of good and interesting information you are! Subscribing to your channel happens to be one of the best decisions I've made!
to be super duper fair, you can get a true tremolo out of a trem bar if you smack it and if you mod your guitar a bit, but I'm also guessing the first Fenders didn't do that :P
@@IDDQDSound Yep, if you're like really feeling intrigued to do so :) By the way: "IDDQD" as nick is a cool idea - why did I not think about that, lol :) ? In that sense: IDKFA
@@shaihulud4515 Love it when people get the reference :)
@@IDDQDSound I don't get the reference. Could you explain?
@@BrunoNeureiter it's a cheat code from the old Doom video games. Iddqd unlocked invincibility :)
lmao, my man called out 99% of guitarists just like that xD
omg I'm sending this to 3 of my friends who told me that tremolo and vibrato are the same thing
LOL, that was unnecessary but hilarious
Thank you for describing the Hammerstein measuring system! What a great tool.
Actually, linearity includes invariance under both multiplication (aka a change in volume) and addition, which is to say, in a linear plugin, if you run the sum of two signals, you should expect the same output as if you ran both signals through the plugin separately, then added them.
This is the reason we often use reverbs in bus tracks. Since reverbs are linear, this achieves the same as if I had one instance of the reverb in each track.
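That's easy to confirm with a convolution reverb, since convolution is linear by construction. A minimal sketch (white-noise stand-ins for the tracks and a fake decaying impulse response, nothing taken from a real plugin):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(6)
vox, gtr = rng.standard_normal(48000), rng.standard_normal(48000)
ir = rng.standard_normal(24000) * np.exp(-np.linspace(0, 8, 24000))   # fake 0.5 s reverb tail

reverb = lambda s: fftconvolve(s, ir)        # convolution reverb: linear by construction

on_the_bus = reverb(vox + gtr)               # one reverb on a send/bus
per_track  = reverb(vox) + reverb(gtr)       # one reverb instance per track

print(np.max(np.abs(on_the_bus - per_track)))   # rounding noise only: the two are identical
```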
The irony here, of course, is that it's linear only if it's a digital reverb.
@@williambrewer3150 idk about analog reverbs, but real reverberation is pretty damn close to linear. That's why impulse responses even work at all.
@@iurigrang They are linear given reasonable levels. There is a point where air stops behaving linearly and the same is true for things like plate or spring reverb. But I don't think anybody doing music is pushing levels to the point where it has an audible impact.
@@simongunkel7457 yup, this is what I meant by real reverberation being close to linear. Obvious, everything here only works to a certain point.
@@simongunkel7457 ok but what if I wanted to model the reverb inside an atom bomb's mushroom cloud?! 🤔
The vibrato/tremolo dig was fantastic, thank you.
my brain is drooling over how well articulated this is
thank you so much for pumping out these kind of videos
incredible as always, Dan. when you write a book and/or narrate an audiobook I'll be preordering it.
3:20 "i need to get that off my chest". Wonderfull. Thanks for the always interesting and never banal topics.
Amazing as always! I am happy there are a few good teachers out there explaining things correctly. Where would we be without you?
Thanks again ✌️😊
The GOTO for clear smart thoughtful audio info. Thank you.
Fascinating! I hope this is leading to a gain structure masterclass! Track level, daw input level, plug-in strip in/out, plug in fader, daw fader, send levels, bus levels, sub groups, vca, master bus plugins, master fader. Linear/non linear.
So much potential to help/harm a mix just at the gain stage/fader volume level. Wisdom and guidance please. Cheers!
Profoundly educative, as always, Dan! I love how you educate the guitarists as well ;). Being an engineer AND a guitarist, you got it off my chest as well :) Excellent video, thank you.
I think this could use a bit of expansion on intermodulation. A linear function follows the rule that f(a*x+b*y)=a*f(x)+b*f(y), and your change in input level only shows that f(a*x)=a*f(x). That's part of being linear, but not all of it. The other part is that a linear plugin on a bus receiving two inputs produces the same output as both inputs going through separate instances of the plugin and then being summed. Distortion-free nonlinearities occur when the relation above holds for x and y being levels of signal at a particular point in time, but not for x and y being the entire input function.
Yes, and also intermodulation distortion has been glossed over when discussing distortion produced by analog circuits, for the sake of brevity I'm sure.
do you have an example of the latter situation?
Totally agree. It misses a big part of the picture. There is also a bunch of other processing that is nonlinear; basically every sort of dynamics plugin is intrinsically nonlinear: compressors, gates, etc. I'm thinking that spectral effects (autotune, vocoder, etc.) may actually also be nonlinear.
@@lucianodebenedictis6014 I think the classic example of intermodulation using analog equipment remains an overdriven electric guitar playing power chords. To a first approximation you get the sum and difference tones (and neatly enough, the sum and difference frequencies of the upper partials are harmonics of the difference tone resulting from the fundamentals). You have the root of the power chord and the note a 5th up, which is (ignoring finer details of tempered tuning) 1.5 times the frequency of the root. So the difference tone sits at 1.5-1=0.5 times the frequency of the fundamental, which is an octave below. So playing power chords through an overdriven amp adds additional frequency content below the chord, making it sound, well, powerful. With the inversion of the power chord (Smoke on the Water comes to mind), you get a perfect 4th, i.e. the upper note at 4/3 of the lower one, leading to a difference tone at 1/3 of the lower note, which is two octaves below the chord's root (the upper note). Pretty much any other interval gives you some pretty wild additional tones (often outside of the western 12-tone scale). As soon as you add a third note you also get another level of intermodulation, and now you are looking at the combinations |af+bg+ch|, with a, b, c in {-1, 0, 1} and f, g and h the respective fundamentals.
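To make the arithmetic concrete, here is a quick sketch (my own, with just-intonation frequencies picked purely for illustration) that lists the first-order sum and difference tones of a power chord:

```python
from itertools import product

# Illustrative fundamentals: A2 at 110 Hz and the fifth above it (3:2 just intonation)
f_root, f_fifth = 110.0, 165.0

# First-order intermodulation products |a*f1 + b*f2| with a, b in {-1, 0, 1}
tones = sorted({abs(a * f_root + b * f_fifth)
                for a, b in product((-1, 0, 1), repeat=2)} - {0.0})
print(tones)  # [55.0, 110.0, 165.0, 275.0]: 55 Hz is the difference tone an octave below the root
```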
@@yass8483 Dynamic processors are all nonlinear if you take signals over time as an input. They are linear (at least if they don't model analog gear) at any given point in time. A compressor (and gates, expanders etc. have the same property) basically switches between two linear transfer functions. If it's soft knee it goes through a series of linear combinations of linear transfer functions, which are also linear. So for each individual sample it is linear, for a longer sequence of samples it isn't. If you shorten the attack and decay times, you can - as Dan does - view this as creating a non-linear transfer function. Alternatively you can view this as the same linear transfer functions as before, but the switching starts to happen at audible rates, which then introduces additional frequency content not as distortion, but as Amplitude modulation sidebands - the maths work out to the same result. Spectral processing doesn't have to be non-linear. The FFT is a linear transformation, so it will depend on what you do in the conjugate space that determines whether it is linear or not. I'm pretty sure vocoders are linear, but I know that the pitch detection for autotune, melodyne etc. is non-linear. The pitch shifting itself is linear. I don't think any of these offer side-chaining (i.e. generating the control signal from another source than the audio you are processing), but if they did given the same SC the processing would be linear, but the SC input would act in a non-linear fashion. (Edit: That would also be a good reason to view compressors as AM modulation rather than as a non-linear transfer function. If you have very short attack and release times and use a SC, the additional frequencies in your result will appear as AM modulation with the frequencies of the SC, not as harmonics of the signal).
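Here's a toy sketch of that "compressor as signal-dependent gain" view (my own simplification: instantaneous, hard knee, no attack/release smoothing), just to show that each sample is only ever multiplied by a gain, while the gain itself is derived from the input:

```python
import numpy as np

def toy_compressor(x, threshold=0.5, ratio=4.0):
    # Per sample: compute a gain from the signal's own level, then multiply.
    # Each sample is scaled linearly, but because the gain depends on the input,
    # the process as a whole is nonlinear (equivalently: amplitude modulation
    # by a gain signal derived from the input).
    level = np.maximum(np.abs(x), 1e-12)
    out_level = np.where(level > threshold,
                         threshold + (level - threshold) / ratio,
                         level)
    gain = out_level / level
    return gain * x

t = np.arange(4800) / 48000.0
x = 0.9 * np.sin(2 * np.pi * 100 * t)
y = toy_compressor(x)  # a gain-modulated copy of x; with fast time constants this shows up as AM sidebands
```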
@@simongunkel7457 What do you mean SC ?
But you are right, the FFT is indeed a linear operation, and it's also reversible. For pitch shifting, I just can't see how the process of changing the frequency could ever be linear, or what function could act linearly on a sum of sine functions in such a way as to produce sine functions of different frequencies, but maybe there is something I am missing. I am not quite clear on which operations done in Fourier space would result in a nonlinear operation in the time domain, but I am no signal theory expert, so you tell me ;)
And yes, also right, vocoders are indeed linear too, as they are just modulating the levels of different partials independently of one another according to a control source; applying one to a sum of two signals should be equivalent to applying it independently to each and summing afterwards.
Wow, the atmosphere of the video with this music is like you're diving into a very deep hole of secrets or conspiracy theories :)
I've got so many insights from it.
I always knew that Pro-Q3 is a clean EQ, but I hadn't realized HOW clean it is.
So, FF Volcano 3 is also non-linear, right?
THANK YOU -- the fact that guitarists (and guitar amp manufacturers) refer to distorting/overdriving amplifiers as "high gain" has always vexed me. It's really "failed gain." Now, failed gain can sound AWESOME, but it's not "high gain."
Whenever you feel the need to speak up, I'm listening. Thank you!
I second that.
Brilliant.. well explained, and from someone who's obviously been involved in making records for many years, unlike the video channel you're referring to. I too saw it pop up in my YT feed, only to be bemused by its strange regurgitation of bite-size marketing mixed with tutorial quotes from someone who has yet to make a record.
This was great, Dan. Loved the track, too - put me in mind of moments of 'OK Computer'. Cheers.
Hi, I don't know why the YT algorithm didn't show me this channel before lol. This is the most useful channel about audio that I've found on the internet. I'm enjoying watching all your videos; new sub here, and I'll leave a like on every video. Greetings from Argentina.
Really digging the track in this one. Appreciate everything you do, Dan!
The rich deep vibes of every video are what keep me coming back to the channel.
Hey Dan, I and surely many other people would appreciate an in-depth series about PD, its correct usage, and the interpretation of its curves, demonstrated with actual real-world use cases. In fact there is hardly anything useful to find on YT about it. This would be a great help imo. Any plans? Thank you.
PD? Wassat?
@@joelonsdale Plugin Doctor
The extra harmonics on the analyzer blew my mind. It adds so much!
I actually use broken tube on a square wave with 8 voices. It sounds really nice for a bit of ear candy in any genre honestly
It is great to see my major cross paths with audio engineering. I'm studying mechanical engineering, and this semester I'm in an experimental engineering class where the topic was doing measurements and experiments and reporting them, basically with all the needed terminology. In the error and measurement lesson, the professor told us about linearity, and the exact graph at 7:29 was used to show non-linearity. It was mentioned that in real systems truly linear behaviour is only approximately reached, quoted exactly from the lecture slides.
Dan, you make the best and most understandable mixing-related content on all of YouTube and maybe anywhere. Thank you so much
sadly I know the channel he is talking about. thank you very much for not saying his name. He kinda has a point here and there and means well. But your video makes a lot of sense and it really clarified some stuff for me as well. Have a great day dear sir.
lol, I just watched the video you're responding to this week.. you're a gentleman for not pointing out who he is, but at the same time it's important for the youtuber to acknowledge the erroneous information he posted.. and obviously take down the video, as it's wrong info and will mislead people into thinking non-linearity is an effect simply regarding EQ curves.
Thanks!
Excellent presentation on a very complex topic. The style and content are ideal for my interests and limited attention span. I often listen to instructional videos at 1.25 speed, but Dan's presentations are so fast and concise that I watch at 1.0 to keep up with the rapid flow of info. This vid packs a lot into a short presentation, so I enjoyed watching it twice.
I'm curious about UAD LUNA. It may be that they simulate the analog summing in a nonlinear way, meaning something that can't be achieved with plugins on individual tracks and/or on the master track! In an ordinary DAW the summing is always linear regardless of what saturation plugins are used.
Got to love Dan's directness ;-) excellent video as always!
Your background track is ace, I love the warbles
It may also be useful to some to understand 2 tests for a system's linearity: the superposition and homogeneity principles. Superposition principle is: if you first add (mix) 2 signals together and then process them through the system, you'd get the same result as if you process them individually through 2 copies of that system and then add them together at the end. Homogeneity principle is: if you apply gain to a signal and then process it through the system, you'd get the same result as if you first process the signal through the system and then apply that gain. If a system is linear, both of these principles will apply. Effects like compression and distortion fail both of these tests and are thus nonlinear.
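For anyone who wants to try those two tests themselves, here's a rough NumPy sketch (my own, using tanh as a stand-in for a saturator and a plain gain as the linear reference):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1024)   # two arbitrary test signals
y = rng.uniform(-1.0, 1.0, 1024)
g = 2.0                            # an arbitrary gain factor

saturate = np.tanh                 # a typical saturating nonlinearity
linear_gain = lambda s: 0.7 * s    # a plain gain change (linear)

# Superposition test: process the mix vs. mix the processed signals
print(np.allclose(saturate(x + y), saturate(x) + saturate(y)))            # False
print(np.allclose(linear_gain(x + y), linear_gain(x) + linear_gain(y)))   # True

# Homogeneity test: gain before processing vs. gain after processing
print(np.allclose(saturate(g * x), g * saturate(x)))                      # False
print(np.allclose(linear_gain(g * x), g * linear_gain(x)))                # True
```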
one of the best Channels out here!
Please explain the VST Doctor Plugin in detail.
That would help a lot. To be honest... I don't know what it's showing me.
Sorry for my bad English
FYI this is one of the best audio YouTube channels folks.
WOW, the first time I've heard (and seen) such a good explanation of this. The Hammerstein analysis is a mind-blowing visualization for me!!!! How cool is that!! Now I see very clearly why some saturations are more "fat" and others more "crisp".
An even cooler way would be to visualize that with an added time axis, like a waterfall, showing how a plugin reacts harmonically over time. Since no real-life sound is static, maybe it would show how a plugin reacts to a transient, for example? Different harmonics appear differently depending on the input energy - sometimes with resonance, like from transformer hysteresis. On percussive material like drums, a kind of "sweet spot" is reached when only the transients are being saturated, leaving most of the body of the sound untouched, making gentle saturation a nice (and surprisingly transparent) transient control....
For now, Plugin Doctor is also buggy and lagging on my almost-new processor... It might have to do with the software....
if you look at the EQ during the Hammerstein analysis, it is running a sine sweep, so it looks like the result is compiled over time already
I wasn't gonna say anything but yeah, he's referring to me. I'm happy to admit that my explanation of certain things isn't great at times, which in some parts can be interpreted in certain ways, and sometimes you get stuff wrong.
In my defence, I asked Dan to explain to me where he felt I went wrong prior to this video, but Dan took the route of doing it as video content instead, which is absolutely fair game. It's YouTube at the end of the day.
However, I don't know if Dan's saying everything I said in the video was wrong, as that's actually quite far from the truth: a lot of what I said was echoed in this video.
At the very start I talked about how linear is mathematically precise and how what you put in is what you get out, and I used FabFilter, Waves F6 etc. as examples of that, where a non-linear system like analog isn't. The signal is impacted by being passed through the components of the gear. This can cause noise, harmonics and other non-linear behaviour which is random.
In regard to the curves, I talked about how plugins are essentially snapshots of non-linear behaviour. That's why I talked about the tolerances TMT adds, and how convolution gives you impulse responses of the gear, which will show a static snapshot of non-linear behaviour.
I can see the curve element being picked apart, as it could be seen as calling the EQ curve of one plugin non-linear compared to another plugin which could probably match it if you used tons of different bands. So in a sense, I call the FabFilter EQ linear, but in reality it has enough bands to match another curve I deem non-linear. So logically the terminology is flawed, but it's all about context. I made a video showing what kind of things to look for when comparing analog emulations. I used the term non-linear when describing certain plugins whose frequency responses, just running the signal through the plugin, weren't completely flat and linear. I know what I meant, and maybe I shouldn't have edited so much stuff out of the video, but I knew I had the next video planned for the week after, which was actually showing real analog curves compared to plugin curves.
I then show real analog non-linearities caused by a non-linear system. I went on about how plugins are static representations of this and showed that in the Bertom curve analyser. I talked about the advantages of plugins and how the actual linearity of plugins can be a benefit, as you have a consistent result that isn't random.
I mean, can it not be argued that everything in plugins in theory isn't non-linear, because it's coded to act in a predictable manner? It's not random like analog. A plugin isn't going to sound different from day to day. It's not going to be affected by wear, age, humidity etc.
We can say that FabFilter is non-linear when in dynamic mode, or when a plugin makes THD, but surely there is still an element of linearity due to the fact that no matter how many times you load that preset it'll give the same result. It doesn't have the randomness of non-linear analog behaviour, where its frequency response, attack envelopes, noise, THD etc. can be slightly altered every time you switch it on.
I'm happy to eat humble pie, but I'm also more interested in diving deeper into all this stuff and defending myself where I feel I made a lot of good points. This has come across as though everything I said was wrong, where in reality I know it all wasn't. A lot of what Dan said, I said. On some parts we differed. Maybe certain elements I didn't explain well, and maybe certain bits I just didn't get right, but in the end I think it's all about sharing and learning more.
Don't get me wrong, I hold Dan in the highest regard. I genuinely respect his knowledge and experience, which I know is greater than mine, but just because Dan Worrall said Paul Third was wrong doesn't mean you should view that as gospel and that everything I've ever said or done is now suspect.
I'll continue to learn and I'm happy for others to learn from any mistakes I make but in the end context is very important.
Many can argue exactly what the term non linearity actually means as it has different contexts and audio is a massive melting pot of information.
I'll probably get trolled a bit for standing up for myself but if it means I and many others learn from the experience then so be it.
I'm glad he made the video, that way we all have a chance to learn something. Back in school I was taught the difference between linear and non-linear distortion, the former being frequency dependent and "correctable" by means of EQ, the latter being an "irreparable" change in dynamics, which naturally includes harmonic distortion and other artifacts, such as aliasing in converters. I personally would love it if more of your videos focused on how you use your favourite processors and why. I also think it would be interesting if you decided to learn DSP yourself and documented your journey. Personally, learning a little DSP myself and working with the amazing Sonible team has helped me get so much more out of my go-to effects. Cheers :)
@@theCloneman5 I just don't have the time haha I'm genuinely just picking up scraps as I go along from other engineers and experimenting whenever I have time. Between my day job and youtube I'm doing like 60-70 hours a week so to find time to learn about that and get experience somehow I'd need to probably give up my job 😂
It's something I plan on doing down the road for better understanding, but I genuinely just don't have the time. I'm actually contemplating going down to just one video a week and ditching the autism channel altogether, cause I just don't have enough time.
I'd use the Bernard's Watch reference but I very much doubt you got that in Germany haha
Technical terms aside, let’s not lose sight here. You are both showing that some companies make marketing claims that are simply not true.
Saying „this totally behaves like the analogue gear" when in truth it's just 100% digital is the real problem here.
Yes using technical terms right is nice, but it’s also not the end of the world. It’s important that people understand the meaning and that was perfectly clear.
Paul your video still rocks! And I agree with you. A lot of it was and still is right. It’s a matter of context and terminology I guess. Maybe experience. Keep the good work!
@@FlorianRachor1 Yeah, I do think there has been a clash of terminologies here. I maybe take the description of non-linear too literally (which, being autistic, isn't a surprise 😂), as in my head I deem non-linear behaviour to be the changes to the signal path caused by the components of the gear, as it has no relation to the input. Just running through the circuitry alters the output in a non-mathematical fashion, as it creates random behaviour that can change the output almost every time you use it. By nature it's inconsistent and can change more over time, and I see that as non-linearity. Perhaps my terminology is wrong, but I speak to some engineers who agree with me, and some who agree with Dan and see non-linear only as distortion. But you're right, I thought me and Dan were on the same page in terms of trying to show plugins under the hood and debunk some marketing BS, but unfortunately he took the approach of labelling me with the tag of all the other youtubers, which I feel is very harsh, as I always have the best intentions in everything I do. I don't claim to be anything I'm not. I don't try and sell mixing courses. I just experiment with stuff, show my results and then continue to learn from the experience. I share every bit of learning I have and use others to adapt that and work harder to learn more.
Personally I feel we could have worked together on this; it would have been a good way for Dan to share his experience and perspective with someone who is willing to learn more. I could have explained where my perspective came from, and it could have been a good experience and also good content for all you guys, showing that we do all this for your benefit. Whereas what's happened is I've been left trying to defend myself against one of the most respected audio youtubers.
It could have been done differently, but I just have to dust myself off and see what learning I can take from the experience. I think I can say for sure the idea I had in my little autistic daydreams, that me and Dan would join forces one day, has probably been quashed now.. Ach well.. Life's a bitch haha
So is this in reference to Paul Third's videos and how he measured the frequency response of plugins?
This is the best video that I've ever seen in my entire life on any subject. Thank you!
Bomber tune Dan!
I haven't produced anything in my life but I could watch these videos all day. You can just tell from the delivery, the detailed and logical explanations and demos that this dude knows his shit at a special level. Not to mention the video audio sounds better-mixed than most of the music I listen to.
Informative as always. Was really digging that music too
Thank you. Checking in as a hobby experimental bleep bloop sound maker, and I absolutely looooove too much distortion on everything. Epiano tones through a pocket metal pedal kind of ridiculousness.
I appreciate learning more about what's going on there. Cheers!
Someone once said "talking about nonlinear systems is like talking about non-elephant zoology", which I think points out an interesting fact about nonlinearity: Almost everything is non-linear - something being linear is the very rare (but important) exception. Although in audio we often assume nonlinearity to mean 'adding harmonics' or 'saturating', this is not generally true: "Non-linear" just means what it says: something is "not linear" - but whatever else it might be, we in fact don't have a clue! A system where I put a Mozart sonata in and get a Beatles song out is also a nonlinear system, for example.
Well put. In addition, non-linear systems only affect the harmonics of periodic signals (sine waves, triangle waves, sounds of wind instruments, guitars to a degree, etc.) and only when the systems are time-invariant. With more complex signals all bets are off and talking about harmonics is kind of pointless.
What to expect of a non-linear effect is best illustrated by this example: Applying compression to individual tracks first and then mixing them gives you different results from applying compression to their mix afterwards. It will just sound different. You can't say anything more specific about general non-linearity than that.
By the way, a system that transforms a Mozart sonata to a Beatles song can be perfectly linear-- in theory. You could calculate the deconvolution kernel of the sonata and convolve the Beatles song with it to get a filter. This filter would be perfectly linear and would turn the Mozart sonata into the Beatles song. It would be infinite in time, though, and the least amount of noise on the input would totally break it, so it's not really possible in reality. A system that simply identifies the sonata and outputs whatever in return is much more plausible, and, of course, not linear.
It is a really good video. I learnt a lot of worthwhile things; I was stuck on this linear vs nonlinear question, and now I completely get it. Thank you very much, sir.
This was a great explanation. Also the track used for this is super cool - I really enjoyed the synth bass sound having some nice saturation! Would love to hear the track by itself or even have a breakdown of it. Love your videos!
The simplest definition of nonlinear would be
f(a+b) not equal f(a)+f(b),
where a and b are 2 signals and f is the processing applied (e.g. distortion). This means that applying the processing to the sum of signals (e.g. a bus group), or independently to each signal and then summing them afterwards, will lead to different results. That will of course be the case with distortion, but also, and I wish you had mentioned it, compression! That would have been a better way to explain nonlinearities, especially since I think most people will understand that applying compression to individual channels clearly leads to different results than applying compression to your final mix.
Fender's TREMOLO ARM *does* produce some Amplitude Modulation, too. The string height above the pickups increases as the arm goes down...
Mostly though it's people who know better, choosing not to care. Partly this is trolling of non-guitarists, similar to how astronomers call every element past helium a "metal".
The toughest thing about learning sound engineering today is getting bombarded by 20 different ways to do a thing and everyone screaming at you that they're right and everyone else has no idea what they're talking about. Oh, that and finding a job in anything other than bland disposable electronic music
Crazy right
Bands are coming back tho
Ppl care more than I’ve seen in a while
I just need to get this off my chest - all these “youtube producers” (or instagram, or tiktok) cater to beginners trying to do EDM because.. well, it’s a popular genre now and because it doesn’t actually require any instrument skills to make. That’s great, just imagine what would have happened if Mozart’s genius were to be hindered by not owning a piano. But this also means there’s a huge wave of kids who think music production is as easy as following templates. And what you get is this insane amount of “educators” profiting off the immaturity and lack of genuine experience beginners have - “this sample pack will make your productions sound professional” should sound too familiar to you all.
Thanks for coming to my TED talk lol
Your trippy glitch music is always a real draw imo. Not to ignore the fantastic information and dry humour of course.
I love the way you produce your videos. The music behind this one, as with many others of yours, really kept the momentum going and held interest while you were explaining the topic. Also, I appreciated the video distortion on the closing title. Nice touch :)
There was something seriously wrong with your voice processing in this video. There were a few very noticeable parts, but it was spread almost everywhere. Some kind of distortion or glitching. It could have been a planned effect considering the topics of the video, but it didn't really sound intentional. One example starts at 7:45.
1. Your videos are the best
2. I tried and failed to find documentation or videos explaining the Hammerstein tab on plugin doctor. Thank you! This is a godsend, as I've been doing lots of comparisons of different saturation plugins, and this tab is much more informative than the harmonics tab!
3. You say that using the "linear analysis" tab on a non-linear plugin "breaks the tool". So when I load a saturation plugin, most often (but not always!) a naive read of that tab would seem to indicate that the saturation plugin EQs the sound (often quite a lot, according to a naive read). Do you mean that this EQing is not actually happening -- but that the added harmonics will "confuse" the plugin doctor's "linear analysis" into displaying something that is misleading? I'd love to understand that more...
👋Thank you Dan your videos bring so much clarity on every subject you cover.
Keep up the great work 🤘😌🎶
Dan, an electro-acoustic phenomenon that has always eluded me is what makes mono signals sound more "3D." Tube microphones, for example, can sound like they have more "depth" than their FET peers. Lots of gear forums discuss this effect as "mojo" or "analog magic." But clearly there is a measurable and replicable audio phenomenon happening. Can you explain?
i know exactly what you mean. there is a front to back perception that doesn't change in mono. Always fascinated me too.
@@Bthelick What is it!?
@@charlieharding8134 It's not all tube distortion, that's for sure; those cheap TLA audio preamps do the opposite!
I once got the opportunity to take impulse responses of a Focusrite preamp that was giving me the '3D'. But the convolution didn't have the 3D in it, even though the FFT was identical. So it must be dynamic in nature.
I used to think it was a trait of high-end gear, but by the same token people recording through really high-end gear have great rooms too. So I wonder if it's just certain psychoacoustically sympathetic distortion bringing those good, balanced ERs up in the recording; they wouldn't be distinctly identifiable because good ERs would be in the Haas range. And we know from Bob Katz that anything in the Haas range biases our brains towards ambience.
The cool thing about this is that I already forgot 90% of it, now I have to watch it again.
Gold. Pure gold. And thx for bringing up the Fender trem/vibrato point. 🎊
Brilliant commentary on this topic, I quite enjoyed it!
Absolutely correct on the Internet being filled with audio advice; it's a severe case of TMI.
As a guitarist there are blurred lines when you try to transfer the jargon over to the context of mixing.
The tremolo/vibrato thing is completely skewed, so much so that the piece of hardware that many guitarists use as a bridge is still commonly mislabelled as a tremolo.
It seems like guitarists are still misinformed from the days when a guitar amp didn't even have an independent gain control. The preamp and power amp were tied together, and the volume knob on those amps would simultaneously increase the gain in both sections, so you would have to make the amp really loud if you wanted to introduce overdrive into your sound.
This only really changed once amps like Marshall's JCM800 separated them: a preamp gain control was introduced which allowed you to dial in all the overdrive you wanted, whilst the overall volume could be controlled separately via the power amp stage.
Thanks for another great video Dan, they’re always super informative!
I think you've made some assumptions about early amp designs that aren't quite correct. Early amps (such as, say, tweed Fenders like the Champ and Princeton) do have independent gain for the preamp and power amp; it just happens that the gain is fixed for some portion of the circuit. Which portion depends on the circuit.
If you look at the Fender 5C1 Champ circuit (below), you'll see the pot sits between the 6SJ7 pentode preamp tube and the 6V6 power amp tube (it's the 1 Meg variable resistor connected directly to the grid of the 6V6), whereas the preamp has a fixed grid resistor setting the input gain.
With the Fender 5F1 Champ (also below), it introduced the classic configuration of using a dual triode (the 12AX7) for the preamp, with one of the triodes acting as an input buffer and the second controlling the gain. In this configuration, the 6V6GT power amp tube has a fixed gain, and the volume pot (a 1 Meg variable resistor this time) is installed on the grid of the second half of the preamp tube, controlling the preamp gain.
Earlier 5C1 circuit diagram: i2.wp.com/myfenderchamp.files.wordpress.com/2010/03/champ_5c1.png
The iconic 5F1: i.pinimg.com/originals/a3/a7/44/a3a744c0165a368c2b3d4f2456df06e3.gif
I chose the champ circuits because they are the simplest circuits, and have exactly one pot which simplifies examining the diagrams.
I'm really glad I discovered this channel. Thank you, from a recording school student.
Great composition, Dan - this is great.
True about information today. This is one of the few YouTube channels I'm watching.
Virgil Donati (one of the 7 drummers who auditioned for Dream Theater) said in an interview that the hardest part of getting to where he is today as a drummer was weeding out the infinite bad info floating around.
In analog opamps gain is very simply defined: it's equal to output over input (or the change in output over the change in input, if you like).
9:40 Tube is symmetric
Asymmetric, hence the even harmonics. Symmetrical distortion only generates odd harmonics.
@@DanWorrall Well, Spectre's manual says otherwise.
@@APaclin I had to correct Spectre's developer on the same point! Don't take my word for it, have a little Google.
Bravo Dan. Excellent video as always
Great video as usual, Dan. On a side note, what do you think about that Lindell strip? Yae or nay?
Only just got it, but so far I like everything except the "dual concentric" knobs.
@@DanWorrall I hear you
Hi Dan, thank you for these videos! You manage to explain complex topics in a way that’s easy to understand for people who didn’t receive a full audio engineer education. Greatly enjoyed watching these. :)
Can I add one request? For one of your next videos, I’d love your voice analyser to be The Snail by Ircam Labs. :D
Thanks so much, was not able to figure out how on Earth to use that Hammerstein analysis until now, very handy!
Yes, I had the same feeling about matters not being represented correctly, probably at the same recent video 😉. Thanks for explaining and clearing it up!
Can't wait for Dan Worrall plugins...Instant buy for sure.
The soul of the music is made of non-linear concepts.
From gear to rhythm, non-linear is the cherry on top.
5:36 does digital distortion really create >22 kHz harmonics? I had no idea, well, not that I'd know in the first place
Analog gear creates it as well, but there it gets filtered out in the converters. Digital distortion doesn't filter it out (unless you use oversampling), and it creates aliasing artifacts.
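A small back-of-the-envelope sketch of where such a harmonic folds back (my own numbers, purely for illustration):

```python
fs = 44100.0            # sample rate
nyquist = fs / 2.0
fundamental = 5000.0    # a 5 kHz tone being distorted

# The 5th harmonic would sit at 25 kHz, above Nyquist. Without oversampling,
# a digital waveshaper can't represent it, so it folds back (aliases) into band.
harmonic = 5 * fundamental
alias = fs - harmonic if nyquist < harmonic < fs else harmonic
print(alias)  # 19100.0 Hz: an inharmonic artifact unrelated to the 5 kHz harmonic series
```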
More on how to use Plugin Doctor on Analog and digital gear, please.
Wonderful primer! I often tire of people citing "non-linearities" as some sort of holy grail. Thanks Dan!
I took a look at that Lindell 80 series plugin in my own Plugin Doctor recently and found something odd that isn't appearing in your video...in mine, the plugin inserts an antialiasing filter at 20kHz, regardless of the oversampling settings or project sample rate. I'm very surprised to see that it's not doing this on yours.
I thoroughly enjoy your videos. Thank you sir!
Loved it, Funny and educational at the same time.. brilliant..
Excellent explanation, in Dan‘s own inimitable style.
Best audio channel on YouTube.
It's always a good day when Dan uploads a video!
The YouTube university of Dan Worrall has been better for me than my year of music tech college... Now I have a job in the industry and found learning the hard way to be much more effective than any classroom 😂
Thanks Dan! I've seen so many wrong uses of "non-linear" as a description I was starting to doubt my own understanding... the internet has become an out of control mega game of "broken telephone"! I think you're on to something with the myth busting... we NEED a voice of authority to explain stuff properly and comprehensively.
Love your work as always 🙏 Would love to see that Saturn clean tube algo on the Hammerstein.
A lot of people new to math get confused about the meaning of linear.
Linearity is two properties, *additivity* and *homogeneity* .
Suppose I take a signal x and feed it into a filter f, giving me the output f(x). Suppose I have another signal y, and likewise I can feed that into the filter with the output f(y). Suppose further that I can scale either signal's amplitude by A and B respectively.
The filter is linear if and only if
f(Ax + By) = Af(x) + Bf(y)
Where
f(Ax) = Af(x)
Is the property called *homogeneity*
and
f(x+y) = f(x) + f(y)
is the property called *additivity*
To put this all in terms of a DAW: suppose you have two tracks with samples playing, 1 and 2. You put the same plugin with the same settings on each track, then increase the gain of each track by 3 dB. That's the right side of the equation. This will sound identical to sending both tracks to a single send channel with the same plugin and an input gain of +3 dB (the left side of the equation) *if and only if* the plugin is linear.
Wonderful explanations and graphics, thanks.
i love the quality of your videos
About gain vs volume... it's worth noticing that gain is on a dB scale, but volume is sometimes a percentage, where 0 brings silence, whereas with a gain of 0 you can still have a signal hot enough to pass.
That's not really relevant. In fact gain expressed as a decibel value will have to be converted to a linear amplitude factor for the actual gain change. Eg +6dB gain means multiplying your voltages or sample values by 2 (approx). -6dB means multiplying by 0.5.
@@DanWorrall that was more an argument on the 'Gain = Volume' statement than on the non linearity aspect of things, as Volume can sometimes refer to a multiplier where 0 would lead to no sound and 1 (100%) to unaltered sound. Am I confused by the terminology here ?
@@XRaym in that case 50% means -6dB of gain. It's still gain, whether you define as a decibel value or an amplitude factor.
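In code terms, a trivial sketch of the conversion being described (my own illustration):

```python
import math

def db_to_factor(db):
    # Decibel gain to linear amplitude factor
    return 10.0 ** (db / 20.0)

def factor_to_db(factor):
    # Linear amplitude factor to decibel gain
    return 20.0 * math.log10(factor)

print(db_to_factor(6.0))   # ~1.995: +6 dB roughly doubles the amplitude
print(factor_to_db(0.5))   # ~-6.02: a 50% "volume" multiplier is about -6 dB of gain
```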
Wow Dan, I gotta admit I never really used the Hammerstein setting because it just confused me. Til now.. Thanks a lot as always.