Watching youtube videos NOT on your phone is such a good way to go. Anyway, I do know of people who can hear transients being off of the beat. It's something that I haven't acquired the skill to listen for yet, but I do know of people who are able to hear some 1-2 millisecond issues and know that elements aren't matching up the way they should be. Perhaps that skill transfers to being able to spot weirdness more reliably with phasing issues?
Jo, another thing to elaborate on: the reason you can hear a difference in sound when just changing the phase in one channel comes down to Haas. The Haas effect is often seen by producers as a trick for creating an interesting stereo field by delaying one of two channels, hence creating a difference, but that is not actually what the Haas effect means. The Haas effect describes that you will perceive a sound to come from the direction you hear it from FIRST, even if you hear it from somewhere else way louder. Meaning that even a slight change in phase in one channel will change where you feel the sound is coming from and its directionality, with all else being equal.

For how this makes sense, imagine you're in a cave. You might hear a sound coming from somewhere amplified through a cave system very loudly, or a landscape where you hear a sound loudly reflected from a cliff. However, regardless of how quiet it is, if you hear it from someplace else first, you know that is the shortest path between you and the thing you're trying to locate. Phase difference, or more accurately the time difference between two channels, is very significant to how we perceive directionality in sound. This is actively used by the "agent" system in our brain to determine what a sound means to us and where it is coming from.

Two other interesting things related to this: when figuring out where a sound is coming from, people will usually turn their head, as the change in phase observed during this rotation (comparing the initial with the latter) allows the brain to pinpoint the direction something is coming from with insane accuracy. The thing it checks for is phase differences between both ears, not loudness. The other thing to consider is that this phase change perception is also related to the wavelength of the frequency, as turning your head will have a different effect on the phase of each individual frequency. This also explains why we hear differences in stereo channels with different intensities across the frequency spectrum: little to none in the bass (as the wavelengths are so large that the phase change is very minimal between both ears), most in the low mids, with a gradual falloff to the top.
In response to your call to action: I can totally hear a difference with the filter engaged. When engaged there's a difference in how certain frequency ranges present themselves in relation to one another in the time domain, albeit quite subtle. One way I sometimes think about it is imagining the signal as a 2D vertical line, as if it were coming from a single mono speaker where top to bottom represents 20k to 20 Hz. As phase shifts are added, the signal feels less like one full sound arriving at once, and more like individual pieces arriving slightly offset from one another. Almost like a train coming towards you vs a bunch of cars on a highway. I've heard some describe it as "whooshing", but that's more in relation to describing HPF phase shifts around their cutoff point. In reality, I've probably done a horrid job explaining this and just seem batshit crazy at this point. I just know most days being able to hear slight offsets like this is a curse above all, as it drives my OCD up the wall and will derail a mixing session if I'm the only one in the room.
I can't say I can hear a single rotation of phase, but I do recall once I started adding more, I'd hear almost a wetness (like almost watery or liquid) or thickness in the sound, which would eventually turn into what sounded like a laser-beam effect with the highest-order filters.
That sounds like you are hearing the frequency dispersion as the group delay difference from high to low frequencies becomes audible. Makes sense: this is exactly what Kilohearts' Disperser plugin is known to do, and there are demos of it turning kick transients into laser zaps.
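A minimal NumPy/SciPy sketch of that idea (this is not Kilohearts' actual algorithm; the stage count, frequencies and Q are made up for illustration): stack a pile of second-order allpass biquads on a click. The magnitude spectrum stays flat, but the click gets smeared into a short sweep because each frequency now arrives at a different time.

```python
import numpy as np
from scipy import signal

fs = 48000

def allpass_biquad(f0, q):
    """Second-order allpass (RBJ cookbook): flat magnitude, frequency-dependent delay."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 - alpha, -2 * np.cos(w0), 1 + alpha])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b / a[0], a / a[0]

# A click (transient), then a big stack of allpasses spread across the spectrum.
x = np.zeros(fs // 2)
x[100] = 1.0
y = x.copy()
for f0 in np.geomspace(80, 8000, 48):      # 48 allpass stages, 80 Hz .. 8 kHz
    b, a = allpass_biquad(f0, q=0.7)
    y = signal.lfilter(b, a, y)

# No frequency content is added or removed (energy is essentially unchanged);
# the click is just redistributed in time, i.e. dispersed into a zap.
print("input energy :", np.sum(x ** 2))
print("output energy:", round(float(np.sum(y ** 2)), 4))
last = np.nonzero(np.abs(y) > 1e-3)[0][-1]
print("output now spans roughly", int(last) - 100, "samples")
```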
Thanks to Dan's videos, I came to love messing with phase. It's just another tone control for me. I routinely tune crossovers to a freq/note that's not in key, or dial out a resonance. Sometimes it works, sometimes it doesn't, but it's a good bit of fun to listen to the slight freq shifts and how they affect the sound. You can also roll your own stereo spreader if you know what you're doing.
We can all definitely hear relative phase differences, but only between the ears. This is an important part of locating sounds in the horizontal plane. IIRC we are most sensitive to that in the 200-800 Hz range. Above that it's loudness differences; below that, the sense of location gets lost the lower a sound gets: the wavelengths become so large that the phase difference between our ears is too small.
5:19 I can hear a difference, one that's quite obvious to my ears when switching on and off. To my ears, it seems that when the plugin isn't bypassed the low end of your voice becomes more apparent. The whole sound is maybe a bit more muffled and I'd say the low end of your voice sounds less "tight" or "controlled". If I were to hear them in isolation, however, I don't think I would be able to tell a difference. I'm a VO artist, so my projects usually contain only a single track, so CPU usage isn't much of an issue. If I run a pretty hefty HPF or an instance of Pro-MB, I usually prefer the sound of linear phase slightly. If my HPF is especially high, I like running it in linear phase so it leaves me more headroom in any dynamics processors or limiters. Luckily Pro-L 2 can account for this, but not all the plugins in my chain can.
Once again a clear and informative tutorial, thanks for all these videos Dan. As for the minimum phase toggling test @4:54, much to my surprise, I can hear a difference. To me with x-over switched in (bypass light off), your voice has ever so slightly more edge, a focused presence if you will. X-over off your voice loses some of that focus and gets wider, more relaxed in tonal balance. To me this gives an image of it extending more evenly to lower frequencies. Listened on laptop with cheapo earbuds, will be interesting to try this later in my regular studio setup. Now isn't that great, finding just another thing to worry about. 😜
That static phase shift introduced in Pro-MB minimum phase mode is definitely audible on certain sources - I've actually intentionally used it on kick drums, it gives a weird kind of electro-thump depending on where you set the crossover
4:54 So I definitely had the thought, "can't wait to see everyone in the comments pretending to hear a difference" I heard no difference on near field monitors. Then I put headphones on and was pretty shocked. I heard/felt a difference, but am not sure I have the vocabulary for exactly what it is. It is as if the sound was moving behind me, and without a change in frequency response. Not entirely convincingly behind me, but something like it. Honestly wondering if anything related to this is implemented in things like KLANG.
Think yourself into a spherical image. 360 degrees each way, by the size of your volume. Volume is actually the size of those. That's how I picture sound. iZotope Insight has those ways of showing actual sound (analyzer). Some people imagine sound as FabFilter's analyzer. From 0 to infinite (Nyquist???). Keep up the good work guys!
I can hear fixed phase shift but only when fairly extreme, when people have used sharp low cut filters on bass sounds that have repeating sharp transients (sawtooth, square, etc.) It changes the character of the sound due to post-ringing artifacts and introduces a resonant character at the crossover point. Try the plugin "Disperser" by Kilohearts. I believe it uses a time delay rather than phase shift, so not quite the same (no post-ringing, I presume), but it sounds like a more extreme version of a sharp low cut on a sharp waveform. A sharp transient (like a Dirac spike) becomes a downwards sweep in a similar way to phase shift on a low-cut filter.
@@cornoc No. Phase shift relates to frequency. For instance, 45° of phase shift means a sinewave of a particular frequency shifting through its 360° cycle by 45°. A time delay is measured in seconds (or milliseconds), so would shift different frequencies by different amounts in phase.
@@PrincipalAudio i understand that the units are different but what i meant was, isn't e.g. a 180 degree phase shift of a 20 Hz sine wave identical to a time delay of 0.025 seconds? if you're doing it in a frequency-dependent way, how do the two differ in what effect they have on a signal? from how it sounds to my ears, i assumed Disperser applied a different time delay to different frequencies so that they all get smeared out and arrive at different times. is that different than a different phase shift for each frequency?
@@cornoc Oh, my apologies. I'd been responding to someone else's comment on a different video earlier where they were also talking about phase shift but from a different perspective. I got totally confused! Doh! Yeah, you're right, Disperser does do a time delay at different frequencies and it gives the same effect as with phase shift on an EQ with a very sharp HPF slope. I 'presume' they designed the plugin to emulate this kind of phase shift "smearing" effect. It sounds the same to me - aside from the fact it does no filtering and doesn't have a resonant ringing around any cutoff frequency. But in response to your initial comment: >>"aren't time delays and phase shifts identical" The end result may be the same, that some frequencies are shifted more than others in time, but technically they're two different beasts. I think of phase shift as being a _"time delay relative to frequency, measured in degrees",_ and a time delay as _"a time delay",_ because it isn't frequency dependent. The term phase shift really comes from Electronics Engineering where AC theory comes into play. Capacitive and inductive reactance, and how much shifting occurs at which frequencies of interest.
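A quick way to see the distinction: a pure delay has a phase that falls linearly with frequency, so its group delay is the same number of samples at every frequency, whereas an allpass bends the phase so different frequencies are delayed by different amounts. A small SciPy sketch (the delay length and allpass coefficient are arbitrary choices):

```python
import numpy as np
from scipy import signal

fs = 48000

# Pure delay of D samples: flat magnitude, linear phase, constant group delay.
D = 48                                  # 1 ms at 48 kHz
b_delay = np.zeros(D + 1)
b_delay[-1] = 1.0
w, gd_delay = signal.group_delay((b_delay, [1.0]), w=2048, fs=fs)

# First-order allpass: also flat magnitude, but the phase is curved, so the
# delay (in samples) is different at different frequencies.
c = 0.5
w, gd_ap = signal.group_delay(([c, 1.0], [1.0, c]), w=2048, fs=fs)

print(f"pure delay: group delay from {gd_delay.min():.2f} to {gd_delay.max():.2f} samples")
print(f"allpass   : group delay from {gd_ap.min():.2f} to {gd_ap.max():.2f} samples")
```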
I can hear the impact of IIR crossovers on bass (it sounds clouded) and have switched exclusively to perfect-reconstruction FIR filters when making low-end crossovers. But lots of people find the phase shift from IIR inaudible, including sophisticated plugin developers and textbook authors. Thankfully, PluginDoctor recently added square & saw waves to its oscilloscope view, which are helpful for understanding a plugin's filter topology (phase shift isn't kind to sharp corners in a waveform).
Hi Dan, Cool Video! This time I would like to give you full 5 of 5 Stars. Thanks a lot for pointing some issues out concerning my questions. Blessings, Tim! :-)
@8:12 yeah I honestly don't believe anyone can hear 'perfect phase'. However, we definitely have the ability to detect phase differences across our two ears, because it's actually a very important part of how our brains work out where sound sources are located in 3D space. Our ears are physically spaced apart, so if a sound reaches one ear before the other, then the sound will have a time delay and therefore a phase difference, because it has to travel a distance around your head. Our brain decodes phase inconsistencies between left and right ears as stereo image. That's why the stereo image leans towards the bypassed channel in the binaural demo: your brain is getting tricked into thinking the sound is reaching one ear first and therefore must be coming from that direction.
It could be completely placebo, but when bypassed your voice sounded much more natural; that's the only way I can describe it. For reference, I'm watching on my YouTube TV in my living room through a Bose sound system.
Didn't listen blindly but I think I heard the static phase shift already in the mono signal. It's subtle though. I've done some (unscientific) tests before though and static phase shift audibility seems to also depend on the source material. I remember something acoustic, pureish, continuous and spectrally rich like a good cello sound being very easily audible. I've always been baffled by the whole phase is inaudible thing.
To me, that phase shift sounded like a bump in perception of mids. Maybe kind of like a wide bell +2dB at say 2kish. Maybe a bell around 300 -2dB. Maybe its placebo. But, I'm eating dinner at my desk in my studio and I shot my head up from my plate twice when I thought I heard the same change in your voice. Both times you'd brought PRO-MB back into the mix. So, there you go.
5:11 I did a blind test with both headphones and monitors and to me, it seems to be audible. There is a slight tonal difference, which sounds almost a bit like EQ, except much more subtle and with a very intricate response. A bit like how different but similar microphones will sound different on the same source. This could of course also be explained by you moving slightly in front of the microphone, and it certainly isn't something you should really worry about.

In the following tests, the difference is huge to me. It's not merely a shift in frequency response. I hear the sound in stereo with very audible wrecked phase. It sounds almost painfully broken. It is quite confusing, actually. I can hear this both with headphones and on monitors, with surprisingly little difference. This might suggest to me that I can, in fact, hear absolute phase. And I cannot overstate how irritating the feeling you get from those tests is. I can also clearly hear "which way" the phase is broken. To me it sounds like a terribly out of phase stereo recording that has also been slightly panned to one side, except it's not quite the same. It sounds like it's panned to one side, yet I can also clearly hear that it is not. It is quite the bizarre sensation, really. This is also something I often hear on youtube, especially with stereo camera mics etc. It can be so annoying and irritating I stop watching. (and usually is when it occurs)

I'm sorry for rambling, I thought I'd just write my experience down while it's still "fresh". That being said, I did know I could hear phase before (since I tried out the ableton utility effect for the first time ;D) Being quite honest I always thought this was something everyone can hear, but I guess it might not be something everyone experiences the same way (is anything, actually? xD) I'd love to hear some other people's thoughts, it's always interesting to understand other experiences! I hope this was interesting or helpful to anyone, Cheers!
Hi Dan! Would you consider doing a tutorial on Goniometers? I understand basically what they do and what they show (thanks to your earlier videos among other resources) but there's still so much about them that I don't get. Why is it that white noise creates a mist across the entire field? Why do stereo Bass synths create these weird and cool geometrical patterns? Is there something to be learned from the shape of sounds in the goniometer when the shape is geometrical versus not, or should they just be enjoyed as cool visuals, and the real info is how much correlation there is and what will be lost in mono, etc. If that made sense :)
I have been able to hear phase shift in mono, but only in extreme circumstances. If you run a pure saw wave through a bunch (more than 7) of allpass filters in random positions, it sounds softer, almost like a chorus effect, but without the modulation. An alternative is to randomize the phases of the saw wave harmonics if you have a synthesizer that can do that (sytrus from FL, and xfer's serum I believe), and the effect is even more pronounced.
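The additive-synthesis version of this is easy to try in NumPy: build the same sawtooth twice, once with aligned harmonic phases and once with randomized phases. The magnitude spectra come out identical; only the waveform shape (and the perceived "bite") changes. A rough sketch (fundamental, length and random seed are arbitrary):

```python
import numpy as np

fs = 48000
f0 = 110.0
t = np.arange(fs) / fs                    # one second
n_harm = int((fs / 2) // f0)              # stay below Nyquist
rng = np.random.default_rng(0)

def additive_saw(phases):
    # Fourier series of a sawtooth: harmonic k with amplitude 1/k.
    out = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        out += np.sin(2 * np.pi * k * f0 * t + phases[k - 1]) / k
    return out

saw_aligned = additive_saw(np.zeros(n_harm))                      # textbook saw
saw_random = additive_saw(rng.uniform(0, 2 * np.pi, n_harm))      # same spectrum, scrambled phase

# The magnitude spectra differ only by floating-point noise; the waveforms
# (and the transient character) differ a lot; listen to them if you can.
mag_a = np.abs(np.fft.rfft(saw_aligned))
mag_r = np.abs(np.fft.rfft(saw_random))
print("max magnitude-spectrum difference:", np.max(np.abs(mag_a - mag_r)))
print("max waveform difference          :", np.max(np.abs(saw_aligned - saw_random)))
```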
When the Minimum Phase eq is engaged your voice becomes lighter and has a hint of distortion. It's a bit like running a signal really quietly through a distortion pedal; even when no distortion is happening, you still get an effect. But, in this case, only a tiny amount. Cheers!
5:21 When you switch off the Pro-MB I hear more lows, and more natural sounding lows. Switched on, these lows get pushed more to the back, and to me it is harsher in the upper frequencies, especially on words that contain more vowels.
Yep can definitely tell the change between bypass and not bypass even on iPhone speakers (iPhone 13pro stereo speakers)- not sure I could tell the difference long term, maybe it’s the point the change occurs that I can hear - would need a longer sequence for a/b. Very interesting!!
of course we can hear the difference in left and right; it's kind of like applying the haas effect only on a small part of the frequency spectrum. if you don't have the two signals (left and right) to go into your head simultaneously, why would it otherwise be audible if you delay some frequencies by a tiny bit?
Awesome video as always, the tune used for the binaural test was fucking awesome. Difference was clear as day, sounded like lopsided phasing issues to me lol. You really need to do a full length glitch hop/bass music album. Do you by any chance know of the artist Tipper? Very reminiscent of his early work. I always hear tippery influences in your music and was wondering if thats just me imprinting my taste onto your music. If you are familiar with Tipper I am curious about what you think of how he masters his music. His more aggressive stuff sits around -5 lufs and dynamic stuff around -13 lufs.
This video has got me wondering about the creative applications of deliberate and very audible pre-ringing. Seems like it might be an interesting diffusion effect for something with sharp transients like a clean guitar part.
@@TransistorLSD Interesting thought! I know they commonly use Schroeder allpasses for similar reasons. The sound of a Schroeder allpass is very different from the smooth sound of pre-ringing, but it has a similar effect of spreading the energy out over time.
@@isetpeoplaflame5936 Cool, I'll look into that, thanks! I've managed to get some interesting effects by completely misusing ReaFIR, the FFT plugin in Reaper.
I did actually hear a pretty clear difference listening on headphones. I assumed it was a placebo effect, but listening over it again I'm confident that there's some difference there. Your voice sounded fuller with the plugin bypassed, and when the plugin was active, it felt like there was a subtle kind of buzz going on.
You can still achieve a perfectly reconstructing crossover with an IIR by taking the difference of the lowpass (or highpass, works both ways): high = input minus low. It's only when you use two separate filters that the sum picks up a frequency response of its own… then there's Linkwitz-Riley filters, which are essentially two filters stacked up, like what Dan did in the last video; their bands sum back to a flat magnitude, but with a 360 degree phase rotation, so the recombined result is an allpass rather than truly linear phase, and Linkwitz-Riley crossovers get messy when you have too many of them close together and the phase shifts interact. Filters really are quite wonderful: all this stuff is just delays, adds/subtracts and multiplies, and arranged in different ways they give us so many interesting results! What filters/processes haven't been discovered yet?
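To see the Linkwitz-Riley point numerically, here's a hedged SciPy sketch (crossover frequency and resolution are arbitrary): an LR4 band pair, built as squared 2nd-order Butterworth responses, sums back to a flat magnitude, but the phase of the sum is that of an allpass rather than zero or linear phase.

```python
import numpy as np
from scipy import signal

fs = 48000
fc = 500.0

# LR4 = two cascaded 2nd-order Butterworths per band, i.e. the squared response.
b_lp, a_lp = signal.butter(2, fc, btype='low', fs=fs)
b_hp, a_hp = signal.butter(2, fc, btype='high', fs=fs)

w, h_lp = signal.freqz(b_lp, a_lp, worN=4096, fs=fs)
_, h_hp = signal.freqz(b_hp, a_hp, worN=4096, fs=fs)
h_sum = h_lp ** 2 + h_hp ** 2             # LR4 low band + LR4 high band

# Magnitude of the sum is flat to within numerical error...
flat_error_db = np.max(np.abs(20 * np.log10(np.abs(h_sum))))
print("max deviation from flat (dB):", flat_error_db)

# ...but the phase rotates a full 360 degrees from DC to Nyquist,
# passing roughly +/-180 degrees at the crossover: an allpass, not linear phase.
k = np.argmin(np.abs(w - fc))
print("phase of the sum at the crossover (deg):", np.degrees(np.angle(h_sum[k])))
```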
I can't hear any difference with the phase shift on your voice (headphones connected to my laptop here), but I can hear it, sometimes, in my studio with some tracks. It was once quite obvious when I tried it while mastering a Toy Piano recording, but I didn't do any blind test, so I could be wrong here. My monitoring setup has some phase correction filters in my Reaper monitoring section, so I can enable and disable the phase correction quite easily. KH310 speakers. Edit: OK, you asked about what differences we hear: in that Toy Piano project, the differences were in the transient response. When having different phase responses in L vs R (as with Slick EQ's dual mono Phi button) the difference becomes much more obvious, as our ears are tuned to pick up timing differences between them. Finally, my guess about that drastic ringing in the IR of the minimum phase filter: maybe that's because that sudden impulse in fact has lots of content around 22 kHz, and that's where the filter is doing its work, so that's the frequency with the most audible ringing. A worst case scenario at a frequency that typically will have much less energy and is quite inaudible. Might that be it?
Being able to hear phase difference is if I recall correctly a real thing. I believe I once heard that's one of the ways we are able to determine which direction audio comes from (both the phase difference from one wave first hitting one ear and then the other, and due to various reflections in different parts of a room).
Yup, but only in the middle-ish frequencies. Too low and the we can't perceive the difference, and too high and the difference becomes too small. Thus higher frequencies rely upon loudness for perception of direction, while direction becomes difficult to discern with lower frequencies. Where these changes occur in the frequency spectrum depends upon the size of your head.
Second comment - on why linear phase results in less ringing than minimum phase. A Dirac delta is all frequencies, each extending from minus infinity to plus infinity in time. Luckily, they cancel each other out everywhere except for one sample. Linear phase ringing is symmetric, but smaller, as it results only from those very high frequencies that were present in the original delta but removed by the downsampling anti-aliasing filter. With a minimum phase filter instead, we shift all frequencies in relation to each other - including ones below the new Nyquist - and they no longer cancel out at the sample positions so nicely.
Hey Dan -- you might enjoy my Psychoacoustics series; part 4 especially covers some interesting effects having to do with strange ways phase can be perceived by humans. Thanks for your video!
Headphone listener here. To me it sounds like a bass boost when the EQ is engaged. That was when it was just your voice. With the mono and stereo mixes it sounded like the audio was canting in the direction of the channel with the EQ active.
While the difference is subtle I can definitely hear it in the A/B test with your voice in minimum phase mode. With it on it sounds almost like the lower frequencies around 500hz are being distorted slightly, causing some harmonics to be created. It honestly reminds me a little bit of a low-quality audio codec, but again much more subtle. While I can't tell you exactly why this is, that's just what I hear.
When you were toggling on and off, I would say somewhere around 1-4k, maybe up to 6k, sounded a bit distorted, almost fuzzy, when toggled on. That's where I found it most noticeable (minimum phase).
Wearing headphones, that's just a widening of the stereo image on the low end. Many older "stereoizer" plugins do something similar by delaying the left (or right) channel by a very small amount, creating a difference between the left and right channels and thus a stereo image (that doesn't mix to mono very well). I assume the eq is essentially doing the same thing - a slight phase shift on the low end causing a difference between the left and right channels and creating some "width" on the low end.
I don't have plugin doctor, but a pure time delay would show up on a phase-frequency graph as a phase that falls linearly with frequency: a straight sloping line (wrapping around as it goes), with a slope proportional to the delay, rather than a horizontal line offset vertically.
A better and more mono compatible version of that trick is to use mid/side encoding. Starting with a mono sound which would have signal on the mid channel and silence on the side channel: Rather than delaying the left side or right side, you create a side channel out of nothing by using a delayed version of the mid channel. This causes very slight comb filtering on either side - positive on one and negative on the other. The frequencies that get cancelled on one side get reinforced on the other and vice versa. The best bit is that it's completely mono compatible because the differences cancel out and leave only the untouched mid channel.
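A minimal NumPy sketch of that mid/side trick (the delay time and amount are arbitrary, and this is just the idea, not any particular plugin's implementation):

```python
import numpy as np

def ms_haas_widener(mono, fs, delay_ms=12.0, amount=0.5):
    """Feed a delayed, attenuated copy of the mid into the side channel.
    The mid itself is untouched, so the mono sum stays exactly the original."""
    d = int(fs * delay_ms / 1000)
    side = np.zeros_like(mono)
    side[d:] = amount * mono[:-d]     # side = delayed mid
    left = mono + side                # M/S decode: L = M + S (comb up on one side)
    right = mono - side               #             R = M - S (comb down on the other)
    return left, right

# Quick check with noise: (L + R) / 2 returns the original mono signal exactly.
fs = 48000
mono = np.random.default_rng(1).standard_normal(fs)
left, right = ms_haas_widener(mono, fs)
print("mono-sum error:", np.max(np.abs((left + right) / 2 - mono)))
```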
I think i can hear the first static phase shift on headphones, but it's veery subtle. To me it sounds like a slight loss of presence in the low mids, without affecting the highs or the lows at all. I had to pause the video and come back to it a couple times to be sure I experienced a difference, but it could still be placebo. I also have very slight damage to my hearing (Missing a few dBs of mids on my right ear), so I experience all kinds of strange things I can't really explain or put my finger on that other engineers can't hear at all.
@Dan Worrall: Great video! I can hear a very small difference in your voice when bypassing the filter. It's a tiny kind of "boxiness" to the sound when enabling the crossover.
The headphone test is quite obvious: I hear it as a shift in the stereo image but also in frequency response. Like with the pink noise, one side will get louder in a certain frequency range in one ear.
5:00 With the crossover it sounds subtly "phasey", and strangely your voice moves up (vertically) in space. With a kick drum, phase shifts are much easier to hear.
Great vid as always. BTW I have a question triggered by the topic of anti-aliasing filters. Why are there different outcomes when resampling? Shouldn't it be mathematically identical, with just changed/moved sample points, if there is only ONE shape the reconstructed waveform can take? This is one of the last things within sampling theory for beginners I cannot grasp ;)
Oddly enough Dan, I heard an example yesterday where I *could* hear the difference of absolute phase, having said many times that I wasn't sure if I can ! And, I *think* I can hear it on your voice in this example, too. If I'm not fooling myself, it sounds less warm and full, especially near 100 Hz. I'm hearing less "boom" in your voice, just as I heard less yesterday in the bass synth and kick drum of the example I was listening to, although in that case I was flipping the phase of the entire mix. I scored 4/5 blind yesterday before I got bored, feel free to send me the files if you'd like and I'll try the same test with this example too 😛 (BTW I can very clearly hear the single-channel test in stereo, on both speakers and headphones, even just on your voice. I hear it as a shift in the image, either left or right depending which channel is active)
5:04/20:28 The change in timbre in your voice is a side effect of the filtering. A second side effect which is obvious is the lack of lows in your voice when the filtering is applied; the result is that your voice is more nasal. Since they are minimum phase high pass filters, they tend to smear the signal, and what happens as a result is that the image of the sound shifts to the upper mid-range, hence the change in timbre, or signal degradation.
With the minimum phase lowshelf filter engaged, it sounds like the bass frequencies are coming from behind me slightly left when the eq is on the right channel, and behind me slightly right when the eq is on the left channel. That's the only difference I hear.
If you (through the crossover) introduce a 180 degree phase shift for a certain frequency on one of the two binaural channels, that basically means you swap the mid- and the side-signal for that frequency. Of course that is extremely audible. Is it fair to say that you're hearing the phase shift though?? I mean, basically you're hearing differences in the panning of all your instruments at that frequency. But that's a result of the phase shift and not the phase shift itself… right?
I always suspected that it was subtracting one band from the full signal, to obtain a second, if that makes any sense. Because there's no latency and it nulls with dry signal.
Dan is the only person that makes me put on headphones even though I don't feel like it.
Hhahaha yeah
5:35 I heard that through the stereo speakers on my phone and went "ok ok Dan, I'll put on some damn headphones"
6:40 "Dan stop, you won"
I feel this comment much.
The thing about pre-ringing is that it's... counter-intuitive. We see it on a signal plot, but it's not an artifact. It's just what filtering *means*. If you think about that impulse, it has energy at all frequencies. So it makes sense that if you low-pass it, you get a smeared out impulse containing only the low frequencies. With a minimum phase filter, the smearing happens after the impulse; with a linear phase filter, the smearing happens symmetrically around the impulse. And then of course, if you high-pass it, you're going to get the exact opposite... so the impulse, swept down since you've removed the low frequencies. And that's what you get. And since this is all beautiful math, it indeed all re-combines perfectly - because it was never really an artifact, it's just a fundamental thing about what filtering does to signals.
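You can verify the "it all recombines perfectly" part in a few lines of NumPy/SciPy. This sketch (tap count, cutoff and buffer sizes are made-up values) splits a single-sample impulse into a linear-phase low band and its complement: each band has energy before its main peak, yet the two bands sum back to nothing more than a delayed copy of the impulse.

```python
import numpy as np
from scipy import signal

fs = 48000
ntaps = 511                        # odd length: symmetric, linear-phase FIR
delay = (ntaps - 1) // 2           # its constant group delay, in samples

x = np.zeros(2048)
x[512] = 1.0                       # the impulse: energy at all frequencies

h_lp = signal.firwin(ntaps, 500, fs=fs)      # linear-phase lowpass at 500 Hz
h_hp = -h_lp.copy()
h_hp[delay] += 1.0                           # complement: delayed delta minus lowpass

low = np.convolve(x, h_lp)[:len(x)]
high = np.convolve(x, h_hp)[:len(x)]
peak = 512 + delay                           # where the recombined impulse lands

# Both bands are smeared symmetrically around the peak (so there is pre-ringing)...
print("low band has energy before the peak :", np.max(np.abs(low[:peak])) > 0)
print("high band has energy before the peak:", np.max(np.abs(high[:peak])) > 0)
# ...yet their sum is a perfect, merely delayed, impulse.
print("max error vs delayed impulse:", np.max(np.abs(low + high - np.roll(x, delay))))
```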
But what about the actual ringing down below zero, I hear you say? Well, Dan used the wrong kind of wave editor here :-). He should've used one that does proper sinc interpolation between samples, which is the *actual* analog signal you get when you play back those digital signals. And if he'd done that, you'd have seen that... the ringing around zero was already there to begin with, in the original Dirac impulse! It's just that the sample points line up with the zero crossings so you don't see it, and the filtering makes it evident by mis-aligning that.
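And the "the ringing was already there" part can be approximated by sinc-interpolating the Dirac, e.g. by upsampling it with a windowed-sinc polyphase resampler (a stand-in for a DAC's reconstruction filter; the factor of 8 is arbitrary). The interpolated waveform is essentially zero at the original sample instants, but it ripples in between them:

```python
import numpy as np
from scipy import signal

# A single-sample Dirac impulse...
x = np.zeros(256)
x[128] = 1.0

# ...viewed at 8x the rate via windowed-sinc interpolation (approximating the
# band-limited analog waveform the playback reconstruction filter produces).
y = signal.resample_poly(x, up=8, down=1)
peak = int(np.argmax(y))

on_grid = y[peak - 32:peak + 33:8]     # values at the original sample instants
between = y[peak - 28:peak + 29:8]     # values halfway between those instants
print("on the sample grid :", np.round(on_grid, 3))   # ~[0 0 0 0 1 0 0 0 0]
print("between the samples:", np.round(between, 3))   # sinc ripple: it was always there
```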
So why can we hear pre-ringing when you use settings which are extreme enough on something like a kick drum, if it's not really an artifact? Because our ears don't work in the frequency domain - they work in both the frequency domain and the time domain. The switch between those two "modes" happens at around 20Hz or so (but it's not a hard boundary). Stuff below that we hear as events in time; stuff above that we hear as tones and frequencies. So when our ears process a filtered signal where the filter is at a very low frequency, they hear the signal as events in time, not as a frequency. And indeed, the pre-ringing caused by *frequency* processing with a linear phase EQ ends up moving some signal *backwards in time*, so when our ears hear it *in time* since it's such a low frequency, that sounds off.
Conclusion: EQs are broken at low frequencies, because our ears *don't work* in the frequency domain at low frequencies, so an EQ is doing the wrong thing. But if you must use one and majorly screw around with things that far down, then yeah, minimum phase ends up sounding better, because it doesn't violate causality and so is less offensive in the time domain.
good writeup, very interesting. thank you
👍 thx
Incredibly insightful! Never heard the explanation about time/frequency-domain like that. Makes a ton of sense. Thank you!!
This is both so academically and musically pleasing to my brain. Thank you for combining those words in that order.
All rise for Dan! 😎
Best audio engineering content on youtube! Fight me! I'm so sick and tired of this "top5 eq/compressors/limiters/saturators" crap, it's video equivalent of shitposting. This is real content! Great work Dan!
My dissertation at uni was based on an experiment where I shifted the phase of white noise around a frequency on one side of the headphones. This creates an artefact of our auditory system where we hear a tone in a location. I got people to record where they heard the tone spatially. People couldn't believe that when they swapped the headphones round the tone stayed in the same location.
Binaural phenomena are interesting for constructing a model of how our auditory system works.
Phase is one of the ways your brain can locate the direction of sounds, so it is not surprising that we can perceive phase differences between ears but not on their own.
:|
I remember doing some stuff with modulating two filters on each channel over some sound a while ago, and I felt like when it got more lowpassed, it was coming from a direction behind me. I'll have to experiment with this idea again with some white noise. Maybe I'll try out Plugin Doctor to see the phase.
Love your channel - incredible video!
HI ROOMIE
I SEE YOU ARE LEARNING MIXING
YES MY CAPSLOCK KEY IS BROKEN
oh look it's roomieo
Never expected to see you here 😌
@@ProdDJD Yeh.. Na.
@@Rhythmattica huh
Dan, you're the Mr. Einstein of audio! Bringing all the small complex details together like no other can. This is brilliant.
To my surprise I can hear a difference when you toggle bypass on and off at 4:57 listening on AirPod Pros. Sounds a bit more nasal when it’s turned on. But I’m not 100% sure because it might simply be differences in how you speak. It could also be that the YouTube codec exaggerates the effect compared to the WAV that you were hearing when you made the video.
To me there's an audible difference when I listen on my hi-fi speaker system. In a blind test I could tell a lot of the times when it flipped. Minimum phase makes the voice sound a bit thinner, a bit as if you're listening to someone close to a wall. Everything is still there, but there is an ever so slight ringing boxiness to it. There are sentences where it is more obvious than others, probably due to there being more or less content in the 500Hz region.
With the crossover in, it is as though I hear more mids or upper mids; i.e. the voice sounds a bit more articulate. None of this makes sense to me, as I would have expected the opposite. Anyway, there it is. I can hear it. I'd like to think this is a result of years of mastering, but actually I think it is probably something I always could hear. I know I am sensitive to phase, and it sometimes distracts me from more important stuff.
I'm glad other people can hear that as well, I was starting to think I'm crazy!!! I'm just kidding of course, but I do actually agree with basically everything said here. In another comment I posted I compared it to using slightly different but very similar microphones on the same source. Audible, but probably nothing to *really* worry about. Just another consideration.
Yes, i'm hearing the same thing. one way the low mids appear a bit more prominent and the other way i hear more upper mids. it's very slight but i can definitely hear it. Listening on VSX
Every DW video is a priceless master class. He is the only "youtuber" that makes me need to pause the video and take notes.
Regarding the symmetry of the ringing in the oversampling filters of Saturn 2, it is correct that for the equivalent minimum and linear phase filters, there is a difference in whether the ringing occurs after the impulse, or symmetrically before and after respectively. However, this only really applies to FIR filters, which can be minimum or linear phase, or anything in between. As the name suggests, the ringing in IIR filters is potentially unbounded, so it's not really meaningful to compare the impulse response of an IIR and a FIR filter in this way, because it's not really practical to have an infinite pre-ring in the first place. If you design a windowed FIR filter however, you will see this exact difference between a minimum phase and a linear phase variant of it.
The reason why the responses in the case of Saturn 2 are different relates to this, and is perhaps deceptively simple: the minimum phase oversampling filter is altogether different from the linear phase filter; the former being some sort of an IIR filter, and the latter of course a FIR filter. Not only that though, but if you look at the magnitude response the minimum phase oversampling filter is quite a bit steeper than the linear phase filter, and consequently has more ringing as well.
Additionally, the "interesting pattern of differences" you see from The Drop's linear phase oversampling is most likely caused by the window function of the FIR filter kernel. Because the impulse response has to be finite, the window function crops it to a certain width, making it imperfect and leaving behind ripples like this in the passband. For a well designed FIR filter such as this one though this is not an issue, as the ripple is well below any sort of audible threshold.
Hope I got everything right there, please correct any of my mistakes in the replies :)
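A rough SciPy illustration of that FIR-vs-IIR point (these are stand-in filters, not FabFilter's actual oversampling filters; orders, cutoffs and ripple specs are made up): a windowed-sinc linear-phase FIR rings symmetrically around its delayed peak and shows a little passband ripple from the window, while a steep causal IIR puts all of its ringing after the impulse.

```python
import numpy as np
from scipy import signal

fs = 96000          # pretend 2x-oversampled rate
fc = 21000          # transition just below the original Nyquist, like an anti-imaging filter

# Linear-phase route: a windowed-sinc FIR. Symmetric taps => symmetric ringing.
h_fir = signal.firwin(255, fc, fs=fs)
delay = (len(h_fir) - 1) // 2

# Minimum-phase-style route (a stand-in for "some sort of IIR"): a steep elliptic IIR.
sos = signal.ellip(8, 0.05, 100, fc, btype='low', fs=fs, output='sos')

x = np.zeros(2048)
x[512] = 1.0
y_fir = np.convolve(x, h_fir)[:len(x)]    # rings before AND after its (delayed) peak
y_iir = signal.sosfilt(sos, x)            # causal: all ringing comes after the impulse

print("IIR output before the impulse  :", np.max(np.abs(y_iir[:512])))          # exactly 0
print("FIR output before its main peak:", np.max(np.abs(y_fir[:512 + delay])))  # > 0: pre-ring

# The window also leaves (tiny) ripples in the FIR's passband magnitude:
w, h = signal.freqz(h_fir, worN=8192, fs=fs)
passband = np.abs(h[w < 15000])
print("FIR passband ripple (dB):", 20 * np.log10(passband.max() / passband.min()))
```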
Dan Worrall: the David Attenborough of DAWs. Thank you for showing me the majesty of phase in its natural habitat.
at first, i was all like... errr this sounds boring. but Dan always makes the dull sh1t sound interesting. Teaches it well and makes you laugh with his dry and intelligent humour. I learnt a lot watching this.
6:07 - I was listening on monitors, but kept on smacking the "mono" button to see what it sounded like 😊Thank you for another great video!
It's unbelievable how clear and precise your explanations are.
Thank you for the amazing job you've done preparing this kind of video.
Cheers from Italy!
Dan, thank you for this insightful video and your empirical approach.
I'm not an audio professional (I just listen to music a lot), but I'm fascinated by the technology and have been using EQ extensively for a while now. This video answered many questions I've been thinking about for months, and it has given me even more things to think about in the future. This goes for many past videos too. You really motivate me to delve into the tougher, less surface level parts of music production and processing. Keep it up!
Love the new content, Dan, great to see your videos regularly appearing in my subscription feed again! Any chance of some more of your Channel Members mix feedback content some time soon too?
I hear it. With the crossover switched in it's definitely thinner and more honky around what I'm guesstimating is 500hz. The bypassed version is fuller and more natural to my ears. Not sure I'd notice if it were left bypassed or engaged statically, but very noticeable in a A/B comparison.
Me too. It's subtle, but it's a difference. I think I only hear it when I'm seeing the bypass change; in a blind test it's more difficult to detect.
I hear it. That is actually kinda dramatic to me and if I look away I can tell when he's switching it on or off. To me this also isn't surprising though. Sounds that are further away are phase shifted against themselves as they bounce off of other surfaces. That's how we psycho-acoustically perceive the z plane (also the air itself rolls off the high frequencies some). I'm not talking about something that's perceptible as a time delay. The sound hits your ears as one summation. To me it sounds further away. The presence of the sound is pushed back. That's the best I can describe it.
I would describe the crossover phase shift as a "laser-like" sound. Whenever there is a vertical line in the waveform (e.g. transients, pure saw/square), it will become "dispersed": lower frequencies arrive late, like a superfast sine sweep from high to low frequency.
I learnt to tell the difference by extensive use of the disperser effect. Which is basically crossover filter stacked multiple times. The crossover phase shift sound is a very, very subtle version of the disperser.
The sound will also be much more apparent when followed by distortion, especially on a pure saw wave. When distorting/waveshaping a pure saw wave, it will basically become a square, losing the even harmonics. If there is a crossover filter before the distortion, the reaction of the harmonics is quite different: the even harmonics will be preserved, and the frequency response will also be changed.
Excellent material, as always I learned a lot.
One nitpick though - that I believe is important. With two bands, you don’t need two filters, but one - one signal is “x”, rest is “input - x”. What this means for the pre ringing null test - it doesn’t mean it is not there, just that they cancel (opposite). I know you make this point later in the video, but this mathematical perspective in my opinion makes it easier to understand when it is going to be transparent.
When dynamics reduce a band by 6dB, only half of the ringing will be cancelled by other band flipped phase.
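That "only half the ringing cancels" point is easy to demonstrate with the same kind of complementary split (done here with a linear-phase FIR, so the subtraction needs a matching delay; the cutoff and buffer sizes are arbitrary):

```python
import numpy as np
from scipy import signal

fs = 48000
ntaps = 511
delay = (ntaps - 1) // 2

x = np.zeros(4096)
x[1024] = 1.0                                   # impulse as a worst-case transient

h_lp = signal.firwin(ntaps, 500, fs=fs)         # linear-phase low band
low = np.convolve(x, h_lp)[:len(x)]
high = np.roll(x, delay) - low                  # "input minus x": the complementary band
peak = 1024 + delay                             # where the recombined impulse lands

# Static recombination: the two pre-rings are equal and opposite, so they cancel.
print("pre-impulse signal, both bands at 0 dB:", np.max(np.abs((low + high)[:peak])))
# Reduce one band by 6 dB (as a dynamic band would): half its ringing is now exposed.
print("pre-impulse signal, low band at -6 dB :", np.max(np.abs((0.5 * low + high)[:peak])))
```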
I can definitely hear the difference. I thought I was just imagining it but on the pink noise part it was very very obvious. It's difficult to put into words what the difference is, but it's kind of like the same feeling as looking at yourself in the mirror versus looking at picture of yourself. It's just off in ways that are difficult to describe but are very clear to you.
My experience with phase shift when done in a binaural setting (like in this video) is always that the audio seems to "lean" towards the unshifted channel. This is especially prominent in the pink noise test, where the unshifted channel feels as if it has a boost in volume at the phase shift frequency. I made sure I was listening to a perfectly binaural signal (IEMs included) and this is still the case - almost as if my brain is doing the summing and notching within my head.
That's because your brain is interpreting the delay to mean the source is further away from that ear, much like it does if you delay one channel by a small amount.
sounds like the haas effect
@@TheOfficalAndI Thanks, I couldn't remember the name!
Hey Dan,
Thanks so much for making this video! It's something I've been struggling to understand and research for over a year. It took me a while to figure out that FabFilter's Q setting was different to that of my other EQs (and that the signal at the cutoff frequency was only being attenuated by 3 dB). This really helped clear up a few of the questions that I had left about crossovers.
The manifestation of pre-ringing in linear phase filters is a vivid demonstration of a rarely acknowledged property of high-order filters: they are mathematically equivalent to damped oscillators. Ringing occurs at the crossover frequency of the filter whenever its high-Q internal feedback sections are excited by an external input. The amount of damping determines how quickly the ring decays before and after each impulse. If you increase the Q until the damping reaches zero, the ring persists indefinitely, and if you push past that point into negative damping, the filter spontaneously oscillates. The upshot of this analysis is that the ring is not an amplification of the input pulse - it is a transient signal generated by the filter itself.
@16:05 The reason the minimum phase oversampler produces more ringing than the linear phase oversampler is that the steep low-latency filter in the MP oversampler is less heavily damped than the higher-latency filter in the LP oversampler.
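A small illustrative sketch of the damping argument above - just a generic second-order resonant peak from scipy, not either oversampler's actual filter: higher Q (less damping) means the section rings for longer after an impulse.

```python
# Sketch: impulse response of a 2nd-order peak filter at 1 kHz for a few Q values.
# Higher Q (less damping) -> the section rings longer after the impulse.
import numpy as np
from scipy.signal import iirpeak, lfilter

fs = 48000
x = np.zeros(8192)
x[0] = 1.0

for Q in (0.7, 5.0, 20.0):
    b, a = iirpeak(1000.0, Q, fs=fs)        # resonant peak at 1 kHz
    h = lfilter(b, a, x)
    # crude ring-length measure: last sample still above -60 dB of the peak
    idx = np.flatnonzero(np.abs(h) > np.abs(h).max() * 1e-3)
    print(f"Q={Q:5.1f}  rings for ~{idx[-1] / fs * 1000:.1f} ms")
```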
5:21 I hear something in the high frequencies, something like artifacts, but it could be a placebo effect. The voice sounds more natural without the FabFilter on... could be placebo, though.
... to me, I can hear the difference. My translation is: it's like an imager. When you activate/deactivate MB, the image turns from mono 45 degrees left/right, lower and higher (360 degrees), with a slight fattening of the low frequencies. Of course I could give an example, but it's just an image in my mind. Love your surgical work! Best regards!
Yea, audio jumped to different location when switching.
It sounds a little more washed out to me when it is turned on. But honestly I have no idea if it is just placebo or not.
Sadly you are not special. It's because of how headphones work - driver/housing imperfections. Simply put, the driver doesn't sound the same when pushed outwards vs inwards. You are hearing imperfections of the headphones; some are better at this than others. Chances are you are not going to hear the same thing on a set of monitors.
@@BUPETA3351 I'm listening on my Adam A7xs and heard the same thing as OP. When activated, the voice to me sounded "brighter", yet pushed back, or just quieter. Without it, I heard a little more low end and it sounded "cleaner" to my ears.
Honestly, it doesn't sound bad by any means. It was SUPER subtle and I tried closing my eyes, but couldn't tell exactly. So idk, I heard something very similar but it's not like it makes me think one is better than the other.
Are you talking about when Dan applies the phase shift to only the left channel (after 5:30)? Because that should be a pretty obvious difference.
Or can you hear a difference when he is applying the filter to both channels at 5:00?
Hey Dan! First of all: thank you for all your amazing videos. Big fan here! Any chance you could upload a video on "soft clipping" on the master bus? Recently I've heard more and more about "soft clipping instead of limiting". Thank you so much and keep up the great work! :)
Oh wow, I can hear the difference night and day. I wouldn't even say it's that subtle, but it is hard to describe: with the crossover turned off there is a perceived fuller low end on your voice, the stereo image feels a bit wider, and strangely I notice the sibilance more (perhaps due to transients). With the crossover switched in everything feels tighter, narrower in image, and the low end doesn't have the same fullness (but it's still present 🤷♂️). There's a sense of softness to high frequency transients with the crossover engaged. I did not expect to hear it clearly, so I started closing my eyes to test, pausing as soon as I knew. I was right every time. Using Apple earbuds! Weird, and great video :)
Watching youtube videos NOT on your phone is such a good way to go. Anyway, I do know of people who can hear transients being off of the beat. It's something that I haven't acquired the skill to listen for yet, but I do know of people who are able to hear some 1-2 millisecond issues and know that elements aren't matching up the way they should be. Perhaps that skill transfers to being able to spot weirdness more reliably with phasing issues?
Yo, another thing to elaborate on: the reason you can hear a difference when changing the phase in just one channel is the Haas effect.
Producers often think of the Haas effect as a trick for creating an interesting stereo field by delaying one of the two channels, hence creating a difference, but that is not actually what the Haas effect means.
The Haas effect describes how you perceive a sound as coming from the direction you hear it from FIRST, even if you hear it much louder from somewhere else.
Meaning that even a slight change of phase in one channel will change where you feel the sound is coming from and its directionality, all else being equal.
To see why this makes sense, imagine you're in a cave, where a sound might reach you loudly amplified through the cave system, or in a landscape where you hear a sound loudly reflected off a cliff.
However, regardless of how quiet it is, if you hear it from someplace else first, you know that path is the shortest one between you and the thing you're trying to locate.
Phase difference - or more accurately, the time difference between the two channels - is very significant to how we perceive directionality in sound.
This is actively used by the "agent" system in our brain to determine what a sound means to us and where it is coming from.
Two other interesting things related to this. First, when figuring out where a sound is coming from, people usually turn their head: the change in phase observed during this rotation - comparing the initial state with the later one - allows the brain to pinpoint the direction something is coming from with insane accuracy.
The thing that it checks for is phase differences between both ears, not loudness.
Second, this phase perception is also related to wavelength, since turning your head has a different effect on the phase of each individual frequency. This also explains why we hear differences between stereo channels with different sensitivity across the frequency spectrum: little to none in the bass (the wavelengths are so large that the phase difference between the ears is tiny), most in the low mids, with a gradual falloff towards the top.
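A quick numerical sketch of that last point, assuming a fixed interaural time difference of about 0.6 ms (roughly a source well off to one side of an average head): the same time offset is a tiny fraction of a cycle in the bass but wraps past a full cycle in the highs, which is why interaural phase is mainly useful as a cue in the low mids.

```python
# Sketch: a fixed ~0.6 ms interaural time difference expressed as a phase
# difference at a few frequencies. Phase grows linearly with frequency for a
# fixed time offset; above ~1 kHz it exceeds a full cycle and becomes ambiguous.
import numpy as np

itd = 0.0006                                 # seconds, assumed value
for f in (50, 200, 800, 1600, 4000):
    phase_deg = 360.0 * f * itd              # phase difference = 360 * f * ITD
    print(f"{f:5d} Hz -> {phase_deg:7.1f} deg between the ears "
          f"({phase_deg % 360:5.1f} deg wrapped)")
```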
In response to your call to action: I can totally hear a difference with the filter engaged. When engaged there's a difference in how certain frequency ranges present themselves in relation to one another in the time domain, albeit quite subtle. One way I sometimes think about it is imagining the signal as a 2D vertical line, as if it were coming from a single mono speaker where top to bottom represents 20k to 20 Hz. As phase shifts are added, the signal feels less like one full sound arriving at once and more like individual pieces arriving slightly offset from one another. Almost like a train coming towards you vs a bunch of cars on a highway. I've heard some describe it as "whooshing", but that's more in relation to describing HPF phase shifts around their cutoff point. In reality, I've probably done a horrid job explaining this and just seem batshit crazy at this point. I just know most days being able to hear slight offsets like this is a curse above all, as it drives my OCD up the wall and will derail a mixing session if I'm the only one in the room.
"-100 dB is negligible."
OTTs with 20:1 ratios: Allow us to introduce ourselves.
It's massive (If you can hear it) 🔥
Here for the thumbnail
I can't say I can hear a single rotation of phase, but I do recall that once I started adding more, I'd hear almost a wetness (like something watery or liquid) or thickness in the sound, which would eventually turn into what sounded like a laser-beam effect with the highest-order (most poles) filters.
That sounds like you are hearing the frequency dispersion as the group delay difference from high to low frequencies becomes audible. Makes sense: this is exactly what Kilohearts' Disperser plugin is known to do, and there are demos of it turning kick transients into laser zaps.
@05:15 I can hear a difference when you toggle. When it toggles on it sounds like slight noise is added to the mix, like a different mic setup almost.
As soon as I start to feel like I'm a pro, Dan Worrall drops a video that humbles me into thinking different.
Thanks to Dan's videos, I came to love messing with phase. It's just another tone control for me. I routinely tune crossovers to a freq/note that's not in key, or dial out a resonance. Sometimes it works, sometimes it doesn't, but it's a good bit of fun to listen to the slight freq shifts and how they affect the sound. You can also roll your own stereo spreader if you know what you're doing.
We can all definitely hear relative phase differences, but only between the ears. This is an important part of locating sounds in the horizontal plane. IIRC we are most sensitive to that in the 200-800 Hz range. Above that it's loudness differences; below, the sense of location gets lost the lower a sound gets. The wavelengths become so large that the phase difference between our ears is too small.
5:19
I can hear a difference, one that's quite obvious to my ears when switching on and off. To my ears, when the plugin isn't bypassed the low end of your voice becomes more apparent. The whole thing maybe sounds a bit more muffled, and I'd say the low end of your voice sounds less "tight" or "controlled".
If I were to hear them in isolation however, I don't think I would be able to tell a difference.
I'm a VO artist, so my projects usually contain only a single track and CPU usage isn't much of an issue. If I run a pretty hefty HPF or an instance of Pro-MB, I usually slightly prefer the sound of linear phase. If my HPF is especially high, I like running it in linear phase so it leaves me more headroom in any dynamics processors or limiters. Luckily Pro-L2 can account for this, but not all the plugins in my chain can.
Once again a clear and informative tutorial, thanks for all these videos Dan.
As for the minimum phase toggling test @4:54, much to my surprise, I can hear a difference. To me with x-over switched in (bypass light off), your voice has ever so slightly more edge, a focused presence if you will. X-over off your voice loses some of that focus and gets wider, more relaxed in tonal balance. To me this gives an image of it extending more evenly to lower frequencies.
Listened on laptop with cheapo earbuds, will be interesting to try this later in my regular studio setup. Now isn't that great, finding just another thing to worry about. 😜
Same for me! Heard the same thing.
I did not expect to hear it but I can. To me it makes the sound move behind me, I don’t know how to explain it better.
That static phase shift introduced in Pro-MB minimum phase mode is definitely audible on certain sources - I've actually intentionally used it on kick drums, it gives a weird kind of electro-thump depending on where you set the crossover
Phase shift, and even polarity flips, occasionally seem audible on some low frequency sounds, possibly more so on asymmetric waveforms.
4:54 So I definitely had the thought, "can't wait to see everyone in the comments pretending to hear a difference"
I heard no difference on near field monitors. Then I put headphones on and was pretty shocked. I heard/felt a difference, but am not sure I have the vocabulary for exactly what it is. It is as if the sound was moving behind me, and without a change in frequency response. Not entirely convincingly behind me, but something like it. Honestly wondering if anything related to this is implemented in things like KLANG.
Think yourself into a spherical image: 360 degrees each way, sized by the volume - volume is really the size of it. That's how I picture sound. iZotope Insight has ways of showing sound like that (the analyzer). Some people imagine sound the way FabFilter's analyzer shows it, from 0 to infinity (Nyquist???). Keep up the good work guys!
I can hear fixed phase shift, but only when it's fairly extreme - when people have used sharp low cut filters on bass sounds that have repeating sharp transients (sawtooth, square, etc.). It changes the character of the sound due to post-ringing artifacts and introduces a resonant character at the crossover point. Try the plugin "Disperser" by Kilohearts. I believe it uses a time delay rather than phase shift, so not quite the same (no post-ringing, I presume), but it sounds like a more extreme version of a sharp low cut on a sharp waveform. A sharp transient (like a Dirac spike) becomes a downwards sweep in a similar way to phase shift on a low-cut filter.
aren't time delays and phase shifts identical
@@cornoc No. Phase shift relates to frequency. For instance, 45° of phase shift means a sinewave of a particular frequency shifting through its 360° cycle by 45°. A time delay is measured in seconds (or milliseconds), so would shift different frequencies by different amounts in phase.
@@PrincipalAudio i understand that the units are different but what i meant was, isn't e.g. a 180 degree phase shift of a 20 hz since wave identical to a time delay of 0.025 seconds? if you're doing it in a frequency-dependent way, how do the two differ in what effect they have on a signal? from how it sounds to my ears, i assumed Disperser applied a different time delay to different frequencies so that they all get smeared out and arrive at different times. is that different than a different phase shift for each frequency?
@@cornoc Oh, my apologies. I'd been responding to someone else's comment on a different video earlier where they were also talking about phase shift but from a different perspective. I got totally confused! Doh!
Yeah, you're right, Disperser does do a time delay at different frequencies and it gives the same effect as with phase shift on an EQ with a very sharp HPF slope. I 'presume' they designed the plugin to emulate this kind of phase shift "smearing" effect. It sounds the same to me - aside from the fact it does no filtering and doesn't have a resonant ringing around any cutoff frequency.
But in response to your initial comment:
>>"aren't time delays and phase shifts identical"
The end result may be the same, that some frequencies are shifted more than others in time, but technically they're two different beasts. I think of phase shift as being a _"time delay relative to frequency, measured in degrees",_ and a time delay as _"a time delay",_ because it isn't frequency dependent. The term phase shift really comes from Electronics Engineering where AC theory comes into play. Capacitive and inductive reactance, and how much shifting occurs at which frequencies of interest.
@@PrincipalAudio i see, thank you.
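A minimal numpy/scipy sketch of the distinction discussed in this thread - not Disperser's actual algorithm, just a plain 1 ms delay next to a generic first-order allpass: both have flat magnitude, but the delay's phase falls linearly with frequency (constant group delay), while the allpass delays different frequencies by different amounts.

```python
# Sketch: phase/group delay of a pure 1 ms delay vs a 1st-order allpass at 1 kHz.
import numpy as np
from scipy.signal import freqz

fs = 48000
w = np.linspace(10, 20000, 512)             # analysis frequencies in Hz

# Pure delay of 48 samples (1 ms): phase = -2*pi*f*tau, a straight line vs frequency.
tau = 48 / fs
delay_phase = -2 * np.pi * w * tau

# 1st-order allpass with its transition around 1 kHz: flat magnitude, bent phase.
c = (np.tan(np.pi * 1000 / fs) - 1) / (np.tan(np.pi * 1000 / fs) + 1)
_, h = freqz([c, 1.0], [1.0, c], worN=w, fs=fs)
ap_phase = np.unwrap(np.angle(h))

# Group delay = -dphi/domega: constant for the delay, frequency-dependent for the allpass.
gd_delay = -np.diff(delay_phase) / (2 * np.pi * np.diff(w))
gd_ap = -np.diff(ap_phase) / (2 * np.pi * np.diff(w))
print(gd_delay[0], gd_delay[-1])            # ~0.001 s at every frequency
print(gd_ap[0], gd_ap[-1])                  # longer delay at low frequencies than high
```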
Please make a series on sound engineering, basics to advanced.
I can hear the impact of IIR crossovers on bass (it sounds clouded) and have switched exclusively to perfect-reconstruction FIR filters when making low-end crossovers. But lots of people find the phase shift from IIR inaudible, including sophisticated plugin developers and textbook authors. Thankfully, PluginDoctor recently added square & saw waves to its oscilloscope view, which are helpful for understanding a plugin's filter topology (phase shift isn't kind to sharp corners in a waveform).
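A small sketch of that square-wave point, assuming a generic cascade of first-order allpasses rather than any particular plugin's filter: the magnitude spectrum is untouched, but the square wave's corners get smeared and the waveform's peak changes.

```python
# Sketch: a 100 Hz square wave before and after a cascade of 1st-order allpasses.
# The magnitude spectrum is untouched, but the sharp corners get smeared in time.
import numpy as np
from scipy.signal import lfilter, square

fs = 48000
t = np.arange(fs // 10) / fs
x = square(2 * np.pi * 100 * t)

def allpass1(sig, fc, fs):
    # first-order allpass with its phase transition around fc
    c = (np.tan(np.pi * fc / fs) - 1) / (np.tan(np.pi * fc / fs) + 1)
    return lfilter([c, 1.0], [1.0, c], sig)

y = x.copy()
for _ in range(8):                            # stack several allpasses around 500 Hz
    y = allpass1(y, 500.0, fs)

print(np.max(np.abs(x)), np.max(np.abs(y)))   # waveform shape / peak changes
print(np.std(x), np.std(y))                   # overall energy essentially unchanged
```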
Hi Dan, Cool Video! This time I would like to give you full 5 of 5 Stars. Thanks a lot for pointing some issues out concerning my questions. Blessings, Tim! :-)
You need a “Anyway let’s do a null test” t-shirt
@8:12 Yeah, I honestly don't believe anyone can hear "perfect phase". However, we definitely have the ability to detect phase differences across our two ears, because it's actually a very important part of how our brains work out where sound sources are located in 3D space. Our ears are physically spaced apart, so if a sound reaches one ear before the other, the sound will have a time delay and therefore a phase difference, because it has to travel a distance around your head. Our brain decodes phase inconsistencies between the left and right ears as stereo image. That's why the stereo image leans towards the bypassed channel in the binaural demo: your brain is getting tricked into thinking the sound is reaching one ear first and therefore must be coming from that direction.
Lmao I’m here for the thumbnail change
It could be completely placebo, but when bypassed your voice sounded much more natural - that's the only way I can describe it.
For reference, I’m watching on my youtube tv in my living room through bose soundsystem
I haven't even watched this video at the time of writing this comment and I already know I'm going to enjoy it
As always really good info from Dan.
Didn't listen blindly but I think I heard the static phase shift already in the mono signal. It's subtle though. I've done some (unscientific) tests before though and static phase shift audibility seems to also depend on the source material. I remember something acoustic, pureish, continuous and spectrally rich like a good cello sound being very easily audible. I've always been baffled by the whole phase is inaudible thing.
To me, that phase shift sounded like a bump in perception of mids. Maybe kind of like a wide bell +2dB at say 2kish. Maybe a bell around 300 -2dB.
Maybe it's placebo, but I'm eating dinner at my desk in my studio and I shot my head up from my plate twice when I thought I heard the same change in your voice. Both times you'd brought Pro-MB back into the mix.
So, there you go.
I love how Dan doesn't upload for some time and then uploads three or four videos that are all related to each other.
5:11 I did a blind test with both headphones and monitors, and to me it seems to be audible. There is a slight tonal difference, which sounds almost a bit like EQ, except much more subtle and with a very intricate response. A bit like how different but similar microphones will sound different on the same source. This could of course also be explained by you moving slightly in front of the microphone, and it certainly isn't something you should really worry about.
In the following tests, the difference is huge to me. It's not merely a shift in frequency response: I hear the sound in stereo with very audibly wrecked phase. It sounds almost painfully broken. It is quite confusing, actually. I can hear this both with headphones and on monitors, with surprisingly little difference. This might suggest that I can, in fact, hear absolute phase, and I cannot overstate how irritating the feeling you get from those tests is. I can also clearly hear "which way" the phase is broken. To me it sounds like a terribly out-of-phase stereo recording that has also been slightly panned to one side, except it's not quite the same. It sounds like it's panned to one side, yet I can also clearly hear that it is not. It is quite the bizarre sensation, really.
This is also something I oftentimes hear on YouTube, especially with stereo camera mics etc. It can be so annoying and irritating that I stop watching (and usually is, when it occurs).
I'm sorry for rambling, I thought I'd just write my experience down while it's still "fresh". That being said, I did know I could hear phase before (since I tried out the Ableton Utility effect for the first time ;D). To be quite honest I always thought this was something everyone can hear, but I guess it might not be something everyone experiences the same way (is anything, actually? xD). I'd love to hear some other people's thoughts, it's always interesting to understand other experiences!
I hope this was interesting or helpful to anyone, Cheers!
Hi Dan! Would you consider doing a tutorial on Goniometers? I understand basically what they do and what they show (thanks to your earlier videos among other resources) but there's still so much about them that I don't get. Why is it that white noise creates a mist across the entire field? Why do stereo Bass synths create these weird and cool geometrical patterns? Is there something to be learned from the shape of sounds in the goniometer when the shape is geometrical versus not, or should they just be enjoyed as cool visuals, and the real info is how much correlation there is and what will be lost in mono, etc.
If that made sense :)
Oh yes, I would love to understand how a Goniometer works!
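A minimal matplotlib sketch of what the goniometer question above is describing, under the common assumption that a goniometer is just an L/R scatter plot rotated 45 degrees (mid up, side across): a deterministic, phase-offset stereo sine traces a clean geometric figure, while uncorrelated white noise fills the field as a mist.

```python
# Sketch: goniometer = L/R scatter rotated 45 degrees (mid vertical, side horizontal).
import numpy as np
import matplotlib.pyplot as plt

fs = 48000
t = np.arange(fs // 50) / fs                 # 20 ms of signal

# A stereo sine with a 90-degree inter-channel offset traces an ellipse/circle;
# uncorrelated white noise fills the field as a mist.
sigL, sigR = np.sin(2 * np.pi * 220 * t), np.sin(2 * np.pi * 220 * t + np.pi / 2)
nL, nR = np.random.randn(len(t)) * 0.3, np.random.randn(len(t)) * 0.3

def gonio(ax, L, R, title):
    mid, side = (L + R) / np.sqrt(2), (R - L) / np.sqrt(2)   # 45-degree rotation
    ax.plot(side, mid, ".", markersize=1)
    ax.set_title(title)
    ax.set_aspect("equal")

fig, axes = plt.subplots(1, 2)
gonio(axes[0], sigL, sigR, "sine, 90 deg offset")
gonio(axes[1], nL, nR, "white noise")
plt.show()
```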
I have been able to hear phase shift in mono, but only in extreme circumstances. If you run a pure saw wave through a bunch (more than 7) of allpass filters in random positions, it sounds softer, almost like a chorus effect, but without the modulation.
An alternative is to randomize the phases of the saw wave's harmonics if you have a synthesizer that can do that (Sytrus in FL Studio, and Xfer's Serum, I believe), and the effect is even more pronounced.
Oh, and I just reproduced the effect with a saw wave at C4. It's probably less noticeable at high frequencies
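A small numpy sketch of the harmonic-phase trick described above, assuming a plain additive saw at roughly C4 (261.63 Hz): identical harmonic magnitudes, but randomized phases produce a very different waveform.

```python
# Sketch: additive saw at ~C4 with standard vs randomized harmonic phases.
# Same magnitude spectrum, very different waveform (and, per the comment, sound).
import numpy as np

fs, f0, dur = 48000, 261.63, 1.0
t = np.arange(int(fs * dur)) / fs
n_harm = int((fs / 2) // f0)                 # keep all harmonics below Nyquist
rng = np.random.default_rng(0)

saw = np.zeros_like(t)
saw_rand = np.zeros_like(t)
for k in range(1, n_harm + 1):
    amp = 1.0 / k                            # saw harmonic amplitudes fall off as 1/k
    saw += amp * np.sin(2 * np.pi * k * f0 * t)
    saw_rand += amp * np.sin(2 * np.pi * k * f0 * t + rng.uniform(0, 2 * np.pi))

print(saw[:5], saw_rand[:5])                 # same harmonic magnitudes, different shapes
```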
When the Minimum Phase eq is engaged your voice becomes lighter and has a hint of distortion. It's a bit like running a signal really quietly through a distortion pedal; even when no distortion is happening, you still get an effect. But, in this case, only a tiny amount. Cheers!
5:21
When you switch off Pro-MB I hear more, and more natural-sounding, lows.
Switched on, these lows move more to the back, and to me it's harsher in the upper frequencies, especially on words that contain more vowels.
I already watched this video, I came back for the new thumbnail, still worth it.
Yep, can definitely tell the change between bypass and not bypass, even on iPhone speakers (iPhone 13 Pro stereo speakers). Not sure I could tell the difference long term - maybe it's the point where the change occurs that I can hear - would need a longer sequence for A/B. Very interesting!!
Of course we can hear the difference between left and right; it's kind of like applying the Haas effect to only a small part of the frequency spectrum.
If you didn't have the two signals (left and right) going into your head simultaneously to compare, why else would delaying some frequencies by a tiny bit be audible at all?
Came for the thumbnail ;-)
Awesome video as always, the tune used for the binaural test was fucking awesome. Difference was clear as day, sounded like lopsided phasing issues to me lol.
You really need to do a full length glitch hop/bass music album. Do you by any chance know of the artist Tipper? Very reminiscent of his early work. I always hear Tipper-y influences in your music and was wondering if that's just me imprinting my taste onto your music.
If you are familiar with Tipper, I am curious what you think of how he masters his music. His more aggressive stuff sits around -5 LUFS and his dynamic stuff around -13 LUFS.
This video has got me wondering about the creative applications of deliberate and very audible pre-ringing. Seems like it might be an interesting diffusion effect for something with sharp transients like a clean guitar part.
I wonder if reverb units intentionally use pre-ringing somewhere.
MFreeformPhase can create super audible pre-ring that does sound kinda interesting on leads and mid basses
@@TransistorLSD Interesting thought! I know they commonly use Schroeder allpasses for similar reasons. The sound of a Schroeder allpass is very different from the smooth sound of pre-ringing, but it has a similar effect of spreading the energy out over time.
@@isetpeoplaflame5936 Cool, I'll look into that, thanks!
I've managed to get some interesting effects by completely misusing ReaFIR, the FFT plugin in Reaper.
@@eyeball226 notches everywhere :) Mfreeformphase is free btw!
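For reference, a minimal sketch of the Schroeder allpass mentioned above (a D-sample delay with feedforward/feedback gain g) - illustrative values only, not any specific reverb's parameters: the magnitude response is flat, but an impulse comes out as a train of decaying echoes, i.e. the energy gets spread out over time.

```python
# Sketch: one Schroeder allpass (delay D samples, gain g). Flat magnitude, but an
# impulse comes out as a train of decaying echoes -- energy spread over time.
import numpy as np
from scipy.signal import lfilter

def schroeder_allpass(x, D, g):
    # y[n] = -g*x[n] + x[n-D] + g*y[n-D]
    b = np.zeros(D + 1); b[0], b[-1] = -g, 1.0
    a = np.zeros(D + 1); a[0], a[-1] = 1.0, -g
    return lfilter(b, a, x)

fs = 48000
x = np.zeros(fs // 4); x[0] = 1.0                  # impulse
y = schroeder_allpass(x, D=223, g=0.7)             # ~4.6 ms delay, illustrative values

print(np.flatnonzero(np.abs(y) > 1e-3)[:5])        # echoes at 0, 223, 446, ... samples
print(np.sum(x**2), np.sum(y**2))                  # both ~1.0: energy preserved, spread out
```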
I did actually hear a pretty clear difference listening on headphones. I assumed it was a placebo effect, but listening over it again I'm confident that there's some difference there. Your voice sounded fuller with the plugin bypassed, and when the plugin was active, it felt like there was a subtle kind of buzz going on.
You can still achieve a perfectly recombining crossover with a single IIR by taking the difference from the lowpass (or the highpass, it works both ways). It's only when you use two separate filters that phase shift becomes a problem… Then there are Linkwitz-Riley filters, which are essentially two filters stacked up, like what Dan did in the last video: the phase shift becomes a full 360 degrees, so the low and high bands stay in phase with each other and sum flat (allpass rather than truly linear phase), although Linkwitz-Riley filters fall apart when you have too many of them close together and the phase shifts interact. Filters really are quite wonderful - all this stuff is just delays, adds/subtracts and multiplies, and arranging them in different ways gives us so many interesting results! What filters/processes haven't been discovered yet?
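A quick scipy check of that Linkwitz-Riley point, assuming an LR4 pair built from squared 2nd-order Butterworths at 1 kHz (a common construction, not necessarily the one used in the video): the low and high outputs stay in phase with each other and their sum has a flat magnitude, even though neither band is linear phase.

```python
# Sketch: LR4 crossover at 1 kHz (each band = a 2nd-order Butterworth cascaded twice).
import numpy as np
from scipy.signal import butter, freqz

fs, fc = 48000, 1000.0
bl, al = butter(2, fc, btype="low", fs=fs)
bh, ah = butter(2, fc, btype="high", fs=fs)

w = np.linspace(20, 20000, 1024)
_, Hl = freqz(bl, al, worN=w, fs=fs)
_, Hh = freqz(bh, ah, worN=w, fs=fs)
Hl2, Hh2 = Hl * Hl, Hh * Hh                       # cascade each filter twice -> LR4

print(np.max(np.abs(np.abs(Hl2 + Hh2) - 1)))      # ~1e-15: summed magnitude is flat
print(np.max(np.abs(np.angle(Hl2 / Hh2))))        # ~0 rad: the two bands are in phase
```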
I can't hear any difference with the phase shift on your voice (headphones connected to my laptop here), but I can sometimes hear it in my studio with some tracks. It was once quite obvious when I tried it while mastering a toy piano recording, but I didn't do any blind test, so I could be wrong here. My monitoring setup has some phase correction filters in my Reaper monitoring section, so I can enable and disable the phase correction quite easily. KH310 speakers.
Edit: OK, you asked what differences we hear - in that toy piano project, the differences were in the transient response.
With different phase responses in L vs R (as with Slick EQ's dual-mono Phi button), the difference becomes much more obvious, as our ears are tuned to pick up timing differences between them.
Finally, my guess about that drastic ringing in the IR of the minimum phase filter: maybe it's because that sudden impulse in fact has lots of content around 22 kHz, which is exactly where the filter is doing its work, so that's the frequency with the most ringing - a worst-case scenario at a frequency that typically carries much less energy and is pretty much inaudible. Could that be it?
I could hear the difference on the minimum phase crossover. It felt like there was a shift in the distance and a slight boost in high frequencies.
stunning episode, very cool
Being able to hear phase differences is, if I recall correctly, a real thing. I believe I once heard it's one of the ways we determine which direction audio comes from (both from a wave hitting one ear first and then the other, and from various reflections in different parts of a room).
Yup, but only in the middle-ish frequencies. Too low and we can't perceive the difference, and too high and the difference becomes too small. Thus higher frequencies rely upon loudness for perception of direction, while direction becomes difficult to discern with lower frequencies. Where these changes occur in the frequency spectrum depends upon the size of your head.
@@thomdabomb5067 interesting... didn't know that yet thanks
I love your videos! Any chance to see your templates/customizations/workflow on Reaper explored in future video?
Second comment - on why linear phase results in less ringing than minimum phase.
A Dirac delta is all frequencies at once, each running from minus infinity to plus infinity in time; luckily they cancel each other out everywhere except at one sample. Linear phase ringing is symmetric but smaller, because it results only from those very high frequencies that were present in the original delta but were removed by the downsampling anti-aliasing filter.
With a minimum phase filter instead, we shift all the frequencies in relation to each other - including the ones below the new Nyquist - and they no longer cancel out so nicely at the sample positions.
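A small numpy sketch of this argument, using an arbitrary smooth phase curve as a stand-in for a minimum-phase filter's phase response: with flat magnitude and zero phase the components collapse to a single-sample impulse, and bending the phases spreads that energy across neighbouring samples.

```python
# Sketch: flat magnitude + zero phase = a single-sample impulse; bend the phases
# (a made-up smooth curve standing in for a filter's phase response) and the
# cancellation away from that one sample is lost, i.e. "ringing" appears.
import numpy as np

N = 256
mag = np.ones(N // 2 + 1)                        # flat magnitude across all rfft bins

impulse = np.fft.irfft(mag, n=N)                 # zero phase everywhere -> spike at n = 0
phi = 0.004 * np.arange(N // 2 + 1) ** 1.5       # arbitrary frequency-dependent phase shift
smeared = np.fft.irfft(mag * np.exp(-1j * phi), n=N)

print(np.round(impulse[:6], 3))                  # [1. 0. 0. 0. 0. 0.]
print(np.round(smeared[:6], 3))                  # energy spread over neighbouring samples
```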
Confirm - I heard a change each time. 06:45 to 07:48
I already know that binaural extrapolation works for me as I use it for binaural sleep tones.
Hey Dan -- you might enjoy my Psychoacoustics series; part 4 especially covers some interesting effects having to do with strange ways phase can be perceived by humans. Thanks for your video!
Thanks for commenting, now I have your whole channel to check out too! Haha.
Headphone listener here
To me it sounds like a bass boost when the eq is engaged. That was when it was just your voice.
With the mono and stereo mixes it sounded like the audio was canting in the direction of the channel with the eq active.
Now with a proper thumbnail I will finally watch the video!👌
While the difference is subtle, I can definitely hear it in the A/B test with your voice in minimum phase mode. With it on it sounds almost like the lower frequencies around 500 Hz are being distorted slightly, causing some harmonics to be created. It honestly reminds me a little bit of a low-quality audio codec, but again much more subtle. While I can't tell you exactly why this is, that's just what I hear.
When you were toggling on and off, I would say maybe around 1-4k/6k it sounded a bit distorted, almost fuzzy, when toggled on - that's where I found it most noticeable (minimum phase).
Wearing headphones, that's just a widening of the stereo image on the low end. Many older "stereoizer" plugins do something similar by delaying the left (or right) channel by a very small amount, creating a difference between the left and right channels and thus a stereo image (that doesn't mix to mono very well). I assume the eq is essentially doing the same thing - a slight phase shift on the low end causing a difference between the left and right channels and creating some "width" on the low end.
I don't have Plugin Doctor, but I'd wager that a time delay plugin would show up on a phase-vs-frequency graph as a straight line sloping down with a slope proportional to the delay (it's only on a group-delay graph that it becomes a flat, vertically offset line).
A better and more mono compatible version of that trick is to use mid/side encoding.
Starting with a mono sound which would have signal on the mid channel and silence on the side channel: Rather than delaying the left side or right side, you create a side channel out of nothing by using a delayed version of the mid channel. This causes very slight comb filtering on either side - positive on one and negative on the other. The frequencies that get cancelled on one side get reinforced on the other and vice versa. The best bit is that it's completely mono compatible because the differences cancel out and leave only the untouched mid channel.
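A tiny numpy sketch of the mid/side trick described above (illustrative delay and gain values only): the side channel is a delayed copy of the mid, each decoded channel gets opposite comb filtering, and the mono sum is, up to rounding, exactly the original mid.

```python
# Sketch of the trick above: side = delayed copy of mid, decoded to L/R.
# Each channel gets (opposite) comb filtering, but the mono sum is the untouched mid.
import numpy as np

rng = np.random.default_rng(1)
mid = rng.standard_normal(48000)                  # stand-in mono source

delay = 480                                       # 10 ms at 48 kHz, arbitrary
side = np.zeros_like(mid)
side[delay:] = 0.5 * mid[:-delay]                 # side channel built from delayed mid

L = mid + side                                    # simple M/S decode (ignoring the usual
R = mid - side                                    # 1/2 or 1/sqrt(2) scaling for clarity)

mono = (L + R) / 2
print(np.max(np.abs(mono - mid)))                 # ~1e-16: the differences cancel in mono
```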
I think I can hear the first static phase shift on headphones, but it's veery subtle. To me it sounds like a slight loss of presence in the low mids, without affecting the highs or the lows at all. I had to pause the video and come back to it a couple of times to be sure I experienced a difference, but it could still be placebo.
I also have very slight damage to my hearing (Missing a few dBs of mids on my right ear), so I experience all kinds of strange things I can't really explain or put my finger on that other engineers can't hear at all.
@Dan Worrall:
Great video!
I can hear a very small difference in your voice when bypassing the filter.
There's a tiny kind of "boxiness" to the sound when the crossover is enabled.
The headphone test is quite obvious: I hear it as a shift in the stereo image but also in frequency response. Like with the pink noise, one side gets louder in a certain frequency range in one ear.
Yo on the iPhone, that experiment you did at 5:00 does make a difference. Less low-end when toggled on.
5:00 With the crossover it sounds subtly "phasey", and strangely your voice moves up (vertically) in space. With a kick drum, phase shifts are much easier to hear.
You can learn how to memorize pitches, aka perfect pitch... there are only 12 tones. We use 26 letters every day 🤯 Great video.
So did changing the thumbnail on your video work? I watched again but wondering if there was an uptick in the views :)
Great vid as always. BTW, I have a question triggered by the topic of anti-aliasing filters. Why are there different outcomes when resampling? Shouldn't it be mathematically identical, with just changed/moved sample points, if there is only ONE shape the reconstructed sinusoid can take? This is one of the last things within sampling theory for beginners that I can't grasp ;)
Oddly enough Dan, I heard an example yesterday where I *could* hear the difference of absolute phase, having said many times that I wasn't sure if I can !
And, I *think* I can hear it on your voice in this example, too. If I'm not fooling myself, it sounds less warm and full, especially near 100 Hz. I'm hearing less "boom" in your voice, just as I heard less yesterday in the bass synth and kick drum of the example I was listening to, although in that case I was flipping the phase of the entire mix. I scored 4/5 blind yesterday before I got bored; feel free to send me the files if you'd like and I'll try the same test with this example too 😛
(BTW I can very clearly hear the single-channel test in stereo, on both speakers and headphones, even just on your voice. I hear it as a shift in the image, either left or right depending which channel is active)
5:04/20:28 The change in timbre in your voice is a side effect of the filtering. A second, obvious side effect is the lack of lows in your voice when the filtering is applied; the result is that your voice sounds more nasal. Given that these are minimum phase high pass filters, they tend to smear the signal, and what happens as a result is that the image of the sound shifts towards the upper mid-range - hence the change in timbre, or signal degradation.
Thanks a lot Dan. Best !
I'm using headphones, and each time you activated the compressor on one side, I heard the opposite side as louder and "behind my head".
With the minimum phase low-shelf filter engaged, it sounds like the bass frequencies are coming from behind me, slightly left when the EQ is on the right channel, and behind me, slightly right when the EQ is on the left channel. That's the only difference I hear.
If you (through the crossover) introduce a 180 degree phase shift for a certain frequency on one of the two binaural channels, that basically means you swap the mid and the side signal at that frequency. Of course that is extremely audible. But is it fair to say that you're hearing the phase shift, though?? I mean, basically you're hearing differences in the panning of all your instruments at that frequency. But that's a result of the phase shift and not the phase shift itself… right?
I'm back for a 2nd view as I'm so engaged with the new thumbnail 😄
Yeah this thumbnail is now absolutely genius
ReaXcomp is not linear phase, but it doesn't cause phase shift when flat. Is it dynamic phase?
Haha. No. It's quirky! I've been meaning to cover that one, maybe now is the time..
Actually I suppose strictly speaking it is dynamic phase. But not in the same way as the FabFilter plug.
I always suspected that it was subtracting one band from the full signal to obtain the second, if that makes any sense, because there's no latency and it nulls with the dry signal.
@@StevieBoyesmusic I came to the same conclusion. The result is a little peculiar, but still useful sometimes, and it keeps the CPU use low!
Can you do a video about the different linear phase settings?