You can never truly be a pro engineer without understanding these concepts. This is the clearest explanation of phase shifting I have ever seen. Thank you!
Thanks for watching! Knowing this stuff does come in handy more often than you'd think!
SageAudio.com
This is such a great channel for everyone who takes mixing and mastering seriously, whether you're a pro or an enthusiastic home recordist.
Thank you for watching Nikolaus! Great to hear that you're enjoying the channel!
SageAudio.com
The more I learn about audio engineering the more I realize how much money I've wasted on plugins that I don't really understand. Truly, all audio comes down to frequency and amplitude within the domain of time. You could potentially do just about anything you want with just a stock EQ and versatile analyzer if you understand how these basic elements work together to create FX we call chorus, delay, phase shift, pitch shift, etc. Thanks for making us all better audio enthusiasts and engineers by taking us back to the basics.
This is a great statement, man..... I just purchased a ton of plugins and I know what I've bought, but I want to have a better understanding of what I bought...
You and me both! Knowledge truly is power. Grateful for channels like this! :)
I'd love to see stock plugins like RBass
Exactly. That's why I've stopped spending money on that and hired a pro to teach me everything.
And : "Mixing is about managing 2 basic things: gain and reverberation"
And more succinctly to your observation: "Sound analysis is all about resonance vs dampening in any environment,...i.e. pick your materials of choice, and put them somewhere."
I’m blessed to have found this channel tbh. I know this channel is gonna be huge one day. People sleeping on these videos
Bro I swear🤷🏾♂️
Thank you for watching Decon 21! We appreciate your support a lot!
SageAudio.com
@Decon 21, Same as well. The subscribers of SageAudio are growing so fast nowadays.
Omg bro same here! Crazy amount of info yall are dropping!
Weeks/months ago, I posted a comment asking for a video explaining phase issues of EQ, and you replied saying you would make it in the future.
Thanks for keeping the promise 😍😍😍
I still didn't understand the direct reason why EQing causes phase shifts. Nevertheless, all these explanations about their audible results, solutions, etc. are really informative. THANK YOU!
Hey Dulmin! I remember you suggesting that a while back - it's a really interesting topic so thank you for bringing it up!
SageAudio.com
So should we EQ with linear phase on the master, or even on individual tracks and the mix bus?
As a producer, you can spend hours using DC offset, stereo imagers, flangers/phasers/choruses, regular LCR panning etc. to get your stereo image where you want it. You might even get it to work and sound good BOTH in mono and stereo. The thing with "fancy phase magic" is that it is often only noticed either by people who themselves make music, or by people with expensive sound systems/headphones. The common consumer of music usually listens through in-ears, non-flat shitty boomboxes that are sometimes just straight mono (btw!), or just through their laptop speakers or gaming headphones.
What I'm trying to tell you is that, because of the facts above, it's very risky to fuck with phase. The mix engineer or the mastering engineer is going to slam a phase corrector (like a linear phase EQ) on it and throw all that hard phasing work you did out the window, because it's going to get lost in transmission when people use sound systems that aren't capable of making sense of the signal.
And believe me: listening to a wide, magically phase-choreographed mix, with a perfect balance between all points of its stereo image, on a system that is sub-par or just cheap, or for that matter IN MONO - it can sound like half of the song got sliced out. This is what so many people struggle to understand!!! Don't mess with phase or dithering, because if you get a placement, some other engineer (if not the mix/mastering guy, then the recording engineer for the artist) is going to hear that phase being not how it should be and slam it linear... all good and safe... and boring... but that's a bulletproof way of ensuring your mixes don't get lost between different listening mediums.
Trying to cater to low quality gear and undiscerning listeners will often result in sacrificing quality for those listening on decent gear. I prefer not to compromise for people that don't care all that much what the mid range or the panning or the phase relationships sound like. I take an "it is what it is" approach to situations where the finer details of the mix don't matter. I would rather be creative than dull down my mix for the masses. That's not to say I don't test in mono or listen on every device I can, but I take the low-end listening experiences with a grain (or two) of salt. As long as the rhythm and the vocals are clear, and the music is at least somewhat discernible, I'm good with how it sounds on an iPhone or Bluetooth speaker.
Which phase EQ, and what do you mean by "slam" it?
It's important to understand the audio phenomena caused by the tools we use, but at the end of the day what matters is the artistic intention when choosing what to use. Passive equalizers are widely used in mastering precisely because of the resonance they cause. Great video - your content is gold.
Very true! It really comes down to what sounds good, but it helps to know what's happening to the signal. Thanks for watching!
SageAudio.com
I like how Ozone has variable phase as well as the analogue EQ, or even the Pultec-ish module. It can be flat with linear phase, but sometimes flat is what's needed, or bumps/depth are needed... if the mix has that kind of thing going on. Like widening stereo, it can be too much or too little, so it's handy to just pass through each band, imhe.
Heck, I phase-aligned guitar mics before and unaligned was better. Rules are for knowing, and you have to know them to break them [successfully], don't you?
One of the best channels right now on YT. Please keep it coming. Such short, basic explanations of audio topics are very helpful, as we (beginners) hear all those terms but never understand them completely.
That's awesome! Great to hear that you're enjoying the channel and the info in our videos!
SageAudio.com
10 years ago I mastered some tracks with Sage Audio with premium results... I don't miss a video.
I use NO EQ except in very special situations, like enhancing a recording made many decades ago, pressed to vinyl, etc.
I especially enjoyed the explanation of the Haas effect.
Bill P.
Thanks for watching - that's a cool mixing method, I'll try that out
@@sageaudio Give it a try, record correctly in the first place, no need for EQ !
Bill P.
Can't express how much I love this channel, man! Your videos are the best videos with the best explanations I have found in all my 4 years of producing. I'm really thankful, Sage - you're awesome 😎
So good to hear that you're enjoying the channel and that it's been helpful!
SageAudio.com
As an audiophile, I never understood the mechanics behind an EQ and phase shift. This is eye-opening.
That's awesome, thanks for watching!
Absolutely the best description of both EQ and phase.
Awesome! Thank you for watching!
People tend to make a big deal out of phase. In reality, using parallel EQ will only give you a slightly different curve than the graph is showing. High-, low- and band-pass filters can get wonky. But you're not going to get some weird phaser sound. If you don't hear a problem, there is no problem. EQ drum spot mics, use parallel EQ. IT'S OK.
Let's say we're using a parametric EQ like the Pro-Q 3 on the master, and we are using it for additive EQ (low and high shelves etc.) - should we be using it in linear phase mode or just regular mode?
First and probably the only channel I'm hitting the bell icon for !!
I'm such a mastering noob - I know what phase shifting is, but I had no idea that EQ plugins could have this effect due to how they operate. I have heard it before and wondered what was causing it. In the past I have had to dial back what I was doing and find another way or try something else. I did not realise this was the cause. Thanks.
Great channel
Hey Jack - great to hear that the video helped you understand how these plugins affect phase. Hopefully linear or natural phase can be a good alternative.
SageAudio.com
Correct me if I'm wrong, but I think zero phase is not FFT, but the opposite. FFT EQ is a type of linear phase equaliser; according to some articles I've read, FFT is what would make the "perfect low pass filter" possible, by simply removing all bands above a frequency, but FFT has problems unless you had tens of thousands of bands, which would add a lot of latency.
I usually use natural phase mode on all individual tracks and linear phase on the mix bus. Do you agree with this practice? I feel like it would be impractical to put linear phase on every track.
These guys make such amazing stuff to learn from. Thank you so much.
Thank you for watching Rockton! Glad you like the video!
SageAudio.com
Everything you do is fantastic, thank you again.
Shango from Cameroon.
Thank you Shango!
SageAudio.com
The Haas effect can occur in mono if different frequency ranges are delayed. The effect of non-linear EQ can be that the treble you boosted gets "pushed back", and this is the most noticeable effect to me. Sometimes it can be subjectively nice in a way, adding a 3D-ness, but it can quickly make a master start sounding lost in space, lacking impact - actually losing the thing you tried to bring up.
*The Haas effect merely describes repetitions of a sound (delays of roughly 30 ms or less) being indistinguishable as separate sounds, and instead being heard as phase changes. Phase changes allow stereo localization, though, so its common example is Haas 'panning'.
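As a rough illustration of the Haas "panning" described in the comment above, here is a minimal numpy sketch - assuming a 44.1 kHz sample rate, a 15 ms delay inside the roughly 1-30 ms Haas window, and noise as a stand-in for a real mono source:

```python
import numpy as np

fs = 44100                                   # assumed sample rate
delay_ms = 15                                # inside the rough 1-30 ms Haas window
delay_samples = int(fs * delay_ms / 1000)

mono = np.random.randn(fs)                   # stand-in for any 1-second mono signal

left = mono
right = np.concatenate([np.zeros(delay_samples), mono[:-delay_samples]])

# (samples, 2): heard as one sound, pulled toward the earlier (left) channel
stereo = np.stack([left, right], axis=1)
```

Played back, this doesn't register as an echo - the image simply leans toward the undelayed channel.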
I'm really happy I found this channel. Amazing delivery.
Awesome! Thanks Ikan!
SageAudio.com
You guys are amazing and appreciated ❤️
Thank you so much for watching!
SageAudio.com
Linear phase EQ is good mainly on sub-group stereo bus and master bus outs. If you're using linear phase EQ plugins, use the same plugin manufacturer, one on each instance, and don't mess with different low or high "quality" settings - keep them all the same value. I wouldn't bother using linear phase on individual tracks (my opinion, but it's through trial and error). Use linear EQ over the buses? Yes, but on all of them, so there's no delay/latency between non-linear and linear processing - and yes, they are all leading to the master bus. If you want more linear phase EQ on the master, you will be "linear phasing" already linear-EQ'd material, but it doesn't matter, because you have used linear phase on each sub-group leading to the master. Meaning? Actual phase issues won't be a problem and any bell artefacts will not be so apparent (I seriously doubt you will need crazy settings at this stage), as you are working on pre-mixed buses. Keep it all stereo, don't play with different "quality" values per instance, and I would not use it on individual tracks... let the art work before correction... unless the art is correction :-) Peace.
Information I've been searching for for years - thanks!
Hey Jonas! Thanks for watching!
SageAudio.com
I think this is especially important for checking things out when we have issues and figuring out where they're coming from.
Thank you for the class!
I gotta save this one.
Please do! Thanks for watching and commenting!
SageAudio.com
all your videos are great man
Awesome! Thanks for watching!
SageAudio.com
What can be done to keep constant track of phase, and which habits help make sure the output is as clean as possible?
Brilliant! I've been looking for exactly that for so long! Finally - something about fase changes caused by EQ. Thank you guys!!
Why not just use “face”
@@junting605 Hahaha, I had😄
Thank you for watching Janek! Great to hear you enjoyed the video!
SageAudio.com
Is that an actual Haas effect or are we just lowering the amplitude of one track so that the other hard panned track is louder?
It is a challenging one, since sometimes LP can smear the transients, so choosing LP or MP is material dependent. Either way, your videos are super informative. I appreciate your effort and knowledge.
Very true! Sometimes I actually like how LP does this on harsh material. Thanks for watching!
SageAudio.com
@@sageaudio Is this a good rule of thumb - don't use LP on transient material?
What exactly does the natural phase setting do in Pro-Q 3?
Fantastic! Respect for your job! Thank you very much.
Thank you so much!
Wow... thank you so much for this! Easily the best video on YouTube explaining phase 🙏🏽
That's awesome! Glad you enjoyed it and thanks for watching!
SageAudio.com
Love the physics lesson... it was a reminder of the stuff I learnt and forgot in engineering... However, when it came to the part comparing the linear phase EQ vs the zero phase EQ for the low end, I couldn't hear any difference whatsoever... what are we supposed to be listening for?
Wooow!! This one is sooo valuable to me. Thanks a lot!
Great content as always. Keep it up!
Thank you for watching and leaving a comment!
SageAudio.com
So basically digital EQs don't ACTUALLY affect the RAW audio file, but rather act as an influence field on the raw audio. Because this 'influence' is calculated by computers and not solid state electricity, there are sample gate and timing limitations which cause mild inconsistencies in the calculation of this 'influence' field we call digital EQ. You add that on top of the innate phase inconsistencies that even an analog EQ would produce naturally, so you get an extra layer of phase distortion with digital EQ?
In analog EQs the inherent distortion is caused by shifting a complicated harmonic curve up or down in a very arbitrary but specific spot, which strains the amplitude curves of the original audio, causing distortions before and after the change in an effort to keep a mathematically coherent curve moving through time. Am I close? Thank you :)
So does it make sense then to just place a phase correction plugin like Auto-Align after the equalizer? Or, if the problem occurs to greater effect in low frequencies, use an HPF as the very first effect, then phase correction, and then all the other stuff?
You had 2 identical acoustic guitar tracks. One was processed with a zero latency EQ. The other was processed (or not processed)? If it wasn't processed then where did the latency come from that created the HAAS effect?
Thanks for watching! The latency comes from the processed track - changing the filter alters the phase. When combined with the unaffected or unprocessed track, the differences in phase cause the HAAS effect.
@@sageaudio That's what I thought, just wanted to be sure, thank you!
You guys literally rock 🤟 Thank you for another great video.
Thank you for watching Gary! Glad you liked it!
SageAudio.com
My issue was with an AKG C7 condenser microphone that has a non-switchable 12 dB HP filter at 150 Hz, which was producing an asymmetrical waveform when recording spoken word. On tone, from a sound source, the waveform was perfect. It turns out I had to apply a +90 degree phase correction with the graphical phase effect in Audition to get the waveform to come right (allpass2 track 95 1 with Audacity). As long as an HP filter is not applied, this is not an issue with a Shure Beta 58A and/or a Neumann KMS 105.
Which linear phase setting should be used for vocals? When switching from zero latency to Linear High in the mixing stage, the vocal itself sounds better to my ears. Just trying to understand the concept.
Brilliant as always.
Thanks Self-Law!
SageAudio.com
What is YOUR recommendation working with Fabfilter Pro Q3?
Linear or zero latency?
Is there any other kind of eq that you have in mind that doesn't work this way?
Thanks for any given answers.
Great video content 👌
Cheers.
If I'm equalizing higher frequencies, I stick to zero latency, but if low ones are involved, I like the low-latency linear phase option. Thanks for watching!
SageAudio.com
@@sageaudio thanks for your time and help.
Great TH-cam channel.
@@sageaudio But what if I high-pass and low-pass in the same EQ for a single sound? For example a piano or lead, which has mixed frequencies?
Thank you for the information - I finally understand phase.
That's awesome! Great to hear that the video was helpful!
SageAudio.com
Very impressive - I appreciate all your effort.
Thanks for watching!
SageAudio.com
I almost never use linear phase EQ. Maybe I'm not that advanced yet. I actually use a high-pass filter sometimes on my bass guitar to shift the phase if needed. It can actually make the bass more audible.
Thanks for watching Brian! It's interesting how changing the phase can actually make it sound better in some situations. We typically think of these things as 'good' and 'bad' but given the complex nature of every mix, sometimes changing the phase can help!
SageAudio.com
@@sageaudio Pultec EQ is a good example of taking advantage of phase, you have boost and cut in close frequency, that is magic box for being creative with bass
Fantastic explanation!
Thank you!
Tell me if I'm wrong, but would this be a good argument for analog EQ at the point of recording, rather than afterwards? I know the potential downsides of doing that, but it means you won't be making big moves with EQs and messing with the phase later on?
Thanks for this video. For years I've been searching for an understandable explanation of EQ phase, and while I'm still a bit dazed and confused, your information has helped move the needle. Is there a good use for the phase curve shown on some EQs? I mean, if a track needs a frequency cut, can a phase curve indicate bad moves vs. good ones, or is it just a pretty thing to look at?
Hey Dan! Glad to hear that the video is helpful! Although this probably isn't a straight-forward answer, it depends. Since the mix or master you're working on has a lot of various frequencies, changing all the time with the recorded performance, any change to the phase can either help or hinder your mix/master.
For example, if you used a zero latency EQ and it shifts the phase of the bass, maybe this causes some destructive interference with the kick drum that for one reason or another causes both to sound more distinct. That said, knowing if the phase has been changed doesn't let you know how that phase change will affect other frequencies, or if it will be bad or good. Your best bet is to use your ears!
SageAudio.com
@@sageaudio I love this explanation. This got a bit confusing for me but hearing you say trust your ears is very helpful. Working on mixing and mastering my EP now and I want to implement a lot more technical knowledge but some of it is overwhelming. The things that may be there that I'm not hearing is what concerns me more than anything. I'm still trying to understand how to find those issues.
Does the phase shift still exist in the EQ example with the HPF cutting the lows, like I would do with my vocals? It introduces the altered EQ curve, but what if I render that to audio - is it still there?
Thanks for watching! The changing of phase relations is occurring within the plugin. So for example, if you use a high-pass filter on the vocal, the area right above the cutoff would be amplified slightly. When you bounce the audio, that amplification is still occurring within the plugin, and as a result the rendered audio is equalized with that bump or amplification included.
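To make the "bump above the cutoff" concrete, here is a minimal scipy sketch - assuming a generic resonant second-order high-pass (RBJ cookbook biquad with Q above 0.707) stands in for the plugin's filter, and noise stands in for the vocal. The boost is part of the filter's own response, so it is baked into the bounced audio:

```python
import numpy as np
from scipy.signal import freqz, lfilter

fs, f0, Q = 44100, 100.0, 1.2        # assumed sample rate, cutoff and resonance (Q > 0.707)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)

# RBJ cookbook high-pass biquad coefficients
b = np.array([(1 + np.cos(w0)) / 2, -(1 + np.cos(w0)), (1 + np.cos(w0)) / 2])
a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])

w, h = freqz(b, a, worN=4096, fs=fs)
mag_db = 20 * np.log10(np.abs(h) + 1e-12)
print("peak gain above the cutoff: %.2f dB" % mag_db.max())   # small positive boost

vocal = np.random.randn(fs)          # stand-in for the vocal track
rendered = lfilter(b, a, vocal)      # "bouncing": the small boost is now part of this audio
```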
To be honest I didn't hear that much of a difference and I didn't fully understand all the concepts, but maybe I can remember the key words and have everything click one day - same as how I finally understood what correlation is and how mid-side EQ changes the sound. Now when I feel that the mix sounds muddy and certain frequency bands sound muffled, I won't rush to the multi-band EQ. I still need some visual cues for stuff like that, but with experience I will trust my ears to spot the problems.
Is the phase issue true only for signals of near frequencies?
Trying to understand this: so the phase curve is different from the EQ curve? To use a metaphor - the EQ curve is the wind that influences the surface of the water (phase). This wind is irregular and thus influences the curvature of the water in irregular ways. Why can't we make an EQ that directly modifies the raw phase curve? Or is that control only possible with synths, not sampled files?
Bro, please tell me, should we use linear phase when doing parallel compression on vocals? I always need to high-pass the parallel compressed vocal to keep my low end clean.
Yes you should! The compressor likely won't have linear phase, so don't worry about trying to make the compressor linear phase. If you use EQ on a parallel channel, you should set it to linear phase mode
Still trying to get my head around this..but thank you the explanation
Thanks for watching! Glad you liked the video!
SageAudio.com
Omg.. Loved it.. Thank you so much.. 🙏❤
OK, so how do we use a linear phase EQ and combat pre-ringing? And how do we use an EQ and minimise phase shifts?
Great video
Thanks for watching this one Wade!
SageAudio.com
Great channel, I really appreciate the content TKS guys!
Thanks for watching Wes! Glad you're enjoying the channel!
SageAudio.com
this video saved me
Awesome! Thanks for watching!
So is it only overall amplitude on the master channel that suffers when two tracks are out of phase?
The amplitude may be lowered, but you might also hear the effects of phase cancellation (notch filters, comb filters, phasing effects, etc.)
Hi, a bit confused. Should linear phase be used only to avoid phase cancellation problems, or also in a creative way? If it introduces latency, is it better to work with high latency in the DAW settings? And what about minimum phase?
It really depends on what sounds best to be honest - the differences will be small. Linear phase results in the same phase relationship as the pre-processed signal, but the changes caused by zero latency or minimum phase filters may not sound bad.
SageAudio.com
Nice. Why is phase shift emphasized at lower freqs? Because 0 Hz is a dead end?
Thanks for watching! It's mainly because notes are closer together in lower frequencies. So any shifts will affect a larger portion of the scale.
SageAudio.com
@@sageaudio Ah! Right. An Octave is a doubling of frequency, so, lower freqs are closer together. Makes sense.
So, is the Phase Shift merely a processing/latency delay? Or does the filtering implicitly shift the phase bc of ‘capacitance/inductance’ properties? (That’s analog thinking, right? Rather than FFT thinking?).
Thanks!
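One way to picture why the low end is more sensitive: the same phase rotation corresponds to a much longer time offset at low frequencies. A quick sketch of the arithmetic (delay = phase / (360 · f)), using a 90-degree shift purely as an example value:

```python
# The same 90-degree rotation, expressed as a time offset at different frequencies.
for f in (50, 500, 5000):                       # Hz
    delay_ms = 90 / (360 * f) * 1000
    print("%5d Hz: 90 deg ~ %.3f ms" % (f, delay_ms))
# 50 Hz -> 5.000 ms, 500 Hz -> 0.500 ms, 5000 Hz -> 0.050 ms
```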
Thanks for the great video. Do you suggest using linear phase EQ on hard-panned tracks (for example 2 guitar tracks playing the same thing panned left/right)? If I want to EQ one, would you suggest linear phase EQ to avoid creating the Haas effect? I mix a lot of hard-panned guitars playing the same thing - should I use linear phase EQ on them when, for example, applying an HPF? Thank you.
Hey Lucas thank you for watching! Yes, I'd recommend that you use linear phase if you're using the EQ on one channel and not the other!
SageAudio.com
@@sageaudio Thank you so much. So correct me if I'm wrong: when I only EQ one hard-panned guitar, it's better to use linear phase EQ to avoid phase shift. What if I want to EQ both hard-panned guitars differently? In my studio I use an analogue SSL EQ. If I EQ both with a regular analogue EQ, will it cause phase shift problems?
Thank you.
@@LucasMichalski Just for verification's sake: both guitar tracks were recorded individually and are 2 separate recording 'takes', just that they're both performing the same part, correct? If so, I'd take it even further than just EQ'ing one side differently from the other. I'd perhaps try using 2 totally different EQ plugins and different EQ adjustments for both sides, in addition to anything that would also provide some tonal coloration to yield a stronger contrast. The reason being, given this particular kind of scenario where 2 different recordings of the same content are hard panned, any/all variations between the tracks will result in DESIRABLE phase discrepancies THAT'LL WIDEN THE STEREO IMAGE!
For instance, take the last bit of the video where he demonstrates what the Haas effect sounds like. Now I don't know about you, but the overall stereo image seemed pretty weak or tame prior to the EQ change to one side! But because he messed with the phase relationship between the L and R tracks, a far more robust sense of stereo imaging - albeit not the most flattering or pleasant stereo effect (Haas) - was established as a result!
Also, you don't have only 22050 "frequencies" - there are in-between frequencies too, like 100.5 Hz.
So should I use linear phase mode just on the master bus?
Hey Mohammad! Thanks for watching! It depends on what sounds best. Sometimes unexpected phase changes can have a positive effect on the mix or master.
SageAudio.com
Most of the time linear phase EQ is a solution looking for a problem.
Sample rate conversion is its best use.
Also, most minimum-phase DSP doesn't use FFT as far as I know. E.g. a 6 dB lowpass filter is just a single sample delay with feedback.
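For anyone curious what "a single sample delay with feedback" looks like, here is a minimal sketch of a one-pole (6 dB/octave) lowpass in Python - the sample rate, cutoff, and noise input are arbitrary placeholders, and no FFT is involved anywhere:

```python
import numpy as np

def one_pole_lowpass(x, fs, cutoff_hz):
    """6 dB/oct lowpass: one sample of memory plus feedback - no FFT involved."""
    a = np.exp(-2 * np.pi * cutoff_hz / fs)     # feedback coefficient
    y = np.empty_like(x)
    prev = 0.0
    for n, sample in enumerate(x):
        prev = (1 - a) * sample + a * prev      # current input blended with the previous output
        y[n] = prev
    return y

noise = np.random.randn(44100)                  # placeholder input, 1 second at 44.1 kHz
smoothed = one_pole_lowpass(noise, fs=44100, cutoff_hz=200.0)
```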
Does this phase change affect vocals?
I mean, does it delay the vocals from the actual tempo?
It doesn't! The latency is compensated for by most DAWs.
Thanks for watching!
SageAudio.com
Hey, please help.
I think unless we do parallel processing we don't need to worry about phase at the mastering stage? Is this right?
Thanks for watching! When using equalizers you need to worry a little about phase. All processors will affect the phase in one way or another, but that doesn't mean the phase changes are a bad thing.
SageAudio.com
I mostly use the "natural phase" or "Zero latency" dynamic EQ of Pro-Q3.
Same!
No surprise here, that song is sick.
Haha thanks for watching Blake!
SageAudio.com
Does shelving have the same impact on phase as filtering?
Hey Bonzvy! It has a similar impact.
SageAudio.com
What's the intro song?
I love your video edit! What do you use for those liquid transitions? :)
Thanks for watching Marek! It's a transition filter from motionvfx
SageAudio.com
This is a really good topic. A question here…
Usually I record two inputs for my acoustic guitar: one is line in and the other uses a condenser mic. The purpose is to pan them hard left and right to get a wider stereo effect, and the two audio tracks contain different tones, which gives the acoustic guitar track more body and depth.
But the thing is the phase issue. My approach is to use the Melda auto-align plugin on both audio tracks. But I didn't apply the linear phase EQ or linear phase multiband compressor - I have the Waves one. Is it necessary to use it on my guitar tracks? And what if I want to apply a different EQ curve to one and the other?
To avoid phase issues in this situation, use the linear phase eq on both signals. Since they're delayed, you can use different curves without an issue.
SageAudio.com
@@sageaudio thanks for the suggestion and advice!
Every time I do a low cut, should I always use linear phase EQ to avoid phase problems?
Sometimes phase "problems" have a positive effect, depending on the tracks. Could be an idea to try both and let your ears decide what sounds best.
Hey Jawa! It's best to cycle through the different settings and try to find the best sounding one, like @nightdew said.
SageAudio.com
@@sageaudio Thank you!!
@@nightdew4934 am glad some say it. Don't like when people time align drums to overheads for example. It seems strange to think you can fix drum phase that way perhaps.
Thanks a lot - no wonder my mix sometimes gets killed at the end, especially my sub kick and 808s' low end.
Thanks for watching! Great to hear that this was helpful
How strong should the linear phase level be in mastering?
Thanks for watching! I keep it at a low level!
SageAudio.com
Thank you. ☺️🙏
Thanks for watching!
SageAudio.com
I believe most processors use a bit depth of 16 or higher, so your 4096 sample size example (also, you don't clearly communicate size vs. rate) is for a 12-bit system. Perhaps the reasons given describe why phase shift happens in digital signals... it doesn't explain why it also happens in analog systems. Most people don't understand FFT and the math to catch this, and your video, while sounding really good and raising some good points, doesn't fully clear up the topic of how phase shift is introduced, as many think it does.
Can you elaborate on why EQs like this one only use 4096 “samples”. And what in this case a “sample” is referring to?
Sure! In this case, the EQ is using 'Samples' to measure the signal at various points and then recreate the signal using that info. I'm not the best at explaining the math behind coding a digital EQ (it uses an algorithm called the 'Fast Fourier Transform').
But think of each sample as an estimation of the amplitude of a frequency. The more samples we can take, the better the approximation of the value. When an EQ takes 4096 samples, that means that it's measuring various parts of the full frequency range 4096 times, which gives the EQ a fairly comprehensive range of values with which it can approximate the signal's amplitude at these various frequencies. The more samples taken, the more accurate the approximation; however, the more processing power it takes.
Sorry, I know I'm not the best at explaining this (people who code or are more knowledgeable about math would be better), but maybe I can make a more detailed video on the topic in the future.
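As a small illustration of what a 4096-sample analysis block buys you (assuming a 44.1 kHz sample rate, and noting this is a generic FFT example rather than any particular plugin's internals): the FFT's analysis points are spaced fs/N apart, so a bigger block gives finer frequency resolution at the cost of more processing and latency.

```python
import numpy as np

fs, N = 44100, 4096
print("bin spacing: %.2f Hz" % (fs / N))        # roughly 10.8 Hz between analysis points

t = np.arange(N) / fs
block = np.sin(2 * np.pi * 440 * t)             # one analysis block of a 440 Hz tone
spectrum = np.fft.rfft(block)                   # N/2 + 1 frequency bins
freqs = np.fft.rfftfreq(N, 1 / fs)
print("strongest bin: %.1f Hz" % freqs[np.argmax(np.abs(spectrum))])   # the bin nearest 440 Hz
```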
Did engineers of the '80s and '90s know about these concepts, or are these relatively new concepts that engineers discovered due to the advancement of technology?
Thanks for watching HOD STUDIO! They knew and used the concepts to help design equipment and equalizers.
SageAudio.com
I just slap the Pro-Q 3 on and draw some shapes by ear, bruhh, I don't even think about those technicalities.
Thanks for watching Yahy! If it sounds good it sounds good!
SageAudio.com
Can you please make a vocal preset with only FabFilter plugins?
Sure! We'll make some presets in the future and provide download links.
SageAudio.com
MVP!!!!! @@sageaudio
Awesome. Appreciate what you are doing for us!
Hmm... 3 years late.
You may have explained it but I think I missed it. Why does EQing shift the phase?
As an EDM producer of 15 years now, I'd say that EQ phase shift is not a big deal at all. It matters when EQ'ing recorded live drums.
Thanks for watching!
SageAudio.com
Fantastic
Now I know why it sounds more bassy when I do a 20 Hz cut in FabFilter. This is a pain in the ass.
Yep! It's the amplification of frequencies right above the cutoff. Thanks for watching.
Gem 💎
Thanks for watching!
SageAudio.com
I actually thought the Haas effect sounded great lol
Thanks for watching Jack!
SageAudio.com
So basically don’t use zero latency?
But WHY do EQs cause phase shifts in the first place, when turning frequencies up or down is an amplitude thing?
Internally, the EQ is using changes in phase to create the desired frequency changes.
Additionally, the plugin is causing 'phase rotation' around the filters, which is another way of saying it shifts the orientation of a waveform's peaks and troughs. This video should help explain that concept better than I can in a comment: th-cam.com/video/0csZlyPC_lg/w-d-xo.html&ab_channel=SageAudio
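To make the "phase rotation around the filters" a bit more concrete, here is a minimal sketch using a generic minimum-phase +6 dB bell at 1 kHz (an RBJ cookbook biquad chosen as an assumed example, not any particular plugin). The rotation is largest on the skirts of the boosted band and falls back toward zero away from it:

```python
import numpy as np
from scipy.signal import freqz

fs, f0, Q, gain_db = 44100, 1000.0, 1.0, 6.0    # assumed settings for the example bell
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)

# RBJ cookbook peaking-EQ biquad (a common minimum-phase "bell")
b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]

w, h = freqz(b, a, worN=8192, fs=fs)
phase_deg = np.degrees(np.angle(h))
for f in (100, 500, 1000, 2000, 10000):         # rotation clusters around the boosted band
    i = np.argmin(np.abs(w - f))
    print("%6d Hz: %+6.1f deg" % (f, phase_deg[i]))
```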
@@sageaudio thank you!
I don't know if it's just me, but I've never heard an impressive digital EQ - I always feel some sort of loss in the sound's 3D space.
How do you get an in-your-face sound like professional artists' songs?
Thanks for watching Ay Dee! Combine low-level compression and limiting!
SageAudio.com
Never realised the math I learned in engineering would apply in my music endeavours ❄️
If I may humbly offer a correction: phase in and of itself doesn't have anything to do with timing. The demonstration of moving the pure tones forward and backward to get varying amounts of cancellation is perhaps misleading and not comparable to the phase shift introduced by EQing. It's true that time-aligning different elements can move them in and out of phase, but it would be better to specify that phase shift can occur without timing differences, purely by rotating the phase of the signal, as is the case with EQ. peace and love ~
How do you rotate phase without time adjustment? Isn't that exactly what it means, and isn't it the same thing? Correct me if I am wrong.
Like sin to cos: one peaks when the other is at zero, and that is 90°, because they start at different values.
I think I understand that this effect of changing an overtone could have a knock-on effect in a Fourier transform that could be described as being rotated when such frequencies are 'pulled out' of the wave shape.
This is correct. Most videos on phase are some mix of confusing signals and waves, phase and delay, and so on. Phase is one part of the frequency response, the other being magnitude. They represent a complex Phasor at each frequency, and come from a Fourier transform of the output divided by the input. So these videos should all be taken as very introductory to the actual fact.
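A small sketch of "rotating phase without a simple time shift", using a first-order all-pass filter as the example (an assumption chosen for illustration, not anything from the video): its magnitude is exactly 1 at every frequency, so nothing gets louder or quieter and the signal is not shifted as a whole, yet each frequency receives a different phase rotation - which is the sense in which minimum-phase EQs "rotate" phase.

```python
import numpy as np
from scipy.signal import freqz

c = 0.5                         # all-pass coefficient, |c| < 1 (arbitrary for the example)
b, a = [c, 1.0], [1.0, c]       # first-order all-pass: H(z) = (c + z^-1) / (1 + c*z^-1)

w, h = freqz(b, a, worN=2048, fs=44100)
print("magnitude min/max: %.6f / %.6f" % (np.abs(h).min(), np.abs(h).max()))   # both ~1.0
print("phase range: %.1f to %.1f deg" % (np.degrees(np.angle(h)).min(),
                                         np.degrees(np.angle(h)).max()))       # ~ -180 to 0
```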