Look up "Pre-ringing EQ artifacts" or similar and you'll find more related information. He also gives a very good tip about dealing with transients. I think I'll try to duplicate his set-ups in my DAW (cubase pro 10) to understand it better.
The red line meter represents the phase shift. From what I understand, lower phase shift directs the direction of frequency towards the negative and the hence the higher phase shift does the opposite. I am referring to the direction of the frequency, when im talking about those positive and negative directions. + means the phase shifts towards the higher frequencies and - means towards the lower frequencies. In this example the phase shift line forms like a leter S pattern, this means the shifts from the left starts from slight negative, further negative and then back towards 0 and further positive. The phase shift is like pulling those specific frequencies on to specific directions depending on the shift pattern. When the entire phase shift meter/graph results into a straight line it means there is no phase shift and everything is on 0. I have not watched the entire video so please excuse me if in case what I am saying doesn't makes sense to what you have watched. I too am new to this phase alignment stuff.
Again, spoken like a true audio ENGINEER... I wish more people would care about the science and function of things as much as they care about what they "feel" or "believe"....
As a producer and DJ, people highly overvalue my work compared to audio engineers, mastering engineers, mixers etc. I just do what sounds good and even though I'm interested in theory like this, it isnt necessary. To be an engineer you need so much knowledge that it abstracts yet you also need ears, proper hearing especially for mastering! A good friend of mine is an audio engineer and I have so much respect for what he does yet when I'm DJing, I'm the one with the audience so I always try to pass it on to him and express my respect for what he does
Well, to be fair....quite a bit of 'good' music has come from violently opposing science and function in favor of 'feel' and 'belief.' Punk rock pretty much exclusively came from that. It's important to remember that we live in a time where pristine clear recordings are the standard, but there was a time when even having a clear recording was a 'new vibe' in and of itself. My point is, it's great to get to the heart of the science behind music, but it's much more important to remember that our response to music is largely emotional. If pristine clear well-engineered and performed songs were subjectively desirable, there would be more than just philistines listening to the radio these days. Literally everything on the radio is engineered extremely well scientifically speaking, and it's all absolute trash. Food for thought. The day when 'science' takes precedent over 'belief' in music, is the day that music dies. If you can't vibe to it, you might as well just listen to scales. I'm not religious, this is just one instance where science loses every time. Art picks up where science stops. It's kind of like grammar vs. semantics. The semantic meaning in music is what moves you, and yes the semantic meaning can come from just phonemes or syntactical play, but personal semantic meaning is not scientifically quantifiable. Nor should it be, or why are we even here.
@@eugenefullstack7613 It is about understanding what is happening behind the GUI, not about being scientifically accurate all the time. It is about the nonsensical explanations on how a plugin sounds to someone who reviews it versus the understanding what is actually there. When I hear someone is talking about '3D' sound or some other similarly magical property I open the Plugin Doctor. People are too religious about their gear. With that being said, it is nice to know the science behind your gear if it does not drive your OCD crazy, then you can relax and just use the tools without thinking about the numbers too much.
@@eugenefullstack7613 Science literally explains why "feel", "belief" and "non-pristine" recordings sound good to us. Punk never rejected science as it is impossible to do so in any art form and literally all records require engineers and I've yet to come across a punk band, or any band rejecting live sound engineers and all expect to sound "good" - Whatever their interpretation of that is. Science doesn't say that records must be pristine or perfect, it merely attempts to explain something achieves a certain result, ideally repeatedly in the world of music. Punk was never stupid and its sad so many "fans" cling to this idea it all happened without any knowledge whatsoever about music or audio. That's an insult to all the great art that's come from it.
Dan you are mad underrated, one of the only engineers out there doing real content explaining real fundamentals from a very technical yet understandable perspective. In TH-cam, there are a lot of content creators trying to teach, but no many of them owns their shit the way you do. Thank you very much for all the knowledge you help to spread, you are awesome!
@@anhumblemessengerofthelawo3858 I meant underrated in terms of TH-cam numbers: Andrew Huang having 2.5 million subs against Dan's humble 100k, for instance. One can measure the amount of people out of phase in the audio community.
I'm interested to hear more about how EQ effects phase shift, as well as MB compression. Please do more videos on this subject. And thanks for sharing these very useful insights.
But, then, you've got the ringing to contend with. That's why multiband compression is not recommended much unless you're mastering and there's simply no other tool you can use.
A great service. Used to be we learned the basics on this kind of stuff at a desk, then had it shown to us by the head engineer we studied under...long before we ever touched a tape machine. The kids today have ZERO background, so your stuff is PRICELESS...and us old guys get something too, as you go seriously in depth...intelligently.
With so many incorrect videos on the topic of phase and polarity, this video is the way to go for anyone who is not interested in understanding phase via complex numbers (although that is where phase comes from). It shows that when you focus on a single frequency, the phase can be the same across three operations, while the underlying operations are different. So you can get a phase shift of 180 degrees from 1) a polarity flip (180 degrees at ALL frequencies), 2) A time delay (180 degrees at a few discrete frequencies only), and 3) an allpass filter (180 degrees at some frequency). But they do differ in general, as transient behaviors are different, and their group delays are not the same, although their phase delays at that particular frequency are the same, since the phases are the same. For group delay, the operations need to be viewed across the entire frequency range, and so zooming in on a single frequency only tells us part of the picture.
3:23 “this is why mixing delayed signals with the original results in comb filtering.” Very well explained. In my head I always imagined different slopes partially cancelling each other out but your phrasing is simplified while being more comprehensible. Thank you for explaining that. Now why is a short delay the basis for karplus-strong? What is it about a short delay that is physically similar to a vibrating string?
Short delay *plus feedback* so that it resonates at a specific frequency when excited by an impulse. I think KS generally also includes a low pass filter in the feedback path to emulate damping and make higher frequencies decay faster. I can't give you the mathematical proof, but you can kind of understand it intuitively if you think of it in those terms.
Dan Worrall Always liked when you did the advanced synth videos. What you’re saying totally makes sense. The excitation source acting as a plectrum makes sense then too I suppose. Since a short delay can’t be used as a sound source on it’s own no matter how high you crank the feedback. Once it gets a sound source though, like a quick burst of noise, it’s off to the races! MSoundFactory lets you analyze a sample and effectively make a kind of custom comb filter shape for physically modeling pretty much any resonant body. Truly advanced stuff. Still a little confused about how EQ’s or filters delay frequency specific areas. Going to rewatch your video and hope for a eureka moment :)
the unsung hero (sic!) of the video is, I believe , the music track in the background! Am I the only one to notice ? It grooves a lot ! Needless to say that the research you have done and explain to us is unique in all of your videos! Simply essentials and go to's at any time
Can't wrap my head around to how to mix/blend real + sampled kick tracks when EQing 2 tracks independently before summing to a kick bus (and then add more processing) I made sure both tracks were visually time and phase aligned. Then I was forced to downmix the two tracks to save some CPU, and started going crazy. In the resulting waveform, both kicks had moved slightly in time, creating a strange and longer waveform. I had made sure that the recorded kick was time/phase aligned with the overheads (did the same with the rest of the drum tracks. In fact, I've become obsessed with this lately), and now everything was all over the place because of individual eq processing. I am 100% sure it is not a latency compensation issue. When mixing down/freezing tracks, my daw compensates for plugin latency automatically. So I decided to mixdown each track individually to see what was going on. Zero phase eq processing created a 3x longer kick on each track (because of different frequency phase missalignment, I guess), and linear phase created lots of pre-ringing and missing transients. The less destructive setting was linear phase in "minimum" setting, but still had issues. All of this considering I was using a sample kick sound that was nothing like the recorded one, except for adjusting the pitch to match the fundamental note. I got rid of HPFs and LSFs I was using to polish low end, and things started to improve a little. Then applied those filters back in the kick bus afterwards, and things kind of started working again. But now I'm not confident anymore about the kick staying phase aligned with the rest of the drum tracks (specially OHs) unless I bounce the bus output to audio and compare the timing to the overheads. And that's a bummer. If I've understood correctly how this works, despite of visually aligning the recorded audio tracks and its transients, heavy eq processing can missalign back the timing of frequencies separately, and the only way to make sure everything's ok, you have to use your ear, or bounce the individual/bus tracks to audio after processing to re-check the alignment before summing in a drum bus. My final thoughts: the best you record things and the later you process drum tracks after summing, the better. There's only so much correction and tweaking you can do before things start to get all oven the place and out of control. I really struggle to get things right when I receive badly recorded tracks for working on a mix. I feel like I spend 90% fixing the impossible before I can even enjoy the process.
You will get a better intuitive understanding of what is happening to the signal in time domain if you look at group delay graph instead of phase. In the typical loudspeaker crossover design software, group delay is one of the core graphs one can look at. Basically the delay is local to the range of frequencies affected by filters and not the whole signal. It's like stretching a rubber band locally between two anchor points. Talking about analog style minimum phase filters here, not phase linear FIR processing.
Dan, can you please make a "nuts and bolts" video, the way only you can, on exactly how equalizers work and how they cause phase shift and resonance? I'm sure I'm not alone in my desire to understand on a deeper level how EQ works so that I can not only avoid the unintended consequences of equalization, but maybe even use it to my advantage the way that you seem to be able to do, and I just can't seem to find that much information on the subject. My basic understanding is that what is actually going on is that it is the mixing of delayed (phase shifted) signal with the original that causes changes in frequency response, but I don't understand how, from that basic principal, all the different filter shapes are possible, how a filter's bandwidth can be manipulated, or how resonance fits into the equation. I'm just now becoming acquainted with the idea that equalizers are tools to affect the envelope of a frequency as much as its volume, and I want to understand more. Thank you.
I agree with you actually more or less. But I feel like to make this argument you should really have tried to null against a slightly different filter: Mix a band-stop filtered version with a delayed and band-passed filtered version. This is what people are arguing the filter essentially does, so do it. This would give a clearer picture of what is happening. As it stands the primary reason your example doesn't null is that a patched delay phase shifts EVERY frequency, even the infinite ones cause by the jump discontinuity, whereas the all pass filter you showed only phase shifts a finite frequency window (shown in the graph by the size of the phase response bulge). Of course that isn't going to null.
Please do more videos like this on EQ phase shift and how EQs work. How do I interpret the phase shift by just looking at the EQ. Lots of questions still. Please teach me more.
I'm sorry, Dan, I have to disagree with you on this. The phase is described by the formula φ= ω៰t. Where the "t" is literally a delay. So phase ALWAYS means delay. And by the way, "invert phase" (so popular in DAW) does not make sense technically for this reason. If there is no delay, there is no phase. And in this case there is no delay. It is technically correct to call it "invert polarity". But from a mathematical point of view, there is no difference between A sin(ωt + 180°) = -A sin(ωt). Where the first is a phase reversal and the second is a polarity reversal. Because the signal is considered infinite in these calculations. It never starts and never ends. And what you say about the all-pass filter and the difference in sound - concerns the transition process and is not directly related to the phase at all. In fact, it's just a filter ringing, its response to an impulse. You even mark it yourself later, but somehow you don't relate it to the difference in the beginning. And although I'm sure you understand all this perfectly, the way you explained it can still mislead the audience.
@@EG_John if phase shift is delay, and a minimum phase EQ can shift phase in *both directions*, that means the EQ can see into the future and apply negative delay.
@@DanWorrall This is a misconception (in my opinion). Above I have already written that mathematically there will be no difference between A sin(ωt + 180°) = -A sin(ωt). This is also true for any other phase shift. Including the negative shift.
Dirac spikes are interesting to use for analysing filters, since they contain all frequencies (within the Nyquist limit, of course). I was amazed at how much ringing occurs in EQs, especially high-pass filters where it can seriously affect the sound of bass frequencies. Kicks and bass sounds with sharp impulses can be affected quite dramatically. Far too many people use sharp filter slopes in dance music nowadays, when all it does is cause ringing effects that aren't usually audible to most people without trained ears. Suppose it doesn't matter as much, but when you know what it sounds like, it really hurts to hear it.
Great educational video again Dan. One quick question, how do you manage to get the extra options in the right click render menu? I can't render a selected area of a track like in your menu.
Optioins menu -> Customise menus/toolbars -> select "track control panel context" at the top -> add the extra render options from the Actions list (with optional seperators as I did) -> Save
Very informative video Dan. It's also interesting (not mentioned here), different audio editors, although dealing with same material, process them all differently, because of their coded (programmed) structure... just like your dither video showed. Hence, basic ones will create more profound (and interesting) anomalies, to your ear... whereas, advanced ones, might automatically hide it.
2:50 isn't this only true if you're using very basic waveforms where each cycle is identical? if you move a kick drum ahead 5ms, you're not getting an inverted kick drum
Indeed, I said it's only true at 100Hz. A kick drum is composed of many individual sine wave partials at different frequencies, and 5ms of delay will shift the phase by different amounts for each.
thanks for this! phase shifting has always confused me a bit, especially because some effects have a continuous phase shift setting in degrees. That wouldn't make sense without that phase shift being specific to one frequency, isn't it? Otherwise it would only be possible to have 0 and 180 deg.
You can shift the phase of all frequencies by an arbitrary amount using FIR filters: the same kind used for linear phase EQ. Or you can shift the phase of a specific frequency by 180 degs using an IIR allpass filter. I know, its a bit confusing!
@@DanWorrall aah, I see now, thanks!. Then you have to selectively shift each frequency a different amount of samples, so that the rotation belongs to the phase... It sounds mathematically intense, but it makes all the sense now. I'm gonna try it out.
Hi Dan :) I remember seen your videos on phase issues when cutting with non linear phase EQs, which were of great help as we regularly ran into problems while parallel processing. 🙏🏻 But now this: today we put a correlation meter in front and behind an EQ8 and used a simple low cut - no parallel processing or anything. A plain and simple lowcut. And surprise, the meter showed that the correlation dropped significantly. Then instead of EQ8 we used ProQ3 in linear phase mode. And still: the correlationmeter would display a significant drop in correlation 🤷 We have no clue other than we found glitch in the matrix... Any idea?
Dan, can you please help me figure out how to monitor the total phase curve, when using a minimum phase eq in a parallel track? i noticed you used an older VST Plugin Analyzer in a previous clip, is that the best tool for this? i have Plugin doctor at the moment, but i cant figure out how to view the total.
And just like that, i found it xD lol.... nevermind! love your videos, they made me have the biggest AHA! moment about parallel processing and EQ =) Cheers!
Grab Metaplugin (also from ddmf) and load it into Plugin Doctor. Then set up your parallel chain inside Metaplugin, and you can analyse it just like a single plugin. I think the demo version will work if you don't need to save your setup.
@@DanWorrall Thanks Dan, I just noticed that you can load 2 plugin instances in plugin doctor these days and then sum the signals on the 1-2 button in the upper right corner. Thought that was pretty much it, but i will def look at metaplugin :)
I still couldn't understand the phase shift that happens from an EQ because by logic, let's say we do the same phase shift at 100 Hz but on full song track; if the 100 Hz is on 180° and is 100% null, then what comes after it at both sides should reduce the volume of the frequencies according to the slopes. Well, it doesn't sound like this, or maybe the phase shift graph does not correlate with the amplitude? Because by this logic, this should make a wide surgical EQ cut which should be very noticeable cancellation, can you explain this please?
Thanks for another great explanation, watched an older one you made on this topic too, but I still don't understand why eq causes phase shift? I can see that it does, but why?
@@DanWorrallthanks! since I had asked this question, I gained a clear understanding of what's happening. It's very cool how most effects are just delay
This may seem like a strange question but I have someone that wants me to recreate some old software for him and he claims that they panned these square waves from the left earphone to the right using phase shifting....is that possible? If so how does that work? I used a regular audio component to pan the sound from one ear to the other but according to him it doesn't give the same result...to me it sounds exactly the same....
Do EQ cuts sound worse because the part of the signal that's phase shifted is "lowered" relative to the rest of the signal during a cut, whereas in a boost the part of the signal that's phase shifted is higher/louder so you can hear the phase distortion more?
Hi Dan, I've been researching the topic quite a bit and am still not perfectly clear about the origins. Most threads end up explaining how phase shifted signals cancel each other, but don't get to the point why the phase shift initially happens. I've heard people saying, it's due to latency issues. But when I interpret the phase shift curves in the eq correctly, it seems more like certain bands around the point of application get shifted. Can you maybe explain how that physically happens?
There are various different causes of phase shift. Simplest is just a polarity inversion, which is 180 degs phase shift for all frequencies. Or there's delay: this will shift the phase by different amounts depending on the frequency, so for a given delay time 200Hz will get twice as much phase shift as 100Hz. Or there's phase shift from filters, which tends to happen where there's a gradient on the frequency response: for a bell EQ band the target frequency will remain at zero phase, but there will be positive and negative phase shift either side. That last one requires a mathematical explanation, which I'm no way qualified to provide. But when you represent the signal in terms of complex numbers, the imaginary part goes high or low whenever the real part is at zero: in other words all possible phases of the signal are present already, and just need to be extracted by the filter * * Any real DSP guru reading this is probably now either laughing or crying: if that's you feel free to correct me!
@@DanWorrall great dan, thx for your time! So I think I'm on the right track. I happen to know the math, but haven't found folks who know more technical details. So I was doubting my intuition. The details on this complex number thing (it also works without complex numbers, but oddly it's easier with them) is that the frequency spectrum is derived from a convolution of the amplitudes over time. There core fact is, that every mathematical curve can be split into a superposition of (co)sinewaves of different frequencies. e.g. Audiosignal = 3*sin(1Hz)+1*sin(2Hz)+42*sin(3Hz)+.... etc. (It is the same theorem that shows that a square wave only consists of odd harmonics and a saw is only made of even harmonics) The math implies that the frequency spectrum (1Hz = 1/second) is inversely proportional to the audio signal (second). Wich means a tiny/steep change in the frequency spectrum. Will 1/(something tiny)-ly affect the signal, therefore resulting in a great impact to the audible signal. Nonetheless, I find it hard to pin point and hear the difference and the broadening of the signal. Some renowned folks like robert babicz recommend a "-inf/octave" low cut at +/-30 Hz and claim having a tighter, more defined low end from this. Given the maths, this seems like the worst recommendation imaginable. So the entire matter is still kind of confusing... (If anyone else is interested in the maths: 3blue1brown has some nice visualizations on fourier analysis.)
I wish Crane Song would hire you to make tutorials about Phoenix II, or to compare it with other saturation plugins. The thing is too subtle for me to evaluate if it is worth it to open Pro Tools just to use Phoenix II, or if I should just stick to Saturn on Reaper.
great video as always dan ! you can pls answer me how to identify phase problems in mono with a correlation meter? I use span and more, and even if there are sounds that sound horrible together (phasing) and they are (mono) the span recognizes them as 1+ (good relations)
Pan them hard left and right, temporarily if you like. The correlation meter shows correlation between the left and right channels. A mono signal will have 100% correlation, no matter how phasey and weird it sounds.
@@DanWorrall hey dan , do you think its good idea to use phase alignment plugins like (evo in) to shift the phase for good position and use that as the last plugin in mixer chain (for fixing eq phase shift and bad phase relation) ?
@@racebase1470 if you've got phase cancellation issues then this is a good way to fix them. EQ phase shift is not itself an issue and doesn't need fixing.
Hi there thank you for the video. I got doubt with one of my productions I've just picked up a piano and listen it in SOLO without any processing it's a Full DRY signal and I noticed that in some octaves the phase polarity tend to goes to negative -1 and I don't hear any deleting informations in the stereo field, in fact when I play the full song my phase meter tells me that the song goes from 0 to positive +1 and it's ok, just in solo MODE when I hear just the melody part I got this doubt that I don't understand in the very long tail of the notes goes to negative, thanks if you will answer me :D
@@DanWorrall I checked in mono also for monocompatibility and the piano it's ok there is no information cancelled in the stereo field the track like I said goes from 0 and +1, but I'm curiuos why in solo some element like that do that kind of work on the phase it feels like illogic, and I was worried about the track. Thanks for answering.
Reverb tails and such like will often bounce into negative correlation. Normal and nothing to worry about: its part of what makes it sound spacious and wide.
@@DanWorrall Ok this is nice to hear I feel more calm now ahah :), the fact is because the acoustic piano from Kontakt an emulation of a real one there is no reverb on it so it's a full DRY signal, maybe could be the microphones of the recordings of the piano that do tha kind of thing on the phase? just for knowing thank you again for keep me updated.
I'm guessing the piano was recorded in a room? And a piano body just traps a body of air, like a room. So the tall of a piano note is not so different from a reverb tail.
I never promised one. LCR is LCR, there is no fix. The video about making a nice wide mix that also works in mono is on its way, but it won't be on this channel: I need to charge a fee to do that one properly ;)
I'm waiting for the hour-long video to outright explain what a digital filter and/or EQ really does. All those side notes ("that's what truly happens anyway" sort of stuff) only make us curious to study the engineering
Doesn't processing different frequencies take the same amount of time delay whether you're boosting or cutting, implying that an opposite boost and cut would still cancel each other out, seeing as the delay would be the same in both cases? Meaning your comment about seeing into the future makes no sense?
E.g. boosting 10dB at 60Hz and cutting 10dB at 60Hz both have the same amount of delay due to the processing time required at that frequency, and therefore the processed signals still cancel each other? You could check whether the resulting signal is delayed, despite the EQ cancelling itself, by comparing it to a completely unprocessed signal
I'm not entirely clear what you're suggesting. But the fact is a 12dB cut can undo the phase shift of a 12dB boost. Even with zero latency plugins. Therefore the phase shift cannot be delay.
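[Editor's note: a minimal sketch of that null, mine rather than Dan's, using the standard RBJ Audio EQ Cookbook peaking biquad, where a matching cut is the exact algebraic inverse of a boost. Assumes numpy and scipy.]

```python
import numpy as np
from scipy.signal import freqz

def rbj_peaking(f0, gain_db, q, fs):
    # RBJ Audio EQ Cookbook peaking filter coefficients
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
b1, a1 = rbj_peaking(60, +12, 1.0, fs)   # 12 dB boost at 60 Hz
b2, a2 = rbj_peaking(60, -12, 1.0, fs)   # 12 dB cut at 60 Hz
b = np.polymul(b1, b2)                   # cascade the two filters
a = np.polymul(a1, a2)
w, h = freqz(b, a, worN=2048, fs=fs)
print("max magnitude deviation (dB):", np.max(np.abs(20 * np.log10(np.abs(h)))))
print("max phase deviation (deg):  ", np.max(np.abs(np.degrees(np.angle(h)))))
# Both print ~0: the cut undoes the boost's gain AND phase shift,
# with zero latency, so the phase shift cannot be a plain delay.
```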
Jos Driessen.... Hell no, it isn't, lol. As funny as your comment might sound, it is also incorrect. Learning and understanding should be "hard-coded" and fun for everyone. Just like I keep telling my mom whenever she asks me question #1004 about the laptop I bought for her: play with it. If you don't, you won't learn it. Everybody has their own way of doing and using things. No description of anything, written by only one or even a few people, will explain the product fully.
So I came into the video guessing I'd learn something about phase shift. All I know now is there are multiple types of phase shift caused by multiple things and I know none of them...fk...what does phase ringing mean? etc etc etc
It's not simple to explain, but since there is a very hard edge in the waveform, it is more or less like an impulse. And if you want to express an impulse by adding up a series of frequencies, you'd need all of them (within the available bandwidth). Search for Fourier transforms and how you can express a square wave using them. And don't be discouraged if you need some time to understand them.
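[Editor's note: to make that concrete, here's a tiny illustrative numpy sketch that builds a square wave from its odd harmonics; the edge only gets sharp as more and more frequencies are added.]

```python
import numpy as np

# Square wave from its odd harmonics: sum of sin((2k-1)*w*t)/(2k-1).
# The more harmonics you keep, the sharper the edge becomes; a truly
# hard edge needs every frequency up to the available bandwidth.
t = np.linspace(0, 1, 1000, endpoint=False)
for n_harmonics in (1, 5, 50):
    k = np.arange(1, n_harmonics + 1)
    partial = (4 / np.pi) * np.sum(
        np.sin(2 * np.pi * (2 * k[:, None] - 1) * t) / (2 * k[:, None] - 1),
        axis=0)
    print(f"{n_harmonics:3d} harmonics: peak = {partial.max():.3f} "
          f"(ideal square wave peak = 1.0)")
```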
Think of the attack of a kick drum, which can be modeled by quickly pitch-shifting a sine wave from high to low; do that reversed and then forward, and take it to the limit of one digital sample.
The sudden burst of all available frequencies doesn’t make sense to me either and I don’t find the replies to your question helpful. The problem I have with that explanation is that if you do indeed have a burst of all frequencies then that means you don’t necessarily have to have an increase in volume. I think the keyword might be “burst”. By burst I think he means “you have an increase in volume across the board at all frequencies” perhaps... which should really just be stated as “an increase in volume (regardless of frequency)”. But even that phrasing might not be scientifically accurate.
The phase shift makes all the individual component sine waves add up differently, so the peak level changes. It doesn't necessarily get higher, but you only notice when it does.
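[Editor's note: a hedged illustration of that point, not from the video. An allpass filter leaves every frequency's magnitude untouched yet still changes the peak level of the summed waveform. Assumes numpy and scipy.]

```python
import numpy as np
from scipy.signal import lfilter, freqz

fs = 48000
t = np.arange(fs) / fs
# Several phase-aligned harmonics -> a spiky, high-crest-factor signal
x = sum(np.sin(2 * np.pi * 100 * k * t) / k for k in range(1, 8))

# First-order allpass: magnitude response is exactly 1 at all frequencies
c = 0.5
y = lfilter([c, 1.0], [1.0, c], x)

w, h = freqz([c, 1.0], [1.0, c], worN=1024)
print("allpass magnitude deviation from 1:", np.abs(np.abs(h) - 1).max())
print("peak in: %.3f   peak out: %.3f" % (np.abs(x).max(), np.abs(y).max()))
# Same spectrum magnitudes, different phases, different peak level.
```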
Hi Dan! Love your videos! Unfortunately, from 0:45 the "telling lies" from the music really interferes with your voice, making it hard to listen to what you are communicating. It works with my monitors but on my phone it was near impossible! Cheers, looking forward to more great content!
This video left me with more questions than answers
I was thinking the same thing. Good start of a series I think.
Living life with the decision to never call delay phase shift helps.
Look up "Pre-ringing EQ artifacts" or similar and you'll find more related information. He also gives a very good tip about dealing with transients. I think I'll try to duplicate his set-ups in my DAW (cubase pro 10) to understand it better.
The red line in the meter represents the phase shift. From what I understand, a lower phase shift pulls the frequency in the negative direction, and hence a higher phase shift does the opposite.
I am referring to the direction of the frequency when I'm talking about those positive and negative directions: + means the phase shifts towards the higher frequencies and - means towards the lower frequencies.
In this example the phase shift line forms a letter-S pattern, meaning the shift starts from the left at slightly negative, goes further negative, and then comes back towards 0 and further positive. The phase shift is like pulling those specific frequencies in specific directions depending on the shift pattern. When the entire phase shift meter/graph becomes a straight line, it means there is no phase shift and everything is at 0.
I have not watched the entire video, so please excuse me if what I am saying doesn't make sense in relation to what you have watched. I too am new to this phase alignment stuff.
I think Dan's purpose is precisely to make people think.
Again, spoken like a true audio ENGINEER... I wish more people would care about the science and function of things as much as they care about what they "feel" or "believe"....
As a producer and DJ, people highly overvalue my work compared to audio engineers, mastering engineers, mixers, etc. I just do what sounds good, and even though I'm interested in theory like this, it isn't necessary. To be an engineer you need so much knowledge that it becomes abstract, yet you also need ears, proper hearing, especially for mastering! A good friend of mine is an audio engineer and I have so much respect for what he does, yet when I'm DJing, I'm the one with the audience, so I always try to pass it on to him and express my respect for what he does.
Could not agree more.
Well, to be fair....quite a bit of 'good' music has come from violently opposing science and function in favor of 'feel' and 'belief.' Punk rock pretty much exclusively came from that. It's important to remember that we live in a time where pristine clear recordings are the standard, but there was a time when even having a clear recording was a 'new vibe' in and of itself.
My point is, it's great to get to the heart of the science behind music, but it's much more important to remember that our response to music is largely emotional.
If pristine, clear, well-engineered and well-performed songs were subjectively desirable, there would be more than just philistines listening to the radio these days. Literally everything on the radio is engineered extremely well, scientifically speaking, and it's all absolute trash. Food for thought. The day 'science' takes precedence over 'belief' in music is the day that music dies. If you can't vibe to it, you might as well just listen to scales. I'm not religious; this is just one instance where science loses every time. Art picks up where science stops. It's kind of like grammar vs. semantics. The semantic meaning in music is what moves you, and yes, the semantic meaning can come from just phonemes or syntactical play, but personal semantic meaning is not scientifically quantifiable. Nor should it be, or why are we even here?
@@eugenefullstack7613 It is about understanding what is happening behind the GUI, not about being scientifically accurate all the time. It is about the nonsensical explanations of how a plugin sounds from someone who reviews it, versus understanding what is actually there. When I hear someone talking about '3D' sound or some other similarly magical property, I open Plugin Doctor. People are too religious about their gear. With that being said, it is nice to know the science behind your gear if it does not drive your OCD crazy; then you can relax and just use the tools without thinking about the numbers too much.
@@eugenefullstack7613 Science literally explains why "feel", "belief" and "non-pristine" recordings sound good to us. Punk never rejected science, as it is impossible to do so in any art form; literally all records require engineers, and I've yet to come across a punk band, or any band, rejecting live sound engineers. All expect to sound "good", whatever their interpretation of that is.
Science doesn't say that records must be pristine or perfect; it merely attempts to explain why something achieves a certain result, ideally repeatably, in the world of music.
Punk was never stupid, and it's sad that so many "fans" cling to this idea that it all happened without any knowledge whatsoever of music or audio. That's an insult to all the great art that's come from it.
Dan, you are mad underrated, one of the only engineers out there making real content explaining real fundamentals from a very technical yet understandable perspective. On YouTube there are a lot of content creators trying to teach, but not many of them own their shit the way you do. Thank you very much for all the knowledge you help to spread, you are awesome!
"Underrated." Such a silly term.
@@anhumblemessengerofthelawo3858 I meant underrated in terms of YouTube numbers: Andrew Huang having 2.5 million subs against Dan's humble 100k, for instance.
One can measure the amount of people out of phase in the audio community.
I'm interested to hear more about how EQ affects phase shift, as well as MB compression. Please do more videos on this subject. And thanks for sharing these very useful insights.
But, then, you've got the ringing to contend with. That's why multiband compression is not recommended much unless you're mastering and there's simply no other tool you can use.
A great service. Used to be we learned the basics on this kind of stuff at a desk, then had it shown to us by the head engineer we studied under...long before we ever touched a tape machine. The kids today have ZERO background, so your stuff is PRICELESS...and us old guys get something too, as you go seriously in depth...intelligently.
Best techy-audio explanations on TH-cam! Thank you, Dan!
Nothing like starting your day with audio knowledge from Dan. Thanks man!
This is the most educational channel out there
With so many incorrect videos on the topic of phase and polarity, this video is the way to go for anyone who is not interested in understanding phase via complex numbers (although that is where phase comes from). It shows that when you focus on a single frequency, the phase can be the same across three operations, while the underlying operations are different. So you can get a phase shift of 180 degrees from 1) a polarity flip (180 degrees at ALL frequencies), 2) A time delay (180 degrees at a few discrete frequencies only), and 3) an allpass filter (180 degrees at some frequency). But they do differ in general, as transient behaviors are different, and their group delays are not the same, although their phase delays at that particular frequency are the same, since the phases are the same. For group delay, the operations need to be viewed across the entire frequency range, and so zooming in on a single frequency only tells us part of the picture.
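[Editor's note: for anyone who wants to poke at that distinction, here's a small scipy sketch, mine and purely illustrative, comparing the three operations the comment above describes at 1 kHz. The second-order allpass uses the RBJ cookbook coefficients.]

```python
import numpy as np
from scipy.signal import freqz

fs, f0, Q = 48000, 1000.0, 0.7
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)

# Three operations that all shift 1 kHz by 180 degrees:
flip_b, flip_a = [-1.0], [1.0]                    # 1) polarity flip
N = round(fs / (2 * f0))                          # 2) half-period delay (24 samples)
delay_b, delay_a = np.r_[np.zeros(N), 1.0], [1.0]
ap_b = [1 - alpha, -2 * np.cos(w0), 1 + alpha]    # 3) RBJ 2nd-order allpass at f0
ap_a = [1 + alpha, -2 * np.cos(w0), 1 - alpha]

for name, (b, a) in {"flip": (flip_b, flip_a),
                     "delay": (delay_b, delay_a),
                     "allpass": (ap_b, ap_a)}.items():
    w, h = freqz(b, a, worN=[500.0, 1000.0, 2000.0], fs=fs)
    print(f"{name:7s}", np.round(np.degrees(np.angle(h))), "deg at 0.5/1/2 kHz")
# All three read +/-180 deg at 1 kHz, but disagree at 500 Hz and 2 kHz:
# same phase at one frequency, different operations overall.
```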
Well, this is one of the most informative and educational channels; I am glad to have found it!
Me too!!
Still feel like a beginner, when I see content like this!
Thank you, Master!
Totally revealing information. Thanks! But don't stop here...keep going! ☺️
3:23 “this is why mixing delayed signals with the original results in comb filtering.” Very well explained. In my head I always imagined different slopes partially cancelling each other out, but your phrasing is simpler and more comprehensible. Thank you for explaining that.
Now why is a short delay the basis for Karplus-Strong? What is it about a short delay that is physically similar to a vibrating string?
Short delay *plus feedback* so that it resonates at a specific frequency when excited by an impulse. I think KS generally also includes a low pass filter in the feedback path to emulate damping and make higher frequencies decay faster. I can't give you the mathematical proof, but you can kind of understand it intuitively if you think of it in those terms.
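[Editor's note: a minimal toy Karplus-Strong sketch along exactly those lines, mine and not from the video: a delay line, feedback, and a 2-point average acting as the damping lowpass. Assumes numpy.]

```python
import numpy as np

fs = 48000
f0 = 220.0                            # target pitch
N = int(round(fs / f0))               # delay-line length sets the frequency
delay = np.random.uniform(-1, 1, N)   # excite with a quick burst of noise
out = np.empty(fs)                    # one second of output

for i in range(len(out)):
    out[i] = delay[i % N]
    # Feedback through a 2-point average: a gentle lowpass that makes
    # the high harmonics decay faster, like a real damped string.
    delay[i % N] = 0.996 * 0.5 * (delay[i % N] + delay[(i + 1) % N])

print("fundamental ~", fs / N, "Hz; peak after 1 s:", np.abs(out[-N:]).max())
```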
Dan Worrall Always liked when you did the advanced synth videos. What you’re saying totally makes sense.
The excitation source acting as a plectrum makes sense then too, I suppose, since a short delay can't be used as a sound source on its own no matter how high you crank the feedback. Once it gets a sound source though, like a quick burst of noise, it's off to the races!
MSoundFactory lets you analyze a sample and effectively make a kind of custom comb filter shape for physically modeling pretty much any resonant body. Truly advanced stuff.
Still a little confused about how EQs or filters delay frequency-specific areas. Going to rewatch your video and hope for a eureka moment :)
The unsung hero (sic!) of the video is, I believe, the music track in the background! Am I the only one to notice? It grooves a lot! Needless to say, the research you have done and explain to us is unique in all of your videos! Simply essential, and a go-to at any time.
Fascinating. Always learning something new, thank you Dan.
I can't wrap my head around how to mix/blend real + sampled kick tracks when EQing the two tracks independently before summing to a kick bus (and then adding more processing). I made sure both tracks were visually time- and phase-aligned. Then I was forced to downmix the two tracks to save some CPU, and started going crazy. In the resulting waveform, both kicks had moved slightly in time, creating a strange and longer waveform. I had made sure that the recorded kick was time/phase aligned with the overheads (I did the same with the rest of the drum tracks; in fact, I've become obsessed with this lately), and now everything was all over the place because of the individual EQ processing. I am 100% sure it is not a latency compensation issue: when mixing down/freezing tracks, my DAW compensates for plugin latency automatically. So I decided to mix down each track individually to see what was going on. Zero-phase EQ processing created a 3x longer kick on each track (because of the different per-frequency phase misalignment, I guess), and linear phase created lots of pre-ringing and missing transients. The least destructive setting was linear phase in "minimum" mode, but it still had issues. All of this considering I was using a sampled kick sound that was nothing like the recorded one, except for adjusting the pitch to match the fundamental note.
I got rid of the HPFs and LSFs I was using to polish the low end, and things started to improve a little. Then I applied those filters on the kick bus afterwards, and things kind of started working again. But now I'm not confident anymore about the kick staying phase-aligned with the rest of the drum tracks (especially the OHs) unless I bounce the bus output to audio and compare the timing to the overheads. And that's a bummer.
If I've understood correctly how this works: despite visually aligning the recorded audio tracks and their transients, heavy EQ processing can misalign the timing of the frequencies again, each separately, and the only way to make sure everything's OK is to use your ears, or to bounce the individual/bus tracks to audio after processing and re-check the alignment before summing into a drum bus. My final thoughts: the better you record things, and the later you process drum tracks after summing, the better. There's only so much correction and tweaking you can do before things start to get all over the place and out of control. I really struggle to get things right when I receive badly recorded tracks to work on a mix. I feel like I spend 90% of the time fixing the impossible before I can even enjoy the process.
You will get a better intuitive understanding of what is happening to the signal in the time domain if you look at a group delay graph instead of phase. In typical loudspeaker crossover design software, group delay is one of the core graphs one can look at. Basically, the delay is local to the range of frequencies affected by the filters, not the whole signal. It's like stretching a rubber band locally between two anchor points. Talking about analog-style minimum phase filters here, not linear-phase FIR processing.
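[Editor's note: you can compute exactly that with scipy; a sketch of mine, taking a 4th-order Butterworth high-pass as an arbitrary example filter.]

```python
import numpy as np
from scipy.signal import butter, group_delay

fs = 48000
b, a = butter(4, 100, btype="highpass", fs=fs)   # 24 dB/oct high-pass at 100 Hz
w, gd = group_delay((b, a), w=[50, 100, 200, 1000, 10000], fs=fs)
for f, d in zip(w, gd):
    print(f"{f:7.0f} Hz: group delay = {d / fs * 1000:6.2f} ms")
# The delay concentrates around the filter corner and falls towards zero
# well above it: local, like the rubber band analogy above.
```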
I really need to watch all your videos
Dan, can you please make a "nuts and bolts" video, the way only you can, on exactly how equalizers work and how they cause phase shift and resonance? I'm sure I'm not alone in my desire to understand on a deeper level how EQ works, so that I can not only avoid the unintended consequences of equalization but maybe even use it to my advantage the way you seem able to, and I just can't find much information on the subject. My basic understanding is that it's the mixing of a delayed (phase-shifted) signal with the original that causes the changes in frequency response, but I don't understand how, from that basic principle, all the different filter shapes are possible, how a filter's bandwidth can be manipulated, or how resonance fits into the equation. I'm just now becoming acquainted with the idea that equalizers are tools that affect the envelope of a frequency as much as its volume, and I want to understand more. Thank you.
You know it's getting real when the music stops in a Dan Worrall video :)
Love your videos! Thank you for another excellent upload!
Thank you!
Thanks :)
Happy new year to you too!
Great video. Looking forward to the next EQ one.
That was brilliant!
Happy new year again
I agree with you actually more or less.
But I feel like to make this argument you should really have tried to null against a slightly different filter:
Mix a band-stop filtered version with a delayed and band-passed filtered version. This is what people are arguing the filter essentially does, so do it. This would give a clearer picture of what is happening.
As it stands, the primary reason your example doesn't null is that a patched delay phase-shifts EVERY frequency, even the infinite ones caused by the jump discontinuity, whereas the allpass filter you showed only phase-shifts a finite frequency window (shown in the graph by the size of the phase response bulge). Of course that isn't going to null.
Thank you! This was incredibly helpful.
Please do more videos like this on EQ phase shift and how EQs work. How do I interpret the phase shift just by looking at the EQ? Lots of questions still. Please teach me more.
I'm sorry, Dan, I have to disagree with you on this.
Phase is described by the formula φ = ω₀t, where the "t" is literally a delay. So phase ALWAYS means delay. And by the way, "invert phase" (so popular in DAWs) does not make sense technically for this reason: if there is no delay, there is no phase, and in this case there is no delay. It is technically correct to call it "invert polarity".
But from a mathematical point of view there is no difference, since A sin(ωt + 180°) = -A sin(ωt), where the first is a phase reversal and the second is a polarity reversal. That's because the signal is considered infinite in these calculations: it never starts and never ends.
And what you say about the all-pass filter and the difference in sound concerns the transient process and is not directly related to the phase at all. In fact, it's just the filter ringing, its response to an impulse. You even note it yourself later, but somehow you don't relate it to the difference at the beginning. And although I'm sure you understand all this perfectly, the way you explained it can still mislead the audience.
So how does a plugin with no latency apply negative delay?
@@DanWorrall And why should it? It doesn't create a negative delay.
@@EG_John if phase shift is delay, and a minimum phase EQ can shift phase in *both directions*, that means the EQ can see into the future and apply negative delay.
@@DanWorrall This is a misconception (in my opinion). Above I have already written that mathematically there is no difference: A sin(ωt + 180°) = -A sin(ωt). This is also true for any other phase shift, including a negative shift.
@@EG_John yer, but does it make my cow bell sit better in the track? ;-)
These videos are so good
Brilliant!
Dan: Happy New Year
2020: not so much
Thanks.
Dirac spikes are interesting to use for analysing filters, since they contain all frequencies (within the Nyquist limit, of course). I was amazed at how much ringing occurs in EQs, especially high-pass filters where it can seriously affect the sound of bass frequencies. Kicks and bass sounds with sharp impulses can be affected quite dramatically.
Far too many people use sharp filter slopes in dance music nowadays, when all it does is cause ringing effects that usually aren't audible to most people without trained ears. I suppose it doesn't matter as much then, but when you know what it sounds like, it really hurts to hear it.
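[Editor's note: a sketch of mine that measures this with an impulse, comparing Butterworth high-pass slopes at 30 Hz; the steeper the slope, the longer the ringing. Assumes numpy and scipy.]

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 48000
impulse = np.zeros(fs // 2)   # half a second of silence...
impulse[0] = 1.0              # ...with a Dirac-like spike at the start

for order in (2, 4, 8):       # 12, 24, 48 dB/oct
    b, a = butter(order, 30, btype="highpass", fs=fs)
    env = np.abs(lfilter(b, a, impulse))
    # How long until the ringing decays below -60 dB of its peak?
    above = np.nonzero(env > env.max() * 1e-3)[0]
    print(f"{order * 6:2d} dB/oct: rings for ~{above[-1] / fs * 1000:.0f} ms")
```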
Principal Audio that's why I never high-pass steeper than 18 dB/octave if I can help it. And for mixdowns, I high-pass the side channel at 12 dB/octave.
Great educational video again Dan. One quick question, how do you manage to get the extra options in the right click render menu? I can't render a selected area of a track like in your menu.
Options menu -> Customise menus/toolbars -> select "track control panel context" at the top -> add the extra render options from the Actions list (with optional separators, as I did) -> Save
@@DanWorrall Nice one. Good to share that knowledge :D
Thank you!
Thank you !!
Figured I'd fall asleep to Dan's soothing voice, but the topic is too damn interesting..
The Professor of Plugins
Hello, I was wondering: when we tune a radio to a frequency, what makes the phase tune in along with the resonant frequency?
Support this channel!
Very informative video Dan. It's also interesting (not mentioned here) that different audio editors, although dealing with the same material, all process it differently because of their coded (programmed) structure... just like your dither video showed. Hence, basic ones will create more profound (and interesting) anomalies to your ear... whereas advanced ones might automatically hide them.
Great video!! So what about the phase response of bell filters? Is that just a less pronounced version of this?
Yes
@@DanWorrall thank you!
thank you for this great video! what's the software you are using sir?
Reaper
Wow, super interesting! Can you make a video about getting higher peak values through phase shifts? 🤔
2:50 isn't this only true if you're using very basic waveforms where each cycle is identical? if you move a kick drum ahead 5ms, you're not getting an inverted kick drum
Indeed, I said it's only true at 100Hz. A kick drum is composed of many individual sine wave partials at different frequencies, and 5ms of delay will shift the phase by different amounts for each.
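[Editor's note: that arithmetic in a few lines, my own illustration: the phase shift of a fixed delay is 360° × frequency × delay, so it is different for every partial.]

```python
# Phase shift of a 5 ms delay at a few of a kick's partials:
delay_s = 0.005
for f in (50, 100, 200, 400):
    shift = 360 * f * delay_s
    print(f"{f:3d} Hz: {shift:6.0f} deg  (= {shift % 360:3.0f} deg wrapped)")
# 100 Hz lands exactly on 180 deg; every other partial gets a different
# shift, so the delayed kick is not simply "inverted".
```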
You've just explained why Analog/Modular Synths sound so much better than Digital...
Thanks for this! Phase shifting has always confused me a bit, especially because some effects have a continuous phase shift setting in degrees. That wouldn't make sense unless the phase shift is specific to one frequency, would it? Otherwise it would only be possible to have 0 and 180 deg.
You can shift the phase of all frequencies by an arbitrary amount using FIR filters: the same kind used for linear phase EQ. Or you can shift the phase of a specific frequency by 180 degs using an IIR allpass filter. I know, it's a bit confusing!
@@DanWorrall Aah, I see now, thanks! Then you have to selectively shift each frequency by a different number of samples, so that the rotation belongs to the phase... It sounds mathematically intense, but it all makes sense now. I'm gonna try it out.
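[Editor's note: if you do try it, here's one way, a sketch of mine, FFT-based and therefore effectively one big FIR filter, to rotate every frequency by the same angle. Assumes numpy; constant_phase_shift is a hypothetical helper, not from the video.]

```python
import numpy as np

def constant_phase_shift(x, degrees):
    """Rotate the phase of every frequency by the same angle (FFT-based,
    i.e. one big FIR filter: the same trick linear-phase EQs rely on)."""
    X = np.fft.rfft(x)
    X[1:-1] *= np.exp(-1j * np.radians(degrees))  # DC and Nyquist stay real
    return np.fft.irfft(X, n=len(x))

t = np.arange(48000) / 48000
x = np.sin(2 * np.pi * 100 * t)
y = constant_phase_shift(x, 90)   # sin shifted by 90 deg becomes -cos
print("90 deg shift of a sine ~ -cosine:",
      np.allclose(y, -np.cos(2 * np.pi * 100 * t), atol=1e-2))
```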
Hi Dan :)
I remember seeing your videos on phase issues when cutting with non-linear-phase EQs, which were a great help, as we regularly ran into problems while parallel processing. 🙏🏻
But now this: today we put a correlation meter in front of and behind an EQ8 and used a simple low cut - no parallel processing or anything. A plain and simple low cut. And surprise: the meter showed that the correlation dropped significantly. Then instead of EQ8 we used Pro-Q 3 in linear phase mode. And still, the correlation meter displayed a significant drop in correlation 🤷
We have no clue, other than that we found a glitch in the matrix... Any idea?
Any change will mean reduced correlation, even if the phase hasn't shifted.
thanx for this
Will you share the vocoder track that's playing as the outro? 🙏
Yes probably, at some point...
Dan should be President one day. I would vote for him
You can do a full inversion with no delay, but how does one shift phase by a different angle without delay? And how is the angle determined?
SUBBED
Dan, can you please help me figure out how to monitor the total phase curve when using a minimum phase EQ on a parallel track? I noticed you used the older VST Plugin Analyser in a previous clip; is that the best tool for this? I have Plugin Doctor at the moment, but I can't figure out how to view the total.
And just like that, I found it xD lol.... never mind! Love your videos, they gave me the biggest AHA! moment about parallel processing and EQ =) Cheers!
Grab Metaplugin (also from ddmf) and load it into Plugin Doctor. Then set up your parallel chain inside Metaplugin, and you can analyse it just like a single plugin. I think the demo version will work if you don't need to save your setup.
@@DanWorrall Thanks Dan,
I just noticed that you can load 2 plugin instances in Plugin Doctor these days and then sum the signals with the 1-2 button in the upper right corner. Thought that was pretty much it, but I will def look at Metaplugin :)
I still can't understand the phase shift that happens with an EQ. By my logic, say we apply that same phase shift at 100 Hz but on a full song track: if 100 Hz is at 180° and nulls 100%, then the frequencies on either side of it should be reduced in volume according to the slopes. Well, it doesn't sound like that; or maybe the phase shift graph doesn't correlate with the amplitude? Because by this logic it should act like a wide surgical EQ cut, which would be very noticeable cancellation. Can you explain this please?
Thanks for another great explanation. I watched an older one you made on this topic too, but I still don't understand why EQ causes phase shift. I can see that it does, but why?
Sorry I missed this question. You've kind of got it the wrong way around: EQ doesn't cause phase shift, phase shift causes EQ.
Btw th-cam.com/video/eSyCuuSzuE4/w-d-xo.html ;)
@@DanWorrall Thanks! Since I asked this question I've gained a clear understanding of what's happening. It's very cool how most effects are just delay.
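[Editor's note: that "phase shift causes EQ" idea in one tiny sketch, mine and illustrative: average a signal with a one-sample-delayed copy of itself and you get a lowpass, with gain change and phase shift arriving inseparably together. Assumes numpy and scipy.]

```python
import numpy as np
from scipy.signal import freqz

# The simplest possible EQ: y[n] = (x[n] + x[n-1]) / 2.
# The delay's frequency-dependent phase shift IS the frequency response.
fs = 48000
w, h = freqz([0.5, 0.5], [1.0], worN=[100.0, 6000.0, 12000.0, 23999.0], fs=fs)
for f, g in zip(w, h):
    print(f"{f:5.0f} Hz: {20 * np.log10(np.abs(g)):7.2f} dB, "
          f"{np.degrees(np.angle(g)):6.1f} deg")
# Approaching Nyquist the delayed copy is nearly 180 deg out and cancels;
# in between you get a smooth lowpass whose gain and phase move together.
```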
This may seem like a strange question but I have someone that wants me to recreate some old software for him and he claims that they panned these square waves from the left earphone to the right using phase shifting....is that possible? If so how does that work? I used a regular audio component to pan the sound from one ear to the other but according to him it doesn't give the same result...to me it sounds exactly the same....
No idea, sorry. Need more information...
You deserve more views, but you're gonna have to start naming your videos something like "why the bass sucks in your mix!"
Do EQ cuts sound worse because the part of the signal that's phase shifted is "lowered" relative to the rest of the signal during a cut, whereas in a boost the part of the signal that's phase shifted is higher/louder so you can hear the phase distortion more?
The best
What's the name of the music at the beginning and the end of the video?
"Double Down". Its not released anywhere yet...
@@DanWorrall Alas. Thank you anyways.
Hi Dan,
I've been researching the topic quite a bit and am still not perfectly clear about the origins.
Most threads end up explaining how phase-shifted signals cancel each other, but don't get to the point of why the phase shift happens in the first place. I've heard people saying it's due to latency issues, but when I interpret the phase shift curves in the EQ correctly, it seems more like certain bands around the point of application get shifted. Can you maybe explain how that physically happens?
There are various different causes of phase shift. Simplest is just a polarity inversion, which is 180 degs phase shift for all frequencies.
Or there's delay: this will shift the phase by different amounts depending on the frequency, so for a given delay time 200Hz will get twice as much phase shift as 100Hz.
Or there's phase shift from filters, which tends to happen where there's a gradient on the frequency response: for a bell EQ band the target frequency will remain at zero phase, but there will be positive and negative phase shift either side.
That last one requires a mathematical explanation, which I'm in no way qualified to provide. But when you represent the signal in terms of complex numbers, the imaginary part goes high or low whenever the real part is at zero: in other words, all possible phases of the signal are present already, and just need to be extracted by the filter *
* Any real DSP guru reading this is probably now either laughing or crying: if that's you feel free to correct me!
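[Editor's note: the bell-band behaviour described two paragraphs up, checked numerically; a sketch of mine using the RBJ cookbook peaking biquad, assuming numpy and scipy.]

```python
import numpy as np
from scipy.signal import freqz

# Phase response of a +12 dB bell at 1 kHz (RBJ cookbook biquad):
fs, f0, q, A = 48000, 1000.0, 1.0, 10 ** (12 / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * q)
b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
w, h = freqz(b, a, worN=[250.0, 500.0, 1000.0, 2000.0, 4000.0], fs=fs)
for f, p in zip(w, np.degrees(np.angle(h))):
    print(f"{f:5.0f} Hz: {p:+6.1f} deg")
# Positive phase shift below the bell's centre, zero at 1 kHz exactly,
# negative above: the S-shape around a boost, with zero latency.
```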
@@DanWorrall great dan, thx for your time!
So I think I'm on the right track. I happen to know the math, but haven't found folks who know more technical details. So I was doubting my intuition.
The details on this complex number thing (it also works without complex numbers, but oddly it's easier with them) is that the frequency spectrum is derived from a convolution of the amplitudes over time.
The core fact is that every mathematical curve can be split into a superposition of (co)sine waves of different frequencies, e.g. Audiosignal = 3*sin(1Hz) + 1*sin(2Hz) + 42*sin(3Hz) + ... etc.
(It is the same theorem that shows that a square wave consists only of odd harmonics, while a saw contains all harmonics.)
The math implies that the frequency axis (1 Hz = 1/second) is inversely proportional to the time axis of the audio signal (seconds).
Which means a tiny/steep change in the frequency spectrum will affect the signal over 1/(something tiny), i.e. a long stretch of time, therefore having a great impact on the audible signal.
Nonetheless, I find it hard to pinpoint and hear the difference and the broadening of the signal.
Some renowned folks like Robert Babicz recommend a "-inf dB/octave" low cut at around 30 Hz and claim to get a tighter, more defined low end from it. Given the maths, this seems like the worst recommendation imaginable.
So the entire matter is still kind of confusing...
(If anyone else is interested in the maths:
3blue1brown has some nice visualizations of Fourier analysis.)
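Here is a small scipy sketch of that reciprocity point (my own illustration, under the assumption that the "-inf/octave" cut behaves like an idealised brickwall on FFT bins): the instant spectral edge at 30 Hz produces ringing before the transient, which an ordinary gentle high-pass never does.

```python
# Idealised brickwall high-pass at 30 Hz versus a 2nd-order Butterworth.
import numpy as np
from scipy.signal import butter, lfilter

fs, n = 48000, 1 << 16
imp = np.zeros(n)
imp[n // 2] = 1.0                          # a centred single-sample impulse

# Brickwall: zero every FFT bin below 30 Hz, leave the rest untouched.
spec = np.fft.rfft(imp)
spec[np.fft.rfftfreq(n, 1 / fs) < 30.0] = 0.0
h_brick = np.fft.irfft(spec, n)

# Gentle: an ordinary 12 dB/octave Butterworth high-pass at 30 Hz.
b, a = butter(2, 30.0, btype="highpass", fs=fs)
h_butter = lfilter(b, a, imp)

for name, h in [("brickwall", h_brick), ("butterworth", h_butter)]:
    pre = np.abs(h[: n // 2]).max() / np.abs(h).max()   # energy BEFORE the hit
    print(f"{name:>11}: largest pre-ring = {20 * np.log10(pre + 1e-12):.1f} dB "
          "relative to the peak")
# brickwall: roughly -58 dB of pre-ring smeared over tens of milliseconds;
# butterworth: nothing at all before the hit (it is causal). That is one way
# the "-inf/octave at 30 Hz" trick can soften transients rather than tighten them.
```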
I wish Crane Song would hire you to make tutorials about Phoenix II, or to compare it with other saturation plugins. The thing is too subtle for me to evaluate whether it's worth opening Pro Tools just to use Phoenix II, or whether I should just stick with Saturn in Reaper.
Great video as always, Dan!
Can you please explain how to identify phase problems in mono with a correlation meter?
I use SPAN and others, and even when there are sounds that sound horrible together (phasing), if they're mono the meter reads them as +1
(good correlation).
Pan them hard left and right, temporarily if you like. The correlation meter shows correlation between the left and right channels. A mono signal will have 100% correlation, no matter how phasey and weird it sounds.
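For reference, a correlation meter is essentially computing something like the sketch below (an assumption on my part: a plain normalised correlation over a whole buffer; real meters add short sliding windows and ballistics on top):

```python
# Bare-bones correlation meter: compares left against right, nothing else.
import numpy as np

def correlation(left, right, eps=1e-12):
    """+1 for identical channels, 0 for unrelated, -1 for polarity-flipped."""
    num = np.sum(left * right)
    den = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2)) + eps
    return num / den

fs = 48000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 100 * t)

print(correlation(sig, sig))    # mono (same signal both sides):  +1.0
print(correlation(sig, -sig))   # right channel polarity-flipped: -1.0

# Two sounds that interfere horribly, summed and fed to BOTH channels,
# still read +1: the meter only compares left against right.
phasey = sig + np.sin(2 * np.pi * 100 * t + 2.5)
print(correlation(phasey, phasey))   # +1.0, however weird it sounds
```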
@@DanWorrall thank you 🙌
@@DanWorrall Hey Dan, do you think it's a good idea to use phase alignment plugins like Evo In to shift the phase into a good position, as the last plugin in the mixer chain (for fixing EQ phase shift and bad phase relationships)?
@@racebase1470 if you've got phase cancellation issues then this is a good way to fix them. EQ phase shift is not itself an issue and doesn't need fixing.
@@DanWorrall And just to check: an EQ phase shift issue is when you have two copies of the same signal with different EQ settings (in zero-latency mode)?
Is there any way to avoid or control those peaks caused by phase shift? I really don't wanna just use a crude clipper.
Hi there, thank you for the video.
I have a doubt about one of my productions. I soloed a piano with no processing, a fully dry signal, and noticed that in some octaves the correlation tends toward -1, yet I don't hear anything cancelling in the stereo field. In fact, when I play the full song my phase meter stays between 0 and +1, which is fine. It's only in solo mode, when I hear just the melody part, that the very long tails of the notes go negative, and I don't understand why. Thanks if you answer me :D
Check it in mono. If it sounds ok, stop worrying about it. ;)
@@DanWorrall I checked in mono too, for mono compatibility, and the piano is fine: no information is cancelled in the stereo field, and like I said the track stays between 0 and +1. But I'm curious why, in solo, an element like that does that kind of thing to the phase; it feels illogical, and I was worried about the track.
Thanks for answering.
Reverb tails and suchlike will often bounce into negative correlation. Normal and nothing to worry about: it's part of what makes it sound spacious and wide.
@@DanWorrall OK, this is nice to hear, I feel calmer now haha :). The thing is, the acoustic piano from Kontakt is an emulation of a real one and there's no reverb on it, so it's a fully dry signal. Could it be the microphones in the piano recording doing that kind of thing to the phase? Just curious, thank you again for keeping me updated.
I'm guessing the piano was recorded in a room? And a piano body just traps a body of air, like a room. So the tail of a piano note is not so different from a reverb tail.
Very cool.
Top voice in the game, I want this dude to narrate my death
Phase shift is the filter when you want to kill false engineers
Tremendous, it left me thinking.
Where is that LCR fix video?
I never promised one. LCR is LCR, there is no fix. The video about making a nice wide mix that also works in mono is on its way, but it won't be on this channel: I need to charge a fee to do that one properly ;)
@@DanWorrall oh all right, thanks!
I'm waiting too. Where will it be?
@@rudakudrnacek3442 It will be on YouTube, I'll post here when it goes up.
@@DanWorrall Thank you!! Best tutorials ever!!!
I admire you god damn
I'm waiting for the hour-long video that outright explains what a digital filter and/or EQ does for real. All those side notes ("that's what truly happens anyway" sort of stuff) only make us curious to study the engineering.
When you think you finally got it... think again
Doesn't the processing of different frequencies take the same amount of time/delay no matter whether you're boosting or cutting, implying that the opposite boost or cut would still cancel itself out, seeing as the amount of delay would be the same in both cases? Meaning your comment about seeing into the future makes no sense?
E.g. boosting 10dB at 60Hz and cutting 10dB at 60Hz both have the same amount of delay due to the processing time required at that frequency, and therefore the processed signals still cancel each other. You could check whether the resulting signal is delayed despite the EQ cancelling itself by comparing it to a completely unprocessed signal.
You can't correct delay with more delay. You need negative delay. Which is impossible*, ergo it's not delay.
*In a real-time plugin
@@DanWorrall I'm not suggesting the delay would be fixed, actually the opposite.
I'm not entirely clear what you're suggesting. But the fact is a 12dB cut can undo the phase shift of a 12dB boost. Even with zero latency plugins. Therefore the phase shift cannot be delay.
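That point is easy to check numerically. A minimal sketch, assuming RBJ-cookbook peaking biquads (a common zero-latency EQ building block, not necessarily what any given plugin uses): the -12 dB bell is the exact inverse of the +12 dB bell, so the cut undoes the boost's phase shift sample-for-sample, with no negative delay anywhere.

```python
# Boost then cut with matching frequency and Q returns the input exactly.
import numpy as np
from scipy.signal import lfilter

def peaking(fs, f0, gain_db, q):
    """RBJ audio-EQ-cookbook peaking biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
x = np.random.default_rng(0).standard_normal(fs)   # 1 second of noise

b_up, a_up = peaking(fs, 60.0, +12.0, 1.0)
b_dn, a_dn = peaking(fs, 60.0, -12.0, 1.0)

boosted = lfilter(b_up, a_up, x)          # phase-shifted around 60 Hz
restored = lfilter(b_dn, a_dn, boosted)   # cut with the same f0 and Q

print(np.max(np.abs(restored - x)))       # ~1e-13: the input comes back
```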
It seems like phase shifting can produce a crude addressable LED strip.
This stuff is above my paygrade.
Jos Driessen.... Hell no, it isn't lol
As funny as your comment might sound, it is also incorrect.
Learning and understanding should be "hard-coded" and fun for everyone. Just like I keep telling my mom whenever she asks me question #1004 about the laptop I bought for her: play with it. If you don't, you won't learn it.
Everybody has their own way of doing and using things. No description of anything, written by only one or even a few people, will explain the product fully.
thanks for video about this black magic 😂
Cutting EQ makes for a louder mix? Not that I want a "louder mix"; more depth, perhaps ;-)
Polarity and phase are not the same thing. A sawtooth wave shows this clearly. The shapes of waves play a part in their tone.
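One way to see that distinction in code (my own illustration, not from the video): build a sawtooth from its Fourier series, then polarity-flip it versus adding a flat 90-degree offset to every harmonic. The magnitude spectrum never changes, but only the polarity flip preserves the wave's shape.

```python
# Polarity versus phase on a sawtooth built from its Fourier series.
import numpy as np

fs, f0, n_harm = 48000, 100, 40
t = np.arange(fs) / fs

def saw(extra_phase=0.0):
    """Sawtooth from its Fourier series: ALL harmonics, amplitudes 1/k."""
    return sum(np.sin(2 * np.pi * k * f0 * t + extra_phase) / k
               for k in range(1, n_harm + 1))

original = saw()
inverted = -original        # polarity flip: 180 degrees on every harmonic
reshaped = saw(np.pi / 2)   # 90 degrees added to every harmonic instead

for name, y in [("original", original), ("inverted", inverted),
                ("reshaped", reshaped)]:
    mags = np.abs(np.fft.rfft(y))
    print(f"{name}: peak = {np.max(np.abs(y)):.2f}, "
          f"strongest bin = {np.argmax(mags)}")
# All three share one magnitude spectrum, but the 90-degree version peaks
# much higher: polarity is just a sign flip, phase actually reshapes the wave.
```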
Off-topic question: do you prefer filters or shelves to clean up tracks? And why?
I don't really understand the question, sorry.
@@DanWorrall With "filters" I mean HPF and LPF.
Food for your brain. :)
So I came into the video expecting to learn something about phase shift. All I know now is that there are multiple types of phase shift caused by multiple things, and I understand none of them... fk... What does phase ringing mean? Etc, etc, etc.
🤯
"That sudden spike is a short burst of all frequencies" - Can someone explain why this is to me? ^^;
It's not simple to explain, but since there is a very hard edge in the waveform, it is more or less like an impulse. And if you want to express an impulse by adding up a series of frequencies, you need all of them (within the available bandwidth). Search for Fourier transforms and how you can express a square wave using them. And don't be discouraged if you need some time to understand them.
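A tiny numpy check of that claim: the spectrum of a single-sample spike is perfectly flat, i.e. it contains every available frequency at equal magnitude.

```python
# The spectrum of an impulse is flat across all available frequencies.
import numpy as np

n = 1024
impulse = np.zeros(n)
impulse[0] = 1.0                        # one hard edge, one sample wide

spectrum = np.abs(np.fft.rfft(impulse))
print(spectrum.min(), spectrum.max())   # 1.0 1.0 -> every frequency present
```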
Think of the attack of a kick drum, which can be modeled by quickly pitch-shifting a sine wave from high to low; do that reversed and then forward, and take it to the limit of one digital sample.
The sudden burst of all available frequencies doesn’t make sense to me either and I don’t find the replies to your question helpful. The problem I have with that explanation is that if you do indeed have a burst of all frequencies then that means you don’t necessarily have to have an increase in volume. I think the keyword might be “burst”. By burst I think he means “you have an increase in volume across the board at all frequencies” perhaps... which should really just be stated as “an increase in volume (regardless of frequency)”. But even that phrasing might not be scientifically accurate.
I actually watched this DESPITE the Comic Sans thumbnail.
How come phase shift makes my peaks levels higher!!! 😮 I don't need sleep I need answers!!!! 😱😱
The phase shift makes all the individual component sine waves add up differently, so the peak level changes. It doesn't necessarily get higher, but you only notice when it does.
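A small sketch of that answer (my own illustration): identical sine components summed with different phase relationships keep the same RMS, since the energy doesn't change, but land on different peak levels.

```python
# Same components, different phases: same RMS, different peaks.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
freqs = [100, 200, 300, 400, 500]

def summed(phases):
    return sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

aligned = summed([0.0] * 5)                        # phases lined up
scrambled = summed(np.random.default_rng(1).uniform(0, 2 * np.pi, 5))

for name, y in [("aligned", aligned), ("scrambled", scrambled)]:
    print(f"{name:>9}: RMS = {np.sqrt(np.mean(y ** 2)):.3f}, "
          f"peak = {np.max(np.abs(y)):.3f}")
# Same RMS either way; only the peak level moves when the phases shift.
```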
And this is how the universe was created.
I understand the meanings of each word you said.
But when put together, I have no clue what you are talking about.
Hi Dan! Love your videos! Unfortunately, from 0:45 the "telling lies" from the music really interferes with your voice, making it hard to listen to what you are communicating. It works with my monitors but on my phone it was near impossible! Cheers, looking forward to more great content!
I am still confused
I couldn't understand ...
He sounds almost exactly like Mads Mikkelsen, it's so damn uncanny.
Oof the music ruins it...
I skipped ahead to this one.
Duuuuude hahaha xD
Nobody knows as well what they're talking about as this guy
I'd probably listen more if you didn't use Reaper :'(