If you have a solid understanding of kinematics and Fourier transforms it makes this stuff easier to grasp, but that is honestly more esoteric than what most people need to actually apply this in practice. This is BY FAR one of the best intuitive explanations of how phase works with sound reinforcement that I have ever come across. It's something I've understood but always had a hard time explaining in detail because it's such a math-heavy topic to really get into. Great job bridging that gap! Would love to see more.
Inexpensive, marginal test gear has released a tidal wave of inaccurate, dubious, and outright made-up shit from folks focused more on ad revenue than knowing what the phuk they are spewing.
You are the only one on the web teaching actual science with audio engineering. Things most audio “engineers” have no clue about. Thank you
He really isn't. You just need to broaden your horizons a bit. The videos by xiph are superb. There are many tackling it at the level of PCB design for audio as well.
I'm not saying this isn't a good video, because it is, but it isn't uniquely good by any means.
@@jeremyglover5541 Are you talking about the Audio University channel, when you say "xiph"?
www.youtube.com/@AudioUniversity/videos
@@jeremyglover5541 you're not wrong, but it's also kind of telling that your example is xiph, because Monty's videos are uniquely the best resource on their topics despite being around 15 years old now.
@@jeremyglover5541 do you have any recommendations for people doing audio PCB design?
An insane amount of useful information and I’m all here for it! The visualizations are super helpful too, thank you so much for the effort!
For the first time ever, I finally understand the affects that phase response has on a system because of this demo. Thank you!
sorry to be that guy, but effects*
Most educational half hour I've spent in a long time. Awesome. Thanks!
Wait, that was 30 minutes? I just restarted the video to read the comments. I was too busy soaking in the info to read comments or notice half an hour passed on my first time through
That was one of the best descriptions I've ever seen for the behaviors of the "signal, driver, soundwave, mic" relationship. It's a very complicated process, and this is a great way to break it down and understand each component. For your next challenge, you should try to do the same type of breakdown for a bass-reflex cabinet. That is one of the most important and least understood elements of speaker/cabinet design. I've been struggling to wrap my head around it for ages.
This is now one of my favourite videos on tech. It's just... the applied sound engineering and exploration. I had to rewatch it to gain insight
Love it, this reminds me greatly of my electrical engineering classes at uni. You're like connecting all the dots when it comes to audio applications
Amazing! Amazing amazing. This is the kind of content I've always wished existed. Now we just need a pipeline for absolute beginners to get to this point
Great video. I really like how you've taken something that can be very dry, mathematical and abstract and made it understandable in an intuitive way
I believe there's a term called jerk which describes the rate of change of acceleration. Also wondering how damping factor and power bandwidth come into play. Awesome video, thanks!
Yes, physics behaves just like jerks! 😂👍 Grinds the hecks outta ya, for shiz'n'giggles 😃👌
Some other terms I may have made up - bounce (sharktooth), ramp (sawtooth), step (square wave), impulse, doublet (N-wave), jerk, etc. Not many are taught the elementary waveforms.
We may be jerks, but we also have a sense of humor, the next three derivatives are snap, crackle, and pop. Although, back when I was playing with this stuff, "jounce" was the derivative of jerk.
The derivatives go: position, velocity, acceleration, jerk, snap, crackle, pop
True. And snap is the ROC of jerk, crackle the ROC of snap and pop the ROC of crackle.
In the near-field of a dipole, which is where you're measuring with your microphone, the pressure and the velocity are 180 degrees out of phase. The driver cone determines the air velocity and the microphone picks up the pressure field. That's where your phase shift at 14:00 comes from.
Could you, please, decipher your thought for us? The wave of 20Hz is about 55 feet long. How will a microphone position offset by a foot or two affect the result?
I don't know much about audio engineering, but I believe he's saying the driver creates an inverse pressure wave with an in-phase velocity. Imagine that the biggest change in pressure doesn't happen when the speaker goes from rest/neutral to outermost, but actually when it goes from outermost to innermost positions. The Hz don't matter so much as the speed of sound itself, too. A very low frequency wouldn't be captured on a mic further away, unless the velocity was astronomical. Edit: I realized right after posting that my explanation would result in 90 degrees of shift, and also I had paused right before he started explaining velocity being pressure's derivative, which isn't what this poster was talking about, I don't think. The video's explanation at around 24:00 sums it up, no?
@@LuxLucidOfficial Yes, the explanation at 23-25m is a good summary how the air pressure goes in the near field. My question was about "The driver determines cone velocity and the mic picks up the pressure.." To me it sounded like the driver throws apples and the mic picks up oranges, so what's the rationale here?
It is not a dipole. It is a baffled driver. Pressure and velocity are not generally 180 degrees out of phase, since that will depend on the acoustic environment.
@@Rene_Christensen In the very near-field it doesn't matter if it is baffled or not. The moving membrane displaces the air in a way that requires a dominant dipole moment of the multipole series. And the near-field does not know anything about the acoustic environment, by definition. Check out the derivation of the velocity potential of an oscillating solid sphere, in which case you get a pure dipole. The pressure is the negative time derivative of the velocity potential and the velocity of the object coincides with the field velocity at the surface, making both the spatial derivative of the velocity potential. You can then determine yourself that they are exactly pi out of phase near the membrane. Only at a distance that is significantly greater than the membrane radius you will start to see a significant deviation from this relationship. You can find the relevant derivations here for example: www.math.fsu.edu/~hju/cht12.htm
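For readers following the math in this thread, here is a standard textbook sketch of the near-field/far-field phase behavior, written for the simplest case of a monopole (pulsating sphere) rather than the dipole argued about above: the specific acoustic impedance of a spherical wave is

$$
\frac{p}{u} = \rho c\,\frac{ikr}{1+ikr} \;\approx\;
\begin{cases}
i\rho\omega r, & kr \ll 1 \quad \text{(near field: pressure leads velocity by } 90^\circ) \\[4pt]
\rho c, & kr \gg 1 \quad \text{(far field: pressure and velocity in phase)}
\end{cases}
$$

The exact near-field offset depends on the source order (the dipole case above works out differently), but the transition around kr ≈ 1 is the same idea, and at 20-30 Hz a mic a couple of feet from the cone is deep in the kr ≪ 1 regime.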
Love it when I can feel like I understand a complex subject without actually understanding any of it 😅. But in all honesty, you did an excellent job of explaining things I know little about without losing me, and the length of the video perfectly pushed my brain all the way out to its limits!
I totally love the detailed technical explanation. Fantastic video. Little brain-numb (in a good way) after watching it with high focus. I learned quite a lot from this video. Thanks for the great work! I didn't expect to get info not only on the advanced electronics I anticipated, but on fluid dynamics as well. I never thought about that before, but it really does make sense having to factor in air pressure, displacement etc. And that's all fluid dynamics/mechanics. Totally awesome.
I look at those Yamaha Dante interfaces every day. This is a great video. Great channel
Hands down the best video I've seen describing audio phase relationships both in thoroughness and in critical thinking. I can only hope for more people to enjoy this!
This must be one of the most beautiful videos I have seen. I watched it 3 times already and will keep doing it. It explains so much and raises so many new questions...
When I was at university we learned that particle velocity and sound pressure are out of phase in the nearfield and in phase in the far field. And considering that the driver stimulates the particles directly, resulting in particle velocity, and the microphone converts sound pressure to electrical current, your results would make sense (basically also what you explained at 23:55). It would be interesting to see the same measurements in the far field.
I can already tell you what happens in the Farfield because we did do those measurements, they just didn’t make it in this video. What happens is that the mic measurement remains +180 degrees out of polarity with the laser but the mic actually gets slightly ahead of the laser by about 1’ or so. This shift happens very quickly, within a few feet of the driver, and remains consistent all the way out to 200’ and applies at all frequencies. We realized this has to do with the fact that the acoustic center of the driver is actually about 1 foot in front of it. Might make a whole video on this topic.
@@devinlsheets_alphasound That would be great! Because I've heard and read a lot about phase effects of drivers and sound but never really understood what it all means physically. I love how your experiment does exactly that: what is moving and how. Maybe you could even try to visualize how the air moves with dry ice or small plastic balls on strings? Just a thought
@@devinlsheets_alphasound "We realized this has to do with the fact that the acoustic center of the driver is actually about 1 foot in front of it" Yes, it varies for different cabinets. Would be nice to do the measurements to verify (by multiple independent entities!!!) the model(s)/math.
@@devinlsheets_alphasound That is as it should be. For somewhat free-field conditions (I realize you are in a room, but still), the pressure is in-phase with acceleration of the piston (not the acceleration where the pressure is measured!, for which in the far-field the velocity indeed is in-phase with pressure for pure propagation, as mentioned in some posts here), because the acoustic environment seen from the piston is mass-like at lower frequencies for a typical cone size. It is not quite a perfect match between piston acceleration and pressure, but for example for a flat 18 inch piston in a baffle at 1 m distance, the two are only 5 degrees out of phase at 300 Hz and even closer at lower frequencies. With acceleration being 180 degrees out of phase with displacement, what you see in your measurements is what you should see. I should say that the phase coming from the distance travelled from the piston to the microphone (linear phase) needs to be removed in these calculations, so in your measurement setup, you need to either input this distance in some setting, or manually get rid of it.
That's an incredibly insightful tutorial. I had no idea how much goes on in the journey of music through my sound system! Terrific demonstration!
I'm not sure I'll ever need any of this knowledge, but it was super interesting to watch.
Your explanations are very detailed, yet easy to grasp.
The phase relationship between the input signal and the output from the speakers is actually much more complicated in practice. The reason is that the equation m·a + c·v + k·x = F(t) applies to established sinusoidal stimuli. In practice, the musical signal is not like that. It is rather a transient process in which the behavior of the speaker is determined by an inhomogeneous system of differential equations, which makes the issue of instantaneous phase shift quite complicated. Thanks a lot for the video.
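For anyone who wants the steady-state case spelled out, here is a sketch using the same m, c, k as the comment above; for a sinusoidal drive, the displacement lags the force by a frequency-dependent angle:

$$
m\ddot{x} + c\dot{x} + kx = F_0\sin(\omega t)
\quad\Longrightarrow\quad
x(t) = X\sin(\omega t - \phi), \qquad
\phi = \arctan\!\left(\frac{c\,\omega}{k - m\omega^2}\right)
$$

(taking the branch where φ runs from 0° well below resonance, through 90° at ω² = k/m, toward 180° far above it). That 0° to 180° sweep is what the video measures; the commenter's point is that real program material never sits in this steady state long enough for φ alone to tell the story.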
Wish I'd found your channel earlier. I just finished my bachelor's degree in music engineering and your videos would've made my time much, much easier lol. Your videos are so detailed and intuitive, please keep making these.
Great work! This is the level of detail I would love to see from all audio hardware testing. The only mistake I noticed is that you didn't deinterlace the video around 16:30 which results in horizontal comb artefacts in video. Another example of missing deinterlacing can be seen around 18:22. I'd recommend using ffmpeg for deinterlacing because it has resulted in best quality for me but other options do exist, too.
Immediately subscribed! The fact YT just recommended your video to me tells me its algorithm isn't as good as it should be. Awesome work! This must have taken a lot of time!
The current lags behind the voltage in an inductive AC line. Since speaker lines are more or less varied-voltage AC, and you're coiling it around that ferrite ring, you're creating inductance. That inductance will affect the phase and frequency of the output. This is why we use coil inductors to make low-pass circuits. And that's not to mention all the other interference from various amp stages and whatever else. So long as the delay is no more than ~2ms it's fine, humans generally can't hear intervals that small anyway.
The animation 24 minutes in was such an awesome lightbulb moment, thank you!! Epically useful video!
Have you thought about the mic being too close to the coil and reading magnetic force instead of air pressure?
Good thought! I wonder if the flux extends far enough though, but still, interesting thought!
Came here looking for a comment like this. Maybe you can remove the permanent magnet part of the speaker and drive the air coil in front of the mic to check.
I changed my mind after watching the complete video, it all makes sense now. Microphone is reporting acceleration, not position.
There's a small mistake on the "AC coupled" waveform drawing (graph on the right) at 1:55 , after the "DC" portion of the signal, the AC coupled signal should not have any positive slope since there is no positive slope in the original signal. (like when the DC battery is released at 2:12) Thanks for the nice thorough video again :)
TRUE lol that’s what happens when I make quick and dirty graphics with PowerPoint on long airplane rides.
And btw DC is blocked because running 2 kilowatts of DC to a speaker will ruin your day very fast and you won't hear a thing.
As the first waveform is passed intact, this is not a large enough time snippet to say if the flat section is even DC or just a bit of lower frequency out of context. Most likely the flat "DC" bit would come through DC blocking with a little slope and a slow rebound if the pulse is not followed shortly by a matching negative excursion.
18:50 I think the delay is caused by the inertia of the moving parts of the speaker. And that cannot get worse than 180 degrees because if it were delayed more, it would catch the next incoming electrical wave and that would result in effective speed-up of the movement reducing the delay to less than 180 degrees again. If you have constant latency, it's caused by some kind of processing, not by physical movement of the speaker.
Absolutely INCREDIBLE content and presentation - lifetime producer and audio & physics enthusiast here. You are one of the top YouTubers I've come across, mate
nothing is in phase my friend
Except the grid
Actually....lasers...they got their shit together by definition.
😆
😂
reminds me of Rick & Morty with "true level" experience
Wow, this all makes sooo much sense. Wish I'd found your channel sooner. This has answered a lot of questions I've had when figuring out phase relationships between multiple driver setups. I can actually hear phase differences in drivers after some listening. I went and bought RTA mics to see if I could see what I was hearing. Turns out I was right. This video puts everything in perspective for me from my own experience. It all makes perfect sense. Finally a channel that makes my brain tingle! I love it. ❤
Love it, appreciate your curiosity and determination to understand things. Some nuggets of gold in here for those into audio system measurements
One thing I can add, as I know a sound person or two, is that delay lines are often used with big systems to ensure all drivers are in phase with each other. And if any are at a different distance, either ahead or behind the main drivers, then the sound from them is phase-corrected with what is coming from ahead or behind them
Delay lines use delays to make the sound sources in "time" with each other, not in phase with each other, which would be impossible to achieve being that they are radiating from 2 different locations. Without the delay you would hear a very distinct echo from hearing the delay speaker first and then the main speakers some time after that. It is very disorientating. That echo starts when the two sources are more than about 40 ms apart in time (about 40 feet of distance). If 2 sound sources are less than 40 ms apart then they will sound as one signal. This is called the "Haas Effect". Now what is interesting is that the brain will locate the sound from whichever sound signal it hears first, even if the second signal is somewhat louder than the first. For a delay line, as long as the delay speaker is "slightly" behind in time, ~20ms, of the main speakers, your brain will still think the source of the sound is from the main speaker (stage) even though the delay speaker is louder and may be off to the side or above you or even slightly behind you.
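As a worked example of the alignment arithmetic described above (a minimal sketch; it assumes ~343 m/s sound speed, i.e. air at roughly 20 °C, and the distance and Haas offset are illustrative values, not figures from the comment):

```python
SPEED_OF_SOUND_FT_PER_MS = 1.125  # ~343 m/s expressed in feet per millisecond

def delay_for_distance(distance_ft: float, haas_offset_ms: float = 15.0) -> float:
    """Delay (ms) for a fill speaker `distance_ft` downstage of the mains:
    match the wavefront's travel time, then add a small Haas offset so the
    brain still localizes the sound to the stage."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_MS + haas_offset_ms

print(delay_for_distance(120))  # ~121.7 ms for a tower 120 ft from the mains
```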
While nothing new to the folks designing such hardware, this is an awesome first dive into this topic. Great work!
Interesting things I've noticed:
- As pointed out at 2:30, a 1st order HPF will create a 90° phase shift. A 2nd order filter will create a 180° phase shift (see the sketch just below).
- At 17:10 you can see the driver reach a 180° phase shift. At 180° the speaker's output becomes pretty useless. This seems very similar in behavior to the Gain Bandwidth Limit (GBW) of an OpAmp. (The GBW basically dictates how much of your maximum amplification you can use for a certain frequency.) When driving the speaker at a higher volume, I would expect the amplitude to drop even quicker, but the phase behavior to remain the same.
- I would love to see a plot of actual time delay instead of phase
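The first observation (90° for a 1st-order HPF, 180° for 2nd-order) is easy to confirm numerically; a sketch using scipy's analog Butterworth prototypes, with fc = 100 Hz chosen arbitrarily:

```python
import numpy as np
from scipy.signal import butter, freqs

fc = 100.0                                   # arbitrary corner frequency, Hz
w = 2 * np.pi * np.logspace(0, 4, 500)       # evaluate from 1 Hz to 10 kHz
for order in (1, 2):
    b, a = butter(order, 2 * np.pi * fc, btype='highpass', analog=True)
    _, h = freqs(b, a, worN=w)
    # Phase lead deep in the stopband (here at 1 Hz):
    print(order, round(np.degrees(np.angle(h[0])), 1))
# Order 1 approaches +90 degrees below fc; order 2 approaches +180 degrees.
```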
This video is brilliant. Taking advantage of the fact that your background is also music, and you're not just a sound scientist, do experiments with real music (even if you don't have the ability to explain what you see, and the conjectures would therefore be much more fragile). These are all individual or isolated signals, when in reality we work with complex signals. It is true that pink noise is a complex signal, but it still lacks the transient component, which is a key attribute of music.
Fascinating.
You were able to come up with great ways to show what I have suspected for years.
In a car, what pushes you (pressure) back in the seat? Acceleration. Not speed and not position. Similarly the air in front of the speaker cone becomes pressurised by the acceleration of the cone, not the speed or the position. This should also explain why high frequencies have more acoustic power than lows at the same driver displacement - acceleration is higher. EDIT: In hindsight I misspoke. Pressure is not created by acceleration, but sound (modulation of pressure) is. Thus g-force felt in a car is analogous to sound energy.
No, pressure is caused by the motion of the cone. By your logic, a skydiver would feel no air pressure once he reached terminal velocity. You are confusing the potential energy of sitting in a car seat at constant velocity with the kinetic energy of a column of air being moved by a cone.
@joshua43214 You make a great point! I don't think it's a perfect analogy. However I would note that sound is not a constant pressure (if the cone were moving at a constant velocity) because that is DC. Sound is changes in pressure (cone changing velocity or accel / decel) and in truth this is a continuous cycle of conversion between potential energy (compressed / rarefied medium) and kinetic energy (medium in motion expanding / contracting). In the case of a sky diver the medium is in motion past the body (wind) but sound is more like the force felt by the sky diver who will indeed feel weightless at terminal velocity. Your comment made me think hard, thanks!
@@Audio_Simon I think you have mistaken the skydiver example, the diver doesn't feel weightlessness at terminal velocity. It's only just at the moment of jumping. At terminal velocity, you feel your weight supported by the air.
@tusharjamwal You are quite right that air will resist the motion of the falling body in the form of drag. That said, an object in free-fall is generally considered effectively weightless, because gravity is pulling down but there is no reaction force from the ground. There is quite a fun thought experiment along these lines to do with Einstein's theory of relativity; I think Veritasium made a video about it.
A saw cutting wood might be a better analogy. Same length of stroke, same number of teeth, you'll be expending far more energy cutting at 10 kHz vs 60 Hz. Part of that energy goes towards the friction of the cutting, part of it counteracting the momentum of the saw itself. The energy lost to counter momentum increases with the mass of the saw. Subwoofers are far larger and heavier than tweeters, so more energy is lost purely in the motion, irrespective of how much actual cutting (sound generated) occurs.
@ 23:20 interesting how, due to the phase shift, there is a 2nd order distortion in the laser-measured output during the beginning cycles. In this tone output it settles down to low distortion in a few cycles I see, but with a music signal that is not a constant tone it would be more or less constant distortion and IMD. Amplifier damping factor would influence this to an extent I think: high damping = quick recovery but larger amplitude distortion, low damping = slow recovery but less distortion of amplitude. Very interesting topic, thank you so much for making this highly educational video, and thank Filipo @ B&C.
Hi, first of all, amazing video, thank you! question: why are you using interlaced video in 2024? I can see the interlaced artifact in several parts of the video.
It was set to interlaced somehow when I exported in Davinci and I didn’t catch it because I’m an audio guy and literally pay no attention to video LOL
@@devinlsheets_alphasound It might seem like a meaningless detail but you effectively converted pristine 60 fps footage to 30 fps which additionally now contains a bunch of combing artifacts. It distracts the audience from the content of your video and is completely unnecessary
25:27 this has been a huge thing in my brain... I feel as if there is a huge difference in sealed chamber pressure vs sound wavefront pressure. This, I feel, is exaggerated in car audio, especially when comparing sealed sound quality vs SPL. There are so many opinions at the moment as to what is best. I enjoy the power of horn-loaded low frequency systems and have experimented with some designs, and in my opinion nothing feels like a 20hz horn; the pressurization at the speed of sound is different than a sealed cabin. Please do something on this topic. This video is a wealth of info. Thank you.
I have to say that watching this video in the middle of the night accompanied by a glass of whiskey is a wonderful experience. Some parts you understand, while others seem familiar, just like the alcohol.
Amazing video, really helps understand all those microphone measurements! Would be interesting to see how a cardioid condenser microphone (capsule with 2 exposed capacitive membranes) would measure, instead of an omnidirectional one, because it would capture a difference in pressure between two sides of the capsule, and not the average pressure around it.
I think it would be nice to watch more of these in action. How would it be from the floor? How would it be from above? Beautiful, I was thinking about this setup for a while and here we are! Cheers, congratulations on your knowledge and imagination and dedication and sharing!!!
AC coupling is just a low-cut filter with a very low cutoff frequency... oh hey, you said it yourself. I better stop commenting. Oh yeah, by the way, absolute phase does not matter as much (no human can hear absolute phase differences from a single source). Of course group delay is more important (yes, they are related by frequency), but even then, up until a certain point it's mostly unnoticeable in a PA kind of situation (room modes and acoustics in general will make sure of that). I still respect this kind of research and compilation of knowledge.
Amazing video. The mic is measuring the acceleration of the mic’s diaphragm not the acceleration of the speaker driver right? Still a second derivative but for different reasons (and subject to different resonant characteristics)
I'm only partway through the video, but I gotta say I'm impressed by this video a lot. I'm an electronics engineer and took a course on audio engineering in college (the kind where we talk about speaker low frequency dynamics, Thiele-Small models, psychoacoustics, etc) and seeing the same content from an actual audio engineer's perspective is really fresh and interesting. 13:29 -- There is a phase shift between the position and pressure waveform because of several factors actually; I believe because the acceleration of the movement of the speaker cone, and therefore of the air molecules it is pushing against, is the 2nd derivative of the position, and since one derivative imparts 90 degrees of phase advance (think about the derivative of sin(x) being cos(x), which is 90 degrees up), you will immediately see a 180 degree phase advance of the acceleration. Roughly speaking force = mass × acceleration, and force is pressure × area, so pressure (that's SPL) and acceleration are in phase. The Thiele-Small model talks about volume velocity; the relationship between volume velocity and particle velocity is analogous to that of pressure and force.
@19:00 Speakers are an inductive load. At DC they are shorts; at higher and higher frequencies they "open up". This is due to the physical sizing of the inductor and how the core saturates. Inductors also introduce a phase lag in the current signal, which is why at higher frequencies you are seeing π radians (180°) of shift.
Brilliant stuff. I'd love to see the comparison between the laser and the mic with a complex wave, to see if the various constituent frequencies all track at 180 degrees to each other, or if there's a compounding effect.
I wonder if it would be possible to numerically integrate the microphone reading twice in order to convert the reading from an "acceleration" reading back to a "position" graph. Of course that would require that the reading start out in complete silence, so that the missing constants for velocity and position we have to add back are known (velocity would be zero in silence, and so would the position relative to the resting point of the speaker). Even then, the reading would have to be accurate enough that velocity and position don't drift with time. Although maybe it would be possible to correct for those constants being unavailable, or the reading being slightly inaccurate, by simply adding a high-pass filter to the output?
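That high-pass idea is in fact the standard trick for taming integration drift; here's a minimal sketch (my own illustration, not from the video - the 10 Hz cutoff and zero-phase filtering are assumptions) of double-integrating a mic trace back to a "position" curve:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def pressure_to_displacement(p, fs, hp_hz=10.0):
    """Integrate a mic signal (pressure ~ cone acceleration) twice to estimate
    displacement, high-passing after each pass so the unknown integration
    constants and any sensor drift don't accumulate."""
    sos = butter(2, hp_hz, btype='highpass', fs=fs, output='sos')
    v = sosfiltfilt(sos, np.cumsum(p) / fs)     # first integration -> 'velocity'
    return sosfiltfilt(sos, np.cumsum(v) / fs)  # second integration -> 'position'

fs = 48_000
t = np.arange(fs) / fs
mic = np.sin(2 * np.pi * 60 * t)                # toy 60 Hz 'mic' signal
pos = pressure_to_displacement(mic, fs)
# pos comes out ~180 degrees out of phase with mic, matching the video.
```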
Amazingly well presented explanation of the very interesting behaviour! 5/5, will view again 'cuz I'm not entirely sure I got it nailed down on the first attempt.
Excellent video! Loved the bit about sound wave trough created at speaker movement crest (sound crest 180° out of phase with speaker movement crest). And how for 30 Hz and 200 Hz, speaker movement crest is 90°-180° out of phase with the driving electrical crest (although I'm not sure I grasped why for that part).
Absolutely fantastic. Add me to the fan base. Could you PLEASE do a video on what's going on *inside* a sealed speaker enclosure, with attention to the energy that is forced back through the driver diaphragm, especially regarding how this effect is or is not captured by standard measurement specs? I think that would be highly illuminating for many in the loudspeaker design community. Thanks for your work.
Thank you very much! You put so much passion and work into your investigations and the video. Awesome setup, respect! All of your arguments sound logical to me. The only thing I need to think about again in detail is what happens here with the acoustic nearfield/farfield with the longitudinal waves we produce. With the mic distance we should be in the nearfield with 90 degrees of phase shift between pressure and velocity, but at 200Hz not. I wonder if it matters, because which sensor is capturing what? Mic: captures pressure. Laser: captures excursion and translates it to voltage, which represents pressure at the source without nearfield/farfield acoustic effects... good reference btw! ...so it is possible that we also see some of these acoustic effects, which you mentioned and explained, in the mic measurement.
My god, this is the most physically correct description I have ever seen of the microphone - speaker interaction. Do you plan on solving the differential equations for the speaker-air-microphone-system? This should give a nice analytical solution, where the theoretical phase should be easy to calculate. (Because if you ain't going to do so, I feel like I want to do it ^^)
19:15... "...at higher and higher frequencies. It doesn't seem to be a set amount of time, but it levels out at around the minus 180 degree mark or so. Basically, above 200Hz, frequencies are going to be about a half-cycle behind in their full steady-state formation." To keep your phase plot on screen, your oscilloscope software adds 360 degrees to the indicated phase data. This will always be the case no matter what phenomenon you are probing. The phase in your graph is in fact continuous, lagging well past 180 degrees; it does not level out. Every apparent vertical jump in the plot indicates a 360 degree instrument scale change. Check out 600 Hz - your instrument is indicating a 360 degree phase shift. Good luck with your endeavors... If something doesn't look as expected, you have either discovered something new or some peer review might help you to learn something new. Keep up your inquisitive nature.
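The wrapping this comment describes is easy to demonstrate; a minimal sketch of a phase trace that lags continuously but displays as jumps, and how numpy's unwrap recovers it:

```python
import numpy as np

true_deg = -np.linspace(0, 720, 100)        # phase keeps lagging past -180
wrapped = (true_deg + 180) % 360 - 180      # what the analyzer screen shows
recovered = np.degrees(np.unwrap(np.radians(wrapped)))
print(np.allclose(recovered, true_deg))     # True: the vertical jumps were cosmetic
```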
I believe that what is shown at 26:45 is wrong. The first 90° phase shift is caused by the air pressure that is built up by the velocity, not the acceleration, of the driver, and so the microphone's diaphragm will move 90° later. The other 90° is caused by the voltage that the microphone produces, because the voltage is produced due to the velocity of the diaphragm. We need a slow motion of the speaker driver and some other surface in front of it to confirm, but correct me if I am wrong
I think you would be correct about the 90° coming from the mic if it were a dynamic mic. Which is why, when we do use a dynamic mic, we see a total of 270 degrees of shift in the relevant frequency ranges, not just 180. Test condenser mics don’t have phase shift, so the 180 degrees is due only to factors leading up to the mic.
@@devinlsheets_alphasound Very interesting. I forgot that condenser mics were a thing. Also I was wondering, because I am building a rotary subwoofer, how much phase shift will it produce compared to an ordinary one, since it functions completely differently.
@@legomachines5970 interesting, not sure how rotary subs will respond in terms of phase. My guess is that it will be similar to a regular driver but I haven’t tested it or read too much about it!
Regarding the 180° phase shift between the laser sensor and the microphone at 13:30: the first is a position sensor, the second a pressure sensor. The pressure at the speaker is proportional to, and in phase with, the acceleration of the membrane. There is a mathematical relationship between the position, velocity and acceleration of a quantity varying sinusoidally. Position to velocity: d/dt sin(t) = cos(t), so they are 90 degrees apart. Velocity to acceleration: d/dt cos(t) = -sin(t), again 90 degrees apart. Position to acceleration: d²/dt² sin(t) = -sin(t), so they are 180 degrees out of phase.
Really love your technical videos, especially with practical examples such as this one and the line array effect! You've confirmed that the pressure/location is always 180 degrees phase shifted, regardless of frequency. But it might be interesting to mention that, above Fs, the input voltage and location are also 180 degrees phase shifted, which means the microphone phase matches the amplifier voltage in phase! In a 3-way speaker, the midrange and tweeter are (/should be) used above their Fs, which means these frequencies should be in phase at the microphone compared with the amplifier voltage. After delay compensation of course... Also, the crossover may (will) mess up stuff depending on implementation, but it's a fun fact that after all the phase stuff it ends up back where it started for a large part of the frequency range. Sadly it is hard to show this in measurements, because any starting/stopping of a sine has additional frequency content that may (will) fall outside of the frequency range where the above remarks are valid...
Amazing video. This has left me paranoid and questioning everything. Question: Is there latency in the time it takes the force from the voice coil to travel all the way through the diaphragm to the surround? Hence could that be the reason you saw some displacement at 200 Hz during the "LED polarity test"?
Yes, when you push on "any" object there is a latency between when you apply that force and when that force arrives at some distance along that object. Essentially a speed of "push". That speed of "push" is exactly equal to the speed of sound in that material. Example: the speed of sound in wood is ~5 km/s. If I had a stick 5 km long and gave one side of the stick a whack, it would take an entire second before the far end of the stick starts to move, no matter how hard I whack it. Note: Tony Andrews from Funktion-One speakers religiously only uses paper driver cones, as the speed of sound through paper is closer to the speed of sound through air (though still much higher) than through other materials like aluminum or poly, etc.
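The "speed of push" arithmetic, as a sketch (the material speeds are ballpark longitudinal sound speeds assumed for illustration; real cone materials vary a lot, and "paper_cone" here is a guess, not a measured figure):

```python
SPEED_M_PER_S = {"air": 343, "paper_cone": 2500, "aluminum": 6300, "wood": 5000}

def push_delay_ms(length_m: float, material: str) -> float:
    """Time (ms) for a 'push' to traverse `length_m` of a material."""
    return 1000 * length_m / SPEED_M_PER_S[material]

print(push_delay_ms(5000, "wood"))       # the 5 km stick: ~1000 ms
print(push_delay_ms(0.2, "paper_cone"))  # voice coil to surround: ~0.08 ms
```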
This is something that's well known by anybody with a basic electronics education. In fact, most of this stuff was known in the early 1900s already.
@@Max24871 Yes. The internet has had the opposite effect on the general public's level of knowledge. A negative effect. They know less about the world. They need a YouTube video on BASIC subjects and phenomena. Like there were no books (you know, real paper ones) etc. Pathetic...
Why would you assume the physical position of the dust cap or any particular spot on the diaphragm matches the transverse waveform? It's not necessarily a phase shift because that implies a comparative source. You are seeing a delay. If you now consider 2 transducers and the amount of delay is the same, they are considered in phase at the chosen frequency.
Question at 12:00: when viewing 2 sine waves, how can you be sure wave 2 is leading or lagging? By just visually looking at two repeating sine waves, isn't it impossible to tell? Question 2: you earlier displayed on the scope a single-wave "impulse" signal. Do you make that available as a wav file? PS - you've got an excellent command of this stuff. Thanks for sharing.
Interesting insight. This raises many questions about what the objective of sound reproduction is, and what matters and what does not. The basis of sound reproduction is for our ears to hear the same sound, i.e. the original sound, versus sampling the original sound and attempting to reproduce the same experience for the listener at a later time. Do the very low frequencies matter, and if so, is phase significant, and how much, is the question.
Great video. The laser measures the position of the speaker membrane. But position is not what's making sound; that's probably speed, or even acceleration. Which is why the frequency response of the laser position took a nose dive: position is the integral of speed, and speed is the integral of acceleration, and integration is actually a low-pass filter operation. So you have 1 or 2 low-pass filters to compare against the sound measurement.
What a great video! I feel like sticking to the driver movement when thinking of the sound is misleading. When making sounds, what is characteristic is the change in pressure, or the frequency of it. Now I am wondering about the electric energy used for acceleration and deceleration of the driver and of the air, which, by the way, is proportional and not a derivative of some degree. Maybe there is some hidden magic to get more from a speaker in its confined movement range.
10:32 it's just the voice coil's inductance-resistance effect, an inevitable first-order low-pass filter. Right? Or is there something else? It actually can have an effect on the bass and even sub region if the L/R ratio is too high.
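For a sense of scale, the current lag of that L/R low-pass is one line of math; a sketch with ballpark big-woofer values (Re and Le here are assumptions, not measured from the video's driver):

```python
import numpy as np

Re, Le = 5.5, 1.5e-3   # ohms, henries (illustrative pro-woofer ballpark)

def coil_current_phase_deg(f: float) -> float:
    """Phase of coil current relative to drive voltage for a series R-L."""
    return -np.degrees(np.arctan(2 * np.pi * f * Le / Re))

for f in (30, 100, 500, 2000):
    print(f, round(coil_current_phase_deg(f), 1))
# Lag is a few degrees at 30 Hz and heads toward -90 degrees well above
# the corner at Re / (2*pi*Le) ~ 580 Hz.
```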
Is the "slowness" of the system at 200hz due to the resistance of the surround and the weight of the moving parts, or does back-EMF go up with frequency? I'm not sure, but Faraday's law states that back-EMF is given by the rate of change of the magnetic flux (Wikipedia). At a higher frequency I would say that the rate of change of magnetic flux is higher, since the coil is moving in and out of the magnet faster to try and match the input signal. I'm sure the answer is somewhere in the middle: the mechanical resistance and inertia of the moving parts will impede the movement of the cone at higher frequencies, but I would also guess EMF has something to do with it. Lovely video though, it's so nice to have people break down the fundamental phenomena at play in speakers, thank you and well done 😊
FACT:
- It's not the signal (voltage) that moves the speaker.
- The speaker is moved by magnetic flux, a result of electric current flowing in the coil.
- Higher frequencies result in more current delay inside the coil. (This is exactly the part missing from this video.)
Therefore the speaker output is a combination of mechanical and EMF delay. Agree: lovely video though, it's so nice to have people break down the fundamental phenomena at play in speakers, thank you and well done 😎
The issue is that most audio engineers are not really engineers / scientists, which is fine. But they cannot interpret measurements correctly. So people create these very strong beliefs about how things work based on what they "measured" themselves. Interpreting experimental data and performing proper measurements is difficult, and it is something that people go to school for many years to learn. So if you have not, do not expect to be able to properly measure things and interpret the data correctly.
This is normal since light is faster than sound ;) Assuming the electronics measuring this do not add any (or add the same) phase shift or time delay. Well done!
Anyone else find interlaced video to be distracting for some reason? This is super interesting... I wonder how hard it would be to create a frequency-based delay filter that would compensate for the delay, and how that would sound. Thanks for making this! I'm gunna have to look at your other videos. Subscribed!
What should my takeaway be to greatly improve my ability to mix a band? What can I practically implement to yield noticeably superior results? How will this help, or change anything about, my workflow at tomorrow's tour date? I have no idea how to implement this level of macro detail into a phase rabbit hole. Phase only matters with acoustic arrivals from multiple sources with differing transit path lengths, and even then only at lower frequencies. The ear-brain system interpolates differently to any microphone-analysis system, and given the target (paying) customers choose to experience concerts via their ears, an objective to achieve perfection in the measurement realm *should* be very low on the priority list. This kind of deep dive might be more applicable at the design and manufacture stage. It's a struggle to find any value for the gigging fader-pushing professional.
All my life I have played viola in symphony orchestras, really good ones. I sit where Bach, Mozart and Beethoven sat, in the middle voices and directly in front of the conductor. I sit where the listener sits, and I worked in public radio as a classical music host and announcer. I could hear what a lot of people pretended to hear.
So the phase is all over the place beyond 1kHz because your delay constants are in terms of integer milliseconds, which means changing a delay from 12ms to 13ms means a whole 1kHz wave was just added and the phase looped, correct?
I think the reason it’s chaotic above 1kHz has more to do with the fact that the source signal is just too low at that point to distinguish from noise. Especially with the laser reading, there’s almost no physical movement above 1kHz in comparison to the main operational frequency range below 100Hz
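For reference, the arithmetic behind the integer-millisecond wrap concern in the parent comment is simple either way:

```python
# Degrees of phase that one millisecond of delay represents at each frequency:
for f_hz in (250, 500, 1000, 2000):
    print(f_hz, 360 * f_hz * 0.001)
# 250 Hz: 90, 500 Hz: 180, 1000 Hz: 360 (a full wrap per ms), 2000 Hz: 720
```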
isn't the woofer a displacement transducer (moreover driven by current, not voltage) and the microphone a pressure transducer? So those signals can't be in phase from the get-go
Adding such an iron mass and making an inductor with the speaker wire! I know this is low frequency, but won't that change the phase angle? Correct me if I'm wrong please. Thanks.
Hi, very interesting video, for the most part above my ability to understand. Anyway, I have a question: I wonder what would happen with a diode placed in series with a woofer. What would the sound be like? This should prevent the woofer cone from moving backward... only from the zero point forward (generating a positive pressure). I ask this because if we measure the sound pressure during a concert at the listening spot, the pressure will vary only above zero, never negative, I guess. Instead, a cone that moves backwards will cause a negative pressure towards the listening spot. Could this be unnatural? The goal is always to reproduce reality.
You've gotta be one of the most passionate, most interested, most curious, and smartest people in audio on the Internet right now.
If you have a solid understanding of kinematics and fourier transforms it makes this stuff easier to grasp, but that is honestly more esoteric than what most people need to actually apply this in practice. This is BY FAR one of the best intuitive explanations of how phase works with sound reinforcement that I have ever come across. It's something I've understood but always had a hard time explaining in detail because it's such a math heavy topic to really get in to. Great job bridging that gap! Would love to see more.
Inexpensive marginal test gear has released a tidal wave of inaccurate, maybe, and outright made-up-shit from folks focused more on ad revenue than knowing what the phuk they are spewing.
You are the only one in the web teaching actual science with audio engineering. Things most audio “engineers” have no clue about. Thank you
He really isnt. You just need to broaden your horizons a bit. the videos by xiph are superb. there are many tacking it at the level of PCB design for audio as well.
I'm not saying this isnt a good video, because it is, but it isnt uniquely good by any means.
@@jeremyglover5541 Are you talking about the Audio University channel, when you say "xiph"?
www.youtube.com/@AudioUniversity/videos
@@jeremyglover5541you're not wrong, but it's also kind of telling that your example is xiph, because Monty's videos are uniquely the best resource on their topics despite being around 15 years old now.
@@jeremyglover5541do you have any recommendations for people doing audio PCB design?
An insane amount of useful information and I’m all here for it! The visualizations are super helpful too, thank you so much for the effort!
For the first time ever, I finally understand the affects that phase response has on a system because of this demo. Thank you!
sorry to be that guy, but effects*
Most educational half hour I've spent in a long time. Awesome. Thanks!
Wait, that was 30 minutes? I just restarted the video to read the comments. I was too busy soaking in the info to read comments or notice half an hour passed on my first time through
That was one of the best descriptions I've ever seen for the behaviors of the "signal, driver, soundwave, mic" relationship. It's a very complicated process, and this is a great way to break it down and understand each component. For your next challenge, you should try to do the same type of breakdown for a bass-reflex cabinet. That is one of the most important and least understood elements of speaker/cabinet design. I've been struggling to wrap my head around it for ages.
This is now one of my favourite videos on tech. It's just... the applied sound engineering and exploration, I had to rewatch it to gain insight
Love it, this reminds me greatly of my electrical engineering classes at uni. You're like connecting all the dots when it comes to audio applications
Amazing! Amazing amazing. This is the kind of content I've always wished existed. Now we just need a pipeline for absolute beginners to get to this point
Great video. I really like how you've taken something that can very dry, mathematical and abstract and make it understandable in an intuitive way
I believe there's a term called Jerk when decribes the rate of cahnge of acceleration. Also wonder how damping factor and power bandwidth comes into play. Awesome video, thanks!
Yes, physics behaves just like jerks! 😂👍 Grinds the hecks outta ya, for shiz'n'giggles 😃👌
Some other terms I may have made up - bounce (sharktooth), ramp (sawtooth), step (square wave), impulse, doublet (N-wave), jerk, etc. Not many are taught the elementary waveforms.
We may be jerks, but we also have a sense of humor, the next three derivatives are snap, crackle, and pop. Although, back when I was playing with this stuff, "jounce" was the derivative of jerk.
The derivatives go: position, velocity, acceleration, jerk, snap crackle, pop
True. And snap is the ROC of jerk, crackle the ROC of snap and pop the ROC of crackle.
In the near-field of a dipole, which is where you're measuring with your microphone, the pressure and the velocity are 180 degrees out of phase. The driver cone determines the air velocity and the microphone picks up the pressure field. That's where your phase shift at 14:00 comes from.
Could you, please, decipher your thought for us? The wave of 20Hz is about 55 feet long.How will a microphone position offset by a feet or two affect the result?
I don't know much about audio engineering, but I believe he's saying the driver creates an inverse pressure wave with an in phase velocity. Imagine that the biggest change in pressure doesn't happen when the speaker goes from rest/neutral to outermost, but actually when it goes from outermost to innermost positions. The hz doesn't so much matter as the speed of sound itself, too. A very low frequency wouldn't be captured on a mic further away, unless the velocity was astronomical.
Edit: I realized right after posting that my explanation would result in 90 degrees in shift, and also I had paused right before he started explaining velocity being pressure's derivative, which isn't what this poster was talking about, I don't think. The video's explanation at around 24:00 sums it up, no?
@@LuxLucidOfficial Yes, the explanation at 23-25m is a good summary how the air pressure goes in the near field. My question was about "The driver determines cone velocity and the mic picks up the pressure.." To me it sounded like the driver throws apples and the mic picks up oranges, so what's the rationale here?
It is not a dipole. It is a baffled driver. Pressure and velocity are not generally 180 degrees out of phase, since that will depend on the acoustic environment.
@@Rene_Christensen In the very near-field it doesn't matter if it is baffled or not. The moving membrane displaces the air in a way that requires a dominant dipole moment of the multipole series. And the near-field does not know anything about the acoustic environment, by definition. Check out the derivation of the velocity potential of an oscillating solid sphere, in which case you get a pure dipole. The pressure is the negative time derivative of the velocity potential and the velocity of the object coincides with the field velocity at the surface, making both the spatial derivative of the velocity potential. You can then determine yourself that they are exactly pi out of phase near the membrane. Only at a distance that is significantly greater than the membrane radius you will start to see a significant deviation from this relationship.
You can find the relevant derivations here for example: www.math.fsu.edu/~hju/cht12.htm
Love it when i can feel like i understand a complex subject without actually understanding the any of it 😅. But in all honesty, you did an excellent job of explaining things i know little about without losing me and the length of the video perfectly pushed my brain all the way out to its limits!
I totally love the detailed technical explanation. Fantastic video.
Little brain numb (in a good way) after watching it with high focus.
I learned quite a lot from this video. Thanks for the great work!
I really didn't expect to get info on both the advanced electronics I expected, but on fluid dynamics as well. I never thought about that before, but it really does make sense having to factor air pressure, displacement etc. And that's all Fluid Dynamics/Mechanics. Totally awesome.
I look at those Yamaha Dante interfaces everyday,This a great video. Great channel
Hands down the best video I've seen describing audio phase relationships both in thoroughness and in critical thinking. I can only hope for more people to enjoy this!
This must be one of the most beautiful videos I have seen. I watched it 3 times already and will keep doing it. It explains so much and rises so many new questions...
When i was at university we learned that particle velocity and sound pressure are out of phase in the nearfield and in phase in the far field. And considering that the Driver stimulate the particles directly, resulting in particle velocity and the microphone converting sound pressure to electrical current, your results would make sense (basically also what you explained at 23:55). It would be interesting to see the same measurements in the far field.
I can already tell you what happens in the Farfield because we did do those measurements, they just didn’t make it in this video. What happens is that the mic measurement remains +180 degrees out of polarity with the laser but the mic actually gets slightly ahead of the laser by about 1’ or so. This shift happens very quickly, within a few feet of the driver, and remains consistent all the way out to 200’ and applies at all frequencies. We realized this has to do with the fact that the acoustic center of the driver is actually about 1 foot in front of it. Might make a whole video on this topic.
@@devinlsheets_alphasound That would be great! Because heard and read alot about phase effects of drivers and sound but never really understood what it all means physically. I love how your experiment does exactly that: what is moving and how. Maybe you could even try to visualize how the air moves with dry ice or small plastic balls on strings? just a thought
@@devinlsheets_alphasound "We realized this has to do with the fact that the acoustic center of the driver is actually about 1 foot in front of it"
Yes, it varies for different cabinets.
Would be nice to do the measurements to verify (by multiple independent entities!!!) the model(s)/math.
@@devinlsheets_alphasound That is as it should be. For somewhat free-field conditions (I realize you are in a room, but still), the pressure is in-phase with acceleration of the piston (not the acceleration where the pressure is measured!, for which in the far-field the velocity indeed is in-phase with pressure for pure propagation, as mentioned in some posts here), because the acoustic environment seen from the piston is mass-like at lower frequencies for a typical cone size. It is not quite a perfect match between piston acceleration and pressure, but for example for a flat 18 inch piston in a baffle at 1 m distance, the two are only 5 degrees out of phase at 300 Hz and even closer at lower frequencies. With acceleration being 180 degrees out of phase with displacement, what you see in your measurements is what you should see. I should say that the phase coming the distance travelled from the piston to the microphone (linear phase) needs to be removed in these calculations, so in your measurement setup, you need to either input this distance in some setting, or manually get rid of it.
That's an incredibly inciteful tutorial. I had no idea how much went in in the journey of music through my sound system! Terrific demonstration!
I'm not sure I'll ever need any of this knowledge, but it was super interesting to watch.
Your explanations are very detailed, yet easy to grasp.
The phase relationship between the input signal and the output from the speakers is actually much more complicated in practice. The reason is that the equation m.a+c.v+k.x=F(x) applies to established sinusoidal stimuli. In practice, the musical signal is not like that. It is rather a transient process in which the behavior of the speaker is determined by an inhomogeneous system of differential equations, which makes the issue of instantaneous phase shift quite complicated. Thank a lot for video.
Wish i’d found your channel earlier. I just finished my bachelor degree in music engineering and your videos would’ve made my time much much easier lol. your videos are so detailed and intuitive please keep making these.
Great work! This is the level of detail I would love to see from all audio hardware testing. The only mistake I noticed is that you didn't deinterlace the video around 16:30 which results in horizontal comb artefacts in video. Another example of missing deinterlacing can be seen around 18:22. I'd recommend using ffmpeg for deinterlacing because it has resulted in best quality for me but other options do exist, too.
Immediately subscribed! The fact YT just recommended your video to me tells me its algorithm isn't as good as it should be. Awesome work! This must have taken a lot of time!
The current lags behind the voltage in an inductive AC line. Since speaker lines are more or less varied-voltage AC, and you're coiling it around that ferrite ring, you're creating inductance. That inductance will affect the phase and frequency of the output. This is why we use coil inductors to make low-pass circuits.
And that's not to mention all the other interference from various amp stages and whatever else.
So long as the delay is no more than ~2ms it's fine, humans generally can't hear intervals that small anyway.
The animation 24 minutes in was such an awesome lightbulb moment, thank you!! Epically useful video!
Have you thought about the mic being too close to the coil and reading magnetic force instead of air pressure?
Good thought! I wonder flux extends far enough though but still, interesting thought!
Came here looking for a comment like this. Maybe you can remove the permanent magnet part of the speaker and drive the air coil in front of the mic to check.
I changed my mind after watching the complete video, it all makes sense now. Microphone is reporting acceleration, not position.
There's a small mistake on the "AC coupled" waveform drawing (graph on the right) at 1:55 , after the "DC" portion of the signal, the AC coupled signal should not have any positive slope since there is no positive slope in the original signal. (like when the DC battery is released at 2:12)
Thanks for the nice thorough video again :)
TRUE lol that’s what happens when I make quick and dirty graphics with PowerPoint on long airplane rides.
And btw DC is blocked because running 2 kilowatts of DC to a speaker will ruin your day very fast and you won't hear a thing.
As the first waveform is passed intact, this is not a large enough time snippet to say if the flat section is even DC or just a bit of lower frequency out of context. Most likely the flat "DC" bit would come through DC blocking with a little slope and a slow rebound if the pulse is not followed shortly by a matching negative excursion.
18:50 I think the delay is caused by the inertia of the moving parts of the speaker. And that cannot get worse than 180 degrees because if it were delayed more, it would catch the next incoming electrical wave and that would result in effective speed-up of the movement reducing the delay to less than 180 degrees again. If you have constant latency, it's caused by some kind of processing, not by physical movement of the speaker.
Absolutely INCREDIBLE content and presentation - lifetime producer and audio & physics enthusiast here. You are one of the top TH-camr's I've come across mate
nothing is in phase my friend
Except the grid
Actually....lasèrs...they got their shit together by definition.
😆
😂
reminds me of Rick & Morty with "true level" experience
Wow this all makes sooo much sense. Wish I found your channel sooner. This has answered alot of questions I've had when figuring out phase relationships between multiple driver setups. I can actually hear phase differences in drivers after some listening. I went and bought rta mics to see if I could see what I was hearing. Turns out I was right. This video puts everything in perspective for me fromy own experience. It all makes perfect sense. Finally a channel that makes my brain tingle! I love it. ❤
Love it, appreciate your curiousity and determination to understand things. Some nuggets of gold in here for those into audio system measurements
one thing I can add , as I know a sound person or two, is that delay lines are often used with big systems to ensure all drivers are in phase with each other. And if any are at a different distance either ahead or behind the main drivers, then the sound from them is phase corrected with what is comming from ahead or behind them
Delay lines use delays to make the sound sources in "time" with each other, not in phase with each other which would be impossible to achieve being that they are radiating from 2 different locations. Without the delay you would hear a very distinct echo from hearing the delay speaker first and then the main speakers some time after that. It is very disorientating. That echo starts when the two sources are more then about 40 ms apart in time (about 40 feet of distance). If 2 sound sources are less then 40 ms apart then they will sound as one signal. This is called the "Haas Effect". Now what is interesting is that the brain will locate the sound from what sound signal it hears first even if the second signal is somewhat louder then the first. For a delay line, as long as the delay speaker is "slightly" behind in time ~20ms of the main speakers, your brain will still think the source of the sound is from the main speaker (stage) even though the delay speaker is louder and may be off to the side or above you or even slightly behind you.
While nothing new to the folks designing such hardware, this is an awesome first dive into this topic. Great work!
Interesting things I've noticed:
- As pointed out in 2:30 1st order HPF will create a 90° phase shift. A 2nd order filter will create a 180° phase shift.
- In 17:10 you can see the driver reach 180° phase shift. At 180° the speakers output becomes pretty useless. This seems very similar in behavior to the Gain Bandwidth Limit(GBW) of an OpAmp. (The GBW basically dictates how much of your maximum amplification you can use for a certain frequency). When driving the speaker at a higher volume, I would expect the amplitude to drop even quicker, but the phase behavior to remain the same.
- I would love to see a plot of actual time delay instead of phase
This video is brilliant. Taking advantage of the fact that your branch is also music and not just a sound scientist, make experiences with real music (even if you don't have the ability to explain what you see, and therefore it would be much more fragile conjectures). They are all individual or isolated signals, when in reality we work with complex signals. It is true that pink noise is a complex signal, but it still lacks the transient component, which is a key attribute in music.
Fascinating.
You were able to come up with great ways to show what I have suspected for years.
In a car, what pushes you (pressure) back in the seat? Acceleration. Not speed and not position.
Similarly the air in front of the speaker cone becomes pressurised by the acceleration of the cone, not the speed or the position.
This should also explain why high frequencies have more acoustic power than lows at the same driver displacement - the acceleration is higher.
EDIT: In hindsight I misspoke. Pressure is not created by acceleration, but sound (modulation of pressure) is. Thus g-force felt in a car is analogous to sound energy.
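For anyone who wants to see that scaling, a tiny sketch of my own of the ω² relationship (the 1 mm excursion is just an example value):

```python
# Peak acceleration of sinusoidal motion x(t) = A*sin(2*pi*f*t) is A*(2*pi*f)^2,
# so at equal excursion, acceleration grows with frequency squared.
import numpy as np

A = 1e-3  # 1 mm peak displacement (arbitrary example)
for f in (30.0, 200.0):
    w = 2 * np.pi * f
    print(f"{f:>5.0f} Hz: peak accel = {A * w**2:7.1f} m/s^2")
# (200/30)^2 ~ 44x the acceleration for ~6.7x the frequency
```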
No, pressure is caused by the motion of the cone.
By your logic, a skydiver would feel no air pressure once he reached terminal velocity.
You are confusing the potential energy of sitting in a car seat at constant velocity with the kinetic energy of a column of air being moved by a cone.
@joshua43214 You make a great point! I don't think it's a perfect analogy. However, I would note that sound is not a constant pressure (as it would be if the cone were moving at a constant velocity - that's DC). Sound is changes in pressure (the cone changing velocity, accelerating/decelerating), and in truth it's a continuous cycle of conversion between potential energy (compressed/rarefied medium) and kinetic energy (medium in motion, expanding/contracting).
In the case of a sky diver the medium is in motion past the body (wind) but sound is more like the force felt by the sky diver who will indeed feel weightless at terminal velocity.
Your comment made me think hard, thanks!
@@Audio_Simon I think you have mistaken the skydiver example, the diver doesn't feel weightlessness at terminal velocity. It's only just at the moment of jumping. At terminal velocity, you feel your weight supported by the air.
@tusharjamwal You are quite right that air resists the motion of a falling body in the form of drag. That said, an object in free fall is generally considered effectively weightless, because gravity is pulling down but there is no reaction force from the ground. There is quite a fun thought experiment along these lines to do with Einstein's theory of relativity; I think Veritasium made a video about it.
A saw cutting wood might be a better analogy.
Same length of stroke, same number of teeth, and you'll be expending far more energy cutting at 10 kHz than at 60 Hz.
Part of that energy goes towards the friction of the cutting. Part of it counteracting the momentum of the saw itself.
The energy lost to counter momentum increases with the mass of the saw.
Subwoofers are far larger and heavier than tweeters, so more energy is lost purely in the motion, irrespective of how much actual cutting (sound generation) occurs.
@ 23:20 - interesting how, due to the phase shift, there is 2nd-order distortion in the laser-measured output during the beginning cycles. With this tone it settles down to low distortion within a few cycles, but with a music signal that isn't a constant tone, it would be distortion and IMD more or less all the time. Amplifier damping factor would influence this to an extent, I think: high damping = quick recovery but larger-amplitude distortion; low damping = slow recovery but less amplitude distortion. Very interesting topic. Thank you so much for making this highly educational video, and thank Filipo @ B&C.
Hi, first of all, amazing video, thank you!
Question: why are you using interlaced video in 2024? I can see interlacing artifacts in several parts of the video.
It was set to interlaced somehow when I exported in Davinci and I didn’t catch it because I’m an audio guy and literally pay no attention to video LOL
@@devinlsheets_alphasound It might seem like a meaningless detail, but you effectively converted pristine 60 fps footage to 30 fps that now also contains a bunch of combing artifacts. It distracts the audience from the content of your video and is completely unnecessary.
21:00 This was the most interesting part for me! Great work explaining the microphone behavior.
25:27 - this has been a huge thing in my brain... I feel as if there is a huge difference between sealed-chamber pressure and sound-wavefront pressure. This, I feel, is exaggerated in car audio, especially when comparing sealed sound quality vs. SPL. There are so many opinions at the moment as to what is best. I enjoy the power of horn-loaded low-frequency systems and have experimented with some designs, and in my opinion nothing feels like a 20 Hz horn; the pressurization at the speed of sound is different from a sealed cabin. Please do something on this topic. This video is a wealth of info. Thank you.
I have to say that watching this video in the middle of the night accompanied by a glass of whiskey is a wonderful experience. Some parts you understand, while others seem familiar, just like the alcohol.
I love audio but this is totally over my head.
It’s awesome! Thank you for your videos.
Amazing video, really helps understand all those microphone measurements!
Would be interesting to see how a cardioid condenser microphone (capsule with 2 exposed capacitive membranes) would measure, instead of an omnidirectional one, because it would capture a difference in pressure between the two sides of the capsule, not the average pressure around it.
I think it would be nice to watch more of these in action. How would it be from the floor? How would it be from above? Beautiful - I'd been thinking about this setup for a while, and here we are! Cheers, and congratulations on your knowledge, imagination, dedication, and sharing!
AC coupling is just a low-cut filter with a lower cutoff frequency... oh hey, you said it yourself. I'd better stop commenting. Oh yeah, by the way, absolute phase does not matter as much (no human can hear absolute phase differences from a single source). Of course group delay is more important (yes, they are related by frequency), but even then, up to a certain point it's mostly unnoticeable in a PA situation (room modes and acoustics in general will make sure of that). I still respect this kind of research and compilation of knowledge.
Amazing video. The mic is measuring the acceleration of the mic's diaphragm, not the acceleration of the speaker driver, right? Still a second derivative, but for different reasons (and subject to different resonant characteristics).
I'm only partway through the video, but I gotta say I'm impressed by this video a lot. I'm an electronics engineer and took a course on audio engineering in college (the kind where we talk about speaker low-frequency dynamics, Thiele-Small models, psychoacoustics, etc.), and seeing the same content from an actual audio engineer's perspective is really fresh and interesting.
13:29 -- There is a phase shift between the position and pressure waveforms; I believe it's because the acceleration of the speaker cone (and therefore of the air molecules it pushes against) is the 2nd derivative of the position. Since one derivative imparts 90 degrees of phase advance (think of the derivative of sin(x) being cos(x), which is 90 degrees ahead), two derivatives immediately give you a 180-degree phase advance for the acceleration. Roughly speaking, force is proportional to acceleration, and force is pressure times area, so pressure (that's SPL) and acceleration are in phase.
In the Thiele-Small model there is the concept of volume velocity; the relationship between volume velocity and particle velocity is analogous to that between pressure and force.
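A quick numerical check of the derivative-equals-90° claim (a sketch of my own, assuming an idealized sine with no driver dynamics):

```python
# Differentiate sin(2*pi*f*t) twice numerically: velocity lands ~90 deg
# ahead of position, acceleration ~180 deg (i.e. it tracks -position).
import numpy as np

f = 50.0
t = np.linspace(0, 0.1, 10_000)
x = np.sin(2 * np.pi * f * t)   # position
v = np.gradient(x, t)           # velocity ~ cos  (+90 deg)
a = np.gradient(v, t)           # acceleration ~ -sin (+180 deg)

a_norm = a / np.max(np.abs(a))
print(f"corr(accel, -position) = {np.dot(a_norm, -x) / np.dot(x, x):.3f}")  # ~1.0
```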
Wow, that was an excellent presentation! I have one question: how did you account for the amplifier's damping factor influencing the speaker?
@19:00 - Speakers are an inductive load. At DC they are nearly shorts; at higher and higher frequencies they "open up". This is due to the physical sizing of the inductor and how the core saturates. Inductors also introduce a phase lag in the current, which is why at higher frequencies you are seeing π radians (180°) of shift.
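Treating the coil as a plain series R-L (the values below are typical-looking guesses of mine, not measurements from the video) shows the current lag growing with frequency:

```python
# Current through a series R-L lags the voltage by atan(w*L/R).
import numpy as np

R, L = 6.0, 0.5e-3  # ohms, henries (assumed example values)
for f in (30.0, 200.0, 2000.0):
    Z = R + 1j * 2 * np.pi * f * L
    print(f"{f:>6.0f} Hz: |Z| = {abs(Z):5.2f} ohm, current lag = {np.degrees(np.angle(Z)):4.1f} deg")
# the lag is tiny in the sub region and grows toward 90 deg at high frequency
```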
Only five minutes in but this is sick! Love it!
God damn, it took me years to understand, but this video helped me A LOT!
First time viewer. Awesome stuff. Would love to see a part ii of what this means in practice
Brilliant stuff. I'd love to see the comparison between the laser and the mic with a complex wave, to see if the various constituent frequencies all track at 180 degrees to each other, or if there's a compounding effect.
Thanks for this insight!
Exactly the content for my morning coffee (and then I have to watch it another 5 times to somehow comprehend 😅)
I wonder if it would be possible to numerically integrate the microphone reading twice in order to convert it from an "acceleration" reading back to a "position" graph. Of course, that would require the reading to start in complete silence so that the missing constants for velocity and position are known (velocity would be zero in silence, and so would the position relative to the speaker's resting point). Even then, the reading would have to be accurate enough that velocity and position don't drift over time. Although maybe it would be possible to correct for those constants being unavailable, or the reading being slightly inaccurate, by simply adding a high-pass filter to the output?
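Something like this naive sketch is what I have in mind (assuming a clean recording that starts in silence, with an arbitrary 5 Hz high-pass to kill drift):

```python
# Double-integrate an "acceleration-like" mic signal back to position,
# high-passing after each integration to remove the unknown constants/drift.
import numpy as np
from scipy import signal

fs = 48_000
t = np.arange(fs) / fs
accel = np.sin(2 * np.pi * 40 * t)  # stand-in for the mic reading

def integrate_hp(x: np.ndarray, fs: int, fc: float = 5.0) -> np.ndarray:
    y = np.cumsum(x) / fs                                  # crude integration
    b, a = signal.butter(2, fc / (fs / 2), btype="highpass")
    return signal.filtfilt(b, a, y)                        # drift removal

velocity = integrate_hp(accel, fs)
position = integrate_hp(velocity, fs)
```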
Congrats on the bump in views. Great content.
This content was very nice and blissfully pleasant to watch! Thanks
Amazingly well-presented explanation of very interesting behaviour! 5/5, will view again 'cuz I'm not entirely sure I got it nailed down on the first attempt.
Thank you for this excellent presentation of complex topic!
Excellent video! Loved the bit about sound wave trough created at speaker movement crest (sound crest 180° out of phase with speaker movement crest). And how for 30 Hz and 200 Hz, speaker movement crest is 90°-180° out of phase with the driving electrical crest (although I'm not sure I grasped why for that part).
This changed my life
It was all an illusion...
Absolutely fantastic. Add me to the fan base. Could you PLEASE do a video on what's going on *inside* a sealed speaker enclosure, with attention to the energy that is forced back through the driver diaphragm - especially regarding how this effect is or is not captured by standard measurement specs? I think that would be highly illuminating for many in the loudspeaker design community. Thanks for your work.
Thank you very much! You put so much passion and work in your investigations and the video. Awesome setup, respect!
All of your arguments sound logical to me.
The only thing I need to think about again in detail is what happens here with the acoustic near field/far field for the longitudinal waves we produce. At this mic distance we should be in the near field, with a 90-degree phase shift between pressure and velocity - but not at 200 Hz. I wonder if it matters, because which sensor is capturing what?
Mic: captures pressure.
Laser: captures excursion and translates it to a voltage, which represents pressure at the source without near-field/far-field acoustic effects... a good reference, by the way!
...so it is possible that we also see some of these acoustic effects, which you mentioned and explained, in the mic measurement.
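For scale, the usual near-field criterion kr << 1 puts the transition around r = λ/(2π); a back-of-envelope sketch of my own, ignoring the driver's physical size:

```python
# Near-field/far-field transition distance: r ~ wavelength / (2*pi).
import math

c = 343.0  # m/s, speed of sound in air
for f in (30.0, 200.0):
    print(f"{f:>5.0f} Hz: r ~ {c / f / (2 * math.pi):.2f} m")
# ~1.82 m at 30 Hz, but only ~0.27 m at 200 Hz - a close mic straddles the two regimes
```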
My god, this is the most physically correct description I have ever seen of the microphone-speaker interaction. Do you plan on solving the differential equations for the speaker-air-microphone system? This should give a nice analytical solution, where the theoretical phase is easy to calculate. (Because if you aren't going to, I feel like I want to do it ^^)
19:15...
".... at higher and higher frequencies. It doesn't seem to be a set amount of time, but it levels out at around the minus 180 degree mark or so. Basically, above 200Hz, frequencies are going to be about a half-cycle behind in their full steady-state formation."
>> To keep your phase plot 'on screen', your oscilloscope software adds 360 degrees to the (indicated) phase data. This will always be the case no matter what phenomenon you are probing. The phase in your graph is in fact continuously lagging well past 180 degrees >> it does not level out. Every apparent vertical jump in the plot indicates a 360-degree instrument scale change. Check out 600 Hz: your instrument is indicating a 360-degree phase shift.
Good luck with your endeavors... If something doesn't look as expected, you have either discovered something new, or some peer review might help you learn something new.
>> Keep up your inquisitive nature.
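That wrapping is easy to reproduce (and undo) in NumPy - a minimal sketch with a synthetic, continuously lagging phase of my own invention:

```python
# Fold a continuously lagging phase into +/-180 deg (what the analyzer
# shows), then recover the true curve with np.unwrap.
import numpy as np

true_phase = -np.linspace(0, 900, 50)            # degrees, keeps lagging
wrapped = (true_phase + 180) % 360 - 180         # the on-screen version
recovered = np.degrees(np.unwrap(np.radians(wrapped)))
print(f"{recovered[-1]:.0f} deg")                # ~ -900: no leveling out at -180
```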
I believe that what is shown at 26:45 is wrong. The first 90° phase shift is caused by the air pressure being built up by the velocity, not the acceleration, of the driver, so the microphone's diaphragm will move 90° later. The other 90° comes from the voltage the microphone produces, because that voltage is generated by the velocity of the diaphragm. We'd need a slow-motion shot of the speaker driver and some other surface in front of it to confirm, but correct me if I am wrong.
I think you would be correct about the 90° coming from the mic if it were a dynamic mic. Which is why, when we do use a dynamic mic, we see a total of 270 degrees of shift in the relevant frequency ranges, not just 180. Test condenser mics don’t have phase shift, so the 180 degrees is due only to factors leading up to the mic.
@@devinlsheets_alphasound Very interesting. I forgot that condenser mics were a thing. Also, I was wondering: since I am building a rotary subwoofer, how much phase shift will it produce compared to an ordinary one, given that it functions completely differently?
@@legomachines5970 Interesting - not sure how rotary subs will respond in terms of phase. My guess is that it will be similar to a regular driver, but I haven't tested it or read too much about it!
Wow, this was so much fun, stepping this out the way you have... Learning tons... Love it...
Even for a layman like myself this was incredibly insightful!
Regarding the 180° phase shift between the laser sensor and the microphone at 13:30: the first is a position sensor, the second a pressure sensor. The pressure at the speaker is proportional to, and in phase with, the acceleration of the membrane. There is a fixed mathematical relationship between the position, velocity, and acceleration of a sinusoidally varying quantity. Position to velocity: d/dt sin(ωt) = ω cos(ωt) - they are 90 degrees apart. Velocity to acceleration: d/dt ω cos(ωt) = -ω² sin(ωt) - again 90 degrees. Position to acceleration: d²/dt² sin(ωt) = -ω² sin(ωt) - they are 180 degrees out of phase.
Curiosity, real-world original tests, weird data that makes it more interesting - very, very informative. Thanks man, I was full today too...
Really love your technical videos, especially with practical examples such as this one and the line array effect!
You've confirmed that pressure/position is always 180 degrees phase shifted, regardless of frequency. But it might be interesting to mention that, above Fs, the input voltage and position are also 180 degrees phase shifted, which means the microphone phase matches the amplifier voltage in phase! In a 3-way speaker, the midrange and tweeter are (or should be) used above their Fs, which means those frequencies should be in phase at the microphone compared with the amplifier voltage - after delay compensation, of course... Also, the crossover may (will) mess up stuff depending on implementation, but it's a fun fact that after all the phase stuff, it ends up back where it started for a large part of the frequency range. Sadly, it is hard to show this in measurements, because starting/stopping a sine has additional frequency content that may (will) fall outside the frequency range where the above remarks are valid...
Amazing video. This has left me paranoid and questioning everything. Question: is there latency in the time it takes the force from the voice coil to travel all the way through the diaphragm to the surround? Could that be the reason you saw some displacement at 200 Hz during the "LED polarity test"?
Yes - when you push on "any" object, there is a latency between when you apply the force and when that force arrives at some distance along the object. Essentially a speed of "push". That speed of "push" is exactly equal to the speed of sound in that material.
Example: the speed of sound in wood is ~5 km/s. If I had a stick 5 km long and gave one end a whack, it would take an entire second before the far end of the stick starts to move, no matter how hard I whack it.
Note: Tony Andrews of Funktion-One religiously uses only paper driver cones, as the speed of sound through paper is closer to the speed of sound through air (though still much higher) than through other materials like aluminum or poly, etc.
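The arithmetic is just length over material sound speed - a toy sketch using the 5 km stick from above plus a made-up cone dimension of my own:

```python
# Time for a "push" to traverse a material: t = L / c_material.
def push_delay_s(length_m: float, c_material_m_s: float) -> float:
    return length_m / c_material_m_s

print(push_delay_s(5000.0, 5000.0))  # the 5 km stick at ~5 km/s: 1.0 s
print(push_delay_s(0.15, 5000.0))    # ~15 cm from coil to surround: ~30 microseconds
```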
Man, your research is underrated - this should be put in a paper ❤
This is something that's well known by anybody with a basic electronics education. In fact, most of this stuff was already known in the early 1900s.
@@Max24871 Ah. ty then
@@Max24871 Yes. The internet has had the opposite effect on the general public's level of knowledge - a negative effect. They know less about the world. They need a TH-cam video on BASIC subjects and phenomena, as if there were no books (you know, real paper ones), etc. Pathetic...
This is good content. Thanks for the effort!
Why would you assume the physical position of the dust cap, or any particular spot on the diaphragm, matches the transverse waveform? It's not necessarily a phase shift, because that implies a comparative source; you are seeing a delay. If you now consider 2 transducers and the amount of delay is the same, they are considered in phase at the chosen frequency.
Question at 12:00: when viewing 2 sine waves, how can you be sure wave 2 is leading rather than lagging? Just by visually looking at two repeating sine waves, isn't it impossible to tell? Question 2: you earlier displayed a single-wave "impulse" signal on the scope - do you make that available as a WAV file? PS - you've got an excellent command of this stuff. Thanks for sharing.
Interesting insight. This raises many questions about the objective of sound reproduction, and what matters and what does not. The basis of sound reproduction is for our ears to hear the same sound - i.e., the original sound - versus sampling the original sound and attempting to reproduce the same listener experience at a later time. Do the very low frequencies matter, and if so, is phase significant, and how much? That is the question.
Great video!
The laser measures the position of the speaker membrane.
But position is not what's making the sound - that's more likely velocity or even acceleration.
Which is why the frequency response of the laser position took a nosedive.
Because position is the integral of velocity, and velocity is the integral of acceleration.
And integration is actually a low-pass filter operation.
So you have 1 or 2 low-pass filters to compare against the sound measurement.
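A one-line check of the "integration = low-pass" point (my own sketch, not from the video): the integral of sin(ωt) has amplitude 1/ω, i.e. it rolls off at about 6 dB per octave.

```python
# Integrating sin(w*t) gives -cos(w*t)/w: amplitude falls as 1/frequency.
import numpy as np

for f in (100.0, 200.0):
    amp = 1.0 / (2 * np.pi * f)
    print(f"{f:.0f} Hz: integral amplitude = {amp:.5f} ({20 * np.log10(amp):.1f} dB)")
# doubling frequency costs ~6 dB; two integrations (accel -> position) cost ~12 dB/octave
```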
Hi, what DATS version do you have? And can you measure an impedance sweep that ranges above 20 kHz?
What a great video! I feel like sticking to the driver's movement when thinking about the sound is misleading. When making sound, what is characteristic is the change in pressure and its frequency.
Now I am wondering about the electrical energy used for the acceleration and deceleration of the driver and of the air - which, by the way, is proportional to the signal and not some derivative of it.
Maybe there is some hidden magic to get more from a speaker within its confined movement range.
10:32 - it's just the voice coil's inductance-resistance effect, an inevitable first-order low-pass filter, right? Or is there something else?
It can actually have an effect on the bass and even the sub region if the L/R ratio is too high.
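Putting illustrative numbers on that (the R and L values are assumptions of mine, not measurements): the coil's electrical corner sits at f = R/(2πL), as sketched below.

```python
# First-order corner frequency of a voice coil modeled as series R-L.
import math

def coil_corner_hz(R_ohm: float, L_henry: float) -> float:
    return R_ohm / (2 * math.pi * L_henry)

print(f"{coil_corner_hz(6.0, 1.0e-3):.0f} Hz")  # 6 ohm, 1 mH   -> ~955 Hz
print(f"{coil_corner_hz(6.0, 4.0e-3):.0f} Hz")  # a high-L coil -> ~239 Hz, into the bass
```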
Is the "slowness" of the system at 200 Hz due to the resistance of the surround and the weight of the moving parts, or does back-EMF go up with frequency?
I'm not sure, but Faraday's law states that back-EMF is given by the rate of change of the magnetic flux (Wikipedia). At a higher frequency, I would say the rate of change of magnetic flux is higher, since the coil is moving in and out of the magnet faster to try to match the input signal. I'm sure the answer is somewhere in the middle: the mechanical resistance and inertia of the moving parts will impede the movement of the cone at higher frequencies, but I would also guess back-EMF has something to do with it.
Lovely video though; it's so nice to have people break down the fundamental phenomena at play in speakers. Thank you and well done 😊
FACT: - it's not the signal (voltage) that moves the speaker.
- The speaker is moved by magnetic flux, the result of electric current flowing in the coil.
- Higher frequencies result in more current lag inside the coil.
(this is exactly the part missing from this video)
Therefore the speaker output is a combination of mechanical and EMF delay.
Agreed: lovely video though; it's so nice to have people break down the fundamental phenomena at play in speakers. Thank you and well done 😎
Yup. Particle velocities... pressure... reactive near-field acoustic energy (air) flows... mechanical impedances.
Great video.
The issue is that most audio engineers are not really engineers/scientists, which is fine - but they cannot interpret measurements correctly. So people form very strong beliefs about how things work based on what they "measured" themselves. Interpreting experimental data and performing proper measurements is difficult, and it is something people spend many years in school learning. So, if you have not, do not expect to be able to properly measure things and interpret the data correctly.
This is normal, since light is faster than sound ;) - assuming the electronics doing the measuring don't add any (or add the same) phase shift or time delay. Well done!
Anyone else find interlaced video to be distracting for some reason?
This is super interesting. I wonder how hard it would be to create a frequency-dependent delay filter that would compensate for the delay, and how that would sound.
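One speculative way to prototype it: measure the phase-vs-frequency curve, then apply the opposite phase in the frequency domain. This toy sketch of mine assumes a known correction curve and ignores causality/latency issues:

```python
# Apply a frequency-dependent phase correction via FFT.
import numpy as np

def compensate(x: np.ndarray, fs: float, phase_correction_rad) -> np.ndarray:
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spectrum *= np.exp(1j * phase_correction_rad(freqs))  # undo the measured lag
    return np.fft.irfft(spectrum, len(x))

# e.g. a toy correction undoing a flat 180-degree lag above 200 Hz:
correction = lambda f: np.where(f > 200.0, np.pi, 0.0)
fs = 48_000
y = compensate(np.random.randn(fs), fs, correction)
```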
Thanks for making this! I'm gunna have to look at your other videos, Subscribed!
Could you mitigate the issues regarding phase shift and driver velocity by using a low-mass equivalent, such as a planar panel with a large surface area?
What should my takeaway be to greatly improve my ability to mix a band? What can I practically implement to yield noticeably superior results? How will this help, or change anything about my workflow at tomorrow's tour date? I have no idea how to implement this level of macro detail from a phase rabbit hole. Phase only matters with acoustic arrivals from multiple sources with differing transit path lengths, and even then only at lower frequencies. The ear-brain system interpolates differently from any microphone-analysis system, and given that the target (paying) customers choose to experience concerts via their ears, an objective of achieving perfection in the measurement realm *should* be very low on the priority list. This kind of deep dive might be more applicable at the design and manufacture stage. It's a struggle to find any value for the gigging, fader-pushing professional.
We had to let a guy go because he spent more time phase aligning a PA than the gig ran for, all for a speaking only event.
@@mattj5025 Sums up my approach to everything
What if the amp has more power to move the cone faster - would the delay be less?
Crazy setup. Put a lot of effort into this !
All my life I have played viola in symphony orchestras - really good ones. I sit where Bach, Mozart, and Beethoven sat, in the middle voices and directly in front of the conductor. I sit where the listener sits, and I worked in public radio as a classical music host and announcer. I could hear what a lot of people only pretended to hear.
So the phase is all over the place beyond 1 kHz because your delay constants are in integer milliseconds, meaning a change from 12 ms to 13 ms adds a whole 1 kHz cycle and the phase wraps - correct?
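If that's the mechanism, the arithmetic is simple (a quick sketch of my own):

```python
# A delay step of 1 ms corresponds to 360 * f * 0.001 degrees of phase.
for f_hz in (250.0, 500.0, 1000.0):
    print(f"{f_hz:>6.0f} Hz: 1 ms step = {360.0 * f_hz * 1e-3:.0f} deg")
# at 1 kHz, each 1 ms of delay change adds/removes exactly one full cycle
```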
I think the reason it's chaotic above 1 kHz has more to do with the fact that the source signal is just too low at that point to distinguish from noise. Especially with the laser reading: there's almost no physical movement above 1 kHz compared with the main operational frequency range below 100 Hz.
Isn't the woofer a displacement transducer (and, moreover, driven by current, not voltage) and the microphone a pressure transducer? So those signals can't be in phase from the get-go.
Fascinating... and this is coming from a sound guy and electronics engineer 😊
Adding such an iron mass and making an inductor with the speaker wire! I know this is low frequency, but won't that change the phase angle? Correct me if I'm wrong, please. Thanks.
You are the only one in the web teaching actual science with audio engineering. Things most audio “engineers” have no clue about. Thank you
Hi, very interesting video - for the most part above my ability to understand. Anyway, I have a question.
I wonder what would happen with a diode placed in series with a woofer. What would the sound be like?
This should prevent the woofer cone from moving backward... only from the zero point forward (generating a positive pressure).
I ask this because if we measure the sound pressure during a concert at the listening spot, the pressure will vary only above zero, never negative, I guess. Instead, a cone, when it moves backwards, will cause a negative pressure toward the listening spot. Could this be unnatural?
The goal is always to reproduce reality.