As a young man I immersed myself in music, mixing and production. I'm now middle-aged and wish I'd undertaken something less complex, like rocket science.
The reason audio is harder is that we're dealing with something that's tangible and intangible at the same time. In rocket science you can rely on hard numbers to produce a predictable result. In audio, move the speaker slightly and it messes everything up, and you have to start again from the beginning.
He’s absolutely right, but he’s only proving a point, whilst missing the bigger picture: The majority of people today listen to music with headphones. Only a tiny percentage of that majority listen on headphones that are worth a damn. I’ve been in the game for 25 years and mixed for several A-list artists, and I understand completely… but if I were smart, I’d probably just save my money and time, and mix on Apple wired earbuds, because at the end of the day, that’s still what the majority of people use, and this fact and trend isn’t going anywhere. The days of audiophiles with large, expensive gear, with large expensive speakers sitting on pyramid points with little slabs of slate under each point, are over. In 10 to 20 years, that generation will be dead, and so will the days of the home living room audiophile listener. Don’t even get me started on car stereo systems 🙄 There isn’t a seat in the car that comes close to providing an accurate stereo image. So if we were all smart, instead of being purists, we would mix on those Apple earbuds, and check our mix in the latest, highest selling, shitty economy car, and let the past go. It’s the hard truth, but it’s the truth ✌🏻
@@leaveitorsinkit242 I'll take a stab at it. The speed of sound is roughly 1 ms per foot, so if you delay the sides by 1 ms (millisecond), it will sound like they were recorded 1 foot behind the mids. It helps give the mix some separation, specifically depth. It's essentially shifting the phase of the sides relative to the mids; any time you introduce a tiny delay, it shifts the phase slightly.

You can drive yourself crazy messing with phase correlation and making sure the phase is "perfect", but in reality you don't want it "perfect", you just want it to sound good and make sure it's not cancelling the frequencies you want from the instrument. (Fully understanding phase cancellation is critical, and what I consider the quintessential intermediate mixing knowledge.)

If you mic up a drum kit with close mics, overheads and room mics, all of those mics will receive the signal at slightly different times, and that's what we want; otherwise it would sound like a bunch of mics all the same distance away, and it wouldn't sound like a 3D space. It's the subtle phase differences that let our ears perceive the distances involved, and therefore depth. Of course we take great care at tracking time to make sure the phase isn't causing cancellation, moving the mics slightly closer or further away (inches of difference) until everything sounds full and rich with all the mics playing together. If you went through and manually lined up all the tracks so the peaks and valleys matched perfectly, then the phase would be "perfect", but it wouldn't have the space you were going for when you put up all those mics. It would actually ruin the effect.

So knowing that, you can artificially recreate this effect with short delays, 30 ms and below, remembering the 1 ms = 1 ft rule. Those kinds of delays can help you shape the soundscape from front to back, along with many other things.

But keep in mind that you will only hear the delay spatially if you're listening to the delayed track in reference to the same or a similar un-delayed track (just like the sides are perceived against the mids in this track, because a lot of the same information is in both the mids and the sides). A delayed track on its own will only sound like it was played a little later, but heard alongside the same signal un-delayed, it starts to change the spatial relationship. There are also creative effects with short delays. Look up a video on the Haas effect, where you can actually make something sound even further left or right than a hard pan. That trick has been used in many famous productions, and I feel it will help you understand what the trick in this video is doing, just more exaggerated with the Haas effect. Psychoacoustics. Yay! Hope this helps 🙏 👌 have a great time on your mixing journey.
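The 1 ms ≈ 1 ft rule above is easy to play with in code. Below is a minimal sketch (plain NumPy; the 48 kHz sample rate and function names are my own choices, not anything from the video) that converts a delay in milliseconds to samples and applies it by zero-padding:

```python
import numpy as np

SR = 48_000  # assumed sample rate in Hz

def ms_to_samples(delay_ms: float, sr: int = SR) -> int:
    """Convert a delay in milliseconds to a whole number of samples."""
    return round(delay_ms * sr / 1000)

def delay_signal(x: np.ndarray, delay_ms: float, sr: int = SR) -> np.ndarray:
    """Delay a mono signal by zero-padding the front (output keeps the same length)."""
    n = ms_to_samples(delay_ms, sr)
    return np.concatenate([np.zeros(n), x])[: len(x)]

# Sound travels roughly 1 ft per ms, so a 1 ms delay on the sides
# reads to the ear as a source about 1 ft further back than the mids.
print(ms_to_samples(1.0))  # → 48
```

Heard in isolation the delayed copy just sounds late; the depth illusion only appears against the un-delayed reference, as the comment above explains.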
@@aholder4471 Just to confirm: if I went into Audacity and delayed the right side (assuming I panned both channels left and right) by 1 ms, it would provide the illusion that the soundstage is wider?
@@TimberWulfIsHere Yes, but in this video he's using the mid/side feature of a plug-in and delaying the sides by a millisecond, not left and right. If you delayed the right channel of the entire mix, it would honestly just mess up your soundstage and probably not be what you're looking for. I'd leave left-vs-right delays to individual instrument tracks, not the whole mix, but hey, it can't hurt to try. That's the best and worst part of digital audio: you can try a bunch of stuff and never have to commit to anything.
@@aholder4471 I just played around with it, and it actually does work, but I can also see what you mean; it does sound ever so slightly out of sync, for some sounds at least.
I'm a developer for FL Studio Mobile and this video inspired me to add M/S delays to it in my first week of working there - just thanks to this video. Thanks Dan 🔻 Edit: talking about the spacer module in FLM
I’m always in awe of Dan’s ability to meld his technical and artistic talents into these intimidating displays of audio engineering skill. I’m an engineer and a musician, and to me it feels like he has the analytical capabilities of the most rigorous scientists combined with the empathy of the deepest creative. And then he’s generous enough to share lessons like this with us. I share this channel with anyone who will listen. Thanks Dan.
There’s a part of me that realizes this is incredibly insightful and important, and then there’s the part of me that doesn’t quite understand anything Dan is talking about. I’ll have to watch again tomorrow.
Personally I've never understood this "no one listens in mono" argument. Even if your mix is never played back in mono, using M/S monitoring to shift density and dynamics makes a world of difference in getting a mix that is perceivably "wide" but maintains depth and focus.
Exactly, but I do understand those who don't get mono mixing, because the newer generation working from home only mix in stereo, and then it's up to whoever masters the song to do the mono levelling (as I'd call it, when it's only been mixed on stereo speakers :) )
Then again, many newer headphones that mainstream users buy are fake stereo, with a driver that spreads the sound between the two mono left/right speakers so it sounds "true", and by true I mean massively coloured, and therefore not the true mix you're hearing :)
There's another thing that's AMAZING about this technique that might not be so obvious at first: mono-compatibility. Utilizing a mid/side channel delay will NOT affect the timbre OR level in mono playback at all (give or take a fraction of a ms of delay on whatever you apply the technique to). This is because when converting to mono, hard-panned material will not change in level, and the side channel will disappear completely. This is unlike an LR imaging technique like Haas delay or LR filtering, which will introduce a sort of flange/comb effect when you convert to mono. Genius!
Yes indeed! I was initially flabbergasted about how this doesn't screw with mono-compatibility at all. I'm still trying to wrap my head around what I'm hearing exactly, and why. I mean I know what M/S processing is and how the encoder/decoder works (which is simple enough in itself), but I still find it generally quite counterintuitive how things behave sonically when processing them separately. I suppose the M/S terminology is already confusing. Sum/Difference captures the meaning of the process more accurately imho. But anyway, after playing around a bit with delaying the mid channel, I wouldn't slap a mid-channel delay on the mix bus at the end of a mix session, as it screws around too drastically with some very specific mixing choices I made. But this technique certainly opens up some very intriguing possibilities when applied with focus and creativity! (as Dan demonstrated extremely well)
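The mono-compatibility point in these two comments is easy to verify numerically. A minimal sketch (plain NumPy, with random test signals of my own invention): encode L/R to sum/difference, delay only the side channel, decode back, and fold to mono; the +side and -side copies cancel exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
left = rng.standard_normal(1000)
right = rng.standard_normal(1000)

# Encode to mid/side (sum/difference)
mid = (left + right) / 2
side = (left - right) / 2

# Delay ONLY the side channel by a few samples (~0.3 ms at 48 kHz)
d = 14
side_delayed = np.concatenate([np.zeros(d), side])[: len(side)]

# Decode back to left/right
new_left = mid + side_delayed
new_right = mid - side_delayed

# Mono fold-down is (L + R) / 2. The delayed side appears with opposite
# signs in the two channels, so it vanishes and the mono sum is untouched.
mono_before = (left + right) / 2
mono_after = (new_left + new_right) / 2
print(np.allclose(mono_before, mono_after))  # → True
```

The stereo channels themselves do change (that's the whole point), but the mono sum doesn't, unlike a Haas-style L/R delay, which combs when summed.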
@@bluestringmusic Correct. I've seen hundreds of "tutorials" on music production to this day. Most of them hardly taught me anything. With this video, however, I feel like I just went on an experimental trip. Tons of great information here, something many don't include in their videos. This is QUALITY demonstration, not over-the-top animations, 10-second cuts of the work process or time-lapses. Some people also like to just slap a bunch of filters on their mix, convinced it's always the same process, and then call it a day. I wouldn't have known delay made such a profound difference, or how the channels are encoded after panning hard left, right, or mid. It feels good to hear a difference, but also to UNDERSTAND why there is one.
I have been making music in DAWs for 20 years, since I was 12. This video finally made me understand what the hell a sound engineer's job is, and why it's different and separate from what I've been doing.
This is a game changer! I've always hated hard panning my tracks because it felt unnatural, yet I always heard people talking about LCR mixing. This filled in that gap in my understanding, and I'm excited to play with this new knowledge. Thanks a million!
It is absolutely incredible that you are handing out knowledge of this depth for free. This is especially great for vocals or any kind of non-digital instrument that one may record in the studio. Amazing stuff man, keep it up.
I have over 10k in home theatre, studio, pro audio, and simple Bluetooth speakers everywhere in my house. My wife still listens to all her music on the phone.
If I watch more than one of Dan’s tutorials in a row I need to have a nap for my brain to recover. Amazing content. I think I am done with all other tutorial channels.
I have absolutely zero use for this information, but that title alone got me to click and watch the entire thing. I now understand why some mixes sound incredible on headphones and some don't, while the ones that don't can still sound great on stereo speakers.
Sometimes listening to Dan is like listening to a warlock explain the dark arts. That delay effect is so drastic and sounds so good yet so easy to achieve, souls must have been bargained with to learn the trick! 🤯
You have really upped the ante here! All I can say is wow. The best "tinkerer" ever. Your last trick is just like what we do with voltage control circuits in synths to shape the sound. Cheers
Fantastically explained. I've noticed that on a lot of vintage songs the panning is hard left and right; more modern songs tend to use the special stereo headphone sound.
I personally love the unnaturalness of hard panning on headphones. I mean, it's easy to go overboard, but it's also easy to make something sound really interesting.
Done as an effect, it’s fine. But the problem is most stereo mixing comes down to a choice of what equipment to cater to. When it’s done poorly, it’s really jarring to me. Loads of old stereo stuff clearly wasn’t made for headphones. I wish there was a quick correction for it at the music player level.
@@CosmicHippopotamus old stereo stuff from the 60s for example where the drums are hard left, bass hard right, only vocals centered or whatever. really jarring
Been watching different mixing, mastering and processing channels and none of them even mentioned that dilemma. Glad I found out about you via FabFilter and I admit I stick to your content, because it's easily explained, well represented and good to recreate. Thanks Dan, keep it up 😊
Quick tip: if you're trying to do the Voxengo Sound Delay phase trick in Ableton Live, don't pan the whole track; put a Utility before the delay and pan that. The pan on the actual track will affect all of the audio in the effect chain, and it won't work.
I love how you take the things people argue are only possible with analog gear and show us how easy and simple they are; it's all philosophy and physics! Thanks Dan, yet another awesome video!
I use Melda's MUtility for this kind of stuff, because it can do a lot besides what's in this video while remaining free: Haas, stereo inversion, mid delays, etc.
This is the single most brain-melting video about mixing I have ever seen. I'm gonna put it down, watch it again tomorrow, and see if I can catch everything that just went over my head. This very well might have just solved some major issues in my mixes, and I don't even necessarily mix LCR
Hey Dan, I did some research on headphones earlier this year. The answer I came to for classifying how headphones and stereo speakers interact with our ears was the terms "diotic" and "dichotic": dichotic meaning that both ears hear different stimuli, and diotic meaning that both ears hear the same stimuli. I posited that dichotic and diotic properties could exist on a spectrum (how different the sound is at each ear), and also that they're a statement about the physical properties of listening via loudspeakers/headphones rather than about production qualities.

With stereo loudspeakers, hearing in a diotic manner is never possible, since the head (and pinnae) shape and change the audio flowing into the contralateral ear from the left or right speaker respectively. Listening 'in the open' (loudspeakers) can therefore never truly give both ears the same exact stimuli (which is part of our binaural localisation system for placing sounds in a 3D environment). But it may also come down to how accurately one wants to measure the differences between the two ears to assess (unsure if currently a word) 'dioticity' (how similarly your ears hear any given sound?..), and whether the properties of dichotic and diotic should be restricted to the physical realm (the listening device), or considered holistically, including production considerations (i.e. if mono audio).

Hope to have provided something to consider! Keep the videos coming 😁 EDIT: Oh, and for whatever it's worth, I personally consider stereo NOT to relate to whether something is binaural, dichotic or diotic. To me, stereo is strictly about having two channels of audio. I do feel we need a more expansive vocabulary for discussing the differences between headphones and loudspeakers, not suggesting that this is squarely your responsibility of course haha! Hopefully this may help spark thought in others' minds though 😊
Yes, when he says it's "pedantic" to call headphones "binaural" and speakers "stereo", I'm not convinced it's pedantry. I think it's actually just wrong. :-D "Stereo" is not just used to describe two-channel audio; it's more of a medical or scientific term denoting that something has correlated pairs of stimuli. Think of 3D glasses, for example. There's a vernacular implication that when you say stereo, you mean audio content with a soundfield presented via two channels of audio. Contrast that with dual mono, which is just two channels that may or may not have anything at all to do with each other. Or binaural, which I've always taken to convey something a little harder to describe -- like a stereo presentation that is meant to convey 3D space the way your ears would perceive it if they were _in_ that space. Suffice to say, I don't think stereo vs. binaural has anything at all to do with the mixing of audio in space on playback. If it were used in that context as a clarifying aid for the purposes of discussion, then fine. But a blanket statement like "headphones are not stereo (they are binaural)" is probably verging on fallacious for the purpose of being sensational.
I have absolutely no frickin idea what you're talking about most of the time. This is why producers with a creative bent need engineers with technical expertise. You could probably do my job and a tech's job no problem. Incredible.
You just blew my mind with the trick of EQ'ing just the mids or the side in minimal phase mode. From my own experimentation, I knew that 1 ms would make a huge difference with the mid-side delay.
"Something that never happens in real life" - unless you're Brian Wilson... thanks for these videos, Dan, they are the best music production tutorials I've found...
I've been doing all sorts of ITD and HRTF panning in my work for some time, and I'm happy to see something along the same lines here. F the phase police indeed; 2-channel audio is a playground.
I kinda love these pedantic video titles, because though occasionally they're just click-bait and I get fleeced, sometimes they're videos like this one, which I wouldn't normally think to click on, much less search for. What's more, this is actually something I could try out directly in my own sound software without requiring a complicated setup. Great video!
I've learned to watch all of Dan's videos regardless of if I think I care about the topic... (what do I care about LCR mixing?) -- I always end up learning a ton of fascinating general audio art and science. Thanks Dan.
@@TjMoon91 Dude I got a masters degree in Music Production at Berklee College and we didn't even scratch the surface of this kinda stuff Dan explains so eloquently. Could've saved me a boatload of cash!
anytime i watch Dan’s videos i’m sat there in awe… he has a way of taking these highly technical concepts, explaining them and showing you how in practice they can be extremely helpful and produce incredible results!!
Fantastic video! I actually used this very same technique to make a mono-to-stereo, fully mono-compatible processor (I made a Reaktor ensemble). It injects the mono signal into the side and delays it by an amount chosen by the user (small amounts, btw). I discovered that this works very well on stereo tracks too, not only mono ones! Watching this video made me realize that delaying the mid channel is SO COOL! The all-pass is a very great idea too! I should modify my ensemble to make it a powerhouse filled with Dan’s knowledge 😀
When I like something, I want to know how that thing/process works, and most of the time that leads me to its deepest roots (even down to electrons or quantum theory). On that journey, besides finding new ways of editing the final product, I also find new ways of exploring the creative/artistic process. It gives me new possibilities, new ideas... it widens the horizons... It is so beautiful to learn how things work. We are so lucky to have Dan Worrall sharing his knowledge with us!
When I mix to stereo (I'm moving more toward surround) I feel that it's absolutely necessary to have motion. Some parts can just sit where I place them in the field, but other parts really need to travel. I've noticed that some note sequences gain emphasis through motion, as if our hunter instincts are trying to pin down their location, while other parts can be de-emphasized by scattering them. Some notes need to start in one place and reach completion in another. I frequently automate 'Wider' to send particular parts out or in. I seldom use algorithms, like LFOs, to produce motion, but perhaps I'll begin to follow your example at the end of this video to give static parts more of a living pulse. I certainly agree with your contention about headphones, with the addendum that it's utterly impossible to keep our heads absolutely motionless unless one were to painfully clamp one's head into a fixed position. Perhaps this is another reason why headphones contribute so much to ear fatigue. I like to watch people's heads move as they listen to my work.
So now at last we get to know how you get the depth in the recordings you've used in your tutorials. As you've hinted at before - phase is your friend. And like many people who've commented here, I'm going to watch again straightaway.
5:30 was still a tad difficult to pick up on until you soloed an element out. Thanks for teaching me something I've always felt but didn't know how to acknowledge or label.
Which instruments do you use it on? I tried it on hard-panned guitars, and this effect adds an unpleasant random panning effect on top of the depth. I mean real guitars, not the cloudy, obscured background guitars Dan had in this mix :)
Such a coincidence, that last night I watched his first video about LCR mixing and now I see this in my recommendations, noticing it's from 4 hours ago. I guess it's a sign for me to subscribe.
That Reaper trick at the end with the allpass filter just confirms to me both the genius of your teachings, Dan, and the genius of Reaper as a DAW. I'm utterly impressed with that one!! Thank you :)
The most natural trick I personally use is utilizing some Early Reflections. It somehow emulates the natural room and brings the hard panned elements into a real space. I also tend to give the other elements of the mix the same Early Reflections so that the whole mix sounds more cohesive. Amazing video BTW Dan!
I am the same, I like having a small space reverb with mainly ER (like a studio convolution or something) and feed everything to it to fill in the distance between instruments.
There are people who just got this recommended, but if you know how to set your stuff up, this demo is kinda exhausting to listen to, but also gold. True understanding begins with listening ^^
AMAZING video. A hard-panned channel not being solely on the Side blew my mind. If I ever do an LCR mix or want to add depth to a song I'll come back to this.
I've been looking for a few years now for the answer to this 'thing' that pro mixes/masters seem to do differently when I A/B reference tracks. They've always been able to achieve a kind of 3D feeling to the instrument positioning, whether on headphones or speakers, while also not losing anything when listening in mono, like on a Google Home speaker. I put it down to expensive hardware that the pros use, which I could never imagine affording in a thousand lifetimes. Turns out I was wrong, THIS is it. This is what I've been searching for, this is the answer. By god, thank you so much for this. This is the best 16 minutes I've ever spent furthering my knowledge of audio production, and I am extremely grateful for your time and effort in sharing it. Easy sub, gonna grab a drink before I binge everything you've uploaded.
Well this was really interesting and mind-expanding. The clickbait title was answered within 90 seconds, but as I heard him expanding on what you can do with it once you know that, I couldn't look away. This seems like the result of decades of engineering experience, the sort of stuff I'm not even sure Alan Parsons was fully aware of and utilising. Add to that the fact that all this improv source material is by the same guy. Absolutely brilliant. I will have to watch this a couple more times before I fully understand it, I think.
I love this stuff! Dan’s great, and if we all just appreciated having the input of an experienced sound engineer for free or very little money, there is no argument. And if Dan did not have a point of view of one flavour or another, it would make him rudderless, just like it would make anyone rudderless. Once you have a POV you can begin to work from there, and even account for your own bias in having it. I have come to the conclusion that binaural is superior to stereo, and is in fact what people early on may have expected or wanted stereo to be. That’s just my POV. I was listening to the increasingly great-sounding (I’m older) Tangerine Dream, and some Edgar Froese, and the ability to completely send a source sound to only one ear means the stereo effect can be more radical. So this TD track has a shimmering guitar chord that seems to be roaming R to L so much that its effect, I realise, is that it’s rotating around your head. This is pretty good for the 1970’s or 1980’s. It doesn’t quite sound like it’s at the back of your head; it’s more like it goes around behind a barrier that’s behind you, but it’s awfully good. I also used earbuds to listen to a Robin Trower track (‘What lies beneath’ I think), and in this binaural mode of listening, I can picture with a lot of precision the exact locations in 3D space where the different instruments appear to emanate from. This really doesn’t happen with a pair of stereo speakers - you’d need a surround-sound system with a curved array, rather like the pixels of a modern computer screen/TV. Earbuds are a lot cheaper?! Anyway, these are my modest thoughts on the matter. I have one poser for you Dan - it MUST in theory be possible to make that TD track guitar that seems to rotate around me in binaural listening actually sound like it’s going behind me? It must be possible to fake it with only two devices attached to me, because I only have two ears, yet I can tell, in reality, when something is behind me.
Is it too subtle an effect for us to capture without some AI system processing the sound, simulating the effect of our ACTUAL ears being hit from behind the ear instead of in front? Here’s my wacky theory - that it has not been done before now suggests to me it involves an effect that simple headphones or stereo earbuds cannot simulate, and that could possibly be moving air hitting the minute hairs on the ear surface, by which method your brain knows the sound is from behind. Over a period of years, your brain learns what certain common sounds sound like from every possible angle. This database you possess is much more complete, however, for everyday sounds, e.g. a motor vehicle, than for, say, a guitar chord. So making a motor vehicle sound appear to come from behind the listener must be easier than trying to get a guitar chord to appear to make exactly the same motion relative to our heads/ears. I maintain that if some algorithm could be built (or generated, hence I said AI) to process sounds to make them appear to come from behind us, then it could presumably be tweaked to accommodate our individual head dimensions and even hair, to really fool us. But if the surface of the ear and its hairs are involved, the brain won’t be fooled by earbuds alone - it will need a vacuous pair of headphones if not a ‘head-dome’, so that these sounds can in effect, of course, REALLY be made to come from behind us. As I say, it would best be effected by a few dozen very small speakers spread, like pixels, entirely ‘coating’ the inside of the ‘head-dome’, but this would be sort of cheating… because you’ve REALLY made the sound emanate from behind the ear! Surround sound also cheats, because again, it simply makes the sound come from behind you and is therefore the same phenomenon in reality as a ‘real’ sound in the real world.
I just wonder if it’s not possible to fake a fully rotating sound around your head, for this reason? In other words, why does my brain simply and clearly assume and arrange, stereo sounds, to be at a 180 degree arc in FRONT of me, as they seem to be, when using either binaural or stereo sources to listen to? There is some phenomenon at work, that means my brain doesn’t accidentally suddenly think the sound at say 1 o’clock to the right in front of me, isn’t at 5 o’clock behind me instead? I suppose you could do a test, by blowing air gently and silently at your ears, while listening to a track or sound, and see if your brain suddenly decided to treat the source as behind you not in front? Anyway, I’m waxing on. Thanks Dan for another enlightening video and tutorial on audio science. I’m off to get our cooling fan, and put my earbuds in, and see if Robin or the TD track, can be made to seem to be behind me, if my ears sense moving air from that direction!
I don't know about creating a 360 degree environment in processing, interesting idea. But binaural recording achieves what you describe. You might already be familiar, but it uses microphones positioned inside the ears of a fake human head. Check out virtual barbershop for an example.
I use Goodhertz Midside for this type of stuff, especially monoing the bass. I've never realised I could be introducing any phase issues before and I presume the Goodhertz people are smart enough to have taken it into consideration.
@@nexusobserve So, maybe all of us have been! Isn't a point of using Mid/Side that you CAN HPF your sides to clean up the bass. So... rather than casually high passing our sides, what's the new intentional option?
@@makrowelle Just kidding. His tutorials really give you superpowers. But as soon as everyone knows about them superpowers will become standard powers 😁
Thanks! That 0.3ms mid delay is the best tip I've come across in several years. I have a couple songs I like that were mixed LCR (which I don't like). I pulled one up in my DAW and delayed the mid by 0.3ms and now it sounds much nicer across the stereo field. :)
It'll take me a couple of watches to digest this but I already know it sounds great. Can you use this technique on channels that aren't hard panned but that are moved just left or right of center to bring them back into cohesion when listening on headphones? I don't usually pan very much at all because it grates on me even when listening with stereo speakers but this could help to form a better image perhaps?
Don't tell this to any "audiophile", since they're gonna be livid. They'll probably start telling you how headphone X presents music as though you were sitting in a concert hall. Well, provided you use some bogus 2000-buck cable and wait for the stars to align in a certain way.
Best YouTube homepage recommendation I've gotten in months, how had I never come across this channel before?! Immediately subscribed, your content is just what I needed to nerd out on!
The next dimension in mixing (which some people are already working with) is random (and other forms of algorithmic) movement. This is brilliant material, and while it is something I've thought about, I haven't had the time to come up with some practical applications at the mix level - only instrument and effect params - this is a great spring board of ideas around creative phase randomization. Excellent channel - and as others have noted, you have a great style and voice for explaining the steps and concepts. (Haas-n-pepper?)
Many EQs. Just look at a picture of the plugin. Sometimes it's referred to as Lateral/Vertical, Mono/Stereo, Mid/Side, Sum/Diff or simply M/S. If you use a highpass filter and set it as low as possible this will have the same effect.
@@willfrank961 I have pro q and a few other m/s capable plugins, but none of them have an allpass option. A highpass filter set as low as possible is NOT the same thing as an allpass, even on sounds that do not contain low frequency content.
@@elirosenkim3659 I didn't say it was the same. I said it has the same effect, as in it causes a phase shift. Is that incorrect? Happy to learn something new.
@@willfrank961 they both cause a phase shift, but the m/s eq would cause a shift in frequency response as well, and the allpass would be more flexible in tweaking the phase response.
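To make the distinction in this thread concrete, here's a minimal sketch (plain NumPy, not modeled on any particular plug-in) of a first-order digital allpass, H(z) = (-a + z^-1) / (1 - a*z^-1). Its magnitude is exactly 1 at every frequency, yet its phase sweeps with frequency, which is what an allpass gives you that a barely-there highpass does not:

```python
import numpy as np

def allpass_response(a: float, w: np.ndarray) -> np.ndarray:
    """Frequency response of a first-order digital allpass:
    H(z) = (-a + z^-1) / (1 - a * z^-1), with real coefficient a."""
    z_inv = np.exp(-1j * w)
    return (-a + z_inv) / (1 - a * z_inv)

w = np.linspace(0.01, np.pi - 0.01, 512)  # normalized frequency grid
H = allpass_response(0.5, w)

# Magnitude is unity everywhere: the frequency balance is untouched...
print(np.allclose(np.abs(H), 1.0))  # → True

# ...but the phase still varies with frequency, which is the whole point.
phase = np.unwrap(np.angle(H))
print(np.ptp(phase) > 1.0)  # → True (roughly pi radians across the band)
```

An M/S EQ band, by contrast, bends the phase only by also bending the magnitude somewhere, which is the trade-off the reply above describes.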
Dan Worrall you are the ultimate Professor of Audio. Thanks for empowering the next generation of technicians. People like you are saving me $100,000 in engineering education. Thanks for offering the skills to help people go and work this out for themselves. Your experience shines through mate.
goodness :o first time I've ever seen anyone acknowledge the existence of Voxengo Sound Delay! I spent a while trying to find something exactly like it for my stream setup, because various effects were delaying my spoken audio and I needed a way to delay the PC audio so it went back in sync. I had never even imagined using it in my tracks.
Hi Dan! I keep coming back to your channel. This is supreme content. I am by no means as educated on the topic of sound as you are, but being able to learn why my mixing skills suck is kind of priceless. I have watched and listened to all of your videos on this channel (the FabFilter ones as well) and I've got to say: every time I consume your content I'm amazed. Spot on. To the point. And most importantly, the why! Thank you for the awesomeness!
Man, this is brilliant. I'm not going to pretend I understand the nitty gritty, or what should sound good, or the technique. But I know for a fact that the basis here - phase affecting how you hear panned stereo - is easily applicable. I don't need to understand all of it right now to know that just being aware of this will improve my stereo mixes. Tell you what, it helps that you make these videos like you're a mystery youtuber talking about an unsolved murder, but instead it's an in-depth explanation of audio engineering.
So, YouTube decided I should be interested in this. While I'm not much into audio production, and I'm not an audiophile, this did remind me of something I did. A number of years ago, back when Winamp was *the* player to use, I sometimes encountered songs that were not great with headphones, usually because the mixing was just "this instrument goes in the left ear, this other one in the right ear". So what I did was grab the audio processor plugin, and I wrote myself a little audio processing program, that would take the input, apply a small delay, and add the result to the opposite channel with lower volume. The difference in the experience was amazing. I don't listen to music as much as I used to, these days. Not because I don't enjoy music, but rather because I have come to enjoy the silence.
Brilliant title, I instantly knew I was gonna learn something new here. Well done. As a side note, I do love hard-panned instruments on headphones; there's nothing unnatural there for me.
Wow. One of the most educating and enjoyable videos I’ve watched in a while. The production quality is beyond this world!!!!! I’m not surprised that an audio engineer has that good of voiceover audio, I’m just flabbergasted by how much effort you put into your videos!
Wow! that was a real metaphorical ear opener! I'd listened to the video a couple of times and didn't get it, but when I actually tried it in one of my sessions, the difference was stark. Actually I still don't understand why this effect happens but it has blown my mind - or perhaps that's why! Many thanks!
No idea why the algorithm put this in front of me, but I'm not objecting too much. Most of that went over my head, but I did get a small insight into what I, as a layman, will just call "sound". I can see an almost infinitely deep rabbit hole here that I could lose myself in. I think for my sanity I shall carefully go around it. That said, I will carry with me a greater respect for those people in the field.
As a young man I immersed myself in music, mixing and production. I'm now middle-aged and wish I'd undertaken something less complex like rocket science.
lol
I went from molecular pharmacology and medicinal chemistry to audio engineering and it's far more convoluted and mysterious.
In rocket science, fuel, oxidizer, and a spark will always go "boom"
No joke, thousands of hours in audio production/recording/editing, hundreds in Kerbal Space Program, and the audio stuff is harder by far.
The reason why audio is harder is that we're dealing with something that is both tangible and intangible at the same time.
Rocket science, you can rely on hard numbers to produce a predictable result.
Audio, move the speaker slightly, and it messes up everything and you have to do everything again from the beginning.
This is absolutely wild and you're a great explainer.
Nice to see you here Jeremy
Dan Worrall is tutorial perfection
Dude, he's amazing.
It’s not that ‘wild’ … practical if anything
He’s absolutely right, but he’s only proving a point, whilst missing the bigger picture:
The majority of people today listen to music with headphones. Only a tiny percentage of that majority listen on headphones that are worth a damn. I’ve been in the game for 25 years and mixed for several A-list artists, and I understand completely…
but if I were smart, I’d probably just save my money and time, and mix on Apple wired earbuds, because at the end of the day, that’s still what the majority of people use, and this fact and trend isn’t going anywhere.
The days of audiophiles with large, expensive gear - big, expensive speakers sitting on pyramid points with little slabs of slate under each point - are over. In 10 to 20 years, that generation will be dead, and so will the days of the home living room audiophile listener.
Don’t even get me started on car stereo systems 🙄 There isn’t a seat in the car that comes close to providing an accurate stereo image.
So if we were all smart, instead of being purists, we would mix on those Apple earbuds, and check our mix in the latest, highest selling, shitty economy car, and let the past go.
It’s the hard truth, but it’s the truth ✌🏻
The most simple but dramatically effective thing I've ever seen in music production so far is literally 1ms of delay. I'm floored.
What’s it doing to the mix exactly?
@@leaveitorsinkit242 I'll take a stab at it. The speed of sound is roughly 1ms per foot, so if you delay the sides by 1ms (millisecond), it will sound like they were recorded from 1 foot back from the mids. It helps give it some separation, specifically depth. It is essentially shifting the phase of the sides in comparison to the mids: anytime you introduce a tiny delay, it shifts the phase slightly. You can drive yourself crazy messing with the phase correlation and making sure the phase is "perfect", but in reality you don't really want it to be "perfect", you just want it to sound good and make sure that it's not canceling the frequencies that you want from the instrument. (Fully understanding phase cancellation is critical, and what I consider the quintessential intermediate mixing knowledge.) If you mic up a drum kit with close mics, overheads and room mics, all of those mics will be getting the signal at slightly different times, and that is what we want; otherwise it would all sound like a bunch of mics the same distance away and wouldn't sound like a 3D space. It's the subtle phase differences that allow our ears to perceive the distance involved, and therefore depth. Of course we take great care at the time of tracking to make sure the phase isn't causing cancellation, by slightly moving the mics closer or further away (inches of difference) until it sounds full and rich when listening to all the mics together. If you went through and manually lined up all the tracks so all the peaks and valleys matched perfectly, then the phase would be "perfect", but it wouldn't sound like it had the space that you were going for when you put up all those mics. It would actually ruin the effect. So knowing that, you can artificially recreate this effect by using short delays, 30ms and below, remembering the 1ms = 1ft rule. Those types of delays can help you shape the soundscape from front to back, along with many other things.
But keep in mind that you will only hear the effect of the delay spatially if you are listening to the delayed track in reference to a same or similar un-delayed track (just like the sides are perceived against the mids in this track, because a lot of the same information is in both the mids and sides). A delayed track on its own will only sound like it was played a little later, but heard with the same signal un-delayed, it starts to change the spatial relationship. There are also creative effects with short delays. Look up a video on the Haas effect, where you can actually make something sound like it's even further left or right than a hard pan L or R. That trick has been used in many famous productions, and I feel like it will help you understand what this trick in the video is doing, just slightly more exaggerated with the Haas effect. Psychoacoustics. Yay! Hope this helps, 🙏 👌 have a great time on your mixing journey.
@@aholder4471 just to confirm, if I went into audacity and delayed the right side (assuming I panned both channels left and right) by 1ms, it would provide the illusion that the soundstage is wider?
@@TimberWulfIsHere Yes, but he is using a mid/side feature of a plug-in and delaying the sides by a ms in this video, not left and right. If you went in and delayed the right channel of the entire mix, it would honestly just mess your soundstage up and probably not be what you're looking for. I would leave the delay of left vs right to individual instrument tracks, not the whole mix - but hey, it can't hurt to try it. That's the best and worst part of digital audio: you can try a bunch of stuff and not have to commit to anything.
@@aholder4471 I just played around with it, and it actually does work, but I can also see what you mean; it does sound ever so slightly out of sync, for some sounds at least.
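For anyone wanting to see exactly what the plugin in this thread is doing, the whole encode → delay the sides → decode chain fits in a few lines. A minimal sketch in plain Python (the function name is illustrative; at 48 kHz, 1 ms is 48 samples):

```python
def ms_delay_sides(left, right, delay_samples):
    """Encode L/R to mid/side, delay the side channel by a few samples,
    then decode back to L/R. (1 ms is about 48 samples at 48 kHz.)"""
    mid  = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    # shift the side channel later in time, padding the front with silence
    side_d = [0.0] * delay_samples + side[: len(side) - delay_samples]
    new_l = [m + s for m, s in zip(mid, side_d)]
    new_r = [m - s for m, s in zip(mid, side_d)]
    return new_l, new_r
```

Note that the mono fold (L+R)/2 of the output is still exactly the mid channel, i.e. identical to the original mono sum - the delayed side cancels itself when summed - which is why this trick doesn't hurt mono playback.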
I'm a developer for FL studio mobile and this video inspired me to add m/s delays to it in my first week of working there - just thanks to this video.
Thanks Dan 🔻
Edit: talking about the Spacer module in FLM
👋
hey I started with mobile man that's so cool
@@tysk_dt eyyyo xD
I’m always in awe of Dan’s ability to meld his technical and artistic talents into these intimidating displays of audio engineering skill. I’m an engineer and a musician, and to me it feels like he has the analytical capabilities of the most rigorous scientists combined with the empathy of the deepest creative. And then he’s generous enough to share lessons like this with us. I share this channel with anyone who will listen. Thanks Dan.
Agreed
Artistic talents?
talented artist
I so second that emotion!
And then to have a speaking voice like butter on top of all that!
Just when you think you understand something, Dan absolutely blows your mind and makes you realize how much is left to learn and discover in audio
And how much of it is right, if you add just a little phase shift. ;)
There’s a part of me that realizes that this is incredibly insightful and important, and then there’s the part of me that doesn’t quite understand anything Dan talking about. I’ll have to watch again tomorrow.
Watch Dan's video about mid-side processing first and that should make it easier to wrap your head around.
You're not the only one
There's also me who doesn't know how most of this is done in Reaper
@@marcchurch1245 do you know the name of the video sir ?
@@gooda864 it's called Mid-Side Demystified. It is a Dan Worrall video, but it's not on his personal channel, it's on FabFilter's channel.
Personally I've never understood this "no one listens in mono" argument. Even if your mix is never played back in mono, using M/S monitoring to shift density and dynamics makes a world of difference in getting a mix that is perceivably "wide" but maintains depth and focus.
Michael Paul Stavrou agrees with you, and I agree with him.
earth to star trek bridge control: nobody cares, go home
People with functioning ears care.
Exactly, but I do understand those who don't get mono mixing, because the newer generation that do their work from home only mix in stereo, and then it's up to whoever masters the song to do the mono levelling (as I would call it, when it's only mixed on stereo speakers :) )
But then again, many newer headphones that mainstream users buy are fake stereo, where a driver spreads the sound between the two mono left/right speakers so it sounds "true" - and by true I mean massively coloured, and therefore not the true mix you are hearing :)
Now we'll see 30 "next level" tutorials within the same month with this secret trick of the pros: the mid/side delay trick.
Invade the channels and bring em here
Lmaoooo I literally had the same thought hahahahaha
oh god
There's another thing that's AMAZING about this technique that might not be so obvious at first: mono compatibility. Utilizing a mid/side channel delay will NOT affect the timbre OR level in mono playback at all (give or take a fraction of a ms of delay on whatever you apply the technique to). This is because when converting to mono, hard-panned material will not change in level, and the side channel will disappear completely. This is unlike an LR imaging technique like Haas delay or LR filtering, which will introduce a sort of flange/comb effect when you convert to mono. Genius!
Yes indeed! I was initially flabbergasted about how this doesn't screw with mono-compatibility at all.
I'm still trying to wrap my head around what I'm hearing exactly, and why. I mean I know what M/S processing is and how the encoder/decoder works (which is simple enough in itself), but I still find it generally quite counter intuitive how things behave sonically when processing them separately.
I suppose the M/S terminology is already confusing. Sum/Difference captures the meaning of the process more accurately imho.
But anyway, after playing around a bit with delaying the mid-channel, I wouldn't slap a mid-channel delay on the mix-bus at the end of a mix session as it screws around too drastically with some very specific mixing choices I made. But this technique certainly opens up some very intriguing possibilities when applied with focus and creativity! (as Dan demonstrated extremely well)
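The comb-filter problem with LR techniques mentioned above is easy to verify numerically: delay one channel of an identical pair (a Haas-style widener) and fold to mono, and any frequency whose half-period equals the delay nulls out completely. A small sketch, assuming 48 kHz and a 0.5 ms delay:

```python
import math

fs = 48000
delay = 24  # 0.5 ms at 48 kHz = half the period of a 1 kHz tone
sig = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs // 10)]

# Haas-style width: identical signal on both channels, right channel delayed
left = sig
right = [0.0] * delay + sig[:-delay]

# Fold to mono: the delayed copy is exactly 180 degrees out of phase at
# 1 kHz, so the tone cancels almost completely (a comb-filter null)
mono = [(l + r) / 2 for l, r in zip(left, right)]
```

The stereo channels are each at full level, but the mono fold is near silence at that frequency - exactly the flange/comb artifact the M/S version avoids, since there the delayed material lives only in the side channel, which vanishes on mono summing.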
Wow, mind blown 🤯! If Dan did a mix of a song from scratch, showing his workflow, plugins and what he listens for, I would pay good money for that.
Why? There are millions of videos on youtube like that already
@@codesymphony Quality vs quantity my friend. Thousands of videos with half baked or outright incorrect information is no good.
@@bluestringmusic Correct. I've seen hundreds of "tutorials" on music production to this day. Most of them hardly taught me anything. With this video however, I feel like I just went on an experimental trip. Tons of great information here, something many do not include into their videos. This is QUALITY demonstration, not some over-the-top animations, 10 second cuts of the work process or time-lapses. Some people also like to just slap bunch of filters on their mix, while they're convinced it's always the same process to mixing and then they call it a day. I wouldn't know delay made such a profound difference, or how the channels are encoded after panning hard left, right, mid. It feels good to hear a difference, but also UNDERSTAND why there is one.
I have been making music in DAWs for 20 years, since I was 12.
This video finally explained to me and made me understand, what the hell a sound engineer's job is, and why it's different and separate from what I've been doing.
Damn this is incredible. Been chasing "depth" for the longest time, and I'm just blown away right now.
I just had that revelation, too. It's been depth I've been seeking. Mind blown.
@@FranklynMonk Yes. It's literally like Width got shifted 90 degrees.
This is a game changer! I've always hated hard panning my tracks because It felt unnatural, yet I always heard people talking about LCR mixing. This filled in that gap I had in my understanding and I'm excited to play with this new knowledge. Thanks a million!
Dan is in a league of his own when it comes to getting technical while still making it engaging and easy to understand.
It is absolutely incredible that you are handing out knowledge of this depth for free. This is especially great for vocals, or any kind of non-digital instrument that one may record in the studio. Amazing stuff man, keep it up.
I can’t believe I was doing this on vocals already. I guess you really should trust your ears
I have over 10k in home theatre, studio, pro audio, and simple Bluetooth speakers everywhere in my house. My wife still listens to all her music on the phone.
Yup, this is relatable.
Convenience is usually what people will go for, I admit I'm guilty of doing that myself when I can't be bothered to plug it in to something :D
I hate when she plays music on her phone... All I hear is “chk, chk, chk...”
Each eardrum is a world...
@@whereismyleafage1887 not even mono.
I hope they never heard Led zeppelin on it :)
If I watch more than one of Dan’s tutorials in a row I need to have a nap for my brain to recover. Amazing content. I think I am done with all other tutorial channels.
That background track was really relaxing. I was almost meditating on the subject of the video. Good job 👌🧘♂️
Pink Floyd-ish ;)
it's technically wrong to call it a background track in this case lol
In The Air Tonight drum vibes.
Agreed
I have absolutely zero use for this information but that title alone got me to click and watch the entire thing. I now understand why some mixes sound incredible on headphones and some don't while the ones that don't will still sound great on stereo speakers.
Sometimes listening to Dan is like listening to a warlock explain the dark arts. That delay effect is so drastic and sounds so good yet so easy to achieve, souls must have been bargained with to learn the trick! 🤯
I don’t even hear what’s going on sonically. Like… how’re all his hard panned elements “chasing” each other in the 3-D spectrum?
@@leaveitorsinkit242 watch it on a good pair of headphones
You have really upped the ante here! All I can say is wow. The best "tinkerer" ever. Your last trick is just like what we do with voltage control circuits in synths to shape the sound. Cheers
Fuck yeah Dan. I can't tell you how much I appreciate your sharing these pearls gleaned from years of deep diving. 🙏
Fantastically explained. I've noticed that on a lot of vintage songs the panning is hard left and right; more modern songs tend to use the special stereo headphone sound
It's called anti-phase
Please, do a video on that ambient guitar effect! Sounds awesome
I love airwindows
I zoomed in and it said RealVerb
@@nexusobserve well I just made a wild guess and I'm pretty sure it would yield a similar sound
No Airwindows. No phaser. Guess again ;)
@@DanWorrall my first guess would've been a spring reverb
I personally love the unnaturalness of hard panning on headphones. I mean, it's easy to go overboard, but it's also easy to make something sound really interesting.
Done as an effect, it’s fine. But the problem is most stereo mixing comes down to a choice of what equipment to cater to. When it’s done poorly, it’s really jarring to me. Loads of old stereo stuff clearly wasn’t made for headphones. I wish there was a quick correction for it at the music player level.
@@CosmicHippopotamus old stereo stuff from the 60s for example where the drums are hard left, bass hard right, only vocals centered or whatever. really jarring
Been watching different mixing, mastering and processing channels and none of them even mentioned that dilemma.
Glad I found out about you via FabFilter and I admit I stick to your content, because it's easily explained, well represented and good to recreate.
Thanks Dan, keep it up 😊
Quick tip: if you are trying to do the Voxengo Sound Delay phase trick in Ableton Live, don't pan the whole track; put in a Utility before the delay and pan that. The pan on the actual track will affect all of the audio in the effect chain, and it won't work.
this is insane. I’d kill to have a tenth of this mans knowledge
I love how you find the things which people argue is only possible by using analog gear and show us how easy and simple it is, its all philosophy and physics!
Thanks Dan, yet another awesome video!
I use Melda's MUtility for this kind of stuff, because it's able to do a lot of things besides what's in this video while still remaining free: Haas, stereo invert, mid delays, etc.
If you don't mind me asking, how do you delay only the mid channel with MUtility? Can't figure it out :/
Thanks for the tip! I realize I already have Mutility, but have never even taken a look at it.
This is the single most brain-melting video about mixing I have ever seen. I'm gonna put it down, watch it again tomorrow, and see if I can catch everything that just went over my head. This very well might have just solved some major issues in my mixes, and I don't even necessarily mix LCR
I don't like the pace of the video. It's good information but it really gets on my nerves at some points.
@@bigboss-qv7pe Good to know
Hey Dan, did some research regarding headphones earlier this year - the answer I came to for classifying the how headphones and stereo speakers interact with our ears was the terms "Diotic" and "Dichotic".
Dichotic meaning that both ears hear different stimuli, and diotic meaning that both ears hear the same stimuli.
I posited that dichotic and diotic properties could exist on a spectrum (how different sound is at each ear), and also that it was a statement about the physical properties of listening via loudspeakers/headphones rather than production qualities.
With stereo loudspeakers, hearing in a diotic manner is never possible, as naturally you note that the head (and pinnae) shape and change audio flowing into the contralateral ear from the left or right speaker respectively.
Thus listening 'in the open' (loudspeakers) can never truly give both ears the same exact stimuli (which is part of our binaural localisation system for placing sounds in a 3D environment).
But it also may come down to how accurately one wanted to measure the differences in both ears to measure (unsure if currently a word) 'dioticity' (how similarly your ears hear any given sound?..). And whether the properties of dichotic and diotic should be restricted to properties of the physical realm (listening device), or if they should be considered holistically - including production considerations (i.e. if mono audio).
Hope to have provided something to consider!
Keep the videos coming 😁
EDIT: Oh and also yeah, I personally (for whatever that is worth) consider stereo NOT to relate to whether something is binaural, dichotic or diotic. To me, stereo is strictly about having two channels of audio. I do feel that we need a more expansive vocabulary for discussing the differences between headphones and loudspeakers, not suggesting that this is squarely your responsibility of course haha! Hopefully this may help spark thought in other's minds though 😊
Yes... You have initiated some response flurries in my cerebral cortex.
Yes, when he says it's "pedantic" to call headphones "binaural" and speakers "stereo", I'm not convinced it's pedantry. I think it's actually just wrong. :-D "Stereo" is not just used to describe two-channel audio, it's more of a medical or scientific term to denote that something has correlated pairs of stimulus. Think of 3D glasses, for example.
There's a vernacular implication that when you say stereo, you mean audio content that has a soundfield presented with two channels of audio. Contrast that to dual mono, which is just two channels that may or may not have anything at all to do with each other. Or binaural, which I've always taken to convey something a little harder to describe -- like a stereo presentation that is meant to convey 3D space the way your ears would perceive if they were _in_ that space.
Suffice to say, I don't think stereo vs. binaural has anything at all to do with the mixing of audio in space on playback. If it were used in that context as a clarifying aid for the purposes of discussion, then fine. But I feel a blanket statement like "headphones are not stereo (they are binaural)" is probably verging on a fallacious statement for the purpose of being sensational.
That "mysterious little option" instantly added so much depth, like night and day.
dear lord ...
the depth of this man's knowledge in all aspects of sonic production is apparently unfathomable ...
I am officially hooked ...
I have absolutely no frickin idea what you're talking about most of the time. This is why producers with a creative bent need engineers with technical expertise. You could probably do my job and a tech's job no problem. Incredible.
Yes! Depth, not width - totally agree. Thanks Dan, brilliant as ever :)
You just blew my mind with the trick of EQ'ing just the mids or the side in minimal phase mode.
From my own experimentation, I knew that 1 ms would make a huge difference with the mid-side delay.
You've just explained something I do without knowing why I was doing it. 😂
Thank you
"Something that never happens in real life" - unless you're Brian Wilson... thanks for these videos, Dan, they are the best music production tutorials I've found...
i've been doing all sorts of ITD and HRTF panning in my work for some time and i'm happy to see something along the same lines here - f the phase police indeed, 2-channel audio is a playground
I kinda love these pedantic video titles; because though occasionally they are just click-bait and I get fleeced, sometimes they are videos like this one, which I wouldn't normally think to click on, much less search for. What's more is actually being something I could directly try out myself in my sound software without requiring a complicated setup. Great video!
I've learned to watch all of Dan's videos regardless of if I think I care about the topic... (what do I care about LCR mixing?) -- I always end up learning a ton of fascinating general audio art and science. Thanks Dan.
Seriously! I learn more in a video where Dan looks at a specific plug-in that I'm never going to use than I do from any other YouTube tutorials.
@@TjMoon91 Dude, I got a master's degree in Music Production at Berklee College and we didn't even scratch the surface of this kinda stuff Dan explains so eloquently. Could've saved me a boatload of cash!
Any time I watch Dan's videos I'm sat there in awe… he has a way of taking these highly technical concepts, explaining them, and showing you how in practice they can be extremely helpful and produce incredible results!!
Fantastic video! I actually used this very same technique to make a mono-to-stereo, fully mono-compatible processor (I made a Reaktor ensemble). It injects the mono signal into the side and delays it by an amount chosen by the user (small amounts btw).
I discovered that this works very well also on stereo tracks, not only mono ones!
By watching this video I realized that delaying the mid channel is SO COOL! Also the all-pass is a very great idea!
I should modify my ensemble to make it a powerhouse filled with Dan's knowledge 😀
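A bare-bones version of that mono-to-stereo idea (mid = the dry signal, side = a short-delayed, attenuated copy of it) could look like this - `widen_mono` and its defaults are illustrative, not the actual Reaktor ensemble:

```python
def widen_mono(mono, delay_samples, amount=0.3):
    """Mono-to-stereo widener: inject a short-delayed, attenuated copy
    of the mono signal into the side channel. Because the side is pure
    L-minus-R difference, folding L+R back to mono returns the original
    signal exactly - the widening is fully mono-compatible."""
    n = len(mono)
    side = [0.0] * delay_samples + [amount * s for s in mono[: n - delay_samples]]
    left  = [m + s for m, s in zip(mono, side)]
    right = [m - s for m, s in zip(mono, side)]
    return left, right
```

Keep `delay_samples` small (a millisecond or two, i.e. tens of samples at 48 kHz) and `amount` modest, matching the "small amounts btw" caveat above.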
When I like something, I want to know how that thing/process works, and most of the time that leads me to the deepest roots of that thing (even to electrons or quantum theory). On that journey, besides finding new ways of editing the final product, I also find new ways of exploring the creative/artistic process. It gives me new possibilities, new ideas ... it widens the horizons ... It is so beautiful to learn about how things work. We are so lucky to have Dan Worrall sharing his knowledge with us!
When I mix to stereo (I'm moving more toward surround) I feel that it's absolutely necessary to have motion. Some parts can just sit where I place them in the field but other parts really need to travel. I've noticed that some note sequences gain emphasis through motion, like as if our hunter instincts are trying to pin down their location. While other parts can be de-emphasized by scattering them. Some notes need to start in one place and reach completion in another. I frequently automate 'Wider' to send particular parts out or in. I seldom use algorithms, like LFO's, to produce motion but perhaps I'll begin more to follow your example at the end of this video to give static parts more of a living pulse.
I certainly agree with your contention about headphones with the addendum that it's utterly impossible to keep our heads absolutely motionless unless one were to painfully clamp one's head into a fixed position. Perhaps this is another reason why headphones contribute so much to ear fatigue.
I like to watch people's heads move as they listen to my work.
So now at last we get to know how you get the depth in the recordings you've used in your tutorials. As you've hinted at before - phase is your friend. And like many people who've commented here, I'm going to watch again straightaway.
This feels like a different tier, like being a sorcerer or a magician.
Can't wait to know what you do to your guitars.
Thank you. Amazing and amazed 🍀
he went all in and it's for a greater good
5:30 was still a tad difficult to pick up on, until you solo'd an element out, thanks for teaching me something I always felt but didn't know how to acknowledge, or label
That all-pass filter technique went straight into buses of my template. Thanks again Dan for a neat trick!
Which instruments do you use it on? I tried it on hard-panned guitars, and this effect adds an unpleasant random panning effect on top of the depth. I mean real guitars, not those cloudy, obscured background guitars Dan had in this mix :)
You don't have to modulate the all pass filter. Usually I wouldn't: this is more of a special effect.
@@DanWorrall oh thanks Dan!
Every sound I hear is stereo because I was born with a magical gift:
Ears on both sides of my head, one aimed left & one aimed right.
Such a coincidence, that last night I watched his first video about LCR mixing and now I see this in my recommendations, noticing it's from 4 hours ago.
I guess it's a sign for me to subscribe.
That 1 millisecond delay blew me away! Incredible lessons here!
This is an absolutely brilliant video. Thank you for sharing your wisdom with us, Dan!
That Reaper trick at the end with the allpass filter just confirms to me both the genius of your teachings, Dan, and the genius of Reaper as a DAW. I'm utterly impressed with that one!! Thank you :)
The most natural trick I personally use is utilizing some Early Reflections. It somehow emulates the natural room and brings the hard panned elements into a real space.
I also tend to give the other elements of the mix the same Early Reflections so that the whole mix sounds more cohesive. Amazing video BTW Dan!
I am the same, I like having a small space reverb with mainly ER (like a studio convolution or something) and feed everything to it to fill in the distance between instruments.
There are people who got this recommended, but if you know how to set up your stuff, this demo is kinda exhausting to listen to, but also gold.
True understanding begins with listening^^
This was exactly the knowledge-gap I needed plugged right now to take my mixes higher! Thank you Dan!
I'll need to watch this over and over to really understand what I need to do. This system is beautiful.
2:55 We want you to show us HOW YOU DO EVERYTHING!
👍 yes 👏
No cap + the guitar effect
AMAZING video. A hard-panned channel not being solely on the Side blew my mind.
If I ever do an LCR mix or want to add depth to a song I'll come back to this.
This is fascinating! As someone interested in composition and mixing, knowing these tricks (and pitfalls) seems very worthwhile.
I've been looking for a few years now for the answer to this 'thing' that pro mixes/masters seem to do differently when I A/B reference tracks. They've always been able to achieve a kind of 3D feeling to the instrument positioning, whether it be on headphones or speakers, but also not lose anything when listening in mono, like on a Google Home speaker. I put it down to expensive hardware that the pros use that I could never imagine affording in a thousand lifetimes.
Turns out I was wrong, THIS is it. This is what I've been searching for, this is the answer. By god, thank you so much for this. This is the best 16 minutes I've ever spent looking to further my knowledge in audio production, I am extremely grateful for your time and effort in sharing this.
Easy sub, gonna grab a drink before I binge everything you've uploaded.
It's been a while since I've actually learnt something new from a tutorial, but man this was informative.
right?
Well this was really interesting and mind-expanding. The clickbait title was answered within 90 seconds, but as I heard him expanding on what you can do with it once you know that, I couldn't look away. This seems like the result of decades of engineering experience, the sort of stuff I'm not even sure Alan Parsons was fully aware of and utilising.
Add to that the fact that all this improv source material is by the same guy. Absolutely brilliant.
I will have to watch this a couple more times before I fully understand it, I think.
I love this stuff! Dan’s great, and if we all just appreciated having the input of an experienced sound engineer for free or very little money, there is no argument. And if Dan did not have a point of view of one flavour or another, it would make him rudderless just like it would make anyone rudderless. Once you have a POV you can begin to work from there, and even account for your own bias in having it. I have come to the conclusion about binaural, that it’s superior to stereo, and in fact, is what early on people may have expected or wanted, stereo to be? That’s just my POV.
I was listening to the increasingly great-sounding (I’m older) Tangerine Dream, and some Edgar Froese, and the ability to completely send a source sound to only one ear means the stereo effect can be more radical? So this TD track has a shimmering guitar chord that seems to be roaming R to L so much, its effect is that it seems to rotate around your head, I realise. This is pretty good for the 1970s or 1980s. It doesn’t quite sound like it’s at the back of your head, it’s more like it goes around behind a barrier that’s behind you, but it’s awfully good.
I also used earbuds to listen to a Robin Trower track (‘What lies beneath’ I think), and in this binaural mode of listening, I can picture with a lot of precision, the exact location in 3D space where the different instruments are appearing to emanate from. This really doesn’t happen with a pair of stereo speakers - you’d need a surround-sound system with a curved array rather like pixels of a modern computer screen/TV. Earbuds are a lot cheaper?! Anyway, these are my modest thoughts on the matter.
I have one poser for you Dan - it MUST in theory be possible to make that TD track guitar that seems to rotate around me in binaural listening, actually sound like it’s going behind me? It must be possible to fake it, with only two devices attached to me, because I only have two ears, yet, I can tell in reality, when something is behind me. Is it too subtle an effect for us to capture without some AI system processing the sound, and simulating the effect of our ACTUAL ears, being hit from behind the ear instead of in front? Here’s my wacky theory - that it has not been done before now suggests to me it involves an effect that simple headphones or stereo earbuds cannot simulate, and that could possibly be moving air, hitting the minute hairs on the ear surface, and your brain by this method, knows the sound is from behind. Over a period of years, your brain will learn, what certain common sounds, sound like, from every possible angle.
This database you possess, is much more complete however, for everyday sounds eg a motor vehicle, than say a guitar chord. So, making a motor vehicle sound, and getting it to sound as if it’s coming from behind the listener, must be easier, than trying to get a guitar chord to sound as if it’s doing exactly the same motion relative to our heads/ears? I maintain, that if some algorithm could be built (or generated, hence I said AI), to process sounds to make them appear to come from behind us, then it could presumably be tweaked to accommodate for our individual head dimensions and hair even, to really fool us.
But, if the surface of the ear is involved, and the hairs, the brain won’t be fooled by earbuds alone - it will need a vacuous pair of headphones if not a ‘head-dome’ so that these sounds can, in effect, REALLY be made to come from behind us. As I say it would best be effected by a few dozen very small speakers spread across like pixels are, entirely ‘coating’ the inside of the ‘head-dome’, but this would be sort of cheating…because you’ve REALLY made the sound emanate from behind the ear! Surround sound also cheats, because again, it simply makes the sound come from behind you and therefore is the same phenomenon in reality as a ‘real’ sound in the real world.
I just wonder if it’s not possible to fake a fully rotating sound around your head, for this reason? In other words, why does my brain simply and clearly assume and arrange stereo sounds to be at a 180 degree arc in FRONT of me, as they seem to be, when using either binaural or stereo sources to listen to? There is some phenomenon at work that means my brain doesn’t accidentally suddenly think the sound at, say, 1 o’clock to the right in front of me is at 5 o’clock behind me instead. I suppose you could do a test, by blowing air gently and silently at your ears while listening to a track or sound, and see if your brain suddenly decided to treat the source as behind you not in front?
Anyway, I’m waxing on. Thanks Dan for another enlightening video and tutorial on audio science. I’m off to get our cooling fan, and put my earbuds in, and see if Robin or the TD track, can be made to seem to be behind me, if my ears sense moving air from that direction!
I don't know about creating a 360 degree environment in processing, interesting idea. But binaural recording achieves what you describe. You might already be familiar, but it uses microphones positioned inside the ears of a fake human head. Check out virtual barbershop for an example.
Your voice, the pacing of your speech and the music are ASMR. Listening on a phone with Bluetooth speaker.
Dan, your videos are so informative. Here's a like. Cheers.
The most beautiful imaging tutorial I've ever seen. Thanks Mr. Worrall.
This is why MSED gets put on everything in mixdown.
Can you elaborate? How do you utilize MSED?
As Always… Boggling… but Clear and DEEPLY HELPFUL. THANK YOU.
What the hell? That's all. I'll be watching this one 10 times. Until then, I won't be casually high-passing my side signal...
I've been casually high-passing my side at 80-110 Hz with the Brainworx hybrid EQ. This just made me way more careful about that
I use Goodhertz Midside for this type of stuff, especially monoing the bass. I've never realised I could be introducing any phase issues before and I presume the Goodhertz people are smart enough to have taken it into consideration.
@@AfferbeckBeats after watching this video, I wouldn't presume anything.
@@nexusobserve So, maybe all of us have been! Isn't a point of using Mid/Side that you CAN HPF your sides to clean up the bass. So... rather than casually high passing our sides, what's the new intentional option?
Don't filter the side channel unless you need to; listen to how it changes the stereo image; if you don't like that change use a linear phase HPF.
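The thread above can be made concrete in code. A minimal NumPy sketch (my own illustration, not from the video): encode L/R into Mid/Side, high-pass only the Side with a minimum-phase filter, and decode back. Because the filter shifts the Side's phase, Mid and Side no longer cancel exactly on decode, so a hard-panned-left tone leaks into the previously silent right channel even though the tone sits above the cutoff:

```python
import numpy as np

def ms_encode(left, right):
    # Mid = sum, Side = difference (scaled by 0.5 so decoding is exact)
    return 0.5 * (left + right), 0.5 * (left - right)

def ms_decode(mid, side):
    return mid + side, mid - side

def one_pole_highpass(x, cutoff_hz, fs):
    # Minimum-phase one-pole HPF: it shifts phase as well as removing lows
    a = np.exp(-2 * np.pi * cutoff_hz / fs)
    y = np.zeros_like(x)
    x1 = y1 = 0.0
    for n, xn in enumerate(x):
        y[n] = a * (y1 + xn - x1)
        x1, y1 = xn, y[n]
    return y

fs = 48000
t = np.arange(fs // 10) / fs
left = np.sin(2 * np.pi * 200 * t)   # 200 Hz tone, hard-panned left
right = np.zeros_like(left)

mid, side = ms_encode(left, right)
# High-pass the Side only, well below the tone's frequency...
new_left, new_right = ms_decode(mid, one_pole_highpass(side, 100, fs))
# ...and the phase-shifted Side no longer cancels the Mid: energy
# appears in the right channel, which was dead silent before.
```

A linear-phase HPF avoids this particular leakage because it delays Mid-correlated and Side-only content identically instead of rotating phase near the cutoff.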
I love that the algorithm says 2 ableton live tutorials means I want extreme mixing content.
We are getting to a point, where everyone here should sign an agreement not to talk about Dan Worrall in the outside world. 🤫
This is gold!!!
so you don´t like to share? why so afraid?
@@makrowelle Just kidding.
His tutorials really give you superpowers. But as soon as everyone knows about them superpowers will become standard powers 😁
@@Mansardian and wouldn´t that be nice? think of all that good sounding music...
Thanks! That 0.3ms mid delay is the best tip I've come across in several years. I have a couple songs I like that were mixed LCR (which I don't like). I pulled one up in my DAW and delayed the mid by 0.3ms and now it sounds much nicer across the stereo field. :)
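For anyone who wants to try the same trick in a DAW-less way, here's a tiny sketch (mine, not Dan's code; the 0.3 ms and 48 kHz are just the values from the comment above) that delays only the Mid channel via a Mid/Side round trip:

```python
import numpy as np

def delay_mid(left, right, delay_ms=0.3, fs=48000):
    # Split into Mid/Side, delay only the Mid, then recombine.
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    n = int(round(delay_ms * 1e-3 * fs))          # 0.3 ms ≈ 14 samples at 48 kHz
    mid_d = np.concatenate([np.zeros(n), mid])[:mid.size]
    return mid_d + side, mid_d - side
```

Note this rounds to whole samples; a real plugin like Voxengo Sound Delay can do fractional-sample delays.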
It'll take me a couple of watches to digest this but I already know it sounds great. Can you use this technique on channels that aren't hard panned but that are moved just left or right of center to bring them back into cohesion when listening on headphones? I don't usually pan very much at all because it grates on me even when listening with stereo speakers but this could help to form a better image perhaps?
Search for a video called Smart Panning on SamiRabia’s YouTube channel
Don't tell this to any "audiophile", since they are gonna be livid. They will probably start telling you how headphone x presents music as though one was sitting in a concert hall. Well, provided you use some bogus 2000 buck cable and wait for the stars to align in a certain way.
Best YouTube homepage recommendation I've gotten in months, how had I never come across this channel before?! Immediately subscribed, your content is just what I needed to nerd out on!
I'm deaf in one ear. Downmixing to mono is a matter of accessibility.
The next dimension in mixing (which some people are already working with) is random (and other forms of algorithmic) movement. This is brilliant material, and while it is something I've thought about, I haven't had the time to come up with some practical applications at the mix level - only instrument and effect params - this is a great springboard of ideas around creative phase randomization. Excellent channel - and as others have noted, you have a great style and voice for explaining the steps and concepts. (Haas-n-pepper?)
Good knowledge
The phase issue with hard panned elements and EQing is something I had almost entirely forgotten about, thanks for the reminder!
Incredible vid as always. Can you (or anyone else) recommend a mid-side allpass capable plugin that would work in other DAWs besides Reaper?
Many EQs. Just look at a picture of the plugin. Sometimes it's referred to as Lateral/Vertical, Mono/Stereo, Mid/Side, Sum/Diff or simply M/S. If you use a highpass filter and set it as low as possible this will have the same effect.
@@willfrank961 I have pro q and a few other m/s capable plugins, but none of them have an allpass option. A highpass filter set as low as possible is NOT the same thing as an allpass, even on sounds that do not contain low frequency content.
@@elirosenkim3659 I didn't say it was the same. I said it has the same effect. As in is causes a phase shift. Is that incorrect? Happy to learn something new.
@@willfrank961 they both cause a phase shift, but the m/s eq would cause a shift in frequency response as well, and the allpass would be more flexible in tweaking the phase response.
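To illustrate the distinction being drawn in this thread: a first-order allpass passes every frequency at unity gain while still rotating phase, whereas an HPF changes the frequency response as well. A minimal sketch (my own, with an arbitrary 500 Hz corner):

```python
import numpy as np

def allpass1(x, fc, fs):
    # First-order digital allpass: unity gain at every frequency,
    # 90 degrees of phase rotation at fc.
    tg = np.tan(np.pi * fc / fs)
    c = (tg - 1.0) / (tg + 1.0)
    y = np.zeros_like(x)
    x1 = y1 = 0.0
    for n, xn in enumerate(x):
        y[n] = c * xn + x1 - c * y1
        x1, y1 = xn, y[n]
    return y

fs = 48000
t = np.arange(fs // 10) / fs
x = np.sin(2 * np.pi * 500 * t)     # sine right at the allpass corner
y = allpass1(x, 500, fs)
# y keeps the same amplitude as x but is shifted ~90 degrees in phase
```

Run the same sine through an HPF instead and its amplitude would change too; that's the extra flexibility (and extra side effect) being discussed.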
Dan Worrall you are the ultimate Professor of Audio. Thanks for empowering the next generation of technicians. People like you are saving me $100,000 in engineering education.
Thanks for offering the skills to help people go and work this out for themselves. Your experience shines through mate.
goodness :o
first time I've ever seen anyone acknowledge the existence of Voxengo Sound Delay!
I spent a while trying to find something exactly like that for my stream setup, because various effects were delaying my spoken audio, so I needed a way to delay the PC audio until it went back in sync
I had never even imagined to use it in my tracks
Hi Dan! I keep coming back to your channel. This is supreme content. I am by no means as educated on the topic of sound as you are. But being able to learn why my mixing skills suck is kind of priceless. I have watched and listened to all of your videos on this channel (the FabFilter ones as well) And I got to say:
Every time I consume your content I get amazed. Spot on. To the point. And most importantly why! Thank you for the awesomeness!
i know zero about what you're talking about but your voice and the music made me listen to the whole thing
Man, this is brilliant. I'm not going to pretend I understand the nitty gritty, or what should sound good, or the technique. But I know for a fact that the basis here of phase affecting how you hear panned stereo is easily applicable. I don't need to understand all of it right now to know just being aware of this will improve my stereo mixes.
Tell you what, helps that you make these videos like you're a mystery youtuber talking about an unsolved murder. But instead it's an indepth explanation of audio engineering.
So, YouTube decided I should be interested in this.
While I'm not much into audio production, and I'm not an audiophile, this did remind me of something I did.
A number of years ago, back when Winamp was *the* player to use, I sometimes encountered songs that were not great with headphones, usually because the mixing was just "this instrument goes in the left ear, this other one in the right ear".
So what I did was grab the audio processor plugin, and I wrote myself a little audio processing program, that would take the input, apply a small delay, and add the result to the opposite channel with lower volume. The difference in the experience was amazing.
I don't listen to music as much as I used to, these days. Not because I don't enjoy music, but rather because I have come to enjoy the silence.
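The little crossfeed processor described above is easy to reproduce. A rough sketch (the 0.3 ms delay and 0.5 gain are my guesses - the commenter didn't give values): add a delayed, attenuated copy of each channel to the opposite one, so hard-panned material no longer lives in one ear only:

```python
import numpy as np

def crossfeed(left, right, delay_ms=0.3, gain=0.5, fs=44100):
    # Bleed each channel into the other, delayed and quieter,
    # softening "this instrument goes in the left ear" mixes.
    n = int(round(delay_ms * 1e-3 * fs))
    l_d = np.concatenate([np.zeros(n), left])[:left.size]
    r_d = np.concatenate([np.zeros(n), right])[:right.size]
    return left + gain * r_d, right + gain * l_d
```

This mimics what speakers do naturally in a room: each ear hears both speakers, with the far one arriving slightly later and quieter.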
Brilliant title, I instantly knew I'm gonna learn something new here. Well done. As a side note, I do love hard-panned instruments on headphones, there's nothing unnatural there for me.
I was completely blown away by this, now my mixes sound rich and deep, thanks a lot man
Even as a fan of that 'unnatural, hard-panned sound', it doesn't work for everything. Thanks for the video, man. You're a legend.
When using only 1 hard-panned sound without the filter, it's like going deaf in 1 ear. Obvious and amazing!
Subscribing.
This dude sounds like narrators from science channels and such. Pretty awesome stuff.
Wow. One of the most educating and enjoyable videos I’ve watched in a while. The production quality is beyond this world!!!!! I’m not surprised that an audio engineer has that good of voiceover audio, I’m just flabbergasted by how much effort you put into your videos!
Wow! that was a real metaphorical ear opener! I'd listened to the video a couple of times and didn't get it, but when I actually tried it in one of my sessions, the difference was stark. Actually I still don't understand why this effect happens but it has blown my mind - or perhaps that's why! Many thanks!
No idea why the algorithm put this in front of me, but I'm not objecting too much. Most of that went over my head, but I did get a small insight into what I, as a layman, will just call "sound".
I can see an almost infinitely deep rabbit hole here that I could lose myself in. I think for my sanity I shall carefully go around it. That said I will carry with me a greater respect for those people in the field.
it also makes it sound glued together really nicely
This is so in depth. I feel like I'm in a university course and I've skipped all the prerequisites.