i have the opposite issue lol. my music is very low, slow & ambient, so i always struggle to keep it as quiet as my reference songs without spotify or other streaming platforms doing their own shi with it.
Ironically, LUFS was invented and adopted by the European Broadcasting Union because consumers hated having to reach for the remote every time TV adverts came on and sounded much louder than the TV show they interrupted. Likewise with YouTube or Spotify, consumers don't want to keep having to change the volume level for every song/video. The LUFS standard isn't perfect for measuring the perceived volume of music, but it's very useful in a world where we consume a variety of media from a million different sources and are often "changing the channel". I hate reaching for the volume knob.
It seems like you missed the nuance of my comment. I wasn't criticizing LUFS itself, I'm aware of why it exists and how it benefits listeners. My point was more about the practice of trying to game the LUFS system (or any system for that matter) for louder mixes. It feels pointless to me because listeners can always adjust their volume if they think a track is too loud or quiet (which, by the way, you also dislike, seeing how you mention you hate reaching for the knob, you must hate the premise of this video as much as I do).
@@Lothyde I agree with your point. I think the practice of trying to game the system is really silly and actually counter-productive (making songs super-loud ultimately makes things worse for everyone). I added the information about how LUFS came about more for casual readers scrolling through the comments wondering what all the fuss is about.
9:13 That is not true. At least not for Spotify, since it will penalize you for having higher true peak levels. Also I highly doubt that "True Peak Clipping" significantly increases the perceived loudness. I don't use the FabFilter L-2, but I guess if there are any noticeable effects it is a "problem" of the plugin. At least with my plugins I cannot verify that this has any effect.
Yup, it’s becoming a lost art to understand when and where to push volume. Skrillex and Noisia’s latest albums are phenomenal examples of utilizing the entirety of the waveform without pushing loudness. It’s funny that these bass albums sound much better than most pop music these days. An intricate understanding of the stereo field and instrument separation seems to be fading away in favor of just jamming as much sound into the track as possible.
Mmmm yeah, I love when I'm on the tube (or subway) and I can't even feel the kick or snare cutting through the noisy passengers or carriage noise. Life IS loud and we need loud but still dynamic mixes to drown it out
@@ThalamusGhipopotamus they do though. What makes you think they don't? I'm aware that perceived loudness is different than peak levels, but the two aren't completely unrelated. What does it have to do with this though? If a streaming service's loudness standard states that the material's max peak needs to be at -1dBTP and you send them material that peaks at +2dBTP, isn't there a risk that they'll turn it down 3dB? Even if they use a true peak limiter to reduce just the overshoots to meet their standards, doesn't that still make it pointless to turn in material with true peak overshoots, as suggested in this video? The only case I can think of when true peak overshoots might give you some extra loudness is if the streaming service does nothing to the material even if it peaks well above their standard level. I'd like to know if this is the case with some streaming services.
@@ThalamusGhipopotamus the video suggests that you can make your master sound louder if you overshoot with the true peaks, but that wouldn't work if the streaming services then turn the volume down because of those same overshoots, right?
@@JiihaaS If this is the case, then in theory they will lower the level and with it the volume. But then there will be a discrepancy in lufs. In any case, the required values must be indicated on the service pages.
@@the95s Have you been checked for autism? And just to entertain your thought: if having to mess with the volume gives a negative impression of the song, then you shouldn't be trying to be louder than everything else either.
@@blaness13 it amazes me to think that a song with a significantly lower perceived or measured loudness is not going to affect the user's perception of the song (amateur, not well mixed, etc...)
While Atmos releases emphasize spatial sound and immersive experiences, the principles of dynamic range and perceived loudness still apply; a well-balanced mix is crucial regardless of the format. Neglecting these elements can lead to a less engaging listening experience, even in a cutting-edge format like Atmos.
@@sageaudio Specifically I was referring to volume targets for Atmos releases. Integrated Dolby Atmos loudness measurement should be -18 LKFS or less. The true peak level should be -1 dBTP or less. So the "loudness war" isn't a thing in that realm, for multiple reasons, mostly to do w/ the dynamic range of the listening environments. Setting perceived loudness targets makes much more sense if you're mixing for 2-channel stereo or "speaker puck" mono, but binaural and other isolated multi-channel rooms are much more controlled environments, so pushing overall perceived loudness boundaries isn't necessary. I think what's more important there is striking the correct balance of proximity between channels. Supposedly the Atmos renderer will do a reasonable job with the 2-channel stereo fold-down from there, but it will sound much different than a mix which is specifically targeted to 2-channel stereo with perceived loudness goals. This may just be a norm that will change with time. Or not.
The loudness war is killing music by reducing dynamics and thus expression. That said, is there any actual scientific paper demonstrating that louder songs really get more views?
It's hard to separate causes and effects, but various studies have shown that the biggest hits each year are measuring louder than previous years. By 2050, I expect the biggest selling song will just be a square wave at 0dBFS. We might not be ready for it, but our kids are gonna love it.
Am i the only one who hates how loud music is now? I hate songs with no dynamics. I normalize everything so my ears don't hurt, and the stuff compressed and limited to hell sounds so boring after that.
Normalization varies between genres. Most genres aren't really dynamic, and there's no climax or the drop isn't innovative compared to the other parts of the song, so u won't see the dynamic changes. Additionally, most music producers use limiters and compression on the master, which is why u get the flat sound, even in the most popular songs. Change what u listen to mate.
@@nunu9849 Everywhere! Vocals are the most important sound in a mix that has vocals as the main focus! That's everything from old "doo wop" tracks, thru pop, rock, metal and the most modern pop songs in 2024. The uploader already spoke about "vocal anchoring" while trying to explain how to achieve the most perceived loudness and how any masking can take away from this. Given that main vocals almost always occupy a certain frequency range in the mids, not too low, not too high, and these mids have a greater effect on integrated LUFS than the rest, vocal compression is clearly of importance when mixing in this context, as one can and does bring the vocals to the forefront of a mix and anchor everything else around the perceived loudness, using compression and stereo imaging to give it depth, width, breadth and of course, loudness. Go and listen to the original official track of "Master of Puppets", then go and listen to any track from Ariana Grande's new album "Eternal Sunshine" to see what i'm talking about. The vocals of Ariana are louder than the entire Metallica song with all their drums and guitars. It's like, -2 LUFS integrated VS nothing even close. This video highlights, accidentally, the importance of vocal compression as one of the main factors in loudness, short-term and integrated LUFS and perceived loudness when compared to other tracks.
@@doktordrift948 Yes! Im gonna listen to that Ariana Grande track, im so excited to check it out 😃 btw i know their mastering engineers do a lot of work and their tracks are often at -5 to -4 or even louder sometimes, but its so crazy how they achieve something like that.
@@nunu9849 The real magic is the fact that they can not only make the vocals that loud and prominent in a modern pop mix, but how full, wide, textured and natural they make it sound while not compromising any other parts of the mix. These engineers win the "loudness war" without any real trickery and manage to fully capture your attention over the likes of the next metal song or full orchestral piece, just with some relatively "thin" vocals. Bringing vocals, or lead guitar, or a stack of violins or EDM bass or even a rock drum kit to "the front" of a mix is paramount to loudness, and I believe that compression, before most things, is key to that!
Why! Why fool the streaming algos?! Why?! Consumers DON'T care about loudness... when a song is quieter and the song is good, the consumer turns the volume up, MORE than any trick or algo does. Simple as that. These tricks just force the streamers to modify their algos to prevent this stuff, because the streamers want the most pleasant listening experience for the user! The loudness thing is ONLY a thing in the production world. Stupid! If the biz put the same energy into creating good music that it puts into fooling the algos, the biz would have fewer problems. I've been testing average people with music, mixes and masters since 2003. In 21 years, really NO consumer has argued that the tracks are too quiet, and my work sits around -10 to -15 LUFSi. That's how I know loudness is not the key to the hearts of consumers; I've never heard a consumer call a track good because of its loudness. So this video wants to tell me I have to make vocals louder and sound like karaoke to be louder on Spotify??? Really? Same with the other stuff. Hell, it's music, not a car that has to be trimmed for low fuel consumption or the highest possible speed. It's MUSIC!!! When I make, mix or master music, I only make sure I'm above -14 LUFSi, because it's easier for the streamers to turn stuff down than to push it up. End!
@@gulagwarlord A lot of that is due to loud, sustained sub frequencies. This tends to inflate the lufs values. In reality, they don't sound a lot louder than -6 LUFS tracks in other genres. As far as how it sounds, density = good, but in balance with punch.
Who would've ever guessed the thing that was supposed to end the Loudness Wars only made new and different ways to Loudness War.
i haven't noticed any change tbh, most new releases that i happen to like are compressed to death, the classic way, to the same degree (if not more) as before.
@@victortitov1740 i guess that type of sound essentially became a kind of aesthetic and is what artists want their music to sound like?
@@victortitov1740 more emphasis on the low end and wide stereo field now
where there's a will there's a way
The secret to loudness: Make good music so you can make people turn it loud.
The secret is to design loud kicks, snares, basses, leads, etc., then arrange and mix them to be loud, with mastering being a tiny part of the equation. Basically you need to saturate every sound to its limit without destroying it, which just sounds better for most sources anyway... and then saturate the instrument busses and then the 2-bus.
And don't forget to get a waveform analyzer like a spectroscope (just an example) to see what saturation actually does to your waveform and how it tames peaks 🎉✌️
@@p.b2505 MiniMeters clears everything on the market in terms of visualization for $10. absolutely worth every penny
@@gulagwarlord just use a high-end analog console like I do. No more saturation needed.
@@gulagwarlord You hit the nail on the head. Push it all to its limit.
this video is about to make so many amateur producers create the emptiest, harshest mixes.
No worries, there are tons of music makers and producers, and we all add our own sauce
This actually blew my mind! The concept of „Vocal Anchoring“ really makes sense! 🙌🏼
The Japanese music industry has been using this method since the 90s, because most people here don't have speakers that can actually drive low-end frequencies. That's why a lot of Japanese-made music has a kind of weird mix where the vocal sits further in front than the instruments. And before the loudness war, most engineers just cut the lows and boosted the highs, because everyone thought it meant higher fidelity. In the end that's what makes a track perceived as loud on venue speakers and in malls, but the problem is, when you listen on proper speakers it sounds weird.
Tho as a fellow musician id say just mix your track how you like it and don't let commercial pressure get in your way. It's your art and your soul is in it; you don't wanna ruin that.
It’s almost like they forgot that people can just adjust their volume, but instead they decided to sacrifice the quality of the mix.
Can you please give 2-3 examples of these weird mixes from Japan?
Really cool insight.
Loudness war is over and Loud won.
I fought the loud and the loud won.
Sadly
I've always mixed into a Dorrough loudness meter, and I've noticed that basing the mix around the vocals is easier than mixing around the kick and bass. I seem to get better results, more clarity, when the vocals are the focal point. Great video!
Complicated topic today, however, if i can give my 2 cents: when mixing, aim to properly sit each channel or instrument so that it all fits and flows, while during mastering aim to find a balance between the lows and the high mids, since the high mids are the ones that trigger the LUFS reading the most. If you boost the high mids too much, the LUFS will read a higher level overall and streaming services will lower the volume of your track. However, if you cut the high mids too much, letting the sub and lows punch through, the track will sound dull and lifeless, even though quite possibly a little louder. All this is based on my experience as an electronic music producer, not a recording engineer for acoustic-instrument-driven music, so the outcome might vary significantly.
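For anyone wondering why the high mids dominate the LUFS reading: the meter runs a K-weighting filter before averaging, and that filter emphasizes the upper mids and highs while cutting the lowest lows. Here's a rough Python sketch of the effect; the shelf below is my own crude stand-in for illustration, not the real BS.1770 filter coefficients:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def rough_k_weight(x, sr=48000):
    """Very rough stand-in for BS.1770 K-weighting: a low cut around
    60 Hz plus roughly +4 dB of extra energy above ~1.5 kHz (added here
    as a high-passed copy). Not the real filter."""
    sos_lo = butter(2, 60, "highpass", fs=sr, output="sos")
    sos_hi = butter(2, 1500, "highpass", fs=sr, output="sos")
    y = sosfilt(sos_lo, x)
    return y + 0.6 * sosfilt(sos_hi, y)  # ~+4 dB in the high mids

def mean_square_db(x):
    return 10 * np.log10(np.mean(x ** 2))

sr = 48000
t = np.arange(sr) / sr
for freq in (60, 300, 3000):  # same amplitude, different frequency
    tone = 0.5 * np.sin(2 * np.pi * freq * t)
    print(f"{freq} Hz tone reads {mean_square_db(rough_k_weight(tone, sr)):.1f} dB")
```

Even with this crude version, the 3 kHz tone reads roughly 4 dB hotter than the 300 Hz tone at identical amplitude, which is why taming or boosting the high mids moves the LUFS number more than almost anything else.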
This is really helpful bro thank you
Your two cents is just to mix and master properly....? 😂
Appreciate you sharing your thoughts!
@@Tatman2TheResQ it could help a beginner, don't be like that
@@UndergroundPlu9 considering basically nothing was said outside of "balance the mix" I don't think it's going to prove that helpful unless the newbie has no clue what mixing even is... in which case idk why they'd find themselves here to even read the comment.
For me, what makes music sound good and stand out has nothing to do with loudness. (very interesting video, though :)
- Clean mix with good separation
- Dynamically pleasing
- W I D E soundstage (with zero mono compatibility, lol)
But you know how many average consumers listen to music through their bluetooth speakers...those things often end up summing the audio to mono, so surely mono compatibility isn't completely redundant? And before you say they don't care, I've heard plenty of normal consumers listen to songs through those speakers and say "this sounds weird on this speaker..?"
@@FinnJain You're right, I know Mono is still a thing, hence the 'lol' at the end :)
@@albiss1164 Hahaha yeah, tbf, things have to be like really really inhumanely wide before they start sounding 'weird' in mono anyway
@@FinnJain I like that 'really inhumanely wide' part, lol but true. It's kind of amazing what a few more dB's in the Side channel can do, while also being widened a bit.
It doesn't take much to create an immersive audio field.
@@albiss1164 Yeah, by 'inhumanely wide' I mean those noobs that be putting stereo shapers on their subs or basses (not the low mids) and calling it 'unique'. Width is something I really overlook when mixing and mastering but it really can help so much when done subtly
"This video might be a little dry and technical"
It's sad how happy that made me
My understanding is that Integrated LUFS measures average loudness over the full duration of the song. So you can effectively "cheat" the system by having super quiet sections that drag down the Integrated LUFS measurement. This will make the Integrated LUFS read lower even though the peaks are quite loud. The result is that your track gets turned down less in relation to the peaks.
I actually think this is a good thing because it encourages songwriters and producers to make more dynamic songs. Let's say you have a bridge where everything drops out besides piano. That section might measure at -20 LUFS. But then your chorus might measure at -5 LUFS. However, that might result in your whole track measuring at -9 Integrated LUFS. That means it's only getting turned down by 5 dB. But if you have a track that's just slammed at -5 LUFS the entire way through, that track will be turned down by 9 dB and will therefore be 4 dB quieter than the more dynamic track.
This interpretation comes from Justin Colletti at SonicScoop, btw. He's a professional mastering engineer. So provided I understood him correctly, this should be accurate. And it seems like a far more effective way to get a louder Spotify track than some of the methods mentioned here, which seem to get a bit too into the weeds.
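If you want to sanity-check that arithmetic, here's a small Python sketch of the idea. It assumes a -14 LUFS normalization target and treats integrated loudness as a simple duration-weighted energy average of sections, ignoring the gating that real BS.1770 meters apply:

```python
import numpy as np

TARGET_LUFS = -14.0  # assumed streaming normalization target

def integrated_from_sections(section_lufs, section_seconds):
    """Naive integrated loudness: duration-weighted energy average of
    section loudness values, with no BS.1770 gating applied."""
    lufs = np.asarray(section_lufs, dtype=float)
    dur = np.asarray(section_seconds, dtype=float)
    energy = 10 ** (lufs / 10)  # loudness -> mean-square energy
    return 10 * np.log10(np.sum(energy * dur) / np.sum(dur))

# Dynamic track: a 1-minute piano bridge at -20, 3 minutes of chorus at -5.
dynamic = integrated_from_sections([-20, -5], [60, 180])
# Slammed track: -5 LUFS the whole way through.
slammed = integrated_from_sections([-5], [240])

for name, lufs in [("dynamic", dynamic), ("slammed", slammed)]:
    gain = TARGET_LUFS - lufs  # negative = platform turns the track down
    print(f"{name}: integrated {lufs:.1f} LUFS, turned down {-gain:.1f} dB, "
          f"chorus plays back at {-5 + gain:.1f} LUFS")
```

Because the average is over energy rather than dB, the quiet bridge drags the measurement down less than a linear average would (the dynamic chorus ends up about 1 dB louder here, not 4), and the platform-side gating mentioned in the replies below shrinks the effect further.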
In theory yes but apparently some if not all the platforms take this into account and parts of the track that go below a level threshold don't get included in the lufs calculation :(
@yllekjs76 I wonder if those sections don't count at all or if there's just a minimum value it pretends is always being reached? As long as you'd be at that value or lower, it should still help if the latter is the case. Even above such a limit, you can probably make a bit of a difference
@Gnurklesquimp2 if you search rx optimize you should find a few videos explaining what's happening
Bro, what?
@@jonalexandermoret3581 They think that because quiet parts drag down the average loudness, even the loud parts get decreased by a lesser amount as a result of that.
Think it used to just normalize volume downwards, which meant quiet parts had a lot of trouble even just being audible vs. hyper-compressed and super bright tracks that make people lower their volume, especially in environments where there's other sounds like speaking.
Debussy's Clair de Lune is around -18.5 LUFS.
and it's perfect like that.
Lmfao, what? There are countless recordings of Clair de Lune, a 120-year-old piece, with wildly different loudnesses/LUFS values throughout time. There is no such thing as a "perfect LUFS" for really any style of music, but especially so with classical music due to the much greater dynamic range present in the performance.
Sage audio coming through once again with some advanced knowledge that I didn't know I needed!
Thanks for watching :)!
Clever. But here’s the thing. When a song comes on that sounds louder to me, I turn down my volume to compensate. It’s literally impossible to affect the loudness of a track for any listener who has a volume knob handy. So, why do we keep hurting our mixes and masters in vain pursuit of the impossible?
In reality, it is sometimes hard to explain to clients (even my bandmates) when a song hits -8 LUFS (i mastered the song myself; after hearing the feedback i got a mastering engineer to do it) but they say it still sounds softer compared to so-called “commercial tracks”. 🥲
Louder stuff just sounds better to a lot of people (myself included). Even if you turn Spotify volume down, you’ll still feel the intensity of a loud master. The squashed dynamics is kinda cool to listen to for me, but obviously it’s very subjective
Ah nothing I love more than scrambling for the volume knob because some asshole wants his song louder than the rest
THIS.
Frankly, the fact that there is such a thing as a loudness war in the LUFS era is an excellent argument for completely replacing the inaccurate and outdated LUFS measurement with something that actually measures psychoacoustic loudness (such as a neural net trained on human perception of loudness). Over the past decade and a half I've watched music grow ever more unlistenably harsh, peaky, and centered around the vocal frequency range; I can't listen to most full albums any more without getting a headache. The race to be heard as loud as possible is altering not just the way music is mixed but the way it is arranged. Decades from now, when these current standards have faded away, the music that stands the test of time will not be the music engineered to be as loud as possible on a briefly dominant streaming platform that has never made a profit in its existence, but the music that played back quieter in that era because its engineers chose to ignore the arms race in favour of what sounded more pleasant and less fatiguing.
ive got a neural network trained on the human perception of loudness in my head its called a brain
You raise an important point about the impact of loudness on listening experience and music quality. While exploring new measurement methods could be beneficial, it’s crucial for artists and engineers to prioritize musicality and emotional impact over sheer loudness, ensuring that the art remains enjoyable and timeless!
@@cltuxunink Of course I mean for automated normalisation on platforms that require it
Amen
Really well put!
This reminds me of what Bob Katz calls the Acoustic Advantage. It’s that music with acoustic instruments sounds louder than compressed synthetic ones.
When you say "acoustic" instruments, are you referring to actual instruments recorded live/in the studio? And if that's what you mean (I think that's what you mean), why do you think that is? And/or, how did Bob Katz explain why that is?
@ I went back to review the concept, he has a video on his YouTube page, digidoo or something. My understanding is that the idea is twofold: 1. We want quiet parts to sound quiet, 2. Things like string ensembles have low PLR, as opposed to full orchestras, bands, or modern pop. We try to smash modern pop to have low PLR but certain acoustic recordings just naturally have that.
Interesting stuff, especially your point about vocal level, I think this is really important.
I'm confused by your peak-level comments though. I just measured this from Spotify with and without normalisation, and got -7.5 LUFS with +1.5 dBTP, versus -14 LUFS / -4.4 dBTP. This is a PLR of 9.6 un-normalised, and 9 with normalisation, so virtually the same and actually lower with normalisation enabled. The normalised level is exactly where I'd expect and we're definitely not hearing 1.5 dB of extra clipping (on desktop at least) so I don't see how this can influence the perceived loudness. In fact in my tests, clipping a decoded file sounds less impactful, not more.
Let me know what you think, or have I misunderstood your point ?
Wonderful explanation for something so difficult to comprehend!
The hard truth about loudness is some people do it very well and sound clean at ridiculous levels like -4 LUFS through sound design, arrangement, mixdown, etc., and everyone else that can't do it as well or lacks understanding just gets salty and whines on the internet about "muh loudness wars". Just read through the comments. It's all so tiresome.
thank you finally someone with my brain
i’m hitting -4 and fairly clean lately
And if you do that clean all you really need is eq and whatever effects and you’re p much set after minimal compression and leveling. It’s a beautiful thing to specify frequencies.
Nothing at -4 LUFS sounds "clean." Some things at -4 sound good, but there are certain aesthetic constraints that generally have to be followed to make this possible. If you're a Noisia-tier producer you can pull off all sorts of things, but a lot of people think they are better at this than they are. Feel free to call me salty again but I'm not against loudness, I'm not saying your heavy electronic tracks need to be at -14, I just don't like when people delude themselves about the inherent tradeoffs once you start getting really extreme with it.
@@jp5519 im producing for top500 artists, the market demands transparent loudness at this level. and yes. its clean.
Fool-proof way to make music louder: Bang head on wall repeatedly. Now, chant the sacred walrus mating call. Congratulations, you've unlocked the next level of loudness.
I'm against all wars including loudness war
Amazing analysis! Bringing home intuitive insights I've had, but could not quite pinpoint. Thanks a lot!!!
Thank you for watching!
I think it's important to add in every video like this: loudness is extremely GENRE DEPENDENT. You can crank loudness in your sleepy lo-fi or ambient tracks, for example, but I don't believe it would help 😂
Yep! This is only for particular genres as I mentioned in the video.
@@sageaudio oh, i suppose I wasn't paying attention. I'm sorry :)
Awesome vid and great concepts and demonstrations - thanks
Hey bro make a video on vocal saturation explaining how to heavily saturate vocals without distorting it
Is that a question or a demand, bro
heavy saturation is distortion, u can try parallel saturation/parallel compression (which can become distortion too in some cases)
"how to heavily saturate vocals without distorting it" 😂😂😂😂😂😂
Tubes, tape, analog console input transformer drive/gain, analog EQs, compressors, etc.: they all 'saturate' in some way, rounding out transients and allowing you to push sounds further. So pick your poison; they all have different aesthetics, tones, shapes and applications. This is the learned skill and experience of a good producer and mixer.
Saturation is distortion. Try multiband to saturate just the upper harmonics of the voice and use asymmetrical aka even harmonic saturation unless you want a more hollow sound.
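To make "asymmetrical aka even harmonic" concrete, here's a minimal numpy sketch of my own (an illustration of the general idea, not any particular plugin): biasing the input before a tanh curve makes the two half-cycles clip differently, which is what creates even harmonics.

```python
import numpy as np

def symmetric_sat(x, drive=2.0):
    """Symmetric waveshaper: both polarities clip identically,
    producing only odd harmonics."""
    return np.tanh(drive * x)

def asymmetric_sat(x, drive=2.0, bias=0.3):
    """Asymmetric ('even harmonic') waveshaper: a bias before the curve
    makes positive and negative half-cycles clip differently."""
    return np.tanh(drive * (x + bias)) - np.tanh(drive * bias)

# Drive a 1 kHz sine through both and compare harmonic content.
sr = 48000
t = np.arange(sr) / sr  # one second -> FFT bins are exactly 1 Hz apart
sine = 0.8 * np.sin(2 * np.pi * 1000 * t)

for name, fn in [("symmetric", symmetric_sat), ("asymmetric", asymmetric_sat)]:
    spectrum = np.abs(np.fft.rfft(fn(sine)))
    print(f"{name}: 2nd harmonic {spectrum[2000]:.0f}, "
          f"3rd harmonic {spectrum[3000]:.0f}")
```

For the multiband version described above, you'd apply a curve like this only to a band-passed copy of the upper vocal harmonics and mix it back in under the dry signal.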
Push it all loud. Use clippers, saturators, and linear phase EQs and push it all.
Then it's too loud and brick walled so I turn it down and it becomes boring sounding. Tracks with more dynamics sound better to me and I can have them louder without hurting my ears.
@@Akyuu2608 It's entirely genre based. EDM if you're not practically clipping the mixer you're never gonna compare in loudness. Your mix is like half of how the track is gonna feel, which is what matters in EDM. Other genres it doesn't matter nearly as much.
Manny said once, there is something in 23k, the potential of music is stopped by technology. Thank you for your artwork man.
23kHz?
If you mix it right early on and use good sounds, there's no need to reduce your dynamic range, which could make it sound harsh and annoying. So be careful, and trust a couple of people's ears and your intuition. It works out best when you engineer your sound based on quality, not quantity.
All technical equations aside, I have loaded many songs into DAWs for analysis and technique learning. There are so many that peak beyond 0, sometimes as high as +6. The main thing is to use your ears and not your eyes. If it's too loud you will hear distortion.
youtube mp3? I hope not, bc of the codec. I hope you used high-quality wav files for your analysis, otherwise it's kinda pointless you know?
@@blackears2245 true. Imported from commercially released compact disc.
Finding the right balance between technical precision and listening experience is key!
I personally think the problem is people are measuring loudness by the mix. It's the individual instruments' loudness that counts. You can have a fast-paced song full of busyness or a slow, sparse song with the exact same elements and levels. They will sound the same loudness but measure very differently. Just take a song and slow it down a few BPM without changing the pitch: instant lower LUFS, but it sounds the same loudness. So we are all looking at the wrong goal posts.
best channel on youtube
So duck all instruments spectrally, with the vocal as sidechain. May I suggest the Sonible Smart Compressor in full spectral mode, which is how I use it on all my mixes; this way it only pushes down the frequencies the vocal is using, and that makes the ducking very transparent….
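Here's the concept behind that kind of spectral ducking as a toy numpy/scipy sketch of my own; it's not how the Sonible plugin works internally, just the general frequency-selective idea:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_duck(inst, vocal, sr=48000, depth=0.5, nperseg=2048):
    """Toy spectral ducker: attenuate each instrument frequency bin in
    proportion to the vocal's energy in that bin, frame by frame, so
    the instruments only move out of the way where the vocal lives."""
    _, _, V = stft(vocal, sr, nperseg=nperseg)
    _, _, X = stft(inst, sr, nperseg=nperseg)
    presence = np.abs(V) / (np.abs(V).max() + 1e-12)  # 0..1 vocal presence
    _, y = istft(X * (1.0 - depth * presence), sr, nperseg=nperseg)
    return y

# Example: a pad gets pulled down only around 440 Hz while the "vocal" sounds.
sr = 48000
t = np.arange(2 * sr) / sr
pad = 0.3 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 110 * t)
vox = 0.5 * np.sin(2 * np.pi * 440 * t) * (t < 1.0)  # vocal only in 1st second
ducked = spectral_duck(pad, vox, sr)
```

The broadband equivalent would duck the whole instrument bus whenever the vocal plays; doing it per frequency bin is why the result stays so transparent.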
This video is so valuable. I don't understand everything, so I'm going to go and recap, dissecting each part as carefully as I can
Just when you thought that normalization has finally ended the loudness war for good, the loudness warriors fight back. It almost feels like the constant battle between copyright protectors and pirates 🤣
The way they used to do normalisation sounded way more accurate to me. Now that LUFS is used, songs sound up and down in a playlist. It's a music industry scam. The platforms use level thresholds so that quiet parts of a track aren't included in the LUFS measurement, which leads to artists having to reduce their dynamic range to sound louder on the platform.
This is how I understand it...
What if my track does not have a Vocal? Should i just put the emphasis on the main Lead or synth?
Yeah that's crazy, I heard or perceived a huge difference 😮
But what’s the trick to not having it distort/trigger the limiter on iPhone speakers at maximum volume? I hear like certain songs at -4 not distorting the speaker and other songs at like -6/-9 causing pumping and distortion when the volumes up all the way. What gives here?
They’ve maybe used clippers and saturation and limiters to get rid of distortion
Most likely it's because of heavy saturation and clipping used on the -6/-9 songs
Mix/master through SonoBus on an iPhone and check what's doing it!
Great question! For phone playback, the trick is to maintain a clear mix with controlled dynamics, as iPhone speakers can exaggerate distortion. Even at higher loudness levels like -4 LUFS, a well-balanced mix with less aggressive compression will often sound cleaner than a lower level that’s overly squashed.
@@sageaudio that makes sense. But it only occurs at “max volume”… meaning at 1 or 2 clicks down from max, the mix sounds great. It sounds like a limiter gets engaged on certain mixes but not others. Maybe the margin on the master limiter could cause this? Like if the final master peaks at 0 or -1? Might be some under the hood stuff going on with IOS.
about the thing of peaking over 0dB... you can only upload 24-bit songs to streaming platforms, so the info over 0dB won't be kept. So isn't it the same as clipping when it's exported? How can that add volume after normalization on streaming?
inter-sample peaks caused by lossy streaming codecs like Spotify's OGG encoding will cause peaks to read above 0.0
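A quick demo of why peaks can read above 0 even when no stored sample does: the reconstructed waveform (or a lossy decode) can swing higher between samples. Here's a minimal numpy/scipy sketch of my own that estimates true peak by oversampling, roughly the BS.1770 approach:

```python
import numpy as np
from scipy.signal import resample_poly

def sample_peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)))

def true_peak_db(x, oversample=4):
    """Rough true-peak estimate: upsample (sinc interpolation) and take
    the peak of the reconstructed waveform."""
    return 20 * np.log10(np.max(np.abs(resample_poly(x, oversample, 1))))

# Worst case: a sine at exactly fs/4 whose samples all land at +/-0.707,
# so normalizing the *sample* peak to 0 dBFS hides a +3 dB crest between samples.
sr = 48000
t = np.arange(480) / sr
x = np.sin(2 * np.pi * (sr / 4) * t + np.pi / 4)
x = x / np.max(np.abs(x))  # sample peak now reads exactly 0.00 dBFS

print(f"sample peak: {sample_peak_db(x):.2f} dBFS")
print(f"true peak:   {true_peak_db(x):.2f} dBTP")  # reads about +3
```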
this is really only an educational video. you can manipulate LUFS in real time and get much better results by attending to the vocals and instruments individually, so the mix is perceived as balanced, instead of relying only on loud vocals to chase good LUFS numbers
so informative. thanks
Great video, thanks
This is good information, but to me it seems it can be summed up thusly: make the track FEEL loud, not ACTUALLY loud. Btw, none of Billie Eilish's tracks SOUND loud to me, so it doesn't matter whether they actually are or not. The normalization equation used by streaming is supposed to take away loudness-war concerns and split the difference for a track, making it sound as good as possible. I don't think I am going to fight it; I will just continue to make quiet music which sounds loud, as I always have
That’s a solid approach! Focusing on perceived loudness and emotional impact can often be more effective than just pushing for high levels, especially with streaming normalization in play.
This is a great video. Thank you
Appreciate you taking the time to watch!
Aren’t you clipping into whatever is playing back your song by peaking?
I've had to stop caring about loudness. Getting a mix loud enough to compete with modern masters is a losing battle. You have to make compromises and you're still crushing the crap out of the mix at the end of the day. So, I'm just mixing and mastering music going forward to sound good, and mastering it around the level that streaming services output, which is waaaayyy lower than actual modern masters. I'm mostly an offline listener though, I have my own library to listen to, so this is a problem. I now have to turn down the 2000+ songs in my library and everything else I buy going forward so that everything is the same volume. Hopefully things start to change and we can focus on good mixes instead of loud ones somewhere down the line.
The loudness wars will never end because heavy electronic genres aren't just mastered to be loud... they ARE loud, by sound selection, arrangement, mixdown, etc. Loud music doesn't have to sound "crushed" if you know how to write loud songs and make loud sounds. Saturation is your friend when it comes to hitting ridiculous levels like -4 LUFS.
@@gulagwarlord So you're saying artists who make electronic music have never heard of faders? And music isn't just that loud, music has to be pushed to be that loud. If all of the virtual instruments are that loud, they'll clip the master bus. There are two fixes that I know of. One would be to just turn down the damn faders, and the second would be to clip and limit the tracks so that they stay loud but don't clip the master bus. Clipping and limiting takes a toll on audio, and they don't play well with certain things. They can "blow up" certain sounds in an unpleasing way, so you have to grab the EQ and pull those frequencies out, and that's where the compromises begin. Just turn down the faders, you'll be okay. 👍
so is it a balance between making the perceived loudness greater by making the vocal relatively standout and wanting the vocal to sit properly in the mix?
Absolutely! It’s all about finding that sweet spot where the vocal stands out without overpowering other elements, ensuring it complements the mix while still being engaging and dynamic.
LUFS is doing a good job at normalizing 👍 respect to the person/peeps that came up with it
The European Broadcasting Union? LUFS was basically adopted as a standard because TV companies got so many complaints that adverts seemed louder than TV shows, or that switching from one channel to another, or going from a TV show to a DVD, a games console, or a music streaming site caused a jump in volume, which meant the consumer had to grab the remote. It was never really meant to stop the "loudness war" in music. LUFS was adopted as a standardized way of calculating the perceived volume of various media sources so that they all sound roughly the same when you flick between them. Sites like Spotify and YouTube just want their music and videos to sound at about the same level as TV shows, films, and games. Fortunately, the algorithm mostly works quite well, but obviously music producers still want to make their songs sound louder while still following the standard.
Great video. I have this plugin and the ADPTR plugin by Plugin Alliance. I would sign up for the membership, but I'm only interested in the 5 mixing credits per month.
How'd you graph out all of the different elements in a song like that? Did you use a certain software?
Fantastic content👌🥷🙏
8:22 yes, it's louder but the mix sounds terrible, the voice is way too loud. I think the key is using this trick on songs where it doesn't matter if the voice is too loud.
Did I get it right? So for loudness it's better to peak above 0 if it still sounds good, and therefore avoid true peak limiting? Thanks in advance 🙏🏻
Hmm 🤔 ok, thanks for the analysis. I think I've got a few ideas now for how to make a mix louder, which can even make it more dynamic as well...
Do you think things like Soothe 2 on the master, cutting sharp resonances, affect the measured LUFS?
Or are the peaks too short to affect the measurement?
Read the spec for the details; there's an integration time involved, so very short peaks get averaged out.
Glad you found it helpful! Soothe 2 can effectively reduce resonant frequencies, which lowers peak levels and allows a more consistent RMS level; that can raise your LUFS, even if the peaks are short, without pushing the dynamics into distortion.
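If anyone wants to sanity-check the "short peaks barely move integrated LUFS" part, here's a quick experiment sketch with the pyloudnorm library; the tone and the 5 ms burst are made up purely for illustration:

```python
import numpy as np
import pyloudnorm as pyln

sr = 48000
meter = pyln.Meter(sr)  # BS.1770 meter: 400 ms blocks plus gating

# 10 seconds of a quiet 440 Hz tone
t = np.arange(sr * 10) / sr
tone = 0.1 * np.sin(2 * np.pi * 440 * t)

# same tone with one 5 ms burst pushed ~18 dB hotter
spiked = tone.copy()
spiked[sr : sr + 240] *= 8.0

print(meter.integrated_loudness(tone))    # baseline reading
print(meter.integrated_loudness(spiked))  # nearly identical: the burst only
# touches a couple of 400 ms blocks, so the average barely moves
```

So taming short resonant spikes mostly buys you limiter headroom you can then push into, rather than changing the LUFS reading directly.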
Great insight!
Thanks for checking out the video!
Love your content
Thanks for watching!
Good information, as always, Sage Audio. Thanks.
Do you recommend exporting at -0.1 dB true peak?
The point was to go over 0 dB true peak if you want to sound louder.
@@aleksandarstojceski3139 , I see. Thanks.
@@aleksandarstojceski3139 , tks.
While that's true, the idea is really only to go above 0 dB true peak on brief peaks, if that makes sense 😄
If you simply go above 0dB all the time, it will be normalised back down, of course...
Also, sometimes the extra distortion it causes may not be worth it, so it's still a valid question.
I noticed that the LU was .6 on A and .9 on B. I think that's why A sounds louder... just my thoughts...
Thank you for sharing your thoughts!
Wow this was a good one
Thank you, appreciate you taking the time to watch it!
There's no bass in that Guess song; that has to matter too, as far as keeping LUFS down but perceived loudness up.
fascinating
They surgically removed the tumor, but it seems the cancer is back.
Which EDM instrumentals?
I didn't understand much of it. So, let's suppose I already have a well-balanced mix and a good arrangement.
How should I export my track with the FabFilter L-2? Do I need to turn on true peak and reach something like -9 or -10 LUFS integrated? And what about the output ceiling, is it okay if I set it to -1.0?
I'm going crazy; I don't know what to do to make my songs sound as loud as mainstream commercial songs...
You need to understand and use clipping and limiting in serial so the loudness sounds transparent. Usually the final limiter's ceiling is set at -0.3 dB; aim for -1 to -3 dB of gain reduction there. Your final LUFS should land around -10 to -7 for commercial-level loudness on most pop/rock/country tunes.
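For anyone who wants "in serial" spelled out, here's a toy Python sketch of the idea, not anyone's actual plugin chain: a clipper shaves the fastest transients first so the limiter behind it has less work to do:

```python
import numpy as np

def soft_clip(x, drive=2.0):
    # tanh waveshaper: rounds off the sharpest transients
    return np.tanh(x * drive) / np.tanh(drive)

def peak_limit(x, ceiling_db=-0.3, release_ms=50.0, sr=44100):
    # crude feed-forward limiter: instant attack, one-pole release
    # (real limiters add lookahead; this just shows the serial order)
    ceiling = 10 ** (ceiling_db / 20)
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    gain, out = 1.0, np.empty_like(x)
    for i, s in enumerate(x):
        target = min(1.0, ceiling / max(abs(s), 1e-9))
        gain = target if target < gain else gain * rel + target * (1 - rel)
        out[i] = s * gain
    return out

# made-up demo signal: a kick-ish thump plus a noisy transient
sr = 44100
t = np.arange(sr) / sr
mix = 0.9 * np.sin(2 * np.pi * 60 * t) * np.exp(-6 * t) \
    + 0.3 * np.random.randn(sr) * np.exp(-20 * t)

loud = peak_limit(soft_clip(mix), ceiling_db=-0.3, sr=sr)  # clip, then limit
```

The point of the serial order is that each stage only does a couple of dB of work, which is where the "transparent" part comes from.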
Back in my day we had a volume knob.
0:55 In the diagram it says that the gate threshold is at -70LU, when it should be LUFS. Loudness Units (LU) is a relative measurement (without reference to full scale) in the same way dB is a relative unit, so it's only used for comparing loudness relative to a target level. LUFS is an absolute measurement ("relative to full scale"). -70LU could be correct in this example, but you would then have to state that it is relative to 0dBFS. Even so, it would simply make more sense to just type -70LUFS. Just a technicality, but thought I should mention it in case someone gets confused.
Also, the gate has two thresholds: the first at -70 LUFS and the second at -10 LU relative to the level measured after the first gate. I'm guessing you didn't include that just to keep things from getting too complicated.
Appreciate you pointing this out, definitely didn't want to make things too complicated in this one!
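For anyone who wants that two-stage gate spelled out, here's a minimal Python sketch of the BS.1770 gating logic. It assumes the per-400 ms K-weighted block loudness values have already been computed upstream, which is the part a real meter does from the audio:

```python
import numpy as np

def gated_integrated_loudness(block_lufs):
    """block_lufs: loudness of each 400 ms block, in LUFS,
    K-weighted upstream per BS.1770."""
    blocks = np.asarray(block_lufs, dtype=float)

    def power_mean(lufs):
        # average in the power domain, then convert back to LUFS
        return -0.691 + 10 * np.log10(np.mean(10 ** ((lufs + 0.691) / 10)))

    # stage 1: absolute gate drops anything below -70 LUFS (silence, noise floor)
    blocks = blocks[blocks > -70.0]
    # stage 2: relative gate 10 LU below the mean of what survived stage 1
    blocks = blocks[blocks > power_mean(blocks) - 10.0]
    return power_mean(blocks)

print(gated_integrated_loudness([-20, -21, -19, -45, -80]))
# ~ -19.9: the -80 block fails the absolute gate, the -45 block the relative one
```

This is also why dropping moments of silence into a track doesn't drag the integrated number down, which comes up again further down this thread.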
Awesome
Thanks for watching!
I have the opposite issue lol. My music is very quiet, slow & ambient, so I always struggle to get my tracks as quiet as my reference songs without Spotify or the other streaming platforms doing their own thing to them.
I never understood the point of loudness wars, do people forget volume knobs exist?
Ironically, LUFS was invented and adopted by the European Broadcast Union because consumers hated having to reach for the remote every time TV adverts came on and sounded much louder than the TV show they interrupted. Likewise with YouTube or Spotify, consumers don't want to keep having to change the volume level for every song/video. The LUFS standard isn't perfect for measuring the perceived volume of music, but it's very useful in a world where we consume a variety of media from a million different sources and are often "changing the channel". I hate reaching for the volume knob.
It seems like you missed the nuance of my comment. I wasn't criticizing LUFS itself, I'm aware of why it exists and how it benefits listeners. My point was more about the practice of trying to game the LUFS system (or any system for that matter) for louder mixes. It feels pointless to me because listeners can always adjust their volume if they think a track is too loud or quiet (which, by the way, you also dislike, seeing how you mention you hate reaching for the knob, you must hate the premise of this video as much as I do).
@@Lothyde I agree with your point. I think the practice of trying to game the system is really silly and actually counter-productive (making songs super-loud ultimately makes things worse for everyone). I added the information about how LUFS came about more for casual readers scrolling through the comments wondering what all the fuss is about.
How can the measurement of LUFS include a calculation using LUFS 😂 that's like a circular definition or something
9:13 That is not true, at least not for Spotify, since it will penalize you for higher true peak levels. Also, I highly doubt that "true peak clipping" significantly increases the perceived loudness. I don't use the FabFilter L2, but I'd guess that if there are any noticeable effects, it's a "problem" of that plugin. At least with my plugins I cannot verify that this has any effect.
Where did you see that Spotify will penalize you? Asking for a friend.
Isn't it terrible that we've gotten here? "Does it sound louder"... we all forgot the word "better"... 😢
Yup, it’s becoming a lost art to understand when and where to push volume.
Skrillex and Noisia’s latest albums are phenomenal examples of utilizing the entirety of the waveform, but not pushing loudness. It’s funny that these bass albums sound much better than most pop music these days.
An intricate understanding of the stereo field and instrument separation seems to be fading away in favor of just jamming as much sound into the track as possible.
Mmmm yeah I love when I'm on the tube (or Subways) and I can't even feel the kick or snare cutting through the noisy passengers or carriage noise
Life IS loud and we need loud but still dynamic mixes to drown it out
I Can't Believe It's Not Better!®
louder = better :D
louder is better
i hear much more high frequencies than most people, so i will never win the loudness war =)
Somebody call Dan Worrall I need to know if the way he's measuring is even correct.
Don't streaming services measure true peak too, and turn the whole song's volume down or apply a true peak limiter if it's louder than their standard?
Peak levels have nothing to do with volume.
@@ThalamusGhipopotamus they do though. What makes you think they don't? I'm aware that perceived loudness is different than peak levels, but the two aren't completely unrelated. What does it have to do with this though?
If a streaming service's loudness standard states that the material's max peak needs to be at -1dBTP and you send them material that peaks at +2dBTP, isn't there a risk that they'll turn it down 3dB? Even if they use a true peak limiter to reduce just the overshoots to meet their standards, doesn't that still make it pointless to turn in material with true peak overshoots, as suggested in this video?
The only case I can think of when true peak overshoots might give you some extra loudness is if the streaming service does nothing to the material even if it peaks well above their standard level. I'd like to know if this is the case with some streaming services.
@@JiihaaS If the services themselves normalize the volume in accordance with their standards, then why think about it?
@@ThalamusGhipopotamus the video suggests that you can make your master sound louder if you overshoot with the true peaks, but that wouldn't work if the streaming services then turn the volume down because of those same overshoots, right?
@@JiihaaS If that's the case, then in theory they'll lower the level, and with it the volume. But then there will be a discrepancy in LUFS. In any case, the required values should be listed on each service's spec pages.
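For what it's worth, you can check your own true peaks before any service does. True peak is basically the sample peak of an oversampled signal; here's a rough sketch with scipy, where 4x oversampling with resample_poly stands in for the spec's interpolation filter:

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_db(x, oversample=4):
    # upsample to approximate the continuous waveform a DAC reconstructs;
    # inter-sample peaks show up here even when sample peaks read under 0
    up = resample_poly(np.asarray(x, dtype=float), oversample, 1)
    return 20 * np.log10(np.max(np.abs(up)) + 1e-12)

# classic worst case: a tone at fs/4 whose peaks fall between samples
sr = 44100
x = 0.99 * np.sin(2 * np.pi * (sr / 4) * np.arange(512) / sr + np.pi / 4)
print(20 * np.log10(np.max(np.abs(x))))  # sample peak: about -3.1 dBFS
print(true_peak_db(x))                   # true peak: about -0.1 dBTP
```

Whether a given service turns such material down, limits it, or leaves it alone is exactly the open question in this thread; the services' published specs are the only real answer.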
Both A and B sounded the same level overall to me
Maybe it’s the genre I make music for but nothing about this feels beneficial for the listener…
If you put moments of silence in your track it will skew the normalization average
Not true, those will fall below the normalization gate and won't affect your LUFS readings.
@@the95s thou cannot pass the normalization gate
10 minutes and no mention of the volume knob?
It gives a negative impression of the song if the user constantly has to mess with the volume between songs.
@@the95s Have you been checked for autism?
And just to entertain that thought:
if messing with the volume gives a negative impression of the song, then you shouldn't be trying to be louder than everything else either.
@@blaness13 It amazes me to think that a song with significantly lower perceived or measured loudness wouldn't affect the user's perception of it (amateur, not well mixed, etc...).
To me B made it feel disconnected from the music. TBH.
Normalization??
This is going to be a moot point for those targeting Atmos releases.
While Atmos releases emphasize spatial sound and immersive experiences, the principles of dynamic range and perceived loudness still apply; a well-balanced mix is crucial regardless of the format. Neglecting these elements can lead to a less engaging listening experience, even in a cutting-edge format like Atmos.
@@sageaudio Specifically I was referring to volume targets for Atmos releases. Integrated Dolby Atmos loudness measurement should be -18 LKFS or less, and the true peak level should be -1 dBTP or less. So the "loudness war" isn't a thing in that realm, for multiple reasons, mostly to do w/ the dynamic range of the listening environments. Setting perceived loudness targets makes much more sense if you're mixing for 2-channel stereo or "speaker puck" mono, but binaural and other isolated multi-channel rooms are much more controlled environments, so pushing overall perceived loudness boundaries isn't necessary. I think what's more important there is striking the correct balance of proximity between channels. Supposedly the Atmos renderer will do a reasonable job with the 2-channel stereo fold-down from there, but it will sound much different from a mix that is specifically targeted to 2-channel stereo with perceived loudness goals. This may just be a norm that will change with time. Or not.
LOVE THIS CONTENT!!!!! THANK YOU!!! like.sub
Endlessly appreciate the sub, thanks for watching!
Loud =/= good. I have a volume knob for a reason, thanks.
The loudness war is killing music by reducing dynamics and thus expression. That said, is there any actual scientific paper demonstrating that louder songs really get more plays?
It's hard to separate causes and effects, but various studies have shown that the biggest hits each year are measuring louder than previous years. By 2050, I expect the biggest selling song will just be a square wave at 0dBFS. We might not be ready for it, but our kids are gonna love it.
Am I the only one who hates how loud music is now? I hate songs with no dynamics. I normalize everything so my ears don't hurt, and the stuff compressed and limited to hell sounds so boring after that.
Normalization varies between genres. Most genres aren't really dynamic: there's no climax, or the drop isn't that different from the rest of the song, so you won't see the dynamic changes. Additionally, most producers use limiters and compression on the master, which is why you get the flat sound, even in the most popular songs. Change what you listen to, mate.
Looks like we have a new loudness war for Dan Worrall to win? 😁
@@Tekkerue Dan admitted that his song's loudness was actually not the "loudest". Refer to his short video.
very complex... wow
*accidentally highlights the importance of compression and imaging in main vocals
whoops :)
Where?
@@nunu9849 Everywhere!
Vocals are the most important sound in a mix that has vocals as the main focus! That's everything from old "doo wop" tracks, through pop, rock, metal, and the most modern pop songs in 2024. The uploader already spoke about "vocal anchoring" while trying to explain how to achieve the most perceived loudness and how any masking can take away from it. Given that main vocals almost always occupy a certain frequency range in the mids, not too low, not too high, and those mids have a greater effect on integrated LUFS than the rest, vocal compression is clearly important when mixing in this context: you can bring the vocals to the forefront of a mix and anchor everything else around their perceived loudness, using compression and stereo imaging to give them depth, width, breadth and, of course, loudness.
Go and listen to the original official "Master of Puppets" track, then go and listen to any track from Ariana Grande's new album "Eternal Sunshine" to see what I'm talking about.
Ariana's vocals are louder than Metallica's entire song, with all their drums and guitars.
It's like -2 LUFS integrated vs. nothing even close.
This video accidentally highlights the importance of vocal compression as one of the main factors in loudness, short-term and integrated LUFS, and perceived loudness when compared to other tracks.
@@doktordrift948 Yes! I'm gonna listen to that Ariana Grande track, I'm so excited to check it out 😃 Btw, I know their mastering engineers do a lot of work, and their tracks are often at -5 to -4 LUFS or even louder sometimes, but it's so crazy how they achieve something like that.
@@nunu9849 The real magic is that they can not only make the vocals that loud and prominent in a modern pop mix, but also make them sound full, wide, textured, and natural without compromising any other part of the mix. These engineers win the "loudness war" without any real trickery and manage to fully capture your attention over the likes of the next metal song or full orchestral piece, just with some relatively "thin" vocals.
Bringing vocals, or lead guitar, or a stack of violins or EDM bass or even a rock drum kit to "the front" of a mix is paramount to loudness and I believe that compression, before most things, is key to that!
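The "mids count more" point is, for what it's worth, literally built into the measurement: the K-weighting pre-filter in BS.1770 shelves everything above roughly 2 kHz up by about +4 dB before the power average, so energy there raises LUFS fastest. A quick way to see the curve, using the published 48 kHz filter coefficients from the spec (scipy assumed):

```python
import numpy as np
from scipy.signal import freqz

# BS.1770 K-weighting at 48 kHz: high-shelf stage, then RLB high-pass
shelf = ([1.53512485958697, -2.69169618940638, 1.19839281085285],
         [1.0, -1.69065929318241, 0.73248077421585])
rlb = ([1.0, -2.0, 1.0],
       [1.0, -1.99004745483398, 0.99007225036621])

w, h1 = freqz(*shelf, worN=4096, fs=48000)
_, h2 = freqz(*rlb, worN=4096, fs=48000)
mag_db = 20 * np.log10(np.abs(h1 * h2) + 1e-12)

for f in (50, 200, 1000, 4000, 10000):
    i = int(np.argmin(np.abs(w - f)))
    print(f"{f:>5} Hz: {mag_db[i]:+.1f} dB")
# lows get rolled off, ~1 kHz sits near 0 dB, the presence range rides ~+4 dB
```

Which is exactly why a forward, compressed vocal spends "loudness budget" faster than a sub bass does.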
So: limit without true-peak mode on, and let it distort a little over 0 if it doesn't sound bad.
A and B sounded the same to me loudness wise, aside from the vocal in B being WAY too loud.
Why?!
Why fool the streaming algos?!
Why?!
Consumers DON'T care about loudness... when a song is quieter and the song is good, the consumer turns the volume up... MORE than any trick or algo does...
Simple as that...
These tricks just force the streamers to modify their algos to prevent this stuff!
Because the streamers want the most pleasant listening experience for the user!
The loudness thing is ONLY a thing in the production world... stupid!
If the biz put the same energy into creating good music that it puts into fooling the algos, the biz would have fewer problems...
I've been testing average people since 2003 with music, mixes and masters... In 21 years, really NO consumer has complained that the tracks are too quiet... and my work sits around -10 to -15 LUFSi...
Because of that, I know that loudness is not the key to the hearts of consumers...
I have never heard a consumer call a track good because of its loudness...
So this video wants to tell me that I have to make vocals louder and sound like karaoke to be louder on Spotify???
Really?
Same with the other stuff...
Hell, it is music, not a car that has to be trimmed for low fuel consumption or the highest possible speed...
It is MUSIC!!!
When I make, mix or master music, I only make sure I'm above -14 LUFSi, because it is easier for the streamers to turn stuff down than to push it up...
End!
Both mixes sounded unbalanced... lol
it's as if the purpose of the demonstration flew right over your head and out the window.....
-9 LUFS 💀
If it aint clippin it aint hittin
the "pros" aren't really "pros", they're just "social proofed'.
@@excelfalope that's the secret trick to attract bedroom producers to jump in and make themselves "pro".
to me A sounds louder... maybe my brain is broken.
...or you could just ignore this and go with what sounds good and serves the song, because you don't own the listener's volume knob and never will.
In heavy electronic music, -4 LUFS sounds amazing.
@@gulagwarlord A lot of that is due to loud, sustained sub frequencies, which tend to inflate the LUFS values. In reality, they don't sound a lot louder than -6 LUFS tracks in other genres.
As far as how it sounds, density = good, but in balance with punch.
stop using Spotify