Great Podcast!
Thanks for another great podcast! Billie Eilish's latest is pretty dynamic by modern standards :D. I downloaded the HDtracks version, and that measured anywhere from -9 LUFS Integrated down to -11.7.
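For anyone who wants to check their own downloads the same way, here's a minimal sketch of an integrated-loudness measurement using the pyloudnorm and soundfile Python libraries (the file path is just a placeholder, not the actual release):

import soundfile as sf          # reads WAV/FLAC files into numpy arrays
import pyloudnorm as pyln       # ITU-R BS.1770 loudness meter

# Load the audio file (hypothetical path - point it at your own file)
data, rate = sf.read("billie_eilish_track.flac")

# Create a BS.1770 meter for this sample rate and measure integrated loudness
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)

print(f"Integrated loudness: {loudness:.1f} LUFS")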
Interesting & informative podcast. There is a lot of good info in that Ozone article regarding platforms, but at the bottom, with the recorded sections of the dynamic master Spotify preview and the 'optimised version', I prefer the more dynamic version over the optimised one. In fact I don't like what the optimised version does to the percussion by reducing it within the song (it also sounds a little more processed), and to me the more dynamic version just sounds better. And I can just turn the volume up if I want to hear it louder. So in my view the +3dB-ish optimised version is poorer, not better! Of course this may also be subjective, in that some may prefer it and others not, which ultimately shows you can't generalise that making a quieter section louder is objectively better, because that subjective preference is still more an artistic decision for the musician or band.
00:12
🤙🏻
Hi Ian, when finalizing a track, from my understanding it's relatively common practice to trim the final output gain by -1 dB (as well as utilizing oversampling) to accommodate potential intersample peaks/DSP overshoots during digital-to-analogue conversion or lossy decoding, and the potential for distortion/aliasing on certain systems.
This may be very simple to answer for those in the know, but my question is: when analyzing a track that has already been uploaded (let's say to Spotify, for example), the track basically always hits up to 0 dB from what I can tell. If people are uploading with a trim of about -1 dB or so, how come that doesn't show when looking at the track?
Continuing with the Spotify example, I know there is a Normalize volume setting, but I have it turned off. Do streaming services like Spotify just bring the track up to 0 dB anyway, even without the Normalize setting? If so, that would explain why the tracks don't read at -1 dB at the loudest.
Many files are mastered right up to 0 dBFS; relatively few follow the advice to reduce peaks to -1. Even when they do, though, the higher peak values are caused by the lossy data encoding, especially at lower data rates. The -1 dBTP suggestion will help, but won't guarantee a clean decode, especially for high-LUFS masters.
There's no level change when normalization is disabled, but fewer than 17% of users do this...
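To see why a decoded stream can read hotter than the uploaded master, here's a minimal sketch of estimating true peak by oversampling, using numpy and scipy (the 4x oversampling factor and the file name are illustrative assumptions, not how any particular platform measures):

import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

# Load the decoded audio (hypothetical path)
data, rate = sf.read("decoded_stream.wav")

# Plain sample peak - what a basic meter shows
sample_peak_db = 20 * np.log10(np.max(np.abs(data)))

# Oversample 4x to approximate the reconstructed waveform between samples;
# intersample overshoots from lossy decoding show up here
oversampled = resample_poly(data, up=4, down=1, axis=0)
true_peak_db = 20 * np.log10(np.max(np.abs(oversampled)))

print(f"Sample peak: {sample_peak_db:+.2f} dBFS")
print(f"True peak (approx): {true_peak_db:+.2f} dBTP")

So a file trimmed to -1 dBTP can still decode with overshoots above that, which is why a 0 dB reading on the stream doesn't prove the upload was peaked at 0.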
See, this guy is at -4.1? It sounds warm.
-4.1 below -14, so they're at -18.1 LUFS 😊
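For anyone following that arithmetic: with normalization on, the playback gain is just the target loudness minus the track's integrated loudness. A minimal sketch (the -14 LUFS reference matches Spotify's default target; the function name is just for illustration):

def normalization_gain_db(track_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain a player would apply to bring the track to the reference loudness."""
    return target_lufs - track_lufs

# A displayed offset of -4.1 below the -14 LUFS reference
# puts the track itself at -14 - 4.1 = -18.1 LUFS:
track = -14.0 - 4.1                   # -18.1 LUFS integrated
print(normalization_gain_db(track))   # +4.1 dB of gain to reach -14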
Given that AI is in the rapid process of ending music as a profession for about 80% of producers, worrying about loudness is not even in my peripheral thoughts anymore.