Everyone - Spotify, like many other streaming services, transcodes your distributed high-quality audio masters (WAV or FLAC) to compressed formats (OGG, AAC, etc.). The compression process essentially averages out the finer details of your high-quality files to create smaller files that are easier to distribute over the internet. Artifacts are created in this transcoding process - particularly at very high frequencies, where the samples are "closer together" because pitch is logarithmic.
I usually distribute masters to Spotify with 1 dB of headroom to compensate for this artifact creation. Through doing the exact same monitoring exercise Streaky did here, most of my tracks on Spotify peak around -0.3 or -0.2 dB, and a couple even clip.
Spotify doesn't actually recommend mastering your track to -14 LUFS - it only recommends you leave 2.0 dB (!!!) of headroom on your track before distribution. I do think this is a bit extreme, a bit silly, and just creates a lot of confusion. Spotify can certainly figure out a way to detect the TP of your distributed master file and adjust the gain accordingly before transcoding. Alas, as Streaky is pointing out here, there is really no correct loudness or TP value to distribute to streaming services that compress audio files. It's up to your personal preference how much clipping you can afford on your transcoded file. Loud songs can generally afford less headroom IMO, as it will just feel like distortion and part of the song when consuming the compressed version of the track.
Finally someone told the truth😅 Thank you friend🙌🏻 I really don't understand how "Streaky" has so many subscribers and "a lot of big stars" as customers😂 I mean, of course he's good and sometimes shares true information and very good clues) But sometimes he says complete bullshit in all seriousness😅 Sorry, didn't want to offend anyone, peace✌🏻
Many audio interfaces have a loopback function. Been using that for years to do the same thing.
I keep a whole mixer channel in my DAW template armed with a loopback input that lets me internally rip any desktop audio on the fly. Perfect for pulling in reference-track audio, or sampling any audio source I want at any time to break down and study with a plugin like SPAN, stem-separate, use in sound-design sessions, etc.
I have the Arturia MiniFuse 4, which supports loopback.
@@LETTMusic Nice! I have the exact same interface and tried that loopback thing last year, but I failed. How did you set up the audio settings in the DAW and in Windows/macOS?
@@hagis23 Probably a little complicated to explain in a single YT comment, but under the loopback section in MiniFuse Control Center it's set to 'Cue2Mix' on the dropdown, and the source added is Computer Source 1-2. I'm on Windows and FL Studio, but I don't think that matters. In the DAW, where you would add an input on a mixer channel just like a microphone, instead of choosing something like MIC or LINE 1 there's an option for 'Loopback Left - Loopback Right', which is a stereo input of your computer audio picked up directly from the audio interface.
@@LETTMusicThen there's me plugging my iPhone into analogue inputs 1 & 2 of my interface! 😂😂😂
I do that too
It's practically the same @gcoudert
I use SoundSource and just put a plugin like Youlean Loudness Meter on the output - much easier than routing through Logic 😊
Never was a “loudness war is over” believer. If my mix is not as loud or louder than their rough mix, it’s over. Streaming services will do what they do.
This was very enlightening! Thanks Streaky. 👍
Yes, enlightening, thank you! But as a musician who plays many genres, how sad is a market where the competition is about loudness and not music quality. Music has four elements: melody, harmony, rhythm and dynamics. Nowadays we've lost one of them. All that to give a consistent output to the listener. Apparently people aren't able to turn the volume knob anymore.
Unfortunately it's human nature. When I mastered for vinyl and CD in the '90s, it was all about who had the loudest vinyl… it's always been what everybody wants 🤷♂️
Very interesting as always Streaky, your channel is the glass of water in the desert! Thanks for doing this! Best!💯
Haha love that thx 🙏
I'd retest that with a song you mastered and know the levels of. Can't really know if they did something beforehand.
That was really informative. I always knew they disregarded the loudness war.
Problem is, a lot of people don't know about the Normalize function in Spotify, Apple Music etc., and it's usually on by default. So if your master is at -5 LUFS, its volume gets taken down approx. 9 dB to hit the -14 LUFS target. Try listening to those tracks with the function back on and you'll hear the difference in volume (not perceived loudness).
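The arithmetic in this comment is just target minus integrated loudness. A minimal sketch in Python (the function name and the -14 default are mine, illustrating the math, not any actual Spotify API):

```python
TARGET_LUFS = -14.0  # Spotify's default normalization target

def normalization_gain_db(integrated_lufs: float, target: float = TARGET_LUFS) -> float:
    """Gain in dB a normalizing player applies: negative means turned down."""
    return target - integrated_lufs

# A master at -5 LUFS integrated gets turned down 9 dB:
print(normalization_gain_db(-5.0))  # -9.0
```

(Quiet tracks would get positive gain, which Spotify limits to avoid clipping, but the basic offset really is this simple.)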
Just use loopback as mic input in your mixer channel
Yep if you have that feature
I think there's some conversion happening. It's impossible to have something at 0 dB true peak and something above it... so there's some conversion for sure.
Thank you Streaky. I produced music back in the day on my Arch Linux machine in 2008. I know jackd and the realtime kernel from that time. Now I'm on Mac. I never knew JACK made it to the Windows ecosystem.
Excellent tip. For PreSonus Revelator users, the Spotify audio can be routed internally via virtual inputs, without the need for additional routing software. Thanks for revealing the big lie of the -14 LUFS. Thanks Streaky!
why not use a LUFS meter like Youlean for the measurements?
If you want…I like rms
Streaky, thanks for this video! What about commercial radio stations? Everybody talks about Spotify, LUFS and normalization, but how does it work with commercial radio? Do they normalize, compress, use limiting, etc. to play every song at the same level? What happens with extra-loud songs and very quiet songs - are they adjusted up and down? Is the music played in WAV quality, MP3 quality, etc.?
They have always had broadcast compressors working and that’s a whole other can of worms!!!
@@Streaky_com So it means that a "perfectly mastered" song could sound like "sh**" on the radio? For example, a perfect snare and kick could sound weak or distorted?
@@themagicianofsound All songs sound like sh** on the radio, that's a given.
Great, very interesting! Thanks Streaky!
Thanks Streaky, hope more people run their favourite music through this test, because damn, I feel sorry for everyone who's mastered to -14 LUFS.
Been doing this for a while with Sonnox ListenHub with the Clarity M plugin running in it (before Sound ID)... Agree, don't think I've ever seen anything at -14, and many tracks have TP all over the shop. Cheers Streaky :-)
another game changing insight from Streaky!!!
There is one point that I'm slightly confused by. How is this showing the mastered loudness? If I increase or decrease the volume slider in Spotify this increases or decreases the loudness in the channel on the DAW and therefore affects the reading in the level meter.
Hey Streaky, love your content! One question: is there a way to find out the real volume of each track in Pro Tools? I'm trying to avoid moving faders right away.
You are the best on YouTube about mastering... if God blesses me in this matter, you'll do all my masters ❤
Great video Streaky! I immediately checked it out on some tracks! Why is the true peak almost always above 0 dB? What is the advantage? In other videos you showed limiting to -0.1 dB.
I don't know why, but most of the time when I render at -0.3 true peak and check the file on my second laptop, it shows +0.5 or even +1, yet it stays at -0.3 where I rendered it. This has always confused me.
True peak and peak programme are different. I'll do a short on it this week for you 👍
The true peak will increase if you encode the file to a compressed format like MP3, AAC or Ogg Vorbis, which Spotify uses. To be on the safe side, the generic recommendation is a true peak of -1.5 dB, so that after encoding you end up with a true peak around -1 dB, depending on the content of your audio file. Also be aware that this video measures the peak value, which is not the same as a true peak measurement.
On Amazon Music, most of the music I listen to is about -8 to -6 LUFS. Some songs are even above -4.
Other than the issue with using RMS (which is deemed "outdated", because LUFS is more precise, while RMS tends to favor the low end), the issue is that we don't know what happens in the signal chain and why there is a peak over 0 dB. If you want to check how loud a track is, just buy it for a couple of bucks and test it yourself. I doubt many are doing multiple masters at this point 🤔
Whatever is happening, that's the level it's been delivered at… I've tested with stuff I've done in the past.
Fantastic video Streaky, wish I could be with you on Monday, but I'm away on hols! Very sad, man!
I’d rather be on hols enjoy 👍
Great tip Streaky!
This is a great idea, but unfortunately Soundflower doesn't work on my MacBook M1. I found BlackHole, which is exactly the same kind of tool and works properly on M1. Best regards, and thank you for sharing your fantastic knowledge!
Well, I learned something new today, and I rarely comment or like on any creator's content. Thanks mate.
The true peaks come from the conversion to the different codecs. You don't hear the original waves. On the free tier you hear Ogg Vorbis at 160 kbps.
Great video. Thanks for posting 🎶🎶
Hi and thanks! Why doesn't Spotify normalize a song at -9 down to -14? The example with RMS can be misleading, because in the loudest part of the song the RMS is very close to the short-term LUFS, while the integrated LUFS - the streaming target - relates to the whole song and is influenced by the dynamic variation of the different sections (verse, chorus, ...). So is it enough to upload at -9 integrated LUFS to get -4?
Woah. You use Spotify as a reference for both mixing and mastering.
I use Tidal, but most people use this, so I thought I'd play Spotify.
Great info - Thanks!
Can't you just use the Bridge feature in Pro Tools instead of Soundflower?
Nice data! But... how do you really know that turning off Normalize and auto-adjust is really giving you the "master copy"? And also, RMS is not LUFS and isn't nearly the same! -9 RMS is, at a guess, around -11 LUFS short-term average... so is -4 LUFS really needed? Dynamics are what make "volume" (SPL), and in any measurement the time constant is a must...
Thoughts???
I find that when I work to RMS I get the right levels… use Tidal if the master copy is your issue.
So I'm just curious: that's a whole bunch of video and a whole bunch of setup to then not use a LUFS meter. It's similar to RMS, but they are definitely NOT the same, and the Spotify spec is for LUFS. You have a stock LUFS meter in Logic. Why not use it?
Also, still Soundflower instead of BlackHole? Again, just curious, since Soundflower has been unsupported for several years now.
Them going over 0 true peak is likely due to the conversion to lossy formats. If you turn a WAV with headroom into a lossy format like MP3, its true peaks clip. Spotify uses Ogg Vorbis.
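A toy illustration of why a limited track can overshoot its ceiling after lossy encoding: band-limiting a clipped wave (a crude stand-in for what a codec does - this is NumPy, not the actual Vorbis codec) restores intersample overshoot above the clip level:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs  # one second of audio
clipped = np.clip(np.sin(2 * np.pi * 1000 * t), -0.9, 0.9)  # hard-limited at 0.9

# Throw away everything but the 1 kHz fundamental
# (with 1 s of audio, rfft bin index == frequency in Hz).
spec = np.fft.rfft(clipped)
keep = np.zeros_like(spec)
keep[1000] = spec[1000]
bandlimited = np.fft.irfft(keep, fs)

print(np.max(np.abs(clipped)))      # 0.9  - the limiter's ceiling
print(np.max(np.abs(bandlimited)))  # ~0.96 - peaks now exceed the ceiling
```

Removing the clipping harmonics lets the waveform swing past the old ceiling, which is exactly why leaving a true-peak margin before encoding is commonly advised.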
FLAC or hi-res files on those sites have much higher true peaks, around +2 dB.
Can you make a video about mastering super loud, like -4 LUFS, but still sounding clean and not distorted?
Please !!
I did one last week…
Can you tell me which settings I have to use for this technique when using sonible smart:limit? :)
Thanks for this
I always aim for 8 RMS and 3/1 TP, and I don't give a damn about anything else concerning the loudness war.
Doesn't iZotope Audiolens do the same thing?
Anyone know how to fix the gain pot on my Benchmark DAC? When changing the volume, it crackles like hell now :(
It's 15 years old and the knob has had it... I've tried everything short of taking it apart and cleaning it, but I'm pretty sure it needs a custom screwdriver bit to open it too... The crackle is 5x louder than whatever's playing through it. It rips through my speakers... Krrrrkkkkkkkxzxzx!!! Painful!
Actually, Ian Shepherd put up a very similar video on this "issue" on his YT channel three weeks ago. He uses different software, but same results.
He loves a bit of loudness chat 😂
Shout out streaky
Some commercial releases go even higher with true peak. I've found major-artist tracks with up to +4 dB true peak. And the sound was alright, even on different devices.
Yeah, it's mental… but if it sounds OK then 🤷♂️
Well done!
Make your track as loud as possible without sacrificing sound quality
As much as I get vomit gushing up my throat seeing this topic come up once more, I salute you for showing people once and for all that nobody should be mastering to the level a streaming service chooses to normalize their audio to. Please, people, don't raise this subject with other human beings again.
Thank you Great video
The true peaks are not the original ones from the source format, because Spotify converts your WAV to OGG. That's why the true peaks change. You can test it yourself: convert your final mix from WAV to MP3, then look at the difference in dBTP. So it's not true to say that a lot of popular tracks exceed the dBTP threshold.
Thanx streaky
Got audio coming into Logic, but the metering plugin is not picking up anything. How do I fix it?
3:27 had me cracking up 😂😂😂😂😂
Hey, great video! One question though: what is the neutral position of the volume slider on the Spotify interface? I think it's somewhere in the last third of the slider range… or full up? Looking forward to your answer ✌️
I'm pretty sure full volume on Spotify or any streaming app is going to give you the full volume that the track was mastered at. Usually if there was a feature to boost volume above normal, they would add a notch that the slider sticks to.
Thanks, man!
Thanks for the advice Streaky. I guess the next question I have: with there being so many different loudness levels in mastering, what would be the appropriate LUFS or true peak you'd want a mixer to send you for mastering? And would there be a method to find that out, like you did in this video? Thanks in advance.
Don't pay attention to any of this BS, just make it as loud as you can without affecting sound quality.
Yep if you like the sound then that’s all that matters really
As long as it's not distorting or limited, that's a good level for mastering, as the engineer can always turn it down if needed.
Thanks Streaky, great tips here! I was curious: if you master the same track for both Atmos and stereo release on Apple Music, with the -18 LUFS threshold on Atmos, wouldn't it be nowhere near competitive enough for stereo, so you'd have to master two versions? If so, how do you submit it twice so that when the end user switches Atmos on and off they get the appropriate one? Or, because they get normalized, do you just stick to -18 and let Apple Music take care of the level, so it only matters if you do a physical release?
Yeah, you need two versions if Atmos is involved.
Soundflower doesn't seem to work on Mac M1. What other apps? Audio Hijack?
So I guess Spotify turns it down before recoding, which would explain why TP going over 0 isn't an issue?
Adaptr Metric AB gives you the LUFS of every reference track you upload, which is easier to work with than doing this, unless you think the Spotify numbers are different?
Yeah but this way you can check all streaming services and not have to download anything
So all we had to do was turn off normalise and save us years of boring speculation? thank you Streaky! 🙂
Thank you!
What about the output ceiling settings on these songs?
How do you do this on windows?
How are Beatport levels? Their music sounds like the full recording has been triple parallel compressed and triple limited or something.
Same process as this, just use Beatport instead.
Unfortunately I can't download Soundflower on my Mac M2
It's a real eye-opener...
The truth is out there, Scully.
Streaky, what do you think about the war against Ukraine?
VERY GOOD VID!
YouTube. I'm shaking my damned head…
Maybe a stupid question, but if you can link Spotify's audio to your DAW, does that mean you can record the songs into your DAW and basically ''steal'' them? The question might be stupid because this sort of thing has probably been around for a while and I'm only just talking about it, and since everything is available nowadays anyway and most people have Spotify Premium or something like it, audio piracy might be irrelevant. I'm just wondering if you can record it and keep the audio - definitely not planning on doing that, just to be clear!
Cheers Streaky, been subscribed for a while now, good stuff!
simple answer: yes, this is possible and literally everyone can do this
Yep 👍🏿👍🏿👍🏿😎
Yep you can record too
It's the easiest way to get samples: load up YouTube and search for what you want.
Not very accurate though; transcoding can change true peaks, and Spotify isn't lossless.
Really clever!
Level control is fine. I can't overstate how bad the loudness war sounds.
Legend 💯❤️💚
Why did you need extra software? In REAPER you can just click the loopback feature.
Not all DAWs have this
@@Streaky_com you should switch to reaper
Streaky, what about CD? What levels should it be mastered at?
Same...one master for all
It's very common to do it up to -1 dB true peak and -9 LUFS.
Is there a version for Apple Music?
If you can turn off normalization, then it's the same.
Wait. I thought TP was supposed to be -1?
RMS is quite different to LUFS, and you'd need to measure the integrated value over the whole song. Also, true peaks change when the upload is compressed to different bitrates. Spotify only streams at up to 320 kbps, not lossless.
I prefer using rms…but use lufs meter if you like
The stream rate will not change the level, just the quality.
I just use physical cables to get Spotify into an input on my DAW ¯\_(ツ)_/¯
Brilliant 👍🏿 👍🏿 👍🏿
why not also adding a lufs meter?
Cos I work with rms and it annoys the hell out of this comments section 😂😂
Streaky, how do you do this in Pro Tools?
Same, choose the playback engine.
NO WAYYY !!!
WTF - how come they are uploading clipped tracks? 0.2 dB? Is this normal? Or are the tracks clipping when Spotify converts them? I think the latter. There can be a 0.3 dB difference when you convert from WAV to MP3.
Playback of a Tidal master will be bang on.
The true peak is the intermediate peak between sample points. If you take two samples, one on the rising side of a transient and one on the falling side, the true peak between them is higher than both of those sample values.
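That intersample idea can be sketched with FFT-based upsampling (a simplified version of the oversampling filters real true-peak meters use; this is a NumPy sketch, not a meter implementation):

```python
import numpy as np

def true_peak_db(samples: np.ndarray, oversample: int = 4) -> float:
    """Rough true-peak estimate: bandlimited upsampling via FFT, then take the peak."""
    n = len(samples)
    # irfft to a longer length zero-pads the spectrum (bandlimited interpolation);
    # multiply by the oversampling factor to keep the amplitude scaling correct.
    up = np.fft.irfft(np.fft.rfft(samples), n * oversample) * oversample
    return 20 * np.log10(np.max(np.abs(up)))

# Full-scale sine at fs/4 with a 45-degree phase offset: every sample lands at
# +/-0.707, so a plain sample-peak meter reads about -3 dBFS, while the
# reconstructed waveform actually touches 0 dBFS between the samples.
x = np.sin(2 * np.pi * np.arange(64) / 4 + np.pi / 4)
sample_peak = 20 * np.log10(np.max(np.abs(x)))
print(round(sample_peak, 2), round(true_peak_db(x), 2))
```

This is why a file whose sample peaks sit below 0 dBFS can still have a true peak above it.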
Spotify adjusts loudness though, doesn't it?
I was told never to go over a certain LUFS and not to worry too much about how far under I am, as Spotify will either increase or crush the track depending on where you are within the limits?
That's the point of the video: to show you the true levels that mastering engineers are delivering.
@@Streaky_com ahh my bad,
I thought this would just reveal Spotify's adjusted levels, not the original master levels.
Cheers......
Why don't true peaks matter? You'd think Taylor Swift would have the best of the best mastering engineers working on her songs,
but they have true peaks over 0.
Streaky, can't you simply run it via virtual outputs and back in?
That's the same thing as what he's doing?
Yeah, this is how you set it up.
How is a multi-input device created on Windows? 1:36
question of the day lol
You'll need to read the instructions on the routing software's site… or stop being silly and get a Mac 😂😂😂🤦♂️
Wish there was a way to put my vocal chain on podcasts and listen… 💡… oh shit! Goodbye plosives
🔥🔥🔥
-14 LUFS the great myth
Doesn't this mean the whole level 'standards' thing is nonsense?
And also, don't press record on your DAW when checking out the levels, kids - that would be piracy 😉
Yeah, you need to master to the same limited sound level as others in your genre, or you'll be too quiet and weak when normalization is off.
I master EVERYTHING at zero.
❤❤❤❤
So you basically just turned off normalization and came to the conclusion that many songs are not mastered to the -14 LUFS target? I don't get the "Revealed" thing...
It's obvious that many songs are not mastered to -14 LUFS; that's just the loudness Spotify outputs when normalization is turned on. It means you can master the song however you want - in the end it will be normalized to -14 LUFS.
That's good news for dynamic masters, because you can be competitive with less compression, but it also doesn't mean you have to master dynamically if that's not your aesthetic.
The only "reveal" you showed here is how Spotify would sound without normalization (which is turned on by default for every user, btw).
I reveal how to find out the level with software…