Considering it's AI, and probably trained on more mainstream material, I don't think testing it with such a niche genre without vocals gives a very good understanding of this service's capabilities.
Darude Sandstorm is quite popular though
Thanks for the roast, Wytse. We absolutely benefited from your feedback. Please check us out again after we move out of the beta period as well; we are constantly shipping upgrades. Gain compensation is on our to-do list. Best.
Anything without gain compensation (demos, plugins, services, adverts, etc.) I pretty much reject as a con straightaway, unless the product is stunningly good (and deserves a little more time/attention): ads get skipped, services ignored, plugin demos dumped.
The little motivational quips like _"your song is ready to shine!"_ and "your song is ready to captivate your audience!" (etc.) are horribly patronising; since when have we needed to treat everyone like small children? You may as well put "OMG your song is so cool!" after each prompt. What would be far more useful is constructive criticism: I'd rather see "with too much warmth we're starting to hear mild distortion" or "I think the stereo width is a little too wide, taking focus from the vocals" (etc.).
"Gain compensation is on your list?" Seriously? Stop smoking weed at work guys.
In the English language, the plural for feedback is feedback. 😱
now type it in CAPS so it's LOUDER
This is exactly what I suggested you work on almost a year ago :( you already had the intuition back then.
I love these services because the non-clients (a.k.a. clients I would probably refuse to work with in the future anyway), who probably don't need a mastering service but just want to feel important by giving interminable orders and change requests, can just play there. I would love to see whether this service can even maintain a similar sound across a whole album.
Love the Glenn moment.
Please, no more fake shocked face thumbnails. You're better than that!
I prefer the premaster, and it's not close
would love to see/hear more about your solar stuff thanks
It comes from the Fletcher Munson (hearing) curve
But it goes to eleven?
I like everything about your channel & content! That was so cool!
A.I. is creeping up to ya... perhaps make some videos about how you master audio... no more plugins or A.I. things... but also your talents... Or both, first about plugins and then how you would do it... 'cause we need some leverage against A.I.
Fighters!!!
Ask it to volume match🤷♂️
From what I've seen here, the chat doesn't seem to follow the instructions accurately. What you could normally do with a simple knob adjustment seems to require several prompt inputs which might trigger other unexpected changes.
I think this could eventually work at some point, but like you said - will it ever tell you to go back to the mixing stage to fix something?
Yep, soon, mix analysis is coming!
@@Diktatorial that's good news!
It doesn't matter what streaming services request. Master it so it sounds good and let them adjust it.
Voice input is an interesting idea. It would be interesting on a mix: "hi-hats are too loud", "can't hear the bass", etc. I predict Moises is only a step away from this.
What would happen if you wrote "Keep the same loudness as unmastered" in the prompt?
LOL … after gain matching the premaster sounded better. What an achievement 😂
We will never have gain compensation 😂
We are on it!
I like premaster more 😊
I wonder if it was able to detect and remove the deliberate section of noise. A human mastering engineer would have noticed such a problem straight away. I wouldn't be surprised if the AI service just overlooked it and kept the noise in there.
Totally agree with you on the need for gain matching on this. Otherwise you have to pay a credit to really understand what they’re doing, by importing and gain matching, like you did.
When you gain matched, I preferred the premaster to the “master”.
Could you add ‘gain match original’ to the prompt?
Record labels' first use of A.I.:
"let's steal everything we can from them".
I wonder, if you had asked it to level match the mix until it was time to render out the master, whether it would have been able to do that?
A "because it is louder" Alex Jones moment. Loudness is Satan! I am only listening on a laptop, but Diktatorial does seem to bring out the attack a bit on the guitar, and the drums are a little punchier with a slightly sharper character. The premaster sounded a bit duller, so I do think it improved the sound character a bit, adding what I would call transient attack and smack. Could you ask it to do click and pop removal, though?
Loudness TURNS THE FROGS GAY!
Loudness is not Satan … it's a part of making a good and competitive master. But more important is that the frequency balance doesn't fall apart and you have no added pumping or artifacts (if it's needed it should already be in the mix). A good master is not only louder … it sounds better when level matched to the mix.
Chill, I’ve had a long week and I feel like someone is shouting at me.
Do these prompts get stacked, or does each new prompt only apply to the original audio?
The stupidity of mastering to -12 dB is still alive in 2024... I wonder when people will finally understand how streaming platform algorithms work (spectral band replication, SBR). It makes no difference if you do -6 or -14 LUFS, the damage is the same; all the streaming platforms' requirements are only there to mask this damage a little, that is all!
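For context on how that normalization step works, here is a minimal sketch (assuming a -14 LUFS playback target, roughly what Spotify documents; the function name is illustrative) of the static gain a platform applies at playback. Turning a hot master down changes the level, not the limiting already baked into it:

```python
def playback_gain_db(master_lufs: float, target_lufs: float = -14.0) -> float:
    """Static gain a loudness-normalizing platform applies at playback."""
    return target_lufs - master_lufs

for master in (-6.0, -9.0, -14.0):
    print(f"{master:6.1f} LUFS master -> platform applies {playback_gain_db(master):+.1f} dB")
# -6.0  LUFS master -> platform applies -8.0 dB  (any limiting damage stays baked in)
# -9.0  LUFS master -> platform applies -5.0 dB
# -14.0 LUFS master -> platform applies +0.0 dB
```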
Very very happy i watched this one😂
First, I like your channel, and the things that frustrate you often frustrate me. AFAIK, the issue with True Peak is there's no "one-size-fits-all" setting. It only matters when transcoding to a compressed format, and how the compression affects the audio depends on the program material. So, lower true peak values are "safer," which I believe is why streaming services ask for -1 or -2 or whatever. I did some tests of doing data compression with TP at 0, -0.5, and -1.5. At 0, there was a low-level fuzziness added to the audio when compressed. It was still "kind of" there at -0.5 TP, but it was definitely gone by -1.5. Perhaps the phenomenon is like time-stretching or "Acidization," where phase changes cause additions that can go above 0. Or maybe not... :)
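A minimal sketch of the underlying effect (assuming numpy/scipy are available; the clipped test tone is a contrived example, and 4x oversampling is the usual way true peak is estimated per ITU-R BS.1770):

```python
import numpy as np
from scipy.signal import resample_poly

sr = 48_000
t = np.arange(sr) / sr
x = np.clip(1.5 * np.sin(2 * np.pi * 997 * t), -1.0, 1.0)  # clipped sine, sample peak = 0 dBFS

sample_peak = 20 * np.log10(np.max(np.abs(x)))
true_peak = 20 * np.log10(np.max(np.abs(resample_poly(x, 4, 1))))  # 4x oversampled estimate

print(f"sample peak: {sample_peak:+.2f} dBFS, true peak: {true_peak:+.2f} dBTP")
# The reconstructed waveform overshoots between samples; a lossy transcode rebuilds a
# similar continuous waveform, which is where that low-level fuzziness can come from.
```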
Yo Wytse. Relax man... 😉
please keep on ranting about autogain. I'm sure the developers hear it.
It should simply be an industry standard.
Why should I mix carefully and master fast?
I would only do that as an AI Reference.
I feel your pain.....😂
Hey, good video again! I would never let an AI mastering thing touch my tracks; I think the human feel is important in mastering. I know you're Dutch, so it's easier for me to talk to you in Dutch haha. My English is fine too, but this is just more convenient, I think. My idea of mastering is not really what it's aimed at now, just slamming hard against that 0 dB. There go your dynamics; in the old days you simply turned the volume on your amplifier up a bit. By the way, that stereo widening they put on it is just ugly. I think the premasters sound nicer than those AI masters.
SoundCloud offers a similar service for mastering. I believe they are mostly concerned with making sure the tracks are not too loud on the site. Powered by Dolby?
Please make it sound like what I forgot to do in the mix. Or make it sound like something it doesn't have. (A hopeless effort for that AI, if you ask me.)
Please review AAMS (Auto Audio Mastering System).
You use lots of outboard gear. Since my own home studio is expanding with hardware synths and FX boxes, perhaps you can do a video on how to plan this with cables and avoid hum and such. (I was just about to buy some things from Thomann. I'm going to use your QR code.)
I really dislike AI tech in theory, but as a bedroom producer who doesn't have the time to learn how to mix and master, this could be a usable timesaver
Hopefully they're listening to your suggestions (and will pay you some royalties)
😂 awesome reaction
It's simple. You have two options: 1) You either understand audio well and can do the work yourself, or 2) you don't understand it well and you try to come up with "creative" feeling-based prompts hoping to get something out of it and spend a lot of time on just this without knowing what you're doing. The end result will be different and I am not afraid of AI any time soon. Nor of people trying to work their way to audio through just using prompts.
It's like playing dice poker where the locked dice still change values with re-throws, but not as much as unlocked dice. And the fact that you cannot determine precisely which dice you want to lock. Not only that, but the chances for each die are not independent and uniform; rather, the outcome is always biased towards predetermined number combinations, none of which are better than three of a kind.
So yeah.. Try playing dice poker with those odds.
I feel like the "mastered" version loses too much of the middle information, am I the only one hearing this?
Well that's kinda normal in a way. One of the most used mastering techniques is to boost the spectrum's extremes with a Pultec (the famous smile curve) because they are the ones that make the music sound "expensive".
@@adenthestudiorat I meant the centre image, not the smiley EQ curve. It sounds like the stereo widening is overdone.
AI tools are revolutionizing music production, enabling musicians without the resources to create and share their work. However, there's always the other side of that same coin. While I will continue to mix my music in my own way, AI will undoubtedly play a role in shaping all of our audio experiences.
Hit that AI hard, White Sea! 🤜🏻✨
It kind of made a bit of it better and a bit of it worse lol
hell yea. scream that shit.
1:39 ... to be honest, my daily business is from time to time a loudness war, but in the evening I don't like being screamed at on YouTube. It is simply not necessary.
I like instan Nudleezz!!
Gain not pain
On the AI master, there’s some phasiness to the transients and overly aggressive high passes. At least that’s what my phone speakers tell me.
It reminds me of the harsh brittleness on a lot of AI generated music.
Loudness is a game changer in these AI mastering battles, nothing in common with real human mastering at all 🤣🤣🤣🤣😂😂. People these days don't listen to music using good earphones or over-ear headphones. They don't know the difference between good and bad ones because they don't care. Good punch on the bass, muddy mids, it's all right folks! Face the fact: they don't care how a mastered song sounds. Give them more loudness 🤣🤣🤣🤣🤣🤣
I was going to say, I am feeling deaf as I can't notice a difference. You all have super hearing or better quality ipods.
Soon AI will be making YouTube audio engineering tutorials and reviews... where "it" will be critical of other AI... Not sure I wanna be around for that.
Wytse, surely you should be able to say to it "AI, gain match (or loudness normalise) this" ??? Dunno, just seems to me you should be able to tell it to do that.
Sounds like the AI cut way too much around 1.2-2.4 kHz in the mid channel, which totally kills the best parts of the sampled drum crunch and shifts the guitars out of balance. A strange choice given the material; it makes me think they haven't trained the model on a wide enough variety of music.
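For anyone unfamiliar with the "mid channel" terminology, here is a minimal sketch of mid/side encoding (an assumed illustration, not the service's actual processing) showing why a cut applied only to the mid thins out centred material while leaving wide elements alone:

```python
import numpy as np

def ms_encode(left: np.ndarray, right: np.ndarray):
    """Split a stereo pair into mid (centre) and side (width) signals."""
    return (left + right) / 2, (left - right) / 2

def ms_decode(mid: np.ndarray, side: np.ndarray):
    """Rebuild left/right from mid and side."""
    return mid + side, mid - side

# Attenuating only `mid` in, say, the 1.2-2.4 kHz band reduces phantom-centre
# content (snare crunch, centred guitars) while the side signal is untouched.
```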
What's up with the Hi-hat? Tjitt, tjitt, tjitt.
Wytse is a dangerous animal!! 10 🥵🥰
Your Mic sounds distorted lol
I'm trying out a new mic actually 😅
I hope these services continue to flourish... they are so bad, they are sending clients my way faster than any marketing I could do.
I think "louder is better" is a myth, and it has become a dogma.
Did you time stretch the outro? How is it possible to speak so fast 😂😮
I didn’t stretch it
I wonder what would have happened if you'd asked it for a level matched master?
But in any event, all I hear is that it made your mix sound thinner and louder.
What does this do that anyone can't do for himself with a limiter or finisher plug-in?
nothing.
People don't understand what real mastering actually is.
or ask him:
make it sound like crap, like a megaphone in a public bathroom,
and hear what comes out.
Tbh I'd rap over it and wouldn't care about these small differences… I would be more worried about the vocal and how it fits with the rest.
He won’t do a blind test of ai vs manual mastering.
I tried Waves' AI mastering, as I got one free trial. It sounds horrible; it doesn't make sense, with super strong bass and very heavy compression.
Don't feel like a broken record; it's not your fault. Anyone who doesn't gain match during mix moves is making bad decisions, or might be, I should say.
I wonder what happens if you load an ABBA song and tell the AI to make it sound like Motörhead 😂
AI will prevail eventually, but not yet and not in this way. I think broader/deeper analysis and millions of references could eventually give 20 different-sounding masters, and you can never ask the AI to pick the "best", because luckily AI has no taste better than the middle of the road. I think the advantage of AI is only worth something when the choice is human. Once AI has analysed and differentiated the human (your own) choices for a decade, across thousands of choices, then there is a synergy which could be helpful. Today AI is so stupid it knows all about the average but nothing specific for the current use.
I think a time will come when it can do 20 masters and pick the right one for you, based on what it learned from your own input in A/B quizzes, your preferred music, and some other data you provide that seemingly has nothing to do with music at all. Genetic-algorithm-based AI.
@@larsborst7121 Well Lars, I don't think you understood a word of what I said. As if I'm pro-AI....
Let's keep this in English; it's a bit rude to reply in your own language, which looks like gibberish to normal people 😂. I've no idea what you're on about, but my reply has nothing to do with you being pro-AI or not. Not even with whether I am. You have sketched a scenario that you think might be plausible, and I reacted because I think you may be right, but I guess that AI might take even the last hurdle and be able to come up with a master that is tailor-made for your taste. Quite scary actually, btw. So now please tell me what I didn't understand???
BTW Dutch is also my mother language so that part was just irony
@@larsborst7121 Irony? Like? Like I spoke Dutch to you by coincidence?
11 out of 10 Mastering Engineers will confirm that Mixing is bad for your eyes
Their AI apparently only knows what wide-band compression is, and chooses VERY poor attack/release times. What an annoying amount of compression flutter.
The next step for 'INSTANT MASTERING'... mmm, 🤔 time will tell 🙄 [edit] As always, you provide great in-depth plugin/device reviews 👉☕
MY COMMENT READS BETTER!
1:38 😂😂😂
Yes, now I can "correct" my projects using AI, but no one is going to correct my bad taste, if you know what I mean.
NOW we need AI gear
the premaster sounds better than their result.
I share your anger and frustration with the lack of gain matching. I can fool just about anyone with just 0.5 db of gain increase into thinking the sound is suddenly 'better.' I've done it many times in my mastering studio, secretly of course, just to elicit a reaction. I always reveal my ruse and explain why it's so important to gain match comparisons meticulously. It's shocking and a bit disturbing that people in the audio world continue to use this sneaky trick to sell their plugins (and now AI mastering sites use it too!) to unsuspecting consumers. BEWARE of this trick, don't let your ears and brain be fooled! Always take the time to gain match ALL comparisons, no matter what. Your bank account will thank you!!
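A minimal sketch of that kind of matching (assuming two mono numpy buffers at the same sample rate; real tools match integrated LUFS, for instance with pyloudnorm, and RMS is the rough version):

```python
import numpy as np

def rms_db(x: np.ndarray) -> float:
    """RMS level in dB, with a small floor to avoid log(0)."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

def gain_match(master: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Scale `master` so its RMS matches `reference`, removing the louder-sounds-better bias."""
    return master * 10 ** ((rms_db(reference) - rms_db(master)) / 20)
```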
I don't think we're going to see AI take any audio engineering jobs in our lifetime; it's the same situation as with programming, you need a human for the subjectivity. Cool video mate
My guess is 10 years from now it will beat every audio engineer on the planet, subjectivity can be faked easily if you have got the data and set the right specified goal for the AI
@@larsborst7121 It's going to be interesting seeing where it goes for sure
When audio services have shit audio in their ads I’m so baffled
Chill out, dude.... Lol.
No difference on a phone
*Eventually*….. it seems like you can get there by using multiple prompts until you finally land on the sound you require. However the result is just the one mastered track.
As an artist you want to have total parity between ALL of your tracks. You want them to have similar tone, dynamics, saturation, width, depth, presence etc etc…. And, this text-prompt style AI approach doesn’t seem to be the way to achieve that across multiple tracks.
Far better to have an understanding of your hardware/plugins and to do the mastering yourself - or hire a pro (of course 😊).
"OF COURSE IT'S BETTER, BECAUSE IT'S LOUDER!!!!" Snake oil at its best.
From the start, like you said, I am immediately suspicious when a YouTube vid about audio has bad audio. The worst is when the audio examples are not volume matched with the youtuber's voice. I can forgive it somewhat on live streams.
For me, the mids in the mastered version are still weak, among other things, so for now and for this example I cannot find a benefit in this service, though one can argue the price is attractive and it is not time-consuming. For me, these services are not able to beat "real" mastering engineers, for several reasons.
Comment for da algorithm
Streak count: 250
🙏🏻
Why did they pick the voice of a scam call operator for the Ai???
Because it looks like this service was created by one of those scam call centers in their spare time))
hahahaha it's actually the voice of our CEO 🤣
Maybe it's all a hack to get your bank account details 😂
@@Swiftopher755 hmmm feature coming soon... :D
@@cestlinn That's fabulous😂
What would I think of these newer dynamic EQ plugins? Are the automated EQ's very calm variations, made over previous tracks, now looping somewhere?! I don't know. What's the next frontier, paying my bills so you wouldn't sound like one of those flatliners? I don't know. Namastè.
I don't want any AI in any creative work; it doesn't matter how good it eventually gets 🤮
Nobody should support anything AI when it comes to music; anyway, these things just make it worse and louder, regardless of whether it needs to be louder or not.
Just tell me if the industry is finished. Save me the time and money.
"The next step for INSTANT MASTERING!" ... just the "!" made this hit like click bait lol! What a difference a [?] can make.
I automatically started singing " I wish ".
Dude, I'd rather pay an audio engineer $50 for a track instead of an AI master… seriously. And I doubt that will change in the near future.
You watched too much Fricker content. Stop it 😉
Fricker is the GOAT
Sounds like they are just using an old copy of ozone LOL