Then I have to choose one I don't think was created by AI. I know #1 is AI, and between #2 and #3 I tend to think that #3 is likely AI. So, #2 is not AI, which is really sad but exciting for me, because #2 is the worst, at least for my taste.
It doesn't matter; all 3 sound good. The real issue with AI-generated music today, for me, is song structure. The loops sound great as is; once it can compose an entire coherent journey, it will be crazy to witness.
Very frightening at first. The question is: how can I take advantage of this as a musician, without selling my soul? Personally, I will continue to produce music with my gear because I enjoy it. Track #2 is my favorite
This has to be number one! The style with that 70s aesthetic screams AI-generated, right? Why? Well, for AI to work its magic, it needs to be fed a ton of data. My guess is they mainly pulled music from earlier times, maybe because of copyright issues or just availability. Also, I bet categorizing as No. 2 or No. 3 would be way tougher for a database, considering how ridiculously specific genres are today compared to 30 years ago when it was mostly just “house,” “trance,” “techno,” or “hardcore.” 🎶
A.I. cannot produce music; what it does is copy existing music. However, it is interesting how far it can go in copying other people's music. But we can move away from the original. It still can't think like us.
They _could_ all be by AI, but if one is non-AI it's probably the last one, as its elements differ in their level of detail. The tone is smooth and the kick is sharp. AI usually tends to apply the same level of detail over the whole image, visually and acoustically, while a human puts more detail into the elements that matter more to a human (details of fingers, eyes and other facial features, continuations of overlaid elements), while other areas (like wide skin areas) are kept comparatively simple. An AI doesn't have intention, so it doesn't know what's important (to a human); it only has the prompt and tries to fit the result to it in relation to trained examples. So it's just a very detailed weighted average, where even the level of detail depends on how well/conclusively the training data was described with words (through an attention mechanism). It can't react to psychoacoustic resonances in human experience beyond the point of us modelling/demonstrating them. AI is like a sorting algorithm for data that uses convergence and lets you draw/render interpolated lines between the data points.
3 is the one that sounds like the stuff that is popular and marketable; the algorithm would probably draw from popular music. 1 and 2 are trying to be artistic and creative, with a touch of naivety.
I think the third, just because that one has the "human" voice sound in it. Honestly though, none of these float my boat; I would not be interested in listening to any of it. Of course, "listening" typically isn't the point with these genre(s?); they're for dancing or setting a mood in a shopping area. Such utilitarianism in music, might as well have it made by AI for all the drones it serves. 🤕
Recent music all sounds as if it was made by AI... all the melodies, all the grooves... Listen to all the techno... piupiupupiu pepepepepeu!... pause... bambambam... with no groove. All tunes with that rolling bass.
I'm definitely not planning on making money as a producer in the future. A band might be a better way to go, or maybe producing an entire DJ set of tracks. Never post any of it online; only play it live. That would be a way to keep people coming back every week, and could maybe make a living. 2 & 3 😂
All of them. But honestly, Suno is still mediocre, and I'd argue it will likely remain mediocre for the foreseeable future. When I see someone with an actual hit made by AI, that's when I'll worry. I have yet to see a language model successfully innovate in any area without an absurd amount of highly specific and directed training, which would make it useless for anything but generating a single song in this context; you'd still be better off just trying to write a hit record than training an entire model for each track. Suno is at the point where it can make decent derivative works, but I'm honestly still not super impressed, because I have yet to hear anything come out of it that I think would be worth even putting on a playlist. And that's with most of the people I've heard things from having some understanding of music production and songwriting too, so again, I wouldn't worry.

But at the very least, IF models can get to the point where they can make music which doesn't sound like it belongs on the radio at a shopping mall, it's gonna require a fairly advanced understanding of music, music terminology, and the devices, instruments and components which generate the songs. Worst case for producers, the only thing AI is gonna change is WHAT music producers actually spend their time doing. If you're worried about not being able to twist knobs anymore (I don't think that's even coming, tbh; there's always gonna be a place for things like synths. I see this going in the direction of us uploading a lead melody line, vocal, or bass we've recorded, then directing the model to build things like loops and atmos around that), not twisting knobs could potentially be a problem. But as a music producer, that's not really my job. My job is to arrange things and make things sound how I want them, and often the mistakes I make are what makes the song cool.

So at the end of the day, whether I'm twisting knobs to do that, typing in a box, or giving verbal directions to a mic doesn't really matter; it's still the same thing. Personally, I think it's hilarious that so many people crap their pants when they see AI spew out a bunch of generic-sounding Muzak.
Honestly, we'll need to learn to adapt. I've seen other channels where people are taking songs from Suno and Udio apart with stem splitters and creating new and interesting ideas. You can get unique vocal samples nobody else has, especially if you use Audimee to improve the quality. The problem with Splice was everyone using the same vocal samples. You can convert the audio to MIDI and rearrange the melodies and bass lines into something that caters to your needs. The best use for AI is probably generating ideas when you're stuck: write a prompt of what you're thinking, get inspiration, take the stems you like, then get to work.
Clue: Not all of them
OK, defo 1, probably 3; hoping it's not 2, that one's slamming.
They all sound alright imo, but I reckon 3?
@dalek604 I’m strongly leaning towards number 2 😂
Number 2 is a better song than the other two. If that’s ai I am leaving to live the rest of my days in a cave
The hats on the first one don't sound quite right.
#3 is my guess. It's also the one I like the least.
All of them
Yeah... I know how this works
No it's three
Three
1? The hats sound wrong
If it's 2, then we're fucked.
Where can i buy Track 3? It's amazing:)))))
1 & 3
this is what I think as well.
Track 2 sounds great. Track 1 & 3 nothing special/filler music.
I'm guessing, if AI is worrisome to you, that it must be track 2.
How can I listen to these songs? Especially the third one, it's amazing
They all sound like bad AI
All bad but track 1
#2 sounds messier, so I'm guessing AI had a go at that one, rather than the popping beat of #3
What is the name of the 3rd? It's catchy
i'm disinclined to blame humanity for any of these
3 is normally produced. You can hear the "artifacts", like a low-bitrate MP3, on 1 & 2
all of them sound horrible. Period😊
1st one sounds weird
Track 1...
My guess is based on this: people who mix this kind of music don't work the percs like that...
If 2, then we are fine
1 and 3
IT'S TRACK 3
1 & 3 are generated by AI
I think none…
all three
9 times out of 10, AI is confused with an algorithm
2nd is a banger, I NEED the whole thing, AI or not.
1 and 3 are AI? 2 is way too bangin!
I love your channel but "shorts" are really hard to follow
#1 is AI. #2 a pretty good hypnotic track that i want the track ID for haha
i got you. #2 is AcidPulse by BayBeat… the others are probably AI
Track 2
it‘s # 2
All 3
It's not A.I., it's machine learning!!!!!!
they'd all fit in bad Weekly Discovery on Spotify, but I'm calling 3.
#1 and #2 sound poorly produced. 3 sounds "off"
1 & 3 ?
definitely 3 is ai
1 and 3 are AI generated
Looks like all three
Maybe as an artist community we should not promote AI in any way. Just a thought 🤷🏻♂️
I hate all of these tracks - but track 3 is the most generic and robotic.
Track 3
3, because I know SynthV; maybe 1. For 2, I don't know an AI that does that, but it could be a generic loop with a weird VST, resulting in something that sounds AI-produced.
Track one imo
Track 1 in my opinion.
Track # 1.
I dunno, but track 2 is fire
2!
You look so goood, Alice!
Aww thank you! 🥰
1 & 3 🤷🏼♂️
track 3
3
1 and 2...
All of them, and all of them suck
Track 1
2 I guess😂
1 maybe 2
Not 2
People who use ghost producers are even worse than AI making bangers 😉
i think it is #2
2, 3
2
#2
One
It’s 2. I can tell y’all don’t have any music training.
No 1
Nr 2
#1
I think you are very beautiful
Two. It sounds too hallucinatory and monotone.
No, that's the genre😂
Hahahah
1.
1
First two!
First one
All of them
1&3
1 and 3
All 3
3
2
1
Track 3
All of them
1 and 3
1 and 3
3
3
3
3