I got goosebumps when I just saw this concept.
I trust my instinct...it can be HUGE.
1.5 hr of recording followed by a five-day wait to get a single guttural vocal sample? I like the idea, but the implementation leaves me confused.
definitely great for cinematic sound design and some niche music genres
Holy balls, this is spine tingling, absolutely loving this. Great quick n dirty demo that really activated the almonds. While the processing time is a bit on the long side (nothing for spontaneous sound design but you can't always have everything all at once) the potential of this thing came through in flying colors (or screaming demons).
Haha, really glad to hear it resonated with you. Thank you for the support!
It would be fun to have the option to make personal models public, similar to how people share Kontakt banks, for example.
This is actually the focus of our next update so it's lovely to hear it requested. We'll have more news soon!
It would also be nice to see videos demoing each of the models you have already trained. Thanks!
Definitely! We might do a "no talking demo" in the future. Morpho also has a 7 day free trial of the entire public model library (currently 35 models) if you want to see what's possible.
Super cool! Great to see creative examples of using these tools. Such a fun vibe. 🎉
Thanks! Can't wait to hear what you create with Neutone! 😉
9:00 This is going to be so useful for sound designers doing horror and sci-fi films. Anyone who needs to transform sounds in a way that sounds organic yet still fantastical. Most of my own work is fantastical world stuff, I can see this being useful.
My concern is that building models for it relies on your company to do the model computation. That's fine now, but it can be hard to know how well the product will be supported in coming years.
Any chance you might be able to make the training application available to people with a good GPU?
I cut and spliced my parents' tapes using an 8mm film splicer as a kid... then in the 80s I got this Korg DSS-1 sampler and went crazy :)
Awesome. I bet those DSS-1s go for a pretty penny these days!
@@neutone_ai If I hadn't sold it I would have a solid gold bar in that one :)
WOW this is really good and very unique, I like it very much.
Thank you for the kind words!
This is cool, it sounds like the Bourne Identity experimental soundtrack
Impressive, great work!
insane!
This is amazing. I just purchased! Let's go.
Thank you very much!
With Vocoflex, Concatenator and now Morpho, it has become obvious that neural sampling is indeed a new category of AI-powered sampling plugins, and an exciting one for sure :)
Mixed feelings regarding this one.
Great in terms of exploration & you can actually get some good sounds out of that, but if you have a specific idea in your mind or a specific feeling which you wanna put out - then old school style it is. Never underestimate the power of the creative mind/soul with a pair of skilled hands 😉 Music is all about the feeling at the end of the day, no matter the tools or tech ;)
True. It's meant to help you explore new ideas.
That piece sounds like it could be a Björk piece lol
i like your company
This reminds me of Synplant 2 - curious to check it out :)
Phaseplant does this already, or is this different?
I like the concept, but there's an inherent problem with the way neural networks work, which is the sheer lack of definition in how you control sound. It's cool, but it feels like this would be more of a cool unpredictable effect to use in tandem with classical sampling than "the future of sampling". I think the control you have over the minutiae of what is happening is part of what makes sound design techniques appealing.
Great comment. We also believe good parametrization is essential for the adoption of these new tools, and we know there are improvements to be made. Our internal research has been tackling this with a two-pronged attack: 1) Disentangling features such as pitch and instrument type within the architecture and 2) Exploring different UX and visualization techniques to condense the latent space into something more navigable. The feedback we get at this stage will greatly influence how we develop Morpho and other future plugins.
As you mentioned, combinations of Morpho with traditional sampling are totally valid and very powerful. We don't want to replace but expand the options available.
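For the curious, here is a toy PyTorch sketch of the disentanglement idea: two separate latent codes conditioning one decoder, so timbre can be swapped while pitch is held fixed. This is purely illustrative (all names and dimensions are made up) and not Morpho's actual architecture.

```python
import torch
import torch.nn as nn

# Toy sketch of feature disentanglement - NOT Morpho's actual architecture.
# Two separate latent codes condition one decoder, so swapping timbre_z
# while holding pitch_z fixed changes the instrument but not the melody.
class DisentangledDecoder(nn.Module):
    def __init__(self, pitch_dim=16, timbre_dim=64, hidden=256, out_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pitch_dim + timbre_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),  # e.g. one frame of audio features
        )

    def forward(self, pitch_z: torch.Tensor, timbre_z: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([pitch_z, timbre_z], dim=-1))

decoder = DisentangledDecoder()
pitch_z = torch.randn(1, 16)        # encodes what note is played
timbre_z = torch.randn(1, 64)       # encodes what it sounds like
frame = decoder(pitch_z, timbre_z)  # shape (1, 1024)
```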
@neutone_ai I look forward to seeing what you come up with, then! Also glad to see AI solutions that don't attempt to replace the artist but rather enhance their ability
@@radiofloyd2359 No AI "attempts" to replace anything. All AI is designed to ideate and create. Why artists cannot see AI as an amazing tool is beyond me. Artists should be the first people to be using it. My God, AI in the hands of an artist is always going to be 100 x more powerful than in the hands of the non-artist! Use it. Create. Sample your own work and see where it goes. Upload your own work to Suno and extend it. You will get so much inspiration for new works.
So this is a Morphoder™
I’m concerned that it a) needs 3-5 days of processing and b) 1.5 hrs of input. Smaller input and a quicker turnaround would interest me, plus on-machine processing. JPMusic/Aotearoa
Could this reverse engineer any sound you feed it and then replicate it in a controllable way?
or any suggestions for plugins which can do this?
Synplant
Really fascinating! I am a huge Kate Bush fan and can't help but wonder, hope that she finds this and experiments with it some day. I just spent money on some music software, but this is definitely going on my list to try in the future.
That would be brilliant.
offline use or local training options?
Guessing there might be copyright issues with that
Morpho does indeed support offline use. Local training is not a priority currently but we're listening to feedback on the matter. We have another free community plugin, Neutone FX, for those who want to experiment with training neural audio effects locally.
So the response is always random? How do you drive this thing? The idea is interesting, though not new.
All I want is local training as an option.
This is truly cool. I would want to see and hear the results of a varied pool of training inputs. What was demonstrated with the single one (speech) here makes my imagination just completely explode. What happens when you train it on one percussionist/drummer in a room full of instruments vs training it on a single instrument? One person playing linear patterns on a piano versus two people playing 4 total unique parts? What happens when you train it on the animated psychedelic 1972 classic Fritz The Cat? Or a little chamber orchestra? Or a homestead yard scene with chickens clucking and pigs snorting and goats making whatever that sound is called? There should be some kind of app companion thing so people can easily do the sampling in the field rather than just at home. Making it an AUv3 so people can use it in Loopy Pro would also be rad.
Great concept, but “the future of sound design” probably needs a more impressive example…
I’m not saying the example you gave is bad, but it doesn’t make me feel excited and inspired about the future of sound design. Just me? Heh
(The plugin concept itself is exciting and inspiring of imagination. I want demos that match that. Otherwise, it’s just interesting.)
@@jrettetsohyt1 Hey no worries at all. For this video we wanted to focus on how sound design can be truly personal with Morpho, and show how anyone can make an interesting model with just their voice. That said, there is plenty of scope for more "out there" models applied to more boundary pushing music. We work with some brilliant artists who continue to surprise us with the ways they incorporate Morpho into their art. We're all ears if you have suggestions for models/demos/etc.!
@@neutone_ai As a professional sound designer, absolutely this is a great example of what value this plugin brings to the table. I may need to pick this up. Nice work!
User issue most likely
3 to 5 days is wild
I would love to see AAX ver. developed as well!
Thanks! AAX is definitely on our radar. We'll let you know when we have a more solid timeline for release.
Thats just insane! Id love to work with you guys :)
Any time!
Looks like you guys have a great modern recording facility. Funny... well, not funny, but the difference between CLASSIC and MODERN recording is striking. Looks great, how and where audio is going.
Oh Yeah! Sounds like Yello. Can't remember which tune though...
What would be a good minimum powered laptop that could handle this plug-in?
Then we’d know that anything above that would be safe. Thanks.
For Mac users we recommend anything Apple Silicon - the base M1 chip is ample so long as the buffer size and sample rate settings are not too aggressive (48kHz sample rate, 2048 buffer size is a safe starting point). On Windows it is trickier to define, but an i7 or equivalent from 2020 onward should be comfortable running Morpho. You can download the free version of the plugin to check how it runs on your system.
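As a rule of thumb, the real-time budget per block is simply buffer size divided by sample rate, which is why a larger buffer gives the model more breathing room. A quick sketch of the arithmetic:

```python
# Real-time budget per audio block = buffer_size / sample_rate.
# Larger buffers give the neural network more time per block, which is
# why 2048 samples at 48kHz is a safe starting point.
def block_budget_ms(sample_rate_hz: int, buffer_size: int) -> float:
    return 1000.0 * buffer_size / sample_rate_hz

print(block_budget_ms(48_000, 2048))  # ~42.7 ms per block - comfortable
print(block_budget_ms(48_000, 128))   # ~2.7 ms per block - much tighter
```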
@@neutone_ai How would it perform on an M1 Max with 64GB RAM, 128 buffer, 48kHz?
It’s almost unusable on my intel MacBook.
Wish I’d have known that before purchasing.
@@Dan_A_Bol Good to know. Cheers
Cool
I think to really get this off the ground, and in people’s tracks, the training needs to be somewhat unlimited. At this stage, no one knows what they’re going to get, and they should feel good about experimenting without the fear of results that are not usable. This is compounded by the three-day wait time. Disappointment can easily ruin a great concept. Unleash the early adopters…this is the way.
You raise an important point and it's one we've mulled over a lot internally. We don't want users to feel penalized for experimenting and I know firsthand what it's like when a model didn't come out exactly as planned. On the other hand, training costs us a lot in compute and giving away tokens would be giving away money. Our solution currently is to allow users one retry per token. Generally that's all you need to tweak or fix something in your dataset. We think it's a fair compromise and a gesture of goodwill given that this retry attempt comes out of our own pocket. Your feedback is very welcome though, we're still listening and learning!
I love it. But it's sadly so heavy on the CPU. I hope this will improve :) keep it up!
We completely hear you. On the one hand we're stoked that it's even possible to have a neural network of this complexity running in real time on CPU, but when we're getting creative we want to throw on as many instances as possible. We're actively working on optimization and we hope to chip away at this in future updates. Thank you for your support!
@@neutone_ai Would be great to utilize the GPU instead. But I'm sure you'll consider this anyway, if possible.
I think 29 dollars for 1 model is a bit too much; in my opinion, something around 2.99 would be better. For instance, I would buy at least one at that price, but at 29 I would hold myself back and not buy even one, because I'm too afraid I could like it a lot and spend too much money, one model after the other...
This stuff will reconnect us to art as performance and a human creation I think - when everyone can churn out AI tracks like this it'll take art off a pedestal. So I guess this is positive! Still, can't understand wanting to use it!
We're not interested in AI tools that churn out things for you, and I'm sorry if this video gave you that impression. We think this is far more personal and creative than simply using sample libraries that someone else recorded for you. If you recorded all the training material yourself, and you implemented those sounds into your own music without any decisions such as melody or rhythm being made for you - what creativity has been lost?
Anything can be made percussive, but noise is not music. I'll look again when notes and chords are trained output.
To be clear, you can absolutely train pitched models. It requires some additional considerations that we wanted to spare you for this introductory video, but it can be done. Personally I've had great fun playing guitar through a model trained on an opera singer!
@@neutone_ai Again, as an American to ye Olde Brits, with the greatest of respect, I highly recommend reformatting your approach to relating your developments/products to the outside world. To continue my last comment's example about Tesla using Tesla coils to bake a potato: this commenter has said "Mr. Tesla, I may invest in this when you show me it can do more than cook a lowly potato", and to that Mr. Tesla (you, in the metaphor) has replied "Oh trust me, it totally can do much more than cook a potato, I just didn't want to show that in my video cus it's too smart for you simpletons to understand"
As a yank familiar with how Brits word things, there are a lot of "ways of being humble and reserved" from Brit-to-Brit that come off condescending and "head up one's own behind" to people outside that culture. I have a whole theory as to why anything that gets promoted by Brits onto the internet (via written copy, scripts for marketing videos, demonstrations like this one) tends to do fine amongst onlookers in the UK but not so much with the North American market, and I think it's this approach, this cultural "presenting my pride and joy as almost-good is good enough, because trying to present my good thing as great would mean I'm secretly sh*te, m8!" attitude. Brits culturally have this almost Japanese-woman-isn't-allowed-to-take-a-compliment level aversion to earnestly believing in something being great and presenting information about it as such.
It would be nice not to have to do a PhD in Neutone every time you want to use it.
Could you let us know what you found tricky about using Morpho? We really want to remove as many barriers as possible for artists to experiment with neural audio.
Needing to upload my data to some server is a big turn off to me. I don't want anyone to have my data, and I never believe promises of privacy. Even if the current server owners are honest, the company could be bought by someone else in the future who will not honor the previous owner's promises.
Recording 1.5 hours of audio to get the effects also doesn't sound very appealing to me.
Cut the requirement down to maybe a minute of audio and process it completely locally, without needing internet access and you might get my attention.
Thanks for your feedback. I have a couple of thoughts here that might not change your position but I think are worth mentioning anyway:
1. The audio you upload is deleted once training has finished - we have no need for it once the model is ready. Of course this is important for privacy reasons but also for practical ones - it would be an unnecessary operational cost to keep terabytes of audio on our servers that aren’t being used. As a result, the scenario you described where we might be bought by another company with bad intentions is not possible, as we no longer have your audio data. Your model is a neural network with frozen weights (see the sketch at the end of this comment) and cannot be changed or reverse engineered to retrieve the data used to train it.
2. The audio data you upload is not the kind of data that could be sold to advertisers or other similar third parties. Advertisers want to know what YouTube videos you comment on because that helps them build a profile of you. Contextless audio files are not valuable to them in the same way.
3. Let’s imagine a scenario where we were indeed a company with bad intentions and we wanted to gather lots of audio for training other models. Would it not be easier for such a company to just scrape the web instead? Sadly music/audio are not rare or valuable commodities online.
Ethical training is at the core of our company. Case in point - we specifically chose a Lovecraft book for this example model because his works are old enough to be in the public domain. We could have recorded some other more recent book but it felt more ethical to record something old and public. I hope this helps to highlight our position.
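For anyone wondering what "frozen weights" means in practice, here is a generic PyTorch sketch (not our actual pipeline): the parameters are fixed numbers used only for inference, and the training audio is not stored in them.

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in for any trained network

# "Frozen" = parameters are fixed numbers used only for inference.
# The training audio itself is not stored anywhere in these weights.
for p in model.parameters():
    p.requires_grad = False
model.eval()

with torch.no_grad():
    y = model(torch.randn(1, 8))  # inference only - nothing is learned here
```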
ios pls
come on
Knowing Steinmeier as long as I have, I genuinely believe he is capable of anything; you really couldn't wish a worse Federal President on a country! Top of the End!
DNA was first discovered in 1869 by Swiss scientist Friedrich Miescher. Yuri Gagarin, a Soviet cosmonaut, became the first human to travel into space on April 12, 1961. The picture you show of Pierre Schaeffer (along with François Bayle and Bernard Parmegiani -- GRMC) was taken in 1972. And Peter Zinovieff invented the EMS MUSYS in 1969, which is thought to be the first sampler. The Fairlight CMI, which was the first commercially used sampler, debuted in 1979. All this to say, your comment that these things happened "before we discovered DNA or put a man in space", though figuratively creative and good for making the point that sampling goes way back, is factually incorrect... in the first case by more than a century.
Crick and Watson’s paper and famous sketch of the double helix date to 1953. Gagarin went to space in 1961. Schaeffer’s Étude aux chemins de fer was composed in 1948. Halim El-Dabh’s The Expression of Zaar was composed even earlier, in 1944. The date of the photo and the invention of sampling machines are surely irrelevant here? Yes, maybe you could argue that we should have changed the word “discovered” to some variant of “understood”, but we are really splitting hairs at that point in a video about music.
Yeah... Give us your audio database and sit back and relax... And don't worry, your models are private AND THEY WILL BE SAFE WITH US.
The YouTube comment section is probably not suitable for a deep discussion on this but I understand any wariness you might have with data. For now I would encourage you to look at our history - we have been very vocal about our stance on ethical model training and this permeates everything we do. Check out our blog, have a look at aiformusic.info, note how we publicly list and credit all training data sources for every model in the plugin's browser. I know talk is cheap and trust is hard to establish, so we ask you to judge us on our actions. Feel free to shoot us an email if you want to discuss this properly.
I feel like everything we heard here would be easier to just make manually.
I mean, the effects you used on your voice did a better job. Why go through the trouble of training a plugin to get a less accurate and controllable result?
Nothing new in sound shit really.
Meh, you're getting all excited over a noise gadget. We've known how to do all that for ages by tinkering with plugins.
From the intro, I was hoping this was going to be something truly innovative like Synplant 2 or Visco. But disappointingly, this seems just snake oil and more bother than it's worth.
Totally fine if this isn't for you, thanks for having a listen anyway!
hipstershit
No option to train the model locally? That's a HARD PASS from me.
Check out our free community plugin Neutone FX if you want to experiment with training your own neural audio effects locally. We have an open source SDK on our GitHub and tutorials to get started.
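To give a flavour of what local training involves, here is a minimal generic sketch of the kind of waveform-to-waveform PyTorch module you might train and then export. The TorchScript export step is shown, but treat the rest as illustrative; check the neutone_sdk GitHub tutorials for the actual wrapper API, as class and method names there may differ.

```python
import torch
import torch.nn as nn

# Generic sketch of a waveform-to-waveform effect one might train locally.
# Class and variable names here are illustrative; see the neutone_sdk
# GitHub tutorials for the real wrapper API and export helpers.
class TinyDistortion(nn.Module):
    def __init__(self):
        super().__init__()
        self.gain = nn.Parameter(torch.tensor(4.0))  # learnable drive amount

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (channels, samples) audio in [-1, 1]
        return torch.tanh(self.gain * x)

model = TinyDistortion()
audio = torch.rand(1, 48_000) * 2 - 1  # one second of noise at 48kHz
out = model(audio)                     # distorted waveform, same shape
scripted = torch.jit.script(model)     # TorchScript export - the usual
scripted.save("tiny_distortion.pt")    # first step before SDK wrapping
```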
@@neutone_ai A link to the GitHub in your video's description would be appreciated 🤙🏾
So, let me get this straight: you're selling a VST, then you're selling us a "trained" model on top, but we have to send our audio to you so you get our "models" for free, so then you can "sell" our models again?🤨
The trained model will be private and available only to you. We don't sell your custom model to anyone.
Downloaded it, tried it, deleted it. Complete nonsense.
Oversold and underdelivered
As an American spending half of my time in the UK the last few years this video cracks me up. For those unaware, Brits have a tendency to love "underwhelming presentations of things", their culture doesn't emphasize the "Hurry! Hurry! Step right up! See the amazing thing! You'll never believe your eyes!!!" showmanship that some of our cultures do (especially American), which might explain everyone's unimpressed reaction to this.
To the Brits making the video: If Nikola Tesla was alive today and started a video in his mad scientist workshop claiming that electricity for mankind might be about to change forever due to his zappy new inventions, but then spent 16 minutes using the electricity to bake a potato in a less efficient way than simply baking a potato, you'd probably find the demonstration a bit under-representative of the claim at the start of it, yeah? Suddenly Tesla looks like a bit less than a potential genius and more like someone who stumbled on technology by accident or the effort of others and is now using it for the lulz of mundanity.
Have a look at how the creators of the new "Concatenator" synth are promoting it. I don't know if that plugin is worth its salt, or even anything beyond an ineffective grift, but they've created a great set of "hook content" that shows their flavor of AI-driven tech accomplishing something that people can't already easily make with, well, Audacity.
Hey, amazing!! The Discord link in the description doesn't work and I would love to join :D
Oh no! Here is the code to paste directly into Discord if you're still having trouble: r6WwYCvJTS