I don't need this but will buy it just because innovation needs to be rewarded in this ocean of vintage emulation crap
you the best
Thank you!!
Had to be said
this way you can emulate splice sample packs instead!
We need more people like you on Planet Earth :3
Looks like it has some insane potential, please post more demos!
Wow, congrats on achieving this concept. Psychedelic experiences of mine in the past gave me thoughts of something like this being possible in the world. Great work!
Since this is using grains, a future update could turn this into a grain synth.
1. The grains would be repitched to incoming MIDI. Repitching could be a slider so we can control and automate the degree of repitching. I imagine this would allow unheard-of timbres.
2. A slider or X/Y pad for scattering grains in stereo, if not binaural 3D space.
3. Timestretching.
4. Non-timestretched playback that still has that timestretching sound, with a slider between algorithms from smooth to Matrix-style glitchy.
5. Reverse and bidirectional grains.
6. A morphing slider from input grain to corpus grain.
7. Normalization/compression of grains.
8. A scale selector. In the corpus analysis stage, pre-repitch any tonal grains to the nearest pitch in the selected scale.
Fun to imagine!
what the heck is this magic????? I really don't need a new VST do I???? this looks fantastic.
That is mind bendingly good. Need. it. now.
Wow! This is interesting
noooooooooo waaaaaay! it’s insane :O
Impressive tech 🤯
Works really well on drums; I'm having a harder time getting a usable sound with pitched inputs. Spectral morphing post-Concatenator with the original signal gets a really interesting sound: I seem to be able to "rob" the pitch content back from the original, but it does also introduce artifacts. I wonder if there is any scope for improvement on pitch-following?
Thanks for trying this out and for the feedback! We're going to switch to using the constant-Q transform in an upcoming version, which should hopefully improve the pitch. For now, you can try using more particles and making the window length a bit longer, and that should improve the pitch slightly.
@@ctralie Thanks for this! How can I make the window length longer? Is that the 'stickiness' knob?
@sambradshaw289 Ah crap, I forgot that we didn't expose that ability in the UI yet in the beta version, so we'll have to wait until we get the CQT in. For now, just try using more particles where you feel the pitch isn't coming in clearly. Also, make sure the pitch you're trying to recreate actually exists in the corpus (we also have plans to augment the corpus on the fly to accommodate this, but for now it can only take samples in the corpus).
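(For anyone curious why a constant-Q transform should help pitch: STFT bins are spaced linearly in frequency, so low notes land between bins, while CQT bins follow the semitone grid. Here's a minimal sketch using librosa; the window size, bin counts, and test tone are purely illustrative and say nothing about Concatenator's internals.)

```python
# Minimal illustration (not Concatenator's code) of STFT vs. CQT pitch resolution.
import numpy as np
import librosa

sr, n_fft = 44100, 2048

# STFT bins are linearly spaced: ~21.5 Hz apart at this window length.
stft_freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)
print("STFT bin spacing:", stft_freqs[1] - stft_freqs[0], "Hz")

# CQT bins: 12 per octave, so spacing follows musical semitones instead.
cqt_freqs = librosa.cqt_frequencies(n_bins=84, fmin=librosa.note_to_hz("C1"),
                                    bins_per_octave=12)
print("CQT spacing around C2:", np.diff(cqt_freqs[:14])[-1], "Hz")    # ~3.9 Hz
print("CQT spacing around C6:", np.diff(cqt_freqs[58:62])[-1], "Hz")  # ~62 Hz

# Analyse a low cello-ish tone (C2, 65.4 Hz) with both transforms.
y = librosa.tone(65.4, sr=sr, duration=1.0)
S = np.abs(librosa.stft(y, n_fft=n_fft))                               # linear bins
C = np.abs(librosa.cqt(y, sr=sr, fmin=librosa.note_to_hz("C1"), n_bins=84))
print("STFT peak bin:", stft_freqs[S.mean(axis=1).argmax()], "Hz")
print("CQT  peak bin:", cqt_freqs[C.mean(axis=1).argmax()], "Hz")
```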
WTFFFFFFF This looks fun
LOOOOOOL, this looks awesome
This is actually insane 🤯
wow truly amazing thank you so much
this is unreal 😮
Woah this is crazy! 😂 very good
How cool is this
Unbelievable
I'm even more amazed by how he pronounces all of this, it's funny))
This is banana city
I don't know what to say, guys, but this is like the first plugin I'm seriously excited to buy since 2018 or something. We've been drowning in an ocean of crap plugins for years now.
this is craaazy 😮
Is there an option to use/output un-warped raw samples? Does this work like a sampler, or is it using audio information to spectrally recreate new audio?
The output audio is unwarped. There’s no spectral processing beyond the analysis. It’s like granular synthesis, except the size, density, and index of the grains are following an input signal.
@@bencantil462 Awesome, that's great to know. Looks like a buy for me to get started, and to show my support 🙂
Great work so far, very interesting and exciting software with huge potential. Looking forward to seeing any advances or implementation updates in the future :)
P.S. When is the manual being released? I have more questions to ask, but I think I should have a play and a read first!
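(For readers wondering what grains "following an input signal" means in practice, here's a rough numpy-only sketch of generic concatenative synthesis: slice a corpus into grains, fingerprint each one spectrally, and for every incoming grain play back the closest corpus grain at a matched level. The grain size, features, and matching used here are assumptions for illustration, not Concatenator's actual algorithm.)

```python
# Generic concatenative-synthesis sketch; NOT the plugin's algorithm.
import numpy as np

GRAIN = 2048  # grain/window length in samples (assumed, not the plugin's value)

def fingerprint(grain: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrum of one Hann-windowed grain."""
    spec = np.abs(np.fft.rfft(grain * np.hanning(len(grain))))
    return np.log1p(spec)

def slice_grains(audio: np.ndarray) -> np.ndarray:
    """Chop audio into non-overlapping grains (hop = GRAIN for simplicity)."""
    n = len(audio) // GRAIN
    return audio[: n * GRAIN].reshape(n, GRAIN)

def concatenate(corpus_audio: np.ndarray, input_audio: np.ndarray) -> np.ndarray:
    corpus = slice_grains(corpus_audio)
    feats = np.stack([fingerprint(g) for g in corpus])          # corpus analysis
    out = []
    for grain in slice_grains(input_audio):
        d = np.linalg.norm(feats - fingerprint(grain), axis=1)  # nearest neighbour
        best = corpus[int(d.argmin())]
        out.append(best * (np.abs(grain).max() + 1e-9)          # follow input level
                   / (np.abs(best).max() + 1e-9))
    return np.concatenate(out) if out else np.zeros(0)

# Toy usage: drive a corpus of noise bursts and tones with a 220 Hz "voice".
sr = 44100
t = np.arange(sr) / sr
corpus_audio = np.concatenate([np.random.randn(sr) * 0.1, np.sin(2 * np.pi * 220 * t)])
input_audio = np.sin(2 * np.pi * 220 * np.arange(sr // 2) / sr)
print(concatenate(corpus_audio, input_audio).shape)
```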
insane
That is sick! I wanna try it!
I hoped someone would make something like this
this is nuts
Would Concatenator use legato transition samples between notes?
That's an amazing idea, but no, it doesn't repitch samples (yet)
@@encanti How about for Kontakt libraries that already have legato samples? E.g., if I load in a cello library and then sing an A#, then legato up to a C, would Concatenator recognize and use the cello's legato sample as a match to my sung legato? Thanks.
@@jrettetsohyt1 Yes, it's pretty good at that! As long as the notes are in the corpus, and the particle count is keeping up with the library size, it should be accurate. We are currently working on making the grains forcibly longer, and adding some envelope controls, so in the coming months I anticipate this use case will get even better.
Also, I recommend adding an autotune before the Concatenator on your voice. Pre-processing like this can help the algorithm optimize its results.
But will this be able to not throw in the towel when I throw my 1TB sample library into it? 😅
Hah, okay, so right now it does have to fit into RAM. But hopefully in future versions we will be able to deal with larger corpora on disk.
WOW.
That's kinda insane
That's pretty cool
Will you be making an iOS iPad Pro version? I’d buy it for sure if so.
is there a free trial/demo version? would love to try it out before springing $100+ for the full version
Not at this time, but we do have a money-back satisfaction guarantee.
Same.
i need this. guess i gotta upgrade my RAM... it reminds me of Vochlea but with more interesting sound triggering
This is amazing but... I just picked up the Combobulator and I'm feeling a bit silly. Am I right in saying this does something very similar to what the Combobulator does, but you use your own sounds instead of the paid-for packs? It seems like Combobulator 2.0. In which case, I feel like a crossgrade offer should be considered.
I think the output is similar but the underlying technology is distinct. From my understanding, the Combobulator is creating a neural net specific to a training data set, which can generate never-before-heard outputs in the dataset's style. This tool, on the other hand, can only play back grains from the samples you give it, so it's not generating new source material, just intelligently stitching stuff together. I'm sure there's nuance I'm missing though.
Very cool. It'd be valuable to see a side-by-side against Reformer for sound design applications.
JUST what I was thinking. Wonder how this is different
@@ViRiXDreamcore A couple of standouts that make this plugin unlike anything else out there: Concatenator accurately follows pitch, holds an unlimited number of samples, includes a large modulation section, and has two stereo modes (I think Reformer is in mono), with near-zero latency.
What happens if you give it drum samples and then you beatbox into it? Sometimes I have ideas for drums, but if I try to add samples the classic way, the idea in my head gets overwritten by the unfinished sound I'm actually hearing, and suddenly I'm trying to make something different instead. Does this plugin solve that? I know it's impossible to tell without trying it, but this is just my first idea on seeing this.
Yes, if you give this drum samples and beatbox into it, it is designed to accurately pick out kicks vs. snares vs. hi-hats for you and instantly play them back in less than 50 ms, so you should be able to beatbox with it.
@bencantil462 That latency would be a bit bad for real-time beatboxing, but if the results at least sound more accurate than with existing similar plugins, this could be fun.
@@Beatsbasteln Give it a shot!! Other beatboxers are loving it so far!! We've got a money-back guarantee that you'll love it.
@@Beatsbasteln If you decrease the window length to 1024 samples, you can get a 23ms latency, at the expense of some pitch resolution (which doesn't matter if you're doing drum sounds!)
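(Back-of-envelope check on those latency figures, assuming a 44.1 kHz sample rate and that the latency is roughly one analysis window long; neither assumption is confirmed in this thread.)

```python
# Rough latency arithmetic only; the plugin's exact buffering isn't documented here.
def window_latency_ms(window_samples: int, sample_rate: int = 44100) -> float:
    """Duration of one analysis window in milliseconds."""
    return 1000.0 * window_samples / sample_rate

print(window_latency_ms(1024))  # ~23.2 ms, the figure quoted for drum sounds
print(window_latency_ms(2048))  # ~46.4 ms, close to the "<50 ms" mentioned above
                                #  (assuming a 2048-sample default window)
```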
SO cool! Why did you have to come out on Black Friday and Cyber Season? The sound design potential's great! It's like a blend of XO and Reformer Pro! Can I suggest a feature for a future update, like an X/Y pad for more sample-morphing fun? As it is now, can I ask how it is on the CPU? 😀
You can run this in high-efficiency mode, or crank the settings up until your CPU breaks... it's mainly the particle knob and polyphony that load the CPU. And yes, we will be building out the X/Y features!
Is there a way to increase the grain size (or release envelope) other than stickiness? I can't get usable results even with the stickiness all the way up. Just a jumble of microsecond sounds in succession. Disappointing so far.
it's like Samplebrain in real time?
How does this compare to Dillon Bastan’s ‘Coalescence’? Obviously, both use concatenative synthesis but this seems more responsive to incoming audio and seems to more accurately map to it. Also seems to be less tweakable though. Does that sound about right? Any idea of specific differences in the algorithms etc of the two plugins?
We love Coalescence; we agree the flexibility and options in there are amazing. One of the main differences is that Concatenator does not use neural networks, so you can put in a HUGE number of samples that load instantly with no training required. We just released a peer-reviewed paper about this new method at ISMIR this year, and we're excited to see how the community uses it.
Crazy to see such a talented artist release music-research scientific papers, ahhh 🥲
I'm super interested in reading it! Would you mind sharing the link?
Could I direct it to a Kontakt cello library and then sing to play the cello?
Yes! It may sound a bit "glitchier" than a cello, but you can certainly use a corpus like that.
@@ctralie glitch is good hh
Wtf? Haha! It's really something surprising.
Sounds like Aphex Twin's Samplebrain
Samplebrain is also concatenative synthesis! But this uses a proprietary new algorithm that far outpaces all previously held benchmarks for speed, accuracy, and flexibility. There's no other plugin that can do what Concatenator does.
@@bencantil462 damn
This could've been great if trying the product weren't locked behind a paywall. It seems like a cool plugin, what a shame :(
i want it
😸
Ok, I'm starting my career as an AI foley artist
Is irony a single prismatic shard, a group of prisms, the prism itself, or simply a momentary glimpse of the prism itself - in how communication or languages structure our shared realities?
!!!
seems cool but pretty funny you used 200 foley files to make lower quality foley.
It sounds like garbage being crushed.
so many sound designers are about to lose their jobs lol
The "lol" isn't very nice here.
@@floppysalmon909 Why not? The current system is a failed system; having it flipped on its head is pretty good.
I like to think of this as giving sound designers superpowers
@@encanti Totally. For now it's being poorly marketed to the people who could benefit from it the most, like less technical video editors, game developers, etc., but sooner or later a just-as-powerful and more simplified version of this will be integrated into mainstream apps.
It isn't only going to affect audio; pretty much every single industry will get hit the same way digital DAWs reduced the footprint of all the major studios.
The way I see it, anyone with a good ear will be making sounds happen the same way the image and video generators are being used.
The only thing keeping AI from the mainstream is the devs who can't seem to simplify things for the average user.
GOOD GOLLY MS. MOLLY. gimme