14:33 this part sounds like a fully produced album interlude, being generated in real-time. super cool 😮
Oh shit yeah another frequent upload
Fully agree on how difficult it was previously to do concatenative synthesis. I've had to use a Python library before this, and even then there were quite a few limitations that this seems to solve. I'm looking forward to getting my hands on this
I think the two most successful and accessible concatenative synthesis plugins available before were Coalescence by Dillon Bastan and SKataRT from Ircam (along with all the other M4L devices that borrow from it). Coalescence is a lot more reliable than SKataRT in my experience and is very usable in a similar way to Concatenator. It might even be deeper in some respects but, from the videos I’ve seen, Concatenator seems to be waaay more effective at matching input audio to corpus. I’ve got all three (although I currently can’t get SKataRT to work) so I’ll compare once I get a chance to have a proper play with Concatenator.
It has an intriguingly balanced quality, managing to sound both lame af and fire simultaneously
lmao, she's a tough beast to wrangle but sometimes it goes BONKERS
needs two things to be better: pitch awareness/automatic shifting so it can follow a melodic/harmonic input, and envelopes for the grains
If Iannis Xenakis were alive today, he would be losing his mind. Amazing.
Out of curiosity, does this let you sweep through the low-dimensional embedding (so that grains played one after another are always similar, resulting in a smooth output), or does it only try to match the input to the corpus? I wonder what happens if you gave it something hella simple like filtered white noise and tried to sweep through them that way?
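Not claiming this is the plugin's actual internals, but the difference between the two behaviors in the question can be sketched in a few lines of numpy. Everything here is a stand-in: the corpus is random points in a made-up 2-D embedding, and "playing a grain" is just picking its index.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical corpus: 200 grains reduced to a 2-D embedding
# (the axes could be something like spectral centroid and loudness).
corpus = rng.random((200, 2))

def nearest_grain(point):
    """Index of the corpus grain closest to a point in embedding space."""
    return int(np.argmin(np.linalg.norm(corpus - point, axis=1)))

# Mode 1 (input matching / mosaicing): every input frame independently
# snaps to its nearest corpus grain, so the output can jump around.
input_frames = rng.random((8, 2))
matched = [nearest_grain(f) for f in input_frames]

# Mode 2 (the sweep): walk a smooth path through the embedding, so
# consecutive grains are always neighbors and the output stays smooth.
path = np.linspace([0.1, 0.1], [0.9, 0.9], num=16)
swept = [nearest_grain(p) for p in path]
```

In this toy picture, feeding it slowly filtered white noise would amount to an input whose frames already trace a smooth path, so mode 1 would approximate mode 2.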
Aphex Twin’s sample brain was my intro to this type of thing. Total pain in the ass to use. This plugin looks pretty sick!
i do wonder which of these videos are sponsored when it comes to datamind and minimal and stuff. just a few words on that in the video would be nice. especially with no demos for this being available
DataMind gave me the plugin for free and just asked me to post a video if I thought it was cool. I'm also working on training a model for Combobulator (their other plugin), but I'm not being paid directly for that either; there is a split on sales profit if / when it is released. All of that is to say, the video is not sponsored; Ben is just a chill guy who is very connected in the producer space because of the Encanti project. I'm only posting this because I genuinely think it's a big deal in the sound design space, though I do think there's plenty of improvements / new features that could be added from here.
As far as Minimal, I've worked directly with them from the beginning and have done a lot of contract work with them, including concept consulting, UI design, preset design, bug testing, and video demos. Similarly, all 3 of the founders are producers who I've known for years, so that's where the connection comes from. Some of the videos on my page ARE paid for by Minimal, and I apologize for not being more transparent about that. I should have communicated more clearly that I work for them; from my perspective, the videos are a small part of everything we've been doing.
In the future I'll be more explicit about this, and I appreciate you bringing it to my attention. I realize that some of this stuff can come across as pushing certain brands. I truly only show stuff I think is cool or useful, but I understand that most people don't get access to this stuff for free and the perspective is very different on the other end.
have you actually tried the minimal plugins? im just happy someone i can trust when it comes to this stuff brought them to my attention because every single one is one of the best music related purchases i have ever made.
@@Frequentaudio Love this well thought out response to a reasonable question
Is this the notorious Minimal Audio youre speaking about here? 🤔
@@Frequentaudio thanks for the detailed reply! Yeah, I figured it's somewhat complicated; I imagine you know a lot of these people personally. It's just hard to sort things into the right mental category sometimes, so disclosure on these things is helpful. Appreciate the videos as always!
sounds cool but i already have infinite ways to make glitchy noises
This is honestly how I felt at first too, but this specific plugin aside, concatenative synthesis has produced sounds I have never heard from any other technique. There is SO much potential for this idea, but until now everything has been severely bottlenecked by the load times (especially with long samples) and complicated setup / installation. This device requires a lot of thought and experimentation to get those special sounds, and it really only does one thing, but it is now possible to quickly swap samples in and out of the corpus / point cloud in a simple VST. This is a tip-of-the-iceberg situation where we can now begin to explore all sorts of possibilities that were previously completely prohibitive to the average user. You gotta read between the lines a bit, because while most of the time you kinda just get a glitchy soup, there's these moments where everything aligns and you get something completely new. I believe it will only get easier to create "useful" or "new" sounds with this method. The corpus can be used in other ways than mosaicing in the future, and people will figure out the right types of inputs for specific results with what we have here.
Sorry for the rant, I feel you, the tech is just exciting lol
What the fuck is concatenative synthesis?
Was...not expecting that
Morph 3 by zynaptiq has a modeller mode that does a similar thing. Only plays back one grain at a time and only samples from a single audio file though. It’s also more expensive
Probably takes 50% cpu on a monster build for a single instance too. Their plugins sound amazing, but are horribly optimized.
Pitchmap:Colors is so useful, but runs like asssss compared to other similar plugins.
I don't know what I'm doing wrong with this plugin, but whenever I hit playback in my DAW, for the first moments it sounds as I intended it to behave, then it gets all jittery and over the place. Really weird... I love what it can do though
Would love to try it out but I'm stuck on macOS 10.14 :(
Tell me all about unknown synthesis
can you help me make that pad at 5:38?
input chord - piano, vocals, strings...type, reverb and effects modulation, print and paulstretch... throw in some sample libraries like kontakt and that's about the ballpark
I remember trying out Aphex Twin's version, nice that companies pick up on the idea
Nice. Thanks!
Very cool!
I bought this plug-in and asked for a refund the next day - couldn't get it to make any actually usable sounds, just the kind of glitchy garbage it sounds like in every demo, which you can already find a million cheap or free sources for.
pluh
ah helllllll yeah
Looks pretty cool. So to explain a little bit: we're dealing with a plugin that splits your audio into spectral elements and then uses that system to recreate something new.
In our world of machine learning and neural codecs, the same type of processing is used when training AI models. The only difference is that AI model training involves self-supervised learning, where a model learns to group certain audio elements together to represent something more than just a frequency bin. Think of all those features that no one has defined but the AI has discovered itself.
And once you have all those extracted audio features, you can basically train an AI to either start generating new music with those features or have AI do text-to-speech and what not.
By the way, all audio stem separation models out there these days use the same methodology.
What would make this plugin much cooler is if, in addition to splitting audio into those so-called grains (or what I'd call unlabeled features), it could actually identify the individual features (okay, I know, in that sense a feature is a component that usually consists of a sequence of individual or layered "grains" and produces a recognizable sound when played back) such that it could then remix your reference audio with new features - as if you took the source audio apart into all individual instruments and layers and put them back together using new samples :)
This actually gave me an idea for creating such a cool plugin 😋
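For anyone curious what the grain-matching core described above boils down to, here's a toy numpy sketch. Everything in it is an assumption for illustration: the sample rate, the grain size, and the "feature" (a raw magnitude spectrum) are all crude stand-ins; real plugins use much better descriptors plus windowing and overlap-add.

```python
import numpy as np

SR = 16000    # sample rate (arbitrary for this toy)
GRAIN = 512   # samples per grain (also arbitrary)

def grains(signal):
    """Chop a signal into fixed-size, non-overlapping grains."""
    n = len(signal) // GRAIN
    return signal[:n * GRAIN].reshape(n, GRAIN)

def feature(grain):
    """Crude 'unlabeled feature': the grain's magnitude spectrum."""
    return np.abs(np.fft.rfft(grain))

def mosaic(target, corpus):
    """Rebuild the target by concatenating the closest corpus grains."""
    corpus_feats = np.array([feature(g) for g in corpus])
    picks = []
    for g in grains(target):
        dists = np.linalg.norm(corpus_feats - feature(g), axis=1)
        picks.append(corpus[int(np.argmin(dists))])
    return np.concatenate(picks)

# Corpus of three pure tones; the target is a 330 Hz tone, so the
# mosaic should keep picking 330 Hz corpus grains.
t = np.arange(SR) / SR
corpus = grains(np.concatenate([np.sin(2 * np.pi * f * t) for f in (110, 330, 990)]))
target = np.sin(2 * np.pi * 330 * t)
rebuilt = mosaic(target, corpus)
```

The "identify whole features" idea from the comment would sit on top of this: instead of matching single grains, you'd first group sequences of grains into recognizable components and match at that level.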
yeah okay buddy have fun with all that math
@@SignificantOther11 well, whoever created that plugin probably wasn't really good at math. How else would you explain 100% CPU spiking :P
Whatever these companies are paying you, it's not enough. I would've never even heard about this thing if it wasn't for your tutorials. I might even buy it, although it's pretty expensive for what it does (literally only glitchy and fucked up sounds, not very usable besides this)