It seems like my brain is resonating with those trajectories, as if something similar is really happening in our minds when hearing music. That's fantastic, and I would like to see more content on this direction! Kudos!
Same. I wonder if this is the case
This comes at a great time, just when I upgraded my PC for machine learning and was about to work on my music composer DLNN again.
I've heard of that but haven't looked into it. I think AI-assisted music composition tools will eventually become more common, and I can't wait to see what they can do!
Absolutely love this. Loved your older manifold learning stuff as well and this is getting even more fascinating!
Thanks so much!
Bravo!
Just started learning about AI in music... Cool and great work, Michael !
This is so cool.
For years I dreamed of visualizing music in such a way! Incredible work! I think at 10:04 we can see how the location is mainly influenced by the vocals, although there are still instruments in the background. Have you ever tried separately encoding different parts of a track (e.g. vocals and instruments)? Maybe the sound could also be meaningfully separated with ICA before feeding it to the network. If you haven't tried something like this, I'll give it a try.
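The ICA idea above could be sketched roughly like this, e.g. with scikit-learn's FastICA. This is a toy illustration on synthetic signals, not the project's actual pipeline: the sources, mixing matrix, and sample count are all made up, and real vocal/instrument separation would typically need more channels or a dedicated source-separation model.

```python
# Toy sketch: unmixing a two-channel mixture with FastICA before
# encoding each separated track on its own. All signals are synthetic.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 8000)
s1 = np.sin(2 * np.pi * 440 * t)           # stand-in "vocal" source
s2 = np.sign(np.sin(2 * np.pi * 3 * t))    # stand-in "instrument" source
sources = np.c_[s1, s2]                    # shape (8000, 2)

# Mix the sources into two observed channels
mix = sources @ np.array([[1.0, 0.5],
                          [0.5, 1.0]])

ica = FastICA(n_components=2, random_state=0)
separated = ica.fit_transform(mix)         # shape (8000, 2)
# Each column of `separated` could then be spectrogram-encoded separately.
```

Each recovered column is only defined up to scale and ordering, which is a known limitation of ICA.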
Wow, this is brilliant and just what I was looking for to implement in a project. Thank you Michael!
That's awesome
LOVE THIS. I'm trying to run your git repo so I can try this for myself but super struggling. I'd love to use this within an open source project I'm working on.
I think the trick is to associate one sound with another, but sometimes one sound doesn't follow the first, so you could break the line up into several snakes.
beautiful work!
Incredible work Michael, thanks for sharing. I want to see if I can implement this code myself and come up with structures that could be 3D printed. Really fascinating stuff
Thanks very much! :) Took quite a while.
I really hope you can. And if not, I'm sure there are a bunch of Keras code examples for building autoencoders. There were so many other experiments in this project that I decided not to cover here, so my model code ended up being more complex than necessary!
Excellent.
I think your work would benefit from isolating different beats and sounds and tones and instruments. The first one you did reminded me of a bongo beat. But sometimes you want to do two beats with one hand and then keep the bongo beat going with two hands. There are certain intuitive structures and motions that might be separate identifiers. Really enjoyed the video
Have you tried that in reverse? Making some purely mathematical strange attractors and then converting them to sound using your NN.
I have not but that's an interesting idea.
Fascinating. What would happen if you changed something in the model and replayed it? Would it generate a new version of the music? I don't know if this makes sense because I haven't studied it completely, but the thought came up.
Would be nice if you could make a VST out of it to visualize the sounds in DAWs.
If you add more dimensions could you detect who is talking or what instrument is playing?
Yeah that sounds possible!
10:34 - 10:42
It's a shame about the Hey Ya audio. Do you have a timestamp in the song for where your manifold plots its first point, so I can try to line it up in the background?
Yeah I had to silence it due to copyright but I can try to put it back. The video isn’t monetized anyway.
Heya! I'm trying to get hold of you for a project, but there's no contact info in your bio. Please reach out! Johan
Can I ask you how you did this with the code?
Sure, a more detailed explanation is on my blog post and in the GitHub repository in the description. But basically I used Keras to train an autoencoder. The encoder part of the network then reduces data from the spectrogram to a point in 3D.
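The setup described above (not the exact model from the repo, which the author says is more complex than necessary) could look something like this minimal Keras sketch. The layer sizes, the number of frequency bins, and the toy training data are all assumptions; the only grounded parts are "autoencoder in Keras" and "encoder maps spectrogram data to a point in 3D".

```python
# Minimal sketch of a spectrogram-frame autoencoder with a 3D bottleneck.
# n_bins and layer widths are assumed, not taken from the actual project.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_bins = 128  # assumed number of frequency bins per spectrogram frame

encoder = keras.Sequential([
    layers.Input(shape=(n_bins,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(3),                       # 3D bottleneck: one point per frame
])
decoder = keras.Sequential([
    layers.Input(shape=(3,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_bins),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# Toy stand-in for real spectrogram frames
frames = np.random.rand(256, n_bins).astype("float32")
autoencoder.fit(frames, frames, epochs=1, verbose=0)

# After training, the encoder alone traces the 3D trajectory
points = encoder.predict(frames, verbose=0)   # shape (256, 3)
```

Feeding consecutive frames of a song through the trained encoder and connecting the resulting points is what produces the trajectories shown in the video.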