Music visualized in 3D with machine learning

  • Published: 21 Aug 2024

Comments • 32

  • @nembobuldrini
    @nembobuldrini 1 year ago +3

    It seems like my brain is resonating with those trajectories, as if something similar is really happening in our minds when hearing music. That's fantastic, and I would like to see more content on this direction! Kudos!

    • @timenixe
      @timenixe 6 months ago

      Same. I wonder if this is the case

  • @6AxisSage
    @6AxisSage 2 years ago +2

    This comes at a great time, just when I upgraded my PC for machine learning and was about to start working on my music-composer DLNN again.

    • @complexobjects
      @complexobjects  2 years ago +1

      I've heard of that but haven't looked into it. I think AI-assisted music composition tools will eventually become more common, and I can't wait to see what they can do!

  • @gubbin909
    @gubbin909 2 years ago +2

    Absolutely love this. Loved your older manifold learning stuff as well, and this is getting even more fascinating!

  • @dragolov
    @dragolov 1 year ago

    Bravo!

  • @chuatsinli
    @chuatsinli 1 year ago

    Just started learning about AI in music... Cool and great work, Michael!

  • @timenixe
    @timenixe 6 months ago

    This is so cool.

  • @floriankowarsch8682
    @floriankowarsch8682 2 years ago +2

    For years I dreamed of visualizing music in such a way! Incredible work! I think at 10:04 we can see how the location is mainly influenced by the vocals, although there are still instruments in the background. Have you ever tried separately encoding different parts of a track (e.g. vocals and instruments)? Maybe the sound could also be meaningfully separated with ICA before feeding it to the network. If you haven't tried something like this, I'll give it a try.
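
The ICA idea suggested above can be sketched in a few lines. This is a toy, NumPy-only FastICA, not the video's method (in practice one would reach for `sklearn.decomposition.FastICA` or a dedicated source-separation model); the sine and sawtooth are illustrative stand-ins for "vocal" and "instrument" stems:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)

# Two toy "stems": a tone and a sawtooth, then mixed by a random matrix
s1 = np.sin(2 * np.pi * 5 * t)
s2 = 2 * (t * 3 % 1) - 1
S = np.c_[s1, s2]                          # sources, shape (2000, 2)
X = S @ rng.normal(size=(2, 2)).T          # mixed "recordings"

# Whiten the mixtures: zero mean, identity covariance
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X.T))
Xw = X @ E @ np.diag(1 / np.sqrt(d)) @ E.T

# Symmetric FastICA fixed-point iteration with a tanh nonlinearity
W = rng.normal(size=(2, 2))
for _ in range(200):
    g = np.tanh(Xw @ W.T)
    W_new = (g.T @ Xw) / len(Xw) - np.diag((1 - g**2).mean(axis=0)) @ W
    u, _, vt = np.linalg.svd(W_new)        # symmetric decorrelation:
    W = u @ vt                             # W <- (W W^T)^(-1/2) W

unmixed = Xw @ W.T                         # estimated sources, shape (2000, 2)
```

The recovered components match the original stems up to scale, sign, and order, which is the usual ICA ambiguity.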

  • @juan.engineer1954
    @juan.engineer1954 1 year ago

    Wow, this is brilliant and just what I was looking for to implement in a project. Thank you Michael!

  • @meowcrobe
    @meowcrobe 1 year ago

    That's awesome

  • @realdanryland
    @realdanryland 11 months ago

    LOVE THIS. I'm trying to run your git repo so I can try this for myself but am really struggling. I'd love to use this within an open source project I'm working on.

  • @TheGreatTimSheridan
    @TheGreatTimSheridan 1 year ago

    I think the trick is to associate one sound with another, but sometimes one sound doesn't follow the first, so you could break the line up into several snakes.

  •  9 months ago

    beautiful work!

  • @hogan_xyz
    @hogan_xyz 2 years ago +1

    Incredible work Michael, thanks for sharing. I want to see if I can implement this code myself and come up with structures that could be 3D printed. Really fascinating stuff

    • @complexobjects
      @complexobjects  2 years ago +1

      Thanks very much! :) It took quite a while.
      I really hope you can. And if not, I'm sure there are a bunch of Keras code examples for building autoencoders. There were so many other experiments in this project that I decided not to cover here, so my model code ended up being more complex than necessary!

  • @arc19-x
    @arc19-x 2 years ago

    Excellent.

  • @TheGreatTimSheridan
    @TheGreatTimSheridan 1 year ago

    I think your work would benefit from isolating different beats and sounds and tones and instruments. The first one you did reminded me of a bongo beat. But sometimes you want to do two beats with one hand and then keep the bongo beat going with two hands. There are certain intuitive structures and motions that might be separate identifiers. Really enjoyed the video

  • @ShaLun42
    @ShaLun42 2 years ago +5

    Have you tried that in reverse? Making some purely mathematical strange attractors and then converting them to sound using your NN.

    • @complexobjects
      @complexobjects  2 years ago

      I have not but that's an interesting idea.

  • @TauvicRitter
    @TauvicRitter 1 year ago

    Fascinating. What would happen if you changed something in the model and replayed it? Would it generate a new version of the music? I don't know if this makes sense, since I haven't studied it completely, but the thought came up.

  • @thorsoundsystem-doct6103
    @thorsoundsystem-doct6103 1 year ago

    Would be nice if you could make a VST out of it to visualize the sounds in DAWs.

  • @TauvicRitter
    @TauvicRitter 1 year ago

    If you added more dimensions, could you detect who is talking or which instrument is playing?

  • @jjh11
    @jjh11 2 years ago +1

    10:34 - 10:42

  • @aepokkvulpex
    @aepokkvulpex 7 months ago

    It's a shame about the Hey Ya audio. Do you have a timestamp in the song for where your manifold plots its first point, so I can try to line it up in the background?

    • @complexobjects
      @complexobjects  7 months ago +1

      Yeah, I had to silence it due to copyright, but I can try to put it back. The video isn't monetized anyway.

  • @TheVeryBestMusic
    @TheVeryBestMusic 2 years ago

    Hey! I'm trying to get hold of you for a project, but there's no contact info in your bio. Please reach out! Johan

  • @thorsoundsystem-doct6103
    @thorsoundsystem-doct6103 1 year ago

    Can I ask you how you did this with the code?

    • @complexobjects
      @complexobjects  1 year ago

      Sure, a more detailed explanation is in my blog post and in the GitHub repository in the description. But basically I used Keras to train an autoencoder; the encoder part of the network then reduces each spectrogram frame to a point in 3D.
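
The reply above describes the pipeline: spectrogram frames in, a 3D point per frame out, traced over time as a trajectory. As a rough, dependency-free illustration of that idea (the author's actual code uses Keras and a more complex model; here the "spectrogram" is synthetic and a linear autoencoder is trained by plain gradient descent in NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectrogram": 200 frames x 64 frequency bins with low-rank structure
frames = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 64))

n_bins, n_latent = frames.shape[1], 3
W_enc = rng.normal(scale=0.1, size=(n_bins, n_latent))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_latent, n_bins))   # decoder weights

lr = 1e-3
for _ in range(2000):
    z = frames @ W_enc            # encode each frame to a 3D point
    recon = z @ W_dec             # decode back to the spectrogram space
    err = recon - frames          # reconstruction error
    # Gradient descent on the mean-squared reconstruction error
    grad_dec = z.T @ err / len(frames)
    grad_enc = frames.T @ (err @ W_dec.T) / len(frames)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

trajectory = frames @ W_enc       # the 3D path traced by the music
print(trajectory.shape)           # (200, 3)
```

Plotting successive rows of `trajectory` as a connected line in 3D gives the kind of curve shown in the video; a real model would use nonlinear layers and a proper training loop.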