How To Make Sounds With AI (Dance Diffusion)

  • Published: 17 Sep 2024

Comments • 26

  • @thymeparzival
    1 year ago +3

    This is amazing. Please keep making more videos like this.

  • @hablalabiblia
    5 months ago

    Loved the model you are using for the script.

  • @amanray
    1 year ago +1

I trained some sounds with Dance Diffusion last year, but the results reminded me of vocoder sounds, so I ended up working with a vocoder instead, which is much faster to use

  • @johndoe_1984
    1 year ago +4

    You don’t need to edit the command line arguments manually. The $VARIABLES are replaced by the values on the right panel.
    PS: very cool tutorial

    • @UndulaeMusic
      1 year ago +2

      Maybe they've fixed it since I made this video, but at the time of making the video, it would throw me errors when just using the fields on the right to change the $ variables. Replacing the command line arguments manually was the only way I could get it to work.
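
For readers comparing the two approaches above: the form fields on the right amount to shell-style `$VARIABLE` substitution into the command line. A minimal Python sketch of what that substitution does (the template and variable names here are illustrative, not the notebook's actual ones):

```python
from string import Template

# Illustrative template: $NAME and $BATCH_SIZE stand in for the notebook's
# form variables; this is what the right-panel fields are supposed to do.
template = "python3 train_uncond.py --name $NAME --batch-size $BATCH_SIZE"
values = {"NAME": "my-model", "BATCH_SIZE": "4"}

command = Template(template).substitute(values)
print(command)  # python3 train_uncond.py --name my-model --batch-size 4
```

If that substitution silently fails (as it did at recording time), typing the final values into the command line by hand is equivalent.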

  • @revisaretucorreo
    1 year ago +2

    LOVE IT!!

  • @rabneba
    1 year ago

    Thank you!

  • @haskil_music
    5 months ago

hi, I ran into a problem: when I start training, an error appears and I don't know how to fix it:
    [Errno 2] No such file or directory: '/content/sample-generator/'
    /content/sample-generator
    shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
    python3: can't open file '/content/sample-generator/train_uncond.py': [Errno 2] No such file or directory

    • @UndulaeMusic
      5 months ago +1

      Yo! To be honest it's been a hot minute since I've used this thing and I don't know what's changed since I made this video. I recommend joining the Harmonai discord and asking for help there, as there are a lot more code-savvy people that can probably help much more easily than I can.

    • @haskil_music
      5 months ago

      @@UndulaeMusic Thanks, do you have a link or invitation?

    • @UndulaeMusic
      5 months ago

      @@haskil_music yeah! it’s in the video description

    • @beermix
      5 months ago

      @@UndulaeMusic did you find a solution? tell me please
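
For anyone else hitting the `[Errno 2]` error pasted above: it usually means the `git clone` cell failed or was skipped, or the Colab runtime was reset and wiped `/content`, so `/content/sample-generator` no longer exists. A hedged pre-flight check (paths taken from the error message; the actual fix is just re-running the clone cell):

```python
import os

def diagnose_repo(repo_dir, script_name="train_uncond.py"):
    """Explain the likely cause of a 'No such file or directory' error
    before launching training from a cloned repo."""
    if not os.path.isdir(repo_dir):
        return "repo missing: re-run the git clone cell, then cd into it"
    if not os.path.isfile(os.path.join(repo_dir, script_name)):
        return "repo present but script missing: check the repo layout/branch"
    return "ok: safe to run the training command"

print(diagnose_repo("/content/sample-generator"))
```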

  • @darkstatehk
    1 year ago

    Reminds me of Richard Devine somehow

  • @grxnrxi5477
    1 year ago

    is there a way to do this without weights and biases? I receive an error every time I attempt to quickstart my weights and biases account and dashboard

    • @UndulaeMusic
      10 months ago

      I’m not sure to be honest. I know I’m a bit late responding to this. I think people have figured out much more streamlined ways to do this than using Colab - I’d recommend joining the harmonai discord and asking for help on there if you’re still having problems

  • @QwErTY_hi
    1 year ago

    does it matter how long the samples/training data are?

    • @UndulaeMusic
      1 year ago

      This is a good question that I don't have a specific answer to. In the video, I was training on loops, longer samples, and songs, all of which were chopped by the chunking algorithm into samples of a usable size for the model to train on, and I wasn't really aiming at any kind of consistency in my dataset other than a particular kind of sound. I have heard others (mainly Mr. Bill) talk about training it on one-shot samples that are all the same length to get more reliable results, but I have not tested this myself.
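
The chunking described above can be sketched like this: a simplified stand-in for the real data loader, where the 65536-sample chunk length is an assumption for illustration, not a value read from the repo.

```python
def chunk_audio(samples, chunk_len=65536):
    """Slice a long recording into equal fixed-length chunks for training,
    dropping the short tail that doesn't fill a whole chunk."""
    return [samples[i:i + chunk_len]
            for i in range(0, len(samples) - chunk_len + 1, chunk_len)]

# A loop, a long sample, or a whole song all become uniform training chunks:
chunks = chunk_audio(list(range(10)), chunk_len=4)
print(len(chunks))  # 2 chunks of 4; the leftover 2-sample tail is dropped
```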

  • @aleyummusic
    1 year ago

    Seems like such an effort to setup, isn't there just some UI platform to do this? I'm a software engineer and I still don't want to go through all this for mediocre AI sounds

    • @UndulaeMusic
      1 year ago +1

      Yeah, there's a link in the description to a GUI version someone made that I could not get working, but I'm sure you could.
      There's also this, which I discovered after I had already uploaded the video: github.com/Bikecicle/sample-diffusion-kgui
      Haven't tried that one either, because I've been using the CLI version (mentioned in the video) to generate audio from the models I've already trained, which I'm comfortable using, but go for it!
      I'm sure that with how quickly the AI stuff is moving, somebody more talented than me will create a neatly wrapped GUI package to do this in a much simpler way before I'm even finished typing this comment.

  • @MrKillerHobbes
    9 months ago

    did u learn from mr bills courses bro?

    • @UndulaeMusic
      9 months ago

      No actually, I wanted to take his AI course in April but the timing didn’t line up - I was teaching some Ableton classes at a community college and finals were about to hit, and I didn’t want to stretch myself too thin. But, I’d heard him talk about Dance Diffusion on his podcast quite a bit, so I did some research and figured it out myself. Big up Mr. Bill though, dude is the most goated of music production educators

  • @Adimysk
    1 year ago

    those sounds could have been so much better if you gave it more time, that sounded like doo doo to me haha, but i wanted to know about epochs so thanks!
    I've found a GUI that enables you to run this on your own computer with your own equipment, and my epoch counter slowly goes up from 0% to 100, apparently. Is it safe to say that if it says 97, that's roughly the percentage of the data it's used?

    • @UndulaeMusic
      1 year ago +1

      to each their own, I kinda like the lo-fi AI garbled sound lmao. yeah, that epoch percentage is basically the trainer working its way through the dataset that you've provided and each new epoch means it's used 100% of the data in the set
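
In other words, the percentage is progress through the current pass over the dataset. A toy sketch (the dataset and batch sizes here are made up for illustration):

```python
import math

dataset_size = 400  # training chunks in the set (made-up number)
batch_size = 16
steps_per_epoch = math.ceil(dataset_size / batch_size)  # 25 batches per epoch

def epoch_progress(global_step):
    """Percent of the current epoch completed after `global_step` batches."""
    return 100.0 * (global_step % steps_per_epoch) / steps_per_epoch

print(epoch_progress(24))  # 96.0 -> almost one full pass over the data
```

Each time the counter wraps back to 0%, a new epoch begins and the trainer starts another full pass over 100% of the dataset.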

  • @craia25
    1 year ago

    With AI you can do any kind of music... ;-)

  • @corticallarvae
    1 year ago

    Flucoma