You're free to use anything you generate for any purpose, even commercial. No attribution required. Download link is in the description if you missed it.
It's not anymore lol
@@StLouis-bi3fi it is.
I'm going to make a video about my favorite songs
Is it possible to change the code so that 2 notes must be at least 2 semitones apart when played at the same time? Right now it often puts notes right on top of each other, which makes the song sound bad. Really cool program though, very well done.
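Not the Composer's actual code, but a rough sketch of how that constraint could be bolted on as a post-processing step, assuming the generated song is available as a boolean piano-roll array (the (time_steps, 96) layout here is just an assumption):

```python
import numpy as np

def enforce_min_interval(piano_roll, min_semitones=2):
    """Drop any note that lands closer than min_semitones above a
    simultaneously sounding lower note. piano_roll: (time_steps, 96) bool."""
    cleaned = piano_roll.copy()
    for t in range(cleaned.shape[0]):
        last_kept = None
        for pitch in np.flatnonzero(cleaned[t]):   # pitches sounding at step t
            if last_kept is not None and pitch - last_kept < min_semitones:
                cleaned[t, pitch] = False          # too close to the note below it
            else:
                last_kept = pitch
    return cleaned
```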
You're awesome, thank you
7:31 In my opinion, Beyond the Coast is one of the best ones it generated, if not the best. It's amazing when it pulls off a chord progression that I, with my lack of music theory, would struggle to write (the fourth measure here, for example, is unexpected for an AI).
sounds like a pokemon theme
@@martian17
You did it. You described it. Good work.
I agree, it's quite nice.
that's my favorite one. this one could actually get stuck in my head... lmfao
i like "ghosts galore"
You should add an up/down voting function to the network, to tune it to what you like. That would be very cool.
That means it would be supervised
@@taektiek526 Nothing wrong with creating a policy network. It's a completely different concept from training data. It would be the same kind of data that decides which music plays during certain moments in, say, a video game. If the question is what humans would want to hear at a certain moment, then human data is required to answer it, and the policy would reflect that.
@@Jack-fw7wd I'm not sure if he is saying that is a bad thing.
Yeah, but think of all the training!
How different would it be compared to picking good generated songs and training it on those?
I downloaded it and I've been sitting here for the last 30 minutes discovering and saving songs
What folder does the program save songs to?
@@moth.cinnabar Same?
@@saltydepression5192 It saves it in the same place as the executable
It overwrites previously saved ones though, so move each one out of that folder or rename it first.
@@Stroopwafe1 Checked that and couldn't find them...what folder do you mean? I might have been wrong, thanks.
1st knob (increase) : strong melody
1st knob (decrease) : calm melody (not only time signature because the 2nd one can also generate 3/4 melodies without the 1st one)
2nd knob (increase) : decrease in amount of notes / more complex rhythm
3rd knob (increase) : invert between major/minor
4th knob (increase) : deep melody
5th knob (increase) : high melody
6th knob (increase) : decrease in amount of notes
7th knob (increase) : more notes hit exactly every beat
8th knob (increase) : triplets/swing
8th knob (decrease) : haunting melody
This really helped me fine-tune the music, thanks!
That's pretty close to what I was thinking. Here's what it seemed like to me the last time I played around with it:
1. Main beat frequency
2. Chord density
3. Key
4. Low range beats vs. other chords
5. High range beats vs. other chords
6. Swing factor
7. Chord height
8. Slow swing vs. fast spooky
9. Note clumping (horizontal and vertical)
10. Overall note probability
11. Beat note probability/coherency
12. Chord note probability/coherency
@@LagMasterSam It's pretty interesting that spookiness has its own slider. There must be something really important about what we consider spooky. The AI probably learned minor seconds and the 6th.
Just when you thought that being in a creative field would spare you from automation...
That's a recurring theme with automation; there are always people in fields who think they are safe - until they aren't! The bottom line seems to be that no one is safe, at best only for a certain time. Everything so far suggests that in time we can replace most human tasks with robot-automated equivalents.
This one still needed the original songs composed by real humans as a seed to do its job, so the job of musicians is still needed.
Well, until someone makes an AI that starts without a seed by generating 100% random songs; the user listens to each song and gives it a score between 1 and 100, and the program uses those scores to figure out how close each one was to a good song and how it needs to improve.
Or until the listeners are replaced with robots that like any sounds that are thrown at them.
@@Kyle-xk5ut You can already do this without people rating it; you just use how close the output is to the original songs as the data. The point of starting the thing with 100% random songs and then having people rate them as the fitness is that it would create stuff normal musicians wouldn't even think of doing, whereas with the normal methods it's based on what musicians already decided a song should be.
@@M1ndblast CHIM can be a truly terrifying thing.
Damn. This is a lot better than Carykh's computer jazz.
It is actually somewhat good. Imagine if more effort, time, money, and computer scientists were put into something like this.
I'm sure fully automatic music production is in our future now.
Soon AI will be doing all our jobs and the best we'll do is............................................................................ ¯\_(ツ)_/¯
Do u watch carykh
There already are composer AIs out there. Google them
I wouldn’t be so sure. Good as this is, it lacks some big picture qualities that would be awfully complex for an ai to learn
Streets of Rage 3's soundtrack was supposedly generated at least in part by an algorithm, so we're very close.
Every indie game developer wants to know your location.
Also, you should set up a 24/7 live stream of your AI pumping out music. A few months later it'll be composing classics! :p
I hope no indie game developer would settle for this. This is a really promising achievement but it is several more breakthroughs away from being production-ready. Even a crappy human composer working for free can make something better (for now).
There would be a risk of over training, though
@@caltheuntitled8021 What do you mean over-training?
@Adin Reed If an AI like this is run for too long, its output will match the sample data too closely and end up generating the same thing you initially put in.
@@caltheuntitled8021 Ah, that makes sense
6:40 This would be great to see in a video game with a progressively growing home base. The further along the base grows, the more complex and lively the music becomes.
The StreetPass Mii Plaza on the 3DS actually has something like that.
It has 7 different renditions of the BGM, each more elaborate than the previous, and the version that plays depends on how many Mii characters you have in your plaza (more Miis - more fancy music). All 7 are human-composed and pre-scripted (if not prerecorded), but _it'd be pretty cool to use CodeParade's AI to generate that kind of thing on the fly!_ ^_^
Also you could dynamically make the music more complex when you are fighting enemies, or change it so its slower and less complex when exploring, there is a lot of cool stuff this thing can do
Undertale's Start Menu screen does something similar to this.
people can do that already!
Like in Mario Galaxy? Or Accumula Town in Pokemon B&W?
Hey, wanted to hop in just to say, I somewhat figured out the first 5 knobs.
The 4th and 5th knobs feel like the most coherent and understandable knobs.
When the 4th knob goes down, notes tend to rise: we lose the bass and are left with more music-box-sounding music.
if it goes up, notes bunch up around the bass, though not as much.
The 5th knob changes how spread out the notes are across the registers, so a high value means we mostly get both bass and high-pitched chords and melodies, while the 4th knob tells us how much weight each register gets.
You can even see the effects with your eyes when looking at the piano rolls: the 5th knob tells us how spread out or bunched up the notes are, and the 4th knob tells us whether they sit high or low.
However, these knobs also change rhythm and scales in not the clearest way.
First and second knobs are somewhat coupled to control rhythm.
Together they control intensity. A low value for the first knob plus a high value for the second often gives sparse phrases, which indeed tend to be in 3/4 time. Other than that, most combinations lead to 4/4 time.
The higher 1st knob goes, the more up-beat it feels, and the scale changes along.
The 2nd knob controls only rhythmic intensity, and it feels like rolling it down adds more and more stuff.
Scale-wise, however, the first and third knobs are coupled. The 3rd knob tends to control major/minor tendencies (for music nerds out there, a more appropriate term would be modal brightness), but it often changes direction depending on what the other knobs dictate.
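For anyone who wants to probe a knob more systematically than by ear, here's a rough sketch of the idea (not the actual Composer code; the decoder function, latent size, and piano-roll shape are all assumptions): sweep one latent coordinate while freezing the rest and compare the decoded piano rolls.

```python
import numpy as np

def sweep_knob(decode, base_latent, knob_index, values=(-2, -1, 0, 1, 2)):
    """decode: any function mapping a latent vector to a (time, pitch)
    piano-roll array (assumed). Returns one decoded roll per knob value."""
    rolls = []
    for v in values:
        z = base_latent.copy()
        z[knob_index] = v            # move only the knob under study
        rolls.append(decode(z))
    return rolls

# Hypothetical usage: does knob 4 push the average pitch up or down?
# rolls = sweep_knob(my_decoder, np.zeros(120), knob_index=3)
# print([r.nonzero()[1].mean() for r in rolls])
```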
That moment when you subscribe to a channel and realize that they uploaded a video in the next five minutes
Same
That moment when you realise that you found a channel waayy too late
After playing around with this a bit, here are some things the sliders seem to do:
1. Main beat frequency
2. Chord density
3. Key
4. Low range beats vs. other chords
5. High range beats vs. other chords
6. Swing factor
7. Chord height
8. Slow swing vs. fast spooky
9. Note clumping (horizontal and vertical)
10. Overall note probability
11. Beat note probability/coherency
12. Chord note probability/coherency
Absolutely astounding! As soon as the source code drops I will definitely make some actual 8bit-style songs with this :)
Have you done this / how did it go? (Source Code: github.com/HackerPoet/Composer )
10:01 (and also 11:16) you straight-up taught your computer how to make pokemon gen 4/5 music somehow, and i sincerely applaud you for your efforts and results
This is an awesome project. The results are magnificent.
It is very generous to share all the code, and I appreciate the creation of this convenient standalone Composer.
Everything is free to use and without an annoying license. That is just brilliant.
This has sooooooo much potential. Especially if we can connect different emotions to sound generation and use those as sliders as well. Really when you think about it this could be revolutionary.
Musical locality is very much a real thing, but "close" notes aren't neighbors on the piano. They're neighbors on the circle of fifths.
Notes whose frequencies reduce to fractions with small denominators like 3/2 and 5/4 are "close" (consonant) and irrational frequencies like sqrt(2) are "far" (dissonant) and this also holds true for songs in similar keys.
For instance, C is a lot "closer" to G than C# or D, and the note G can substitute for C a lot more easily than C# or D, because if C is 160 Hz, G is 240 Hz (3/2), D is 180 Hz (9/8), and C# is 170 Hz (17/16). C major has no sharps or flats, G major has 1 sharp, D major has 2, and C# major has 7.
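To make that arithmetic concrete, here's a throwaway sketch that reproduces those numbers and ranks the intervals by the denominator of their ratio, a rough proxy for consonance (the ratio table is the standard just-intonation one, nothing from the video):

```python
from fractions import Fraction

C = 160  # Hz, the base frequency used in the comment above
just_intervals = {
    "C":  Fraction(1, 1),
    "C#": Fraction(17, 16),
    "D":  Fraction(9, 8),
    "G":  Fraction(3, 2),
}

for name, ratio in sorted(just_intervals.items(), key=lambda kv: kv[1].denominator):
    print(f"{name}: {float(C * ratio):.0f} Hz, ratio {ratio}, denominator {ratio.denominator}")
# Small denominators (C, G) are the "close"/consonant notes;
# big ones (D, C#) are progressively "farther" from C.
```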
I have no idea what I just read
EDIT: 2 years later, I have read a lot more music theory, and now understand what I just read
I couldn't even read it even though I wanted to be able to
Alright, imagine you have 2 people hitting a drum at different intervals. If one person hits it 2.1 times every second and the other hits it twice every second, there is no clear pattern. Even though the actual frequencies are similar, putting them together just sounds like shit. On the other hand, if one person hits it 2 times a second and the other 3 times a second, there's a much clearer pattern. In this respect they are closer, because there is an actual pattern.
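To put numbers on that (just a quick arithmetic sketch, not anything from the video): the combined pattern repeats after a time set by how the two rates divide into each other, which is why 2 vs 3 locks in immediately while 2 vs 2.1 takes ages.

```python
from fractions import Fraction
from math import gcd

def pattern_period(rate_a, rate_b):
    """Seconds until two steady drum beats line up again, i.e. until their
    combined pattern repeats. Rates are hits per second."""
    a, b = Fraction(rate_a), Fraction(rate_b)
    # gcd of two rationals p/q and r/s is gcd(p*s, r*q) / (q*s)
    common = Fraction(gcd(a.numerator * b.denominator, b.numerator * a.denominator),
                      a.denominator * b.denominator)
    return 1 / common

print(pattern_period("2", "3"))    # 1  -> the pattern repeats every second
print(pattern_period("2", "2.1"))  # 10 -> ten full seconds before it repeats
```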
That makes sense
thanks RUclipsIsGay
to be serious, though; I wonder how different the network would be if it took the circle of fifths into account? I might try that over the summer.
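If anyone does try it, one simple (purely hypothetical) way to hand the network that notion of distance is to re-index pitch classes so that neighbors in the input vector are neighbors on the circle of fifths rather than adjacent semitones:

```python
def to_fifths_index(midi_pitch):
    """Map a pitch class to its position around the circle of fifths
    (C=0, G=1, D=2, ..., F=11); 7 semitones generate all 12 classes."""
    return (midi_pitch % 12) * 7 % 12

# C(0)->0, G(7)->1, D(2)->2, A(9)->3, ... so C and G become neighbors,
# while C and C#(1)->7 end up far apart, matching the comment above.
```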
Imagine a video game where every song was randomly generated like this
It would suck if you like a specific part of the generated songs.
Dynamic soundtrack based on mood of the scene.
In the future our brains will be wired to the computer so that it can generate just the kind of music that we love to hear at the moment by learning our emotional responses, or even let us describe what we want to hear.
Imagine an ai implant that generates random music that fits the current situation you are in
@@shun2240 why do I hear boss music?
@@StarGarnet03 that means you are in danger, be prepared
Pretty cool. Next step is to make a neural network that generates a matching title instead of you having to come up with one. Although 'Lost Toy' was an excellent match.
I guess it would be useful if he made the training data for that a couple hundred names that he gives to the generated songs based on how they make him feel. And then the AI based on that makes new names for its new songs
Next, make the AI find its own people to improve it, and after training on that make the AI make its own improvements on itself
Next, Skynet
Oh shit
@@markorezic3131 Music AI deafens you, making a robot uprising in the future ultimately easier.
Oofer wow
The chord progressions are really logical, this is really cool.
I'm not accusing you of this (since I don't know anything about it), but how did you avoid overfitting?
I monitor how well the network fits the real songs as it trains. After the 2000 epochs, its reconstruction of the training set is still pretty far off from the ground truth. So since I can't even reproduce the original songs, I doubt my random songs would be overfitted. Not 100% scientific, but I definitely haven't heard anything recognizable after listening to hundreds of songs either.
Thanks!
@klye sam It's when your AI effectively learns to replicate the training data instead of generating its own unique output. So yes, it produces results very similar to the training data.
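For anyone curious how that's usually checked in practice (a generic Keras-style sketch with toy data and a toy autoencoder standing in for the real model; nothing here is CodeParade's actual training code): hold some songs out and watch whether the held-out reconstruction error keeps tracking the training error.

```python
import numpy as np
from tensorflow import keras

# Toy stand-ins: random "songs" and a tiny autoencoder (shapes are placeholders).
songs = np.random.rand(200, 96).astype("float32")
train_songs, held_out_songs = songs[:160], songs[160:]

model = keras.Sequential([
    keras.Input(shape=(96,)),
    keras.layers.Dense(16, activation="relu"),      # bottleneck
    keras.layers.Dense(96, activation="sigmoid"),   # reconstruction
])
model.compile(optimizer="adam", loss="binary_crossentropy")

early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=20, restore_best_weights=True)

model.fit(train_songs, train_songs,                 # autoencoder: input == target
          validation_data=(held_out_songs, held_out_songs),
          epochs=2000, verbose=0, callbacks=[early_stop])
# If val_loss starts rising while loss keeps falling, the network is
# memorizing the training songs instead of learning general structure.
```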
The important thing is that, as far as we can perceive, the combinations do sound interesting and amusing. Think of how many "songs" (combinations) the base palettes can encode! This says a lot about what music is. And +CodeParade, you have to show us what the principal components sound like, even if they are not so pretty. I want to know.
@@CodeParade Hazel might've meant to ask about what steps you took to end up with a NN that does not overfit. I'm guessing it has to do with narrow hidden layers and some specific approach to setup of fitness evaluation function?
Hey, thanks a lot for this. I have played with it a lot over the last few years, and it's been good for melody inspiration!
Sorry for double commenting, but this video needs more views. Your approach is pretty unique; I haven't seen anyone else decide to generate songs using an autoencoder this way, let alone with this much success. Let's just hope you can appease The Algorithm.
I didn't think I would be able to find something like this. Holy shit, now you can never run out of musical ideas!
So NPCs are becoming capable of forming original dialogue and even interactive conversations, making music on the fly, generating worlds, creating people, and writing engaging, also-interactive stories. Video game devs are going to be unemployed in the next 20 years tops...
This channel is pure gold. I have watched 4 videos and all of them are original, inspiring and awesome. You have a very special kind of creativity. Never stop creating
10:45 some... BODY ONCE TOLD ME (but with weird jazzy chords)
I don't hear it.
I hear it.
@@charliek115 it does "some-bo-dy-once-" and then goes nuts
I hear it a little.
i hear a bit of Snowdin Town by Toby Fox with a slower tempo
I love VG soundtrack music, so I'm absolutely thrilled about this project conceptually and with the amazing results you have achieved. I can think of so many application-specific suggestions but then the code is open so maybe I should try doing something with it after I'm done with my own projects.
Thank you for sharing so much! Recently discovered your channel and loving exploring it.
I feel like pumping this with songs I found in various animes.
You are very talented and I am fairly sure your subscriber count will reach hundreds of thousands in the upcoming years. I'm not too good at programming, but I hope I am able to download and play with this. The most interesting thing for me would be to input a band's whole discography as MIDI and then see if the artist is recognizable in the generated music.
I think that this neural network is worth developing further. I think it's worth expanding it to make it more sophisticated and capable of writing somewhat longer songs, as well as using more than one instrument. Get it good enough and it may become profitable, as you could possibly sell a few songs to some indie game company or something. It would probably be perfect for a really low-budget mobile game, or even someone's hobby project.
hopefully in like 4 years i can make an album where every physical copy is personalized thanks to this
This is amazing work. Keep at it!
Dude your channel is soo underrated.
He does not care; he disabled ads anyway. This channel is just his free time.
Wow, it’s amazing how definite the chords can be
they're not perfect, but with a little help from some human supervised cleanup, this is a really, really useful tool.
7:10 "just imagine, a DJ that doesn't just mix live music, but actually composes the entire song". Autechre has been doing this for years. Making some truly mind bending music
I think he meant on the go?...
@@ultralowspekken yes. Autechre
@@thomastoews7850 that's cool. Gotta check that out then :)
This is astounding, best neural network music I've ever heard, although I love videogame soundtracks. Playing around with the program, I've found that the sliders can each be deciphered to some degree and their impact can be understood with a bit of music theory and attention, although they're a bit cryptic because humans have a clearly different way of defining patterns than digital neural nets. Sculpting a tune is really fun, moving each slider to get a feel for what it's doing and finding the ones which most impact melody and rhythm and others which shape the chord progression. This thing generates some weirdly coherent modulations, I've used it as inspiration in a track I'll upload soon. It's such a great way to find new progressions!
I have taken a look at your channel and I am curious which track took inspiration from Neural Composer?
Hey guys, I figured out what one of the knobs besides the leftmost one does, and I think it is VERY USEFUL. The second red knob from the left controls SYNCOPATION.
This channel is AMAZING. I really enjoy your videos. They are both entertaining and very educational. Keep the good work running!
You are amazing! My favorite song is "Beyond The Coast"
You were right about this video being your best. I really like this tool. Thank you for working on such a great project.
Plot twist: we’ve been listening to it the whole time
This is fascinating! Neural Networks are making such huge advancements right now that it shouldn't have been a surprise to see this pop up. It came out surprisingly great! Well done
this would be a badass tool for a dj. Just moving sliders and generating music
after like 10k epochs it would certainly sound good
3:59 add a couple of effects, drops, echo and here's your new trending song
imagine a show, or an anime, whose main background songs were all randomly composed. it'd have such a mood
Not only does this sound super good, but you titled the songs perfectly! Totally felt the music! 😊 Awesome project! Good job! I love the intertwined worlds of Music and AI! ❤
You deserve more subscribers. I would love to leave something like this running on my RTX 2080 for days and come back to a music generator for my favorite genres. I can just imagine someone training a neural network like this and becoming an anonymous sensation in the music industry lol. I guess the next step would be to generate lyrics that fit the corresponding genre and conform to the beat generated by the composer. Or maybe the beat composer should take features from the generated lyrics so that it adapts to them; I can see features being derived by encoding an approximation of the phonetic pronunciation of the words in the lyrics. I can imagine AI-generated music being played on services like Spotify, Pandora, or SoundCloud and getting feedback in the form of like-to-dislike ratios and keywords, if people were able to comment on the songs.
Nice flex dude
Man, he's got all the classic songs, what a nostalgia trip!
I always loved to play: Final Imagination, Legend of Tracker, Secret of HP and of course Jade Chrysalis!
thank you kanye, very kool
Oofer im not sure if this is a joke or not, dont woosh me
@@nidite it is a joke. It might've even been funny back when it was posted
Something with which I usually see song-generating computer programs struggle is harmony, but yours picked it up very quickly. Impressive.
Imagine this ai, but it's only training data is JoJo songs.
make it then
@@Shampoid Calm down, kid.
@@Known_as_The_Ghost sure
@@nysnys3100 skip this ad
@@Dr_Hax skipped
This is seriously amazing! Only just gotten into coding recently, and am so glad I came upon your channel, can't wait to see more videos!
I wonder if this would still sound like classic video game music if it was using different instruments...
Old comment but I think you could kind of separate instruments by using the red slider
Henrix98 it's only piano, isn't it?
Probably also a different training data set
This is my favorite of your videos :0 somehow you manage to be in depth and technical but not boring. 👏👏👏👏
Thank you! :)
Try making the "note certainty" determine the note velocity. It could sound amazingly human :)
In other words, make it so that the red notes that are "just shy of being played" are played quieter than the rest. Your red slider provides a soft threshold instead of a hard threshold.
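A rough sketch of what that could look like when writing out the MIDI (the (time_steps, 128) probability layout, the threshold, and the use of pretty_midi are all just illustrative assumptions, not the Composer's real pipeline):

```python
import numpy as np
import pretty_midi

def probs_to_midi(note_probs, threshold=0.25, step=0.25, out_path="song.mid"):
    """note_probs: (time_steps, 128) array of per-note probabilities (assumed).
    Notes over the threshold are kept, and the probability itself sets the
    velocity, so notes that were "just shy of being played" come out quiet."""
    pm = pretty_midi.PrettyMIDI()
    piano = pretty_midi.Instrument(program=0)
    for t, frame in enumerate(note_probs):
        for pitch in np.flatnonzero(frame > threshold):
            certainty = float(frame[pitch])
            velocity = int(np.interp(certainty, [threshold, 1.0], [40, 110]))
            piano.notes.append(pretty_midi.Note(
                velocity=velocity, pitch=int(pitch),
                start=t * step, end=(t + 1) * step))
    pm.instruments.append(piano)
    pm.write(out_path)
```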
Also, the cluster-y chords in "ghosts galore" are super cool and spooky and groovy
There is an anime called "Carole & Tuesday" in which AI creates most of the music, and I really recommend it 😁
Genuinely exceptional stuff, CodeParade! I'd love to see this sort of advanced hermeneutic analysis performed on specific bands, then specific genres, specific song themes, and all the way up to all music altogether, to begin mapping the entire fractalline space of musical typing as we know it.
"Wonder woods" is indeed quite impressive as result
This is extraordinary. People like you make me happy to be alive and marvel at what is possible.
Is it weird that listening to these gave me a serious uncanny valley reaction?
Not really, they tend to shift between sounding pretty human and skillful and having more awkward moments that would seem like a strange mistake for a human to make, especially one with the skill displayed earlier
@@agentstache135 That's part of this uncanny valley effect
Very interesting how it figured out harmony pretty well, but not so well that it becomes predictable. It actually writes chord progressions that are just complex enough to not be boring but simple enough to be kinda catchy
"Imagine a DJ that doesn't just mix live music, but composes the entire song"
may I introduce you to jazz
Amazing job! This resparks my interest in neural nets! tytyty
AI makes the visuals, AI makes the music, AI does the gameplay... and "every game developer wants that", but come payday you don't want the AI getting your paystub.
"We don't need composers, we don't need artists" - but it's all kind of "not in my backyard".
The power of human perception isn't to generate music, it is to create emotions from sounds. Love your video man
4:10 This is Earth radio, and now here's... human music
I've been watching a few videos of yours now..
and I must say, the things you've been doing with neural networks are quite impressive.. and unlike most "neural network generating something" videos, yours sounds good, or in the case of the face generator, looks good :o
And "Marble Marcher" was quite fun too :) although it hurt my GPU quite a bit, but still worth the experience :)
I'll continue to stalk your uploads :D
Hope you have a wonderful day :)
I would give an arm and leg for a github link for these programs: I will try and code them from scratch soon but it could take a while, these ideas are fascinating
Yesssss
I want a github link too, but I don't know anything about coding. What do I need to learn to use this program and tweak it? Is C++ good?
Download link is available now for standalone version. You can keep your limbs ;)
This is just incredible. Really good job :D
All of the generated songs sound like Minecraft music discs.
oh my god, I did NOT expect it to sound that good! thats absolutely insane!
"Wonder Woods" was so good wow
This is amazing!!
I would make this play constant original music for me for all time.
Honestly "Beyond the Coast" was actually amazing and if it were tweaked just a tiny bit (for example in the 10th measure with that awkward silence) it could become really great
This is just.. absolutely everything.
In my opinion this would be perfect for mashing together some synth wave soundtracks for RPG games
Epoch 500 sounds eerily familiar but I just can't place it, it feels old
Yeah exactly, mainly the 4th and 5th measure seem familiar to me, maybe even a bit nostalgic. I really wonder where I heard it before.
I think it may be this: ruclips.net/video/v0fy1HeJv80/видео.html
A lot of these give me really chill minecraft vibes. This is fantastic work and some amazing tech!
Beyond The Coast was my fav :) 7:31
You are by far one of the most interesting creators. Your brain is amazing.
9:00 Extra Life sounds like a combination of Steven Universe and an Octopath Traveler song.
I love this guy, he makes awesome projects all open source. This guy has to be remembered in the history of open source.
honestly i wanna see Beyond The Coast expanded to actual instruments
that feeling when u find someone has implemented one of ur ideas, cool video btw :)
Trying to piece together what I can of what the sliders do. 11/40 sort of figured out. Not too bad.
A few times I will reference M/B. This is the gap between the Melody and Bass
#1 Increases amount of notes *In a simplified system, creates an alternating note pattern (Ala Ghost's Galore)
#2 Increases amount of notes (reversed)
#'s 1 and 2 counteract each other
#3 Decreases the amount of notes if moved in either direction, but also affects note variety
#5 Increases the range of notes. Also creates a gap between M/B if high enough.
#6 Increases the width of M/B, but does not increase the range. Melodies have shown to be too high for the "range" 5 provides.
#10 Seems to increase the amount of notes allowed to be played at once. Also seems to spread them out.
#11 Increases the amount of times notes are allowed to play. (reversed)
#12 Similar to 11, but can also decrease the amount of notes that can play at once.
#14 Increases the amount of notes on the half beat (and some on the quarter beat)
#17 Increases note range, does not increase M/B
#21 One of the first to clearly affect sections differently from each other. If you have a straight line of music, it appears to shift each section up or down depending on the section. It's really hard to notice; you need a constant note beat, and even then you can only slightly notice it visually.
#22 Splits the first 2 (top) or last 2 (bottom) sections into melody and bass.
Here's the conclusion I came to this morning about the first 12...
1. Main beat frequency
2. Chord density
3. Key
4. Low range beats vs. other chords
5. High range beats vs. other chords
6. Swing factor
7. Chord height
8. Slow swing vs. fast spooky
9. Note clumping (horizontal and vertical)
10. Overall note probability
11. Beat note probability/coherency
12. Chord note probability/coherency
Oh wow. That's actually a very nice way of looking at and analyzing the type of music you got. Now that I think about it, 3 dimensions makes sense for music.
At least you've made a few more observations than the others to produce a better result.
It's lovely you've succeeded, people can actually use this tool as inspiration.
Nobody:
Boomers who never opened a DAW in their life:
“iSn’T tHiS jUsT wHaT AlL oF tHoSe dUmB ‘eLeCtRoNiC mUsIcIaNs’ dO bEcAuSe tHeY jUsT hAvE a coMPuTeR WrItE tHE sONG
Well its true...
Roses are red
Violets are blue
I had to google DAW
And so did you
λaron C. not if you’ve been in music for more than a bit
@@prikkiki-ti-2 How many people on YT do you think have created music?
λaron C. Not that many, but my point is that it isn’t obscure.
I don't know how this doesn't have more views. Really impressive music generated! I love the content!
POST A LONGER VERSION OF BEYOND THE COAST
But... _the bot doesn't make a longer version than that._ =^/
I suppose you could loop it, or he could modify the code to produce more than 16 measures.
Or a human can try to continue it.
@@yarde.n Yeah. this seems like the perfect inspiration machine, but I feel that a human would need to take that tune and refine it to the point of being an actual full and pleasing song.
@@jr.jackrabbit10 Yeah, I may try to do that some day.
@@yarde.n did you ever end up doing it?
This turned out surprisingly well!
I noticed that some of the sliders seemed to control the lines of melody (top-note melody, middle-note, bottom-note, etc.) while another controlled how adventurous the network was in going up and down its range, or how large its range was. Maybe looking at where the sliders are over a couple of songs would make things more intelligible.
I've been waiting for this for so long ...
Imagine making a game where every action changes the variables, so that at every moment you have a new song that's the product of your actions (of course with some kind of restrictions, so the songs match the actions or the game itself) (sorry for my English, I'm Chilean). If you have more ideas, respond (and if I misspelled something too xD, practice is the key to learning)
That's the perfect idea, mate!
Epoch 50 sounds like a person who knows about rhythm trying to make a piano song, but doesn't know the keys, so the good old key-mashing comes into play.
0:14 DOGS AND PUPPIES MAN COME ON
I wasn't a guy to just download things off the internet via a sketchy link easily, but this video convinced me to do that, and dare I say it was freaking worth it.
"I know nothing about music theory and I have no idea how people come up with original melodies" - creates the catchiest AI composed music I've heard so far by leveraging the core building blocks of music. EDIT: You might get an amazing result if you categorized the most common instruments, like bass, drums, synths. With just one instrument, this is so good already.
Interesting use of autoencoders. CNNs and RNNs are finite state automata, so the songs they can create have a type-3 grammar. CodeParade is looking for songs with more of a type-0 grammar. Therefore it requires another machine, which he finds in autoencoders, though usually that would be a Turing machine. I think maybe the latent space representation is the tape. Also, "number" can mean a song, and it can also mean an element of a space, but in this case it's the same thing. I think I may have learned something.