Thank you YouTube Algorithm. As an AI student and fan, this is amazing, and the orchestral music is a cherry on top. Instant sub
I don't know if it was intentional or not - but the word "fruits" mutated to "furits" in your video - and that made them feel almost more "alive" in my mind and I started rooting for the little "furits" hoping that the evil snake would not find them! The music was perfect for this unfolding evolutionary drama!
You should look up "furries" .
@@earnings_cc Those are not living beings lol
Amazing! There's a lack of NEAT content on YouTube, and you just fixed it (very good visuals, by the way)
This was so well put together. Great music and great representation. Congratz. :)
Very impressive! I'm trying to get into visualizing my nets and was wondering if there is a github repository for this project that one could look at?
the visualization is cool! how do you implement it?
It has also some sensual content, great video!
What is the input to the nets? Do you feed in the whole grid? And do the nets make a decision on each frame of the game?
Amazing visualization!
How can I display the changes in the neural network in each generation like this?
Very cool! I always wanted to learn how to make a neat network. Any way you can share the code for this?
I'm trying to do something with neuroevolution as well and I have been searching for a way to visualize the changes in the network itself. How did you do these neural network graph visualizations + animations? They are great!
Hey Santos. The tool he is using here is Manim, an open source Python tool created by 3b1b. There's a community version too, which is extensively documented. You can get started there!
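For anyone wanting to try it, a bare-bones network drawing in Manim Community Edition looks roughly like this (node positions and the scene name are made up for illustration, this is not the code from the video):
from manim import Scene, Circle, Line, Create, FadeIn, LEFT, RIGHT, UP, DOWN

class TinyNetwork(Scene):
    def construct(self):
        # two input nodes and one output node, drawn as circles
        in1 = Circle(radius=0.2).move_to(LEFT * 3 + UP)
        in2 = Circle(radius=0.2).move_to(LEFT * 3 + DOWN)
        out = Circle(radius=0.2).move_to(RIGHT * 3)
        # connections drawn as lines between node centers
        e1 = Line(in1.get_center(), out.get_center())
        e2 = Line(in2.get_center(), out.get_center())
        self.play(FadeIn(in1), FadeIn(in2), FadeIn(out))
        # Create animates the edges being drawn, which gives the "growing connection" look
        self.play(Create(e1), Create(e2))

Render it with: manim -pql your_file.py TinyNetwork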
Love the music tho
Absolutely beautiful and terrifying at the same moment.
Underrated channel. Just subbed
Neat is so cool, I love it very much.
This is so Myelinating........... 👌👌👌👌👌👌👌👌
what's your fitness function?
Really great presentation.... thanks.
Can you teach us how you did the animation?
This is fascinating. How did you make the cool visuals btw?
Manim (Mathematical Animation Engine), a library by Grant Sanderson, aka 3blue1brown.
SO EPIC !!!!!!!! Really cool video
Great video!!! Btw I am curious how long does it take to simulate 100 generations?
It depends on how large the population is and on how many times you make each gene play. With a pop of 150 and each gene playing 10 times, it was under 5 minutes for the first 100 generations, but after 400-500 gens it started taking 30-60 seconds per generation. The key is to cache the game's steps and just kill the game if it gets caught in a loop.
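Not his exact code, but the loop-kill idea can be as simple as hashing the full game state every step and ending the run when a state repeats without a fruit being eaten (the game fields here are assumptions for illustration):
def run_game(net, game, max_steps=10_000):
    seen = set()                 # states seen since the last fruit
    for _ in range(max_steps):
        state = (tuple(game.body), game.direction, game.fruit)   # assumed fields
        if state in seen:
            break                # same state again with no progress: it's looping, kill it
        seen.add(state)
        game.step(net)           # assumed: advance one frame using the network's output
        if game.ate_fruit:
            seen.clear()         # progress was made, reset the loop detector
        if game.over:
            break
    return game.score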
This is amazing!!!!
Amazing. How do you do the neural network video? That's not only very cool, but also useful.
GREAT work. Beautiful
Does it manage to fully win the game at a certain point?
At the moment, no. But if I were to keep training it for, say, 10,000 more generations, considering the current direction of the evolution, I think it would learn to make the zigzag at the bottom right from bottom to top and cover the entire map with that movement pattern, completing the entire game this way.
How were you able to visualize the genetic structure?
Deep respect!
Amazing. I have a question: does neat-python allow adding hidden layers via the config file, or does the net evolve hidden layers by itself?
What was the exact selection criterion? It seems like it was finding the dot and avoiding obstacles, but other than the boundary I am not sure what obstacles you refer to? I would like to see the criterion be simply 'the least number of steps to reach the dot' as that doesn't seem to be the measure you chose. In fact as the generations increase it seemed to become more efficient initially, then less and less efficient. Your thoughts?
The obstacle he's referring to is the snake itself: coming into contact with its own body.
how'd you make the animations
Great video!
how do you reward the gens?
I make each gene play the game 50 times and compute its average result.
pastebin.com/rudkrxcL
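In neat-python terms, the 50-game averaging he describes would look roughly like this (play_one_game is a placeholder for the actual game loop, see the pastebin above for the real thing):
import neat

def eval_genomes(genomes, config):
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        # play 50 games and average the scores to smooth out lucky and unlucky runs
        scores = [play_one_game(net) for _ in range(50)]
        genome.fitness = sum(scores) / len(scores)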
@@semperzero
thanks for replying ❤
I'm asking because I'm trying to make the same project for learning purposes.
I'm rewarding each gene for eating and for getting near the apple. I see your approach of making each one play 50 times is to avoid randomness, am I right? Please correct me.
I think this is why it took me 200 generations to make them learn to eat. However, I'm currently stuck with the snakes moving clockwise as close as possible to the wall and eating the apple from the bottom side, which causes them to hit their tail after a certain score... any suggestions?
I think playing 50 games wouldn't be the solution at this point.
Can I also ask about your population size?
@@ahmedyasser8416 yes, it is to give a lesser weight to lucky and unlucky games.
I think the best approach would be to retrain the snake from zero rather than making changes after you've already trained it for 200 generations.
My pop size was somewhere around 150.
Now, the biggest problem I encountered was choosing good hyperparameters (mutation rate, mutation power, etc.). To overcome this and find good hyperparameters I built 2 engines, one using hyperopt random search (hyperopt.rand.suggest) and one using NEAT itself as a genetic search engine. In short, I tried 100 random sets of hyperparameters, and for each of them I trained the snake up to generation 100 or 200 (you can also train the snake multiple times) and took the best results.
To make the genetic search engine using NEAT, I just made a different project in which the neural network had 1 input of value 1, no bias, no way to add or delete neurons, and so on. The output layer had 6 neurons, one for each hyperparameter.
You can stop snakes that hit a plateau very early on, or that don't pass certain thresholds (for example, more than 5 fruits on average by gen 40, more than 10 by gen 100, etc.).
To change the hyperparameters, all you need to do is write a function like this:
def set_config(self, confg):
    self.config.genome_config.bias_mutate_power = confg["bias_mutate_power"]
    self.config.genome_config.bias_mutate_rate = confg["bias_mutate_rate"]
    ....
Call the function after you initialize the config and before you initialize the population:
self.p = neat.Population(self.config)
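For anyone wiring this up with hyperopt, a random search over those hyperparameters could look roughly like this. It's only a sketch: the search ranges and the SnakeEngine/train wrapper are assumptions, not his actual code, but fmin with rand.suggest is the hyperopt piece he mentions.
from hyperopt import fmin, hp, rand

space = {
    "bias_mutate_power": hp.uniform("bias_mutate_power", 0.1, 2.0),
    "bias_mutate_rate": hp.uniform("bias_mutate_rate", 0.1, 0.9),
    # ... one entry per hyperparameter you want to tune
}

def objective(confg):
    engine = SnakeEngine()        # assumed wrapper around the neat-python setup
    engine.set_config(confg)      # the function from the comment above
    best_fitness = engine.train(generations=100)   # assumed: returns the best fitness reached
    return -best_fitness          # hyperopt minimizes, so negate the fitness

best = fmin(fn=objective, space=space, algo=rand.suggest, max_evals=100)
print(best)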
Extremely well done
This is neat
How is the input state encoded?
I found the answer here: ruclips.net/video/h9JZ0YHtKWQ/видео.html
Hey. I got an open source project I thought you might be interested in, how can we get in touch? Can I link you?
Alex.semperzero@gmail.com
Will check it out tomorrow
Hi there, I've been working on my custom evolving algorithm for a while and I'm stuck on one question, maybe you could help me please! Or anyone in the community?
If I've got a connection from input to output, and two other neurons on it, i.e.:
(IN)---->-----(N1)------>-----(N2)--->-----(OUT)
And say N1 gets disabled in a mutation, should it cause N2 to also be disabled automatically, or does NEAT allow keeping disconnected DEEP neurons?
I can't find the answer anywhere...
I'd so appreciate it. I tried forums, Stack Exchange, and Facebook groups, but there doesn't seem to be anyone who knows this well enough to answer.
From my knowledge, nodes are treated independently in the mutation process. If one gets disabled, then all its edges get disabled as well, but it will not affect other nodes. The reason is that a new connection may be made to N2 in the future.
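As a toy illustration of that rule (plain dicts, not neat-python's actual genome classes), disabling N1 only turns off the connections attached to it; N2 itself stays available for future connections:
# genome as simple dicts: enabled flags for nodes and for connections keyed by (src, dst)
nodes = {"IN": True, "N1": True, "N2": True, "OUT": True}
conns = {("IN", "N1"): True, ("N1", "N2"): True, ("N2", "OUT"): True}

def disable_node(node):
    nodes[node] = False
    for (src, dst) in conns:
        if node in (src, dst):
            conns[(src, dst)] = False   # only edges touching the disabled node go

disable_node("N1")
# N2 and ("N2", "OUT") remain enabled, so a later mutation can reconnect N2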
Perfect 👌🏻
I give thanks to thee Almighty Algorithm for this content you've fated me to partake. Bless the Creator with endless likes and may you protect us from dislikes. Amen.
Revolution
NEAT is much more exciting to me than transformer nonsense.
excellent video!
Dramatic
Very cool
a god made by humans
Where is the source code? Dislike
Amazing visualization!