NEAT - Introduction

  • Published: 20 Dec 2024

Comments • 200

  • @exeriusofficial
    @exeriusofficial 6 years ago +300

    That's a pretty neat presentation

    • @fadop3156
      @fadop3156 5 years ago

      Sankalp Bhamare bag more pls

    • @Freaky_zy
      @Freaky_zy 4 years ago +7

      Pretty neat pun...

    • @umaryusuf12
      @umaryusuf12 4 years ago +1

      Absolute legend😂😂👌🏾

    • @spinLOL533
      @spinLOL533 4 years ago

      lol

  • @brampedgex1288
    @brampedgex1288 5 years ago +91

    It's been over a year and he's still liking comments. Wow.

    • @finneggers6612
      @finneggers6612  5 years ago +12

      I also read all of them and answer if I feel like they should be answered :)

  • @pontusnellgard8706
    @pontusnellgard8706 5 years ago +14

    Please continue this series! I loved your fully connected series; you are the most pedagogical neural network programmer on YouTube and a huge help and inspiration. Don't let this series die.

  • @shinchikichin
    @shinchikichin 5 years ago +37

    Really nice presentation! It feels like you are speaking from understanding and not from a premade, word-for-word script. That really helps.

  • @Sajal7861
    @Sajal7861 5 years ago +31

    18:55 Thanos would be proud

  • @garybutler1672
    @garybutler1672 6 years ago +12

    Nice work. I'm implementing NEAT myself and just got a prototype working. I'm exhausted; this is one of the toughest algorithms to duplicate. Not because of its complexity, but because there is really nothing like it in the deep learning I'm used to. You can't simply throw some Keras code at it; you must implement every node and connection from scratch.

    • @finneggers6612
      @finneggers6612  6 years ago +3

      Yeah, that's correct. Which language do you use to implement it?

  • @julianabhari7760
    @julianabhari7760 6 years ago +41

    This was an awesome presentation and introduction to this algorithm, I can't wait for the next video on this topic

    • @finneggers6612
      @finneggers6612  6 years ago +2

      Thank you, Julian.
      In theory my implementation is already finished, and it does kind of work, but the thing is that modern implementations are so much more complex.
      My code would already need at least 7 videos to cover, and eventually it will work and we will see networks developing over time, but I've got a problem with the speciation.
      I will fix this as soon as possible. I think the full code is already uploaded to my GitHub account. You can check it out if you like.

    • @georgechristoforou991
      @georgechristoforou991 5 years ago +1

      @@sankalpbhamare3759 NOT REALLY!

  • @75hilmar
    @75hilmar 3 years ago +2

    This was some really profound and useful information, and you don't need to be insecure about your English because it is pretty decent. Can't wait to watch your later videos.

    • @finneggers6612
      @finneggers6612  3 years ago +3

      Thank you very much :) I have been somewhat inactive for the last few months, but I may start uploading again. I've got plenty of amazing projects I worked on. For example, I wrote one of the top chess engines (Koivisto) and I may do a few videos about that. I also started writing some more mathematically rigorous stuff which isn't directly related to AI, and I may start with that too. So stay tuned :)

  • @Dtomper
    @Dtomper 9 months ago

    THANK YOU. This presentation was AWESOME, I understood it very well, thank you thank you thank you so much

  • @nunorodrigues3195
    @nunorodrigues3195 5 years ago +1

    I have a recurring question regarding NEAT networks and neuroevolution networks in general.
    Since they act like graphs due to the lack of layers, what is the best way to compute them? Going recursively from the output and checking dependencies until we reach the inputs seems far from optimal, but propagating forward doesn't seem doable either, since one input can propagate to a hidden neuron that also takes an uncalculated hidden neuron as input.

    • @finneggers6612
      @finneggers6612  5 years ago +2

      I came up with a solution when I first programmed them.
      First, I assigned an x value to every node: inputs get 0 and outputs get 1. (In fact I used 0.1 and 0.9, but I had graphical reasons for that.)
      Every node that is not an input or output node is then assigned a value between 0.1 and 0.9.
      If there is a connection between node 1 with x1 and node 2 with x2, and I split that connection into two new connections and one node, the new node gets the average x value (x1 + x2) / 2.
      Now the idea is that a connection is only allowed from left to right (from a node with a smaller x value to one with a higher x value).
      Calculations simplify drastically: if you sort all the nodes by their x value and iterate through them (starting with the lowest x value), you ensure that the preceding nodes have always been calculated already.
      My implementation needs 3 arrays:
      - all input nodes
      - all output nodes
      - all hidden nodes
      Algorithm:
      1. Sort the hidden nodes by their x value (only needs to be done once)
      2. Set the input values on the input nodes
      3. Iterate through the hidden nodes and calculate their outputs
      4. Iterate through the output nodes and
      4.1 calculate their outputs
      4.2 store them in the output array
      There can be multiple optimizations. I only re-sort the nodes once a new node has been added somewhere. Actually, if you insert new nodes so that the hidden nodes array stays sorted, you don't need any sorting algorithm at all.
      Hope this helps :)
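
The evaluation scheme above can be sketched in a few lines of Python. This is an illustrative sketch only; the `Node` class, the sigmoid activation, and the field names are assumptions, not the code from the video:

```python
import math

# Illustrative sketch of the x-sorted forward pass described above.
class Node:
    def __init__(self, x):
        self.x = x             # position in [0, 1]; inputs left, outputs right
        self.incoming = []     # list of (source_node, weight, enabled)
        self.output = 0.0

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def forward(inputs, hidden, outputs, values):
    for node, value in zip(inputs, values):
        node.output = value
    # hidden nodes sorted by x: every source is guaranteed to be computed first
    for node in sorted(hidden, key=lambda n: n.x):
        total = sum(src.output * w for src, w, enabled in node.incoming if enabled)
        node.output = sigmoid(total)
    results = []
    for node in outputs:
        total = sum(src.output * w for src, w, enabled in node.incoming if enabled)
        node.output = sigmoid(total)
        results.append(node.output)
    return results
```

Splitting a connection then just inserts a node with the average x of its two endpoints, which keeps the left-to-right invariant intact.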

    • @nunorodrigues3195
      @nunorodrigues3195 5 years ago

      @@finneggers6612 I had never thought of something like this; it is certainly an option I will keep in mind. Thanks for the thorough response.

  • @UntrainableWizard
    @UntrainableWizard 5 years ago +10

    I've watched this 3 times; not saying I'm obsessed with the system or anything, buuuuuut... lol
    You do an amazing job of describing everything you can, and the slides were great, thank you so much. You may have been the first person to describe this topic in a way that I was able to piece together.

  • @Goel25
    @Goel25 5 years ago +3

    Awesome video! I still have a few questions about the algorithm though.
    1. Are there biases on each of the nodes? If so, you would need a separate innovation number for the nodes, and the paper never mentions that, which makes me believe there aren't biases on the nodes. This contrasts with other networks, though, and I believe biases are very important. Or is there just a single bias neuron as input that is always set to 1?
    2. Is there an activation function applied to each node, or just a sigmoid (or any other function) on the output nodes?
    3. In crossover, are the excess genes (not disjoint genes) only taken from the most fit parent? Figure 3 in the paper shows excess genes being taken from parent 2 and disjoint genes taken from parent 1 and parent 2.
    4. What is the range for weights? You said in the video between -2 and 2, but the paper never mentions anything about it. If there are biases on each node, what is the range for those?
    Thanks so much!

    • @finneggers6612
      @finneggers6612  5 years ago +5

      Sorry if that has not been clear, but I am happy to see that you read the paper.
      So let's try to answer your questions:
      1) Yes, in the paper (and my implementation) there is no bias. It doesn't matter how you think about the bias; it's just an offset for EVERY neuron. It is not weight-specific but neuron-specific. This is interesting because it was not included in the crossover, since only connections have been discussed. If you want to include a bias, you need to decide whether a neuron shares its bias across all genomes or has one bias per neuron per genome. I think you are better off ignoring the bias term :)
      2) The activation function is applied to every neuron. It serves as a non-linear factor. The important thing about the function is the following:
      A NN would be purely linear if there were no activation functions, because we only do additions and multiplications. The problem with a linear function (function = neural network) is that a linear function can only understand linear problems. This is why we want nonlinear functions, so we include the activation function at every neuron. Furthermore, without the activation function there would be no need for any neuron between the input and output, because it would have no impact (or almost none) on the output.
      Basically, a linear neural network with 10 layers is as good as a linear network with 1 layer (thinking about default neural networks).
      3) Yes, excess genes are only taken from the fitter parent.
      4) This is a complicated topic. It has been shown that it's a good idea to randomize weights between -1/sqrt(n) and +1/sqrt(n), with n being the number of neurons in the previous layer. This applies to normal neural networks where we use backpropagation, and it has to do with the gradients etc. But because NEAT does not use anything like that, it doesn't really matter, I think. Although I am not an expert on that topic :)
      Hope this helps
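
The claim in (2), that stacked layers without activation functions collapse into a single linear layer, can be checked numerically. The matrices below are arbitrary example values:

```python
# Without activations, layer composition is just matrix multiplication,
# so two layers equal one combined layer. Matrices are arbitrary examples.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

W1 = [[0.5, -1.0], [2.0, 0.25]]   # "layer 1" weights
W2 = [[1.5, 0.0], [-0.5, 1.0]]    # "layer 2" weights
x = [1.0, 2.0]

two_layers = matvec(W2, matvec(W1, x))   # forward pass through both layers
one_layer = matvec(matmul(W2, W1), x)    # the single equivalent layer
assert all(abs(a - b) < 1e-12 for a, b in zip(two_layers, one_layer))
```

Putting a sigmoid (or any nonlinearity) between the two layers breaks this equivalence, which is exactly why the hidden neurons start to matter.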

    • @finneggers6612
      @finneggers6612  5 years ago +3

      Goel, if you also consider the bias, the number of previous neurons would be the actual number of previous neurons + 1, because the bias can be understood as a connection to a neuron with output 1.

    • @Goel25
      @Goel25 5 years ago +1

      @@finneggers6612 Thank you for such a quick response!
      For the bias, I saw in another video (of a normal NN) that they added the bias as just an input with a value always set to 1 that was connected to every non-input node. Without that, their NN wasn't able to solve the problem. That should be a pretty easy solution, as it won't require a separate innovation number or anything like that.
      I'll make sure to try a couple of different activation functions (and maybe even make the activation a mutation, so each node can have its own activation function, since different ones are useful for different things).
      I'll try using a range of -1 to 1 and -2 to 2 and see which works better. Although through enough mutations where the weight is increased/decreased, I suppose (as long as I don't constrain it) it could get outside whatever range I choose.
      I'll let you know how it goes!
      I'll let you know how it goes!

    • @finneggers6612
      @finneggers6612  5 years ago +2

      Goel, think about a linear function ax+b. For a neural network, a is the weights and b is the bias. So the bias is an offset. Some problems can in fact not be solved without the bias, but this is rare due to the non-linearity. Most problems can be solved without it. It's just one more parameter, and the number of parameters determines the complexity the neural network can understand.
      BTW I am sorry for all the typos. My German autocorrection is so annoying…

    • @Goel25
      @Goel25 5 years ago +1

      @@finneggers6612 Ok, I guess I'll try it without the bias first, and then if it has trouble, re-add the bias.
      Thanks so much!!

  • @bernardoolisan1010
    @bernardoolisan1010 2 years ago +2

    Sorry for asking so many questions, but these are the last ones...
    1. How is the selection of the species made? How do you score the results to then mutate that selection?
    2. What data do the nodes (neurons) contain? Do they contain the weighted sum, like a normal neuron does?
    3. How many types of NEAT algorithms are there?
    4. I read in the paper that there are some formulas you didn't mention. What are those, like the fitness formula? Why are those formulas useful?

  • @chandler5587
    @chandler5587 6 years ago +4

    I just came across your channel. I love your content, especially the coding walkthroughs, because I like to try to follow along.

  • @charimuvilla8693
    @charimuvilla8693 4 years ago +1

    Hmm, the idea of tuning the architecture of the network opens a new door for me. I've been messing with genetic algorithms as a way to tune weights, but I just don't know how big the network has to be. In cases where you need huge networks, knowing when to stop growing would be nice.

    • @finneggers6612
      @finneggers6612  4 years ago +2

      chari Muvilla, NEAT isn't designed for big networks, sadly, because they don't grow fast enough for that. Look into HyperNEAT; it does exactly this. The algorithm is the same, but the encoding part is different.

    • @charimuvilla8693
      @charimuvilla8693 4 years ago +1

      @@finneggers6612 nice thanks

  • @rafe_3d160
    @rafe_3d160 1 year ago

    Hi, I am writing my research paper about the learning process of artificial intelligences with NEAT. At 12:13 you explain mutate_node, but in the official NEAT docs I can only find mutate_add_node and mutate_delete_node. Have there been updates in this regard? Furthermore, I can't find any information in the docs about the other mutations, such as mutate_weight_shift. Have these perhaps been renamed or completely replaced? Many thanks in advance.

  • @chandler5587
    @chandler5587 6 years ago +22

    Please do a cool coding project with this topic (NEAT)

    • @finneggers6612
      @finneggers6612  6 years ago +3

      Hecker551, that's nice to hear.
      I could do that, but right now I am focusing on uni etc.
      The code of my implementation is finished and I think I've uploaded it somewhere.
      If not, I will do so later when I am home.
      If you like, you could try to code the implementation yourself.
      It's very challenging and you need to know basic things about maps etc.
      I've already made the first video, but I think it's too long, so I need to split it up.
      If you'd tell me something to code, I can do that.
      Tell me a game, any idea, anything that is codeable, and if I have time, I will do it.

    • @EshanTahir
      @EshanTahir 3 years ago

      @@finneggers6612 If you are still willing to do this, would you please create a 2D AI car that uses Ackermann's model of steering, with the ability to brake, accelerate, decelerate, turn, and pass on output? It could have radars that detect the distance from the track rim, and it could detect its velocity, with semi-okay graphics. It could also avoid obstacles and maybe, when trained enough, the cars could avoid each other. That would be an epic video, or even just a project without a video, to get many views. Thanks for your time, and I might be able to help!

  • @PanCave
    @PanCave 5 years ago

    I haven't really looked into the topic yet, but the "selection" process seems a bit odd to me. The order in which the genomes are considered could make a big difference in the categorization.

    • @finneggers6612
      @finneggers6612  5 years ago +1

      That is correct. However, we are only trying to roughly divide them into different species.
      Unfortunately, the original paper doesn't say any more than that either.

  • @subject2749
    @subject2749 4 years ago +4

    This is the best explanation of the NEAT algorithm I could find on the internet

  • @MTEXX
    @MTEXX 3 years ago +2

    Great video, Finn! Coding this up for fun. Question: let's say a mutation creates a new node between node 10 and node 15, and the new node is called node 21. Another sibling in the simulation also mutates and creates a new node in the same location. Would the node manager return node 21 or some unique new node number?

    • @finneggers6612
      @finneggers6612  3 years ago +2

      Good question! Theoretically, every mutation that splits a connection into a new node and two connections should split the same connection into the same node. What I did is actually hash the connection id together with the node id which replaces it. The two new connections should also be the same; I usually hashed a connection using both of its node ids.
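
The lookup described in this reply can be sketched as a small registry keyed by the split connection. The class and method names here are hypothetical, not from the video's code:

```python
# Sketch: hand out the same node id whenever the same connection is split,
# so siblings that split connection (10 -> 15) agree on the new node's id.
class InnovationRegistry:
    def __init__(self, next_node_id):
        self.next_node_id = next_node_id
        self.split_cache = {}   # (src_id, dst_id) -> node id made by that split

    def node_for_split(self, src_id, dst_id):
        key = (src_id, dst_id)
        if key not in self.split_cache:
            self.split_cache[key] = self.next_node_id
            self.next_node_id += 1
        return self.split_cache[key]

reg = InnovationRegistry(next_node_id=21)
assert reg.node_for_split(10, 15) == 21   # first genome splits 10 -> 15
assert reg.node_for_split(10, 15) == 21   # a sibling gets the same node back
assert reg.node_for_split(3, 15) == 22    # a different split gets a fresh id
```

The same dictionary trick works for connection innovation numbers, keyed on the two endpoint node ids.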

    • @MTEXX
      @MTEXX 3 years ago +3

      Thanks for the reply. Looking at some other projects, I do think that new nodes of the same lineage need to be reused. This promotes even better connection reuse, crossover and speciation.

    • @finneggers6612
      @finneggers6612  3 years ago +2

      @@MTEXX Yes, that is correct. I am sorry if that didn't come across directly in my reply :)

  • @sudarsansudar2120
    @sudarsansudar2120 5 years ago +1

    Nice presentation. In speciation you said we group the genomes by distance. So do we use the weight matrices of the neural network as vectors, so that they can be compared to find the distance? If a connection is absent, it would be represented by 0. Is my intuition correct?

    • @finneggers6612
      @finneggers6612  5 years ago +1

      sudarsan sudar, you might be correct for networks with the same topology, but that is not guaranteed, because it's the topology we optimize. I think the distance function is explained in the video as well :)

    • @sudarsansudar2120
      @sudarsansudar2120 5 years ago +1

      You said you will explain it in another video!!

    • @finneggers6612
      @finneggers6612  5 years ago +1

      @@sudarsansudar2120 but it is explained?

  • @bribes_for_nouns
    @bribes_for_nouns 2 years ago

    Best explanation on the internet. I'm doing an ecosystem project right now and tried to look up and understand NEAT algorithms, but they were all far too technical for me; the way you explained it gave me hope that I can implement this step by step. Thank you. Right now I have a simple feedforward network with no hidden layers for each of the creatures (just inputs/outputs and the directions they move). I suppose the next step would be to give the network prototype some type of static method to generate a new hidden neuron/connection. This is going to be tough.
    Even still, this type of genetic algorithm fused with NNs interests me way more than the gradient descent/backpropagation calculus, so I think this path will be worth it for me in the long run, as I just find this topic so much more interesting.
    It would be great if this algorithm could be optimized even further somehow.

    • @finneggers6612
      @finneggers6612  2 years ago +1

      This algorithm has been improved even further :) The algorithm is still the same, but it can be applied to larger networks. It's called HyperNEAT. HyperNEAT does basically the same with the genomes, although they represent different information. I haven't looked into HyperNEAT in depth, but you may want to do that. It scales much better with larger networks.

    • @bribes_for_nouns
      @bribes_for_nouns 2 years ago +1

      @@finneggers6612 I'll definitely check that out after I implement the simpler version first!
      Question: I'm still in the beginning stages, and I have a Neuron class and a Brain class configured. The Neuron can store a connection object, and the Brain uses static methods to generate the initial network and has an instance method to form a connection between two neurons.
      In your instructions you mentioned the importance of the 'innovation number'. In my neuron's connection object, I have both an innovation number and a path value like [1-6] showing which neurons are connected.
      I'm having a hard time seeing why I would need both a path and an innovation number. Would just having the path stored be sufficient to check whether a previous connection has been made in the global brain? Or does the innovation number/id play some important role later on, since it just keeps incrementing over time?
      Also, can connections only occur one layer up in this algorithm? Meaning a neuron at layer 1 can only connect with a hidden neuron at layer 2, and not bypass it and connect with a neuron at layer 3? The inputs start out connected directly to the outputs with no hidden nodes, but if a hidden node is dynamically created, do connections have to go through it to reach the output layer?

    • @finneggers6612
      @finneggers6612  2 years ago +1

      @@bribes_for_nouns The innovation number plays an important role in computing the distance function between two genomes and sorting them into the same "species". Connections are basically just a computational path between two neurons; the connection itself does not tell you which connection is "older", i.e. which has been created first. Generally, older connections are weighted differently than newer ones. That's why the innovation number plays an important role. Also, NEAT does not know anything about layers; neurons are created by splitting connections. As far as I know, HyperNEAT works with layers.
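
The distance function this reply refers to is given in the original NEAT paper (Stanley and Miikkulainen, 2002) as delta = c1*E/N + c2*D/N + c3*Wbar, where E counts excess genes, D disjoint genes, Wbar is the average weight difference of matching genes, and N is the size of the larger genome. A sketch, modeling each genome as a dict from innovation number to weight (a simplification; the coefficient values are illustrative defaults, not taken from the video):

```python
# Compatibility distance from the NEAT paper: genes with innovation numbers
# beyond the other genome's highest number are excess, other unmatched genes
# are disjoint. Genomes here are non-empty {innovation_number: weight} dicts.
def distance(g1, g2, c1=1.0, c2=1.0, c3=0.4):
    innos1, innos2 = set(g1), set(g2)
    matching = innos1 & innos2
    unmatched = innos1 ^ innos2
    cutoff = min(max(innos1), max(innos2))
    excess = sum(1 for i in unmatched if i > cutoff)
    disjoint = len(unmatched) - excess
    wbar = (sum(abs(g1[i] - g2[i]) for i in matching) / len(matching)
            if matching else 0.0)
    n = max(len(g1), len(g2))
    return c1 * excess / n + c2 * disjoint / n + c3 * wbar

g1 = {1: 0.5, 2: -0.5, 4: 1.0}
g2 = {1: 0.0, 3: 1.0, 5: 1.0, 6: 1.0}
# E = 2 (genes 5, 6), D = 3 (genes 2, 3, 4), Wbar = 0.5 (gene 1), N = 4
assert abs(distance(g1, g2) - 1.45) < 1e-12
```

The paper additionally suggests setting N to 1 for very small genomes.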

  • @ninek1902
    @ninek1902 2 years ago

    Hey, nice presentation!
    Could you please provide a link to the formula that calculates genome distances, used to sort genomes into species?
    Thanks!

    • @finneggers6612
      @finneggers6612  2 years ago +1

      This is actually a part that the original paper left pretty open. I did some further research and also asked on StackExchange, but I was unable to find it. I also don't remember the exact method, but I think I am doing something like this:
      The distance of a genome to a species is the distance of the genome to the representative of the species, which I consider to be the FIRST one to enter the species.
      For each genome g:
        Go through each existing species s:
          If distance(g, s) < some threshold:
            Add g to s
            Break
        If no species was found:
          Create a new species with g as the representative
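
The speciation loop described in this reply can be sketched directly; `distance` stands in for any genome distance function, and the threshold is an arbitrary example value:

```python
# Sketch of the speciation loop: each species is a list whose first element
# is its representative (the first genome that entered it).
def speciate(genomes, distance, threshold=3.0):
    species = []
    for g in genomes:
        for s in species:
            if distance(g, s[0]) < threshold:   # compare to the representative
                s.append(g)
                break
        else:                                   # no compatible species found
            species.append([g])                 # g founds a new species
    return species

# Toy check with plain numbers and absolute difference as the "distance"
groups = speciate([0.0, 0.5, 10.0, 10.2], lambda a, b: abs(a - b), threshold=1.0)
assert groups == [[0.0, 0.5], [10.0, 10.2]]
```

As the comment thread above notes, the result depends on the order in which genomes are considered; this simple greedy grouping is usually good enough.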

    • @finneggers6612
      @finneggers6612  2 years ago

      That's a simple and probably not ideal solution, but it works well, and it's what I used.

  • @tag_of_frank
    @tag_of_frank 4 years ago +1

    I need a clear definition of excess genes. "Those at the end" is strange to me; they seem to have the properties of disjoint genes. Is the larger network always parent 2? Are excess genes just the genes of parent 2 after the final disjoint or shared gene of parent 1?

    • @finneggers6612
      @finneggers6612  4 years ago +1

      No, the second one does not have to be the larger one. Excess genes are indeed always the genes in parent 2 after the last gene of parent 1.
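
This split between disjoint and excess genes can be written down directly from the paper's definitions; genomes are modeled here as plain sets of innovation numbers, and the example values are illustrative:

```python
# Matching genes share an innovation number; unmatched genes beyond the
# other parent's highest innovation number are excess, the rest disjoint.
def classify(parent1, parent2):
    cutoff = min(max(parent1), max(parent2))
    unmatched = parent1 ^ parent2
    excess = {i for i in unmatched if i > cutoff}
    return parent1 & parent2, unmatched - excess, excess

# Example in the spirit of the paper's crossover figure
p1 = {1, 2, 3, 4, 5, 8}
p2 = {1, 2, 3, 4, 5, 6, 7, 9, 10}
matching, disjoint, excess = classify(p1, p2)
assert matching == {1, 2, 3, 4, 5}
assert disjoint == {6, 7, 8}     # inside both parents' innovation ranges
assert excess == {9, 10}         # past parent 1's last gene (8)
```

Note the definition is symmetric: either parent can contribute excess genes, depending on whose innovation numbers run higher.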

  • @nembobuldrini
    @nembobuldrini 4 years ago +1

    Great and easy to understand explanation of the NEAT algorithm! Kudos!

  • @lukass1604
    @lukass1604 4 years ago

    4:40 Is putting a neuron in a connection simply random? And is it also random whether the connection is enabled? I'm talking about the first generated networks.

  • @avananana
    @avananana 6 years ago +5

    It's been some time since you uploaded this but I do look forward to seeing the code you're going to write. I'm having a really difficult time getting NEAT to work for me and can't really find anything on the web that explains how to write them in code. I'm close, but due to the complexity, close is far from enough. :P

    • @finneggers6612
      @finneggers6612  6 years ago +2

      Avana, well, the code is already written. It just doesn't behave as nicely as the one in the original paper.
      You can find everything in my GitHub repo. Just search for "Luecx" on GitHub and you will find a few projects. One will be called AI or something like that.
      It should be in there. If not, please message me :)

    • @grmmhp
      @grmmhp 6 years ago +1

      Take a look at the book "AI Techniques for Game Programming" by Mat Buckland. It has a big section about genetic algorithms, neural networks and NEAT.

  • @manuelhexe
    @manuelhexe 4 years ago +1

    Very good video. Thank you so much!

  • @EndersupremE
    @EndersupremE 5 years ago +1

    Hey Finn, thanks for the videos. Does this work for any usage of normal neural nets, like image recognition and such?

    • @finneggers6612
      @finneggers6612  5 years ago +2

      In theory yes, but it's not advised. You should look into HyperNEAT for bigger networks (the encoding differs, the rest stays the same).
      But for image recognition I would look into supervised algorithms (simple backpropagation with conv. nets).
      Basically we get a neural network and we train it on data.
      The training process is what changes for each algorithm.
      Supervised (backpropagation etc.) and unsupervised (genetic algorithms etc.) methods have different advantages in terms of convergence etc.
      So for some problems NEAT might be 100 times slower than another algorithm, but for some problems it turns out to be very good.
      NEAT is designed to iteratively explore the search space. What this means is that it can find the smallest network that is best suited for the problem, rather than optimizing some fixed dimensions.

    • @EndersupremE
      @EndersupremE 5 years ago +1

      @@finneggers6612 I'll look into it, thanks!

  • @AWESOMEEVERYDAY101
    @AWESOMEEVERYDAY101 4 years ago +5

    I am a bit late, but this explanation is really good and covers everything. I am currently trying to make one from scratch and you have helped a lot. :D

  • @PerfectorZY
    @PerfectorZY 6 years ago +1

    I would love to learn more about how Selection works! Please come back!

    • @finneggers6612
      @finneggers6612  6 years ago +1

      PerfectorZY, well, basically you first put all your genomes into species. This is actually quite challenging and not easy. (Let's call this part A.)
      Part B is then killing the worst genomes in each species. You can add some rules like: if a species has a very small number of genomes, you won't kill any. The species goes extinct if there are no genomes left (as an example).
      So let's come back to part A:
      To categorize them into species, we need a way to say how "equal" two genomes are. There is a function that does this. I am not 100% sure if I've shown it in the video, but I might have.
      With this function and some threshold value, we can sort them into species.
      Do you have specific questions?

  • @boka_3451
    @boka_3451 4 years ago +2

    Wow, thanks! Very useful and well structured.

  • @WAXenterprises
    @WAXenterprises 1 year ago

    Great overview, thanks for this

  • @pavetr
    @pavetr 4 years ago +2

    Very neat presentation.

  • @ryangurnick
    @ryangurnick 5 years ago +5

    This is wonderful!

  • @ariframadhani525
    @ariframadhani525 6 years ago +1

    Looking forward to the code! :D Thanks for the explanation!

  • @siddharthyanamandra9427
    @siddharthyanamandra9427 6 years ago +2

    Hi! Great video, by the way. I have been looking for an explanation like this. My doubt is: when applying the mutations, do we apply all the types or just one? Thanks.

    • @finneggers6612
      @finneggers6612  6 years ago

      We usually apply all of them, but each with a given probability.
      Like: 10% chance to generate a new node, 20% to generate a new link, etc.

    • @siddharthyanamandra9427
      @siddharthyanamandra9427 6 years ago

      Hi, based on your explanation and help from online resources I have created my own NEAT implementation in Java, but the one thing I am stuck on is the evaluation of the network. I am not able to decide how to perform the feedforward operation on the network when there is looping between nodes. Thanks in advance.

    • @finneggers6612
      @finneggers6612  6 years ago

      I am looking at my code right now and see that it is not optimal.
      Just look at how I described my algorithm and that should work.

    • @PROJECTJoza100
      @PROJECTJoza100 5 years ago

      @@siddharthyanamandra9427 I also implemented it in Java myself, and the way I evaluated the network is by using recursion. I start with the output and check what is connected to that node. Then I call the same method for every node that is connected to the one I'm evaluating, and so on, until it comes to the inputs. Hope this helps.

  • @ProBarokis
    @ProBarokis 2 years ago +1

    Hello. What resources did you use to learn NEAT? Have you only read the original paper, or are there other great sources to learn from?

    • @finneggers6612
      @finneggers6612  2 years ago

      Mainly the original paper, as well as a few Google results. I also scanned through StackExchange for answers.

  • @pal181
    @pal181 3 years ago

    17:58 But why are 3 and 512 the same species?

  • @CityCata
    @CityCata 5 years ago +1

    Hi, great video overall, but I think you made one little mistake. I don't know if anyone pointed it out in the comments, but at crossover you said that if parent 1 was fitter than parent 2, the offspring would be without genes 9 and 10. I read the NEAT paper, and in the explanation it says that both disjoint genes and excess genes are inherited from the more fit parent, so that would mean genes 6 and 7 would also be ignored.

    • @finneggers6612
      @finneggers6612  5 years ago +2

      Cataa M, you are correct. Though there are multiple implementations; I also figured that out, and in the videos I stick with your example.
      There are examples on the web where the disjoint genes of the less fit parent are taken.

  • @florian7162
    @florian7162 6 years ago +2

    Awesome Tutorial! Keep up the work!

  • @teenspirit1
    @teenspirit1 2 years ago

    I love the work and I definitely want to watch through the series. I am especially confused about the "calculating" part, because the random mutations cause cycles in my graphs.
    But why do you have Jordan Peterson in your NEAT playlist? I don't have anything against the guy, but it looks out of place.

    • @finneggers6612
      @finneggers6612  2 years ago

      Yeah, I had that problem too. I solved it by assigning an x coordinate: input nodes have an x value of 0 and output nodes an x value of 1. I only allow new connections from a node with a smaller x to one with a higher x. This solves the problem entirely.
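
The constraint described in this reply amounts to a one-line check before a connection mutation is accepted; the x values below are illustrative:

```python
# A connection must point strictly "rightwards" (smaller x to larger x),
# which rules out cycles by construction.
def can_connect(x_from, x_to):
    return x_from < x_to

def split_x(x_from, x_to):
    # a node inserted into a connection sits halfway between its endpoints
    return (x_from + x_to) / 2

assert can_connect(0.0, 1.0)        # input -> output is allowed
assert not can_connect(0.9, 0.1)    # a backwards edge is rejected
assert not can_connect(0.5, 0.5)    # equal x would still permit a cycle
assert split_x(0.0, 1.0) == 0.5     # new nodes keep the ordering intact
```

The price of this trick is that recurrent connections are impossible; implementations that want recurrence have to evaluate the graph differently instead.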

  • @nightfall2863
    @nightfall2863 6 years ago +1

    Keep this up; this video was more useful than the other ones.

  • @bernardoolisan1010
    @bernardoolisan1010 2 years ago

    I have a question: how does the encoding help us? Does the NEAT encoding scheme only help us visualize a genome in genetic form, or is there another use? Can you explain it to me?

    • @finneggers6612
      @finneggers6612  2 years ago +1

      The encoding is the principle by which genomes are compared, which then serves for speciation.

  • @Infinity7111
    @Infinity7111 5 years ago +2

    Great presentation! Is the next video coming? :)

    • @finneggers6612
      @finneggers6612  5 years ago +1

      Perhaps. If I have the time :)

    • @Infinity7111
      @Infinity7111 5 years ago +1

      @@finneggers6612 I subbed to see if it comes out :)

  • @o_2731
    @o_2731 1 year ago

    Thank you very much for this introduction; it was very helpful.

  • @PhilosophicalMachine
    @PhilosophicalMachine 5 years ago +1

    Finn, What the fuck bro? How can you make an AWESOME video like that, huh?

  • @aligranett6355
    @aligranett6355 9 months ago

    Does anyone know why in the last slide he killed 6 with fitness 442, but did not kill 5 and 7?

  • @GercioP
    @GercioP 4 years ago +1

    Fantastic explanation! Yes, the toughest to learn was the Selection algorithm :)

  • @dgdffgdf
    @dgdffgdf 4 years ago +1

    Hi, first I would like to say that I love your videos and really appreciate the basic, dumb-proof explanation of NEAT and GAs overall. :)) Do you think a NEAT ANN is suitable for stock-market prediction? I'm having a few troubles defining the inputs and the fitness function for such an application.

    • @finneggers6612
      @finneggers6612  4 years ago +3

      Ruslan Mykulyn, hmm, I played a little bit with stock-market prediction and can tell you the following: I didn't succeed using non-recurrent networks. The problem is that neural networks usually don't understand time. You can use LSTM networks for this purpose, and this worked quite well. The problem here is that I don't see a good way to combine this with NEAT. Recurrent networks have multiple advantages in this field and you should probably look into that. Sadly, it is not easy to implement them unless you are using some library like Keras.

    • @dgdffgdf
      @dgdffgdf 4 года назад +1

      Finn Eggers wow ty, for the swift response. Well, that's interesting, because one of the main reasons why I decided to dive deeper into those GA waters was because I read this paper pdfs.semanticscholar.org/b036/926d380452a93de2a8c46c1f7fbf50c12487.pdf where they achieve the same or better results using NEAT instead of a perceptron ANN. And also I watched some RUclips channel with a guy performing BTC/USD prediction, also using GA (gonna link it too). However this guy didn't present his code as open source... Anyways, I will continue to educate myself in this direction regardless of the final application of the knowledge :)) Btw, did you manage to make stock-market predictions anywhere close to an actual real-world application? Or was it just some sort of POC, which was not profitable?

    • @dgdffgdf
      @dgdffgdf 4 года назад +1

      Here's the guy. ruclips.net/video/R1snBs5tyY8/видео.html

  • @AK-km5tj
    @AK-km5tj 5 лет назад +1

    It was the first video I actually understood on this topic!

  • @samuelcrawford8055
    @samuelcrawford8055 Год назад

    Sorry, I'm having a little trouble understanding how to ensure that the nodes have consistent ids. Should the function that creates them take as arguments the ids of the input and output nodes of the connection it is splitting? What about more complex structures with many hidden nodes that interact? But generally, great video. Definitely earned a like and subscribe!

    • @samuelcrawford8055
      @samuelcrawford8055 Год назад

      Doesn't matter, I actually solved the issue. I'm currently implementing a simple version of a neural network that can be used for NEAT in Python, if anyone would want to take a look

  • @taigofr1
    @taigofr1 6 лет назад +2

    Really liked this video! Amazing! Very enlightening :D Keep doing videos

    • @taigofr1
      @taigofr1 5 лет назад +1

      @@sankalpbhamare5885 Thanks

    • @PROJECTJoza100
      @PROJECTJoza100 5 лет назад +1

      @@sankalpbhamare5885 lol so much advertising...

  • @d6853
    @d6853 3 года назад

    I understand how all these networks work but I’m finding it really difficult knowing how to go about programming it.

  • @gauravbhandari8089
    @gauravbhandari8089 5 лет назад +1

    Hello.... we are waiting for the next video on NEAT... very good explanation

    • @sankalpbhamare3759
      @sankalpbhamare3759 5 лет назад +1

      Might want to see this?
      ruclips.net/video/D0XDldwCZ4E/видео.html

  • @thegaminghobo4693
    @thegaminghobo4693 3 года назад

    I don't see the point of the "identification numbers" on connections? If you know they have the same nodes why does it matter?

    • @finneggers6612
      @finneggers6612  3 года назад +1

      you are partially correct. obviously they matter when doing crossover between two genomes. In my latest implementation I do have a function which maps two node ids to a connection innovation number
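      Such a mapping might be sketched like this (hypothetical names, not Finn's actual implementation): a shared registry that hands out one innovation number per (from, to) node pair, so the same structural change always gets the same number across the population:

```python
class InnovationTracker:
    """Global registry: the same (from_node, to_node) pair
    always yields the same connection innovation number."""
    def __init__(self):
        self._table = {}

    def get(self, from_id, to_id):
        key = (from_id, to_id)
        if key not in self._table:
            self._table[key] = len(self._table)  # next free number
        return self._table[key]
```

      With one tracker shared by all genomes, two genomes that independently add the same connection end up comparable during crossover.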

    • @thegaminghobo4693
      @thegaminghobo4693 3 года назад

      @@finneggers6612 Oh ok thank you for quick reply, that’s how I was going to implement it as well. Apart from that 1 question your video was seriously great. Best explanation I’ve seen so far.

  • @smjonas8616
    @smjonas8616 6 лет назад

    Which probabilities did you choose for the different mutations?

  • @Elzelgator
    @Elzelgator 5 лет назад

    OK, another question: how do they cross over their innovation numbers? In other words, which parent will give the weight to its offspring, since both parents have different weights on the same innovation number?

    • @finneggers6612
      @finneggers6612  5 лет назад

      I think the fitter parent would give it to the child.
      But I just checked my code and it seems like I implemented it so that a connection with the same innovation number shares its weight with every genome.

    • @Elzelgator
      @Elzelgator 5 лет назад

      @@finneggers6612 gekkoquant.com/2016/04/02/evolving-neural-networks-through-augmenting-topologies-part-2-of-4/
      Check this link, my friend. In the second part, innovation numbers have different weights. I am confused :D Maybe it is because after some time the genes are mutated and have different innovation numbers; that might be the case.

  • @moddingdudes7055
    @moddingdudes7055 5 лет назад

    I have a question about when you're mutating a node. You said "there can be no other scenario like it" but there's a problem. If you have weights 1-3 and 2-4, and you put a node on 1-3 in one offspring and on 2-4 in another, the innovation numbers will be the same and cause problems, right?

    • @moddingdudes7055
      @moddingdudes7055 5 лет назад

      Just realized I can run it through with a for loop to check against my already existing nodes. What did you mean by "this is a problem with node" when you were talking about mutating a new node onto a weight?

    • @finneggers6612
      @finneggers6612  5 лет назад

      When splitting a connection into two new ones, the node in the middle will most likely get a new innovation number. I did not quite understand your scenario; maybe I didn't explain it well enough.
      So the new node gets a new innovation number and the two new connections as well. So everything is alright
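      The split described here can be sketched as follows (hypothetical `Connection` type and `innovations` dict, not the actual implementation). The old connection is disabled, the incoming connection gets weight 1.0 and the outgoing one keeps the old weight, so the network's behaviour is initially unchanged:

```python
from dataclasses import dataclass

@dataclass
class Connection:
    from_id: int
    to_id: int
    weight: float
    enabled: bool
    innovation: int

def split_connection(genome, conn, new_node_id, innovations):
    """Node mutation: disable `conn` and bridge it with a new node.
    `innovations` maps (from, to) -> innovation number and is shared
    by the whole population, so identical splits get identical numbers."""
    conn.enabled = False
    def number(key):
        return innovations.setdefault(key, len(innovations))
    # Weight 1.0 into the new node, old weight out of it.
    genome.append(Connection(conn.from_id, new_node_id, 1.0, True,
                             number((conn.from_id, new_node_id))))
    genome.append(Connection(new_node_id, conn.to_id, conn.weight, True,
                             number((new_node_id, conn.to_id))))
```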

    • @moddingdudes7055
      @moddingdudes7055 5 лет назад

      Finn Eggers thank you I understand it now.

    • @moddingdudes7055
      @moddingdudes7055 5 лет назад

      Finn Eggers I also have one last question, if you don't mind. When you're breeding and encoding and such, people have said different things about the NEAT paper. I always thought that if there was a disjoint gene and the fitter parent had it, the offspring would inherit it, whereas if the less fit parent had the gene, the offspring would not inherit it. What's the true answer to the question?

    • @finneggers6612
      @finneggers6612  5 лет назад

      moddingdudes that’s absolutely correct that way. Disjoint genes are taken from only the fitter parent.
      It gets interesting für excess genes. Some say that excess genes aren’t taken at any time. I found different sources on that.
      For the disjoint genes, that’s absolutely correct.
      I am not sure if I said something different in the video. Hope I didn’t :)
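      The rule discussed here, as a small sketch (hypothetical `Gene` tuple; one common reading of the paper): matching genes are inherited randomly from either parent, while disjoint and excess genes come from the fitter parent only.

```python
import random
from collections import namedtuple

Gene = namedtuple("Gene", "innovation weight")

def crossover(fitter, weaker):
    """Child has the fitter parent's structure; matching genes
    are picked randomly, the rest come from the fitter parent."""
    weak = {g.innovation: g for g in weaker}
    child = []
    for gene in fitter:
        if gene.innovation in weak and random.random() < 0.5:
            child.append(weak[gene.innovation])  # matching: from weaker
        else:
            child.append(gene)  # matching (other half) or disjoint/excess
    return child
```

      Note that iterating over the fitter parent automatically drops the weaker parent's disjoint and excess genes.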

  • @sebimoe
    @sebimoe 5 лет назад

    During grouping of genomes into species, is it a problem if a given genome would match with more than one species?

    • @finneggers6612
      @finneggers6612  5 лет назад

      No. the first one that matches is taken I believe.
      Like, when you iterate through all the species, you select the first one that would work
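      That assignment loop might look like this (a sketch with hypothetical names; `distance` would be the compatibility function and `threshold` the speciation threshold):

```python
def speciate(genomes, representatives, distance, threshold):
    """Put each genome into the first species whose representative is
    within `threshold`; if none matches, it founds a new species."""
    species = [[] for _ in representatives]
    for genome in genomes:
        for i, rep in enumerate(representatives):
            if distance(genome, rep) < threshold:
                species[i].append(genome)
                break
        else:  # no existing species matched
            representatives.append(genome)
            species.append([genome])
    return species
```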

    • @sebimoe
      @sebimoe 5 лет назад

      @@finneggers6612 Thanks for a quick answer, I have some other questions but I see you have videos on implementation, I will check these first :) I'm trying to make use of cleverness from NEAT for real time evolution on individual basis

    • @finneggers6612
      @finneggers6612  5 лет назад

      @@sebimoe you are welcome! Feel free to ask any questions that have not been answered.

  • @prokilz
    @prokilz 4 года назад

    Could you please type out a transcript for the video? I am hard of hearing, and a transcript would help me read along and understand. Thanks for making the video!

  • @3mzodiactherapy59
    @3mzodiactherapy59 5 лет назад

    I lost it at the difference between disjoint and excess. Where can I find a good example?

    • @finneggers6612
      @finneggers6612  5 лет назад

      Disjoint genes are those that are not shared but lie within the range of the other genome's innovation numbers; excess genes are those that are not shared and lie beyond the end of the other genome.
      You can read the original paper. It's explained there.
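      A tiny sketch of that distinction (illustrative only): given two genomes' innovation numbers, genes present in only one genome are disjoint if they fall inside the other genome's range, and excess if they lie beyond it.

```python
def classify(innovations_a, innovations_b):
    """Split genes present in only one genome into disjoint genes
    (inside the other genome's innovation range) and excess genes
    (beyond the end of the shorter genome)."""
    cutoff = min(max(innovations_a), max(innovations_b))
    only_one = set(innovations_a) ^ set(innovations_b)
    disjoint = {i for i in only_one if i <= cutoff}
    return disjoint, only_one - disjoint
```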

  • @olli3686
    @olli3686 3 года назад

    Do you have a discord to discuss NEAT?

  • @asdf154
    @asdf154 6 лет назад

    Where are the next videos?

    • @finneggers6612
      @finneggers6612  6 лет назад

      I did not yet have the time for that. Hopefully will do soon

  • @Elzelgator
    @Elzelgator 6 лет назад

    How do you Feed forward a input?

    • @finneggers6612
      @finneggers6612  6 лет назад

      Good question.
      I added a few rules.
      Each node has an x value (as if you were drawing the network graphically).
      The input nodes on the left have an x value of 0 and the output nodes have an x value of 1.
      Everything in between obviously lies between 0 and 1.
      With this we can solve 2 problems:
      A: We won't have cyclic loops in our calculations (no recursion)
      B: We can easily feed the data forward
      I will explain both:
      To A: If we only allow new connections from a node with lower x to a node with higher x, no data will ever flow backwards. This means there cannot be loops.
      To B: You can sort all the nodes by their x value and store them in an ArrayList. Then you simply iterate through that list, and by the time you compute a node's output, all of its predecessor nodes have already been calculated.
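      The rule under "To B" can be sketched like this (illustrative Python rather than the Java/ArrayList version; the `Conn` tuple and the ReLU activation are assumptions, not the actual code):

```python
from collections import namedtuple

# Hypothetical connection gene: source, target, weight, enabled flag.
Conn = namedtuple("Conn", "from_id to_id weight enabled")

def feed_forward(inputs, node_x, connections, activate=lambda v: max(0.0, v)):
    """node_x maps node id -> x position (inputs at 0.0, outputs at 1.0).
    Because connections only run from lower x to higher x, evaluating
    nodes in order of x guarantees every predecessor is computed first."""
    values = dict(inputs)  # node id -> value, pre-filled for input nodes
    for node in sorted(node_x, key=node_x.get):
        if node in values:
            continue  # input node, value already given
        total = sum(values[c.from_id] * c.weight
                    for c in connections
                    if c.to_id == node and c.enabled)
        values[node] = activate(total)
    return values
```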

    • @Elzelgator
      @Elzelgator 6 лет назад

      @@finneggers6612 I am trying to build a real-time triangle object which tries to follow the mouse. I believe reinforcement learning would be better, but I am not sure how to apply it. So I am writing Python code for NEAT from scratch. But since it is a real-time application, it is difficult to feed forward all the networks within the available time.

    • @finneggers6612
      @finneggers6612  6 лет назад

      Could you explain what you are trying to do? calculations in neural networks are very fast, dont worry about that.@@Elzelgator

    • @Elzelgator
      @Elzelgator 6 лет назад

      @@finneggers6612 I am trying to write Python code with the pygame library. In my code there are objects which move using a neural network. The neural network has 2 inputs, which get the x and y distance to the mouse, and 2 outputs: turn right or left, and speed. With this setup I will try to make them follow the mouse on the screen. But it might be difficult to feed every neural network in a real-time application like this. I implemented the 3 classes (Species class, NeuralNetwork class and InnovationNumber class) from your explanation. The InnovationNumber class has the mutate functions, and the Species class has an InnovationNumber list that remembers innovation numbers. I wrote this code from your explanation alone so far :) but feed forward is a problem. I will try to do it tonight based on your comment, or maybe do further research.

    • @finneggers6612
      @finneggers6612  6 лет назад

      You shouldnt be worried about performance. I've used my NEAT code on a realtime application with 3000 clients. If you have any questions, feel free to ask :)@@Elzelgator

  • @hulohai
    @hulohai 2 года назад

    Thank you!

  • @omkarbhale442
    @omkarbhale442 4 года назад

    I'm not understanding anything 😐 Actually, I just started programming and haven't even implemented genetic algorithms

  • @LoveWapping
    @LoveWapping 6 лет назад +6

    One of the better videos on this subject. I think maybe slow your speech down a little, avoid repetition, use higher-res graphics and a proper highlighter. Thanks for producing this!

  • @anthonyh694
    @anthonyh694 2 года назад

    very good explanation

  • @mr.peanutbutter303
    @mr.peanutbutter303 5 лет назад

    still waiting for you coding tutorials! It's almost been a year

  • @CompletelyRandomUser
    @CompletelyRandomUser 6 лет назад +3

    I understood about 60% of the speech; the rest wasn't clear because of your accent. Anyway, pretty good explanation. Now I know more about NEAT. For the next video I'd suggest using higher-resolution images for the presentation. Thank you very much for talking at a moderate speed, not 200 words per second like rappers. For a complicated topic like this there is no need to rush. Thanks!

  • @greenappleFF1
    @greenappleFF1 5 лет назад +1

    Nice presentation👌🏻

  • @tomborninger1752
    @tomborninger1752 6 лет назад +1

    Do you live in Germany?

    • @finneggers6612
      @finneggers6612  6 лет назад

      yes

    • @tomborninger1752
      @tomborninger1752 6 лет назад

      And is it your first language?
      Because I have been living in Germany too.

    • @finneggers6612
      @finneggers6612  6 лет назад

      Tom Bőrninger I was born in Germany. So yeah, it’s my first language. I learned English and a bit of French

  • @glugt9240
    @glugt9240 6 лет назад +1

    Do you have the code for your implementation online somewhere? I'm working on a school project and trying to figure out how NEAT works.

    • @finneggers6612
      @finneggers6612  6 лет назад

      g lugt yeah. It should be in one of my github projects. Search for Luecx on github. If you don’t find it, I can send you the link later

  • @metlov
    @metlov 7 месяцев назад

    Why not pass the average weight of the parents to the offspring? Doesn't it improve network diversity compared to copying the gene of only one parent?

  • @HarryBGamer2570
    @HarryBGamer2570 2 года назад

    well, that's neat

  • @ArMeD217
    @ArMeD217 2 года назад

    The presentation was good, but I believe your explanation suffered from the lack of video editing. Some cuts could have made it all clearer and shorter.

  • @gijsvermeulen1685
    @gijsvermeulen1685 Год назад

    Thanks a lot!

  • @chandler5587
    @chandler5587 6 лет назад

    I joined your notification group too!

  • @Dalroc
    @Dalroc Год назад

    Everything is great except for the part about speciation. And it's not because of the animation!
    You're trying to be specific while also referring to future videos for the details you're being specific about. Don't repeat it five times. Just say that you categorize the genomes by a certain method that you'll show later, and you'd skip a lot of confusion, and time!

  • @memoai7276
    @memoai7276 5 лет назад

    Fantastic!

  • @jotaframo
    @jotaframo 4 года назад

    thxs m8, this really helped!

  • @finn9233
    @finn9233 5 лет назад

    Great explanation.
    A pop filter for the mic would be a good idea.

    • @finneggers6612
      @finneggers6612  5 лет назад

      I bought myself a new mic shortly after this video.

  • @nurjamil6381
    @nurjamil6381 6 лет назад

    if you want more viewers you should use "NeuroEvolution of Augmenting Topologies" instead of just "NEAT", love your videos btw

  • @HE-ko8fp
    @HE-ko8fp 6 лет назад

    I am waiting for the next video, don't stop (Y)

  • @amir3645
    @amir3645 2 года назад

    thanks man

  • @stashmm
    @stashmm 2 года назад

    thanks

  • @timmnicolaizik6697
    @timmnicolaizik6697 5 лет назад

    Are you German? ^^

  • @Nih1l__
    @Nih1l__ 6 лет назад

    Pls be alive love u

    • @finneggers6612
      @finneggers6612  6 лет назад

      am alive, love u 2.

    • @finneggers6612
      @finneggers6612  6 лет назад

      What do you want me to do? :) Videos about the implementation? Videos about other algorithms? Chess engine? applications?
      I am kinda busy right now and starting the series about the implementation of NEAT will take at least 8 videos or so, and I can't guarantee that it works 100% perfectly. I've found a small problem in my implementation. First, I need to fix that one, and then I need to find enough time to make those 8 videos.
      However, my full code is uploaded. Do you need the github-repo?

    • @Nih1l__
      @Nih1l__ 6 лет назад

      @@finneggers6612 ooh chess engine sounds good

  • @spassthd8406
    @spassthd8406 5 лет назад +1

    Can you please make your videos in German :P

    • @dioelric3032
      @dioelric3032 5 лет назад

      SpasstHD I'm learning German in school, so maybe I can translate it into German as a project. Respond if you agree

  • @johanneszwilling
    @johanneszwilling 5 лет назад

    🤓Thanks!

  • @SintaxErorr
    @SintaxErorr 5 лет назад

    you also have apple bottom genes

  • @KaletheQuick
    @KaletheQuick 5 лет назад

    What? The ai dreams?

  • @sonoda7723
    @sonoda7723 6 лет назад

    Now I look forward to the next video, thanks for your explanation

    • @sankalpbhamare3759
      @sankalpbhamare3759 5 лет назад

      See this video on NEAT
      ruclips.net/video/D0XDldwCZ4E/видео.html

  • @kraken2844
    @kraken2844 2 года назад

    I'm sorry, it's so hard to listen; your thoughts are all over the place. Please prepare a script

    • @kraken2844
      @kraken2844 2 года назад

      every time you say "um" I completely lose focus