I smell an underrated channel.
You are literally the savior of my science fair project, thank you so much.
The Brain function can be heavily simplified
You can put the two edge cases outside of the loop, calling layers[0] and layers[layers.length - 1], and having the for loop start at i = 1 and run while i < layers.length - 1
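In Python terms, the simplified loop might look like this sketch (the Layer here is a minimal stand-in whose forward just negates its inputs, so the control flow runs on its own; the real class applies weights and biases):

```python
class Layer:
    """Minimal stand-in layer so the loop below runs on its own."""
    def __init__(self):
        self.nodeArray = []

    def forward(self, inputs):
        # stand-in: negate the inputs so activation has something to clamp
        self.nodeArray = [-v for v in inputs]

    def activation(self):
        # ReLU: clamp negatives to zero
        self.nodeArray = [max(0.0, v) for v in self.nodeArray]


def brain(layers, inputs):
    # first layer: handled outside the loop
    layers[0].forward(inputs)
    layers[0].activation()
    # hidden layers: forward + activation
    for i in range(1, len(layers) - 1):
        layers[i].forward(layers[i - 1].nodeArray)
        layers[i].activation()
    # output layer: forward only, no activation (assumes at least 2 layers)
    layers[-1].forward(layers[-2].nodeArray)
    return layers[-1].nodeArray
```

With three stand-in layers, brain() touches the first, middle, and last branch exactly once each, with no if/elif inside the loop.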
Hey! I'm wondering when Part 3 is coming out. Can't wait to see it!
I've seen a lot of videos about Neural Networks and yours is the one that explains it in an understandable manner (Or maybe the 10th time is the charm)
I'm curious to see the next one
I was following along in python. Here's the code if anyone wants it. I didn't test it though because I don't really know how to use it. Tutorial was too short ):
networkShape = [2, 4, 4, 2]

class Layer(object):
    def __init__(self, n_inputs, n_nodes):
        self.n_nodes = n_nodes
        self.n_inputs = n_inputs
        # plain nested lists instead of C#'s 2D array
        self.weightsArray = [[0.0] * n_inputs for _ in range(n_nodes)]
        self.biasesArray = [0.0] * n_nodes
        self.nodeArray = [0.0] * n_nodes

    def forward(self, inputsArray):
        self.nodeArray = [0.0] * self.n_nodes
        for i in range(self.n_nodes):
            # Sum of the weights times inputs
            for j in range(self.n_inputs):
                self.nodeArray[i] += self.weightsArray[i][j] * inputsArray[j]
            # Add the bias
            self.nodeArray[i] += self.biasesArray[i]

    def activation(self):
        # ReLU: negative node values become 0
        for i in range(self.n_nodes):
            if self.nodeArray[i] < 0:
                self.nodeArray[i] = 0

def awake():
    global layers
    layers = []
    for i in range(len(networkShape) - 1):
        layers.append(Layer(networkShape[i], networkShape[i + 1]))

def brain(inputs):
    for i in range(len(layers)):
        if i == 0:
            layers[i].forward(inputs)
            layers[i].activation()
        elif i == len(layers) - 1:
            layers[i].forward(layers[i - 1].nodeArray)
        else:
            layers[i].forward(layers[i - 1].nodeArray)
            layers[i].activation()
    return layers[-1].nodeArray
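To actually run the port, here's a condensed, self-contained restatement with randomly seeded weights (the random init is my assumption, the video hasn't assigned values yet; brain takes layers as a parameter instead of a global):

```python
import random

networkShape = [2, 4, 4, 2]

class Layer:
    def __init__(self, n_inputs, n_nodes):
        self.n_inputs, self.n_nodes = n_inputs, n_nodes
        # random weights are an assumption; part 3 will cover the real scheme
        self.weightsArray = [[random.uniform(-1.0, 1.0) for _ in range(n_inputs)]
                             for _ in range(n_nodes)]
        self.biasesArray = [0.0] * n_nodes
        self.nodeArray = [0.0] * n_nodes

    def forward(self, inputsArray):
        # weighted sum of inputs plus bias, one entry per node
        self.nodeArray = [
            sum(w * x for w, x in zip(row, inputsArray)) + b
            for row, b in zip(self.weightsArray, self.biasesArray)
        ]

    def activation(self):
        # ReLU
        self.nodeArray = [max(0.0, v) for v in self.nodeArray]

def brain(layers, inputs):
    layers[0].forward(inputs)
    layers[0].activation()
    for i in range(1, len(layers)):
        layers[i].forward(layers[i - 1].nodeArray)
        if i < len(layers) - 1:  # output layer skips activation
            layers[i].activation()
    return layers[-1].nodeArray

random.seed(0)
layers = [Layer(networkShape[i], networkShape[i + 1])
          for i in range(len(networkShape) - 1)]
outputs = brain(layers, [1.0, 0.5])
print(outputs)  # two raw output values
```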
sick
Thanks!
why python?
A great series. Not only for content, but well edited too. Cheers John.
Let's gooooo, part 3 please!
Great video. Thanks for the effort. I'm looking forward to seeing part 3. Cheers,
we need the continuation, really
Nice video! In general I'd say a neural network is still a black box even if you built it and know the values of all the nodes, weights, biases and layers.
Hopefully you'll finish this eventually! I enjoyed the last two videos.
thank you so much! this is exactly what i need for my uni project
best video ive ever seen not gonna lie
That was an excellent, practical video on neural networks! As someone just beginning to dig into this subject, I love it!
Also, clean and neat code. Enjoyable to read (although I'm not a fan of nesting classes).
Also, your channel is criminally underrated.
Great video! I'm really looking forward to seeing how the network will be trained
just amazing 🤩🤩
isn't the 'layer' in layer[ i ] = new Layer( networkShape[ i ], networkShape[ i + 1 ] ); supposed to be 'layers' ?
Hey! I wonder when episode 3 will come out?
Hey! I’m almost done with it, hoping to have it out in less than a week!
@@JohnnyCodesCan’t wait for it!!!
Hi, is part 3 coming out?
Please Upload Part 3
part 3 how to set up in unity?
amazing !!! waiting the training part
O man thank you! Such a gem of content out here :) I'm a Swift dev, the code is not hard to grasp
btw is part 3 coming up or nah?
@@TheZazatv Yeah, been really busy with work and starting my own company (ironically, it's a video editing company). I am hoping to finish it this week, but I guess that has always been the goal lol. But I am hoping this week will be the week
@@JohnnyCodes oh nice we’ll be patiently waiting. And congrats on launching ur company 🫰
@@JohnnyCodes Still waiting :D
amazing
very useful John
Part 3?
:') Episode 3 where are you... this is the new Half Life 3 for me. Am I wrong in thinking the weights and biases were never given values here? should those be made in this class too?
I believe you are correct, the weights and biases were not given values yet; they are going to be randomly generated and then randomly modified each time the creatures reproduce. This is going to be in part 3............ someday...
But luckily someone lifted the curse and I can now finish part 3 lmao, I responded to this amazing comment today by W_Shorts: ruclips.net/video/Ifx3kX5VQh4/видео.html&lc=UgzIeWiWP2lb2gqR8_h4AaABAg.9lUU-CvLP8G9od6BhfRwBh
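A rough sketch of what random init plus mutate-on-reproduce could look like in the Python port, for the impatient (this is my guess at part 3, not the actual code; the rate and ranges are made up):

```python
import random

def random_init(n_inputs, n_nodes):
    # hypothetical starting values, uniform in [-1, 1]
    weights = [[random.uniform(-1.0, 1.0) for _ in range(n_inputs)]
               for _ in range(n_nodes)]
    biases = [random.uniform(-1.0, 1.0) for _ in range(n_nodes)]
    return weights, biases

def mutate(weights, biases, rate=0.1, amount=0.5):
    # copy the parent's values, nudging each one with probability `rate`
    new_w = [[w + random.uniform(-amount, amount) if random.random() < rate else w
              for w in row] for row in weights]
    new_b = [b + random.uniform(-amount, amount) if random.random() < rate else b
             for b in biases]
    return new_w, new_b

parent_w, parent_b = random_init(2, 4)        # a layer with 2 inputs, 4 nodes
child_w, child_b = mutate(parent_w, parent_b)  # a slightly different child
```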
drop the training video right now!
this is a much better way to do the forward pass:
public void Forward(float[] inputsArray) {
    for (int i = 0; i < n_nodes; i++) {
        nodeArray[i] = biasesArray[i];
        for (int j = 0; j < n_inputs; j++) {
            nodeArray[i] += weightsArray[i, j] * inputsArray[j];
        }
    }
}
this way you don't create a new array every time.
also the opening "{" is in the correct location.
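The same no-new-array idea carries straight over to the Python port: start each node at its bias and accumulate in place (a sketch with zero-initialized weights; only forward changes):

```python
class Layer:
    def __init__(self, n_inputs, n_nodes):
        self.n_inputs, self.n_nodes = n_inputs, n_nodes
        self.weightsArray = [[0.0] * n_inputs for _ in range(n_nodes)]
        self.biasesArray = [0.0] * n_nodes
        self.nodeArray = [0.0] * n_nodes

    def forward(self, inputsArray):
        for i in range(self.n_nodes):
            # start at the bias instead of rebuilding the list each call
            self.nodeArray[i] = self.biasesArray[i]
            for j in range(self.n_inputs):
                self.nodeArray[i] += self.weightsArray[i][j] * inputsArray[j]
```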
I wonder about the shape of the network. I mean, how many hidden layers and nodes should we use for each problem?
And also wondering about the third part.
very good :)
please, do the third part
if the activation function always happens right after forward, why not just combine them?
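Combining them works as long as the output layer can opt out, since it skips the activation. One way (my own variation, not from the video) is a flag on forward:

```python
class Layer:
    def __init__(self, n_inputs, n_nodes):
        self.n_inputs, self.n_nodes = n_inputs, n_nodes
        self.weightsArray = [[0.0] * n_inputs for _ in range(n_nodes)]
        self.biasesArray = [0.0] * n_nodes
        self.nodeArray = [0.0] * n_nodes

    def forward(self, inputsArray, activate=True):
        for i in range(self.n_nodes):
            total = self.biasesArray[i]
            for j in range(self.n_inputs):
                total += self.weightsArray[i][j] * inputsArray[j]
            # ReLU folded in; the output layer passes activate=False
            self.nodeArray[i] = max(0.0, total) if activate else total

def brain(layers, inputs):
    values = inputs
    for i, layer in enumerate(layers):
        layer.forward(values, activate=(i < len(layers) - 1))
        values = layer.nodeArray
    return values
```

This also collapses the three-branch if/elif/else in brain into a single loop body.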
i need that next video i have no idea what im doing :(
i have this code and i think i understood how it works after staring at it for an eternity BUT how can i make use of it now....
Good tutorial, but as others have pointed out there is a compile error both in the video and the github code in Awake(): layer in the for loop should be layers. Makes me wonder if it was ever tested?
Yeah I am not sure how that got in there. The code definitely works because all of the clips of the training are created using this code so I must have done some refactoring to improve the names of variables for the video and had a typo. Will fix that soon
Thank you! I would have kissed you for that great explanation!
what
This is useless, you literally just implemented matrix dot in C#. Most of the difficulty in making a neural network is just backprop, jfc
Clearly this is a tutorial for beginners; there is a part 3 coming up, and the most complicated things are built on top of simple concepts, like matrix dot products 🤷♂ When you make a video that can explain backpropagation in 17 minutes to beginners, please share it with us. Cheers,
The true useless entity here is you, my dear sir.