- Videos: 4
- Views: 47,596
Semper Zero
Joined 13 Jul 2021
Neural Network Visualization - Learning
The tool offers a new interpretation of the feed-forward mechanism and gradient descent. I wanted to address questions such as: why does introducing nonlinearities through activation functions lead to input space warping (e.g., spirals)? How can we visualize the matrix multiplications within the network, as these represent linear projection transformations into smaller hyper-spaces? What happens when the loss function has a plateau, but the network configuration continues to evolve? Is the function stuck in a local minimum? How can we interpret gradient descent steps (the movement of projection vectors, represented by matrix lines between layers, in the optimal direction for the desired s...
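To tie the description above to something concrete, here is a minimal sketch (my own illustration, not the author's implementation; the layer sizes are assumptions) of a feed-forward pass written so that each matrix multiplication is an explicit linear projection and the activation supplies the nonlinear warping:

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(z):
        return np.maximum(z, 0.0)

    # Assumed layer sizes for illustration: 784 -> 64 -> 16 -> 10
    sizes = [784, 64, 16, 10]
    weights = [rng.normal(0, 0.1, (n_out, n_in)) for n_in, n_out in zip(sizes, sizes[1:])]
    biases = [np.zeros(n_out) for n_out in sizes[1:]]

    def forward(x):
        # Each W @ x is a linear projection into the next layer's (smaller) space;
        # the activation then warps that space nonlinearly.
        activations = [x]
        for W, b in zip(weights, biases):
            x = relu(W @ x + b)
            activations.append(x)
        return activations  # one vector per layer, ready for visualization

    acts = forward(rng.normal(size=784))
    print([a.shape for a in acts])  # [(784,), (64,), (16,), (10,)]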
Views: 2,005
Videos
Analysis of Back Pain Using Biomechanics and Artificial Intelligence (ML)
873 views · 1 year ago
Worked on an artificial intelligence and data analysis research project correlating back pain with posture, in collaboration with Rectify and Omdena. I designed and implemented the entire data analysis, modeling, and visualization pipeline, together with the ML algorithm training and hyperparameter tuning. A paper detailing the algorithms and the final results will be published soon. Rectify p...
Visualizing the NEAT Algorithm - 2. How the AI Works, Inputs and the Neural Network
10K views · 3 years ago
Visualizing the NEAT Algorithm - 1. Evolution
35K views · 3 years ago
The purpose of this video is to give a visually appealing intuition of how a neural network can evolve and learn. I will explain the intricacies of the algorithm in further videos. The input layer consists of sensors that detect the closest walls and tail segments in the 8 cardinal and diagonal directions (N, S, NW, etc.), together with the fruit's direction and distance relative to the head, its size an...
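As a rough reconstruction of those sensor inputs (my own sketch, not the video's actual code; the board size, direction ordering, and normalization are assumptions), each direction can be scanned from the head until it leaves the board:

    GRID = 13  # assumed board size (13 x 13 = 169 cells)
    DIRS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]  # N, S, W, E, NW, NE, SW, SE

    def sense(head, tail_cells):
        # For each of the 8 directions, walk from the head until leaving the board,
        # recording the wall distance and the first tail segment seen (0 if none).
        inputs = []
        for dy, dx in DIRS:
            y, x = head
            dist, tail_dist = 0, 0
            while True:
                y, x = y + dy, x + dx
                dist += 1
                if not (0 <= y < GRID and 0 <= x < GRID):
                    break  # hit a wall
                if (y, x) in tail_cells and tail_dist == 0:
                    tail_dist = dist
            inputs += [dist / GRID, tail_dist / GRID]
        return inputs

    print(len(sense((6, 6), {(6, 7)})))  # 16 values; fruit direction/distance would be appended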
Perfect 👌🏻
Very enlightening. BTW, what are the three axes exactly? What dimensionality reduction method do you use?
Look like anything familiar?
What does each point represent?
The pixels of a handwritten digit image, reduced dimensionally through the layers of the neural network, and then reduced to 3 dims with PCA
@@semperzero Have you tried t-SNE or UMAP? PCA tends to overestimate and make the results biased
@@peasant12345 t-SNE or UMAP would not work for this approach, because I am looking at the way in which the hyper-space is bending. The linearity of PCA keeps that bending somewhat true, while the nonlinearity of t-SNE or UMAP would compromise it entirely
@@semperzero thanks for explaining. I got it
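To illustrate the approach discussed in this thread, here is a minimal sketch (mine, with stand-in random data; the array shapes are assumptions) of projecting one layer's activations down to 3 dimensions with linear PCA:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    layer_activations = rng.normal(size=(2000, 64))  # stand-in for one hidden layer's outputs

    points_3d = PCA(n_components=3).fit_transform(layer_activations)
    print(points_3d.shape)  # (2000, 3): one 3-D point per input image, ready to plot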
Is there a notebook or write up to follow?
Hi, this is amazing visualization work. By the way, did you draw any conclusions from the animation? I just wonder what we can get from the beautiful animation. Thanks in advance! Good job!
Amazing... What visualization tool is this?
I used Plotly for the interactive 3D graphs at the beginning and the end. For the animations in the middle I built a very simple graphics engine as a wrapper over Plotly.
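A minimal Plotly example in the spirit of this reply (an assumption of mine, not the author's wrapper), showing an interactive 3-D scatter of reduced points:

    import numpy as np
    import plotly.graph_objects as go

    rng = np.random.default_rng(2)
    pts = rng.normal(size=(500, 3))          # stand-in for PCA-reduced activations
    labels = rng.integers(0, 10, size=500)   # stand-in for digit classes

    fig = go.Figure(data=[go.Scatter3d(
        x=pts[:, 0], y=pts[:, 1], z=pts[:, 2],
        mode="markers",
        marker=dict(size=3, color=labels, colorscale="Viridis"),
    )])
    fig.show()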
Very rich visualizations! May I ask what the dimensions being compared mean?
The x, y, z dimensions are the result of running PCA on very high-dimensional spaces, whose dimensionality is the number of neurons in each layer.
@@semperzero is the movement then how the parameters evolve with training?
Looks cool! What activation function(s) do you use?
Thank you. In this case it was ReLU -> sigmoid -> sigmoid -> softmax
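For illustration only (the layer widths are my assumptions, not the video's actual architecture), that activation stack could look like this in Keras:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="sigmoid"),
        tf.keras.layers.Dense(16, activation="sigmoid"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()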
Curious, is it necessary to have an input node for north and south instead of just the y location? Or north-east, which would just be a combination of the north and east nodes that already exist? It seems more computationally expensive, but please correct me if I'm wrong to assume those extra input nodes don't contribute much.
I want to see the full paper!
Neat is so cool, I love it very much.
Great video!
god made by human
NEAT is much more exciting to me than transformer nonsense.
How can I display the changes in the neural network in each generation like this?
Hello, I'm trying to implement Pac-Man with NEAT and I really struggle with it. If you can help me I would really appreciate it, please let me know
Soo... there won't be a part 3?
Why did I get recommendations for older snake videos but not this one?
Why don't you try passing the whole field as input, with each of the 169 cells given a value based on its current content? Such scrupulous input preparation is cheating.
Absolutely fascinating case study!
eyyy the king is back
nice
Cool video, but the analysis / conclusion is missing.
I am sorry for that, but it will be available in a paper we will publish soon! The idea of this video is to display the beauty of mathematical modeling inside data science
And beautiful it is!
@@Djellowman, Thank you!
Very nice visualisations and presentation! What were the conclusions of the experiments?
The conclusion was that, indeed, back pain correlates less with vertical positions and more with bent positions (some of the bent ones are worse than others). That is not new, but it's the first actual data-based study in this regard. All the previous ones relied on self-reports, heavily simplified sensors, or heavily simplified data analysis, which blurred the results too much. We can see this as a "myth confirmed". I am writing the paper at this moment and will publish it soon
Wow this is neat! Nice work.
Amazing. I have a question: does neat-python allow adding hidden layers via the config file, or does the network evolve hidden layers by itself?
Amazing visualization!
GREAT work . beautiful
how'd you make the animations
Absolutely beautiful and terrifying at the same moment.
Wait till a terrorist organization creates something like this and then it wipes all of us from the face of the earth.
what's your fitness function?
can you post the final hyperparameters that worked for you?
Thank you for this video, I understand the concept of the NEAT algorithm a lot better now. I watched a lot of videos on this algorithm before without ever really succeeding in understanding the concept.
How is the input state encoded?
I found the answer here: ruclips.net/video/h9JZ0YHtKWQ/видео.html
Try adding an anticipatory reward function on a recursively calculated derivative of the instantaneous representation.
On this subject, I've seen an impressive neural network build + visualisation, from scratch and in vanilla JS, for a self-driving car by Radu. He explains and builds everything about inputs, outputs, weights, and biases, then shows how to visualize the NN.
Revolution
Can you teach us how you did the animation
brilliant illustration
MORE
We need part 3 immediately 😂
Where is the source code? Dislike
This was so well put together. Great music and great representation. Congratz. :)
How were you able to visualize the genetic structure?
+1 kudo for you
This is so Myelinating........... 👌👌👌👌👌👌👌👌