10.2: Neural Networks: Perceptron Part 1 - The Nature of Code

  • Published: 22 Oct 2024

Comments • 527

  • @paoloricciuti
    @paoloricciuti 7 years ago +176

    One thing you should definitely do is try your Perceptron with new data: create another 100 Points, exclude them from training, and test the Perceptron on them... Also a very cool way to test your Perceptron is to add a Point on mouse click and let the Perceptron label it (that way, if it's wrong, you'll see a black point inside the white blob). Anyway, really good work. I love your way of teaching, I love programming, I love machine learning, and out of all the videos, blog posts, and slides that I've read, you are the one who can really make things easy to understand. Never stop being like this! ;)
    P.S. If you need it, I made a p5.js port with these suggestions implemented. :)

    • @TheCodingTrain
      @TheCodingTrain  7 years ago +18

      Thanks, I'll do this on tomorrow's live stream! (Hope I remember). If you like, you can pull request the p5 version here: github.com/CodingTrain/Rainbow-Code/tree/master/CodingChallenges

    • @paoloricciuti
      @paoloricciuti 7 years ago +6

      The Coding Train Thanks for your great way of teaching! ;)

    • @playlikeaboss22
      @playlikeaboss22 7 years ago +2

      Paolo Ricciuti can you help me out? I am trying to make a game using simple code but I just can't seem to finish. I have a ball controlled by the keyboard, and I am trying to keep it from moving any distance greater than the window, so it can't leave the window. Also, I am trying to detect lines (so if the ball hits a line, it loops back to specific coordinates). Thank you

    • @playlikeaboss22
      @playlikeaboss22 7 years ago +1

      Paolo Ricciuti How do you detect a color in an if statement? I am trying to have it so that if ellipse/x is greater than (color), the ball loops back to 30

    • @realcygnus
      @realcygnus 7 years ago

      Cool... can I get a hold of that?
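
Paolo's held-out-test suggestion can be sketched in plain Java (class and method names here are hypothetical; the video's Processing sketch differs in details). The perceptron is trained on one set of random points and then scored on 100 points it never saw:

```java
import java.util.Random;

// Minimal perceptron matching the video's setup: inputs (x, y, bias),
// target is which side of the line y = x the point falls on.
class HoldoutDemo {
    static float[] weights = new float[3];
    static final float LR = 0.01f; // learning rate

    static int sign(float n) { return n >= 0 ? 1 : -1; }

    static int guess(float[] inputs) {
        float sum = 0;
        for (int i = 0; i < weights.length; i++) sum += inputs[i] * weights[i];
        return sign(sum);
    }

    static void train(float[] inputs, int target) {
        int error = target - guess(inputs);
        for (int i = 0; i < weights.length; i++) weights[i] += error * inputs[i] * LR;
    }

    public static void main(String[] args) {
        Random r = new Random(42);
        for (int i = 0; i < weights.length; i++) weights[i] = r.nextFloat() * 2 - 1;

        // Train on 500 random points...
        for (int i = 0; i < 500; i++) {
            float x = r.nextFloat() * 2 - 1, y = r.nextFloat() * 2 - 1;
            train(new float[]{x, y, 1}, y > x ? 1 : -1);
        }
        // ...then evaluate on 100 points never seen during training.
        int correct = 0;
        for (int i = 0; i < 100; i++) {
            float x = r.nextFloat() * 2 - 1, y = r.nextFloat() * 2 - 1;
            if (guess(new float[]{x, y, 1}) == (y > x ? 1 : -1)) correct++;
        }
        System.out.println(correct + "/100 correct on unseen points");
    }
}
```

Because the data is linearly separable, the held-out score is usually high, but it is the honest way to check that the perceptron generalized rather than memorized.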

  • @chinmaybharti4085
    @chinmaybharti4085 5 years ago +82

    Your speaking skills, your interactivity, your teaching ability... the best online teacher I've ever met.

  • @CJKims
    @CJKims 7 years ago +259

    First impression: 44 minutes? Are you crazy? I'm not going to waste my time on a single video!
    After watching: (quietly clicks subscribe button) ....

    • @nataliekidd2135
      @nataliekidd2135 5 years ago +5

      Haha that was me.

    • @DevrajSinghRawat
      @DevrajSinghRawat 4 years ago +6

      I didn't even realise that this video was 44 min... I only checked after reading your comment.

    • @neillunavat
      @neillunavat 4 years ago

      True 😂

  • @Efferto93
    @Efferto93 7 years ago +208

    "How to train your Perceptron" (2017)
    IMDb 8.7/10

  • @zendoclone1
    @zendoclone1 7 years ago +72

    Love that you focus more on the conceptual side of programming than the nitty-gritty details. It's really annoying to see all these other videos that do something like "we have A, then we have B, then voila! Life!" I've honestly been waiting for a video like this for a long time.

    • @TheCodingTrain
      @TheCodingTrain  7 years ago +12

      Appreciate the feedback, thank you!

    • @MrCmon113
      @MrCmon113 5 years ago +1

      There are a ton of videos that pick out some complex topic and spend 90% of the time explaining something you should have learned in school, or something that's rather obvious.

  • @keyboardbandit
    @keyboardbandit 7 years ago +26

    I can't believe this video was released today! I just started working on a problem at work (internship) that needs a (albeit more complicated) neural network to solve! This was a perfect primer to help me really understand the basics! The fit(), train(), and activate() functions in scikit-learn seem far less magical and way more accessible to me now!
    THANK YOU SO MUCH!

    • @jamey90
      @jamey90 5 years ago

      Keyboard Bandit - Did you solve your problem? 😁

  • @joeydash3042
    @joeydash3042 7 years ago +75

    Whoever you are, thank you very much... I love watching all your videos and learning new stuff.

  • @shaileshrana7165
    @shaileshrana7165 3 years ago +9

    Thank you so much, Daniel. I have never studied coding formally. Started with watching your coding challenges and I'm happy to say that this amazing and simple explanation is exactly what I needed to start my journey into machine learning. You've inspired me and taught me. Thank you.

  • @TheRayll
    @TheRayll 7 years ago +107

    I'd recommend this to everyone over any movie

    • @89elmonster
      @89elmonster 6 years ago +1

      MukulNegi Awesome comment

    • @chillydoog
      @chillydoog 6 years ago

      I agree w jahmahent. This would

  • @thisaintmyrealname1
    @thisaintmyrealname1 7 years ago +10

    The suspense built by the code and then watching it all work so well was amazing. It was also a very good idea to show the progress with each click.

  • @ianchui
    @ianchui 7 years ago +157

    18:05 that pen flip was so smooth lmfao

  • @micahgilbertcubing5911
    @micahgilbertcubing5911 6 years ago +4

    I love this series so much! Most machine learning / neural network explanations and tutorials are designed either for 5th graders or for people with a college degree. The mathematical parts and coding are perfect for a high school CS student. Thanks so much for finally making me understand backpropagation!

  • @isaiahsias3818
    @isaiahsias3818 7 years ago

    Dude, I cannot thank you enough for diving into this concept in such an engaging way. It beats trying to break it down from a completely algebraic standpoint

  • @prateek752
    @prateek752 3 years ago

    I have become accustomed to listening to long lecture videos at ~2x speed. And watching your videos at 1.75x hits the sweet spot.

  • @imvickykumar999
    @imvickykumar999 5 years ago +3

    I am highly motivated by your videos... the way you speak, the humour you have, the knowledge you've earned is really... appreciable 👌👏👏
    I try to give PPTs in front of the mirror like you... 😊

  • @tinkumonikalita2576
    @tinkumonikalita2576 5 years ago +1

    This is how powerful Processing can be for understanding a concept. Great video Dan, as always.

  • @pavzutube
    @pavzutube 7 years ago +1

    That was a brilliant intro to neural networks #TheCodingTrain... For those who are confused about how the multiple iterations happen during the learning process, the key is the draw function, which runs in an infinite loop. It took me a while to figure this out. So if you are using something other than Processing, you need to run a loop in your training algo yourself.
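
The commenter's point can be made concrete: outside Processing there is no draw() loop running 60 times a second, so the repetition has to be written explicitly as passes ("epochs") over the training set. A sketch in plain Java (hypothetical names, not the video's code):

```java
// Without draw() looping for us, we repeat passes over the training set
// until every guess matches its target (or an epoch cap is hit).
class EpochLoop {
    static float[] weights = {0.3f, -0.6f, 0.1f};
    static final float LR = 0.1f;

    static int sign(float n) { return n >= 0 ? 1 : -1; }

    static int guess(float[] in) {
        float sum = 0;
        for (int i = 0; i < in.length; i++) sum += in[i] * weights[i];
        return sign(sum);
    }

    static void train(float[] in, int target) {
        int error = target - guess(in);
        for (int i = 0; i < in.length; i++) weights[i] += error * in[i] * LR;
    }

    public static void main(String[] args) {
        // Points are (x, y, bias); the label is +1 above the line y = x, else -1.
        float[][] points = {{0.5f, 0.9f, 1}, {0.8f, 0.1f, 1}, {-0.4f, 0.2f, 1}, {0.3f, -0.7f, 1}};
        int[] labels = {1, -1, 1, -1};

        boolean converged = false;
        for (int epoch = 0; epoch < 1000 && !converged; epoch++) {
            converged = true;
            for (int i = 0; i < points.length; i++) {
                if (guess(points[i]) != labels[i]) converged = false;
                train(points[i], labels[i]);
            }
        }
        for (int i = 0; i < points.length; i++)
            System.out.println(guess(points[i]) + " vs " + labels[i]);
    }
}
```

For linearly separable data like this, the perceptron convergence theorem guarantees the loop terminates; in Processing the same effect falls out of draw() re-running the training step each frame.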

  • @DadanHamdaniTop
    @DadanHamdaniTop 7 years ago

    This is the clearest explanation of machine learning that I have ever watched.

  • @sicko_drops
    @sicko_drops 4 years ago

    You are the only person on YouTube who actually shows practically how this all works, by actually coding it.

  • @8eck
    @8eck 4 years ago

    This is exactly what I was looking for! Explaining what is actually happening behind the scenes of ml5 and TensorFlow, and how it works.
    Thank you so much for this!

  • @pedrodiaz4844
    @pedrodiaz4844 5 years ago +5

    Congratulations!! You have explained the basics of the perceptron model very well.

  • @shankarkarthik
    @shankarkarthik 6 years ago

    OMG - I am going through the Coursera ML course and your video is simply amazing. I cannot thank you enough for offering these videos for free. Love your fun-filled way of going through NN concepts one at a time.

  • @MrPaperlapapp1
    @MrPaperlapapp1 7 years ago +1

    Thank you Daniel, that really helped me to get into the topic. I like the idea "if you can code it, you kind of understand it".

  • @stopaskingmetousemyrealnam3810
    @stopaskingmetousemyrealnam3810 4 years ago

    Really nice to see someone writing out the necessary code on the spot, with minimal bells and whistles. Too often it's presented alongside formalism that's much more intimidating than it needs to be.

  • @somyek6336
    @somyek6336 6 years ago +1

    Hey! A big thanks for these lectures. I was always struggling to understand what a neural network really is. You made my life simple. Although I code in Python, I love watching your p5 videos.
    Thank you, whoever you are!!!

  • @narutosimas
    @narutosimas 5 years ago

    You are the best teacher I've ever seen. Thanks for sharing your knowledge

  • @harishsn4866
    @harishsn4866 6 years ago

    This is the first video of yours that I've seen and I must say, I am enthralled by the way you made me understand the Perceptron. Never have I seen anyone explain it in such an intuitive, easy, and clear manner. I subscribed immediately and I am going to go through all your machine learning videos. Thank you very, very much. I can't possibly explain how spellbound and impressed I am by this video. I wish you were my college professor.

  • @FazilKhan-vr6sw
    @FazilKhan-vr6sw 5 years ago

    I don't understand Java, but I still got 100% of the content from your video.
    That is the power of a skillful teacher.

  • @TonyUnderscore
    @TonyUnderscore 5 years ago

    I am 17 and only started self-teaching Java 2 weeks ago. This helped a lot to get a better understanding of how the language works and also how ML operates (which is something I am very interested in). The fact that you were somehow 100% understandable to me, considering the amount of experience I have, is actually phenomenal. Also, I was surprised to see you are still interacting with comments.
    Subbed, and please make more of these!

  • @thomasbouasli6102
    @thomasbouasli6102 4 years ago

    This is the most diverse course, man. He starts off with JS, now Java, and I believe he'll show Python, all on the same concept. This is actually great.

  • @MoMoGammerOfficial
    @MoMoGammerOfficial 5 years ago +4

    Wow, that is an amazing video that just lit me up to get into the machine learning flow. I am definitely going to try this. I am more interested in watching how you can make its path based on 360° rotation, with some pros and cons implemented like obstacles, or food to eat, or stuff like that. I need more insight before I jump into writing bots for my own game that are hard to beat.

  • @djrbaker1
    @djrbaker1 6 years ago +1

    I'm watching this a year later and I'm still having plus fun. Not negative fun.

  • @spacedustpi
    @spacedustpi 6 years ago

    Starting to really dig your videos. This is the first perceptron tutorial to point out that X_0 and X_1 represent x and y, a data point. I've been looking at neural networks for 2 weeks now, and finally I know this distinction! Really helps! Thank you.

    • @spacedustpi
      @spacedustpi 6 years ago

      If [X_0, X_1] is [x, y], we are inputting a data point from a two-dimensional space, correct? If we input [X_0, X_1, X_2], we can say this is equivalent to [x, y, z], which is a data point from a three-dimensional data space?

    • @spacedustpi
      @spacedustpi 6 years ago

      Why stop there? We can input many data points at a time, right? For example, we can input two vectors, i.e. a two-dimensional array: [X_0, X_1, X_2] and [X_3, X_4, X_5]. So now we are inspecting two data points from a three-dimensional space. Is this right? Am I using the correct vocabulary here? Thanks!

    • @TheCodingTrain
      @TheCodingTrain  6 years ago +1

      Yes, you can send any vector of any length into a perceptron, and have that vector represent any data you like. Whether it will do what you want it to do in terms of the intended outputs that's another story and why you might need a "multi-layered perceptron"! (Coming in the next videos if you keep watching.)

    • @spacedustpi
      @spacedustpi 6 years ago

      Thanks! I'm taking the Udacity Machine Learning Engineer course, but I find supplementing it with your videos clears up a lot of questions.
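
Dan's reply above can be made concrete: the guess is just a dot product followed by sign(), so nothing in the code changes when the input grows from [x0, x1] to [x0, x1, x2] or longer. A small sketch (hypothetical names and numbers):

```java
// The same guess() handles a 2-D point (x, y), a 3-D point (x, y, z),
// or any longer feature vector: it is a dot product plus sign().
class DotGuess {
    static int sign(double n) { return n >= 0 ? 1 : -1; }

    static int guess(double[] inputs, double[] weights) {
        double sum = 0;
        for (int i = 0; i < inputs.length; i++) sum += inputs[i] * weights[i];
        return sign(sum);
    }

    public static void main(String[] args) {
        double[] w3 = {0.5, -0.2, 0.1};                       // weights for a 3-D input
        int label = guess(new double[]{1, 2, 3}, w3);          // one data point [x, y, z]
        System.out.println(label);
    }
}
```

Note that a single perceptron still consumes one vector at a time; "two data points" means two separate calls to guess(), not one longer vector.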

  • @GlenMillard
    @GlenMillard 3 years ago

    Good day - I noticed your Nick Cave t-shirt. I have loved Nick Cave ever since I was a young guy - early 1980s.
    Thanks for your videos - very helpful.

  • @Captain_Rhodes
    @Captain_Rhodes 5 years ago +4

    I don't understand Java, but this is still the best video, and clearer than the ones for languages I've used before.

  • @patrickhendron6002
    @patrickhendron6002 8 months ago

    💯 Someone needs to offer this man his own show to reach more people.

  • @Trixcy
    @Trixcy 3 years ago

    Amazing explanation. Saved me time searching through all this stuff. Really a great masterpiece. I need to make a deep learning project in just 3 weeks, and your lectures are helping me. God bless you.

  • @KennTollens
    @KennTollens 4 years ago

    Thank you so much for explaining what is going on in a neural network. So many people dive straight into code or go off into their own math world. I'm a little confused about when the looping stops, though.

  • @dalegriffiths3628
    @dalegriffiths3628 4 years ago

    Also followed along, but doing it in JavaScript. Cool little project. I trained it on 500 circles and then ran a test set of 100. I used frameRate instead of mouse clicks to slow the animation. Occasionally got some weird oscillating behaviour at around 470-480 correct on the training data, but usually it converged quickly. Also, once trained, it normally got all 100 test data points in one epoch, but sometimes got stuck at around 98/99 correct (that only happened in around 10% of attempts). I watched the self-driving car video and that sent me back to Flappy Bird, and now neural nets, before doing Flappy Bird neuroevolution and then finally back to self-driving cars. Once I have a handle on this in JavaScript, I want to do it in something like Unity using C#; that will require learning Blender to make my circuit. Much more fun than watching endless hours of Netflix while in lockdown! Happy coding all.

  • @jeffreycordova9082
    @jeffreycordova9082 7 years ago

    I watched some videos way back when the channel started, must say you've definitely improved the format and pacing. Great work!

  • @annperera6352
    @annperera6352 3 years ago

    Thank you sir, the best place to learn NNs explained in the simplest way. Keep up the good work. Looking forward to more videos, sir.

  • @MarkJay
    @MarkJay 7 years ago +1

    Great video! Neural networks can be confusing but I like that you started with a simple example.

    • @yxor
      @yxor 6 years ago

      Mark Jay spotted

    • @yxor
      @yxor 6 years ago

      Love your videos

  • @jassimelattar
    @jassimelattar 3 months ago

    Watching "The Coding Train" videos and "Andrew Ng" courses really helps.
    Whenever I get lost, I come here to understand the concept better.
    Even though he is using JS and Andrew uses Python, the idea is the same.

  • @smershad-ulislam7857
    @smershad-ulislam7857 1 year ago

    Amazing video! You are a genius at kick-starting the idle minds of those of us of average ability who wish to take on the most challenging things. KUDOS!!!

  • @NeoxX317
    @NeoxX317 6 years ago +1

    I can't stop watching your videos, you're a great teacher!!

  • @piotrwyrw
    @piotrwyrw 3 years ago

    Finally somebody who explained it the right way

  • @ToastalService
    @ToastalService 6 years ago

    You have a very engaging instruction style and explained this concept very well. Great video.

  • @furrane
    @furrane 7 years ago

    4:28 Saying the point is at coordinate (x0, x1) is valid and solves the issue.
    39:00 I think a cool way to visualize the learning process in this particular example would have been to color every pixel of the canvas according to whether the perceptron got it right.
    Nice video again, though; I'm really looking forward to the next video on this topic =)
    Cheers

  • @Zalcens
    @Zalcens 6 years ago

    While I was doing this along with you, almost at the end of the video, Processing crashed and I lost the code. Good thing it's the first video and easy to recreate.

  • @iwatchedthevideo7115
    @iwatchedthevideo7115 1 year ago

    I am taking a ML course now with the most horrible lectures imaginable. This is a godsend!

  • @eliebordron5599
    @eliebordron5599 3 years ago

    Really cool, I love it. I understood many things for my project in January. Later I'll look up the sigmoid neuron model. You're a great explainer!

  • @8eck
    @8eck 4 years ago

    Very interesting use of random in your code examples. Thank you.

  • @stormilha
    @stormilha 4 years ago

    @23:20 +Fun increases your fun potential, -Fun spends it. It's like climbing a slide: you have fun climbing, and you get even more sliding back down!

  • @rameshthamizhselvan2458
    @rameshthamizhselvan2458 5 years ago

    You deserve a gold medal...

  • @drivenbygames1728
    @drivenbygames1728 6 years ago +9

    18:08 *successfully throws and catches marker* 10/10

  • @adriand00
    @adriand00 4 years ago

    All I had was positive fun the whole video.
    It's hard to watch this man's videos without smiling.

  • @JeffreyAborot
    @JeffreyAborot 7 years ago +4

    I like how you approach teaching neural networks through coding. Thumbs up! I think I missed your explanation of the weight modification as delta_weight = error * input. Could you give an intuition for it, or did you explain it in one of your previous videos? Good job :)

    • @paedrufernando2351
      @paedrufernando2351 6 years ago

      He did that in another video... just browse through his playlist.

  • @radekzach1942
    @radekzach1942 3 years ago

    You have the best tutorial for neural networks!!

  • @viktorstrate
    @viktorstrate 7 years ago +2

    Why are the weight updates multiplied by the value of the input in the train function?
    If you pass it a point very close to (0,0), it will be harder to train on it, right?

  • @garrett7754
    @garrett7754 7 years ago +8

    I'm having trouble understanding, in your delta weight formula, why you are multiplying the error by the input. On some level it makes sense that it isn't just the error, but why the input?

    • @AlessandroPiccione
      @AlessandroPiccione 7 years ago +2

      It is explained here: en.wikipedia.org/wiki/Delta_rule

    • @filipcoja
      @filipcoja 5 years ago

      I have the same problem; it just isn't explained why the input is multiplied in too...

    • @blasttrash
      @blasttrash 5 years ago

      I want a simple explanation as well. The wiki link is too engrossed in math, just like this link: stackoverflow.com/questions/50435809/perceptron-training-rule-why-multiply-by-x
      If you have found a simple explanation, please do let us know. I remember from somewhere (it might be Andrew Ng's course) that multiplying by x automatically causes the algorithm to descend towards the global optimum.
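
One way to see it without the calculus (a sketch, not the video's code): substitute the update back into the weighted sum. Because each weight is multiplied by its own input, changing every weight by error × lr × input changes the sum by error × lr × (input · input), which always has the sign of the error, so the guess is nudged toward the target. Multiplying by the input also fixes the direction for negative inputs, where the weight must move the opposite way:

```java
// Why delta_w = error * input (times learning rate): the update shifts the
// weighted sum by error * lr * (input . input), which always has the sign
// of the error, so the sum moves toward the target.
class DeltaRuleDemo {
    static double sum(double[] in, double[] w) {
        double s = 0;
        for (int i = 0; i < in.length; i++) s += in[i] * w[i];
        return s;
    }

    public static void main(String[] args) {
        double[] w = {0.2, -0.4};
        double[] in = {-1.0, 0.5};           // note the negative first input
        double before = sum(in, w);           // -0.2 - 0.2 = -0.4
        int error = 1 - (-1);                 // target +1, guessed -1
        double lr = 0.1;
        for (int i = 0; i < w.length; i++) w[i] += error * in[i] * lr;
        // sum changes by error * lr * (in . in) = 2 * 0.1 * 1.25 = +0.25
        double after = sum(in, w);            // -0.4 + 0.25 = -0.15
        System.out.println(before + " -> " + after); // moved toward the +1 target
    }
}
```

This also answers the "point near (0,0)" worry above: small inputs do give small updates, which is why the bias input of 1 matters there.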

  • @grainfrizz
    @grainfrizz 7 years ago

    Both you and Siraj are great in so many ways. Both charismatic, both very intelligent, both very funny... But Siraj doesn't have your talent for teaching and for building a solid bridge of information between you as the teacher and us as students, with YouTube as the medium.

  • @Avionics1958
    @Avionics1958 4 years ago

    Thoroughly enjoyed it, thank you! I used to learn Processing from you, and now neural networks 👍

  • @franciscohanna2956
    @franciscohanna2956 7 years ago

    Great! I just started this topic in my AI course at university.
    Thank you!

  • @gregoirepelegrin1966
    @gregoirepelegrin1966 5 years ago +4

    How can I be so proud of a cell that is doing the job OF AN IF STATEMENT ?
    Neural Netw... Perceptron Power !

  • @Mezklador
    @Mezklador 7 years ago +3

    Soooooo cool: Professor Shiffman is wearing a Nick Cave and the Bad Seeds t-shirt! Yeah!!!

  • @cabbler
    @cabbler 7 years ago

    Hey Daniel, it's so impressive how readily fluent you are in so many languages: Java, JavaScript, Processing, p5, apparently Python and C as well. Is this, or was it ever, a challenge for you? Do you have to warm up before using ones you haven't used in a few videos?

  • @tqoe
    @tqoe 6 years ago

    I'm having +fun!
    As always, great video. Your energy and enthusiasm is contagious! Please never stop being so amazing.

  • @RajdeepSingh11
    @RajdeepSingh11 6 years ago

    Thank you so much for the step-by-step, simple explanation. Very descriptive.

  • @geoffwagner4935
    @geoffwagner4935 1 year ago

    That's incredible. Took a couple of watches, lol. It guesses "-1" or "1" as labels, then learns to label the two sides more accurately for each point, each loop. Trippy. I agree on that one.

  • @thomasbouasli6102
    @thomasbouasli6102 4 years ago

    I love how you naturally write p5 instead of pt

  • @johnkarippery511
    @johnkarippery511 5 years ago +1

    The formula is y = sum of (Xi * Wi) + b. I think in your video you didn't mention the bias? Is the bias compulsory to add or not?
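
A common way to handle the bias b is to fold it in as one extra weight whose input is always 1; without it, the decision line is forced through the origin, so for general data it is effectively compulsory. A sketch with hypothetical numbers:

```java
// The bias is just one more weight whose input is always 1.
// Here w = {1, 1, -3} encodes the line x + y = 3, which does NOT
// pass through the origin -- impossible without the bias term.
class BiasDemo {
    static int sign(double n) { return n >= 0 ? 1 : -1; }

    static int guess(double x, double y, double[] w) {
        return sign(x * w[0] + y * w[1] + 1 * w[2]); // w[2] is the bias weight
    }

    public static void main(String[] args) {
        double[] w = {1, 1, -3};
        System.out.println(guess(1, 1, w)); // 1 + 1 - 3 < 0 -> below the line
        System.out.println(guess(2, 2, w)); // 2 + 2 - 3 > 0 -> above the line
    }
}
```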

  • @erikakerberg1104
    @erikakerberg1104 7 years ago +1

    To convert a positive number to one and a negative number to minus one, you could just do n/abs(n), i.e. n divided by the absolute value of itself.

    • @andrewtofelt357
      @andrewtofelt357 7 years ago +1

      That would throw an error at n=0. Plus, it's doing quite a bit of unnecessary work behind the scenes.
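
For the record, in Java `n / Math.abs(n)` yields NaN for a double zero (and throws ArithmeticException for an int zero), so the explicit branch is the safer and cheaper choice. A quick sketch:

```java
// n / abs(n) does give +1 or -1, but is undefined at n = 0,
// so an explicit branch is both safer and cheaper.
class SignDemo {
    static int sign(double n) { return n >= 0 ? 1 : -1; } // picks +1 at 0 by convention

    public static void main(String[] args) {
        System.out.println(sign(4.2));               // 1
        System.out.println(sign(-0.7));              // -1
        System.out.println(sign(0));                 // 1 (by convention here)
        System.out.println(0.0 / Math.abs(0.0));     // NaN for doubles
    }
}
```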

  • @vinitsunita
    @vinitsunita 5 years ago +1

    Just a question: when you continuously tune the weights whenever the perceptron's guess doesn't match the expected output, can this delta change of weights negatively impact the previous inputs for which the error was zero and the guess matched the expected output? In other words, is it impossible to have ideal values of the weights where the error becomes zero for all the provided training data?

  • @patjohn775
    @patjohn775 4 years ago

    If I had 15ft of pure white snow I’d watch every single video you have published

  • @afonsorafael2728
    @afonsorafael2728 7 years ago +3

    I think nobody noticed, but you were wearing a Nick Cave and the Bad Seeds t-shirt, haha. Great artist!

  • @FraztheWizard
    @FraztheWizard 1 year ago

    This is great. Thanks for breaking it down into simple form!

  • @michaelhunt6313
    @michaelhunt6313 5 years ago +3

    I love you so much for your tutorials!

  • @christianbrembilla2214
    @christianbrembilla2214 5 years ago

    Dear Nick, I have a question about your presentation. At the 26th minute you explained gradient descent. This gradient adjusts the delta weight. The delta weight is calculated by multiplying the error by the input by the learning rate. If your inputs are arrays of 1000 data points each, do you have to sum all the delta weights? In the end you need to have just two weights, one for x1 and another for x2. Or did I misunderstand your presentation? Best regards, Christian

  • @DamianReloaded
    @DamianReloaded 7 years ago

    Hi Dan! If you use an activation function that returns non-zero, in principle you don't need a bias. Cheers!

    • @DevDeffiliate
      @DevDeffiliate 7 years ago

      The sign() function will return 0 if the input is zero, though, right?...

    • @TheCodingTrain
      @TheCodingTrain  7 years ago +1

      Thanks for this clarification!

    • @DamianReloaded
      @DamianReloaded 7 years ago

      Hi. I'm currently trying to make sense of LSTMs and implement them in C++, but for the life of me it's a struggle! ^_^

  • @adarshc43
    @adarshc43 7 years ago +8

    Is learning p5.js useful?? Also, pages made with p5 are not smooth on Android... 😞

    • @hammer158
      @hammer158 7 years ago +10

      I think p5.js is very easy to learn (if you can already code in JS, of course). Just try it out and make the decision yourself; it won't cost you anything!

    • @sadhlife
      @sadhlife 7 years ago +1

      Umm, they are smooth on my Android phone...

  • @SB-gm6zx
    @SB-gm6zx 6 years ago +1

    Hi Daniel, I need some help with my master's dissertation; I just need some guidance. My project is on recognising ragas (Indian music) with deep learning. The plan is to record clips of various songs and label them as training data, and construct a prediction model. Will you be able to guide me, please?

  • @alcesgabbo
    @alcesgabbo 3 years ago

    Great video Daniel!
    But I think there is an error:
    According to the book "Make Your Own Neural Network" by Tariq Rashid, the updated weight should be equal to... the error DIVIDED BY the input.
    Am I wrong?

  • @getamustache1574
    @getamustache1574 8 months ago

    Thanks for the video, mate, really a good one. About that Delta_w_0 = error*x_0: is there any resource you can suggest to get a better understanding of it? Thanks :)

  • @argmax11
    @argmax11 2 years ago

    Is it just me, or does everyone else LOL while watching this guy but still learn a ton from listening? Coolest teaching style.

  • @legel93
    @legel93 6 years ago

    Best explanation I found on NNs. Thank you!

  • @anren7445
    @anren7445 3 years ago +1

    I am confused by the (w = w + deltaW) equation. Wasn't it supposed to be "w - deltaW" (subtraction) instead of addition?
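
Both conventions exist, and they agree once the definition of the error is fixed. With error = target − guess (the video's convention), you add; gradient-descent texts that define the error as guess − target subtract. A quick numeric check (hypothetical numbers):

```java
// The same update written both ways: adding delta with error = target - guess
// equals subtracting delta with error = guess - target.
class SignConvention {
    static double videoUpdate(double w, int target, int guess, double input, double lr) {
        return w + (target - guess) * input * lr;    // w = w + deltaW
    }

    static double textbookUpdate(double w, int target, int guess, double input, double lr) {
        return w - (guess - target) * input * lr;    // w = w - deltaW
    }

    public static void main(String[] args) {
        double a = videoUpdate(0.5, 1, -1, 2.0, 0.1);    // 0.5 + 2*2*0.1 = 0.9
        double b = textbookUpdate(0.5, 1, -1, 2.0, 0.1); // same value
        System.out.println(a + " == " + b);
    }
}
```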

  • @ERINDIGR
    @ERINDIGR 7 years ago

    I think the reason your neural net did so well is that you gave it as input the same data you tested it with.

  • @RolysentParedes
    @RolysentParedes 7 years ago

    Hello,
    I have a school project right now in which I need to outperform the computational cost of the numerical method for the Generalized Singular Value Decomposition (GSVD) with an ANN-based approach. In short, the ANN-based GSVD should finish first compared to the numerical approach. I hope you can help me with this, and if so, I would like to know how much the project would cost. Thank you so much.

  • @griffonthomas7869
    @griffonthomas7869 7 years ago

    Great work as always Dan, can't wait for full Neural Networks!

  • @muhammadaousaja6732
    @muhammadaousaja6732 5 years ago

    Greetings, dear sir from the Coding Train. I really like this video of yours and the way you explain the entire procedure. My humble request: would it be possible for you to make a video on programming a neural network for a personal AI assistant, and show us how you interact with that AI assistant after programming its neural net? I will be highly thankful to you for the rest of my life. I hope you get this message; I am looking forward to such a video. Thank you.

  • @w3sp
    @w3sp 2 years ago

    If anybody still reads the comments, I have a question:
    27:12 Why exactly is DeltaWeight = error * input? The error part is obvious, of course, because it's the difference between the correct answer and the guess, but why does it need to be multiplied by the input again?

  • @eraldoforgoli62
    @eraldoforgoli62 7 years ago +30

    Another great video 🤗👌

    • @TheCodingTrain
      @TheCodingTrain  7 years ago +3

      Thank you!

    • @kamilbolka
      @kamilbolka 7 years ago

      ^^^

    • @playlikeaboss22
      @playlikeaboss22 7 years ago

      The Coding Train How do you detect a color in an if statement? I am trying to have it so that if ellipse/x is greater than (color), the ball loops back to 30

    • @lucashenry2556
      @lucashenry2556 7 years ago +1

      You could make the ellipse an object and make its colour a this. property, and then use a comparison like
      if (ellipse.colour > something) {
      //do something
      }

  • @RTouch10
    @RTouch10 6 years ago

    When I tried this example, I didn't come close to your results. I think the thing that made your example work so well is that you trained and then reclassified the same points in the draw loop. If you train outside of the draw loop, the results aren't very good with 100 points. 1000 or 10000 points got less than 0.01% error, but 100 got over 10% in many cases.

  • @60pluscrazy
    @60pluscrazy 3 years ago

    Excellent explanation and demonstration. Keep it up 🙏👌

  • @jithunniks
    @jithunniks 6 years ago +1

    You are really good at the art of teaching :)

  • @WindImHaar
    @WindImHaar 7 years ago +2

    Finally some Processing, much appreciated!

  • @nextspotofficial
    @nextspotofficial 4 years ago

    Sorry sir, I'm confused. Can you explain why you loop (for) over the weights by the length of the weights array to calculate the sum?

  • @naifalharbi6279
    @naifalharbi6279 4 years ago

    Thank you for your explanation. I have one quick question: in the case of image classification, the error can be found as (predicted value - actual value)... but what is the actual value, and how is it found in the case of images?

  • @nagesh007
    @nagesh007 1 year ago

    Thanks for the video. One major doubt: the activation function logic and the label calculation logic are totally different; is that OK? I thought they should be the same. Can you please explain?

  • @Kino-Imsureq
    @Kino-Imsureq 7 years ago

    Long time no see, sign function! I didn't remember that it tells whether a number is negative or positive (those are signs, so yeah, sign()). I really like this; it's easy for me to convert to JavaScript. Thank you so much!