Tutorial 9- Drop Out Layers in Multi Neural Network

  • Published: 23 Nov 2024

Comments • 167

  • @forever-fz1hk
    @forever-fz1hk 4 years ago +31

    Krish sir, just one thing to say... I too sometimes teach school children, and the effort you are putting into making these videos free of charge is commendable... May God bless you, sir. I am gaining confidence too after watching your videos and am thus becoming a data scientist.

  • @snehalbm
    @snehalbm 3 years ago +9

    You are the mentor every aspiring data scientist needs, Thanks!!

  • @ebisaabebe615
    @ebisaabebe615 8 months ago

    I am an MSc student from Ethiopia. To tell you the truth, I have learnt a lot from your videos. May God bless your mind!!

  • @laxminarasimhaduggaraju2671
    @laxminarasimhaduggaraju2671 5 years ago +12

    I can just see that your face is full of happiness when you explain a concept.
    I guess you are like 🙏🙏

  • @shivangirastogi9723
    @shivangirastogi9723 2 years ago +5

    Thanks for putting your effort into making these in-depth videos which clarify concepts in detail. Your videos are helping students like me who are very new to the ML and AI field.

  • @RanjitSingh-rq1qx
    @RanjitSingh-rq1qx 2 years ago +2

    I have watched 10 videos but haven't coded anything yet; still, I am sure that whenever I do code, I will perform more clearly, because these videos focus on the basics and go into more depth on ANNs. Thank you so much sir. 🥰🥰😘🇮🇳🇮🇳

  • @shosad100
    @shosad100 4 years ago +1

    Krish sir, you are my favorite teacher... your lessons and explanations are simple and easy to understand; even a B-grade student like me can understand the concepts. Thank you sir.

  • @elmoreglidingclub3030
    @elmoreglidingclub3030 3 years ago +2

    Great stuff. But I have to listen several times to understand given our different dialects. Much appreciation for your work and explanations!! Excellent!

  • @anujeetswain7368
    @anujeetswain7368 4 years ago +2

    This deep learning series is extremely good.

  • @nandinisarker6123
    @nandinisarker6123 4 years ago

    I found it extremely useful, and easier to understand than many well-known experts.

  • @shaz-z506
    @shaz-z506 5 years ago +18

    That's a good video, Krishna. I never thought about random forest using a similar mechanism the first time I studied dropout. Good, you've cleared up my concept with this video. Thanks!

  • @pankajverma-sw9oz
    @pankajverma-sw9oz 2 years ago +1

    I was always confused about deep learning; because of you I got clarity.

  • @smarthbakshi7041
    @smarthbakshi7041 3 years ago

    This man makes ML a cakewalk!

  • @AmitYadav-ig8yt
    @AmitYadav-ig8yt 5 years ago +5

    Thank you very much, You have been an angel for me. Please upload a video on the theory part of SVM, K-Means or other unsupervised ML. Thanks a lot once again. Hari Om

  • @grace30
    @grace30 4 years ago +3

    Really like the way you explain! I have just completed a Udemy bootcamp and you are definitely reinforcing what I have learned. Keep up the good work!

  • @sandipansarkar9211
    @sandipansarkar9211 4 years ago +1

    Hello Krish. Came to know about the use of random forest ideas in deep learning. Thanks

  • @paullan-learning-read-dev7040
    @paullan-learning-read-dev7040 4 years ago +39

    Thank you. Much easier to understand than the one by Andrew Ng.

    • @manishsharma2211
      @manishsharma2211 4 years ago +12

      But you can't ignore the fact that he is a god in AI.

    • @nabiltech1366
      @nabiltech1366 4 years ago +2

      Did you take and finish Andrew Ng's course?

    • @MrBemnet1
      @MrBemnet1 4 years ago

      @@nabiltech1366 Halfway. Did you finish?

    • @nabiltech1366
      @nabiltech1366 4 years ago +1

      @@MrBemnet1 No bro. The way he teaches is very complicated for me, so I decided to learn a different way. When I have a little bit of knowledge that I understand, I will try to retake the course so it will be easier than before. What about you?

    • @MrBemnet1
      @MrBemnet1 4 years ago

      @@nabiltech1366 I don't get some of the concepts right away. I check other resources, then come back and view it again. I will finish everything within 2 weeks.

  • @mgreek31
    @mgreek31 2 years ago

    The effort in these Videos !!!
    Thanks Krish !!!

  • @adityashewale7983
    @adityashewale7983 1 year ago

    Hats off to you sir. Your explanation is top level. Thank you so much for guiding us...

  • @arohawrami8132
    @arohawrami8132 11 months ago

    Thanks a lot Krish for your best explanation.

  • @Fatima-kj9ws
    @Fatima-kj9ws 3 years ago +1

    Great explanations, thank you very much sir

  • @gooopin
    @gooopin 4 years ago +2

    Thanks for the sessions... These are precise and organized...

  • @maddybharathi
    @maddybharathi 4 years ago

    You have a knack for making things short, simple, and easy to grasp :)

  • @gopalakrishna9510
    @gopalakrishna9510 5 years ago

    Sir, I think you're enjoying this teaching?
    Your expressions indicate you are enjoying the teaching...

  • @smitirashmiguru7649
    @smitirashmiguru7649 4 years ago +4

    Love the Deep Learning Series. Great Learning !!

  • @urwahmunir9636
    @urwahmunir9636 4 years ago +1

    Extraordinary step-wise teaching style. You made all my concepts clear. Can you please add a practical implementation of a neural network model in which all these techniques are used, like dropout, loss function, learning rate, regularization, and optimizer in one model implementation? Thanks in advance...
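
(Editor's note on the request above: a minimal sketch of such a model, assuming TensorFlow/Keras. Layer sizes, dropout rates, and hyperparameters are illustrative and not taken from the video. Keras' Dropout(rate) drops that fraction of units during training and rescales the surviving ones, so no manual weight scaling is needed at inference.)

```python
# A hypothetical binary classifier combining dropout, L2 regularization,
# a loss function, a learning rate, and an optimizer in one model (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Input(shape=(20,)),                                # 20 input features (assumed)
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),   # L2 regularization
    layers.Dropout(0.5),                                      # drop 50% of hidden activations
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.3),                                      # lighter dropout deeper in the network
    layers.Dense(1, activation="sigmoid"),                    # binary output
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),   # optimizer + learning rate
    loss="binary_crossentropy",                               # loss function
    metrics=["accuracy"],
)
# model.fit(X_train, y_train, epochs=20, validation_split=0.2)  # usage, given your own data
```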

  • @sauravkr.mahato
    @sauravkr.mahato 6 months ago

    How simply he explained it.

  • @sukumarroychowdhury4122
    @sukumarroychowdhury4122 3 years ago

    Krish: You are the very best trainer

  • @manjularathore1076
    @manjularathore1076 4 years ago +1

    Hi Krish, thanks for making such nice videos and for the excellent explanation. Finally I have found something I was looking for to better understand deep learning.

  • @ameygirdhari8703
    @ameygirdhari8703 3 years ago

    simple and clear explanation

  • @pranjalijoshi6114
    @pranjalijoshi6114 2 years ago

    All your videos are very useful... thanks a lot for this good work.

  • @theforgottenhealth3244
    @theforgottenhealth3244 4 years ago +1

    Great service. Amazing Explanation!!

  • @vishalvaibhav9697
    @vishalvaibhav9697 4 years ago +4

    Hello Krishna, first of all thank you so much for the videos, as a lot of my queries are getting cleared up by watching them. I have a better understanding of neural networks now, with all the maths behind them. I have one query though for this particular video: what is batch normalization in neural networks, and how does it help in preventing over-fitting problems in a neural network?

  • @AKHILESHYADAV-ig7uv
    @AKHILESHYADAV-ig7uv 4 years ago

    It's really a very good lecture series.

  • @Amanullah-lt6fq
    @Amanullah-lt6fq 2 years ago

    I have been watching your videos for a few months and I have learned a lot. Your channel deserves a subscription; I subscribed.

  • @davidhakobyan6377
    @davidhakobyan6377 3 years ago

    You explain very well! Thank you!

  • @adityagamingchanneltv9041
    @adityagamingchanneltv9041 3 years ago

    Your lectures are superb

  • @vantuannguyen4337
    @vantuannguyen4337 3 years ago

    I really love your energy.

  • @wajidiqbal5633
    @wajidiqbal5633 3 years ago

    Very well explained. Thank you.

  • @vikshukla44
    @vikshukla44 4 years ago

    Sir, you are amazing! You have cleared everything up.

  • @babbarutkarsh7770
    @babbarutkarsh7770 3 years ago

    Can there be a better explanation? Simply perfect!!

  • @pedramdabaghian1329
    @pedramdabaghian1329 2 years ago

    Thank you. It was so helpful.

  • @jeevanaddepalli5494
    @jeevanaddepalli5494 3 years ago +2

    I think during test time we should multiply the weights with the keep probability value = (1 - dropout rate). Intuitively, the keep probability means what fraction of the time we used that weight or edge or connection to train our NN. Please correct me if I am wrong, Krish sir.

  • @dnakhawa
    @dnakhawa 4 years ago

    You teach very well... great data science content on your channel. Thanks Harish!

  • @aakashnishad7048
    @aakashnishad7048 5 years ago +2

    Thanks Krish

  • @debopamsengupta4409
    @debopamsengupta4409 4 years ago

    Hi Krish, great work, really smooth and informative explanation.

  • @fthialbkosh1632
    @fthialbkosh1632 4 years ago

    Thanks a lot, sir, very good explanation.

  • @hokapokas
    @hokapokas 5 years ago +1

    Good work as usual Krish... Awaiting its implementation 🙏🙏

  • @firstkaransingh
    @firstkaransingh 2 years ago

    Great explanation 👍

  • @bhushanbowlekar4539
    @bhushanbowlekar4539 2 years ago

    Guys, please note: if you're dropping neurons or activations at rate p, then the weights will be multiplied by 1-p at the test phase.
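
(Editor's note: a small NumPy sketch of the point above, under the convention that p is the drop rate, so the keep probability is 1 - p. All names and shapes here are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)
drop_rate = 0.5                       # p: probability of dropping a unit
keep_prob = 1.0 - drop_rate           # 1 - p: probability of keeping a unit
a = rng.normal(size=(4, 8))           # activations of one hidden layer (assumed shape)

# Classic dropout: mask at training time, scale by keep_prob at test time.
mask = rng.random(a.shape) < keep_prob
a_train = a * mask                    # training: roughly half the units are zeroed
a_test = a * keep_prob                # testing: all units kept, scaled by 1 - p

# Inverted dropout (what modern frameworks use): scale during training, nothing at test time.
a_train_inv = a * mask / keep_prob
a_test_inv = a                        # no scaling needed at inference
```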

  • @Aliabbashassan3402
    @Aliabbashassan3402 4 years ago

    Thank you from Iraq... good job, brother.

  • @anindyabanerjee743
    @anindyabanerjee743 4 years ago

    Krish... you make my life easier.

  • @pranjalgupta9427
    @pranjalgupta9427 4 years ago +1

    Amazing explanation, but what happens if p=0 or p=1?

  • @adityachandra2462
    @adityachandra2462 4 years ago +1

    The p-value in the dropout section of the middle layer would be 0.6 (blocking 60%), not 0.5 (a value of 1.0 means no dropout and a value of 0.0 is full dropout, i.e. no output from that layer)... you keep on repeating that, please rectify it.

  • @prasantimohanty6750
    @prasantimohanty6750 4 years ago +1

    I have a doubt.
    For test data, for the neurons that were not activated we do p*w, but for the neurons that were activated, what do we do in that case?

  • @koushikkonar4186
    @koushikkonar4186 1 year ago

    Hi, in this video, when we apply the model to test data, what will be the weights of the deactivated neurons?

  • @tejateju6303
    @tejateju6303 6 months ago +1

    The video explains the concept of dropout layers in deep neural networks, which helps prevent overfitting by randomly deactivating a subset of neurons during training.
    Key moments:
    00:00 Artificial neural networks with many weights and bias parameters can lead to overfitting issues; dropout regularization helps prevent overfitting by randomly dropping units during training.
    -Explanation of overfitting in deep neural networks due to excessive parameters and the need for regularization techniques like dropout.
    -Comparison between underfitting in single-layer neural networks and the role of multiple layers in preventing underfitting in deep neural networks.
    -Introduction to dropout regularization as a technique to prevent overfitting by randomly dropping units during training, with a reference to the 2014 paper by Srivastava, Hinton, et al.
    03:54 The video discusses the concept of dropout layers in neural networks, where a subset of features or neurons are randomly deactivated during training to prevent overfitting and improve model generalization.
    -Explanation of how dropout layers work in neural networks by randomly deactivating a subset of features or neurons during training to improve model generalization.
    -Comparison of dropout layers in neural networks to the concept of selecting subsets of features in random forests to create diverse decision trees for better model performance.
    07:25 Dropout layer in neural networks randomly deactivates some neurons and activates others during training to prevent overfitting, similar to random forest's feature selection and majority voting. Test data connects all neurons without deactivation or activation, using weights multiplied by dropout probabilities for prediction.
    -Comparison of dropout layer with random forest for feature selection and majority voting to prevent overfitting in neural networks.
    -Explanation of how test data is handled in dropout layer, connecting all neurons without deactivation or activation, and using weights multiplied by dropout probabilities for prediction.
    -Selecting the dropout ratio (p-value) through hyperparameter optimization to prevent overfitting in deep neural networks, with a recommendation for p-value above 0.5.
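
(Editor's note: a minimal NumPy sketch of the mechanism summarized above; shapes, rates, and variable names are assumptions, not code from the video. A fresh random mask is drawn on every training pass, and no mask is applied at test time.)

```python
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(scale=0.1, size=(3, 4))    # input -> hidden weights (illustrative sizes)
W2 = rng.normal(scale=0.1, size=(4, 1))    # hidden -> output weights
drop_rate = 0.5

def forward(x, training=True):
    h = np.maximum(0.0, x @ W1)            # ReLU hidden layer
    if training:
        mask = rng.random(h.shape) >= drop_rate
        h = h * mask / (1.0 - drop_rate)   # inverted dropout: rescale the kept units
    return h @ W2

x = rng.normal(size=(2, 3))
out1 = forward(x, training=True)           # a different random subset of neurons ...
out2 = forward(x, training=True)           # ... is dropped on each training pass
out_test = forward(x, training=False)      # all neurons participate at test time
```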

  • @priyasingh-zd1wm
    @priyasingh-zd1wm 4 years ago

    Such awesome content and explanations!!!

  • @9971916866
    @9971916866 4 years ago +2

    Thank you Krish for the video, this is excellent!! One question: dropout is applied at each epoch, so how does it combine the results from all the epochs?

  • @pawansharma-ij7kg
    @pawansharma-ij7kg 3 years ago

    Nice Explanation

  • @shashwatdev2371
    @shashwatdev2371 4 years ago +1

    I have a doubt -
    On every iteration, does the dropout ratio of a particular layer remain the same or not? If not, do we take an average to multiply with the weights for test data?

  • @marijatosic217
    @marijatosic217 4 years ago

    Great as always! Thank you :)

  • @lol-ki5pd
    @lol-ki5pd 1 year ago

    Just a question: during backpropagation, for each neuron we get updated weights. Now when we backpropagate to the start, and again random starting feature points are chosen, what happens to the backpropagated weights?

  • @zx3215
    @zx3215 5 years ago +1

    In your sketch - did you really drop a couple of inputs out? Is this allowed in the dropout approach?

  • @Sovereignl55
    @Sovereignl55 1 year ago

    Sir, if we're dropping some inputs and also hidden-layer neurons,
    will it not affect our output?
    I mean correct predictions.

  • @midhileshmomidi2434
    @midhileshmomidi2434 5 years ago +1

    Hi Sir,
    I have a doubt.
    If we take p=0.5, will half of the features that were deactivated in the 1st epoch be reactivated in the 2nd epoch, and does the same go for the other features in upcoming epochs as well?
    Please explain

  • @abdulqadar9580
    @abdulqadar9580 2 years ago

    Amazing Sir

  • @joseguilherme5008
    @joseguilherme5008 2 years ago

    Great video 👏

  • @RajeshRajesh-sh7zj
    @RajeshRajesh-sh7zj 7 months ago

    In the next iteration, will the deactivated neurons get activated randomly???

  • @michaelloturco5584
    @michaelloturco5584 3 years ago

    Thank you for this excellent explanation! Could you link the original research paper you mentioned? (Or maybe I'm just not finding it in the description.)

  • @AbdulRehman-hg9es
    @AbdulRehman-hg9es 3 years ago +1

    Great effort Krish! I like your passion. I have one confusion about the dropout ratio. Why are you using a dropout ratio of 0.5 for the input layer? To my knowledge, that should be higher (i.e. 1.0 or 0.9).

  • @ankitbisht1517
    @ankitbisht1517 1 month ago

    Can you please share the URL of any report related to this regularisation technique?

  • @pankajkumarbarman765
    @pankajkumarbarman765 4 years ago

    Sir you are great 💖

  • @anujsinha12
    @anujsinha12 4 years ago

    Hello @Krish Naik, you mentioned in the video that for test data w should be multiplied by p. Do we need to write code for that in the model? Does it happen automatically?

  • @sameerherkal9205
    @sameerherkal9205 7 months ago

    Hi @Krish,
    I was asked in an interview: what if we remove one hidden layer instead of using dropout? Wouldn't it be good to remove one hidden layer instead of using dropout?
    Can you please help me with the answer.

  • @vaibhavhariramani
    @vaibhavhariramani 3 years ago

    I just have a little query: if we keep activating and deactivating neurons while training,
    doesn't it cause overfitting when testing with all neurons activated at once, since they were trained in different combinations during training?

  • @shahariarsarkar3433
    @shahariarsarkar3433 7 months ago

    Please suggest a good reference book for Deep learning.

  • @vishalaaa1
    @vishalaaa1 4 years ago

    Hi, you did not explain how the exploding gradient problem can be corrected - is it through the same ReLU?

  • @VamsiKrishna-vg6vd
    @VamsiKrishna-vg6vd 5 years ago +1

    For training data, suppose we ignore a few features and neurons as per the dropout ratio, calculate the weights, and update the weights with backpropagation. In the second step another set of features and neurons is selected randomly. Now if we again calculate new weights, that doesn't make sense, right? As this will keep on repeating with different random combinations... Please correct me if I am wrong... Thanks in advance.

    • @vaishnavkrishnan7996
      @vaishnavkrishnan7996 4 years ago

      So after all of this is done, the best set of features is selected for that particular output value, I guess.

  • @palashchanda9308
    @palashchanda9308 4 years ago

    Can you please provide a link for the Machine Learning playlist?

  • @ParthivShah
    @ParthivShah 1 year ago

    thank you sir.

  • @chaoxi8966
    @chaoxi8966 2 years ago

    Hi Sir, I would like to know: in each epoch of training, does dropout have any relation to batch_size?

  • @kuskargupt2887
    @kuskargupt2887 4 years ago

    Sir, as we are randomly selecting some features or neurons, the weights are updated according to that set of neurons in that particular forward and backward pass. So how is the model going to predict the right answer when all the neurons are activated together for test data, given that we trained the weights when fewer neurons were activated? How will the model sum up all the weights to give the right prediction (with the least error)?

  • @dmlane_sougata
    @dmlane_sougata 3 years ago

    Sir, while testing, will all weights be updated as (P*W), or will the P value be updated as (P*W)? Please clear this up.

  • @VidyaranyaSaiNuthalapatiNSV
    @VidyaranyaSaiNuthalapatiNSV 2 years ago

    I think there is a mistake in the explanation when dealing with test time. If p is the probability of dropping a neuron, then the weights should be multiplied by 1-p during test time

  • @shubhamchauda425
    @shubhamchauda425 4 years ago

    I have a question: do we have to add different dropout layers for different layers, or do we add it once for all layers?

  • @amitghodke838
    @amitghodke838 4 years ago

    Can you explain how it helps to avoid the overfitting problem?

  • @mohd.faizan3003
    @mohd.faizan3003 4 years ago

    Sir, I have a doubt: when the neurons are randomly selected based on the p value, then for the next epochs, from which neurons will the random selection be performed - the activated ones or all of them?

  • @rahul-wz7rn
    @rahul-wz7rn 2 years ago

    If we apply a dropout ratio, is there any chance that the features which were selected the first time get selected the second time, or will new features get selected?

  • @sharadkolse6871
    @sharadkolse6871 4 years ago

    Best explained:)

  • @shiffin_chippe
    @shiffin_chippe 5 years ago +6

    So when the neurons are reactivated what are their weights?

    • @akashkewar
      @akashkewar 4 years ago +4

      Their weights are the same as before, because you didn't update them using backpropagation. You only update the weights corresponding to neurons that are activated in an iteration. So in the next iteration, if we happen to activate a neuron that was not active in the last iteration, its weight will be the same until backpropagation updates it (because that neuron is active now and hence will get updated).
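
(Editor's note: a tiny NumPy sketch of the reply above, with assumed shapes. The dropout mask zeroes the gradient flowing to dropped units, so their incoming weights are left unchanged in that iteration.)

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))                   # weights into a hidden layer of 4 units
x = rng.normal(size=(1, 3))
mask = np.array([[1.0, 0.0, 1.0, 0.0]])       # units 1 and 3 are dropped this iteration

h = (x @ W) * mask                            # forward pass with dropout applied
grad_h = np.ones_like(h) * mask               # upstream gradient, zeroed for dropped units
grad_W = x.T @ grad_h                         # columns for dropped units are all zeros

W_new = W - 0.1 * grad_W                      # gradient step
print(np.allclose(W[:, [1, 3]], W_new[:, [1, 3]]))   # True: dropped units' weights unchanged
```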

    • @shiffin_chippe
      @shiffin_chippe 4 years ago +2

      @@akashkewar thanks for the reply after 8 months😃😃😃♥️

    • @akashkewar
      @akashkewar 4 years ago +1

      ​@@shiffin_chippe :D "Better late than never". I hope you are doing fine in life and don't give up.

  • @spadiyar6725
    @spadiyar6725 3 years ago

    You have not explained why everything is connected for test data; you explained the calculation after they are connected. I would like to know why everything is connected, and what happens if we use dropout on the test data.

  • @samyakjain8079
    @samyakjain8079 3 years ago +1

    x0, x1, x2 should not be dropped -- according to Andrew Ng.

  • @absolutelynobody3837
    @absolutelynobody3837 3 years ago

    Wouldn't the weights in testing be w(1-p) rather than wp?

  • @tag_of_frank
    @tag_of_frank 2 years ago

    This sounds more like stochastic optimization than regularization.

  • @manikosuru5712
    @manikosuru5712 5 years ago +2

    Hi sir, amazing explanation..
    Small doubt..
    While multiplying the 'p' value with the weight 'w' for test data, do we include (add) the bias value with the input??

    • @krishnaik06
      @krishnaik06  5 years ago +8

      We have to include the bias..

  • @gouravdidwania1070
    @gouravdidwania1070 3 years ago +1

    If p=0.7, will 70% be selected or 70% be dropped out?

  • @swastikpathak4669
    @swastikpathak4669 3 years ago

    Krish Naik: that's like the coolest name

  • @debasispatra8368
    @debasispatra8368 4 years ago

    Krish, I have a doubt. Suppose I have 5 inputs and 5 neurons in my 1st hidden layer. At training time I have given a dropout ratio of 0.5, and because of this suppose 2 inputs and 2 neurons get deactivated. In this case we now have 3 inputs and 3 neurons left, so we have to train 9 weights. But at testing time we have to multiply the 'p' value with 25 weights, as at testing time all inputs and neurons exist. So how do we do this?

    • @vaishnavkrishnan7996
      @vaishnavkrishnan7996 4 years ago

      I think the dropout ratio for the other deactivated neurons in the test set would be 0, I guess; doesn't make sense though.
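
(Editor's note on the question above: a small NumPy sketch with the same assumed sizes. Dropout masks activations; it does not delete weights, so the full 5x5 weight matrix always exists, and in the classic formulation every weight is used at test time, scaled by the keep probability.)

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5))          # 5 inputs -> 5 hidden neurons: 25 weights, always stored
x = rng.normal(size=(1, 5))
keep_prob = 0.5

# One training iteration: some inputs and hidden units are masked, but W keeps its shape.
in_mask = rng.random((1, 5)) < keep_prob
hid_mask = rng.random((1, 5)) < keep_prob
h_train = ((x * in_mask) @ W) * hid_mask

# Test time: no masks; all 25 weights participate, scaled by the keep probability.
h_test = (x @ W) * keep_prob
print(W.shape)                       # (5, 5) -- all 25 weights are still there
```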

  • @anonim5052
    @anonim5052 11 months ago

    great!!!