Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced
no
I'm at a loss for words to express my gratitude towards you. Your tutorial is amazing, thank you so much.
The sky has no limits, and your teaching leaves no more questions. Those who dislike are like people trying to find the color of water, or running a car without petrol.
😊
Sir, your way of teaching is awesome. Sir, please do videos on multi-class classification problems in deep learning.
Hi, can you please let me know how dropout works during the testing phase?
@@rafibasha4145 Basically, it randomly drops neurons from the hidden layers during training only; nothing is dropped at test time.
Such a funny analogy at the beginning! You are a true genius educator :D :D
happy to be here again!!!
sure karthik. thanks.
Sir, your explanation is great, great, great.
But sir, please make the videos in this series faster, so that as our exams come near we can prepare well and finish in less time.
Thanks a lot for making such good videos.
I will try my best Pa. And thanks for your kind words
@@codebasics Sir, actually my project is on continuous data with 5 inputs and one output.
I have used multiple linear regression.
I am also using an ANN with batch gradient descent and TensorFlow.
But I request you to please upload a video on continuous-output supervised learning, like predicting salary with TensorFlow.
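(Not from the video, just a minimal sketch of such a continuous-output network in tf.keras; X and y are assumed stand-ins for the commenter's 5-feature data and continuous target:)

```python
import tensorflow as tf

# Hypothetical regression ANN: 5 continuous inputs -> 1 continuous output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(5,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1)  # linear output, suitable for salary-style targets
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
# model.fit(X, y, epochs=100)  # X, y: the assumed training data
```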
And here I was waiting for dropout regularization to happen, for you to delete dense layers #2 and #3!! Hahaha. Great stuff. Keep up the good work.
I am enjoying your tutorials. Thank you so much.
Glad you like them!
Sir, please cover the concept of EARLY STOPPING... I know the implementation part but want to understand it in depth.
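(While waiting for that video, a minimal sketch of the implementation side in tf.keras, assuming X_train and y_train exist:)

```python
from tensorflow import keras

# Stop once val_loss has not improved for 5 consecutive epochs,
# then roll the model back to the best weights seen so far.
early_stop = keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=5, restore_best_weights=True)

# model.fit(X_train, y_train, validation_split=0.2,
#           epochs=200, callbacks=[early_stop])
```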
Sir, can you explain why dropping 50% of the neurons isn't the same as reducing the number of neurons by 50%? For example, instead of taking 60 neurons and dropping 50%, why don't we just take 30 neurons to begin with?
Thanks in advance.
Because the network might become biased towards some neurons. With each epoch, dropout randomly drops neurons, trains the model, and then backpropagates. This way we avoid bias in the neural network and train on our data efficiently.
@@NitinKumar-wm2dg So all dropout does is, for each epoch, RANDOMLY drop neurons (different from the neurons dropped in the previous and following epochs) and train with the remaining ones.
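(A minimal sketch of what that looks like in tf.keras; the 60-feature input matches the sonar dataset from the video, and the 0.5 rates are just illustrative. Strictly speaking, Keras draws a fresh random mask for every batch, not just every epoch:)

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(60, activation='relu', input_shape=(60,)),
    tf.keras.layers.Dropout(0.5),  # each batch, a random half of these outputs are zeroed
    tf.keras.layers.Dense(30, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```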
Thank you. Nicely explained with a clear-cut example.
The accuracy came out almost the same for me; nevertheless, I appreciate the video.
Hi,
In deep learning, can you please post some videos on hyperparameter tuning?
Thanks
05:10 So effectively, dropout could be considered similar to a train/test split, in that it trains neurons A and C, then adjusts B and D based on the results from A and C.
Thank you for this amazing tutorial! I even understood batch size, even though that wasn't my goal with this video!
9:10 Can we replace M and R with 0 and 1 instead of using dummy variables??
Yes, you can.
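(A minimal sketch with pandas, using a stand-in for the real label column:)

```python
import pandas as pd

y = pd.Series(['R', 'M', 'M', 'R'])   # stand-in for the actual label column
y_encoded = y.map({'R': 0, 'M': 1})   # M/R -> 1/0, no dummy columns needed
```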
Sir, my dataset has 20 target classes (i.e., a multi-class problem). When I train and test, my accuracy is only 45%. I am a little bit stuck with this. It would be helpful if you could give me some suggestions.
Awesome, sir. Thank you so much for making us understand such important concepts in a simple and easy way..!!!
Sir, is it possible to apply dropout in a deep autoencoder??
Hi sir, I have a question about the dropout technique. As we can see, it randomly deactivates neurons during training, but what about testing, are they still deactivated?
They're only deactivated during training. During testing, all of the weights are rescaled by p, the probability that a node gets dropped.
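(That is the classic formulation of dropout. tf.keras actually uses "inverted dropout": surviving activations are scaled up by 1/(1-rate) during training, so nothing needs rescaling at inference. A quick sketch to see the difference:)

```python
import tensorflow as tf

drop = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 6))

print(drop(x, training=True))   # roughly half zeroed, survivors scaled to 2.0
print(drop(x, training=False))  # identity: all ones, nothing deactivated
```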
Very good explanation
Sir, can you make or suggest any video on the Adam optimizer?? @codebasics
Yes, I will be adding that in this series.
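(Until then, a minimal sketch of simply using it in tf.keras; 0.001 is just the default learning rate made explicit:)

```python
import tensorflow as tf

opt = tf.keras.optimizers.Adam(learning_rate=0.001)
# model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])
```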
thank you for your tutorial. I have learned much from it
Great tutorial, love the biryani example 😂😂
Ha ha.. I knew, Fahad, that some biryani lovers were going to like it for sure. Looks like you like biryani, correct? :)
@@codebasics Who doesn't like biryani 😂😂?? How much time does it take you to make these amazing videos? I teach data science in Arabic, which is way harder than in English, because some terms don't have a proper translation and there is no source for data science in ARABIC!! One webinar takes me 5 days, and a video takes me around 2 days, including video editing. So how much time do these videos take you? Thanks again!
You are the best My Boss
Invaluable 👏
Glad you think so!
what a lecture omg
Sir, what is the keyboard that you are using for programming?
Any keyboard is OK, Karthik.
codebasics Thanks sir
Sir, one more question: is batch gradient descent also an artificial neural network?
Batch gradient descent is just a variant of gradient descent. It can be used in artificial neural networks as well as in other models trained by gradient descent, such as logistic regression.
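(In tf.keras the distinction shows up only in the batch_size argument to model.fit; a runnable sketch with tiny synthetic stand-in data:)

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for real training data.
X_train = np.random.rand(100, 60).astype('float32')
y_train = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(60,))])
model.compile(optimizer='sgd', loss='binary_crossentropy')

# Full-batch ("batch") gradient descent: one weight update per epoch.
model.fit(X_train, y_train, epochs=5, batch_size=len(X_train))

# Mini-batch gradient descent: many smaller updates per epoch.
model.fit(X_train, y_train, epochs=5, batch_size=32)
```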
This was so good, thank you!
Looks like you always get confused between input layer dimensions and hidden layer dimensions
Sir, please give an example with continuous-output regression or multi-class classification.
I loved your tutorials, brother. I have just one question for you: in every iteration, we have a new batch of data and the neurons are chosen at random, right?
From this I infer that neurons will learn different data and will not be biased towards certain data inputs, right?
yes that is the correct understanding
Why is dropout not used for the test and validation data?
Why use one-hot encoding to convert 'y' into integers? Can't we do that with a simple 0 and 1 conversion?? Can you clear this up, please?
Hi - if you, say, LabelEncode the y, the values will be 0 and 1, obviously, but the model may then treat the values as having some kind of numerical order, that is, 1 > 0. That is not what we want here; we want the model to learn M and R, not M > R or R > M. That is why one-hot encoding is the better option. There is a detailed video on this by sir, where he explains the logic of one-hot encoding very clearly. You can find it by searching YouTube for "One Hot Encode Codebasics".
Thanks.
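(A minimal sketch of that one-hot step with pandas, using a stand-in label column:)

```python
import pandas as pd

y = pd.Series(['R', 'M', 'M', 'R'], name='label')  # stand-in for the real column
y_onehot = pd.get_dummies(y)  # two indicator columns, 'M' and 'R', with no implied order
print(y_onehot)
```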
We are huge fans of yours.....
Really great explanation.
Thank You so much!
I'm getting a message that 'accuracy' is not defined.
Hi, when I ran the classification report after adding the dropout layers, I got slightly lower accuracy and F1 scores. Is this normal, or could it be that I made some mistake?
That is expected; adding a dropout layer often decreases accuracy slightly, but it is fine, as it generalizes your model so that it performs well in all conditions.
High accuracy is not always good; it can also mean your model is dealing with overfitting.
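(One quick way to check, assuming a fitted model compiled with an accuracy metric and the usual train/test split:)

```python
# Compare performance on data the model has seen vs. data it has not.
train_loss, train_acc = model.evaluate(X_train, y_train, verbose=0)
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
print(f"train acc: {train_acc:.3f}, test acc: {test_acc:.3f}")  # big gap => overfitting
```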
Thank you!
its awesome , thanks
Glad you like it!
Sir, I can't download the CSV file.
Check the video description, there is a CSV file path.
@@codebasics Thank you sir 😁
very good
Sir, can you please make videos on PyTorch?
Sure, it is on my to-do list, but first let me finish this series on TensorFlow. I am also working on a data structures series that I need to finish as well.
@@codebasics Sorry sir, my mistake, I meant PyCaret.
love u sir
Tea set or t-shirt 😂