Channel description, channel tags, video description tags, etc. Look into all of this. Did you alter any settings in your YT account? It is unbelievable that this channel has fewer subs than similar channels that aren't as good. Your content is undoubtedly top tier. I think there is some problem with the YT algorithm not surfacing this channel in search results.
Exactly, I agree with this. I like every video and comment on every video. There are definitely some issues, no doubt about that. I personally think the logo is very un-catchy. The colors are too dark for a visible logo; the combination of red, blue, and green is too dark. I understand the logo represents a blue "C" for Campus, a red "X" for X, and green as a picture of a campus, but it is completely invisible. I really want to see this channel grow.
The truth is, people go to channels that claim to teach all of machine learning, deep learning, and data science in a crash course. These long series haunt them 😅 That's why the viewership and subscriber count are low, from my perspective.
Amazing videos! By the way, sir, could you please add a 10-second black screen/end screen with music at the end of the video? That way I get time to like the video if I forget! Just feedback, if it may help in future videos. Thanks for choosing to teach!!
Really enjoyed your tutorial, sir. I have one doubt: how do I run all that code in PyCharm? Could you please help me with that? I am all done with the code for the IPL win probability project but don't know the PyCharm part. Please, sir, help me out... 🙏🙏🙏🙏
Bro, are you planning to teach the coding part of deep learning using TensorFlow, Keras, and PyTorch, so that we have a reliable source to learn from in a streamlined manner? Over the internet it's clumsy.
Very informative video. I think the loss function is also an important hyperparameter; selecting the right loss function may improve performance.
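To illustrate the point above (a toy sketch with hypothetical data, not code from the video): on data with a few gross outliers, a squared-error loss chases the outliers while an absolute-error loss stays near the true slope, so swapping the loss genuinely changes the model you end up with.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=100)
y = 2.0 * x + rng.normal(scale=0.2, size=100)   # true slope is 2
y[np.argsort(x)[-5:]] += 30.0                   # plant a few gross outliers

def fit_slope(loss_grad, lr=0.01, steps=2000):
    """Fit y ~ w*x by gradient descent with a pluggable loss gradient."""
    w = 0.0
    for _ in range(steps):
        r = w * x - y                           # residuals
        w -= lr * np.mean(loss_grad(r) * x)
    return w

w_mse = fit_slope(lambda r: 2.0 * r)            # squared-error gradient
w_mae = fit_slope(lambda r: np.sign(r))         # absolute-error (sub)gradient
print(w_mse, w_mae)                             # MSE slope is pulled well above 2
```

Here the MSE fit overshoots the true slope because the outliers dominate the squared residuals, while the MAE fit lands close to 2.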
Great content, sir. Sambhavami yuge yuge...
Great content sir👌 👏 👍
Very good explanation..
Sir please also add the part 2 of machine learning interview questions..🙏🙏
Sir, superb video... kindly point me to some resources on hyperparameter tuning of neural networks in MATLAB.
Very well explained sir.
Hats off to your hard work, sir... Thank you so much for the DL series ❤
Well explained...........
Please cover underfitting problems as well.
Very good tutorial. Thank you very much for such a good tutorial.
Thank you so much sir 🙏🙏🙏
thank u sir
Super knowledge... Thanks.. a ton...
Bro, please update the NLP interview playlist every week.
The best 👍💯 thanks🙏
Thanks ❤
Sir, how do you approach new ML/DL topics when you are learning them for the first time?
Brother, you are great.. :)
Incredible!
11:31 - please cover learning rate in detail, bro.
You are the best.
Back to the basics
very nice explanation...
Amazing.....
Thank You Sir.
Why is the exploding gradient problem not seen in ANNs when a ReLU-like activation function is used?
thank you sir
Sir, please update NLP and DL playlists
Why did you stop the data science interview series, sir???
Gotta love how all the text is in English, the intro is in English, and then it turns to Hindi 🙃
thank you so much
incredible
ok thank you sir
I'm a bit confused. You said that stochastic uses only one row per update, which means a batch size equal to one, so that's the fastest? Then while explaining mini-batch you said that a smaller batch size is slower. Please, someone correct me if I'm wrong.
Not at all, stochastic is slower.
Answering my old self: stochastic gradient descent has a faster iteration speed but a slower convergence speed.
@prashantmaurya9635 thank you bro 😄
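The distinction in the replies above can be checked numerically: with batch size 1 each update is very cheap (one row), but the updates are noisy, so more of them are needed to converge; larger batches give fewer, heavier, more accurate updates. A minimal NumPy sketch on hypothetical linear-regression data (not code from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

def sgd(batch_size, lr=0.05, epochs=30):
    """Mini-batch gradient descent on linear regression (MSE loss)."""
    w = np.zeros(3)
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)            # reshuffle each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

w_stochastic = sgd(batch_size=1)    # 200 cheap, noisy updates per epoch
w_minibatch  = sgd(batch_size=32)   # ~7 heavier, smoother updates per epoch
print(w_stochastic, w_minibatch)    # both approach true_w
```

Both reach roughly the same solution here; the trade-off is in how many updates each needs and how much each update costs, which is exactly the iteration-speed vs convergence-speed distinction.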
Thank you.
great
We are waiting for 20K subscribers 🥳
finished watching
thanks sir
you are the best
love you sir
From Pakistan.
Doing God's work!!
Thanks
Revising my concepts.
August 12, 2023😅
Sir, please do English videos too.. these videos are precious!!
thanks
💙
❤
Finally hass...
Don't leave YouTube, please, don't even joke about it... 🙏
Good, but you missed underfitting.
Goat
best
Bro, did you miss the exploding gradient problem?
He covered it a bit at the end of the last video but said he would cover it properly during RNNs.
Exploding gradients happen in sequences: as time goes on, the gradients become too large because the contributions from all the time steps get multiplied together. To mitigate it we can use He initialization.
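A rough way to see the idea in the comment above (a toy sketch, not the instructor's code): a signal pushed through many ReLU layers gets multiplied by a weight matrix at every step, and the same multiplicative effect drives gradients during backprop. With too-large initial weights the norm blows up; with He-scaled weights (std = sqrt(2 / fan_in)) it stays roughly constant.

```python
import numpy as np

rng = np.random.default_rng(42)
d, T = 64, 50                   # layer width, number of layers/steps

def final_norm(std):
    """Push a unit-norm signal through T ReLU layers whose weights
    are drawn with the given standard deviation; return its norm."""
    h = rng.normal(size=d)
    h /= np.linalg.norm(h)
    for _ in range(T):
        W = rng.normal(scale=std, size=(d, d))
        h = np.maximum(0.0, W @ h)          # ReLU activation
    return np.linalg.norm(h)

exploded = final_norm(std=0.5)              # too-large init: norm explodes
stable   = final_norm(std=np.sqrt(2.0 / d)) # He init: norm stays O(1)
print(exploded, stable)
```

The He scaling is derived precisely so that, with ReLU zeroing about half the units, the expected norm is preserved from one layer to the next.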
God level teacher ❤️🤌🏻
,🙏
Thank You Sir 🙏