How to Improve the Performance of a Neural Network

  • Published: 4 Feb 2025

Comments • 68

  • @akzork
    @akzork 2 years ago +52

    Channel description, channel tags, video description tags, etc.: look into all of this. Did you alter any settings in your YT account? It is unbelievable that this channel has fewer subs than similar channels that are not as good. Your content is undoubtedly top tier. I think there is some problem with the YT algorithm not picking this channel up in search results.

    • @rb4754
      @rb4754 8 months ago +1

      Exactly, I agree with this too. I am liking and commenting on every video. There are definitely some issues, no doubt about that. I personally think the logo is very un-catchy: the colors are too dark for a visible logo. The combination of red, blue, and green is too dark. I understand the logo represents a blue "C" for Campus, a red "X" for X, and a green picture of a campus, but it is completely invisible. I really want to see this channel grow.

    • @semifarzeon
      @semifarzeon 3 months ago +2

      The truth is, people go to a channel that claims to teach all of machine learning, deep learning, and data science in a crash course. Long series like this daunt them 😅 and that's why the viewership and subscriber count are low, from my perspective.

  • @vimalshrivastava6586
    @vimalshrivastava6586 10 months ago +3

    Very informative video. I think the loss function is also an important hyperparameter; selecting the right loss function may improve performance.

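To illustrate the commenter's point about loss-function choice (a hypothetical NumPy sketch, not code from the video): for a classification task, binary cross-entropy punishes confident mistakes far more sharply than mean squared error does, which changes the gradients the network trains on.

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: symmetric, relatively gentle penalty
    return float(np.mean((y_true - y_pred) ** 2))

def bce(y_true, y_pred, eps=1e-12):
    # Binary cross-entropy: heavily penalizes confident wrong predictions
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1 - y_true) * np.log(1 - y_pred)))

y_true = np.array([1.0, 0.0, 1.0])
confident_wrong = np.array([0.01, 0.99, 0.99])  # badly wrong on first two
slightly_wrong  = np.array([0.6, 0.4, 0.9])

# Cross-entropy separates these two cases far more sharply than MSE,
# which is one reason it is the usual choice for classification.
print(mse(y_true, slightly_wrong), mse(y_true, confident_wrong))
print(bce(y_true, slightly_wrong), bce(y_true, confident_wrong))
```
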
  • @rajfekar1514
    @rajfekar1514 1 month ago

    Great content sir. For sambhavami yuge yuge...

  • @paragbharadia2895
    @paragbharadia2895 5 months ago +2

    Amazing videos! By the way, sir, could you please add a 10-second black screen / end screen with music at the end of the video? That gives time to like the video if I forget! Just feedback, if it may help in future videos! Thanks for choosing to teach!!

  • @jayantsharma2267
    @jayantsharma2267 2 years ago +3

    Great content sir👌 👏 👍

  • @akshaychaskar3483
    @akshaychaskar3483 2 years ago +5

    Very good explanation..
    Sir, please also add part 2 of the machine learning interview questions..🙏🙏

  • @faizanhassan8879
    @faizanhassan8879 1 year ago +1

    Sir, superb video... kindly suggest some resources on hyperparameter tuning of neural networks in MATLAB.

  • @sujithsaikalakonda4863
    @sujithsaikalakonda4863 1 year ago +2

    Very well explained sir.

  • @Sandesh.Deshmukh
    @Sandesh.Deshmukh 2 years ago +1

    Hats off to your hard work, sir... Thank you so much for the DL series❤

  • @rb4754
    @rb4754 8 months ago +1

    Well explained...

  • @rafibasha4145
    @rafibasha4145 2 years ago +6

    Please cover underfitting problems as well.

  • @SurajitDas-gk1uv
    @SurajitDas-gk1uv 1 year ago +2

    Very good tutorial. Thank you very much for such a good tutorial.

  • @rockykumarverma980
    @rockykumarverma980 1 month ago

    Thank you so much sir 🙏🙏🙏

  • @TanushDang
    @TanushDang 13 days ago

    thank u sir

  • @amitattafe
    @amitattafe 2 years ago +2

    Super knowledge... Thanks.. a ton...

  • @rafibasha4145
    @rafibasha4145 2 years ago +8

    Bro, please update the NLP interview playlist every week.

  • @jammascot
    @jammascot 2 years ago +1

    The best 👍💯 thanks🙏

  • @subharanjanmohanty838
    @subharanjanmohanty838 2 years ago +2

    Really enjoyed your tutorial.
    Sir, I have one doubt: how do I run all that code in PyCharm? Could you please help me with that? I am all done with the code for the IPL win probability project but don't know the PyCharm part. Please help me out, sir...🙏🙏🙏🙏

  • @RR-rg5lr
    @RR-rg5lr 1 month ago

    Thanks ❤

  • @anujshrivastav7365
    @anujshrivastav7365 2 years ago +4

    Sir, how do you approach new ML/DL topics when you are learning them for the first time?

  • @jyotisharma1690
    @jyotisharma1690 10 months ago +1

    Brother, you are great.. :)

  • @virajkaralay8844
    @virajkaralay8844 1 year ago +1

    Incredible!

  • @rafibasha4145
    @rafibasha4145 2 years ago +5

    11:31, please cover the learning rate in detail, brother.

  • @sudhanshubhardwaj2241
    @sudhanshubhardwaj2241 3 months ago

    You are the best.

  • @elonmusk4267
    @elonmusk4267 5 months ago

    Back to the basics

  • @mahmud1115
    @mahmud1115 1 year ago

    very nice explanation...

  • @barunkaushik7015
    @barunkaushik7015 2 years ago +1

    Amazing.....

  • @ParthivShah
    @ParthivShah 9 months ago +1

    Thank You Sir.

  • @krishnakanthmacherla4431
    @krishnakanthmacherla4431 2 years ago +7

    Bro, are you planning to teach the coding part of deep learning using TensorFlow, Keras, or PyTorch, so that we have a reliable source to learn from in a streamlined manner? On the internet it's clumsy.

  • @akeshagarwal794
    @akeshagarwal794 1 year ago +1

    Why is the exploding gradient problem not seen in ANNs when a ReLU-like activation function is used?

  • @akshaykanavaje474
    @akshaykanavaje474 2 years ago +2

    thank you sir

  • @shanu9494
    @shanu9494 1 year ago +1

    Sir, please update the NLP and DL playlists.

  • @akshay_kale147
    @akshay_kale147 2 years ago +2

    Why did you stop the data science interview series, sir???

  • @larsiparsii
    @larsiparsii 2 years ago

    Gotta love how all the text is in English, the intro is in English, and then it turns to Hindi 🙃

  • @SleepeJobs
    @SleepeJobs 1 year ago +1

    thank you so much

  • @sachin-ll1by
    @sachin-ll1by 7 months ago

    incredible

  • @Vaishnavi-dz6ef
    @Vaishnavi-dz6ef 18 days ago

    ok thank you sir

  • @sachinpawar6339
    @sachinpawar6339 2 months ago

    I'm a bit confused. You said that stochastic uses only one row per update, which means a batch size of one, so it should be the fastest? But while explaining mini-batch you said that a smaller batch size is slower. Please correct me if I'm wrong.

    • @prashantmaurya9635
      @prashantmaurya9635 1 month ago

      Not at all, stochastic is slower.

    • @sachinpawar6339
      @sachinpawar6339 1 month ago

      Answering my old self: stochastic gradient descent has faster iteration speed but slower convergence speed.

    • @sachinpawar6339
      @sachinpawar6339 1 month ago

      @prashantmaurya9635 thank you bro 😄

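The trade-off discussed in this thread can be sketched in NumPy (a hypothetical illustration, not the video's code): with batch size 1 each update is cheap, but an epoch needs one update per row, while a mini-batch of 32 needs far fewer, vectorized updates, so the epoch itself finishes faster even though each individual step processes more rows.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3200, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=3200)

def epoch_updates(batch_size):
    """Run one epoch of gradient descent on a linear model and
    return how many parameter updates it performed."""
    w = np.zeros(10)
    n_updates = 0
    for start in range(0, len(X), batch_size):
        xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)  # MSE gradient on the batch
        w -= 0.01 * grad
        n_updates += 1
    return n_updates

# Stochastic (batch size 1): 3200 cheap updates per epoch.
# Mini-batch (batch size 32): only 100 vectorized updates per epoch.
print(epoch_updates(1), epoch_updates(32))  # → 3200 100
```

Per-iteration speed and convergence speed are different things: the single-row updates are noisy, so stochastic descent typically needs more epochs to converge, which is the distinction the reply above draws.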
  • @pavangoyal6840
    @pavangoyal6840 2 years ago

    Thank you.

  • @apudas7
    @apudas7 4 months ago

    great

  • @RajaKumar-yx1uj
    @RajaKumar-yx1uj 2 years ago +2

    We are waiting for 20K subscribers🥳

  • @sandipansarkar9211
    @sandipansarkar9211 2 years ago +1

    finished watching

  • @sanchitdeepsingh9663
    @sanchitdeepsingh9663 1 year ago

    thanks sir

  • @Sara-fp1zw
    @Sara-fp1zw 2 years ago +1

    you are the best

  • @technicalhouse9820
    @technicalhouse9820 1 year ago

    love you sir
    From pak

  • @herambvaidya
    @herambvaidya 1 year ago

    Doing God's work!!

  • @sanjaisrao484
    @sanjaisrao484 1 year ago

    Thanks

  • @ahmadtalhaansari4456
    @ahmadtalhaansari4456 1 year ago +1

    Revising my concepts.
    August 12, 2023😅

  • @abhin4747
    @abhin4747 2 years ago +1

    Sir, please do English videos too.. these videos are precious!!

  • @ChotuSinghRajput4
    @ChotuSinghRajput4 1 year ago

    thanks

  • @znyd.
    @znyd. 8 months ago

    💙

  • @alokmishra5367
    @alokmishra5367 1 year ago

  • @mr.deep.
    @mr.deep. 2 years ago +1

    Finally hass...

  • @abhishekpattanshetti6771
    @abhishekpattanshetti6771 2 years ago +2

    Don't leave YouTube please, don't even joke about it...🙏

  • @Ishant875
    @Ishant875 1 year ago

    Good, but you missed underfitting.

  • @binayakdey2497
    @binayakdey2497 1 month ago

    Goat

  • @yashjain6372
    @yashjain6372 1 year ago

    best

  • @rafibasha4145
    @rafibasha4145 2 years ago +4

    Bro, did you miss the exploding gradient problem?

    • @AmitUtkarsh99
      @AmitUtkarsh99 1 year ago +1

      He covered it a bit at the end of the last video but said he would cover it properly during RNNs.

    • @susprogramming9000
      @susprogramming9000 1 year ago +1

      Exploding gradients mostly happen in sequence models: as time steps accumulate, the gradients become too large because all the sequence steps feed into one vector. To mitigate it we can use He initialization.

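The He initialization mentioned in the reply above can be sketched as follows (a hypothetical NumPy example, assuming ReLU layers; not code from the video). Scaling each layer's weights by sqrt(2 / fan_in) keeps the activation variance roughly constant as depth grows, instead of blowing up or dying out.

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(fan_in, fan_out):
    """He (Kaiming) initialization for ReLU layers: weights drawn
    from N(0, 2 / fan_in), which roughly preserves activation
    variance from layer to layer."""
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

# Pass a signal through 20 ReLU layers: with He init the activation
# scale stays the same order of magnitude across all layers.
x = rng.normal(size=(64, 256))
for _ in range(20):
    x = np.maximum(0.0, x @ he_init(256, 256))
print(float(x.std()))
```
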
  • @CODEToGetHer-rq2nf
    @CODEToGetHer-rq2nf 1 year ago +1

    God-level teacher ❤️🤌🏻

  • @ajitkumarpatel2048
    @ajitkumarpatel2048 2 years ago

    🙏

  • @sai_bhargav
    @sai_bhargav 6 days ago

    Thank You Sir 🙏