Loss Functions in Deep Learning | Deep Learning | CampusX

  • Published: 28 Jun 2024
  • In this video, we'll understand the concept of Loss Functions and their role in training neural networks. Join me for a straightforward explanation to grasp how these functions impact model performance.
    ============================
    Do you want to learn from me?
    Check my affordable mentorship program at : learnwith.campusx.in
    ============================
    📱 Grow with us:
    CampusX's LinkedIn: / campusx-official
    CampusX on Instagram for daily tips: / campusx.official
    My LinkedIn: / nitish-singh-03412789
    Discord: / discord
    👍If you find this video helpful, consider giving it a thumbs up and subscribing for more educational videos on data science!
    💭Share your thoughts, experiences, or questions in the comments below. I love hearing from you!
    ✨ Hashtags✨
    #DeepLearning #LossFunctions #NeuralNetworks #MachineLearning #AI #LearningBasics #SimplifiedLearning #modeltraining
    ⌚Time Stamps⌚
    00:00 - Intro
    01:09 - What is a loss function?
    11:08 - Loss functions in deep learning
    14:20 - Loss function vs cost function
    24:35 - Advantages/Disadvantages
    59:13 - Outro

Comments • 115

  • @kumarabhishek1064
    @kumarabhishek1064 2 years ago +23

    When will you cover RNN, encoder-decoder & transformers?
    Also, if you could make mini projects on these topics, it would be great.
    Keep doing this great work of knowledge sharing, hope your tribe grows more. 👍

  • @AhmedAli-uj2js
    @AhmedAli-uj2js 1 year ago +5

    Every word and every minute of what you say is worth a lot!

  • @pratikghute2343
    @pratikghute2343 1 year ago +2

    The most underrated channel I have ever seen on YouTube! Best content seen so far....... Thanks a lot

  • @anuradhabalasubramanian9845
    @anuradhabalasubramanian9845 1 year ago +7

    Fantastic Explanation Sir ! Absolutely brilliant ! Way to go Sir ! Thank you so much for the crystal clear explanation

  • @anitaga469
    @anitaga469 1 year ago +8

    Good Content, great explanation and an exceptionally gifted teacher. Learning is truly made enjoyable by your videos. Thank you for your hard work and clear teaching Nitish Sir.

  • @abchpatel4745
    @abchpatel4745 5 months ago +2

    One day this channel will become the most popular one for deep learning ❤️❤️

  • @HARSHKUMAR-cm7ek
    @HARSHKUMAR-cm7ek 7 months ago +4

    Your content delivery is truly outstanding, sir. Although the numbers don't do justice to your teaching talent, let me tell you, I came here after seeing many of the paid courses and became fond of your teaching method. So please don't stop making such fabulous videos. I am pretty sure that this channel will be among the top channels for ML and data science soon !!

  • @HirokiKudoGT
    @HirokiKudoGT 11 months ago

    This is the best explanation of all the basics of losses; all doubts are cleared. Thank you so much for this video.

  • @tanmayshinde7853
    @tanmayshinde7853 2 years ago +4

    These loss functions are the same as those taught in machine learning; the differences are in the Huber, binary, and categorical loss functions.

  • @abhaykumaramanofficial
    @abhaykumaramanofficial 1 year ago +1

    Great content for me....now everything about loss functions is clear.......thank you

  • @paragvachhani4643
    @paragvachhani4643 1 year ago +12

    My morning begins with CampusX...

    • @santoshpal8612
      @santoshpal8612 5 months ago +1

      Gentleman, you are on the right track

  • @FaizKhan-zu2kp
    @FaizKhan-zu2kp 11 months ago +3

    Please continue the "100 days of deep learning" series, sir, it's a humble request. This playlist and this channel are the best on all of YouTube for machine learners ❤❤❤❤

  • @AlAmin-xy5ff
    @AlAmin-xy5ff 1 year ago

    Sir, you are really amazing. I have learned a lot of things from your YouTube channel.

  • @talibdaryabi9434
    @talibdaryabi9434 1 year ago

    I wanted this video and got it. Thank you.

  • @practicemail3227
    @practicemail3227 11 months ago +1

    Was able to understand each and every word and concept just because of you, sir. Your teaching has brought me to a place where I can understand such concepts easily. Thank you very much, sir. Really appreciate your hard work and passion. ❣🌼🌟

  • @barunkaushik7015
    @barunkaushik7015 1 year ago

    Such a wonderful learning experience

  • @paruParu-rc1bu
    @paruParu-rc1bu 1 year ago

    With all respect....thank you very much ❤

  • @palmangal
    @palmangal 9 months ago

    It was a great explanation. Thank you so much for such amazing videos.

  • @Tusharchitrakar
    @Tusharchitrakar 2 months ago

    Great lecture as usual. Just one small clarification: binary cross-entropy is convex (though without a closed-form solution), hence it has a single global minimum and no other local minima. This can be proved with simple calculus by checking whether the second derivative is always greater than 0. So the mention of multiple local minima is not right. But thanks for your comprehensive material, which is helping us learn such complex topics with ease!
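
    For reference, that convexity check works out as follows for a single sample, viewed as a function of the logit z with a sigmoid output (a sketch; with hidden layers the network's overall loss surface is generally still non-convex in the weights, even though BCE itself is convex in z):

    ```latex
    % BCE for one sample with label y and prediction p = \sigma(z)
    L(z) = -\bigl[\, y \log \sigma(z) + (1 - y) \log(1 - \sigma(z)) \,\bigr]
    \frac{dL}{dz} = \sigma(z) - y
    \frac{d^2L}{dz^2} = \sigma(z)\,(1 - \sigma(z)) > 0 \quad \text{for all } z
    ```

    A strictly positive second derivative everywhere means L is convex in z, so there is a single (global) minimum.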

  • @RanjitSingh-rq1qx
    @RanjitSingh-rq1qx 1 year ago

    Great work sir. Amazing 😍

  • @stoic_sapien1
    @stoic_sapien1 8 days ago

    44:52 Binary cross-entropy loss is a convex function; it has only one minimum, the global one.

  • @GamerBoy-ii4jc
    @GamerBoy-ii4jc 2 years ago

    Amazing as always, Sir!

  • @farhadkhan3893
    @farhadkhan3893 1 year ago

    thank you for your hard work

  • @sambitmohanty1758
    @sambitmohanty1758 2 years ago

    Great video sir as expected

  • @parikshitshahane6799
    @parikshitshahane6799 4 months ago

    You have very, very excellent teaching skills, Sir! It's like a college senior explaining a concept to me in a hostel room.

  • @trueindian1340
    @trueindian1340 1 year ago

    Amazing sir 🙏🏻

  • @SumanPokhrel0
    @SumanPokhrel0 1 year ago

    Beautiful explanation

  • @ShubhamSharma-gs9pt
    @ShubhamSharma-gs9pt 2 years ago

    this playlist is a 💎💎💎💎💎

  • @RajaKumar-yx1uj
    @RajaKumar-yx1uj 2 years ago

    Welcome Back Sir 🤟

  • @safiullah353
    @safiullah353 1 year ago

    How beautiful this is 🥰

  • @IRFANSAMS
    @IRFANSAMS 2 years ago

    Awesome sir!

  • @wahabmamond4368
    @wahabmamond4368 7 months ago +3

    Learning DL and Hindi together, respect from Afghanistan Sir!

  • @Shisuiii69
    @Shisuiii69 4 months ago

    Thanks for the timestamps, they're really helpful

  • @amitmishra1303
    @amitmishra1303 5 months ago

    Nowadays my mornings and nights end with your lectures, sir 😅.. thanks for putting in so much effort.

  • @jayantsharma2267
    @jayantsharma2267 2 years ago

    great content

  • @narendraparmar1631
    @narendraparmar1631 4 months ago

    Very well explained, Thanks

  • @rb4754
    @rb4754 27 days ago

    Mindboggling !!!!!!!!!!!!!!!!!!

  • @OguriRavindra
    @OguriRavindra 3 months ago

    Hi, I think the Huber loss example plot @ 36:59 shows a classification example rather than a regression one; a regression line should pass through the data points instead of separating them.

  • @rohansingh6329
    @rohansingh6329 4 months ago

    awesome man just amazing ... ! ! !

  • @uzairrehman5765
    @uzairrehman5765 8 months ago

    Great content!

  • @Sara-fp1zw
    @Sara-fp1zw 2 years ago

    Thank you!!!

  • @faheemfbr9156
    @faheemfbr9156 10 months ago

    Very well explained

  • @rashidsiddiqui4502
    @rashidsiddiqui4502 3 months ago

    thank you so much sir, clear explanation

  • @uddhavsangle2219
    @uddhavsangle2219 11 months ago

    nice explanation sir
    thank you so much

  • @ariondas7415
    @ariondas7415 5 days ago

    If the difference (yᵢ − ŷᵢ) is a decimal between −1 and 1, squaring diminishes the loss instead of magnifying it (e.g. 0.5² = 0.25), so maybe a refinement would take this into account.

  • @mrityunjayupadhyay7332
    @mrityunjayupadhyay7332 11 months ago

    Amazing

  • @narendersingh6492
    @narendersingh6492 1 month ago

    This is so very important

  • @pavangoyal6840
    @pavangoyal6840 2 years ago

    Thank you

  • @ANKUSHKUMAR-jr1pf
    @ANKUSHKUMAR-jr1pf 1 year ago

    At timestamp 44:40 you said that binary cross-entropy may have multiple minima, but binary cross-entropy is a convex function, so I think it won't have multiple minima.

  • @ParthivShah
    @ParthivShah 2 months ago +1

    Thank You Sir.

  • @user-mf6vv5lc5h
    @user-mf6vv5lc5h 4 months ago

    At 21:06 [mean squared error]: when calculating the total error as a sum of (y − ŷ), some values may be negative and cancel out error (which we don't want), which is why we square after subtracting, as you said. My doubt: can't we just make the negative values positive (take the absolute value)? Then there would be no need to square. Please explain. Thank you. :)
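
    Taking the absolute value instead of squaring is indeed a standard alternative: that is exactly the mean absolute error (MAE) loss. A minimal sketch of the two, assuming plain NumPy; the practical trade-off is that MAE is not differentiable at 0 and gives constant-magnitude gradients, while MSE is smooth and penalizes large errors more heavily:

    ```python
    import numpy as np

    y_true = np.array([3.0, 5.0, 2.5])
    y_pred = np.array([2.5, 5.0, 4.0])

    errors = y_true - y_pred       # some entries may be negative
    mse = np.mean(errors ** 2)     # squaring makes every term positive
    mae = np.mean(np.abs(errors))  # the absolute value does the same job

    print(mse, mae)  # 0.8333..., 0.6666...
    ```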

  • @kindaeasy9797
    @kindaeasy9797 5 days ago

    22:25 the error comes out in squared units (unit²)

  • @nxlamik1245
    @nxlamik1245 4 months ago

    I am enjoying your videos like a web series, sir

  • @manashkumarbhadra6208
    @manashkumarbhadra6208 1 month ago

    Great work

  • @partharora6023
    @partharora6023 2 years ago

    Sir, carry on with this series

  • @lakshya8532
    @lakshya8532 11 months ago

    One disadvantage of MSE that I can figure out: if there are multiple local minima, the MSE loss function can lead to a local minimum instead of the global minimum.

  • @hey.Sourin
    @hey.Sourin 3 months ago

    Thank you sir 😁😊

  • @sumitprasad035
    @sumitprasad035 11 months ago

    🦸‍♂Thank you Bhaiya ...

  • @kindaeasy9797
    @kindaeasy9797 5 days ago

    amazing lectureeeeeeee

  • @shantanuekhande4788
    @shantanuekhande4788 2 years ago

    Great explanation. Can you tell me why we need bias in a NN and how it is useful?

  • @74kumarrohit
    @74kumarrohit 3 months ago

    Can you please create videos for the remaining loss functions, for autoencoders, GANs, and Transformers as well? Thanks

  • @Avsjagannath
    @Avsjagannath 5 months ago

    Excellent teaching skills. Sir, please provide a notes PDF

  • @aakiffpanjwani1089
    @aakiffpanjwani1089 3 months ago

    Can we use a step function as the activation for the last layer / prediction node when doing a classification problem with binary cross-entropy, for 0 and 1 outputs?
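
    A step function at the output would break binary cross-entropy: the loss expects a probability strictly between 0 and 1, a hard wrong 0/1 prediction produces log(0), and the step's gradient is zero almost everywhere, so backprop gets no signal; that is why a sigmoid is used instead. A small sketch of the issue, assuming NumPy:

    ```python
    import numpy as np

    def bce(y, p, eps=1e-12):
        p = np.clip(p, eps, 1 - eps)  # guard against log(0)
        return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    y = np.array([1.0, 0.0])
    sigmoid_out = np.array([0.9, 0.2])  # smooth probabilities -> finite loss, useful gradients
    step_out = np.array([0.0, 1.0])     # hard 0/1 outputs, here both wrong

    print(bce(y, sigmoid_out))  # ~0.16
    print(bce(y, step_out))     # ~27.6, i.e. -log(eps): log(0) without the clip
    ```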

  • @bhojpuridance3715
    @bhojpuridance3715 9 months ago

    Thanks sir

  • @zkhan2023
    @zkhan2023 2 years ago

    Thanks Sir

  • @sanchitdeepsingh9663
    @sanchitdeepsingh9663 7 months ago

    thanks sir

  • @AkashBhandwalkar
    @AkashBhandwalkar 2 years ago +5

    Superb video, Sir! Can you tell me which stylus you're using? And what is the name of the drawing/writing pad that you use? I want to buy one too.

  • @techsavy5669
    @techsavy5669 2 years ago +2

    Great concise video. Loved it.
    A small question 💡:
    Sometimes we do drop='first' to remove the redundant first column during one-hot encoding. So does that make a difference while using either of these categorical losses!? (A sketch follows at the end of this thread.)

    • @pratikghute2343
      @pratikghute2343 1 year ago

      I think this happens automatically, or isn't needed, because otherwise we couldn't get the loss for that category

    • @AmitUtkarsh99
      @AmitUtkarsh99 8 months ago

      Yes, it affects the model, because you should keep the number of parameters as small as possible for an optimised model. But we don't always; it depends on the variables or input. For example, 2 categories can be represented by just one binary variable (2^1 = 2); 3 categories require at least 2 variables, and since 2^2 = 4, we can drop one column.
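
    On the drop='first' question above: that option is meant for encoding input features, where the dropped column is redundant (e.g. to avoid multicollinearity in linear models). The one-hot targets fed to categorical cross-entropy should keep all K columns, because the loss reads off the log-probability of the true class's column. A sketch, assuming the tf.keras utility to_categorical:

    ```python
    import numpy as np
    from tensorflow.keras.utils import to_categorical

    y = np.array([0, 2, 1])                      # integer class labels
    y_onehot = to_categorical(y, num_classes=3)  # keep all 3 target columns
    print(y_onehot)
    # [[1. 0. 0.]
    #  [0. 0. 1.]
    #  [0. 1. 0.]]
    # dropping a target column would leave that class with no loss term
    ```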

  • @ManasNandMohan
    @ManasNandMohan 25 days ago

    Awesome

  • @sandipansarkar9211
    @sandipansarkar9211 1 year ago

    finished watching

  • @vinayakchhabra7208
    @vinayakchhabra7208 1 year ago

    best

  • @tejassrivastava6971
    @tejassrivastava6971 1 year ago +1

    Wouldn't categorical and sparse cross-entropy become the same?
    After OHE, all log terms become zero except the one for the true class, which gives the same result as sparse.
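
    Right: with one-hot (hard) labels the two losses compute the same number; sparse cross-entropy just takes the integer label directly and skips the explicit OHE step. A quick check, assuming TensorFlow/Keras:

    ```python
    import numpy as np
    import tensorflow as tf

    y_int = np.array([2, 0])                        # integer labels
    y_oh = tf.keras.utils.to_categorical(y_int, 3)  # one-hot labels
    y_pred = np.array([[0.1, 0.3, 0.6],
                       [0.8, 0.1, 0.1]])            # predicted probabilities

    cce = tf.keras.losses.CategoricalCrossentropy()(y_oh, y_pred)
    scce = tf.keras.losses.SparseCategoricalCrossentropy()(y_int, y_pred)
    print(float(cce), float(scce))  # identical values
    ```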

  • @kindaeasy9797
    @kindaeasy9797 5 days ago

    easyy thankssss

  • @vinayakchhabra7208
    @vinayakchhabra7208 1 year ago

    That was fun!

  • @ashwinjain5566
    @ashwinjain5566 10 months ago

    At 36:27, shouldn't the line be nearly perpendicular to what you drew? Seems like a case of Simpson's paradox.

  • @alastormoody1282
    @alastormoody1282 3 months ago

    Respect

  • @lonehawk4096
    @lonehawk4096 2 years ago

    The ML MICE sklearn video is still pending, sir, please make that video; the other playlists are also very helpful, thanks for all the content.

  • @vishalpatil228
    @vishalpatil228 6 months ago

    43:32
    cost function = (1/n) ∑ᵢ loss(yᵢ, ŷᵢ)
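
    In code, the distinction is just per-row versus averaged over the batch (a tiny sketch with squared-error loss, assuming NumPy):

    ```python
    import numpy as np

    y_true = np.array([3.0, 5.0, 2.5])
    y_pred = np.array([2.5, 5.0, 4.0])

    loss_per_row = (y_true - y_pred) ** 2  # loss: one value per example
    cost = loss_per_row.mean()             # cost: average over all n examples
    print(loss_per_row, cost)
    ```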

  • @suriyab8143
    @suriyab8143 1 year ago +1

    Sir, which tool are you using for the explanation in this video?

  • @shashankshekharsingh9336
    @shashankshekharsingh9336 1 month ago

    thank you sir for this great content.
    13/05/24

  • @Sandesh.Deshmukh
    @Sandesh.Deshmukh 2 years ago

    As usual crystal clear explanation Sir ji❤❤🙌 @CampusX

  • @abhisheksinghyadav4970
    @abhisheksinghyadav4970 1 year ago

    Please share the whiteboard, @CampusX

  • @KiyotakaAyanokoji1
    @KiyotakaAyanokoji1 10 months ago +1

    What is the difference between:
    1) updating the weights and biases on each row, for all epochs, and
    2) updating them for each batch (all rows together), for all epochs?
    Can you tell scenarios where one is better than the other?
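
    Roughly, 1) is stochastic gradient descent and 2) is batch gradient descent: per-row updates are noisy but cheap per step and can help escape shallow local minima; batch updates are smoother but each step costs a full pass over the data. A sketch of the two loops for a plain linear model, assuming NumPy:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([2.0, -1.0, 0.5]) + 1.0
    lr = 0.01

    # 1) stochastic: one update per row
    w, b = np.zeros(3), 0.0
    for epoch in range(10):
        for xi, yi in zip(X, y):
            err = (xi @ w + b) - yi
            w -= lr * err * xi          # gradient of 0.5 * err**2 w.r.t. w
            b -= lr * err

    # 2) batch: one update per epoch using all rows together
    w, b = np.zeros(3), 0.0
    for epoch in range(10):
        err = (X @ w + b) - y           # vector of per-row errors
        w -= lr * (X.T @ err) / len(y)  # averaged gradient
        b -= lr * err.mean()
    ```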

  • @sam-mv6vj
    @sam-mv6vj 2 years ago

    Thank you sir for resuming

  • @ahmadtalhaansari4456
    @ahmadtalhaansari4456 11 months ago

    Revising my concepts.
    August 04, 2023 😅

  • @KaranGupta-kv6wq
    @KaranGupta-kv6wq 3 months ago

    Can someone explain how the 0.3, 0.6, 0.1 are coming @ 52:37? I want to know how to get these values and which formula is used
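
    Those three values look like the softmax output of the network's last layer: each lies in (0, 1) and they sum to 1. The exact inputs at 52:37 aren't shown, so the logits below are hypothetical, chosen to land near 0.3 / 0.6 / 0.1:

    ```python
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())  # subtract the max for numerical stability
        return e / e.sum()

    logits = np.array([1.0, 1.7, -0.1])  # hypothetical raw scores
    print(softmax(logits))               # ~[0.30, 0.60, 0.10]
    ```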

  • @user-xw1eu7jx7n
    @user-xw1eu7jx7n 3 months ago

    great

  • @anishkhatiwada2502
    @anishkhatiwada2502 5 months ago

    please put timestamps for each topic in this video.

  • @amitbaderia6385
    @amitbaderia6385 2 months ago +1

    Please take care of the background noise

  • @Pipython
    @Pipython 1 month ago

    If you explain like this, then of course I'll have to hit like....

  • @vikeshdas3909
    @vikeshdas3909 2 years ago

    First viewer

  • @spyzvarun5478
    @spyzvarun5478 11 months ago

    Isn't log loss convex?

  • @vikeshdas3909
    @vikeshdas3909 2 years ago

    The blackboard was better

  • @praveendeena1493
    @praveendeena1493 2 years ago

    Hi sir,
    I want a complete end-to-end project video. Please share one.

  • @8791692532
    @8791692532 2 years ago +1

    Why have you stopped posting videos in this playlist?

    • @campusx-official
      @campusx-official  2 years ago +3

      Creating the next one right now... Backpropagation

    • @8791692532
      @8791692532 2 years ago +1

      @@campusx-official Please upload at least one video every 3-4 days to maintain continuity. By the way, this playlist is going to be a game changer for most learners, because comprehensive video content for deep learning is not available on YouTube!
      Your method of teaching is very simple and understandable. Thank you for providing credible content!

  • @Lucifer-wd7gh
    @Lucifer-wd7gh 2 years ago +2

    Time series in detail, please 😓

    • @geekyprogrammer4831
      @geekyprogrammer4831 2 years ago +3

      Let him finish this series first. Why force him like this???

    • @namanmodi7536
      @namanmodi7536 2 years ago +2

      @@geekyprogrammer4831 true brother

  • @assetss
    @assetss 1 year ago

    Bird sounds are coming through in the background

  • @mrarul1
    @mrarul1 4 months ago

    Avoid speaking Hindi in the video

  • @GovindSingh-lg5qs
    @GovindSingh-lg5qs 1 month ago +1

    The Thalapathy of data science