Cost Function and Loss Function in Data Science | Cost function machine learning | Regression Cost

  • Published: 24 Dec 2024

Comments • 103

  • @kavyanagesh8304
    @kavyanagesh8304 8 months ago +1

    Thank you so much! You're a great teacher, Mr. Aman! 🙏

  • @amalaj4988
    @amalaj4988 3 years ago +13

    Regression loss: L1 loss, L2 loss, Huber loss
    Classification loss: Hinge loss (SVM), Cross entropy (binary CE (sigmoid), categorical CE (softmax), sparse categorical CE)

    • @raghavverma120
      @raghavverma120 2 years ago

      Tell me bro, where do we use softmax?
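
For the question above: softmax is typically used in the output layer of a multi-class classifier, turning raw scores into probabilities that feed categorical cross-entropy. A minimal sketch in plain Python, with made-up logit values:

```python
import math

def softmax(logits):
    """Convert raw scores (logits) into probabilities that sum to 1."""
    # Subtract the max for numerical stability (doesn't change the result).
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a 3-class problem: the highest logit
# gets the highest probability, and the outputs sum to 1.
probs = softmax([2.0, 1.0, 0.1])
print(probs)
```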

  • @divyankverma
    @divyankverma 2 years ago +4

    In 10 minutes you taught me more than my lecturer did in 4 months

  • @ajiteshthawait4771
    @ajiteshthawait4771 2 years ago +3

    Notes: A cost/loss function associates a cost with a decision (for example, Google Maps choosing routes by cost). Loss = difference between actual and predicted values; cost = sum of all losses. MAE is a cost function for linear regression: MAE is the L1 loss, MSE is the L2 loss. Loss is per observation; cost is for the whole dataset. Our goal is to minimize the cost function. These are the loss functions for regression. Any algorithm that uses optimization uses a loss function.
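
To make the loss-vs-cost distinction in the notes above concrete, here is a small plain-Python sketch with made-up numbers:

```python
# Illustrative data: loss is computed per observation,
# cost aggregates the losses over the whole dataset.
y_actual = [3.0, 5.0, 7.0]
y_pred   = [2.5, 5.5, 9.0]

# Per-observation losses.
abs_losses = [abs(a - p) for a, p in zip(y_actual, y_pred)]
sq_losses  = [(a - p) ** 2 for a, p in zip(y_actual, y_pred)]

# Aggregated costs.
mae = sum(abs_losses) / len(abs_losses)  # L1 cost
mse = sum(sq_losses) / len(sq_losses)    # L2 cost
print(mae, mse)  # 1.0 1.5
```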

  • @memescompilation6477
    @memescompilation6477 3 years ago +9

    Perfect. This is how I want someone to teach.
    Please cover every aspect of machine learning.

  • @TehreemAyesha
    @TehreemAyesha 3 years ago +1

    Please do not stop making the videos. You are doing great explaining these topics in simple terms. Thank you!
    Would love to learn entire ML from you.

  • @digvijaydesai5642
    @digvijaydesai5642 8 months ago

    Thank you Aman sir!! You are the best Data Science teacher.

  • @findingvaluetoday
    @findingvaluetoday 1 year ago

    Your explanations are very intuitive and easy to understand. Thank you very much for running this channel - it is helping me a ton in learning data science!

    • @UnfoldDataScience
      @UnfoldDataScience  1 year ago

      You're very welcome! Please share channel with friends as well. Thanks again.

  • @vikrantchouhan9908
    @vikrantchouhan9908 3 years ago +1

    Thanks for the gentle and uncomplicated explanation.

  • @denzelomondi6421
    @denzelomondi6421 2 years ago

    Man you are too good at this thing

  • @usmansaeed678
    @usmansaeed678 2 years ago

    Keep up the good work.

  • @rosemarydara1025
    @rosemarydara1025 2 years ago

    Superrrrrb explanation. Even a layman can understand.

  • @appagi
    @appagi 1 year ago

    Very nicely explained Sir!!

  • @sheikhshah2593
    @sheikhshah2593 3 years ago

    Great 👍. Finished the video.

  • @nabanitapaul2331
    @nabanitapaul2331 11 months ago

    Nice explanation

  • @vidyasagarpasala5218
    @vidyasagarpasala5218 3 years ago

    Very nice explanation

  • @rds9815
    @rds9815 1 year ago

    Very good one. I was watching another video provided by some institute and was unable to understand it, but after watching your video the doubts are cleared.
    Please keep making videos.

  • @paramesh1629
    @paramesh1629 3 years ago

    Top-class explanation

  • @dataflex4440
    @dataflex4440 3 years ago +1

    Brilliant, dude! Superb explanation; this channel needs growth.

  • @himiee
    @himiee 3 years ago +1

    Perfectly explained... Got it in one view 🙌🙌

  • @AnkitKumar-ss7sx
    @AnkitKumar-ss7sx 3 years ago +1

    Sir, in some explanations I've seen they use 1/2n for the mean, and the formula is written as (1/2n) * sum_{i=1}^{n} (h(x_i) - y_i)^2. Please explain.
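
Regarding the 1/2n question above: the extra 1/2 is a convention that cancels the 2 produced by differentiating the square, so the gradient comes out cleaner. Scaling a cost by a positive constant does not move its minimum, which a tiny plain-Python sketch (toy data, brute-force scan) can show:

```python
# Toy data perfectly fit by y = 2x: both cost versions
# are minimized at the same slope, beta = 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

def cost(beta, half=False):
    n = len(xs)
    total = sum((beta * x - y) ** 2 for x, y in zip(xs, ys))
    return total / (2 * n) if half else total / n

# Scan candidate slopes from 0.0 to 4.0 in steps of 0.1.
candidates = [i / 10 for i in range(0, 41)]
best_plain = min(candidates, key=lambda b: cost(b))
best_half  = min(candidates, key=lambda b: cost(b, half=True))
print(best_plain, best_half)  # 2.0 2.0 -- same argmin
```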

  • @askpioneer
    @askpioneer 1 year ago

    Hello Aman! Great video; in simple language, it's clear. Can you please make a video for the classification case? As you said, it will be different, like cross entropy or something else.

  • @JP-fi1bz
    @JP-fi1bz 3 years ago +2

    Hello sir, when are you going to have the next Q&A session, and at what time?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      We'll have it this weekend; I'll announce it on the community section of my channel.

  • @bangarrajumuppidu8354
    @bangarrajumuppidu8354 3 years ago +1

    Great explanation, sir!!

  • @osho2810
    @osho2810 2 years ago

    Great Sir...

  • @carlmemes9763
    @carlmemes9763 3 years ago +1

    Thanks for this video sir👍❤️

  • @Shivakumar-ph6gk
    @Shivakumar-ph6gk 2 years ago

    Well explained. Thank you.

  • @bhartichambyal6554
    @bhartichambyal6554 3 years ago +1

    Why do we use gradient descent? Sklearn can automatically find the best-fit line for our data. What is the purpose of gradient descent?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +1

      Gradient descent is a generic method to optimize parameters, used in many ways. It is not tied to only one algorithm as such.

    • @bhartichambyal6554
      @bhartichambyal6554 3 years ago +1

      @@UnfoldDataScience when we use simple linear regression or multiple linear regression, does it use OLS by default, or does it use gradient descent to find the best-fit line? Please answer my question.
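
On the OLS-vs-gradient-descent question above: in scikit-learn, `LinearRegression` solves least squares in closed form (no gradient descent), while `SGDRegressor` fits the same model iteratively by gradient descent. Both minimize squared error, so they land on (almost) the same line, which a plain-Python sketch with made-up data can demonstrate:

```python
# Closed-form OLS vs. hand-rolled gradient descent on one feature.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form solution (normal equation for a single feature).
b1_ols = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          / sum((x - mean_x) ** 2 for x in xs))
b0_ols = mean_y - b1_ols * mean_x

# Gradient descent on the MSE cost.
b0, b1, lr = 0.0, 0.0, 0.01
for _ in range(20000):
    grad_b0 = sum(2 * (b0 + b1 * x - y) for x, y in zip(xs, ys)) / n
    grad_b1 = sum(2 * (b0 + b1 * x - y) * x for x, y in zip(xs, ys)) / n
    b0 -= lr * grad_b0
    b1 -= lr * grad_b1

print(b0_ols, b1_ols)  # closed-form intercept and slope
print(b0, b1)          # gradient descent lands very close
```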

  • @coreyhartman4510
    @coreyhartman4510 3 years ago

    Great explanation.

  • @ArpitYadav-ws5xe
    @ArpitYadav-ws5xe 3 years ago

    Excellent

  • @archanamohapatra7589
    @archanamohapatra7589 3 years ago +1

    Big Thanks, Nicely explained 👍

  • @fishx1580
    @fishx1580 2 years ago

    You're the best.. ;)

  • @muhammadabuzar7910
    @muhammadabuzar7910 3 years ago

    This is such a perfect way of explaining it.
    Thank you so very much.

  • @fazilhabeeth
    @fazilhabeeth 2 years ago

    Your teaching is excellent, bro....
    Others use lots of concepts to explain one concept, which confuses us....
    But your method is very simple... and can be understood by all....
    👌👌👌

    • @UnfoldDataScience
      @UnfoldDataScience  2 years ago +1

      Thanks Fazil. Please share with others as well. Happy learning 😊

  • @sandipansarkar9211
    @sandipansarkar9211 3 years ago

    finished watching

  • @akhileshgandhe5934
    @akhileshgandhe5934 3 years ago

    Do more videos👍

  • @megalaramu
    @megalaramu 3 years ago

    Hi Aman, I do have a question - we use MAE/MSE/RMSE for regression problems even if it's decision trees. But when it comes to classification, we use log loss for logistic regression, hinge loss for SVM, etc. For decision trees, is there anything separate - is it based on entropy or the Gini impurity? Also, NB just acts like a lookup table, right? How about there?

  • @Maryam_Qureshi
    @Maryam_Qureshi 3 years ago

    Thank you. It was helpful

  • @_shikh4r_
    @_shikh4r_ 3 years ago

    Nicely explained, Subscribed 👍

  • @RamanKumar-ss2ro
    @RamanKumar-ss2ro 3 years ago +1

    Thank you.

  • @hardikvegad3508
    @hardikvegad3508 3 years ago +1

    Sir, can we relate this |w|^2 (L2) & |w| (L1) with ridge & lasso?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +2

      We will cover lasso and ridge in detail in another video.

  • @lemonbitter7641
    @lemonbitter7641 3 years ago

    Sir, I am confused about what the cross-val score tells us versus the loss function in a classification model.

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      Cross validation is a different concept; understand it here:
      ruclips.net/video/rPlBijVFw7k/видео.html

    • @lemonbitter7641
      @lemonbitter7641 3 years ago

      @@UnfoldDataScience thanks for helping me out sir.

  • @balapranav5364
    @balapranav5364 3 years ago +1

    I have one doubt, sir: when to use MAE, when to use MSE, and when to use RMSE, please?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +2

      In case your data has outliers, use MAE, not the squared ones (RMSE, MSE).

    • @abrahammathew8698
      @abrahammathew8698 3 years ago

      @@UnfoldDataScience Sir, is it mean absolute deviation or median absolute deviation, as the mean can be impacted by outliers? Thanks for the great video.
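
A quick plain-Python illustration of the outlier point in the thread above, with made-up residuals: MAE grows linearly with a bad observation, while MSE grows quadratically.

```python
# One large residual inflates MSE far more than MAE.
def mae(errors):
    return sum(abs(e) for e in errors) / len(errors)

def mse(errors):
    return sum(e ** 2 for e in errors) / len(errors)

clean   = [1.0, -1.0, 1.0, -1.0]
outlier = [1.0, -1.0, 1.0, -10.0]  # one bad observation

print(mae(clean), mae(outlier))  # 1.0 3.25
print(mse(clean), mse(outlier))  # 1.0 25.75
```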

  • @letslearndatasciencetogeth479
    @letslearndatasciencetogeth479 3 years ago +1

    Sir, please explain cross entropy.

  • @faiemveg7350
    @faiemveg7350 3 years ago

    How can you take half of 68 here?
    I think in the equation y = mx + c we can use beta0 = 150 and beta1 = 0.5; then the equation should be y = 150 + 0.5*176??

    • @janakiraam1
      @janakiraam1 3 years ago

      @faie veg, here the independent variable is height and the dependent variable is weight, so the equation is y = 150 + 0.5*68.
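
Spelling out the arithmetic in the reply above (the coefficients 150 and 0.5 are the video's illustrative values): the model plugs the independent variable, height, into y = beta0 + beta1 * x.

```python
# y = beta0 + beta1 * x, with height as the input x.
beta0, beta1 = 150.0, 0.5
height = 68.0
predicted_weight = beta0 + beta1 * height
print(predicted_weight)  # 184.0
```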

  • @bintu8962
    @bintu8962 2 years ago

    Sir, how are you choosing the beta values 150 and 0.5?

    • @UnfoldDataScience
      @UnfoldDataScience  2 years ago

      There are recommendations from research to be followed.

  • @krishnab6444
    @krishnab6444 2 years ago

    Thank you, sir!

  • @hanselcreado3570
    @hanselcreado3570 3 years ago

    How do you pick the values for beta0 and beta1?

  • @Ankitsharma-vo6sh
    @Ankitsharma-vo6sh 3 years ago +1

    Thanks

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      Thank you.

    • @Ankitsharma-vo6sh
      @Ankitsharma-vo6sh 3 years ago

      @@UnfoldDataScience Bro, to be honest there is nothing to thank me for, but I have a request: try some end-to-end projects. I really want to see your approach towards problems.

  • @mosama22
    @mosama22 3 years ago

    Thank you Aman, this was really beautiful :-)

  • @azrflourish9032
    @azrflourish9032 3 years ago +1

    So,
    loss function: y_actual - y_predict
    cost function:
    L1 = MAE: (1/n) * sigma(|y_actual - y_pred|)
    L2 = MSE: (1/n) * sigma((y_actual - y_pred)^2)
    Did I get it correctly???

    • @raghavverma120
      @raghavverma120 2 years ago

      Confusing.. first he called MAE a cost function, and later on he called the same thing a loss function for linear regression.

    • @raghavverma120
      @raghavverma120 2 years ago

      In short, loss function = cost function; the terms can be used interchangeably.

    • @raghavverma120
      @raghavverma120 2 years ago

      The cost function is calculated over the entire dataset, and the loss for one training instance.
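
One detail worth noting about the formulas in the thread above: MAE needs the absolute value, because otherwise positive and negative errors cancel each other out. A tiny plain-Python sketch with made-up numbers:

```python
# Why MAE takes |error|: signed errors can cancel to zero.
y_actual = [2.0, 4.0]
y_pred   = [3.0, 3.0]  # errors of -1 and +1

signed_mean = sum(a - p for a, p in zip(y_actual, y_pred)) / 2
mae = sum(abs(a - p) for a, p in zip(y_actual, y_pred)) / 2

print(signed_mean)  # 0.0 -- looks like a perfect fit, but isn't
print(mae)          # 1.0 -- the actual average error magnitude
```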

  • @CRICKETLOVER_10
    @CRICKETLOVER_10 3 years ago

    I am a B.Com pass-out; can I become a data scientist?

  • @arslanjutt4282
    @arslanjutt4282 1 year ago

    I don't understand the difference between a cost and a loss function, because MSE is a cost function as well as a loss.

  • @neekhilsingh2114
    @neekhilsingh2114 1 year ago

    Sir, please share your mail id for my CV review…. The Unfold Data Science mail id does not work. Please share.

  • @pranjalgupta9427
    @pranjalgupta9427 3 years ago +1

    Thanks 🙏