#12 Machine Learning Specialization [Course 1, Week 1, Lesson 3]

  • Published: Sep 20, 2024
  • The Machine Learning Specialization is a foundational online program created in collaboration between DeepLearning.AI and Stanford Online. This beginner-friendly program will teach you the fundamentals of machine learning and how to use these techniques to build real-world AI applications.
    This Specialization is taught by Andrew Ng, an AI visionary who has led critical research at Stanford University and groundbreaking work at Google Brain, Baidu, and Landing.AI to advance the AI field.
    This video is from Course 1 (Supervised Machine Learning: Regression and Classification), Week 1 (Introduction to Machine Learning), Lesson 3 (Regression Model), Video 4 (Cost function intuition).
    To learn more and access the full course videos and assignments, enroll in the Machine Learning Specialization here: bit.ly/3ERmTAq
    Download the course slides: bit.ly/3AVNHwS
    Check out all our courses: bit.ly/3TTc2KA
    Subscribe to The Batch, our weekly newsletter: bit.ly/3TZUzju
    Follow us:
    Facebook: / deeplearningaihq
    LinkedIn: / deeplearningai
    Twitter: / deeplearningai_

Comments • 16

  • @emmanuelteitelbaum 4 months ago +6

    My favorite linear regression video of all time!

  • @soap4890 6 months ago +7

    🎯 Key Takeaways for quick navigation:
    [00:01] 🧠 Understanding the cost function:
    - The section aims to build intuition about the role and behavior of the cost function in linear regression.
    - A recap emphasizes the objective of finding parameter values \( w \) and \( b \) that minimize the cost function \( J(w, b) \), indicating optimal model performance.
    - Simplifying the linear regression model to \( f_w(x) = w \cdot x \) makes it easier to visualize the cost function's relationship with the parameter \( w \).
    [04:05] 📉 Analyzing the cost function graphically:
    - Graphical representations of both the linear model \( f_w(x) \) and the cost function \( J(w) \) are examined.
    - Different values of the parameter \( w \) correspond to different straight-line fits to the training data, which in turn change the cost.
    - The relationship between the parameter \( w \), the linear model fit, and the resulting cost function values is made clear through graphical analysis.
    [13:47] 📊 Choosing optimal parameter values:
    - The optimal value of \( w \) is selected by minimizing the cost function \( J(w) \), indicating the best fit of the linear model to the training data.
    - Graphical analysis demonstrates how the choice of \( w \) affects the fit of the model to the data and the resulting cost.
    - The ultimate goal in linear regression is to identify parameter values that yield the smallest possible cost, indicating an accurate model fit.
    Made with HARPA AI
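    The takeaways above can be checked numerically. Below is a minimal sketch of the squared-error cost from the lesson, using the simplified model \( f_w(x) = w \cdot x \) (with \( b \) fixed at 0) and a small training set chosen to lie exactly on the line y = x, so w = 1 should give zero cost:

    ```python
    def cost(w, xs, ys):
        """Squared-error cost J(w) = (1 / 2m) * sum((w * x - y)^2),
        for the simplified model f_w(x) = w * x (b fixed at 0)."""
        m = len(xs)
        return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

    # Hypothetical training set on the line y = x (an assumption for illustration).
    xs = [1.0, 2.0, 3.0]
    ys = [1.0, 2.0, 3.0]

    print(cost(1.0, xs, ys))  # w = 1 fits perfectly, so J(1) = 0.0
    print(cost(0.5, xs, ys))  # (0.25 + 1.0 + 2.25) / 6 ≈ 0.583
    ```

    Plotting `cost(w, xs, ys)` over a range of w values reproduces the bowl-shaped J(w) curve drawn in the video, with its minimum at w = 1.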

  • @tamaramartinovic 1 year ago +8

    So well explained, thank you very much!

  • @leogodofnone2706 6 months ago +4

    Thank you 🙏🏻

  • @ooxxx 1 year ago +2

    thank you!

  • @mithun442 8 months ago +1

    Understood sir

  • @nozarlahooti1789 6 months ago

    For the second example, it would be 4.25/6 ≈ 0.7, right?

  • @quietcorner5989 10 months ago +1

    So why did you calculate the J function at the beginning with the same slope, w = 1, and then calculate the value of the J function for two different slopes, w = 1 and w = 0.5? In the first case you assumed that the actual function and the model's prediction had the same slope w = 1, so naturally the error would be 0!! Can you please clarify this? I didn't get it. Did you suppose at the beginning that the model prediction was 100% correct and precise and matched the actual function, so the error was 0?

    • @Lostverseplays 9 months ago

      Yes, at the start the model prediction was 100% correct, which is why the cost function J gave 0 as output: there was no error in the prediction. When the slope became 0.5, the model (the function used to predict prices, i.e. f(x)) deviated from the actual values, and that deviation is the error the J function measured.

  • @karthik_0180 11 months ago

    Yes

  • @YashrajVerma-ni4jn 1 year ago +1

    😊

  • @TeacherAzkaShehzadi-gf6vb 5 months ago

    date 16

  • @gnull 1 month ago

    i smell calculus

  • @MikeM-uy6qp 4 months ago

    Huh?