Tutorial IV

  • Published: 29 Sep 2024

Comments • 14

  • @merlinzbeard1
    @merlinzbeard1 6 years ago +13

    There is a mistake in the explanation of MAP.
    1) The value of 0.9 was the value of P(x=0|y=1)·P(y=1) / P(x=0). You showed it as the value of the conditional probability when y = 0.
    2) The two probabilities will not add up to 1, since the first expression has P(y=1) in the numerator and the second probability is weighted by P(y=0).
    3) Going by the same logic, P(x=0|y=0)·P(y=0) / P(x=0) = (0.25)(0.6) / (0.2), which works out to 0.75.
    Hence, even though Y will most likely be 1, the work shown by the TA is wrong.
    That being said, I really do learn a lot from your lectures. Thank you for this :)

    • @umang9997
      @umang9997 2 years ago

      Can you please explain why the two probabilities will not add up to 1?
      (We have the information that x = 0. Now y can only take two values, either 0 or 1, so P(y=0|x=0) should equal 1 − P(y=1|x=0). What am I missing here?)
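On @umang9997's question above, a minimal Python sketch may help: it uses only the numbers quoted in this thread (0.6, 0.25, 0.2, and the claimed 0.9 — the actual values in the video may differ) to show that the two posteriors fail to sum to 1 precisely because the quoted evidence term P(x=0) = 0.2 is inconsistent with the law of total probability:

```python
# Numbers quoted in the thread (hypothetical; not verified against the video):
#   P(y=0) = 0.6,  P(x=0 | y=0) = 0.25,  quoted evidence P(x=0) = 0.2,
#   and a claimed posterior P(y=1 | x=0) = 0.9.
p_y0 = 0.6
p_y1 = 1.0 - p_y0                     # 0.4
p_x0_given_y0 = 0.25
p_x0_quoted = 0.2                     # the P(x=0) used in the thread

# Bayes' rule with the quoted evidence term reproduces the 0.75 figure:
p_y0_given_x0 = p_x0_given_y0 * p_y0 / p_x0_quoted        # ≈ 0.75

# Why 0.9 + 0.75 > 1: back out P(x=0 | y=1) from the claimed 0.9, then
# check the law of total probability against the quoted P(x=0) = 0.2.
p_x0_given_y1 = 0.9 * p_x0_quoted / p_y1                  # ≈ 0.45
p_x0_consistent = p_x0_given_y0 * p_y0 + p_x0_given_y1 * p_y1   # ≈ 0.33, not 0.2

# Normalizing the joint terms gives posteriors that do sum to 1,
# and the MAP answer is still y = 1:
joint_y0 = p_x0_given_y0 * p_y0
joint_y1 = p_x0_given_y1 * p_y1
p_y1_given_x0 = joint_y1 / (joint_y0 + joint_y1)          # ≈ 0.545 > 0.5
```

So the ratios quoted in the thread are each a valid Bayes'-rule application, but against an evidence term that doesn't match the other numbers; once the joints are normalized, the posteriors sum to 1 and the MAP estimate remains y = 1.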

  • @rekhakushwaha7224
    @rekhakushwaha7224 6 years ago +3

    For Y = 1 we have 0.9 and for Y = 0 we have 0.75,
    so Y = 1 will be the answer.

  • @rekhakushwaha7224
    @rekhakushwaha7224 6 years ago +2

    The MAP inference is calculated incorrectly.

  • @solarstryker
    @solarstryker 7 years ago +6

    Explained better than Ma'am

    • @AnirbanSantara
      @AnirbanSantara 7 years ago

      Wow! Really!? Means a lot to me! Thanks!

  • @mayanksj
    @mayanksj 6 years ago +1

    Machine Learning by Prof. Sudeshna Sarkar
    Basics
    1. Foundations of Machine Learning (ruclips.net/video/BRMS3T11Cdw/видео.html)
    2. Different Types of Learning (ruclips.net/video/EWmCkVfPnJ8/видео.html)
    3. Hypothesis Space and Inductive Bias (ruclips.net/video/dYMCwxgl3vk/видео.html)
    4. Evaluation and Cross-Validation (ruclips.net/video/nYCAH8b5AQ0/видео.html)
    5. Linear Regression (ruclips.net/video/8PJ24SrQqy8/видео.html)
    6. Introduction to Decision Trees (ruclips.net/video/FuJVLsZYkuE/видео.html)
    7. Learning Decision Trees (ruclips.net/video/7SSAA1CE8Ng/видео.html)
    8. Overfitting (ruclips.net/video/y6SpA2Wuyt8/видео.html)
    9. Python Exercise on Decision Tree and Linear Regression (ruclips.net/video/lIBPIhB02_8/видео.html)
    Recommendations and Similarity
    10. k-Nearest Neighbours (ruclips.net/video/PNglugooJUQ/видео.html)
    11. Feature Selection (ruclips.net/video/KTzXVnRlnw4/видео.html)
    12. Feature Extraction (ruclips.net/video/FwbXHY8KCUw/видео.html)
    13. Collaborative Filtering (ruclips.net/video/RVJV8VGa1ZY/видео.html)
    14. Python Exercise on kNN and PCA (ruclips.net/video/40B8D9OWUf0/видео.html)
    Bayes
    16. Bayesian Learning (ruclips.net/video/E3l26bTdtxI/видео.html)
    17. Naive Bayes (ruclips.net/video/5WCkrDI7VCs/видео.html)
    18. Bayesian Network (ruclips.net/video/480a_2jRdK0/видео.html)
    19. Python Exercise on Naive Bayes (ruclips.net/video/XkU09vE56Sg/видео.html)
    Logistic Regression and SVM
    20. Logistic Regression (ruclips.net/video/CE03E80wbRE/видео.html)
    21. Introduction to Support Vector Machine (ruclips.net/video/gidJbK1gXmA/видео.html)
    22. The Dual Formulation (ruclips.net/video/YOsrYl1JRrc/видео.html)
    23. SVM Maximum Margin with Noise (ruclips.net/video/WLhvjpoCPiY/видео.html)
    24. Nonlinear SVM and Kernel Function (ruclips.net/video/GcCG0PPV6cg/видео.html)
    25. SVM Solution to the Dual Problem (ruclips.net/video/Z0CtYBPR5sA/видео.html)
    26. Python Exercise on SVM (ruclips.net/video/w781X47Esj8/видео.html)
    Neural Networks
    27. Introduction to Neural Networks (ruclips.net/video/zGQjh_JQZ7A/видео.html)
    28. Multilayer Neural Network (ruclips.net/video/hxpGzAb-pyc/видео.html)
    29. Neural Network and Backpropagation Algorithm (ruclips.net/video/T6WLIbOnkvQ/видео.html)
    30. Deep Neural Network (ruclips.net/video/pLPr4nJad4A/видео.html)
    31. Python Exercise on Neural Networks (ruclips.net/video/kTbY20xlrbA/видео.html)
    Computational Learning Theory
    32. Introduction to Computational Learning Theory (ruclips.net/video/8hJ9V9-f2J8/видео.html)
    33. Sample Complexity: Finite Hypothesis Space (ruclips.net/video/nm4dYYP-SJs/видео.html)
    34. VC Dimension (ruclips.net/video/PVhhLKodQ7c/видео.html)
    35. Introduction to Ensembles (ruclips.net/video/nelJ3svz0_o/видео.html)
    36. Bagging and Boosting (ruclips.net/video/MRD67WgWonA/видео.html)
    Clustering
    37. Introduction to Clustering (ruclips.net/video/CwjLMV52tzI/видео.html)
    38. K-means Clustering (ruclips.net/video/qg_M37WGKG8/видео.html)
    39. Agglomerative Clustering (ruclips.net/video/NCsHRMkDRE4/видео.html)
    40. Python Exercise on K-means Clustering (ruclips.net/video/qs7vES46Rq8/видео.html)
    Tutorial I (ruclips.net/video/uFydF-g-AJs/видео.html)
    Tutorial II (ruclips.net/video/M6HdKRu6Mrc/видео.html)
    Tutorial III (ruclips.net/video/Ui3h7xoE-AQ/видео.html)
    Tutorial IV (ruclips.net/video/3m7UJKxU-T8/видео.html)
    Tutorial VI (ruclips.net/video/b3Vm4zpGcJ4/видео.html)
    Solution to Assignment 1 (ruclips.net/video/qqlAeim0rKY/видео.html)

  • @sambro890
    @sambro890 5 years ago +1

    Sir, the most probable inferred value of y will be 0.

  • @vaibhavmahajan4276
    @vaibhavmahajan4276 6 years ago +1

    He made a mistake in the MAP inference.

  • @RwikKumarDutta
    @RwikKumarDutta 8 years ago

    You explain really well!!!! :)

  • @ashishkashyap1872
    @ashishkashyap1872 3 years ago

    Great Explanation!!