Uncertainty Quantification (3): From Full to Split Conformal Methods

  • Published: 28 Dec 2024

Comments • 6

  • @BlakeEdwards333 · 1 year ago +2

    Awesome video series. Thanks!

    • @MLBoost · 1 year ago

      Glad you enjoyed it! Thanks for your comment!

  • @שחרכהן-פ6ד · 8 months ago

    This is great!! Thanks!

    • @MLBoost · 8 months ago

      You're welcome!

  • @71sephiroth · 11 months ago +1

    After watching this series a few times, the thing that (still) confuses me is that for (100% of training data + each new point) we have to fit a new model each time for each plausible label, but for (a percentage of training data, i.e. calibration data, + each new point) we don't. It looks to me like in both cases the test point could be out of the bag, be it the full training data or e.g. 20% of the training data (the calibration set). I see that everything is the same in terms of implementing CP, but the difference is only in the number of points (training vs calibration).

    • @MLBoost · 9 months ago

      I am glad to see that you have watched all the videos. In full conformal, the test point is assigned a plausible label, which allows it to be treated as part of the bag; we have to refit the model so that every plausible label is considered. With split conformal, however, the bag consists of calibration points only, so the model fitted on the training split is never refit.
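
      The split conformal procedure described in this reply can be sketched in a few lines. This is a minimal illustration on toy 1-D regression data (the linear model, the 50/50 split, and alpha = 0.1 are my assumptions, not from the video): the model is fit once on the training split, nonconformity scores are computed on the calibration split, and no refitting is needed for new test points.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Toy regression data: y = 2x + noise (illustrative assumption).
      x = rng.uniform(0, 1, 200)
      y = 2 * x + rng.normal(0, 0.1, 200)

      # Split conformal: fit on the training split, calibrate on the rest.
      x_tr, y_tr = x[:100], y[:100]
      x_cal, y_cal = x[100:], y[100:]

      # Fit the model ONCE (least-squares line); it is never refit later.
      slope, intercept = np.polyfit(x_tr, y_tr, 1)
      predict = lambda t: slope * t + intercept

      # Nonconformity scores on the calibration bag: absolute residuals.
      scores = np.abs(y_cal - predict(x_cal))

      # Conformal quantile with the finite-sample (n + 1) correction.
      alpha = 0.1
      n = len(scores)
      q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

      # Prediction interval for a new point: no refitting required.
      x_new = 0.5
      lo, hi = predict(x_new) - q, predict(x_new) + q
      ```

      Contrast this with full conformal, where for each new point and each plausible label the model would be refit on (training data + the hypothesized point) before scoring, which is why split conformal is so much cheaper.
      
      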