After watching this series a few times, the thing that (still) confuses me is that for (100% of training data + each new point) we have to fit a new model each time for each plausible label, but for (a percentage of the training data, i.e. calibration data, + each new point) we don't. It looks to me that in both cases the test point could be out of the bag, be it the full training data or e.g. 20% of the training data (the calibration dataset). I see that everything is the same in terms of implementing CP, but the difference is only in the number of points (training vs calibration).
I am glad to see that you have watched all the videos. In full conformal, the test point is assigned a plausible label, which allows it to be considered part of the bag; we have to refit the model so that every plausible value is considered. With split conformal, however, the bag consists of calibration points only, so the model fit on the training portion never needs to be refit.
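To make the split-conformal side of this concrete, here is a minimal sketch (my own illustration, not code from the videos), assuming an sklearn-style regressor and the absolute residual as the nonconformity score. Note that the model is fit once on the training portion; the calibration points alone form the bag that must be exchangeable with the test point, so no refitting per plausible label is needed:

```python
# Split conformal prediction sketch (illustrative; assumes sklearn and
# absolute-residual nonconformity scores).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.3, size=200)

# Split once: 80% training, 20% calibration. The model is fit on the
# training portion only and never refit afterwards.
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = LinearRegression().fit(X_train, y_train)

# Nonconformity scores on the calibration set -- the "bag" here is just
# these calibration points, which is why no refitting is required.
scores = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile for miscoverage level alpha (finite-sample correction).
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new point: one forward pass, no refit.
x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
interval = (pred - q, pred + q)
```

Full conformal, by contrast, would loop over a grid of plausible labels for `x_new`, refit the model with each (point, label) pair appended to the full training set, and keep the labels whose nonconformity score is not too extreme, which is exactly the extra cost the commenter is asking about.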
Awesome video series. Thanks!
Glad you enjoyed it! Thanks for your comment!
This is great!! Thanks!
You're welcome!