So how does the martingale posterior fit into this? Is it even Bayesian anymore, and does it work in practice?
Excellent question; I had planned to discuss the MGP in a previous version of this talk, but dropped it for space reasons. It absolutely fits the description of a post-Bayesian method, and (in the illustration of sets of belief updates) it is complementary to the blue optimisation-centric generalisation. You can find the slides for this from slide 198 onwards here: jeremiasknoblauch.github.io/talk/post-bayesian-machine-learning/Gatsby-presentation-final.pdf
@jknoblauch3442 Thank you, great stuff!