One critique I'd make of the speaker is that, like a lot of us Bayesians, he's too wishy-washy in his recommendations. I know that as Bayesians we tend to put uncertainty first and avoid committing fully to answers; that's part of the strength of the framework. But when it comes to advising people who don't yet see the full picture, that kind of hedged recommendation just leaves them confused and not knowing what to do.
The answer I think he should have given is this: if Bayesian statistics can do everything frequentism can do and do it better, and if on top of that frequentists are "answering the wrong questions precisely", then frequentism is headed for extinction as Bayesian methods catch on. There's little reason to keep it around beyond prototyping or the like, on grounds of computational speed. In light of that, wherever your decision-making is free enough to make the shift, you basically should.
The problem is that recommending people be trail-blazers sets them up for frustrating, nebulous work that isn't for everyone, so you warn them about that. But for the people resilient enough to follow that path, you should give them the tools to get started; the reward is that they'll be well positioned when the broader statistical culture shifts, as it is doing right now. They'll also be able to solve problems that routinely stump frequentists: framing probability as a degree of belief frees you to answer many more questions, which matters most in contexts with high information asymmetry and limited data.
The bottom line: given the mechanisms at play, is it worth investing in this approach? I cannot stress enough how little value I currently see in frequentist statistics beyond convenience, and even that value strikes me as limited. So investing in Bayesian statistics seems like a good mid- to long-term career investment for those who want to challenge themselves and solve problems nobody else can.
Your assessment of frequentism is deeply flawed. Many of the properties of MCMC, and of the related diagnostics that provide the basis for practical Bayesian inference, are justified on frequentist grounds, and in those contexts the idea of an infinite sequence of samples drawn from an unchanging process is far more tenable than it is in scientific applications.
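To make that concrete, here's a minimal sketch of the split-chain idea behind the Gelman-Rubin R-hat diagnostic (my own illustrative plain-Python code, not anything from the talk): it compares between-chain and within-chain variance, which is exactly a repeated-sampling question.

```python
import random
import statistics

def rhat(chains):
    """Potential scale reduction (Gelman-Rubin R-hat) for equal-length chains.

    The diagnostic asks a frequentist-flavored question: do these chains
    look like draws from one and the same stationary distribution?
    """
    m = len(chains)
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    grand = statistics.fmean(means)
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)   # between-chain
    w = statistics.fmean(statistics.variance(c) for c in chains)  # within-chain
    var_plus = (n - 1) / n * w + b / n  # pooled variance estimate
    return (var_plus / w) ** 0.5

random.seed(0)
# Four well-mixed "chains": iid draws from the same normal distribution,
# so R-hat should come out very close to 1.
chains = [[random.gauss(0, 1) for _ in range(1000)] for _ in range(4)]
print(rhat(chains))
```

Values well above 1 would signal that the chains have not converged to a common distribution.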
On the other hand, "speed" is not an argument uniquely in frequentist methods' favor: Bayesian inference with approximate methods can be just as fast. But once again, understanding how to investigate those methods in order to quantify the quality of the approximations is something that comes very naturally from a frequentist perspective. In the particular cases where the inferences from both frameworks match, one can just as easily consider frequentist approaches a good, fast approximation of Bayesian ones. And practically speaking, given the limited time non-statistics students get with these topics, the computational difficulties of full posterior sampling can be a huge time sink.
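The textbook case where the two frameworks' answers coincide: a normal mean with known variance under a flat prior, where the 95% credible interval is numerically the 95% confidence interval. A quick sketch with made-up numbers:

```python
import random
import statistics

random.seed(1)
sigma = 2.0                                        # known sampling sd
data = [random.gauss(5.0, sigma) for _ in range(50)]
n = len(data)
xbar = statistics.fmean(data)
se = sigma / n ** 0.5

# Frequentist 95% confidence interval for the mean.
freq_ci = (xbar - 1.96 * se, xbar + 1.96 * se)

# Bayesian posterior under a flat prior: mu | data ~ Normal(xbar, se^2).
# Sample it and take the central 95% credible interval.
draws = sorted(random.gauss(xbar, se) for _ in range(100_000))
bayes_ci = (draws[2_500], draws[97_500])

print(freq_ci)
print(bayes_ci)  # numerically almost the same interval
```

The only difference between the two intervals here is Monte Carlo noise in the posterior quantiles.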
A simulation-based course that gives students the tools to probe whatever method or definition they come across would be far more useful than the desperate attempts I have repeatedly seen to have them "understand" an entire inferential framework (either one) by repeating a bunch of terms and procedures handed down from on high.
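The kind of probe I mean, as an illustrative sketch: checking by brute force whether the 95% z-interval actually covers the true mean about 95% of the time over repeated experiments.

```python
import random

random.seed(2)

def covers(true_mu=0.0, sigma=1.0, n=30):
    """One simulated experiment: does the 95% z-interval contain true_mu?"""
    data = [random.gauss(true_mu, sigma) for _ in range(n)]
    xbar = sum(data) / n
    half = 1.96 * sigma / n ** 0.5
    return xbar - half <= true_mu <= xbar + half

reps = 20_000
coverage = sum(covers() for _ in range(reps)) / reps
print(coverage)  # should land near the nominal 0.95
```

A student who can write this can then probe what happens when the assumptions break, say by swapping in a skewed data-generating distribution or a smaller n.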
@@nuhuhbruhbruh I agree with your assessment. One must understand frequentist methods of estimation to fully appreciate the utility and application of Bayesian statistics. Bayes' rule and simulations like Monte Carlo methods are a good way to start.
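A minimal way to see Bayes' rule and Monte Carlo agree, using toy numbers I made up for the classic diagnostic-test example:

```python
import random

random.seed(3)

# P(disease | positive test), with assumed toy rates.
prevalence, sensitivity, specificity = 0.01, 0.95, 0.90

# Analytic Bayes' rule.
p_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
analytic = prevalence * sensitivity / p_pos  # ~0.0876

# Monte Carlo: simulate a population and keep those who test positive.
positives = diseased = 0
for _ in range(1_000_000):
    sick = random.random() < prevalence
    test_pos = random.random() < (sensitivity if sick else 1 - specificity)
    if test_pos:
        positives += 1
        diseased += sick

print(analytic)
print(diseased / positives)  # Monte Carlo estimate of the same quantity
```

The simulation makes the counterintuitive result tangible: most positives come from the large healthy group, so the posterior probability stays low despite a sensitive test.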
I'm fairly new to this, but one thing I'm starting to realize is that I long overlooked the possible pitfalls of frequentist inference.