Introduction to Bayesian Data Analysis and Stan with Andrew Gelman

  • Published: 21 Aug 2024
  • Stan is a free and open-source probabilistic programming language and Bayesian inference engine. In this talk, we will demonstrate the use of Stan for some small problems in sports ranking, nonlinear regression, mixture modeling, and decision analysis, to illustrate the general idea that Bayesian data analysis involves model building, model fitting, and model checking. One of our major motivations in building Stan is to efficiently fit complex models to data, and Stan has indeed been used for this purpose in social, biological, and physical sciences, engineering, and business. The purpose of the present webinar is to demonstrate using simple examples how one can directly specify and fit models in Stan and make logical decisions under uncertainty.
    Andrew Gelman is a professor of statistics and political science at Columbia University. He has received the Outstanding Statistical Application award three times from the American Statistical Association, the award for best article published in the American Political Science Review, and the Council of Presidents of Statistical Societies award for outstanding contributions by a person under the age of 40. His books include Bayesian Data Analysis (with John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Don Rubin), Teaching Statistics: A Bag of Tricks (with Deb Nolan), Data Analysis Using Regression and Multilevel/Hierarchical Models (with Jennifer Hill), Red State, Blue State, Rich State, Poor State: Why Americans Vote the Way They Do (with David Park, Boris Shor, and Jeronimo Cortina), A Quantitative Tour of the Social Sciences (co-edited with Jeronimo Cortina), and Regression and Other Stories (with Jennifer Hill and Aki Vehtari).

Comments • 25

  • @KyPaMac · 7 years ago +24

    That golf putting model is just about the coolest thing ever.

  • @crypticnomad · 4 years ago +4

    I know this is a rather old video, but it is still highly relevant and useful. At 47:02 I don't think a standard EV calculation really does that situation justice. With high-payout/low-loss situations like that, I think it is better to weight the payouts by their utility. For example, losing $10 may have basically no subjective utility loss compared to the subjective utility gained from having $100k. Let's say that to me having $100k has 20k times as much utility as losing $10 does. When you switch from an EV calculation based on the dollar win/loss to one based on the subjective utility of the payouts, there is a drastic increase in the EV (although it's still negative in this case). E.g.:
    win = 100000 dollars
    lose = 10 dollars
    (win * 5.4e-06) - ((1 - 5.4e-06) * lose) = ~-$9.46
    win_util = 20000 utility points or "utils"
    lose_util = 1 util
    (win_util * 5.4e-06) - ((1 - 5.4e-06) * lose_util) = ~-0.89 utils
    This is a simple example and we could certainly argue about the subjective utility values, but I think it shows that the normal EV calculation doesn't do the situation justice when you think about the utility of the win versus the utility of the loss. One could also flip this around and talk about the subjective utility of losing samples versus winning samples: say this was overall +EV, but the subjective value of winning so rarely was less than the subjective cost of losing so often.
    I got this concept from game theory. There are plenty of examples, especially in poker, where doing something that is -EV right now can lead to a +EV situation later on. Poker players call that implied EV; an example would be calling with some sort of draw when the current raw pot odds don't justify it, but you know that when you do make your hand the profits will make up for the marginal loss now. So for example, let's say I have an idea for a product or service that would earn $50k a year off a $100k investment. Using a fairly standard 10x-income valuation, I could say the subjective utility of winning that $100k is actually worth 50k utility points versus the 10k utility points implied by an even weighting. This specific situation would still be -EV, though.
    All of that leads me down the path of seriously doubting most of rational economics.
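
    A minimal sketch of that comparison in Python (the 5.4e-06 win probability is the figure from the talk; the $100k payoff, $10 entry cost, and 20,000:1 utility ratio are the assumptions above):

    ```python
    # Plain expected value in dollars vs. a subjective-utility version.
    p_win = 5.4e-06           # estimated probability of winning (from the talk)
    win, lose = 100_000, 10   # payoff if you win, cost of entering (dollars)

    ev_dollars = p_win * win - (1 - p_win) * lose
    print(f"EV in dollars: {ev_dollars:.2f}")   # ~ -9.46

    # Reweight by subjective utility: winning is assumed to be worth
    # 20,000 times the sting of losing $10 (an arguable assignment).
    win_util, lose_util = 20_000, 1
    ev_utils = p_win * win_util - (1 - p_win) * lose_util
    print(f"EV in utils: {ev_utils:.2f}")       # ~ -0.89
    ```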

    • @arnabghosh8843 · 3 months ago

      And even just going with a constant valuation of money, you get _multiple entries_. So, sure, the probability of one submission winning is whatever he got, but you can make multiple submissions whose combination could plausibly lead to a win, especially when you consider that you can make about 10k submissions.
      Made me a little sad to see that reductive analysis when I was excited to see a really interesting decision problem at hand :/
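
      With independent entries at the same per-entry win probability, the chance of at least one win is 1 - (1 - p)^n. A quick check under the numbers assumed above (5.4e-06 per entry, about 10k submissions):

      ```python
      p = 5.4e-06   # per-entry win probability (from the talk)
      n = 10_000    # number of submissions assumed in the comment

      p_at_least_one = 1 - (1 - p) ** n
      print(p_at_least_one)  # ~0.053, versus 5.4e-06 for a single entry
      ```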

  • @RobetPaulG · 7 years ago +6

    Thanks a lot for making this code available for download. That was really helpful for getting started in Stan.

  • @johnnyedwards1948 · 5 years ago +4

    Also really liked the golf putt example.

  • @SpaceExplorer · 7 years ago +6

    Thanks Dr. Gelman

  • @macanbhaird1966 · 2 years ago

    Wow! Brilliant - this really helped me a lot. Thank you.

  • @josephjohns4251 · 1 year ago

    Just beginning to learn about Bayesian analysis ... thanks for the great video, and thanks everyone for the links in the comments.
    Question: Is it correct to say that, in World Cup example 1, the only variables calculated by Stan are b (real), sigma_a >= 0, and sigma_y >= 0?
    In other words, Stan figures out (simultaneously/jointly):
    (1) the best b and sigma_a for the equation a = b*prior_scores + sigma_a*[eta_a ~ N(0,1)]
    (2) the best sigma_y so that student_t(df = 7, a[team_1] - a[team_2], sigma_y) best predicts the sqrt-adjusted score(team_1) - score(team_2)
    It seems kind of weird to me that, after we figure out the formula for a, it boils down to just one parameter, sigma_y.
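
    One note on that reading: in a model written this way, Stan samples the eta_a vector as well, so the parameters are b, sigma_a, sigma_y, and eta_a, and the abilities a are derived from them (the non-centered parameterization). Below is a minimal numpy sketch that runs the described generative model forward; the team count, prior scores, and the fixed values of b, sigma_a, sigma_y, and df are all placeholders, not values from the talk:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_teams = 32
    prior_score = rng.normal(0, 1, n_teams)  # placeholder prior ranking scores

    # Quantities Stan would sample; fixed here just to simulate forward.
    b, sigma_a, sigma_y, df = 0.5, 0.3, 1.0, 7

    eta_a = rng.normal(0, 1, n_teams)        # the N(0,1) innovations
    a = b * prior_score + sigma_a * eta_a    # derived team abilities

    # One simulated game: sqrt-adjusted score difference, teams 0 vs 1
    y = (a[0] - a[1]) + sigma_y * rng.standard_t(df)
    print(y)
    ```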

  • @usptact · 7 years ago +3

    Thanks for the great presentation and explanations on real models.
    This made me laugh: "working with live posterior"

  • @mehmetb5132 · 19 days ago

    I wondered why we have the '-1' in "2 * Phi(asin((R - r) / x) / sigma) - 1" in the golf example model. (Min 51:38)
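
    The -1 comes from the two-sided normal probability: if the angle error is theta ~ N(0, sigma^2) and t = asin((R - r) / x) is the largest angle that still drops the putt from distance x, then Pr(|theta| < t) = Phi(t/sigma) - Phi(-t/sigma) = 2 * Phi(t/sigma) - 1. A quick numeric check in Python (R and r are the hole and ball radii in feet from the golf case study; the distance x and sigma here are illustrative):

    ```python
    import numpy as np
    from scipy.stats import norm

    R, r = 0.177, 0.070      # hole and ball radii in feet (case-study values)
    x, sigma = 10.0, 0.026   # putt distance (feet), angle-error sd (radians)

    t = np.arcsin((R - r) / x)               # success threshold angle
    p_closed_form = 2 * norm.cdf(t / sigma) - 1

    # Monte Carlo check of the same probability
    theta = np.random.default_rng(0).normal(0, sigma, 1_000_000)
    p_simulated = np.mean(np.abs(theta) < t)

    print(p_closed_form, p_simulated)  # the two agree closely
    ```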

  • @emf1775 · 3 years ago

    Gelman is quite nice to listen to. His RL voice sounds different from his blog voice somehow

  • @NikStar210 · 7 years ago

    Prof. Gelman: At 19:00 you talk about checking how the model fits the data; are there any tools in Stan to avoid overfitting?

    • @generableHQ · 7 years ago +4

      There are no "tools", but this may help: andrewgelman.com/2017/07/15/what-is-overfitting-exactly/

  • @yoij-ov3sd · 4 years ago

    At 16:27 you talk about checking predictive posterior distributions for games against their actual results to check if they are within their respective 95% CIs. Are these games training data or unseen data?

    • @Houshalter · 4 years ago

      He didn't say they were held-out samples, so probably they were in the training data. Ideally you shouldn't do that, because it would hide overfitting. However, Bayesian methods are much less prone to overfitting. In his example he found a completely different problem.
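
      As a concrete version of that check: take the posterior predictive draws per game, form 95% intervals, and count how many observed outcomes land inside. On training data this mostly flags model problems rather than out-of-sample accuracy. A toy numpy sketch, with simulated stand-ins for the Stan draws:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      n_games, n_draws = 64, 4000

      # Stand-ins for posterior predictive draws and observed outcomes.
      pred = rng.normal(0, 1, (n_draws, n_games))
      observed = rng.normal(0, 1, n_games)

      lo, hi = np.percentile(pred, [2.5, 97.5], axis=0)
      coverage = np.mean((observed >= lo) & (observed <= hi))
      print(f"share inside 95% intervals: {coverage:.2f}")  # ~0.95 here
      ```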

    • @yoij-ov3sd · 4 years ago

      @Houshalter thanks

    • @crypticnomad · 4 years ago

      @Houshalter I've heard people argue that Bayesian methods don't overfit, but the developers sometimes do have incorrect assumptions about priors and their distributions, which can lead to situations that look similar to overfitting in the classic sense of the word. For example, say we naively look at some time-series data and think we have a solid basic understanding of the distributions of the processes that generated it. We fit a model and it seems to do well on the training set but fails pretty horribly on unseen data. There are many reasons this could happen, and almost all of them come down to the training sample not containing enough data to really estimate the distributions and their parameters, to picking the wrong distributions/priors for those processes, or to the data-generating processes varying over time.

    • @Houshalter · 4 years ago

      @crypticnomad Bayesian methods can absolutely suffer from a bad model, but that's a different problem from normal overfitting, and a validation test would not necessarily show any difference from the training set.

  • @erwinbanez6442 · 7 years ago +1

    Thanks for this. Any link to the slides?

    • @SrikantGadicherla · 6 years ago +1

      www.dropbox.com/s/sfi0pcf7hais91r/Gelman_Stan_talk.pdf?dl=0
      This talk (the slides linked above) was given at Aalto University in October 2016.

  • @mattn2364 · 4 years ago +7

    "Soccer games are random, it all depends how good the acting is"

  • @JesseFagan · 4 years ago

    What was the bug he fixed? I want to know how he solved the problem.

    • @omarfsosa · 4 years ago +3

      There was a factor of 2 missing. Full story is here: statmodeling.stat.columbia.edu/2014/07/15/stan-world-cup-update/

  • @prod.kashkari3075 · 1 year ago

    Lmfao, this guy's so funny.