The false positive risk: an R Shiny app.

  • Published: 7 Sep 2024
  • Yet another talk about p values. This one has more emphasis on our web calculator than usual because it was given to the UCL R users group, 8 Dec 2022. For explanations and downloads click on SHOW MORE
    The web app can be found at fpr-calc.ucl.ac...
    In the talk, I forgot to draw attention to the Notes tab in the web app, where more details and references can be found.
    I'll clarify here some of the answers that I gave in the Q&A.
    1. Ellen asked about the effect size. The app (and the R scripts) need as input the normalised effect size, i.e. the effect size divided by its standard deviation (not its standard error). I should have mentioned that the calculations assume that we know the true values of these. In real life, all we have is the sample estimates: the observed mean effect size and its sample standard deviation. Except for very small samples, use of the observed mean and standard deviation is unlikely to have any great effect on the conclusions. This was tested by simulations in section 5.1 of my 2019 paper: royalsocietypu...
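    To make that distinction concrete, here is a minimal R sketch (not part of the app; the data and group sizes are invented) of the normalised effect size: divide the observed difference by the sample standard deviation, not by the standard error.

```r
## Hypothetical two-group data, 16 observations per group (made-up numbers)
set.seed(1)
treated <- rnorm(16, mean = 1, sd = 1)
control <- rnorm(16, mean = 0, sd = 1)

effect    <- mean(treated) - mean(control)
pooled_sd <- sqrt((var(treated) + var(control)) / 2)  # pooled SD (equal group sizes)
se        <- pooled_sd * sqrt(2 / length(treated))    # standard error of the difference

effect / pooled_sd  # normalised effect size: the input the app and scripts expect
effect / se         # NOT this: dividing by the standard error gives a t statistic
```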
    2. Ellen also asked about the problems I had with making the Shiny app work. After I had asked for help on Twitter, I got this, from Brendan Halpin of the University of Limerick.
    "I think the core problem is that you can't use the same name for
    multiple inputs. That is, the inputs in the different conditionalPanels
    need different names. See the attached where I give them different names
    in ui.R and re-assign them to common names in server.R. "
    This solved the problem.
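    For anyone who runs into the same problem, here is a minimal, self-contained sketch of that pattern (the input names and panels are hypothetical, not the app's actual code): each conditionalPanel gets its own input ID in the UI, and the server re-assigns whichever one applies back to a single common value.

```r
library(shiny)

ui <- fluidPage(
  radioButtons("calc_type", "Calculation:",
               choices = c("Reverse Bayes" = "rev", "FPR from prior" = "fpr")),
  conditionalPanel(
    condition = "input.calc_type == 'rev'",
    numericInput("pval_rev", "Observed p value:", value = 0.05)  # unique ID
  ),
  conditionalPanel(
    condition = "input.calc_type == 'fpr'",
    numericInput("pval_fpr", "Observed p value:", value = 0.05)  # unique ID
  ),
  verbatimTextOutput("result")
)

server <- function(input, output) {
  # Re-assign the panel-specific inputs to one common (reactive) name
  pval <- reactive({
    if (input$calc_type == "rev") input$pval_rev else input$pval_fpr
  })
  output$result <- renderPrint(pval())
}

shinyApp(ui, server)
```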
    The current code, app.R, can be downloaded from www.onemol.org....
    Links to papers, articles and other videos on this topic can be found at www.onemol.org....
    3. Kimberly asked about the extent to which putting equal prior probability on H0 and H1 is a good expression of ignorance. This is contentious, but I think it's widely believed to be as close as you can get to equipoise (in the context of comparing two specified hypotheses). If you don't like it, then I suggest using reverse Bayes (radio button 1) to calculate the prior probability that you'd need in order to achieve a specified FPR. For example, if you observe p = 0.05, then, in order to achieve a false positive risk of 0.05, you'd need to persuade your readers that you could justify assuming that P(H1) was 0.87, i.e. P(H0) = 0.13, before you'd done the experiment. You'd need pretty strong prior knowledge to justify this.
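    As a rough illustration of that reverse-Bayes calculation, here is an R sketch based on a normal approximation and the "p-equals" likelihood ratio. The app uses the exact t-distribution calculation, so the numbers differ slightly, and the function name, effect size and sample size below are only illustrative.

```r
## Given an observed p value, what prior P(H1) would be needed to achieve a
## specified false positive risk? (Normal approximation; illustrative defaults.)
prior_needed <- function(p_obs, fpr_target, effsize = 1, n = 16) {
  z     <- qnorm(1 - p_obs / 2)        # two-sided z corresponding to the observed p
  delta <- effsize * sqrt(n / 2)       # noncentrality for a two-group comparison
  # Likelihood ratio P(data | H1) / P(data | H0) at the observed p ("p-equals")
  lr <- (dnorm(z - delta) + dnorm(-z - delta)) / (2 * dnorm(z))
  # FPR = 1 / (1 + lr * prior_odds), so solve for the prior odds on H1
  odds1 <- (1 - fpr_target) / (fpr_target * lr)
  odds1 / (1 + odds1)                  # prior P(H1) that would be required
}

prior_needed(p_obs = 0.05, fpr_target = 0.05)
## about 0.89 with these defaults, in the same ballpark as the 0.87 quoted above
```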
    I should probably have put more emphasis on the fact that any attempt to reduce the risk of false positives will, inevitably, increase the risk of false negatives. I interpret this trade-off as meaning that science is harder than we thought.

Comments • 1

  • @fburton8
    1 year ago • +1

    Thanks for the talk!
    Writing “P values < 0.05 were considered significant” is pretty much universal in biological science papers. Would there be value, in this context, in quoting the smaller P value that is equivalent to a 5% false positive risk? Of course, one consequence would be that less gets published, but maybe that is a good thing?!
    That question aside, I do like the suggestion to quote FPRs _in addition to_ P values. It’s a fairly gentle but significant(!) move in the right direction.