An introduction to Jeffreys priors - 2
- Published: 18 Sep 2024
- This series of videos explains what is meant by a Jeffreys prior and how it satisfies a particular notion of ‘uninformativeness’. The concept is illustrated through a simple Bernoulli example.
This video is part of a lecture course which closely follows the material covered in the book, "A Student's Guide to Bayesian Statistics", published by Sage, which is available to order on Amazon here: www.amazon.co....
For more information on all things Bayesian, have a look at: ben-lambert.co.... The playlist for the lecture course is here: • A Student's Guide to B...
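The Bernoulli example the description mentions can be checked numerically. This is my own minimal sketch, not code from the video: for a single Bernoulli(θ) observation the Fisher information is I(θ) = 1/(θ(1−θ)), so the Jeffreys prior is proportional to θ^(−1/2)(1−θ)^(−1/2), which normalises to a Beta(1/2, 1/2) density with constant 1/π.

```python
import math

def jeffreys_unnorm(theta):
    # For one Bernoulli(theta) draw, the Fisher information is
    # I(theta) = 1 / (theta * (1 - theta)); Jeffreys takes its square root.
    return 1.0 / math.sqrt(theta * (1.0 - theta))

# Numerically integrate over (0, 1) with the midpoint rule to estimate the
# normalising constant; analytically it equals pi, so the normalised prior
# is the Beta(1/2, 1/2) density.
n = 200_000
total = sum(jeffreys_unnorm((k + 0.5) / n) for k in range(n)) / n
print(total)  # close to pi ≈ 3.14159
```

The midpoint rule is used because the integrand has (integrable) singularities at 0 and 1, so endpoint-based rules would fail.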
When you write the log likelihood (1:00), the expression inside the brackets is the posterior, not the likelihood.
This other video of Dr. Lambert's might clear things up a bit: ruclips.net/video/IhoEwC9R8pA/видео.htmlsi=TSzEfWCRUN4JEIrD
I am not used to leaving comments under YouTube videos, but big thanks to you, Ben. Definitely not the first nor the last video of yours that I will be watching.
Thank you for sharing this insight. Straight and simple.
why can you cancel the integrals in this case?
This has been puzzling me as I see this happening in another video in this series! Can somebody please clarify?
It's really sloppy. What he is actually doing is applying a Jacobian, i.e. a change of variables in the integral.
Still have no idea. Ben, can you decipher this for a non-PhD statistician in a practical sense, i.e. with some data examples? It is impenetrable.
To be fair to Ben, it's a set of videos about a notorious theoretical issue that led to Bayesian inference being deemed "unusable" for many years. As such, the videos deal with the broad problem of what happens if two people choose to define the same question in slightly different ways. Numerical examples would hide the nature of the solution to the bigger problem.
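The Jacobian point raised in this thread can be made concrete with a small numerical sketch. This is my own illustration, not from the video, and the reparameterisation φ = θ² is a hypothetical choice: computing the Jeffreys prior directly in φ gives the same function as transforming the θ-prior with the change-of-variables rule, which is exactly the invariance the videos are about.

```python
import math

def fisher_theta(theta):
    # Fisher information for one Bernoulli(theta) observation.
    return 1.0 / (theta * (1.0 - theta))

def jeffreys_theta(theta):
    # Unnormalised Jeffreys prior in theta: square root of the Fisher information.
    return math.sqrt(fisher_theta(theta))

# Hypothetical reparameterisation for illustration: phi = theta**2, theta = sqrt(phi).
def jeffreys_phi_direct(phi):
    # Jeffreys prior computed directly in phi, using
    # I(phi) = I(theta(phi)) * (dtheta/dphi)**2.
    theta = math.sqrt(phi)
    dtheta_dphi = 1.0 / (2.0 * math.sqrt(phi))
    return math.sqrt(fisher_theta(theta) * dtheta_dphi**2)

def jeffreys_phi_via_jacobian(phi):
    # The theta-prior pushed through the change-of-variables (Jacobian) rule:
    # p_phi(phi) = p_theta(theta(phi)) * |dtheta/dphi|.
    theta = math.sqrt(phi)
    dtheta_dphi = 1.0 / (2.0 * math.sqrt(phi))
    return jeffreys_theta(theta) * abs(dtheta_dphi)

# The two routes agree pointwise: this is the invariance property of Jeffreys priors.
for phi in (0.1, 0.25, 0.5, 0.81):
    assert abs(jeffreys_phi_direct(phi) - jeffreys_phi_via_jacobian(phi)) < 1e-12
```

For an arbitrary prior the two routes would disagree; the "cancelling" only works because the square root of the Fisher information transforms with exactly the Jacobian factor.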
Isn't it supposed to be l(x given theta)?
excellent