Attention conservation notice: 750+ self-promoting words about a new preprint on Bayesian statistics and the philosophy of science. Even if you like watching me ride those hobby-horses, why not check back in a few months and see if peer review has exposed it as a mass of trivialities, errors, and trivial errors?
I seem to have a new preprint:
As the two or three people who still read this blog may recall, I have long had a Thing about Bayesianism, or more exactly the presentation of Bayesianism as the sum total of rationality, and the key to all methodologies. (Cf.) In particular, the pretense that all a scientist really wants, or should want, is to know the posterior probability of their theories — the pretense that Bayesianism is a solution to the problem of induction — bugs me intensely. This is the more or less explicit ideology of a lot of presentations of Bayesian statistics (especially among philosophers, economists* and machine-learners). Not only is this crazy as methodology — not only does it lead to the astoundingly bass-ackwards mistake of thinking that using a prior is a way of "overcoming bias", and to myths about Bayesian super-intelligences — but it doesn't even agree with what good Bayesian data analysts actually do.
If you take a good Bayesian practitioner and ask them "why are you using a hierarchical linear model with Gaussian noise and conjugate priors?", or even "why are you using that Gaussian process as your prior distribution over regression curves?", then, if they have any honesty and self-awareness, they will never reply "After offering myself a detailed series of hypothetical bets, the stakes carefully gauged to assure risk-neutrality, I elicited it as my prior, and got the same results regardless of how I framed the bets" — which is the official story about operationalizing prior knowledge and degrees of belief. (And looking for "objective" priors is hopeless.) Rather, data analysts will point to some mixture of tradition, mathematical convenience, computational tractability, and qualitative scientific knowledge and/or guesswork. Our actual degree of belief in our models is zero, or nearly so. Our hope is that they are good enough approximations for the inferences we need to make. For such a purpose, Bayesian smoothing may well be harmless. But you need to test the adequacy of your model, including the prior.
Admittedly, checking your model involves going outside the formalism of Bayesian updating, but so what? Asking a Bayesian data analyst not just whether but how their model is mis-specified is not, pace Brad DeLong, tantamount to violating the Geneva Convention. Instead, it is recognizing them as a fellow member of the community of rational inquirers, rather than a dumb numerical integration subroutine. In practice, good Bayesian data analysts do this anyway. The ideology serves only to give them a guilty conscience about doing good statistics, or to waste time in apologetics and sophistry. Our modest hope is to help bring an end to these ideological mystifications.
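To make "going outside the formalism" concrete, here is a minimal sketch of one such check, a posterior predictive check of the kind Andy has long advocated. All the specifics are illustrative assumptions on my part (the conjugate normal-mean model, the heavy-tailed fake data, the max-absolute-value test statistic), not anything from the paper: we fit the model by the usual conjugate update, simulate replicated data sets from the fitted model, and ask whether a statistic of the real data looks like a typical draw from the model's own predictive distribution. Note that the comparison itself is not a Bayesian updating step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "observed" data, deliberately heavier-tailed (Student t, 3 df)
# than the Gaussian model we are about to fit to it.
y = rng.standard_t(df=3, size=100)
n = y.size

# Conjugate model: y_i ~ N(mu, 1), with a N(0, 10^2) prior on mu.
# The posterior for mu is Gaussian, by the standard conjugate update.
prior_mean, prior_var, noise_var = 0.0, 100.0, 1.0
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mean = post_var * (prior_mean / prior_var + y.sum() / noise_var)

# Test statistic: the largest absolute observation, chosen because it is
# sensitive to the tail behaviour the Gaussian model gets wrong.
def stat(x):
    return np.max(np.abs(x))

observed = stat(y)

# Posterior predictive replications: draw mu from the posterior, then draw
# a whole replicated data set, and record the statistic each time.
reps = np.empty(2000)
for i in range(reps.size):
    mu = rng.normal(post_mean, np.sqrt(post_var))
    y_rep = rng.normal(mu, np.sqrt(noise_var), size=n)
    reps[i] = stat(y_rep)

# Posterior predictive p-value: how often the model reproduces something
# at least as extreme as what we actually saw. Values near 0 or 1 flag
# an aspect of the data the model cannot capture.
p_value = np.mean(reps >= observed)
print(f"observed statistic {observed:.2f}, predictive p-value {p_value:.3f}")
```

With heavy-tailed data the replicated maxima will tend to fall short of the observed one, signalling misspecification; the point is that this diagnostic compares the model to the data from outside, rather than just computing a posterior over a fixed model space.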
The division of labor on this paper was very simple: Andy supplied all the worthwhile parts, and I supplied everything mistaken and/or offensive. (Also, Andy did not approve this post.)
*: Interestingly, even when economists insist that rationality is co-extensive with being a Bayesian agent, none of them actually treat their data that way. Even when they do Bayesian econometrics, they are willing to consider that the truth might be outside the support of the prior, which to a Real Bayesian is just crazy talk. (Real Bayesians enlarge their priors until they embrace everything which might be true.) Edward Prescott forms a noteworthy exception: under the rubric of "calibration", he has elevated his conviction that his prior guesses are never wrong into a new principle of statistical estimation.
Manual trackback: Andrew Gelman; Build on the Void; The Statistical Mechanic; A Fine Theorem; Evolving Thoughts; Making Sense with Facilitated Systems; Vukutu; EconTech; Gravity's Rainbow; Nuit Blanche; Smooth; Andrew Gelman again (incorporating interesting comments from Richard Berk); J.J. Hayes's Amazing Antifolk Explicator and Philosophic Analyzer; Manuel "Moe" G.; Dynamic Ecology
Posted at June 26, 2010 15:58 | permanent link