Many examples are from this Effective Altruism forum post. A more exhaustive (but less accessible) list of priors, edited by Andrew Gelman, can be found here. Without much further ado:

Practical priors

Theory

  • The principle of maximum entropy
    • For example the maximum entropy distribution
      • …on a finite set is the uniform distribution,
      • …on the positive reals with fixed mean is the exponential distribution,
      • …on the reals with fixed mean and variance is the normal distribution.
  • Universality/Looking for fixed points
    • If some quantity to be estimated can be seen to be (roughly) invariant under some flow, its probability distribution has to be (close to) a fixed point of that flow.
    • By far the most common instances of this are the “flows” of adding or multiplying another independent variable and then normalising. The multiplicative case reduces to the additive one by taking logarithms.
      • When variances are finite, the additive flow leads to a normal distribution, by the central limit theorem. With infinite variances, a similar argument leads to α-stable distributions (and α-stable Lévy processes).
      • Example: the usual justification for modelling stock market prices as geometric Brownian motions.
    • Another set of examples is that of flows whose fixed points are scale invariant. In this case we expect power laws to appear.
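The maximum-entropy principle on a finite set can be checked numerically. The sketch below (illustrative parameters, six outcomes like a die) compares the Shannon entropy of the uniform distribution with that of many randomly drawn distributions on the same outcomes; none should exceed log n:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    """Shannon entropy in nats, ignoring zero-probability outcomes."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

n = 6  # e.g. outcomes of a die
uniform = np.full(n, 1 / n)

# Sample many random distributions on the same finite set; none should
# exceed the uniform distribution's entropy, log(n).
random_dists = rng.dirichlet(np.ones(n), size=10_000)
best_random = max(entropy(p) for p in random_dists)

print(entropy(uniform))  # log(6) ≈ 1.7918
print(best_random < entropy(uniform))
```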

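The add-and-normalise flow can likewise be simulated. The sketch below (arbitrary sample sizes) sums iid uniform variables, renormalises, and checks the low moments of the result against the standard normal, as the central limit theorem predicts:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Flow": sum n iid copies, then renormalise to unit variance.
# For finite-variance inputs the fixed point is the standard normal (CLT).
n, trials = 200, 20_000
x = rng.uniform(-1, 1, size=(trials, n))   # Var(U(-1, 1)) = 1/3
s = x.sum(axis=1) / np.sqrt(n / 3)

# Low moments of the standard normal: mean 0, variance 1,
# third moment 0, fourth moment 3.
print(np.mean(s), np.var(s))
print(np.mean(s**3), np.mean(s**4))
```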
Combining the above to get a forecast

More often than not, the questions we are interested in are more complex than any of the above. However, we may be able to break them down into subquestions for which the above priors are helpful; see e.g. Fermi problems.
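As an illustration, here is a Monte Carlo sketch of a classic Fermi problem (piano tuners in a large city). Every subestimate below is an assumed lognormal prior with made-up medians and spreads, chosen purely for illustration, not data:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# Hypothetical Fermi decomposition: piano tuners in a large city.
# Each subquantity gets a rough lognormal prior; all numbers are
# illustrative assumptions.
def lognormal(median, spread_factor):
    """Lognormal samples with the given median and multiplicative spread."""
    return rng.lognormal(np.log(median), np.log(spread_factor), N)

households      = lognormal(1e6, 2)     # households in the city
pianos_per_hh   = lognormal(0.02, 2)    # fraction owning a piano
tunings_per_yr  = lognormal(1.0, 2)     # tunings per piano per year
tunings_per_day = lognormal(3.0, 1.5)   # jobs one tuner does per day
work_days       = lognormal(250, 1.2)   # working days per year

tuners = households * pianos_per_hh * tunings_per_yr / (tunings_per_day * work_days)

# Summarise the implied prior by its median and an 80% interval.
print(np.percentile(tuners, [10, 50, 90]))
```

Sampling, rather than multiplying point estimates, keeps track of how the spreads of the subestimates combine into the spread of the answer.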

Note, however, that the more subquestions there are to estimate, the bigger the expected error (although, if the individual estimates are unbiased and independent, this error is smaller than one might think). Moreover, one needs to be careful as soon as one uses information coming from the tail of a distribution (e.g. a 95% or 99% confidence interval); slightly paraphrasing SimonM:

  1. People (and their models) are generally not well calibrated, especially at the tails.
  2. Even when some are, it takes a while to tell those that are from those that are not.
  3. Tails are often dominated by model failure, so asking about 95% CIs tells you more about their model than about “real” probabilities.
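The earlier parenthetical claim (that independent, unbiased errors accumulate more slowly than the number of subestimates) is easy to simulate: with k independent multiplicative errors, each with log-standard-deviation σ, the combined log-error grows like σ√k rather than σk. A minimal sketch, with an assumed per-estimate error of about 25%:

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 100_000
sigma = 0.25   # assumed log-error of a single subestimate (~25% multiplicative)

# k independent, unbiased-in-log-space multiplicative errors combine
# additively in log space; their total spread grows like sigma * sqrt(k),
# not sigma * k.
for k in (1, 4, 16):
    total_log_error = rng.normal(0.0, sigma, size=(trials, k)).sum(axis=1)
    print(k, np.std(total_log_error))  # ≈ sigma * sqrt(k)
```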