Re: [IPTA-cs] Bayesian upper limits



Much of what I said in my last message is ameliorated by the use of
uninformative priors.  If we give something proportional to
P(D|M) P(M), and P(M) is a constant, then it is also proportional to
P(D|M).  So the same information is readily accessible.
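
As a quick sanity check, here is a toy sketch in Python (the Gaussian
likelihood and the grid are invented for illustration, not taken from
any real analysis): once both curves are normalized, the flat-prior
posterior and the likelihood are the same function.

    import numpy as np

    # Toy 1-D parameter grid and an invented likelihood standing in
    # for P(D|M).
    theta = np.linspace(0.0, 10.0, 1001)
    dtheta = theta[1] - theta[0]
    likelihood = np.exp(-0.5 * (theta - 3.0) ** 2)   # P(D|M)

    prior = np.ones_like(theta)                      # constant P(M)
    posterior = likelihood * prior                   # P(D|M) P(M)

    # After normalization the two curves coincide, so reporting
    # either one conveys the same information.
    post_norm = posterior / (posterior.sum() * dtheta)
    like_norm = likelihood / (likelihood.sum() * dtheta)
    assert np.allclose(post_norm, like_norm)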

This lets us know what kind of prior to use.  If M is discrete, we
should use a prior with equal weight on all possibilities.  If M is
continuous, then what we report, once normalized, is a probability
density function over the parameters of M, and we should use a
constant density, i.e., a uniform prior, in those parameters.  If the
parameter is, say, log10_A_GWB, we should use a uniform prior in
log10_A_GWB, which is a log-uniform prior in A_GWB.  (If the
parameter is A_GWB, we should use a uniform prior in that.  But if we
are then going to give a graph whose horizontal axis is log A_GWB, we
should rescale the density function by the Jacobian
dA_GWB/d(log A_GWB) so that it corresponds to the new axis, producing
the same result as having used a logarithmic parameter.)
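
Here is a small sketch of that rescaling (Python; the density is a
made-up stand-in and the grid is illustrative).  Multiplying p(A) by
the Jacobian |dA/d(log10 A)| = A ln(10) gives the density on the
log10 A axis, with the area under the curve preserved.

    import numpy as np

    # Made-up posterior density in A, tabulated on a grid; only the
    # rescaling step matters here.
    A = np.logspace(-16, -12, 401)
    p_A = np.exp(-A / 1e-14)                # unnormalized density in A
    p_A /= np.sum(p_A * np.gradient(A))     # normalize over A

    # Rescale to the log10(A) axis: |dA/d(log10 A)| = A ln(10).
    log10_A = np.log10(A)
    p_logA = p_A * A * np.log(10.0)

    # Both versions integrate to 1 on their respective axes, so this
    # matches what a logarithmic parameter would have given directly.
    print(np.sum(p_A * np.gradient(A)))            # ~1 over A
    print(np.sum(p_logA * np.gradient(log10_A)))   # ~1 over log10(A)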

So nothing I said is very important for presenting posteriors, except
to say that you should always use an uninformative prior even if you
have more information.  But there is still the question of upper
limits.  Suppose we give P(D|logGmu) or, equivalently (since the
prior is constant), P(D|logGmu) P(logGmu).  I continue to think that
the interesting number is the point at which this quantity drops to
5% of its value at Gmu=0, not the value that encloses 95% of the
probability.  The latter depends on the lower cutoff on logGmu,
because lowering the cutoff adds probability mass in the region where
the likelihood is flat, which shifts the 95% quantile.  The former
does not.
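
To make that concrete, here is a toy comparison in Python (the
likelihood shape is invented; this is not a real Gmu analysis).  The
5%-drop point stays put while the 95% bound wanders with the cutoff.

    import numpy as np

    # Invented likelihood in logGmu: flat as Gmu -> 0, then falling,
    # mimicking data that cannot distinguish a tiny signal from zero.
    def likelihood(logGmu):
        Gmu = 10.0 ** np.asarray(logGmu)
        return np.exp(-(Gmu / 1e-10) ** 2)

    # (1) The cutoff-independent number: where the curve drops to 5%
    # of its value as Gmu -> 0.
    grid = np.linspace(-14.0, -8.0, 60001)
    L = likelihood(grid)
    L0 = likelihood(-30.0)                  # effectively Gmu = 0
    drop_point = grid[np.argmax(L <= 0.05 * L0)]
    print("5%-drop point:", drop_point)     # same for any cutoff

    # (2) The value enclosing 95% of the probability, which moves as
    # the lower cutoff on logGmu moves.
    for cutoff in (-14.0, -12.0, -11.0):
        g = np.linspace(cutoff, -8.0, 60001)
        cdf = np.cumsum(likelihood(g))
        cdf /= cdf[-1]
        print("cutoff", cutoff, "-> 95% bound",
              g[np.searchsorted(cdf, 0.95)])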

                                        Ken