Dear All,
The spring programme of Statistics seminars at the University of Kent at Canterbury
is given below. All are welcome to attend (no registration necessary). For full details
visit http://www.kent.ac.uk/IMS/seminars/index.html#statistics
Prof. Alastair Young (Imperial College London)
February 4
Title: Objective Bayes and Conditional Probability Matching
In Bayesian parametric inference, in the absence of subjective prior information about
the parameter of interest, it is natural to use an objective prior under which posterior
quantiles have, at least to some higher-order approximation in the sample size, the
correct frequentist interpretation. Such priors are termed probability matching priors.
In many circumstances, however, the
appropriate frequentist inference is a conditional one. The key contexts involve
inference in multi-parameter exponential families, where conditioning eliminates the
nuisance parameter, and models which admit ancillary statistics, where conditioning
on the ancillary is indicated by the conditionality principle of inference. In this talk,
we consider conditions on the prior under which posterior quantiles have, to high
order, the correct conditional frequentist interpretation. We focus on the exponential
family context, where it turns out that a sufficient condition for higher order
conditional frequentist accuracy reduces to a condition on the model, not the prior.
When the condition is satisfied, as it is in many key situations, any first order
probability matching prior (in the unconditional sense) automatically yields higher
order conditional probability matching.
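(A note for orientation, not part of the abstract: one standard way to make the matching
property precise, in notation of our own choosing, is the following.) Writing
\theta^{(1-\alpha)}(\pi, X) for the (1-\alpha) posterior quantile of a scalar interest
parameter \theta under prior \pi, given data X = (X_1, \ldots, X_n), the prior \pi is
first-order probability matching if, for each \alpha,
\[
  P_\theta\{\theta \le \theta^{(1-\alpha)}(\pi, X)\} = 1 - \alpha + O(n^{-1}),
\]
with higher-order matching strengthening the error term, typically to O(n^{-3/2}). A
classical result of Welch and Peers (1963) is that in scalar-parameter models the
Jeffreys prior \pi(\theta) \propto i(\theta)^{1/2}, where i is the Fisher information,
achieves first-order matching.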
Prof. John Kent (University of Leeds)
February 11
Title: Procrustes methods in projective shape analysis
Projective shape refers to information recorded on a camera image that is invariant under
changes of the camera view. It is an important tool in machine vision for identifying common
features in images of the same scene taken from different camera angles. The simplest
example is the cross ratio for 4 points on a line. In this talk we describe the beginnings of a
metric theory for projective shape which provides the tools needed to estimate shape
averages and shape variability. The methodology is analogous to the more familiar
Procrustes methodology for similarity shape analysis.
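As a concrete illustration of the invariance mentioned above (a sketch of ours, not
material from the talk): the cross ratio of four collinear points, in affine
coordinates, is unchanged by any 1-D projective transformation x -> (ax + b)/(cx + d)
with ad - bc != 0, as a few lines of Python confirm numerically.

def cross_ratio(x1, x2, x3, x4):
    # Cross ratio of four points on a line, given in affine coordinates.
    return ((x1 - x3) * (x2 - x4)) / ((x1 - x4) * (x2 - x3))

def projective_map(x, a, b, c, d):
    # A 1-D projective (Moebius) transformation; requires a*d - b*c != 0.
    return (a * x + b) / (c * x + d)

pts = [0.0, 1.0, 2.5, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*(projective_map(x, 2.0, 1.0, 0.5, 3.0) for x in pts))
print(before, after)  # both values equal 1.25 (up to floating point)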
Dr. Robert Gramacy (University of Cambridge)
February 25
Title: Particle Learning for Sequential Design and Optimization
We devise a sequential Monte Carlo method, via particle learning (PL), for on-line sampling
from the posterior distribution of two static non-parametric regression models: (1) Gaussian
processes (GPs), a typical choice for the sequential design of computer experiments and
optimization; and (2) a new dynamic tree model inspired by Bayesian CART. Online PL of
these models, coupled with active learning heuristics (such as the ALM/C algorithms and the
expected improvement), represents a thrifty approach to sequential design compared to
MCMC, which must be restarted and iterated to convergence as each new design point is
included. Our empirical results demonstrate that the PL approach yields sequential
designs that are comparable (with GPs) or better (with trees) than those obtained from
similar and higher-powered methods using MCMC inference, in both cases at a fraction
of the computational cost. We also
demonstrate how the ensemble aspects of PL lead to a better exploration of the posterior
distribution compared to MCMC, which can suffer from mixing problems.
This is joint work with Nicholas Polson and Matthew Taddy, both at the University of
Chicago Booth School of Business.
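For readers new to particle learning, the following toy sketch (ours, and far simpler
than the GP and tree models of the talk) shows the basic resample/propagate pattern on
a two-component Gaussian mixture with known means and an unknown mixing weight; each
particle carries only the sufficient statistics (allocation counts) that determine the
conditional posterior of the weight.

import numpy as np

rng = np.random.default_rng(0)
MU = np.array([-2.0, 2.0])   # known component means
SIGMA = 1.0                  # known common standard deviation
A0 = B0 = 1.0                # Beta(1, 1) prior on the mixing weight w

def normal_pdf(y, mu, sigma):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def pl_update(counts, y):
    # One PL step: resample particles by one-step predictive fit, then
    # propagate by sampling the new point's allocation and updating the
    # sufficient statistics (the per-component counts).
    n1, n2 = counts[:, 0], counts[:, 1]
    w_mean = (A0 + n1) / (A0 + B0 + n1 + n2)      # E[w | sufficient stats]
    lik1 = normal_pdf(y, MU[0], SIGMA)
    lik2 = normal_pdf(y, MU[1], SIGMA)
    pred = w_mean * lik1 + (1.0 - w_mean) * lik2  # exact predictive: linear in w
    idx = rng.choice(len(counts), size=len(counts), p=pred / pred.sum())
    counts = counts[idx].copy()                   # resample
    w_mean = w_mean[idx]
    p1 = w_mean * lik1 / (w_mean * lik1 + (1.0 - w_mean) * lik2)
    z = rng.random(len(counts)) < p1              # propagate the allocation
    counts[z, 0] += 1
    counts[np.logical_not(z), 1] += 1
    return counts

# Simulate data with true weight 0.7 and run the filter online.
true_w, n_obs, n_particles = 0.7, 500, 2000
z_true = rng.random(n_obs) < true_w
ys = rng.normal(np.where(z_true, MU[0], MU[1]), SIGMA)
counts = np.zeros((n_particles, 2))
for y in ys:
    counts = pl_update(counts, y)
post_mean_w = np.mean((A0 + counts[:, 0]) / (A0 + B0 + counts.sum(axis=1)))
print(f"posterior mean of w: {post_mean_w:.3f} (truth {true_w})")

Each observation is absorbed in a single resample/propagate pass, which is what makes
the approach attractive against MCMC runs restarted at every new design point.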
Prof. Steffen Lauritzen (Oxford University)
March 4
Prof. P. E. Jupp (University of St. Andrews)
March 18
Prof. Jonathan Forster (University of Southampton)
March 25
Title: Bayesian model averaging for categorical data
It is common for multivariate categorical data (which may be represented as a contingency
table) to be unbalanced or sparse, particularly when the dimensionality is large. Then,
estimating cell probabilities, or predicting the unobserved population in a finite population
sampling analysis, typically relies on some kind of modelling to provide smoothed estimates.
In this talk I will investigate Bayesian model averaging as an estimation method for multivariate
categorical data which allows multiple models to be entertained. I will discuss default choices
of model class, and of prior distributions on model parameters, across a range of applications.
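(For reference, in our notation rather than the speaker's: the generic identity behind
Bayesian model averaging.) If \mathcal{M} is the class of models entertained and \Delta
is a quantity of interest, such as a cell probability, then
\[
  p(\Delta \mid y) = \sum_{m \in \mathcal{M}} p(\Delta \mid m, y)\, p(m \mid y),
  \qquad
  p(m \mid y) \propto p(m) \int p(y \mid \theta_m, m)\, p(\theta_m \mid m)\, d\theta_m,
\]
so the smoothing comes both from the priors on the model parameters \theta_m and from
averaging across models rather than conditioning on a single selected one.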
Dr. Efang Kong (University of Kent at Canterbury)
April 8