Binary Variables (1)
Coin flipping: heads=1, tails=0
Bernoulli Distribution
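In the usual notation, with µ = p(x = 1), the Bernoulli distribution and its moments are
\[
\mathrm{Bern}(x \mid \mu) = \mu^{x}(1-\mu)^{1-x}, \qquad \mathbb{E}[x] = \mu, \qquad \operatorname{var}[x] = \mu(1-\mu).
\]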
Binary Variables (2)
Binomial Distribution
N coin flips:
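With m denoting the number of heads in N independent flips, the standard form is
\[
\operatorname{Bin}(m \mid N, \mu) = \binom{N}{m}\,\mu^{m}(1-\mu)^{N-m}, \qquad \mathbb{E}[m] = N\mu, \qquad \operatorname{var}[m] = N\mu(1-\mu).
\]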
Binomial Distribution
The Multinomial Distribution
The Gaussian Distribution
The multinomial distribution is a generalization of the binomial distribution. Unlike the binomial distribution, where the random variable takes one of two outcomes, the random variable for the multinomial distribution can take k (k > 2) possible outcomes.
Let N be the total number of independent trials and let mi, i = 1, 2, ..., k, be the number of times outcome i appears. Then, performing N independent trials, the probability that outcome 1 appears m1 times, outcome 2 appears m2 times, ..., and outcome k appears mk times is given by the formula below.
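Writing µi for the probability of outcome i on a single trial (with Σi µi = 1), the standard form is
\[
p(m_1, m_2, \dots, m_k \mid N, \boldsymbol\mu) = \frac{N!}{m_1!\,m_2!\cdots m_k!}\,\prod_{i=1}^{k}\mu_i^{\,m_i}, \qquad \sum_{i=1}^{k} m_i = N .
\]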
Moments of the Multivariate Gaussian (1)
Moments of the Multivariate Gaussian (2)
In the integral for E[x], the term linear in z = x − µ vanishes thanks to the anti-symmetry of z, leaving E[x] = µ.
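For reference, the D-dimensional Gaussian density and its first two moments (standard results) are
\[
\mathcal{N}(\mathbf{x}\mid\boldsymbol\mu,\boldsymbol\Sigma) = \frac{1}{(2\pi)^{D/2}\,|\boldsymbol\Sigma|^{1/2}}\exp\Big\{-\tfrac{1}{2}(\mathbf{x}-\boldsymbol\mu)^{\mathsf T}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)\Big\},
\]
\[
\mathbb{E}[\mathbf{x}] = \boldsymbol\mu, \qquad \mathbb{E}[\mathbf{x}\mathbf{x}^{\mathsf T}] = \boldsymbol\mu\boldsymbol\mu^{\mathsf T} + \boldsymbol\Sigma, \qquad \operatorname{cov}[\mathbf{x}] = \boldsymbol\Sigma .
\]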
Central Limit Theorem
The distribution of the sum of N i.i.d. random variables becomes increasingly Gaussian as N grows.
Example: N uniform [0, 1] random variables.
Beta Distribution
Beta is a continuous distribution defined on the interval [0, 1], parameterized by two positive parameters a and b; in its density, Γ(·) denotes the gamma function. Beta is conjugate to the binomial and Bernoulli distributions.
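In the standard parameterization the density is
\[
\operatorname{Beta}(\mu \mid a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\,\mu^{a-1}(1-\mu)^{b-1}, \qquad \mathbb{E}[\mu] = \frac{a}{a+b} .
\]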
Beta Distribution
The Dirichlet Distribution
The Dirichlet distribution is a continuous multivariate probability distribution parameterized by a vector α of positive reals. It is the multivariate generalization of the beta distribution and the conjugate prior for the multinomial distribution.
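With α0 = Σk αk, the density takes the standard form
\[
\operatorname{Dir}(\boldsymbol\mu \mid \boldsymbol\alpha) = \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_K)}\,\prod_{k=1}^{K}\mu_k^{\,\alpha_k - 1}, \qquad \mu_k \ge 0, \quad \sum_{k=1}^{K}\mu_k = 1 .
\]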
Mixtures of Gaussians (1)
Mixtures of Gaussians (2)
Combine simple models into a complex model. Example: the Old Faithful data set, which a single Gaussian fits poorly but a mixture of two Gaussians captures well. A mixture with K = 3 components is a weighted sum of Gaussian components, each weighted by its mixing coefficient.
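In symbols (the standard mixture form, with πk the mixing coefficients):
\[
p(\mathbf{x}) = \sum_{k=1}^{K}\pi_k\,\mathcal{N}(\mathbf{x}\mid\boldsymbol\mu_k,\boldsymbol\Sigma_k), \qquad 0 \le \pi_k \le 1, \quad \sum_{k=1}^{K}\pi_k = 1 .
\]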
Mixtures of Gaussians (3)
The Exponential Family (1)
where η is the natural parameter and g(η) ensures that the distribution normalizes to one, so g(η) can be interpreted as a normalization coefficient.
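In the usual notation, with u(x) a vector of sufficient statistics and h(x) a fixed base function, the family and its normalization condition read
\[
p(\mathbf{x}\mid\boldsymbol\eta) = h(\mathbf{x})\,g(\boldsymbol\eta)\,\exp\{\boldsymbol\eta^{\mathsf T}\mathbf{u}(\mathbf{x})\}, \qquad
g(\boldsymbol\eta)\int h(\mathbf{x})\,\exp\{\boldsymbol\eta^{\mathsf T}\mathbf{u}(\mathbf{x})\}\,\mathrm{d}\mathbf{x} = 1 .
\]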
The Exponential Family (2.1)
The Exponential Family (2.2)
The Bernoulli Distribution
The Bernoulli distribution can hence be written in exponential-family form. Comparing with the general form, we can read off the natural parameter η and the functions u(x), h(x) and g(η); inverting the relation for η expresses µ through the logistic sigmoid.
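Concretely (a standard rewriting in the notation above):
\[
\operatorname{Bern}(x\mid\mu) = \exp\{x\ln\mu + (1-x)\ln(1-\mu)\} = (1-\mu)\exp\Big\{x\ln\frac{\mu}{1-\mu}\Big\},
\]
\[
\eta = \ln\frac{\mu}{1-\mu}, \qquad \mu = \sigma(\eta) = \frac{1}{1+e^{-\eta}}, \qquad u(x) = x, \quad h(x) = 1, \quad g(\eta) = \sigma(-\eta) .
\]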
The Exponential Family (3.1)
The Exponential Family (3.2)
The Multinomial Distribution
Writing the multinomial in exponential-family form with ηk = ln µk leads to an identification of the natural parameters.
NOTE: the ηk parameters are not independent, since the corresponding µk must satisfy Σk µk = 1.
Removing this constraint by expressing one µk in terms of the others gives a parameterization in which the ηk parameters are independent; the µk are then recovered from the ηk through the softmax function.
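Assuming the usual convention of eliminating µM, the unconstrained parameters and the softmax relation are
\[
\eta_k = \ln\frac{\mu_k}{\mu_M}, \qquad \mu_k = \frac{\exp(\eta_k)}{1+\sum_{j=1}^{M-1}\exp(\eta_j)}, \qquad k = 1, \dots, M-1 .
\]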
The Exponential Family (3.3)
The Multinomial distribution can then be
written as
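For a single observation x in 1-of-M coding, the resulting exponential-family form is
\[
p(\mathbf{x}\mid\boldsymbol\eta) = \Big(1+\sum_{k=1}^{M-1}\exp(\eta_k)\Big)^{-1}\exp\{\boldsymbol\eta^{\mathsf T}\mathbf{x}\},
\]
so that u(x) = x, h(x) = 1 and g(η) = (1 + Σk exp(ηk))⁻¹.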
The Exponential Family (4)
The Gaussian Distribution
where the natural parameters η and the functions u(x), h(x) and g(η) can be read off by comparison with the general form; see below.
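For the univariate Gaussian, the standard identification is
\[
p(x\mid\mu,\sigma^{2}) = \frac{1}{(2\pi\sigma^{2})^{1/2}}\exp\Big\{-\frac{(x-\mu)^{2}}{2\sigma^{2}}\Big\}
= h(x)\,g(\boldsymbol\eta)\,\exp\{\boldsymbol\eta^{\mathsf T}\mathbf{u}(x)\},
\]
\[
\boldsymbol\eta = \begin{pmatrix}\mu/\sigma^{2}\\ -1/(2\sigma^{2})\end{pmatrix}, \qquad
\mathbf{u}(x) = \begin{pmatrix}x\\ x^{2}\end{pmatrix}, \qquad
h(x) = (2\pi)^{-1/2}, \qquad
g(\boldsymbol\eta) = (-2\eta_2)^{1/2}\exp\Big(\frac{\eta_1^{2}}{4\eta_2}\Big) .
\]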
Conjugate priors
For any member of the exponential family, there exists a conjugate prior. Combining it with the likelihood function, we get a posterior of the same functional form as the prior; see below.
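In the exponential-family notation above, the conjugate prior and the resulting posterior take the standard forms
\[
p(\boldsymbol\eta\mid\boldsymbol\chi,\nu) = f(\boldsymbol\chi,\nu)\,g(\boldsymbol\eta)^{\nu}\exp\{\nu\,\boldsymbol\eta^{\mathsf T}\boldsymbol\chi\},
\]
\[
p(\boldsymbol\eta\mid\mathbf{X},\boldsymbol\chi,\nu) \propto g(\boldsymbol\eta)^{\nu+N}\exp\Big\{\boldsymbol\eta^{\mathsf T}\Big(\sum_{n=1}^{N}\mathbf{u}(\mathbf{x}_n)+\nu\boldsymbol\chi\Big)\Big\},
\]
where f(χ, ν) is a normalization coefficient and X = {x1, ..., xN} is the observed data.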
Conjugate priors (cont’d)
• Beta prior is conjugate to the binomial and Bernoulli distributions.
• Dirichlet prior is conjugate to the multinomial distribution.
• Gaussian prior is conjugate to the Gaussian distribution.
The prior is conjugate to the likelihood if the resulting posterior belongs to the same family of distributions as the prior.
Noninformative Priors (1)
With little or no information available a priori, we might choose a uniform prior; the three cases below are written out as formulas after this list.
• λ discrete with K states: uniform over the K values.
• λ ∈ [a, b] real and bounded: uniform over the interval.
• λ real and unbounded: a uniform prior is improper!
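As formulas (a sketch of the three cases):
\[
p(\lambda = k) = \frac{1}{K},\ k = 1,\dots,K; \qquad
p(\lambda) = \frac{1}{b-a},\ \lambda\in[a,b]; \qquad
p(\lambda) = \text{const on } \mathbb{R}, \text{ which cannot be normalized and is therefore improper.}
\]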
Nonparametric Methods (2)
Parametric distribution models are restricted
to specific forms, which may not always be
suitable; for example, consider modelling a
multimodal distribution with a single,
unimodal model.
Nonparametric approaches make few
assumptions about the overall shape of the
distribution being modelled.
Nonparametric Methods (1)
Histogram methods partition the data space into distinct bins with widths ∆i and count the number of observations, ni, in each bin.
• Often, the same width is used for all bins, ∆i = ∆.
• ∆ acts as a smoothing parameter.
• In a D-dimensional space, using M bins in each dimension will require M^D bins!
Nonparametric Methods (3)
Kernel Density Estimation is a non-parametric way of estimating the probability density function of a random variable.
Let (x1, x2, …, xN) be an i.i.d. sample drawn from some distribution with an unknown density p(x). Using a Parzen window
\[
k(u) =
\begin{cases}
1, & |u| < \tfrac{1}{2},\\
0, & \text{else},
\end{cases}
\]
it follows that
\[
p(x) = \frac{1}{Nh}\sum_{n=1}^{N} k\!\left(\frac{x-x_n}{h}\right),
\]
where k(·) is the kernel function and h is the bandwidth, serving as a smoothing parameter. The only parameter is h.
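A minimal sketch of this estimator in Python (NumPy only; the function and variable names are illustrative, not from the slides):

import numpy as np

def parzen_density(x, samples, h):
    # Parzen-window estimate p(x) = 1/(N h) * sum_n k((x - x_n)/h),
    # with the box kernel k(u) = 1 if |u| < 1/2 else 0.
    u = (x - samples) / h                 # scaled distance from x to every sample
    k = (np.abs(u) < 0.5).astype(float)   # box kernel evaluated at each sample
    return k.sum() / (len(samples) * h)

# Usage: estimate the density of 1000 standard-normal samples at x = 0;
# the result should be close to 1/sqrt(2*pi) ≈ 0.40 for a reasonable h.
samples = np.random.randn(1000)
print(parzen_density(0.0, samples, h=0.3))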
Nonparametric Methods (4)
To avoid discontinuities in p(x), use a smooth kernel, e.g. a Gaussian. Any kernel k(u) that is non-negative and integrates to one will work; h acts as a smoother.
Nonparametric Methods (5)
Nonparametric models (not histograms) require storing and computing with the entire data set. Parametric models, once fitted, are much more efficient in terms of storage and computation.
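With a Gaussian kernel the estimator becomes (standard form, written here in one dimension)
\[
p(x) = \frac{1}{N}\sum_{n=1}^{N}\frac{1}{(2\pi h^{2})^{1/2}}\exp\Big\{-\frac{(x-x_n)^{2}}{2h^{2}}\Big\} .
\]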
K-Nearest-Neighbours for Classification
K-Nearest-Neighbours for Classification
The k-nearest neighbors algorithm (k-NN) is a method for classifying objects
based on closest training examples in the feature space.
(Figures: example decision regions for K = 3 and K = 1.)
The best choice of k depends upon the data; larger values of k reduce the
effect of noise on the classification, but make boundaries between classes
less distinct. A good k can be selected by cross-validation.
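A minimal k-NN classifier sketch in Python (NumPy only; names and the toy data are illustrative):

import numpy as np
from collections import Counter

def knn_predict(x, X_train, y_train, k=3):
    # Classify x by majority vote among its k nearest training points (Euclidean distance).
    dists = np.linalg.norm(X_train - x, axis=1)   # distance from x to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    votes = Counter(y_train[nearest].tolist())    # class labels among the neighbours
    return votes.most_common(1)[0][0]             # majority class

# Usage: two 2-D classes; a query point near the second class should be labelled 1.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(np.array([5.2, 5.1]), X, y, k=3))   # -> 1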
K-Nearest-Neighbours for Classification (3)
• K acts as a smoother.
• For N → ∞, the error rate of the 1-nearest-neighbour classifier is never more than twice the optimal error (obtained from the true conditional class distributions).
Parametric Estimation
Basic building blocks: parametric densities p(x | θ). Need to determine θ given observations x1, x2, ..., xN.
Maximum Likelihood (ML):
\[
\theta^{*} = \arg\max_{\theta}\, p(x_1, x_2, \dots, x_N \mid \theta)
\]
Maximum Posterior Probability (MAP):
\[
\theta^{*} = \arg\max_{\theta}\, p(\theta \mid x_1, x_2, \dots, x_N)
\]
ML Parameter Estimation
Since the samples x1, x2, ..., xN are i.i.d., we have
\[
L(\theta) = p(x_1, x_2, \dots, x_N \mid \theta) = \prod_{n=1}^{N} p(x_n \mid \theta).
\]
The log likelihood can be obtained as
\[
\log L(\theta) = \sum_{n=1}^{N} \log p(x_n \mid \theta).
\]
θ can be obtained by taking the derivative of the log likelihood with respect to θ and setting it to zero:
\[
\frac{\partial \log L(\theta)}{\partial \theta} = \sum_{n=1}^{N} \frac{\partial \log p(x_n \mid \theta)}{\partial \theta} = 0.
\]
Parameter Estimation (1)
ML for Bernoulli
Given: a binary sample x1, x2, ..., xN with xn ∈ {0, 1} drawn from Bern(x | µ).
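Carrying this out for the Bernoulli case (a standard derivation, with m = Σn xn the number of ones in the sample):
\[
\log L(\mu) = \sum_{n=1}^{N}\big\{x_n\ln\mu + (1-x_n)\ln(1-\mu)\big\}, \qquad
\frac{\partial \log L(\mu)}{\partial \mu} = \frac{m}{\mu} - \frac{N-m}{1-\mu} = 0
\;\Longrightarrow\;
\mu_{\mathrm{ML}} = \frac{m}{N} = \frac{1}{N}\sum_{n=1}^{N} x_n .
\]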
Maximum Likelihood for the Gaussian
Given i.i.d. data X = (x1, ..., xN), the log likelihood function is given by a sum of Gaussian log-density terms; it depends on the data only through the sufficient statistics Σn xn and Σn xn xnᵀ.
Maximum Likelihood for the Gaussian
Set the derivative of the log likelihood function to zero, and solve to obtain µML. Similarly, solving for the covariance gives ΣML.
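Written out (the standard results of this derivation):
\[
\ln p(\mathbf{X}\mid\boldsymbol\mu,\boldsymbol\Sigma) = -\frac{ND}{2}\ln(2\pi) - \frac{N}{2}\ln|\boldsymbol\Sigma| - \frac{1}{2}\sum_{n=1}^{N}(\mathbf{x}_n-\boldsymbol\mu)^{\mathsf T}\boldsymbol\Sigma^{-1}(\mathbf{x}_n-\boldsymbol\mu),
\]
\[
\frac{\partial}{\partial\boldsymbol\mu}\ln p(\mathbf{X}\mid\boldsymbol\mu,\boldsymbol\Sigma) = \sum_{n=1}^{N}\boldsymbol\Sigma^{-1}(\mathbf{x}_n-\boldsymbol\mu) = 0
\;\Longrightarrow\;
\boldsymbol\mu_{\mathrm{ML}} = \frac{1}{N}\sum_{n=1}^{N}\mathbf{x}_n,
\qquad
\boldsymbol\Sigma_{\mathrm{ML}} = \frac{1}{N}\sum_{n=1}^{N}(\mathbf{x}_n-\boldsymbol\mu_{\mathrm{ML}})(\mathbf{x}_n-\boldsymbol\mu_{\mathrm{ML}})^{\mathsf T} .
\]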
MAP Parameter Estimation
Since the samples x1, x2, ..., xN are i.i.d., we have
\[
p(\theta \mid x_1, x_2, \dots, x_N) = \alpha\, P(\theta)\, p(x_1, x_2, \dots, x_N \mid \theta) = \alpha\, P(\theta) \prod_{n=1}^{N} p(x_n \mid \theta),
\]
where α is a normalization constant. Taking the log yields the log posterior
\[
\log p(\theta \mid x_1, x_2, \dots, x_N) = \log \alpha + \log P(\theta) + \sum_{n=1}^{N} \log p(x_n \mid \theta).
\]
θ can be solved for by maximizing the log posterior. P(θ) is typically chosen to be the conjugate of the likelihood.
Bayesian Inference for the Gaussian (1)
Assume the variance σ² is known. Given i.i.d. data x = (x1, ..., xN), the likelihood function for µ is given by the product of Gaussian factors shown below. This has a Gaussian shape as a function of µ (but it is not a distribution over µ).
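Explicitly, under the known-variance assumption,
\[
p(\mathbf{x}\mid\mu) = \prod_{n=1}^{N} p(x_n\mid\mu) = \frac{1}{(2\pi\sigma^{2})^{N/2}}\exp\Big\{-\frac{1}{2\sigma^{2}}\sum_{n=1}^{N}(x_n-\mu)^{2}\Big\} .
\]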
Bayesian Inference for the Gaussian (2)
Combined with a Gaussian prior over µ, this gives the posterior. Completing the square over µ, we see that the posterior is again Gaussian.
Bayesian Inference for the Gaussian (3)
… where the posterior mean and variance combine the prior with the maximum likelihood solution, as given below.
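For a prior p(µ) = N(µ | µ0, σ0²), the standard result of completing the square is
\[
p(\mu\mid\mathbf{x}) = \mathcal{N}(\mu\mid\mu_N,\sigma_N^{2}), \qquad
\mu_N = \frac{\sigma^{2}}{N\sigma_0^{2}+\sigma^{2}}\,\mu_0 + \frac{N\sigma_0^{2}}{N\sigma_0^{2}+\sigma^{2}}\,\mu_{\mathrm{ML}}, \qquad
\frac{1}{\sigma_N^{2}} = \frac{1}{\sigma_0^{2}} + \frac{N}{\sigma^{2}},
\]
where µML = (1/N) Σn xn. As N → ∞, µN → µML and σN² → 0.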
Bayesian Inference for the Gaussian (4)
Example: the posterior p(µ | x) for N = 0, 1, 2 and 10 data points.