Probability
Objectives
At the end of the training, the participants will be able to
a) Define Bayes’ theorem
b) Find the probability distribution of a random variable
c) Find the mean of a binomial distribution B(n, p)
Introduction
The conditional probability of an event B in relation to an event A is the probability that B
occurs given that A has already occurred. The formula is P(B | A) = P(A and B) / P(A). The
concept of conditional probability is one of the most fundamental and important concepts in
probability.
In statistical inference, conditional probability updates the probability of an event in
light of new information: without knowledge of the occurrence of B, the probability assigned
to A is simply P(A); once B is known to have occurred, it becomes P(A | B).
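The definition above can be checked directly by counting outcomes. The sketch below uses a two-coin-flip sample space (an illustrative example, not one from the text) and exact fractions:

```python
from fractions import Fraction

# Sample space for two fair coin flips; all four outcomes equally likely.
sample_space = {"HH", "HT", "TH", "TT"}

def prob(event):
    """P(event) under equally likely outcomes."""
    return Fraction(len(event), len(sample_space))

# A: first flip is heads.  B: both flips are heads.
A = {s for s in sample_space if s[0] == "H"}
B = {s for s in sample_space if s == "HH"}

# P(B | A) = P(A and B) / P(A)
p_b_given_a = prob(A & B) / prob(A)
print(p_b_given_a)  # 1/2
```

Here P(A and B) = 1/4 and P(A) = 1/2, so the conditional probability is 1/2, as expected.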
Bayes’ theorem
Bayes’ theorem is a theorem of probability originally stated by the Reverend Thomas Bayes. It
can be seen as a way of understanding how the probability that a theory is true is affected by
a new piece of evidence: it is a conditional probability that one proposition is true provided
that another proposition is true.
It has been used in a wide variety of contexts, ranging from marine biology to the development
of "Bayesian" spam blockers for email systems. In the philosophy of science, it has been used to
try to clarify the relationship between theory and evidence. Many insights in the philosophy of
science involving confirmation, falsification, the relation between science and pseudoscience,
and other topics can be made more precise, and sometimes extended or corrected, using it.
When to apply Bayes’ theorem:
Part of the challenge in applying Bayes' theorem involves recognizing the types of problems
that warrant its use. You should consider Bayes' theorem when the following conditions exist.

The sample space is partitioned into a set of mutually exclusive events { A1, A2, . . . , An }.

Within the sample space, there exists an event B, for which P(B) > 0.

The analytical goal is to compute a conditional probability of the form: P( A k | B ).

You know at least one of the two sets of probabilities described below.

P( Ak ∩ B ) for each Ak

P( Ak ) and P( B | Ak ) for each Ak
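When the second set of probabilities is known, P(A_k | B) is computed as P(A_k)·P(B | A_k) divided by the total probability P(B) = Σ P(A_i)·P(B | A_i). A minimal sketch (the two-urn numbers are illustrative, not from the text):

```python
def bayes(priors, likelihoods, k):
    """P(A_k | B) via Bayes' theorem over a partition {A_1, ..., A_n}.

    priors[i]      = P(A_i)
    likelihoods[i] = P(B | A_i)
    """
    # Law of total probability: P(B) = sum of P(A_i) * P(B | A_i)
    total = sum(p * l for p, l in zip(priors, likelihoods))
    return priors[k] * likelihoods[k] / total

# Two urns chosen with equal probability; urn 0 yields a red ball with
# probability 0.9, urn 1 with probability 0.3.  Given a red ball was drawn,
# the posterior probability that it came from urn 0 is about 0.75.
posterior = bayes([0.5, 0.5], [0.9, 0.3], 0)
print(posterior)
```

Note how the priors P(A_i) are updated into a posterior once the evidence B is observed, exactly as described in the conditional-probability section above.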
Probability Distribution
A probability distribution is a table or an equation that links each outcome of a statistical
experiment with its probability of occurrence.
Probability Distribution Prerequisites
To understand probability distributions, it is important to understand variables, random
variables, and some notation.

A variable is a symbol (A, B, x, y, etc.) that can take on any of a specified set of values.

When the value of a variable is the outcome of a statistical experiment, that variable is a
random variable.
Definitions
1. A random variable is a variable whose value is determined by the outcome of a random
experiment.
2. A discrete random variable is one whose set of assumed values is countable (arises
from counting).
3. A continuous random variable is one whose set of assumed values is uncountable (arises
from measurement).
Cumulative Probability Distributions
A cumulative probability refers to the probability that the value of a random variable falls
within a specified range.
Let us return to the coin flip experiment. If we flip a coin two times, we might ask: What is the
probability that the coin flips would result in one or fewer heads? The answer would be a
cumulative probability. It would be the probability that the coin flip experiment results in zero
heads plus the probability that the experiment results in one head.
P(X ≤ 1) = P(X = 0) + P(X = 1)
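This sum can be computed from the probability distribution of X, the number of heads in two fair coin flips:

```python
from fractions import Fraction

# Probability distribution of X = number of heads in two fair coin flips.
pmf = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

# Cumulative probability: P(X <= 1) = P(X = 0) + P(X = 1)
p_at_most_one = sum(p for x, p in pmf.items() if x <= 1)
print(p_at_most_one)  # 3/4
```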
Uniform Probability Distribution
The simplest probability distribution occurs when all of the values of a random variable occur
with equal probability. This probability distribution is called the uniform distribution.
Uniform Distribution. Suppose the random variable X can assume k different values, and suppose
that P(X = xk) is the same for every value xk. Then,
P(X = xk) = 1/k
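A fair six-sided die is the standard example: k = 6 equally likely values, so each has probability 1/6. A quick check:

```python
from fractions import Fraction

# Uniform distribution on a fair six-sided die: k = 6, P(X = x) = 1/6.
k = 6
pmf = {x: Fraction(1, k) for x in range(1, k + 1)}

# The k equal probabilities must sum to 1.
assert sum(pmf.values()) == 1
print(pmf[3])  # 1/6
```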
Binomial distribution
The binomial distribution with parameters n and p is the discrete probability distribution of
the number of successes in a sequence of n independent yes/no experiments, each of which yields
success with probability p. A success/failure experiment is also called a Bernoulli experiment
or Bernoulli trial.
Mean = np and Variance = npq, where q = 1 − p.
Binomial Experiment
A binomial experiment is a statistical experiment that has the following properties:

The experiment consists of n repeated trials.

Each trial can result in just two possible outcomes. We call one of these outcomes a
success and the other, a failure.

The probability of success, denoted by p, is the same on every trial.

The trials are independent; that is, the outcome on one trial does not affect the
outcome on other trials.
Binomial Distribution
A binomial random variable is the number of successes x in n repeated trials of a binomial
experiment. The probability distribution of a binomial random variable is called a binomial
distribution.
Binomial Formula
Suppose a binomial experiment consists of n trials and results in x successes. If the probability
of success on an individual trial is p, then the binomial probability is
b(x; n, p) = nCx · p^x · (1 − p)^(n − x)
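The formula translates directly into code, and summing x · b(x; n, p) over all x recovers the mean np stated earlier. A sketch using Python's built-in math.comb for nCx:

```python
from math import comb

def binom_pmf(x, n, p):
    """b(x; n, p) = nCx * p**x * (1 - p)**(n - x)"""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.5
# The mean of B(n, p) is the probability-weighted sum of x, which equals np.
mean = sum(x * binom_pmf(x, n, p) for x in range(n + 1))
print(mean)  # 5.0
```

For n = 10 and p = 0.5 the mean is np = 5, matching the direct summation.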
Cumulative Binomial Probability
A cumulative binomial probability refers to the probability that the binomial random variable
falls within a specified range (e.g., is greater than or equal to a stated lower limit and less than
or equal to a stated upper limit).
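Such a cumulative probability is just the binomial formula summed over the stated range. A sketch (the 6-flip example is illustrative):

```python
from math import comb

def binom_pmf(x, n, p):
    """b(x; n, p) = nCx * p**x * (1 - p)**(n - x)"""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def binom_range(lo, hi, n, p):
    """P(lo <= X <= hi) for X ~ B(n, p): sum the pmf over the range."""
    return sum(binom_pmf(x, n, p) for x in range(lo, hi + 1))

# Probability of between 2 and 4 heads (inclusive) in 6 fair coin flips:
# (15 + 20 + 15) / 64 = 50/64.
print(binom_range(2, 4, 6, 0.5))  # 0.78125
```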