Summary of Chapters Three, Four and Five (Final Review)

Chapter 3. Continuous Distributions

A random variable $X$ is called continuous if there is a nonnegative function $f$, called the probability density function of $X$, such that for any set $B$,
$$P(X \in B) = \int_B f(x)\,dx.$$
The distribution function of $X$ is defined by
$$F(x) = P(X \le x) = \int_{-\infty}^{x} f(y)\,dy.$$
We have the following formulae:
(1) $\int_{-\infty}^{\infty} f(x)\,dx = 1$.
(2) $P(a < X \le b) = \int_a^b f(x)\,dx = F(b) - F(a)$.
(3) $F'(x) = f(x)$.

Expectation and Variance of Continuous Random Variables

The expected value of a continuous random variable $X$ is defined by
$$\mu = E[X] = \int_{-\infty}^{\infty} x f(x)\,dx.$$
The variance of $X$ is defined by
$$\sigma^2 = \mathrm{Var}(X) = E[(X - E[X])^2] = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\,dx,$$
and the standard deviation of $X$ is
$$\sigma = \sqrt{\mathrm{Var}(X)}.$$
The moment generating function $M(t)$ of $X$ is defined by
$$M(t) = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx, \quad -h < t < h.$$
We have the following formulae:
(1) $E[aX + b] = aE[X] + b$, where $a, b$ are constants.
(2) $\mathrm{Var}(X) = E[X^2] - (E[X])^2$, and $\mathrm{Var}(aX + b) = a^2 \mathrm{Var}(X)$.
(3) For any function $u$,
$$E[u(X)] = \int_{-\infty}^{\infty} u(x) f(x)\,dx.$$
(4) $\mu = M'(0)$ and $\sigma^2 = M''(0) - \mu^2$.

Special Continuous Random Variables

(i) A random variable $X$ is said to be uniform over the interval $(a, b)$ if its probability density function is given by
$$f(x) = \begin{cases} \frac{1}{b-a}, & a < x < b, \\ 0, & \text{otherwise.} \end{cases}$$
Its expected value and variance are
$$E[X] = \frac{a+b}{2}, \qquad \mathrm{Var}(X) = \frac{(b-a)^2}{12}.$$

(ii) A random variable $X$ is said to be exponential with parameter $\theta$ if its density function is given by
$$f(x) = \begin{cases} \frac{1}{\theta} e^{-x/\theta}, & x \ge 0, \\ 0, & \text{otherwise.} \end{cases}$$
The distribution function $F(x)$ of $X$ is given by
$$F(x) = \begin{cases} 1 - e^{-x/\theta}, & x \ge 0, \\ 0, & \text{otherwise.} \end{cases}$$
The expected value and variance of $X$ are
$$E[X] = \theta, \qquad \mathrm{Var}(X) = \theta^2.$$
It satisfies the memoryless property: for positive $s$ and $t$,
$$P\{X > s + t \mid X > t\} = P\{X > s\}.$$

(iii) Gamma distribution: its p.d.f. is defined by
$$f(x) = \frac{1}{\Gamma(\alpha)\theta^{\alpha}} x^{\alpha - 1} e^{-x/\theta}, \quad 0 \le x < \infty,$$
and its mean is $\mu = \alpha\theta$ and its variance is $\sigma^2 = \alpha\theta^2$.

(iv) Chi-square distribution $\chi^2(r)$ ($r$ is the degrees of freedom): its p.d.f. is given by
$$f(x) = \frac{1}{\Gamma(r/2)\,2^{r/2}} x^{r/2 - 1} e^{-x/2}, \quad 0 \le x < \infty,$$
and its mean is $\mu = r$ and its variance is $\sigma^2 = 2r$.

(v) Normal distributions: A random variable $X$ is said to be normal with parameters $\mu$ and $\sigma^2$ (denoted by $N(\mu, \sigma^2)$) if its probability density function is given by
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x - \mu)^2 / 2\sigma^2}, \quad -\infty < x < \infty.$$
We have $E[X] = \mu$ and $\mathrm{Var}(X) = \sigma^2$. If $X$ has a normal distribution $N(\mu, \sigma^2)$, then $Z = (X - \mu)/\sigma$ is a standard normal random variable $N(0, 1)$ with mean 0 and variance 1. The distribution function of $X$ can be expressed as
$$F_X(a) = P\{X \le a\} = P\left\{\frac{X - \mu}{\sigma} \le \frac{a - \mu}{\sigma}\right\} = P\left\{Z \le \frac{a - \mu}{\sigma}\right\} = \Phi\left(\frac{a - \mu}{\sigma}\right),$$
where $\Phi$ is the distribution function of $Z$. We also have
$$P(a \le X \le b) = F(b) - F(a) = \Phi\left(\frac{b - \mu}{\sigma}\right) - \Phi\left(\frac{a - \mu}{\sigma}\right).$$
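Worked example (a sketch tying formula (4) to the exponential distribution above): for $X$ exponential with parameter $\theta$, the moment generating function is
$$M(t) = \int_0^{\infty} e^{tx}\,\frac{1}{\theta} e^{-x/\theta}\,dx = \frac{1}{\theta} \int_0^{\infty} e^{-(1/\theta - t)x}\,dx = \frac{1}{1 - \theta t}, \quad t < \frac{1}{\theta}.$$
Then $M'(t) = \theta(1 - \theta t)^{-2}$ and $M''(t) = 2\theta^2(1 - \theta t)^{-3}$, so
$$\mu = M'(0) = \theta, \qquad \sigma^2 = M''(0) - \mu^2 = 2\theta^2 - \theta^2 = \theta^2,$$
which agrees with the mean and variance stated for the exponential distribution.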
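Worked example for the normal distribution (illustrative numbers, not from the notes): suppose $X \sim N(100, 16)$, so $\mu = 100$ and $\sigma = 4$. Then
$$P(96 \le X \le 104) = \Phi\left(\frac{104 - 100}{4}\right) - \Phi\left(\frac{96 - 100}{4}\right) = \Phi(1) - \Phi(-1) = 2\Phi(1) - 1 \approx 0.6827,$$
using the symmetry relation $\Phi(-z) = 1 - \Phi(z)$ and the table value $\Phi(1) \approx 0.8413$.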
Chapter 4. Bivariate Distributions

If $X$ and $Y$ are discrete random variables, the joint probability mass function of $X$ and $Y$ is defined by
$$f(x, y) = P(X = x, Y = y).$$
The marginal probability mass functions of $X$ and $Y$ are
$$f_X(x) = P(X = x) = \sum_y f(x, y), \qquad f_Y(y) = P(Y = y) = \sum_x f(x, y).$$
If $X$ and $Y$ have a joint probability mass function $f(x, y)$, then
$$E[u(X, Y)] = \sum_y \sum_x u(x, y) f(x, y).$$
The random variables $X$ and $Y$ are said to be jointly continuous if there is a nonnegative function $f(x, y)$, called the joint probability density function, such that for any set $C$ in the $xy$-plane,
$$P\{(X, Y) \in C\} = \iint_C f(x, y)\,dx\,dy.$$
When $C = \{(x, y) \mid x \in A, y \in B\}$, then
$$P\{X \in A, Y \in B\} = \int_B \int_A f(x, y)\,dx\,dy.$$
The marginal density functions of $X$ and $Y$ are given by
$$f_X(x) = \int_{-\infty}^{\infty} f(x, y)\,dy, \qquad f_Y(y) = \int_{-\infty}^{\infty} f(x, y)\,dx.$$
If $X$ and $Y$ have a joint density function $f(x, y)$, then
$$E[u(X, Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} u(x, y) f(x, y)\,dx\,dy.$$

Conditional Distributions

If $X$ and $Y$ are discrete random variables, the conditional probability mass function of $X$ given that $Y = y$ is defined by
$$g(x \mid y) = P(X = x \mid Y = y) = \frac{f(x, y)}{f_Y(y)},$$
where $f(x, y)$ is the joint probability mass function of $X$ and $Y$, and $f_Y(y)$ is the marginal probability mass function of $Y$. The conditional probability mass function of $Y$ given that $X = x$ is defined by
$$h(y \mid x) = P(Y = y \mid X = x) = \frac{f(x, y)}{f_X(x)},$$
where $f_X(x)$ is the marginal probability mass function of $X$. Some related formulas:
(1) $E[u(Y) \mid X = x] = \sum_y u(y)\,h(y \mid x)$.
(2) $\mu_{Y|x} = E(Y \mid x) = \sum_y y\,h(y \mid x)$.
(3) $\sigma_{Y|x}^2 = E\{[Y - E(Y \mid x)]^2 \mid x\} = \sum_y [y - E(Y \mid x)]^2\,h(y \mid x)$.

If $X$ and $Y$ are jointly continuous with joint density function $f$, then the conditional probability density function of $X$ given that $Y = y$ is given by
$$g(x \mid y) = \frac{f(x, y)}{f_Y(y)},$$
and the conditional probability density function of $Y$ given that $X = x$ is given by
$$h(y \mid x) = \frac{f(x, y)}{f_X(x)}.$$
Some related formulas:
(1) $P(a < X < b \mid Y = y) = \int_a^b g(x \mid y)\,dx$.
(2) $P(a < Y < b \mid X = x) = \int_a^b h(y \mid x)\,dy$.
(3) $E(Y \mid x) = \int_{-\infty}^{\infty} y\,h(y \mid x)\,dy$.
(4) $\mathrm{Var}(Y \mid x) = \int_{-\infty}^{\infty} [y - E(Y \mid x)]^2\,h(y \mid x)\,dy$.

Independent Random Variables

The random variables $X$ and $Y$ are independent if for all sets $A$ and $B$,
$$P\{X \in A, Y \in B\} = P\{X \in A\}\,P\{Y \in B\}.$$
The independence of $X$ and $Y$ is equivalent to
$$f(x, y) = f_X(x) f_Y(y)$$
in both the discrete and continuous cases. If the joint density (or mass) function factors into a part depending only on $x$ and a part depending only on $y$, then $X$ and $Y$ are independent.

Chapter 5. Distributions of Functions of Random Variables

Transformation of Variables: Let the random variable $X$ have probability density function $f(x)$ with support $S_x$. For $Y = u(X)$, either an increasing or a decreasing function of $X$, with inverse function $X = v(Y)$ whose support is $S_y$, the probability density function of $Y$ is
$$g(y) = f(v(y))\,|v'(y)|.$$

Sums of Independent Random Variables

If $X_1, X_2, \ldots, X_n$ are independent random variables with respective p.m.f.s (or p.d.f.s) $f_1(x_1), f_2(x_2), \ldots, f_n(x_n)$, then the joint p.m.f. (or p.d.f.) is
$$f(x_1, x_2, \ldots, x_n) = f_1(x_1) f_2(x_2) \cdots f_n(x_n),$$
and we have the following property:
$$E[u_1(X_1) u_2(X_2) \cdots u_n(X_n)] = E[u_1(X_1)]\,E[u_2(X_2)] \cdots E[u_n(X_n)].$$
If $X_i$, $i = 1, 2, \ldots, n$, are independent random variables with respective means $\mu_i$ and variances $\sigma_i^2$, then for $Y = \sum_{i=1}^n a_i X_i$ we have
$$\mu_Y = E[Y] = \sum_{i=1}^n a_i \mu_i$$
and
$$\sigma_Y^2 = \mathrm{Var}(Y) = \sum_{i=1}^n a_i^2 \sigma_i^2.$$
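Worked example for the Chapter 4 conditional formulas (an illustrative joint density, not from the notes): let $f(x, y) = x + y$ for $0 < x < 1$, $0 < y < 1$. Then
$$f_X(x) = \int_0^1 (x + y)\,dy = x + \tfrac{1}{2}, \qquad h(y \mid x) = \frac{x + y}{x + \tfrac{1}{2}},$$
and
$$E(Y \mid x) = \int_0^1 y\,\frac{x + y}{x + \tfrac{1}{2}}\,dy = \frac{\tfrac{x}{2} + \tfrac{1}{3}}{x + \tfrac{1}{2}} = \frac{3x + 2}{6x + 3}.$$
Since $x + y$ does not factor into a function of $x$ times a function of $y$, the factorization criterion shows $X$ and $Y$ are not independent.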
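Worked example for the transformation formula (illustrative, and a tie-back to Chapter 3): let $X$ be uniform on $(0, 1)$ and $Y = -\theta \ln X$, a decreasing function of $X$. The inverse is $X = v(Y) = e^{-y/\theta}$ with $v'(y) = -\tfrac{1}{\theta} e^{-y/\theta}$, so for $0 < y < \infty$
$$g(y) = f(v(y))\,|v'(y)| = 1 \cdot \frac{1}{\theta} e^{-y/\theta},$$
which is exactly the exponential density with parameter $\theta$.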
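Worked example for sums (a standard special case, not stated in the notes): if $X_1, \ldots, X_n$ are independent with common mean $\mu$ and common variance $\sigma^2$, taking $a_i = \tfrac{1}{n}$ gives the sample mean $\bar{X} = \tfrac{1}{n} \sum_{i=1}^n X_i$, and the last two formulas yield
$$E[\bar{X}] = \sum_{i=1}^n \tfrac{1}{n}\,\mu = \mu, \qquad \mathrm{Var}(\bar{X}) = \sum_{i=1}^n \tfrac{1}{n^2}\,\sigma^2 = \frac{\sigma^2}{n}.$$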