Stochastic Processes

M. Veeraraghavan; Feb. 10, 2004

A stochastic process (SP) is a family of random variables $\{X(t) \mid t \in T\}$ defined on a given probability space, indexed by the time variable $t$, where $t$ varies over an index set $T$ [1]. Just as a random variable assigns a number to each outcome $s$ in a sample space $S$, a stochastic process assigns a sample function $x(t, s)$ to each outcome $s$.¹ A sample function $x(t, s)$ is the time function associated with outcome $s$ of an experiment. The ensemble of a stochastic process is the set of all possible time functions that can result from an experiment.

[Figure 1: Representation of a stochastic process and its relation to random variables. Sample functions $x(t, s_1)$, $x(t, s_2)$, $x(t, s_3)$ correspond to outcomes $s_1$, $s_2$, $s_3$; at a fixed instant $t = t_1$, $X(t_1)$ is a random variable.]

Example of an SP: the number of active calls $M(t)$ at a switch at time $t$. One trial of the experiment yields the sample function $m(t, s)$, in which the number of active calls is measured every second over one 15-minute interval. Say this measurement is taken every day starting at 10 AM. An ensemble average can be obtained from all the measurements at $t = 2$ min after 10 AM, or a time average can be obtained over a 15-minute interval from one day's measurements.

¹ Most of the statements in this writeup have been taken verbatim from [2]; exceptions are primarily from [1], as noted.

Types of stochastic processes: discrete value and continuous value; discrete time and continuous time. If the random variables $X(t)$ for the different values of $t$ are discrete, the SP is a discrete-value process. If the process is defined only at discrete time instants, it is a discrete-time SP.

Random sequence: for a discrete-time process, a random sequence $X_n$ is an ordered sequence of random variables $X_0, X_1, \ldots$ Essentially, a random sequence is a discrete-time stochastic process.

Relation between an SP and random variables: a discrete-value SP is defined by the joint PMF $P_{X(t_1), \ldots, X(t_k)}(x_1, \ldots, x_k)$, while a continuous-value SP is defined by the joint PDF.

Independent, identically distributed random sequence: let $X_n$ be an iid random sequence. All the $X_i$ have the same distribution, so $P_{X_i}(x) = P_X(x)$. For a discrete-value process, the sample vector $X_{n_1}, \ldots, X_{n_k}$ has the joint PMF (by the property of independent random variables)

    $P_{X_{n_1}, \ldots, X_{n_k}}(x_1, \ldots, x_k) = P_X(x_1) P_X(x_2) \cdots P_X(x_k) = \prod_{i=1}^{k} P_X(x_i)$    (1)

For a continuous-value SP, the same relation holds with PDFs in place of PMFs.

A counting process: a stochastic process $N(t)$ is a counting process if every sample function satisfies $n(t, s) = 0$ for $t < 0$ and $n(t, s)$ is integer valued and nondecreasing with time.

[Figure 2: Sample path of a counting process. $N(t)$ is a staircase that jumps by one at each arrival instant $S_1, S_2, S_3, S_4$; the interarrival times are $X_1, X_2, X_3, X_4$.]

A counting process is always a discrete-value SP (because $n(t, s)$ is integer valued) but can be either continuous-time or discrete-time.

A renewal process is a counting process in which the interarrival times $X_1, X_2, \ldots$ are an iid random sequence.
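As an aside (not part of the original notes), the renewal-process definition translates directly into a short simulation: arrival instants are cumulative sums of iid interarrival times, and $N(t)$ counts the arrivals up to time $t$. The sketch below is a minimal illustration assuming NumPy; the gamma-distributed interarrival times and all function names are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def renewal_sample_path(interarrival_sampler, t_max, n_max=10_000):
    """One sample function of a renewal counting process: arrival
    instants S_k are cumulative sums of iid interarrival times X_k."""
    x = interarrival_sampler(n_max)   # iid interarrival times X_1, X_2, ...
    s = np.cumsum(x)                  # arrival instants S_k = X_1 + ... + X_k
    return s[s <= t_max]              # keep arrivals in (0, t_max]

def count(arrivals, t):
    """N(t) = number of arrivals in (0, t]; integer valued, nondecreasing."""
    return int(np.searchsorted(arrivals, t, side="right"))

# Gamma(2, 1) interarrivals give a renewal process that is not Poisson.
arrivals = renewal_sample_path(lambda n: rng.gamma(2.0, 1.0, size=n), t_max=100.0)
print([count(arrivals, t) for t in (10.0, 50.0, 100.0)])   # nondecreasing counts
```

Fixing $t$ and averaging `count(..., t)` over many independently generated paths estimates an ensemble average; averaging along one long path gives a time average, mirroring the active-calls example above.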
Poisson process: a counting process

• in which the number of arrivals in $(t_0, t_1]$, namely $N(t_1) - N(t_0)$, is a Poisson random variable with expected value $\lambda (t_1 - t_0)$, and
• in which, for any pair of non-overlapping intervals $(t_0, t_1]$ and $(t_0', t_1']$, the numbers of arrivals in the two intervals, $N(t_1) - N(t_0)$ and $N(t_1') - N(t_0')$, are independent random variables.

Thus $M = N(t_1) - N(t_0)$, the number of arrivals in the interval $(t_0, t_1]$, has the PMF (see RV.pdf for the Poisson rv PMF)

    $P_M(m) = \begin{cases} \dfrac{[\lambda(t_1 - t_0)]^m \, e^{-\lambda(t_1 - t_0)}}{m!} & m = 0, 1, \ldots \\ 0 & \text{otherwise} \end{cases}$    (2)

A Poisson process is a renewal process, i.e., the $X_i$ form an iid random sequence.

Consider a set of sample functions of a Poisson process as in Fig. 1. If we take the random variable $N(t_1)$ at time instant $t_1$ (just like $X(t_1)$ in Fig. 1), then its distribution is Poisson with parameter $\lambda t_1$, because it represents the cumulative number of arrivals from time 0 up to time $t_1$. However, describing the distribution of the random variable at any one instant in time is not sufficient to define an SP. To completely define a stochastic process, we need the joint PMF, which is given by the following theorem.

Theorem 6.2 of [2]: for a Poisson process of rate $\lambda$, the joint PMF of $N(t_1), \ldots, N(t_k)$, for $t_1 < \cdots < t_k$, is

    $P_{N(t_1), \ldots, N(t_k)}(n_1, \ldots, n_k) = \begin{cases} \dfrac{\alpha_1^{n_1} e^{-\alpha_1}}{n_1!} \cdot \dfrac{\alpha_2^{n_2 - n_1} e^{-\alpha_2}}{(n_2 - n_1)!} \cdots \dfrac{\alpha_k^{n_k - n_{k-1}} e^{-\alpha_k}}{(n_k - n_{k-1})!} & 0 \le n_1 \le \cdots \le n_k \\ 0 & \text{otherwise} \end{cases}$    (3)

where $\alpha_i = \lambda (t_i - t_{i-1})$ (with $t_0 = 0$).

Why is the following incorrect?

    $P_{N(t_1), \ldots, N(t_k)}(n_1, \ldots, n_k) = P_{N(t_1)}(n_1) \, P_{N(t_2)}(n_2) \cdots P_{N(t_k)}(n_k)$    (4)

Because $N(t_1), N(t_2), \ldots, N(t_k)$ are not independent random variables: the interval $(0, t_2]$ overlaps with $(0, t_1]$, hence $N(t_2)$ is not independent of $N(t_1)$. Instead, (3) uses the independence rule $P(A \cap B) = P(A) P(B)$ with $A = \{N(t_1) = n_1\}$ and $B = \{N(t_2) - N(t_1) = n_2 - n_1\}$; both $N(t_1)$ and $N(t_2) - N(t_1)$ are Poisson random variables, so each factor has the form $\alpha^k e^{-\alpha} / k!$.

To fully specify a stochastic process, is it sufficient to define the distributions of the random variables at different instants in time? The answer is no: we need the joint PMF of these random variables. If the SP is a renewal process, the intervals are independent, and so, using the independence rule, the joint PMF is easy to define.
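Here is a quick numerical check of the difference between (3) and (4) for $k = 2$. This snippet is not from the notes; the parameter values are arbitrary and `pois_pmf` is our own helper implementing (2).

```python
import math

def pois_pmf(n, alpha):
    """PMF of a Poisson random variable with mean alpha, as in (2)."""
    return alpha**n * math.exp(-alpha) / math.factorial(n)

lam, t1, t2 = 2.0, 1.0, 3.0   # arbitrary rate and time instants, t1 < t2
n1, n2 = 2, 5                 # a sample point with 0 <= n1 <= n2

# Correct joint PMF (3): independent increments over (0, t1] and (t1, t2].
joint = pois_pmf(n1, lam * t1) * pois_pmf(n2 - n1, lam * (t2 - t1))

# Incorrect product form (4): pretends N(t1) and N(t2) are independent.
naive = pois_pmf(n1, lam * t1) * pois_pmf(n2, lam * t2)

print(joint, naive)   # the two values differ
```

Tabulating the pair $(N(t_1), N(t_2))$ over many simulated sample paths would reproduce `joint`, not `naive`, since the two counts share the arrivals in $(0, t_1]$.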
Properties of Poisson processes:

I. Relation between a Poisson process and the exponential distribution: for a Poisson process of rate $\lambda$, the interarrival times $X_1, X_2, \ldots$ are iid with the exponential PDF with parameter $\lambda$ [2, page 214].

Proof [4, page 35]: the first time interval, i.e., the time to the first arrival $X_1$, can be characterized as follows:

    $P(X_1 > t) = P(N(t) = 0) = e^{-\lambda t}$    (5)

This means $X_1$ is exponentially distributed with parameter $\lambda$. Next, find the distribution of $X_2$ conditioned on $X_1$:

    $P(X_2 > t \mid X_1 = s) = P\{\text{0 arrivals in } (s, s+t] \mid X_1 = s\} = P\{\text{0 arrivals in } (s, s+t]\} = e^{-\lambda t}$    (6)

Since the number of arrivals in the interval $(s, s+t]$ is independent of $X_1$, we drop the conditioning in the second step of the above equation. It follows that $X_2$ is also an exponential random variable with mean $1/\lambda$, and furthermore that $X_2$ is independent of $X_1$. (Note that the counts $N(t_2) - N(t_1)$ and $N(t_3) - N(t_2)$ over non-overlapping intervals are independent but not iid unless $t_2 - t_1 = t_3 - t_2$: they do not have the same parameter, as would be required for them to be "identically distributed.")

Extending the argument, let $t_{n-1} = x_1 + x_2 + \cdots + x_{n-1}$, where $X_1 = x_1, \ldots, X_{n-1} = x_{n-1}$. Then

    $P(X_n > x \mid X_1 = x_1, \ldots, X_{n-1} = x_{n-1}) = P(N(t_{n-1} + x) - N(t_{n-1}) = 0 \mid X_1 = x_1, \ldots, X_{n-1} = x_{n-1})$    (7)

because if $N(t_{n-1} + x) - N(t_{n-1})$ were not 0, it would mean that $X_n$ ended before $x$ time units had passed. Since the number of arrivals in $[t_{n-1}, t_{n-1} + x]$ is independent of the past history described by $X_1, \ldots, X_{n-1}$, the event $\{N(t_{n-1} + x) - N(t_{n-1}) = 0\}$ is independent of the lengths of $X_1, \ldots, X_{n-1}$, and we can equate (7) to $P(N(t_{n-1} + x) - N(t_{n-1}) = 0)$. Setting $M = N(t_{n-1} + x) - N(t_{n-1})$, from (2),

    $P_M(0) = \dfrac{(\lambda x)^0 \, e^{-\lambda x}}{0!} = e^{-\lambda x}$

Conclusion: each $X_n$ is an exponentially distributed random variable, independent of the preceding intervals. In other words, $X_1, X_2, \ldots$ are iid random variables.

II. Approximating exponential functions: for every $t \ge 0$ and $\delta \ge 0$,

    $P\{N(t + \delta) - N(t) = 0\} = 1 - \lambda\delta + o(\delta)$    (8)
    $P\{N(t + \delta) - N(t) = 1\} = \lambda\delta + o(\delta)$    (9)
    $P\{N(t + \delta) - N(t) \ge 2\} = o(\delta)$    (10)

    where $\lim_{\delta \to 0} \dfrac{o(\delta)}{\delta} = 0$    (11)

These can be verified using the Taylor series expansion $e^{-\lambda\delta} = 1 - \lambda\delta + \dfrac{(\lambda\delta)^2}{2} - \cdots$

III. Merging of Poisson processes: if two or more independent Poisson processes $A_1, \ldots, A_k$ are merged into a single process $A = A_1 + \cdots + A_k$, the latter is a Poisson process with parameter equal to the sum of the rates of its components.

IV. Splitting of a Poisson process: if a Poisson process is split into two processes by independently assigning each arrival to the first or the second process with probability $p$ and $1 - p$, respectively, the two arrival processes are Poisson. (Note: it is essential that the assignment of each arrival be independent of the previous assignments; if, for example, all even arrivals are sent to the first queue and all odd arrivals to the second, the two processes will not be Poisson.) A simulation sketch of merging and splitting follows the list below.

Examples of stochastic processes:
1. Counting process
2. Renewal process
3. Poisson process
4. Markov process
5. Brownian motion
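The following rough empirical check of properties III and IV is not from the notes; the rates and the helper `poisson_arrivals` (which builds a Poisson process from iid exponential interarrival times, per property I) are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_arrivals(lam, t_max):
    """Arrival instants of a rate-lam Poisson process on (0, t_max],
    generated from iid exponential interarrival times (property I)."""
    x = rng.exponential(1.0 / lam, size=int(10 * lam * t_max) + 100)
    s = np.cumsum(x)
    return s[s <= t_max]

t_max = 1000.0

# Property III: merging two independent Poisson processes of rates 2 and 3.
merged = np.sort(np.concatenate([poisson_arrivals(2.0, t_max),
                                 poisson_arrivals(3.0, t_max)]))
print(len(merged) / t_max)   # empirical rate, close to 2 + 3 = 5

# Property IV: splitting a rate-5 process with independent coin flips (p = 0.4).
arrivals = poisson_arrivals(5.0, t_max)
to_first = rng.random(len(arrivals)) < 0.4
print(to_first.sum() / t_max, (~to_first).sum() / t_max)   # close to 2 and 3
```

Matching the empirical rates is only a sanity check, of course; the full claim is that the merged and split processes again have Poisson-distributed, independent increments.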
Note that autocovariance and autocorrelation are time-varying functions (unlike the covariance and correlation of two random variables, which are numbers). From [2, pp. 217, 226]: the autocovariance and autocorrelation functions indicate the rate of change of the sample functions of a stochastic process.

The autocovariance function of a stochastic process $X(t)$ is

    $C_X(t, \tau) = \mathrm{Cov}[X(t), X(t + \tau)]$    (12)

The autocorrelation function of a stochastic process $X(t)$ is

    $R_X(t, \tau) = E[X(t) X(t + \tau)]$    (13)

Relation between these functions:

    $C_X(t, \tau) = R_X(t, \tau) - \mu_X(t)\,\mu_X(t + \tau)$    (14)

Stationary process [2, pp. 220, 226]: an SP is stationary if its randomness does not vary with time [strict stationarity]. A stochastic process $X(t)$ is stationary if and only if, for all sets of time instants $t_1, \ldots, t_m$ and any time difference $\tau$ (i.e., the joint PDF does not change with time),

    $f_{X(t_1), \ldots, X(t_m)}(x_1, \ldots, x_m) = f_{X(t_1 + \tau), \ldots, X(t_m + \tau)}(x_1, \ldots, x_m)$    (15)

The same definition applies to a stationary random sequence, with a discrete time difference $k$.

Properties of a stationary process (the mean stays constant, and the autocovariance and autocorrelation depend only on the time difference):

    $\mu_X(t) = \mu_X$
    $R_X(t, \tau) = R_X(0, \tau) = R_X(\tau)$    (16)
    $C_X(t, \tau) = R_X(\tau) - \mu_X^2 = C_X(\tau)$

Wide-sense stationary process [2, pp. 223, 226]: an SP is wide-sense stationary if its expected value is constant with time and its autocorrelation depends only on the time difference between the two random variables. $X(t)$ is a wide-sense stationary process if and only if, for all $t$,

    $E[X(t)] = \mu_X$
    $R_X(t, \tau) = R_X(0, \tau) = R_X(\tau)$    (17)

A similar definition holds for a wide-sense stationary random sequence.

Markov process [1, pg. 337]: an SP whose dynamic behavior is such that the probability distribution of its future development depends only on its present state and not on how the process arrived in that state. If the state space is discrete, it is a Markov chain. If the $X$'s are discrete random variables, we get the following definition of a DTMC.

Definition of a DTMC: a DTMC $\{X_n \mid n = 0, 1, 2, \ldots\}$ is a discrete-time, discrete-value random sequence such that, given $X_0, X_1, \ldots, X_{n-1}$, the next random variable $X_n$ depends only on $X_{n-1}$, through the transition probability

    $P[X_n = j \mid X_{n-1} = i, X_{n-2} = i_{n-2}, \ldots, X_0 = i_0] = P[X_n = j \mid X_{n-1} = i] = P_{ij}$    (18)

Definition of a CTMC [3]: a CTMC $\{X(t) \mid t \ge 0\}$ is a continuous-time, discrete-value random process such that, for $t_0 < t_1 < \cdots < t_n < t$, with $t$ and $t_r \ge 0$ for $r = 1, 2, \ldots, n$,

    $P[X(t) = j \mid X(t_n) = i, X(t_{n-1}) = i_{n-1}, \ldots, X(t_0) = i_0] = P[X(t) = j \mid X(t_n) = i]$    (19)

Relationship between a Markov chain and the Poisson process: is the Poisson process a special case of a Markov chain? It is indeed a CTMC. Here is why. By the definition of a Poisson process, the numbers of arrivals in two non-overlapping intervals are independent; therefore the number of arrivals in $(t_n, t)$ is independent of the arrivals in $(t_0, t_n)$ if $t_0 < t_1 < \cdots < t_n < t$. The random variable $N(t)$ does depend on $N(t_n)$, as follows:

    $P[N(t) = j \mid N(t_n) = i] = P[N(t) - N(t_n) = j - i]$    (20)

But it does not depend on $N(t_0)$, $N(t_1)$, etc. Therefore,

    $P[N(t) = j \mid N(t_n) = i, N(t_{n-1}) = i_{n-1}, \ldots, N(t_0) = i_0] = P[N(t) = j \mid N(t_n) = i]$    (21)

which, by the definition of a CTMC in (19), implies that the Poisson process is a Markov chain. Sure enough, [3, page 389] says that a pure-birth Markov process with constant birth rates is the same as a Poisson process. A pure-birth process is a counting process.

Definition of a CTMC [2, page 381] (actually true only for time-homogeneous CTMCs): a CTMC $\{X(t) \mid t \ge 0\}$ is a continuous-time, discrete-value random process such that, for an infinitesimal time step of size $\Delta$,

    $P[X(t + \Delta) = j \mid X(t) = i] = q_{ij}\,\Delta$
    $P[X(t + \Delta) = i \mid X(t) = i] = 1 - \sum_{j \ne i} q_{ij}\,\Delta$    (22)

This definition is somewhat simplistic, since it treats the transition rates $q_{ij}$ as independent of time, which is true only for time-homogeneous CTMCs. Compare the definition of a CTMC in (22) with the definition of a Poisson process: $X(t)$ in (22) is the same as $N(t)$ of the Poisson process definition, with

    $q_{ij} = 0$ for $j \ne i + 1$, and $q_{i(i+1)} = \lambda$    (23)

Time-homogeneous Markov chains [3, pages 361, 363]: a Markov chain $\{X(t) \mid t \ge 0\}$ is said to be time-homogeneous (or to have stationary transition probabilities) if $p_{ij}(\nu, t)$ depends only on the time difference $t - \nu$, where $p_{ij}(\nu, t) = P[X(t) = j \mid X(\nu) = i]$. In other words,

    $p_{ij}(t) = P[X(t + u) = j \mid X(u) = i]$ for any $u \ge 0$.    (24)

In time-homogeneous Markov chains, the transition rates $q_{ij}(t)$ and $q_j(t)$, defined below, are independent of time. For example, in the Poisson process definition we see a similar dependence on just the interval, with $\lambda$ independent of time.

    $q_{ij}(t) = \left. \dfrac{\partial}{\partial t} p_{ij}(\nu, t) \right|_{\nu = t} = \lim_{h \to 0} \dfrac{p_{ij}(t, t + h)}{h}$    (25)

    $q_j(t) = \left. -\dfrac{\partial}{\partial t} p_{jj}(\nu, t) \right|_{\nu = t} = \lim_{h \to 0} \dfrac{1 - p_{jj}(t, t + h)}{h}$    (26)
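The one-step rule (18) also says exactly how to simulate a DTMC: draw $X_n$ from row $X_{n-1}$ of the transition matrix and discard the rest of the history. The 3-state chain below and its matrix are made-up illustrations, not from the notes.

```python
import numpy as np

rng = np.random.default_rng(2)

# One-step transition probabilities P_ij for a 3-state DTMC; each row sums to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def simulate_dtmc(P, x0, n_steps):
    """Generate X_0, ..., X_n. Each X_n is drawn using only X_{n-1},
    as in (18); the earlier history X_0, ..., X_{n-2} is never consulted."""
    states = [x0]
    for _ in range(n_steps):
        states.append(int(rng.choice(len(P), p=P[states[-1]])))
    return states

print(simulate_dtmc(P, x0=0, n_steps=20))
```

Using the same matrix `P` at every step is precisely the time-homogeneity just discussed; a time-inhomogeneous chain would instead draw from a step-dependent matrix.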
Relation between a counting process and a Markov chain: a Markov chain could be a counting process (such as a pure-birth Markov process), but in general it need not be one (note that a Markov chain is a discrete-state process, as is a counting process). The state represented by a Markov chain can both increase and decrease along a sample function. Counting processes are necessarily non-stationary (they are not even wide-sense stationary) because $N(t)$ keeps increasing with time (unless it is constant, since the definition of a counting process says nondecreasing, not increasing).

A Markov process is hence quite general: the $q_{ij}$ can be functions of time, and the process can be nondecreasing (counting) or one that both increases and decreases. Hence the M in M/M/1 should stand for Memoryless, not Markov; the arrival process is Poisson, which is a special case of a Markov process.

References

[1] K. S. Trivedi, "Probability and Statistics with Reliability, Queuing and Computer Science Applications," Second Edition, Wiley, 2002, ISBN 0-471-33341-7.
[2] R. Yates and D. Goodman, "Probability and Stochastic Processes," Wiley, ISBN 0-471-17837-3.
[3] K. S. Trivedi, "Probability and Statistics with Reliability, Queuing and Computer Science Applications," First Edition, Prentice Hall, ISBN 0-13-711564-4.
[4] S. Ross, "Stochastic Processes," Wiley, 1983.