Lecture 2.7. Queuing Theory
Basic teletraffic concepts
1 user making phone calls

TRAFFIC is a "stochastic process"

[Figure: on/off activity of a single user over time — BUSY (1) / IDLE (0)]

• How to characterize this process?
  – statistical distribution of the "BUSY" period
  – statistical distribution of the "IDLE" period
  – statistical characterization of the process "memory"
    • E.g. at a given time, does the probability that a user starts a call differ depending on what happened in the past?
Traffic characterization suitable for traffic engineering

$$A_i = \lim_{t \to \infty} \frac{\text{amount of busy time in } t}{t}$$

= (average number of calls per minute) × (average call duration in minutes)
= probability that, at a random time t, the user is in the BUSY state
= mean value of the process

All equivalent (if the process is stationary)
Traffic Intensity: example

• A user makes on average 1 call every hour
• Each call lasts on average 120 s
• Traffic intensity
  – 120 sec / 3600 sec = 2 min / 60 min = 1/30
• Probability that the user is busy
  – User busy 2 min out of 60 = 1/30
• Dimensionless
Traffic generated by more than one user

[Figure: on/off activity of users U1–U4 over time, and the resulting total (TOT) aggregate traffic]
Traffic intensity (dimensionless, measured in Erlangs)

Erlang (unit)
The erlang (symbol E) is a dimensionless unit used as a statistical measure of offered load. It is named after the Danish telephone engineer A. K. Erlang, the originator of traffic engineering and queueing theory.

Traffic of one erlang refers to a single resource being in continuous use, or two channels being at fifty percent use each, and so on. For example, an office with two telephone operators who are both busy all the time represents two erlangs (2 E) of traffic, while a radio channel that is occupied for thirty minutes during an hour is said to carry 0.5 E of traffic.
Alternatively, an erlang may be regarded as a "use multiplier" per unit time, so 100% use is 1 E, 200% use is 2 E, and so on. For example, if total cell phone use in a given area per hour is 180 minutes, this represents 180/60 = 3 E.

In general, if the mean arrival rate of new calls is λ per unit time and the mean call holding time is h, then the traffic in erlangs is E = λh.
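As a quick sanity check of E = λh and of the dimensionless nature of the result, here is a minimal Python sketch (not part of the original slides; the numbers are taken from the examples above and below):

```python
# Traffic intensity A = lambda * h (dimensionless, in erlangs).

calls_per_hour = 1          # mean arrival rate, calls/hour
call_duration_s = 120       # mean holding time, seconds

# Convert the holding time to hours so the units cancel.
A = calls_per_hour * (call_duration_s / 3600.0)
print(f"Single user: A = {A:.4f} erl")        # 1/30 erl, as in the earlier example

# Cell-phone example: 180 minutes of use per hour in a given area.
A_area = 180 / 60.0
print(f"Area traffic: A = {A_area:.1f} erl")  # 3.0 erl
```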
Example 1

• 5 users
• Each user makes an average of 3 calls per hour
• Each call lasts, on average, 4 minutes

$$A_i = 3 \;[\text{calls/hour}] \times \frac{4}{60} \;[\text{hours}] = \frac{1}{5} \;[\text{erl}]$$

$$A = 5 \times \frac{1}{5} \;[\text{erl}] = 1 \;[\text{erl}]$$

Meaning: on average, there is 1 active call; but the actual number of active calls varies from 0 (no active user) to 5 (all users active), with the following probabilities:

number of active users | probability
0                      | 0.327680
1                      | 0.409600
2                      | 0.204800
3                      | 0.051200
4                      | 0.006400
5                      | 0.000320
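The probabilities in the table follow a binomial distribution with M = 5 users, each busy with probability A_i = 1/5. A small Python sketch (illustrative, not part of the original slides) reproduces them:

```python
from math import comb

M = 5          # number of users
Ai = 1 / 5     # per-user traffic intensity = probability that a user is busy

# P[k active calls] = C(M, k) * Ai^k * (1 - Ai)^(M - k)
for k in range(M + 1):
    p = comb(M, k) * Ai**k * (1 - Ai)**(M - k)
    print(f"{k} active calls: P = {p:.6f}")
```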
Example 2

• 30 users
• Each user makes an average of 1 call per hour
• Each call lasts, on average, 4 minutes

$$A = 30 \times \left(1 \times \frac{4}{60}\right) = 2 \;\text{Erlangs}$$

SOME NOTES:
– On average, 2 active calls (intensity A);
– Frequently, we find up to 4 or 5 calls;
– Prob(n. calls > 8) ≈ 0.01%
– More than 11 calls only once over 1M

TRAFFIC ENGINEERING: how many channels to reserve for these users?
Number of active calls for Example 2 (binomial, M = 30, Ai = 1/15):

n. active users | binom C(30,n) | probability | cumulative
0               | 1             | 1.3E-01     | 0.126213
1               | 30            | 2.7E-01     | 0.396669
2               | 435           | 2.8E-01     | 0.676784
3               | 4060          | 1.9E-01     | 0.863527
4               | 27405         | 9.0E-02     | 0.953564
5               | 142506        | 3.3E-02     | 0.987006
6               | 593775        | 1.0E-02     | 0.996960
7               | 2035800       | 2.4E-03     | 0.999397
8               | 5852925       | 5.0E-04     | 0.999898
9               | 14307150      | 8.7E-05     | 0.999985
10              | 30045015      | 1.3E-05     | 0.999998
11              | 54627300      | 1.7E-06     | 1.000000
12              | 86493225      | 1.9E-07     | 1.000000
13              | 119759850     | 1.9E-08     | 1.000000
14              | 145422675     | 1.7E-09     | 1.000000
15              | 155117520     | 1.3E-10     | 1.000000
16              | 145422675     | 8.4E-12     | 1.000000
17              | 119759850     | 5.0E-13     | 1.000000
18              | 86493225      | 2.6E-14     | 1.000000
19              | 54627300      | 1.2E-15     | 1.000000
20              | 30045015      | 4.5E-17     | 1.000000
21              | 14307150      | 1.5E-18     | 1.000000
22              | 5852925       | 4.5E-20     | 1.000000
23              | 2035800       | 1.1E-21     | 1.000000
24              | 593775        | 2.3E-23     | 1.000000
25              | 142506        | 4.0E-25     | 1.000000
26              | 27405         | 5.5E-27     | 1.000000
27              | 4060          | 5.8E-29     | 1.000000
28              | 435           | 4.4E-31     | 1.000000
29              | 30            | 2.2E-33     | 1.000000
30              | 1             | 5.2E-36     | 1.000000
A note on binomial coefficient computation

$$\binom{60}{12} = \frac{60!}{12!\,48!} \approx 1.39936 \times 10^{12}$$

but $60! \approx 8.32099 \times 10^{81}$ (overflow problems!!)

$$\binom{60}{12} = \exp\!\left(\log\binom{60}{12}\right) = \exp\!\left(\log 60! - \log 12! - \log 48!\right)
= \exp\!\left(\sum_{i=1}^{60}\log i - \sum_{i=1}^{12}\log i - \sum_{i=1}^{48}\log i\right)$$

(no overflow!! before exp...)

$$\binom{60}{12} A_i^{12}\,(1-A_i)^{48} = \exp\!\left(\sum_{i=1}^{60}\log i - \sum_{i=1}^{12}\log i - \sum_{i=1}^{48}\log i + 12\log A_i + 48\log(1-A_i)\right)$$

(no overflow!! never!)
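A minimal Python sketch of this log-sum trick (illustrative; the parameter values are the ones used on the slide):

```python
from math import log, exp

def log_binom(n, k):
    """log of the binomial coefficient C(n, k), computed as sums of logs."""
    return (sum(log(i) for i in range(1, n + 1))
            - sum(log(i) for i in range(1, k + 1))
            - sum(log(i) for i in range(1, n - k + 1)))

def binom_prob(n, k, p):
    """C(n, k) * p^k * (1-p)^(n-k), evaluated entirely in log space."""
    return exp(log_binom(n, k) + k * log(p) + (n - k) * log(1 - p))

print(exp(log_binom(60, 12)))    # ~1.39936e12, without ever forming 60! explicitly
print(binom_prob(60, 12, 0.2))   # binomial term for an assumed Ai = 0.2, no overflow
```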
Infinite Users

Assume M users, generating an overall traffic intensity A (i.e. each user generates traffic at intensity A_i = A/M).

We have just found that

$$P[k \text{ active calls}, M \text{ users}] = \binom{M}{k} A_i^k (1-A_i)^{M-k}
= \frac{M!}{(M-k)!\,k!} \left(\frac{A}{M}\right)^{\!k} \frac{\left(1-\frac{A}{M}\right)^{M}}{\left(1-\frac{A}{M}\right)^{k}}$$

Let M→∞, while maintaining the same overall traffic intensity A:

$$P[k \text{ active calls}, \infty \text{ users}] = \lim_{M\to\infty} \frac{M!}{(M-k)!\,k!}\,\frac{A^k}{M^k}\left(1-\frac{A}{M}\right)^{M}\left(1-\frac{A}{M}\right)^{-k}$$

$$= \lim_{M\to\infty} \underbrace{\frac{M(M-1)\cdots(M-k+1)}{M^k}}_{\to\,1} \cdot \frac{A^k}{k!} \cdot \underbrace{\left(1-\frac{A}{M}\right)^{M}}_{\to\,e^{-A}} \cdot \underbrace{\left(1-\frac{A}{M}\right)^{-k}}_{\to\,1}
= e^{-A}\,\frac{A^k}{k!}$$
Poisson Distribution

$$P_k(A) = e^{-A}\,\frac{A^k}{k!}$$

[Figure: Poisson vs. binomial (M=30) distributions of the number of active calls, for A = 2 erl and A = 10 erl]

• Very good matching with the Binomial (when M is large with respect to A)
• Much simpler to use than the Binomial (no annoying queueing theory complications)
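A short Python sketch (illustrative) comparing the binomial and Poisson distributions for M = 30 and A = 2 erl, the case plotted in the figure above:

```python
from math import comb, exp, factorial

M, A = 30, 2.0          # number of users and overall traffic intensity (erl)
Ai = A / M              # per-user intensity

for k in range(9):
    binom = comb(M, k) * Ai**k * (1 - Ai)**(M - k)
    poisson = exp(-A) * A**k / factorial(k)
    print(f"k={k:2d}  binomial={binom:.4f}  poisson={poisson:.4f}")
```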
Limited number of channels

THE most important problem in circuit switching:

• The number of channels C is less than the number of users M (possibly infinite)
• Some offered calls will be "blocked"
• What is the blocking probability?
  – We have an expression for P[k offered calls]
  – We must find an expression for P[k accepted calls]
  – As: P[block] = P[C accepted calls]

[Figure: users U1–U4 offering calls to a pool of C channels; some offered calls are blocked (X). Number of carried calls versus t compared with number of offered calls versus t (TOT).]
Channel utilization probability

• C channels available
• Assumptions:
  – Poisson distribution (infinite users)
  – Blocked calls cleared
• It can be proven (from queueing theory) that:

$$P[k \text{ calls in the system}], \; k \in (0, C): \qquad P[k] = \frac{P[k \text{ offered calls}]}{\sum_{i=0}^{C} P[i \text{ offered calls}]}$$

(very simple result!)

• Hence:

$$P[\text{system full}] = P[C \text{ accepted calls}] = \frac{P[C \text{ offered calls}]}{\sum_{i=0}^{C} P[i \text{ offered calls}]}$$

[Figure: distribution of the number of offered vs. accepted calls for offered traffic of 2 erl and C = 3 channels]
Blocking probability: Erlang-B

• Fundamental formula for telephone network planning
  – Ao = offered traffic in Erlangs

$$P_{block} = \frac{A_o^{C}/C!}{\sum_{j=0}^{C} A_o^{j}/j!} = E_{1,C}(A_o)$$

• Efficient recursive computation available:

$$E_{1,C}(A_o) = \frac{A_o\,E_{1,C-1}(A_o)}{C + A_o\,E_{1,C-1}(A_o)}$$

[Figure: blocking probability (log scale, 0.01%–100%) versus offered load (0–5 erlangs) for C = 1, 2, 3, 4, 5, 6, 7 channels]
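A minimal Python sketch of the recursive Erlang-B computation above (illustrative; it starts from E_{1,0}(Ao) = 1, i.e. with zero channels every call is blocked):

```python
def erlang_b(A, C):
    """Erlang-B blocking probability E_{1,C}(A) via the recursion on the slide."""
    E = 1.0                      # E_{1,0}(A) = 1
    for c in range(1, C + 1):
        E = (A * E) / (c + A * E)
    return E

# Example: 2 erl offered to 3 channels (the case plotted two slides back)
print(f"E_1,3(2) = {erlang_b(2.0, 3):.4f}")
```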
NOTE: finite users

• Erlang-B was obtained for the infinite users case
• It is easy (from queueing theory) to obtain an explicit blocking formula for the finite users case:
• ENGSET FORMULA:

$$P_{block} = \frac{\binom{M-1}{C} A_i^{\,C}}{\sum_{k=0}^{C} \binom{M-1}{k} A_i^{\,k}}
\qquad A_i = \frac{A_o}{M}$$

• Erlang-B can be re-obtained as a limit case:
  – M→∞
  – Ai→0
  – M·Ai→Ao
• Erlang-B is a very good approximation as long as:
  – A/M is small (e.g. < 0.2)
• In any case, Erlang-B is a conservative formula:
  – it yields a higher blocking probability
  – a good feature for planning
Capacity planning

• Target: support users with a given Grade Of Service (GOS)
  – GOS expressed in terms of an upper bound for the blocking probability
  – GOS example: subscribers should find a line available in 99% of the cases, i.e. they should be blocked in no more than 1% of the attempts
• Given:
  – the offered load Ao
  – the target GOS Btarget
  the required number of channels C is obtained from numerical inversion of

$$B_{target} = E_{1,C}(A_o)$$
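Since C is an integer and E_{1,C}(Ao) decreases as C grows, the "numerical inversion" can be done by simply increasing C until the blocking target is met. A Python sketch (illustrative, reusing the recursive Erlang-B formula above; the loads are those of the example that follows, so the output should agree closely with that dimensioning table):

```python
def erlang_b(A, C):
    """Erlang-B blocking probability E_{1,C}(A), via the standard recursion."""
    E = 1.0
    for c in range(1, C + 1):
        E = (A * E) / (c + A * E)
    return E

def channels_needed(A, B_target):
    """Smallest C such that E_{1,C}(A) <= B_target."""
    C = 0
    while erlang_b(A, C) > B_target:
        C += 1
    return C

for A in (40, 60, 80, 100):                 # offered loads (erl)
    C = channels_needed(A, 0.01)            # GOS: at most 1% blocking
    eff = A * (1 - erlang_b(A, C)) / C      # carried load per channel
    print(f"Ao = {A:3d} erl  ->  C >= {C}, efficiency = {eff:.1%}")
```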
Channel usage efficiency

The offered load Ao (erl) enters the C channels; the carried load is Ac = Ao(1 − B) and the blocked traffic is Ao·B.

$$\text{efficiency: } \eta = \frac{A_c}{C} = \frac{A_o\left(1 - E_{1,C}(A_o)\right)}{C} \approx \frac{A_o}{C} \quad \text{if blocking is small}$$

Fundamental property: for the same GOS, efficiency increases as C grows!! (trunking gain)
Example

[Figure: blocking probability (log scale, 0.1%–100%) versus capacity C (0–120 channels) for offered loads A = 40, 60, 80, 100 erl]

GOS = 1% maximum blocking. Resulting system dimensioning and efficiency:

offered load | required capacity | efficiency
40 erl       | C >= 53           | η = 74.9%
60 erl       | C >= 75           | η = 79.3%
80 erl       | C >= 96           | η = 82.6%
100 erl      | C >= 117          | η = 84.6%
Erlang B calculation - tables
Introduction to Queuing Theory
Queuing theory definitions

• (Bose) "the basic phenomenon of queueing arises whenever a shared facility needs to be accessed for service by a large number of jobs or customers."
• (Wolff) "The primary tool for studying these problems [of congestion] is known as queueing theory."
• (Kleinrock) "We study the phenomena of standing, waiting, and serving, and we call this study Queueing Theory. Any system in which arrivals place demands upon a finite capacity resource may be termed a queueing system."
• (Mathworld) "The study of the waiting times, lengths, and other properties of queues."
Applications of Queuing Theory

• Telecommunications
• Traffic control
• Determining the sequence of computer operations
• Predicting computer performance
• Health services (e.g. control of hospital bed assignments)
• Airport traffic, airline ticket sales
• Layout of manufacturing systems
Example: application of queuing theory

• In many retail stores and banks:
  – a multiple line/multiple checkout system → a queuing system where customers wait for the next available cashier
  – We can prove using queuing theory that throughput improves when a single shared queue is used instead of separate lines
Queuing theory for studying networks

• View the network as a collection of queues
  – FIFO data structures
• Queuing theory provides probabilistic analysis of these queues
• Examples:
  – Average length
  – Average waiting time
  – Probability the queue is at a certain length
  – Probability a packet will be lost
Little's Law

[Figure: arrivals entering a "black box" system and departures leaving it]

• Little's Law:
  Mean number of tasks in system = mean arrival rate × mean response time
  – Observed before, but Little was the first to prove it
• Applies to any system in equilibrium, as long as nothing in the black box is creating or destroying tasks
Proving Little's Law

[Figure: number of packets in the system versus time. Packets 1, 2, 3 arrive and depart over the interval 1–8; the shaded area between the arrival and departure step functions is J = 9. The same area J is obtained by stacking, for each packet, its time in the system — same in all cases!]
Definitions

• J: "Area" from the previous slide
• N: Number of jobs (packets)
• T: Total time
• λ: Average arrival rate
  – λ = N/T
• W: Average time a job is in the system
  – W = J/N
• L: Average number of jobs in the system
  – L = J/T
Proof: Method 1: Definition

[Figure: the two plots from the previous slide, read either as "number in system (L) over time (T)" or as "time in system (W) per packet (N)"]

$$J = T\,L = N\,W \;\Rightarrow\; L = \frac{N}{T}\,W \;\Rightarrow\; L = \lambda\,W$$
Proof: Method 2: Substitution

$$L = \lambda\,W \;\Rightarrow\; \frac{J}{T} = \frac{N}{T}\cdot\frac{J}{N} \;\Rightarrow\; \frac{J}{T} = \frac{J}{T} \quad \text{(tautology)}$$
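As a concrete check of L = λW, here is a tiny Python sketch (the per-packet arrival and departure times are hypothetical, chosen so that J = 9, T = 8, N = 3 as in the figure above):

```python
# Hypothetical (arrival, departure) times for the three packets: total area J = 9.
packets = [(1, 4), (2, 5), (4, 7)]

N = len(packets)
T = 8.0                                            # total observation time
J = sum(dep - arr for arr, dep in packets)         # total "area" = sum of times in system

lam = N / T        # average arrival rate
W = J / N          # average time in system
L = J / T          # average number in system

print(f"L = {L:.3f}, lambda * W = {lam * W:.3f}")  # both equal J/T, as Little's Law states
```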
Model Queuing System

[Figure: a queuing system made of a queue followed by a server; the "server system" is the server alone, the "queuing system" is queue + server]

• Use queuing models to:
  – Describe the behavior of queuing systems
  – Evaluate system performance
Characteristics of queuing systems

• Arrival Process
  – The distribution that determines how tasks arrive in the system
• Service Process
  – The distribution that determines the task processing time
• Number of Servers
  – Total number of servers available to process the tasks
Kendall Notation 1/2/3(/4/5/6)

• Six parameters in shorthand
• First three typically used, unless specified otherwise:
  1. Arrival Distribution
  2. Service Distribution
  3. Number of servers
  4. Total Capacity (infinite if not specified)
  5. Population Size (infinite if not specified)
  6. Service Discipline (FCFS/FIFO if not specified)
Distributions
• M: stands for "Markovian", implying
exponential distribution for service times or
inter-arrival times.
• D: Deterministic (e.g. fixed constant)
• Ek: Erlang with parameter k
• Hk: Hyperexponential with param. k
• G: General (anything)
Kendall Notation Examples
• M/M/1:
– Poisson arrivals and exponential service, 1 server,
infinite capacity and population, FCFS (FIFO)
– the simplest ‘realistic’ queue
• M/M/m
– Same, but M servers
• G/G/3/20/1500/SPF
– General arrival and service distributions, 3
servers, 17 queue slots (20-3), 1500 total jobs,
Shortest Packet First
Poisson Process

• For a Poisson process with average arrival rate λ, the probability of seeing n arrivals in a time interval of length t is:

$$\Pr(n) = \frac{e^{-\lambda t}(\lambda t)^n}{n!}, \qquad E(n) = \lambda t, \qquad \sigma^2(n) = \lambda t$$

For a small interval Δt:

$$\Pr(0) = e^{-\lambda \Delta t} = 1 - \lambda \Delta t + \frac{(\lambda \Delta t)^2}{2!} - \dots = 1 - \lambda \Delta t + o(\Delta t) \;\Rightarrow\; \Pr(0) \approx 1 - \lambda \Delta t$$

$$\Pr(1) = \lambda \Delta t\, e^{-\lambda \Delta t} = \lambda \Delta t\left[1 - \lambda \Delta t + \frac{(\lambda \Delta t)^2}{2!} - \dots\right] = \lambda \Delta t + o(\Delta t) \;\Rightarrow\; \Pr(1) \approx \lambda \Delta t$$

$$\Pr(\geq 2) \approx 0$$
Poisson process & exponential distribution

• The inter-arrival time t (time between arrivals) in a Poisson process follows an exponential distribution with parameter λ:

$$\Pr(t) = \lambda e^{-\lambda t}, \qquad E(t) = \frac{1}{\lambda}$$
Analysis of M/M/1 queue

• Given:
  – λ: arrival rate of jobs (packets on the input link)
  – µ: service rate of the server (output link)
• Solve:
  – L: average number in the queuing system
  – Lq: average number in the queue
  – W: average waiting time in the whole system
  – Wq: average waiting time in the queue
M/M/1 queue model

[Figure: M/M/1 queue with arrival rate λ and service rate µ (mean service time 1/µ). L and W refer to the whole system (queue + server); Lq and Wq refer to the queue only.]
Solving queuing systems

• 4 unknowns: L, Lq, W, Wq
• Relationships:
  – L = λW
  – Lq = λWq (steady-state argument)
  – W = Wq + (1/µ)
• If we know any one of them, we can find the others
• Finding L is hard or easy depending on the type of system. In general:

$$L = \sum_{n=0}^{\infty} n P_n$$
Analysis of M/M/1 queue

• Goal: a closed-form expression for the probability of n jobs in the system (Pn), given only λ and µ
Equilibrium conditions

[Figure: birth-death state diagram — states n−1, n, n+1; transitions to the right at rate λ (arrivals), to the left at rate µ (departures)]

Define Pn(t) to be the probability of having n tasks in the system at time t:

$$P_0(t+\Delta t) = P_0(t)\left[(1-\mu\Delta t)(1-\lambda\Delta t) + \mu\Delta t\,\lambda\Delta t\right] + P_1(t)\left[\mu\Delta t\,(1-\lambda\Delta t)\right]$$

$$P_n(t+\Delta t) = P_n(t)\left[(1-\mu\Delta t)(1-\lambda\Delta t) + \mu\Delta t\,\lambda\Delta t\right] + P_{n+1}(t)\left[\mu\Delta t\,(1-\lambda\Delta t)\right] + P_{n-1}(t)\left[\lambda\Delta t\,(1-\mu\Delta t)\right]$$

Dividing by Δt and neglecting o(Δt) terms:

$$\frac{P_0(t+\Delta t) - P_0(t)}{\Delta t} = -\lambda P_0(t) + \mu P_1(t)$$

$$\frac{P_n(t+\Delta t) - P_n(t)}{\Delta t} = \lambda P_{n-1}(t) - (\lambda+\mu)P_n(t) + \mu P_{n+1}(t)$$

The system stabilizes when λ < µ:

$$\lim_{t\to\infty} P_n(t) = P_n, \qquad \lim_{t\to\infty} \frac{P_n(t+\Delta t) - P_n(t)}{\Delta t} = 0$$
Equilibrium conditions

[Figure: the same birth-death state diagram — states n−1, n, n+1 with arrival rate λ and service rate µ]

In equilibrium:

$$\lambda P_0 = \mu P_1$$

$$(\lambda + \mu) P_n = \lambda P_{n-1} + \mu P_{n+1}$$
Solving for P0 and Pn

• Step 1:

$$P_1 = \frac{\lambda}{\mu} P_0, \qquad P_2 = \left(\frac{\lambda}{\mu}\right)^2 P_0, \qquad P_n = \left(\frac{\lambda}{\mu}\right)^n P_0$$

• Step 2:

$$\sum_{n=0}^{\infty} P_n = 1, \;\text{ then }\; \sum_{n=0}^{\infty} \left(\frac{\lambda}{\mu}\right)^n P_0 = 1 \;\Rightarrow\; P_0 = \frac{1}{\sum_{n=0}^{\infty} \left(\frac{\lambda}{\mu}\right)^n}$$
Solving for P0 and Pn

• Step 3: define

$$\rho = \frac{\lambda}{\mu}, \;\text{ then }\; \sum_{n=0}^{\infty} \left(\frac{\lambda}{\mu}\right)^n = \sum_{n=0}^{\infty} \rho^n = \frac{1}{1-\rho} \qquad (\rho < 1)$$

• Step 4:

$$P_0 = \frac{1}{\sum_{n=0}^{\infty} \rho^n} = 1 - \rho \qquad \text{and} \qquad P_n = \rho^n (1-\rho)$$
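A rough simulation sketch (illustrative only, with hypothetical rates λ = 1 and µ = 2, so ρ = 0.5) to check the closed form Pn = (1 − ρ)ρⁿ against time-averaged state probabilities of a simulated M/M/1 queue:

```python
import random

def simulate_mm1(lam, mu, num_arrivals=200_000, seed=1):
    """Crude event-driven M/M/1 simulation: returns time-averaged P[n jobs in system]."""
    random.seed(seed)
    t = 0.0
    next_arrival = random.expovariate(lam)
    next_departure = float("inf")
    n = 0                              # jobs currently in system
    time_in_state = {}                 # state -> total time spent in it
    arrivals = 0

    while arrivals < num_arrivals:
        t_next = min(next_arrival, next_departure)
        time_in_state[n] = time_in_state.get(n, 0.0) + (t_next - t)
        t = t_next
        if next_arrival <= next_departure:     # arrival event
            arrivals += 1
            n += 1
            if n == 1:                         # server was idle: start a service
                next_departure = t + random.expovariate(mu)
            next_arrival = t + random.expovariate(lam)
        else:                                  # departure event
            n -= 1
            next_departure = t + random.expovariate(mu) if n > 0 else float("inf")

    total = sum(time_in_state.values())
    return {k: v / total for k, v in time_in_state.items()}

lam, mu = 1.0, 2.0
rho = lam / mu
probs = simulate_mm1(lam, mu)
for n in range(5):
    print(f"n={n}  simulated={probs.get(n, 0):.3f}  theory={(1 - rho) * rho**n:.3f}")
```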
Solving for L

$$L = \sum_{n=0}^{\infty} n P_n = \sum_{n=0}^{\infty} n \rho^n (1-\rho) = (1-\rho)\,\rho \sum_{n=0}^{\infty} n \rho^{n-1}
= (1-\rho)\,\rho\,\frac{d}{d\rho}\!\left(\sum_{n=0}^{\infty} \rho^n\right) = (1-\rho)\,\rho\,\frac{d}{d\rho}\!\left(\frac{1}{1-\rho}\right)$$

$$= (1-\rho)\,\rho\,\frac{1}{(1-\rho)^2} = \frac{\rho}{1-\rho} = \frac{\lambda}{\mu - \lambda}$$
Solving W, Wq and Lq

$$W = \frac{L}{\lambda} = \frac{1}{\lambda}\cdot\frac{\lambda}{\mu-\lambda} = \frac{1}{\mu-\lambda}$$

$$W_q = W - \frac{1}{\mu} = \frac{1}{\mu-\lambda} - \frac{1}{\mu} = \frac{\lambda}{\mu(\mu-\lambda)}$$

$$L_q = \lambda W_q = \frac{\lambda^2}{\mu(\mu-\lambda)}$$
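Putting the closed-form M/M/1 results together, a small Python sketch (illustrative; the sample rates are the ones used in the gateway example later in the lecture):

```python
def mm1_metrics(lam, mu):
    """Closed-form M/M/1 results: utilization, L, Lq, W, Wq."""
    if lam >= mu:
        raise ValueError("M/M/1 is stable only for lambda < mu")
    rho = lam / mu
    L = rho / (1 - rho)                  # = lam / (mu - lam)
    W = 1 / (mu - lam)                   # L = lam * W (Little's Law)
    Wq = lam / (mu * (mu - lam))         # W = Wq + 1/mu
    Lq = lam * Wq                        # = lam^2 / (mu * (mu - lam))
    return rho, L, Lq, W, Wq

rho, L, Lq, W, Wq = mm1_metrics(lam=125.0, mu=500.0)
print(f"rho={rho}, L={L:.3f}, Lq={Lq:.3f}, W={W*1000:.2f} ms, Wq={Wq*1000:.2f} ms")
```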
Online M/M/1 animation

• http://www.dcs.ed.ac.uk/home/jeh/Simjava/queueing/mm1_q/mm1_q.html
Response Time vs. Arrivals

[Figure: "Waiting vs. Utilization" — waiting time W (sec) versus utilization, growing slowly at first and diverging as the utilization approaches 1]

$$W = \frac{1}{\mu - \lambda}$$
Stable Region

[Figure: "Waiting vs. Utilization" — the low-utilization part of the same curve, with the approximately linear region highlighted]
Example

• On a network gateway, measurements show that packets arrive at a mean rate of 125 packets per second (pps) and that the gateway takes about 2 ms to forward them. Assuming an M/M/1 model, what is the probability of buffer overflow if the gateway has only 13 buffers? How many buffers are needed to keep packet loss below one packet per million?
Example

• Measurements of a network gateway:
  – mean arrival rate (λ): 125 packets/s
  – mean service time (1/µ): 2 ms
• Assuming exponential arrivals:
  – What is the gateway's utilization?
  – What is the probability of n packets in the gateway?
  – What is the mean number of packets in the gateway?
  – How many buffers are needed so that P(overflow) < 10⁻⁶?
Example

• Arrival rate λ =
• Service rate µ =
• Gateway utilization ρ = λ/µ =
• Prob. of n packets in gateway =
• Mean number of packets in gateway =
Example

• Arrival rate λ = 125 pps
• Service rate µ = 1/0.002 = 500 pps
• Gateway utilization ρ = λ/µ = 0.25
• Prob. of n packets in gateway = $(1-\rho)\rho^n = 0.75\,(0.25)^n$
• Mean number of packets in gateway = $\dfrac{\rho}{1-\rho} = \dfrac{0.25}{0.75} \approx 0.33$
Example

• Probability of buffer overflow:
  = P(more than 13 packets in the gateway)
  = ρ¹³ = 0.25¹³ = 1.49×10⁻⁸
  ≈ 15 packets per billion packets

• To limit the probability of loss to less than 10⁻⁶:
  ρⁿ < 10⁻⁶
  or
  n > log(10⁻⁶) / log(0.25) = 9.96
  i.e. at least 10 buffers are needed
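A short Python sketch of the same computation (illustrative, using the overflow approximation ρⁿ from the slide):

```python
import math

lam, mu = 125.0, 500.0          # packets/s: arrival rate and service rate
rho = lam / mu                  # utilization = 0.25

# Overflow probability with 13 buffers, approximated as rho^13 as on the slide
print(f"P(overflow, 13 buffers) = {rho**13:.3g}")

# Smallest n with rho^n < 1e-6
n = math.ceil(math.log(1e-6) / math.log(rho))
print(f"Buffers needed for loss < 1e-6: {n}")   # 10
```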
1. Some Queuing Terminology

• To describe a queuing system, an input process and an output process must be specified.
• Examples of input and output processes are:

Situation    | Input Process                          | Output Process
Bank         | Customers arrive at the bank           | Tellers serve the customers
Pizza parlor | Requests for pizza delivery are received | Pizza parlor sends out trucks to deliver pizzas
The Input or Arrival Process
• The input process is usually called the arrival
process.
• Arrivals are called customers.
• We assume that no more than one arrival can occur
at a given instant.
• If more than one arrival can occur at a given instant,
we say that bulk arrivals are allowed.
• Models in which arrivals are drawn from a small
population are called finite source models.
• If a customer arrives but fails to enter the system,
we say that the customer has balked.
The Output or Service Process
• To describe the output process of a queuing system,
we usually specify a probability distribution – the
service time distribution – which governs a
customer’s service time.
• We study two arrangements of servers: servers in
parallel and servers in series.
• Servers are in parallel if all servers provide the same
type of service and a customer needs only pass
through one server to complete service.
• Servers are in series if a customer must pass through
several servers before completing service.
Queue Discipline
• The queue discipline describes the method used to
determine the order in which customers are served.
• The most common queue discipline is the FCFS
discipline (first come, first served), in which
customers are served in the order of their arrival.
• Under the LCFS discipline (last come, first served),
the most recent arrivals are the first to enter service.
• If the next customer to enter service is randomly
chosen from those customers waiting for service it
is referred to as the SIRO discipline (service in
random order).
• Finally we consider priority queuing disciplines.
• A priority discipline classifies each arrival into one
of several categories.
• Each category is then given a priority level, and
within each priority level, customers enter service
on a FCFS basis.
• Another factor that has an important effect on the
behavior of a queuing system is the method that
customers use to determine which line to join.
2. Modeling Arrival and Service Processes

• We define ti to be the time at which the ith customer arrives, and Ti = ti+1 − ti to be the ith interarrival time.
• In modeling the arrival process we assume that the T's are independent, continuous random variables described by the random variable A.
• The assumption that each interarrival time is governed by the same random variable implies that the distribution of arrivals is independent of the time of day or the day of the week.
• This is the assumption of stationary interarrival times.
• Stationary interarrival times is often an unrealistic assumption, but we may often approximate reality by breaking the time of day into segments.
• A negative interarrival time is impossible. This allows us to write

$$P(A \leq c) = \int_0^c a(t)\,dt \qquad \text{and} \qquad P(A > c) = \int_c^{\infty} a(t)\,dt$$

• We define 1/λ to be the mean or average interarrival time:

$$\frac{1}{\lambda} = \int_0^{\infty} t\,a(t)\,dt$$
• We define λ to be the arrival rate, which will have units of arrivals per hour.
• An important question is how to choose A to reflect reality and still be computationally tractable.
• The most common choice for A is the exponential distribution.
• An exponential distribution with parameter λ has density a(t) = λe^(−λt).
• We can show that the average or mean interarrival time is given by E(A) = 1/λ.
• Using the fact that var A = E(A²) − E(A)², we can show that

$$\operatorname{var} A = \frac{1}{\lambda^2}$$

• Lemma 1: If A has an exponential distribution, then for all nonnegative values of t and h,

$$P(A > t + h \mid A \geq t) = P(A > h)$$
• A density function that satisfies the equation is said
to have the no-memory property.
• The no-memory property of the exponential
distribution is important because it implies that if
we want to know the probability distribution of the
time until the next arrival, then it does not matter
how long it has been since the last arrival.
Relations between Poisson
Distribution and Exponential
Distribution
• If interarrival times are exponential, the probability
distribution of the number of arrivals occurring in
any time interval of length t is given by the
following important theorem.
• Theorem 1: Interarrival times are exponential with
parameter λ if and only if the number of arrivals to
occur in an interval of length t follows the Poisson
distribution with parameter λt.
• A discrete random variable N has a Poisson distribution with parameter λ if, for n = 0, 1, 2, …,

$$P(N = n) = \frac{e^{-\lambda}\lambda^n}{n!} \qquad (n = 0, 1, 2, \ldots)$$

• What assumptions are required for interarrival times to be exponential? Consider the following two assumptions:
  – Arrivals defined on nonoverlapping time intervals are independent.
  – For small Δt, the probability of one arrival occurring between times t and t + Δt is λΔt + o(Δt), where o(Δt) refers to any quantity satisfying

$$\lim_{\Delta t \to 0} \frac{o(\Delta t)}{\Delta t} = 0$$
• Theorem 2: If assumptions 1 and 2 hold, then Nt follows a Poisson distribution with parameter λt, and interarrival times are exponential with parameter λ; that is, a(t) = λe^(−λt).
• Theorem 2 states that if the arrival rate is stationary, if bulk arrivals cannot occur, and if past arrivals do not affect future arrivals, then interarrival times will follow an exponential distribution with parameter λ, and the number of arrivals in any interval of length t is Poisson with parameter λt.
The Erlang Distribution

• If interarrival times do not appear to be exponential, they are often modeled by an Erlang distribution.
• An Erlang distribution is a continuous random variable (call it T) whose density function f(t) is specified by two parameters: a rate parameter R and a shape parameter k (k must be a positive integer).
• Given values of R and k, the Erlang density has the following probability density function:

$$f(t) = \frac{R(Rt)^{k-1} e^{-Rt}}{(k-1)!} \qquad (t > 0)$$
• Using integration by parts, we can show that if T is an Erlang distribution with rate parameter R and shape parameter k, then

$$E(T) = \frac{k}{R} \qquad \text{and} \qquad \operatorname{var} T = \frac{k}{R^2}$$

• The Erlang random variable can be viewed as the sum of k independent and identically distributed exponential random variables, each with rate R.
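A quick numerical illustration (a sketch, with arbitrarily chosen R and k) of the Erlang distribution as a sum of k exponentials, checking the mean and variance formulas above:

```python
import random

R, k = 2.0, 3                 # rate and shape parameters (arbitrary values)
n_samples = 100_000
random.seed(0)

# Each Erlang(k, R) sample is the sum of k independent Exponential(R) samples.
samples = [sum(random.expovariate(R) for _ in range(k)) for _ in range(n_samples)]

mean = sum(samples) / n_samples
var = sum((x - mean) ** 2 for x in samples) / n_samples
print(f"sample mean = {mean:.3f}  (theory k/R   = {k / R:.3f})")
print(f"sample var  = {var:.3f}  (theory k/R^2 = {k / R**2:.3f})")
```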
Using EXCEL to Compute Poisson and Exponential Probabilities

• EXCEL contains functions that facilitate the computation of probabilities concerning the Poisson and exponential random variables.
• The syntax of the Poisson EXCEL function is as follows:
  – =POISSON(x,Mean,True) gives the probability that a Poisson random variable with mean = Mean is less than or equal to x.
  – =POISSON(x,Mean,False) gives the probability that a Poisson random variable with mean = Mean is equal to x.
• The syntax of the EXCEL EXPONDIST function is as follows:
  – =EXPONDIST(x,Lambda,TRUE) gives the probability that an exponential random variable with parameter Lambda assumes a value less than or equal to x.
  – =EXPONDIST(x,Lambda,FALSE) gives the value of the density function of an exponential random variable with parameter Lambda at x.
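For readers working outside EXCEL, equivalent computations can be done with scipy.stats (a sketch; the numeric arguments are placeholders):

```python
from scipy.stats import poisson, expon

mean, lam, x = 3.0, 0.5, 2.0          # placeholder parameters

print(poisson.cdf(4, mean))           # like =POISSON(4, mean, TRUE):  P(N <= 4)
print(poisson.pmf(4, mean))           # like =POISSON(4, mean, FALSE): P(N = 4)
print(expon.cdf(x, scale=1/lam))      # like =EXPONDIST(x, lam, TRUE): P(X <= x)
print(expon.pdf(x, scale=1/lam))      # like =EXPONDIST(x, lam, FALSE): density at x
```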
Modeling the Service Process
• We assume that the service times of different
customers are independent random variables and
that each customer’s service time is governed by a
random variable S having a density function s(t).
• We let 1/µ be the mean service time for a customer.
• The variable 1/µ will have units of hours per
customer, so µ has units of customers per hour. For
this reason, we call µ the service rate.
• Unfortunately, actual service times may not be
consistent with the no-memory property.
• For this reason, we often assume that s(t) is an
Erlang distribution with shape parameters k and rate
parameter kµ.
• In certain situations, interarrival or service times
may be modeled as having zero variance; in this
case, interarrival or service times are considered to
be deterministic.
• For example, if interarrival times are deterministic,
then each interarrival time will be exactly 1/λ, and if
service times are deterministic, each customer’s
service time is exactly 1/µ.
The Kendall-Lee Notation for
Queuing Systems
• Standard notation used to describe many queuing
systems.
• The notation is used to describe a queuing system in
which all arrivals wait in a single line until one of s
identical parallel servers is free. Then the first
customer in line enters service, and so on.
• To describe such a queuing system, Kendall devised
the following notation.
• Each queuing system is described by six characters:
1/2/3/4/5/6
• The first characteristic specifies the nature of the
arrival process. The following standard
abbreviations are used:
M = Interarrival times are independent, identically
distributed (iid) and exponentially distributed
D = Interarrival times are iid and deterministic
Ek = Interarrival times are iid Erlangs with shape parameter
k.
GI = Interarrival times are iid and governed by some
general distribution
• The second characteristic specifies the nature of the
service times:
M = Service times are iid and exponentially distributed
D = Service times are iid and deterministic
Ek = Service times are iid Erlangs with shape parameter
k.
G = Service times are iid and governed by some general
distribution
• The third characteristic is the number of parallel servers.
• The fourth characteristic describes the queue discipline:
  – FCFS = First come, first served
  – LCFS = Last come, first served
  – SIRO = Service in random order
  – GD = General queue discipline
• The fifth characteristic specifies the maximum allowable number of customers in the system.
• The sixth characteristic gives the size of the population from which customers are drawn.
• In many important models 4/5/6 is GD/∞/∞. If this is the case, then 4/5/6 is often omitted.
• M/E2/8/FCFS/10/∞ might represent a health clinic with 8 doctors, exponential interarrival times, two-phase Erlang service times, a FCFS queue discipline, and a total capacity of 10 patients.
END