DETERMINATION OF EXPLICIT SOLUTION
FOR A GENERAL CLASS OF MARKOV PROCESSES
DANIELLE LIU, AT&T Bell Laboratories, Holmdel, NJ
Y. QUENNEL ZHAO, Department of Mathematics and Statistics, University of Winnipeg, Winnipeg, Manitoba, CANADA
ABSTRACT
In this paper, we study both GI/M/1 type and M/G/1 type Markov chains with the special structure A0 = ωβ. We obtain explicit formulas for the matrices R and G, which generalize earlier results on Markov chains of QBD type. In the GI/M/1 case, we show how to find the maximal eigenvalue of R, and then that the computation of R is equivalent to that of x2. In the M/G/1 case, we show that G = 1β. Based on these results, we show that a stable recursion for finding the stationary probability vectors xi can be carried out easily. Two applied models are discussed as examples of such processes.
1. INTRODUCTION
We consider both GI/M/1 type and M/G/1 type Markov chains [6, 7], where the transition submatrices are of order m ≤ ∞. The results in Theorem 1 are consequences of Tweedie [9]. Our purpose is to give explicit expressions for the rate matrix R and the matrix G when A0 has the special structure A0 = ωβ. The matrices R and G play important roles in matrix analytic solutions. Our results generalize Gillent and Latouche [2] and Ramaswami and Latouche [10], in which QBD processes were considered. The former work dealt with the case m < ∞; the results were later generalized to m ≤ ∞. The counterpart results were also obtained when A2 has a similar special structure. In this paper, we generalize the results in [2] and [10] to Markov chains of GI/M/1 type and M/G/1 type. Many interesting problems in applications can be formulated as either a GI/M/1 type or an M/G/1 type Markov chain with this special structure. The results in this paper provide a unified treatment of the shortest queue model with jockeying, a queueing system that has been tackled for many years by different methods [3, 12, 1, 13]. The main results will be applied to it later in the paper as an example of a GI/M/1 type Markov chain. The study of patient and impatient customers in a telephone system serves as an example of the M/G/1 type [14].
We consider irreducible, aperiodic Markov chains with the transition probability
matrix in the form of either


P = \begin{pmatrix}
B_0 & C_0 &     &     &        &        \\
B_1 & C_1 & A_0 &     &        &        \\
B_2 & A_2 & A_1 & A_0 &        &        \\
B_3 & A_3 & A_2 & A_1 & A_0    &        \\
\vdots & \vdots & \vdots & \vdots & \ddots & \ddots
\end{pmatrix},   (1)
where B_0 is a square matrix of order n ≤ ∞, A_i (i = 0, 1, 2, . . .) are square matrices of order m ≤ ∞, and C_0, C_1 and B_i (i = 1, 2, 3, . . .) are matrices of proper dimensions;
or

\tilde P = \begin{pmatrix}
B_0 & B_1 & B_2 & B_3 & \cdots \\
A_0 & A_1 & A_2 & A_3 & \cdots \\
    & A_0 & A_1 & A_2 & \cdots \\
    &     & A_0 & A_1 & \cdots \\
    &     &     & A_0 & \cdots \\
    &     &     &     & \ddots
\end{pmatrix},   (2)
where Bi and Ai (i = 0, 1, 2, . . .) are square matrices of order m ≤ ∞.
We denote by x = (x_0, x_1, x_2, . . .) the stationary probability vector associated with P or \tilde P. Then we have:
THEOREM 1 If the Markov chain is positive recurrent, then the stationary probability vector x is such that:
a) for P in (1),
x_i = x_1 R^{i-1}, \qquad i \geq 2,

where the matrix R of order m is the minimal nonnegative solution of the matrix equation

R = \sum_{k=0}^{\infty} R^k A_k,   (3)

and it is the limit of the monotonically increasing sequence of matrices

R_0 = 0, \qquad R_{k+1} = A_0 (I - A_1)^{-1} + \sum_{l=2}^{\infty} R_k^l A_l (I - A_1)^{-1}, \quad k \geq 0.   (4)
The spectral radius η of the matrix R is strictly less than one. The vector (x_0, x_1) is obtained by solving the linear system

(x_0, x_1) = (x_0, x_1)
\begin{pmatrix}
B_0 & C_0 \\
\sum_{k=1}^{\infty} R^{k-1} B_k & C_1 + \sum_{k=2}^{\infty} R^{k-1} A_k
\end{pmatrix},

with the normalizing condition

x_0 1 + x_1 (I - R)^{-1} 1 = 1,
where 1 is a column vector of 1’s with the proper size.
b) for P̃ in (2), the matrix G of order m for the fundamental period is the
minimal nonnegative solution of the matrix equation
G = \sum_{k=0}^{\infty} A_k G^k,   (5)

and it is the limit of the monotonically increasing sequence of matrices

G_1 = A_0,   (6)

G_{k+1} = \sum_{l=0}^{\infty} A_l G_k^l, \quad k \geq 1.   (7)
The theorem as stated here was obtained in Neuts [6], [7] for m < ∞. For m = ∞, the proof for the GI/M/1 case is a direct consequence of the results of Tweedie [9], while the M/G/1 case can be proved similarly. For details, one may also refer to [10].
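To make the recursions concrete, the following Python sketch (using NumPy) iterates (4) for R and (7) for G on toy chains in which only A_0, A_1 and A_2 are nonzero. The block values, tolerances and function names are illustrative assumptions, not part of the models treated in this paper.

import numpy as np

def compute_R(A, tol=1e-12, max_iter=100_000):
    # Iterate (4): R_{k+1} = A_0 (I - A_1)^{-1} + sum_{l>=2} R_k^l A_l (I - A_1)^{-1},
    # starting from R_0 = 0; the iterates increase monotonically to R.
    m = A[0].shape[0]
    inv = np.linalg.inv(np.eye(m) - A[1])
    R = np.zeros((m, m))
    for _ in range(max_iter):
        S = A[0] @ inv
        P = R @ R                      # R_k^2
        for l in range(2, len(A)):
            S = S + P @ A[l] @ inv
            P = P @ R                  # R_k^{l+1}
        if np.abs(S - R).max() < tol:
            return S
        R = S
    raise RuntimeError("no convergence")

def compute_G(A, tol=1e-12, max_iter=100_000):
    # Iterate (7): G_{k+1} = sum_{l>=0} A_l G_k^l, starting from G_1 = A_0.
    G = A[0].copy()
    for _ in range(max_iter):
        S = A[0].copy()
        P = G                          # G_k^1
        for l in range(1, len(A)):
            S = S + A[l] @ P
            P = P @ G
        if np.abs(S - G).max() < tol:
            return S
        G = S
    raise RuntimeError("no convergence")

# Illustrative GI/M/1-type blocks with A0 = omega beta (drift downward):
beta = np.array([[0.5, 0.5]])
omega = np.array([[0.2], [0.1]])
A_gi = [omega @ beta,
        np.array([[0.3, 0.2], [0.2, 0.3]]),
        np.array([[0.2, 0.1], [0.2, 0.2]])]
R = compute_R(A_gi)                    # rows of R come out proportional

# Illustrative M/G/1-type blocks, again with A0 = omega beta (drift to level 0):
omega2 = np.array([[0.4], [0.5]])
A_mg = [omega2 @ beta,
        np.array([[0.3, 0.2], [0.2, 0.2]]),
        np.array([[0.05, 0.05], [0.05, 0.05]])]
G = compute_G(A_mg)                    # converges to the matrix 1 beta

For these toy blocks \sum_k A_k is stochastic; the computed R has all rows proportional to a common row vector, and the computed G equals 1β, consistent with the explicit formulas derived in the next section.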
The analogous results for continuous Markov chains are stated in the following
theorem.
THEOREM 2 For an irreducible, positive recurrent continuous Markov chain of GI/M/1 type with its infinitesimal generator partitioned as in (1) and \sup_i (-A_1)_{ii} < ∞, the stationary probability vector x is such that

x_i = x_1 R^{i-1}, \qquad i \geq 2,

where the matrix R of order m is the minimal nonnegative solution of the matrix equation
\sum_{k=0}^{\infty} R^k A_k = 0,   (8)

and it is the limit of the monotonically increasing sequence of matrices

R_0 = 0, \qquad R_{k+1} = A_0 (-A_1)^{-1} + \sum_{l=2}^{\infty} R_k^l A_l (-A_1)^{-1}, \quad k \geq 0.   (9)
The spectral radius η of the matrix R is strictly less than one. The vector (x_0, x_1) is obtained by solving the linear system

(x_0, x_1)
\begin{pmatrix}
B_0 & C_0 \\
\sum_{k=1}^{\infty} R^{k-1} B_k & C_1 + \sum_{k=2}^{\infty} R^{k-1} A_k
\end{pmatrix} = 0,

with the normalizing condition

x_0 1 + x_1 (I - R)^{-1} 1 = 1.
Similarly, for an irreducible, positive recurrent continuous Markov chain of M/G/1 type with its infinitesimal generator partitioned as in (2) and \sup_i (-A_1)_{ii} < ∞, the matrix G of order m for the fundamental period is the minimal nonnegative solution of the matrix equation

\sum_{k=0}^{\infty} A_k G^k = 0,   (10)

and it is the limit of the monotonically increasing sequence of matrices

G_1 = A_0,   (11)

G_{k+1} = (-A_1)^{-1} A_0 + \sum_{l=2}^{\infty} (-A_1)^{-1} A_l G_k^l, \quad k \geq 1.   (12)
Our purpose is to show the simplifications obtained when the rows of A0 are
proportional to a common row vector β, that is, A0 = ω · β, where ω is a column
vector. Without loss of generality, we assume that β is a row vector with β1 = 1.
2. DETERMINATION OF MATRICES R AND G
THEOREM 3 Assume that A0 = ωβ with β1 = 1. If the Markov chain P is
positive recurrent, then
"
R = A0 I −
∞
X
η
i−1
Ai
i=1
or R = ωξ, where
"
ξ=β I−
∞
X
η
i−1
Ai
i=1
#−1
#−1
,
(13)
,
(14)
and η = ξω is the maximal eigenvalue of R.
Proof: It follows from (4) and A_0 = ωβ with β1 = 1 that R_k = ωξ_k for k ≥ 1, where

\xi_1 = \beta (I - A_1)^{-1},

\xi_k = \beta (I - A_1)^{-1} + \sum_{l=2}^{\infty} (\xi_{k-1}\omega)^{l-1} \xi_{k-1} A_l (I - A_1)^{-1}.

Note that ξ_k ≥ 0 for k ≥ 1; thus R = ωξ for some nonnegative row vector ξ, which satisfies the equation

\xi = \beta (I - A_1)^{-1} + \sum_{l=2}^{\infty} (\xi\omega)^{l-1} \xi A_l (I - A_1)^{-1}.

Equivalently,

\beta = \xi \left[ I - \sum_{l=1}^{\infty} \eta^{l-1} A_l \right],

with η = ξω. Since η < 1, the inverse of \left[ I - \sum_{l=1}^{\infty} \eta^{l-1} A_l \right] exists.

Next we show that η = ξω is in fact the maximal eigenvalue of R. Since R = ωξ, Rω = (ξω)ω. Obviously, ξω is the spectral radius of R for m < ∞. For m = ∞, we have

\sum_{l=1}^{\infty} r^l R^l = \sum_{l=1}^{\infty} r^l (\xi\omega)^{l-1} \omega\xi,

which converges for positive r if and only if r < (ξω)^{-1}.
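As a numerical illustration of Theorem 3, the following sketch evaluates ξ from (14) for a trial η and iterates η ← ξω until the self-consistency η = ξω holds. The fixed-point iteration on η is a simple choice made here for illustration (Neuts [6] outlines other ways to locate η); the toy blocks are assumptions, not taken from the paper.

import numpy as np

# Illustrative GI/M/1-type blocks with A0 = omega beta.
beta = np.array([[0.5, 0.5]])
omega = np.array([[0.2], [0.1]])
A = [omega @ beta,
     np.array([[0.3, 0.2], [0.2, 0.3]]),
     np.array([[0.2, 0.1], [0.2, 0.2]])]
m = 2

def xi_of(eta):
    # (14): xi = beta [I - sum_{i>=1} eta^{i-1} A_i]^{-1}
    M = np.eye(m) - sum(eta ** (i - 1) * A[i] for i in range(1, len(A)))
    return beta @ np.linalg.inv(M)

eta = 0.5
for _ in range(1000):
    eta_new = (xi_of(eta) @ omega).item()   # candidate for eta = xi omega
    if abs(eta_new - eta) < 1e-14:
        break
    eta = eta_new

xi = xi_of(eta)
R = omega @ xi      # (13)-(14): R = omega xi, a rank-one matrix whose
                    # spectral radius equals eta = xi omega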
THEOREM 4 Assume that A0 = ωβ with β1 = 1. If the Markov chain P̃ is
positive recurrent, then G = 1β.
Proof: It follows from (7) and A_0 = ωβ with β1 = 1 that G_k = ν_k β for k ≥ 1, where

\nu_1 = \omega,

\nu_k = \sum_{l=0}^{\infty} A_l (\beta\nu_{k-1})^{l-1} \nu_{k-1}.

Thus G = νβ for some column vector ν. Also we know that G1 = 1; that is, G1 = νβ1 = ν = 1 since β1 = 1. Therefore, ν = 1.
Remark: The above result is also intuitively clear, since G_{jk} is the conditional probability, given that the chain starts in state (i, j), that it eventually enters the set of states at level i − 1 by hitting (i − 1, k), i ≥ 1. Since A_0 is the block through which the Markov chain moves downwards, G = 1β.
Next we state the analogous result for continuous time Markov processes.
THEOREM 5 Suppose that we have an irreducible, positive recurrent continuous Markov chain with an infinitesimal generator P partitioned as in (1). Assume that the matrix \sum_{i=0}^{\infty} A_i is irreducible and A_0 = ωβ. Further, let \sup_i (-A_1)_{ii} be finite. Then its stationary probability vector x = (x_0, x_1, x_2, . . .) is such that

x_i = x_1 R^{i-1}, \qquad i \geq 2,

where

R = A_0 \left[ - \sum_{i=1}^{\infty} \eta^{i-1} A_i \right]^{-1},   (15)

or R = ωξ, where

\xi = \beta \left[ - \sum_{i=1}^{\infty} \eta^{i-1} A_i \right]^{-1},   (16)

and η = ξω is the maximal eigenvalue of R.

Similarly, for an irreducible, positive recurrent continuous Markov chain with an infinitesimal generator \tilde P partitioned as in (2), if A_0 = ωβ with β1 = 1, then G = 1β.
For a GI/M/1 type Markov chain, an efficient algorithm to compute the matrix R is usually the key to obtaining numerical results. It is clear from Theorem 3 that if we can find the maximal eigenvalue of R in advance, then the computation of R is equivalent to solving a linear system of size m. Generally, if α is an eigenvalue of R, then it is a zero of the determinant \det(\alpha I - \sum_{k=0}^{\infty} \alpha^k A_k) (see Theorem 1 in [13]), or of \det(\sum_{k=0}^{\infty} \alpha^k A_k) in the continuous case. The converse is also true, but the proof is less obvious (see Theorem 2 in [4]). When A_0 = ωβ, this can be stated in the following lemma.

LEMMA 1 Under the assumptions of Theorem 3 (or Theorem 5), the eigenvalues of R are exactly the zeros of \det(\alpha I - \sum_{k=0}^{\infty} \alpha^k A_k) (or \det(\sum_{k=0}^{\infty} \alpha^k A_k)) satisfying |α| < 1.
A numerical approach to obtaining the maximal eigenvalue η without computing the matrix R is outlined in Neuts [6]. The significance of the maximal eigenvalue of R is examined in detail in [8]; some special cases in which η can be derived explicitly are also discussed in that paper. After finding the maximal eigenvalue η of R, the computation of R is equivalent to solving the linear system of equations for ξ, which is

\xi \left( I - \sum_{i=1}^{\infty} \eta^{i-1} A_i \right) = \beta

for the discrete case, or

\xi \left( - \sum_{i=1}^{\infty} \eta^{i-1} A_i \right) = \beta

for the continuous case. This is in fact equivalent to the computation of the stationary probability vector x_2. Since all rows of R are proportional to each other, the ith row is \omega_i (\xi_1, \xi_2, . . . , \xi_m). Letting x_i = (x_{i1}, x_{i2}, . . . , x_{im}), we have

x_2 = x_1 R = \sum_{i=1}^{m} \omega_i x_{1i} (\xi_1, \xi_2, . . . , \xi_m),

and therefore

\xi_k = \frac{x_{2k}}{\sum_{i=1}^{m} \omega_i x_{1i}}, \qquad k = 1, 2, . . . , m.
The probability vector x_2, together with x_0 and x_1, is determined by

(x_0, x_1, x_2) = (x_0, x_1, x_2)
\begin{pmatrix}
B_0 & C_0 & 0 \\
B_1 & C_1 & A_0 \\
\sum_{k=2}^{\infty} \eta^{k-2} B_k & \sum_{k=2}^{\infty} \eta^{k-2} A_k & \sum_{k=1}^{\infty} \eta^{k-1} A_k
\end{pmatrix}   (17)

for the discrete case, or by

(x_0, x_1, x_2)
\begin{pmatrix}
B_0 & C_0 & 0 \\
B_1 & C_1 & A_0 \\
\sum_{k=2}^{\infty} \eta^{k-2} B_k & \sum_{k=2}^{\infty} \eta^{k-2} A_k & \sum_{k=1}^{\infty} \eta^{k-1} A_k
\end{pmatrix} = 0   (18)

for the continuous case, with the normalizing condition

x_0 1 + x_1 1 + x_2 1/(1 - \eta) = 1.   (19)
Summarizing the above discussion leads to the following theorem.
THEOREM 6 Assume that A_0 = ωβ with β1 = 1. If the Markov chain P is positive recurrent, then

x_i = x_2 \eta^{i-2}, \qquad i > 2,   (20)

where η is the maximal root of the equation \det(\alpha I - \sum_{k=0}^{\infty} \alpha^k A_k) = 0 for the discrete time case (or \det(\sum_{k=0}^{\infty} \alpha^k A_k) = 0 for the continuous time case) satisfying 0 < α < 1, and (x_0, x_1, x_2) is determined by solving the linear system in (17) (or (18)).
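Numerically, one standard way to locate η without computing R (cf. Neuts [6, 8]) is to note that at the root of the determinant equation, α is an eigenvalue of \sum_k \alpha^k A_k, so η can be found as a Perron-Frobenius fixed point. A minimal sketch, again on illustrative blocks that are assumptions of this example only:

import numpy as np

# Illustrative GI/M/1-type blocks (same style as before; not from the paper).
beta = np.array([[0.5, 0.5]])
omega = np.array([[0.2], [0.1]])
A = [omega @ beta,
     np.array([[0.3, 0.2], [0.2, 0.3]]),
     np.array([[0.2, 0.1], [0.2, 0.2]])]

def eta_root(A, z=0.5, tol=1e-13, max_iter=10_000):
    # Iterate z <- sp(A(z)) with A(z) = sum_k z^k A_k; at the limit,
    # det(zI - A(z)) = 0 with 0 < z < 1, as required by Theorem 6.
    for _ in range(max_iter):
        Az = sum(z ** k * A[k] for k in range(len(A)))
        z_new = max(abs(np.linalg.eigvals(Az)))
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    raise RuntimeError("no convergence")

eta = eta_root(A)
# With eta known, (x0, x1, x2) follows from the finite linear system (17),
# and x_i = x2 * eta**(i - 2) for all higher levels, per Theorem 6.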
For an M/G/1 type Markov chain with A_0 = ωβ, since G = 1β, we have G^k = G for all k = 1, 2, . . .. The stable recursion for computing the probability vectors x_i, i = 1, 2, . . ., proposed by Ramaswami [11] can then be carried out easily, since both \bar B_k and \bar A_k are explicitly expressed. These results are stated as follows:
THEOREM 7 Assume that A_0 = ωβ with β = (\beta_1, \beta_2, . . . , \beta_m) and β1 = 1. If the Markov chain \tilde P is positive recurrent, then x_0 is the stationary probability vector of the stochastic matrix K = B_0 + \sum_{i=1}^{\infty} B_i G, up to a normalization constant c, and for i = 1, 2, . . .,

x_i = \left( x_0 \bar B_i + \sum_{j=1}^{i-1} x_j \bar A_{i+1-j} \right) \left( I - \bar A_1 \right)^{-1},   (21)

where

\bar B_\nu = \sum_{i=\nu}^{\infty} B_i G^{i-\nu}, \qquad \bar A_\nu = \sum_{i=\nu}^{\infty} A_i G^{i-\nu}, \qquad \nu \geq 0.   (22)
Proof: See Neuts [7].
Since G = 1β has a special structure, most of the matrix operations in (21)
may be replaced by vector operations.
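To illustrate how (21)-(22) reduce to vector operations when G = 1β, here is a sketch under the simplifying assumption that only finitely many blocks B_0, . . . , B_K and A_0, . . . , A_K are nonzero (all values illustrative). Then G^{i-ν} = G for i > ν, so each \bar X_ν = X_ν + (\sum_{i>ν} X_i 1)β is a rank-one update, and K = \bar B_0.

import numpy as np

def bar_sums(X, beta):
    # Xbar_nu = sum_{i>=nu} X_i G^{i-nu} with G = 1 beta:
    # Xbar_nu = X_nu + (sum_{i>nu} X_i 1) beta, built from tail sums of X_i 1.
    m = beta.shape[1]
    one = np.ones((m, 1))
    bars, tail = [None] * len(X), np.zeros((m, 1))
    for nu in range(len(X) - 1, -1, -1):
        bars[nu] = X[nu] + tail @ beta
        tail = tail + X[nu] @ one
    return bars

def ramaswami(B, A, beta, n_levels):
    Bbar, Abar = bar_sums(B, beta), bar_sums(A, beta)
    m = beta.shape[1]
    # x0 is the stationary vector of K = B0 + sum_{i>=1} B_i G, which here
    # equals Bbar[0]; Theorem 7 leaves a global normalizing constant c open.
    w, v = np.linalg.eig(Bbar[0].T)
    x0 = np.real(v[:, np.argmin(np.abs(w - 1))]).reshape(1, m)
    x0 = x0 / x0.sum()
    inv = np.linalg.inv(np.eye(m) - Abar[1])
    xs = [x0]
    for i in range(1, n_levels):                 # recursion (21)
        acc = x0 @ Bbar[i] if i < len(Bbar) else np.zeros((1, m))
        for j in range(1, i):
            if i + 1 - j < len(Abar):
                acc = acc + xs[j] @ Abar[i + 1 - j]
        xs.append(acc @ inv)
    return xs                                    # renormalize globally at the end

# Illustrative M/G/1-type blocks with A0 = omega beta (drift toward level 0).
beta = np.array([[0.5, 0.5]])
omega = np.array([[0.4], [0.5]])
A = [omega @ beta, np.array([[0.3, 0.2], [0.2, 0.2]]),
     np.array([[0.05, 0.05], [0.05, 0.05]])]
B = [np.array([[0.5, 0.2], [0.3, 0.4]]), np.array([[0.1, 0.1], [0.1, 0.1]]),
     np.array([[0.05, 0.05], [0.05, 0.05]])]
xs = ramaswami(B, A, beta, n_levels=10)

Because every \bar X_ν differs from X_ν only by the rank-one term (tail)β, no matrix power of G is ever formed, which is the point of the remark above.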
3. APPLICATIONS
There are many applications in which the model can be formulated as a Markov chain with the special structure studied in this paper. Besides those discussed in [10], we consider two different applications, one of GI/M/1 type and the other of M/G/1 type. The first deals with the shortest queue with jockeying, which has attracted the attention of researchers for many years, as mentioned in the introduction. The results presented in this paper enable us to give a unified treatment of the problem. As our second example, we use a telephone model studied recently by Zhao and Alfa [14], whose transition matrix is of M/G/1 type.
The shortest queue model with jockeying
Shortest queue models arise in a broad range of applications. As an example, consider the following satellite communication system. In such a system, all stations on earth are organized into disjoint zones; packets generated from earth zones arrive at the satellite using different possible access techniques; several buffers are provided at the satellite for the waiting packets to be processed or transmitted; and finally, the packets are sent to their destinations by the multiple down-link beams. Since there is more than one buffer on board, introducing jockeying of the waiting packets among the buffers can improve the performance of the system. For example, if we allow a packet waiting in a buffer with many waiting packets to move to another buffer with fewer waiting packets, then the average packet waiting time is obviously reduced. Specifically, we make the following model assumptions, used in [13], which include all models studied in [3, 12, 1] as special cases.
a) Customers arrive singly, with interarrival times independently and identically distributed according to an arbitrary distribution function A(t) with A(t) = 0 if t < 0, and they are not allowed to renege or balk.
b) There are c (c ≥ 2) servers, numbered 1, 2, . . . , c, in the system, and each of them has its own waiting line. In each waiting line, service is rendered according to the FCFS (first come, first served) discipline. The c servers have independent exponential service times, and the service times are independent of arrivals.
c) An arriving customer joins one of the shortest waiting lines according to a pre-determined probability distribution.
d) Jockeying among the waiting lines is permitted: the last customer(s) in the longest waiting line(s) instantaneously jockeys to the shortest waiting line(s) according to a pre-determined probability distribution as soon as the difference in the number of customers between the shortest waiting line(s) and the longest waiting line(s) exceeds r, r ≥ 1.
1) Continuous case:
Assume that the interarrival times are independent and identically distributed exponential random variables with mean 1/λ. Let X_k(t) represent the number of customers in queue k, including the customer in service (if any), at time t for k = 1, 2, . . . , c; then (X_1(t), X_2(t), . . . , X_c(t)) is a continuous time Markov chain. Let the infinitesimal generator of the Markov chain of queue lengths be partitioned according to the blocks G_{<r} and G_m (m = r, r + 1, . . .), defined by the largest number in the waiting lines (including the customer being served, if any):

G_{<r} = \{ (i_1, i_2, . . . , i_c) \mid \max_{1 \leq k \leq c} i_k < r \},

and for m ≥ r,

G_m = \{ (i_1, i_2, . . . , i_c) \mid \max_{1 \leq k \leq c} i_k = m \}.

The states in each block can be arranged arbitrarily. For convenience, let (m, m, . . . , m) be the last state in the block. Then Q is given by

Q = \begin{pmatrix}
B_0 & C_0 &     &     &        &        \\
B_1 & C_1 & Q_0 &     &        &        \\
    & Q_2 & Q_1 & Q_0 &        &        \\
    &     & Q_2 & Q_1 & Q_0    &        \\
    &     &     & \ddots & \ddots & \ddots
\end{pmatrix},   (23)

where B_0 is a submatrix of size r^c \times r^c, C_0 of size r^c \times [(r+1)^c - r^c], C_1 and Q_m for m = 0, 1, 2 are of size [(r+1)^c - r^c] \times [(r+1)^c - r^c], and B_1 is of the proper size. If we denote the last row of Q_0 by λβ, then Q_0 = ω · β, where ω is the column vector with all elements being 0 except the last one (remember that (m, m, . . . , m) is the last state in the block), which is λ. The maximal eigenvalue of R is found to be equal to the traffic intensity to the power c (see [12]), that is, η = ρ^c = (λ/µ)^c with µ = \sum_{i=1}^{c} \mu_i.
2) Discrete case: the embedded Markov chain
Consider the embedded Markov chain at the arrival epochs if the interarrival times are random variables with a general distribution. If the transition matrix P is partitioned in the same way as in the continuous case, then P is given by

P = \begin{pmatrix}
B_0 & C_0 &     &     &     &        \\
B_1 & C_1 & A_0 &     &     &        \\
B_2 & A_2 & A_1 & A_0 &     &        \\
B_3 & A_3 & A_2 & A_1 & A_0 &        \\
B_4 & A_4 & A_3 & A_2 & A_1 & A_0    \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.   (24)
Here, B_0 is a submatrix of size r^c \times r^c, C_0 of size r^c \times [(r+1)^c - r^c], B_m, m = 1, 2, . . ., of size [(r+1)^c - r^c] \times r^c, and C_1 and A_m for all m = 0, 1, 2, . . . are of size [(r+1)^c - r^c] \times [(r+1)^c - r^c]. If we denote the last row of A_0 by b_0 β with

b_0 = \int_0^{\infty} e^{-\mu t} \, dA(t),

then A_0 = ω · β, where ω is the column vector with all elements being 0 except the last one, which is b_0. It follows from Theorem 6 that η and (x_0, x_1, x_2) completely determine all the other stationary probability vectors. It was proved in [13] that the maximal eigenvalue of R is given by η = σ^c, where σ is the unique solution for x, inside the unit circle, of the equation

x = \sum_{k=0}^{\infty} x^k b_k,

with

b_k = \int_0^{\infty} \frac{(\mu t)^k}{k!} \, e^{-\mu t} \, dA(t).
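For instance, if the interarrival distribution A(t) is taken to be exponential with rate λ (an assumption made only for this illustration), the integral gives b_k = λµ^k/(λ + µ)^{k+1}, and the equation x = \sum_k x^k b_k can be solved by direct iteration; in this special case the root inside the unit circle is σ = λ/µ, so η = (λ/µ)^c, matching the continuous case above. A short Python sketch:

# Solving x = sum_k x^k b_k for sigma; the exponential choice of A(t), the
# rates and the truncation level below are illustrative assumptions.
lam, mu, c = 0.8, 1.0, 2              # lambda, mu = mu_1 + ... + mu_c, servers

def b(k):
    # b_k = int_0^inf ((mu t)^k / k!) e^{-mu t} dA(t) with dA(t) = lam e^{-lam t} dt
    return lam * mu ** k / (lam + mu) ** (k + 1)

x = 0.5                               # any starting point in (0, 1)
for _ in range(10_000):
    x_new = sum(x ** k * b(k) for k in range(400))   # truncated series
    if abs(x_new - x) < 1e-13:
        break
    x = x_new

sigma = x                             # converges to lam / mu = 0.8 here
eta = sigma ** c                      # maximal eigenvalue of R: 0.64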
A telephone system with both patient and impatient customers
Recently, Zhao and Alfa [14] studied the impact of the presence of impatient customers on a telephone switch. An impatient customer (incomplete call), whether due to a dial tone delay or abandonment of the system, still consumes about 30% to 80% of the real time needed to process a complete call. They formulated the model as a discrete time Markov chain with finite capacity. Since the buffer size is usually large, we relax the condition to infinite capacity here. Specifically, we assume that the arrival process is Poisson with rate λ. The service mechanism is last come, first served, and patient customers are always served first. Let T_0 be the waiting time threshold. Upon arrival, every customer is patient. If the waiting time a customer has endured in the buffer before entering service is at least T_0, then it becomes impatient and remains impatient forever. The service times of patient and impatient customers are constants equal to T_- and T_+, respectively, with T_- > T_+. Let N_+(n) and N_-(n) be, respectively, the numbers of impatient and patient customers in the system at time t_n. Because a patient customer may become impatient later, neither of these processes is Markovian. To study this system, an approximate Markovian model was proposed in [14], essentially by refreshing all patient customers at each service completion epoch, forgetting their waiting time history. Let the capacity K_- of the buffer for storing patient customers be finite, and let the capacity for storing impatient customers be infinite. Let (i, j) be a state of the Markov chain, with i and j the numbers of impatient and patient customers in the system, respectively. Define the blocks G_i as follows:

G_i = \{ (i, j) \mid j = 0, 1, . . . , K_- \}, \qquad i = 0, 1, . . . .
It can be shown that the transition matrix is given by

B B1 B2 B3 · · · · · ·
 0

 A0 A1 A2 A3 · · · · · ·


 0

P =

 0

 .
 ..


A0 A1 A2 · · ·
0
..
.
..
.
..
.
where
a a1 a2 · · · aK− −1
 0

 a0 a1 a2 · · · aK −1

−
(25)

aK− 

aK− 


a0 a1 · · · aK− −2 aK− −1 

0
0
..
.
0
0
..
.
a0 · · · aK− −3 aK− −2
..
..
..
. ···
.
.
0
0
···
a0

0 0 ··· 0


 0 0 ··· 0

for j = 1, 2, . . .,





··· 

,

··· 


··· 


A0 A1 · · ·
..
..
. ···
.
..
..
. ··· ···
.




B0 = 











Bj = 







a1

,







aj+K− 

aj+K− 


0 0 · · · 0 aj+K− −1 

0
..
.
0 ···
..
. ···
0 aj+K− −2
..
..
.
.
0 0 ··· 0
aj+1
(27)










b b1 b2 · · · bK− −1 bK− 
 0


 0 0 0 ···
0
0 


,
A0 = 
.. ..
..
.. 
 ..
. . ···
.
. 

 .

(26)
0
0
0 ···
0
0

(28)







A1 = 





and
0
0
0
···
0
a0 a1 a2 · · · aK− −1

bK− +1 

aK− 

0
..
.
a0 a1 · · · aK− −2 aK− −1
..
..
.. ..
.
.
. . ···
0
0
0
···
a0
a1

bj+K−
 0 0 ··· 0

 0 0 ··· 0 a

j+K− −1

 0
Aj = 

 .
 ..


0 ···
..
. ···
0 aj+K− −2
..
..
.
.
0 0 ··· 0
aj
for j = 2, 3, . . .. In the matrices, for k = 0, 1, 2, . . .,
ak =


,





(29)













(λT− )k −λT−
(λT+ )k −λT+
e
and bk =
e
.
k!
k!
(30)
(31)
This is a matrix of M/G/1 type, which has the special structure studied in this paper. Let b_{\leq K_-} = \sum_{i=0}^{K_-} b_i, so that the first row of A_0 is b_{\leq K_-} \beta. Then A_0 = ω · β, where ω is the column vector with all elements being 0 except the first one, which is b_{\leq K_-}. In this case,

G = \frac{1}{b_{\leq K_-}}
\begin{pmatrix}
b_0 & b_1 & \cdots & b_{K_-} \\
b_0 & b_1 & \cdots & b_{K_-} \\
\vdots & \vdots & & \vdots \\
b_0 & b_1 & \cdots & b_{K_-}
\end{pmatrix}.
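The special structure makes β, ω and G immediate to assemble. The following sketch does so from the Poisson probabilities in (30)-(31); the parameter values λ, T_-, T_+ and K_- are assumptions chosen for the example only.

import math
import numpy as np

lam, T_minus, T_plus, K_minus = 1.0, 1.2, 0.6, 4   # illustrative values

def b(k):
    # (31): b_k = ((lam T_+)^k / k!) e^{-lam T_+}
    return (lam * T_plus) ** k / math.factorial(k) * math.exp(-lam * T_plus)

bvec = np.array([b(k) for k in range(K_minus + 1)])
b_le = bvec.sum()                      # b_{<= K_-}
beta = (bvec / b_le).reshape(1, -1)    # so the first row of A0 is b_le * beta
omega = np.zeros((K_minus + 1, 1))
omega[0, 0] = b_le                     # only the first entry of omega is nonzero
A0 = omega @ beta                      # reproduces (28): first row b_0, ..., b_{K_-}
G = np.ones((K_minus + 1, 1)) @ beta   # G = 1 beta: every row of G equals beta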
For numerical results about these two examples, one may refer to [13] and [14].
Acknowledgement
The authors thank the referees for their useful comments. Y.Q. Zhao acknowledges that this work was partly supported by Grant No. 4452 from the Natural Sciences and Engineering Research Council of Canada.
References
[1] I.J.B.F. Adan, J. Wessels and W.H.M. Zijm, "Analysis of the asymmetric shortest queue problem with threshold jockeying," Stochastic Models, Vol. 7, 615-627, 1991.
[2] F. Gillent and G. Latouche, "Semi-explicit solutions for M/PH/1-like queueing systems," Eur. J. Oper. Res., Vol. 13, 151-160, 1983.
[3] E.P.C. Kao and C. Lin, "A matrix-geometric solution of the jockeying problem," Eur. J. Oper. Res., Vol. 44, 67-74, 1990.
[4] G.R. Murthy, M. Kim and E.J. Coyle, "Equilibrium analysis of skip free Markov chains: nonlinear matrix equations," Stochastic Models, Vol. 7, 547-571, 1991.
[5] M.F. Neuts, "The Markov renewal branching process," Proc. of the Conf. on Math. Methodology of Queues, Kalamazoo, Michigan, Springer-Verlag, New York, 1974.
[6] M.F. Neuts, Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, Johns Hopkins University Press, Baltimore, 1981.
[7] M.F. Neuts, Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker Inc., New York, 1989.
[8] M.F. Neuts, "The caudal characteristic curve of queues," Adv. Appl. Prob., Vol. 18, 221-254, 1986.
[9] R.L. Tweedie, "Operator-geometric stationary distributions for Markov chains with applications to queueing models," Adv. Appl. Prob., Vol. 14, 368-391, 1982.
[10] V. Ramaswami and G. Latouche, "A general class of Markov processes with explicit matrix-geometric solutions," OR Spektrum, Vol. 8, 209-218, 1986.
[11] V. Ramaswami, "A stable recursion for the steady state vector in Markov chains of M/G/1 type," Stochastic Models, 4(1), 183-188, 1988.
[12] Y.Q. Zhao and W.K. Grassmann, "The shortest queue model with jockeying," Naval Res. Log., Vol. 37, 773-787, 1990.
[13] Y.Q. Zhao and W.K. Grassmann, "Queueing analysis of a jockeying model," Operations Research, accepted, 1993.
[14] Y.Q. Zhao and A.S. Alfa, "Performance analysis of a telephone system with both patient and impatient customers," Telecommunication Systems, submitted, 1994.