On Self-Similar Traffic in ATM Queues:
Definitions, Overflow Probability Bound and Cell Delay
Distribution *
Boris Tsybakov
Nicolas D. Georganas, FIEEE
Institute for
Problems in Information Transmission
Russian Academy of Science
19, Bolshoi Karetnii Per.
Moscow, Russia
e-mail: [email protected]
Multimedia Communications Research
Laboratory (MCRLab)
Dept. of Electrical & Computer Engin.
University of Ottawa
Ottawa, Ontario, Canada K1N 6N5
e-mail: [email protected]
ABSTRACT
Recent traffic measurements in corporate LANs, variable-bit-rate video sources,
ISDN control-channels and other communication systems, have indicated traffic
behaviour of self-similar nature. This paper first discusses some definitions and
properties of (second-order) self-similarity and gives simpler criteria for it. It then
gives a model of self-similar traffic suitable for queueing system analysis of an
ATM queue. A lower bound to the overflow probability of a finite ATM buffer is
obtained, as well as a lower bound to the cell loss probability. Finally, the stationary
distribution of the cell delay in an infinite ATM buffer is obtained.
1. Introduction
Recent studies of high-quality, high resolution traffic measurements have revealed a new
phenomenon with potentially important ramifications to the modeling, design and control of
broadband networks. These include an analysis of hundreds of millions of observed packets on
several Ethernet LANs at the Bellcore Morristown Research and Engineering Centre [10], [11],
and an analysis of a few millions of observed frame data by Variable-Bit-Rate(VBR) video
services [3]. In these studies, packet traffic appears to be statistically self-similar ("fractal"). Self-similar traffic is characterized by "burstiness" across an extremely wide range of time scales: traffic
"spikes" ride on longer-term "ripples", that in turn ride on still longer term "swells", etc.[10]. This
"fractal" behaviour of aggregate Ethernet traffic is very different both from conventional telephone
traffic and from currently considered models for packet traffic (e.g., Poisson, Batch-Poisson,
* This work was supported in part by a grant from Bell-Northern Research (BNR) and in part by NSERC operating grant A-8450.
Markov-Modulated Poisson Process, Fluid Flow models, etc.). Contrary to common beliefs that
multiplexing traffic streams tends to produce smoothed-out aggregate traffic with reduced
burstiness, aggregating self-similar traffic streams actually intensifies burstiness rather than
diminish it [10].
Self-similarity manifests itself in a variety of different ways: a spectral density that diverges
at the origin, a non-summable autocorrelation function (indicating long-range dependence), an
Index of Dispersion of Counts (IDC) that increases monotonically with the sample time T , etc.
[2], [10], [16]. A key parameter characterizing self-similar processes is the so-called Hurst parameter, H, which is designed to capture the degree of self-similarity.
Asynchronous Transfer Mode (ATM), high-speed, cell-relay networks will most likely be
first used as backbones for the interconnection of enterprise networks composed of several LANs,
and may also carry VBR video traffic. A lot of studies have been made for the design, control and
performance of such networks, using traditional traffic models. It is likely that many of those
results need major revision, particularly those pertaining to cell-loss rates and congestion control,
when self-similar traffic models are considered. In [12], some of the first studies of ATM queues
with self-similar input traffic and a conjecture on the cell-loss probability function are given.
In this paper, we first give some definitions and clarifications regarding second-order self-similar processes and show that some previous definitions contain redundancy. We then
consider a large class of asymptotically second-order self-similar processes, to model self-similar
input in an ATM queue. This class includes the self-similar traffic model of [4] as a special case.
Identification of parameters in our model from actual traffic measurements can be made, with
practical applications. We then find a lower bound to the buffer overflow probability in a finite
ATM buffer. It is shown that this probability cannot decrease faster than hyperbolically with the buffer size h, in contrast to the one in traditional Markov models, which decreases faster than exponentially (Kingman bound). This is a direct consequence of the self-similarity of input traffic.
We also establish a lower bound to the buffer cell-loss probability and show that it also decreases
hyperbolically with increasing h. Finally, considering a particular service discipline that we call
"source-by-source" service, we establish the stationary distribution of the cell delay in an infinite
ATM buffer. Our models and bounds have immediate practical application in the dimensioning of
buffers in ATM multiplexers and switches.
2. Second-Order Self-Similar Processes: Redundancies in old and some new definitions
In this Section, some known material on the definitions of exactly and asymptotically
second-order self-similar processes is first given and properties of the processes are discussed. We
then give what we prove to be better definitions and compare them with those presented in [10],
[11] and [4]. We consider only self-similar processes with discrete time.
Let X = {X_t} = (X_1, X_2, ...) be a semi-infinite segment of a covariance-stationary stochastic process of discrete argument t (time), t ∈ I_1 = {1, 2, ...}. Denote µ = E X_t and x_t = X_t − µ. The processes X and x = {x_t} have the autocorrelation function and the variance denoted as

r(k) = E[x_t x_{t+k}] / E[x_t^2]        (2.1)

and

var X = var x = σ^2 = E[x_t^2].        (2.2)

Both r(k) and σ^2 < ∞, as well as µ < ∞, are independent of t due to the stationarity of X_t and x_t.
We begin with the definition of exactly second-order self-similar process given in [10].
Let X have an autocorrelation function of the form

r(k) ~ k^{−β} L(k), k → ∞        (2.3)

where 0 < β < 1 and L is a slowly varying function as k → ∞, i.e. lim_{t→∞} L(tx)/L(t) = 1 for all x > 0. For each m ∈ I_1, let

X_t^{(m)} = (1/m)(X_{tm−m+1} + ... + X_{tm}), t ≥ 1.        (2.4)
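The aggregation in (2.4) is plain block averaging. A minimal sketch in Python (NumPy), with an illustrative sample path, for readers who want to experiment with measured traces:

```python
import numpy as np

def aggregate(x, m):
    """Block-average a sample path as in (2.4): X_t^{(m)} is the mean of
    the t-th non-overlapping block of m consecutive values."""
    n = (len(x) // m) * m          # drop the incomplete trailing block
    return x[:n].reshape(-1, m).mean(axis=1)

# Example: m = 2 halves the time resolution.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(aggregate(x, 2))             # [1.5 3.5]
```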
The process X^{(m)} = {X_t^{(m)}} = (X_1^{(m)}, X_2^{(m)}, ...) is the semi-infinite segment of a second-order stationary process of discrete argument t ∈ I_1. Let r^{(m)}(k) denote the autocorrelation function of the process X^{(m)}.
Process X is called exactly second-order self-similar with Hurst parameter H = 1 − (β/2), if for all m ∈ I_2 = {2, 3, ...}

var X^{(m)} = σ^2 m^{−β}        (2.5)

and

r^{(m)}(k) = r(k), k ∈ I_0 = {0, 1, 2, ...}.        (2.6)
Thus, according to [10], a stationary process X is called exactly second-order self-similar if the three conditions (2.3), (2.5), and (2.6) are satisfied.
Now, let us discuss this definition. Not all of the above conditions are independent of each other, and that fact will be proven below. As we will see, conditions (2.3) and (2.6) indeed follow from condition (2.5). Moreover, condition (2.5) gives the explicit expression for r(k). This expression allows L(k) to be only some constant, not an arbitrary slowly varying function. After this discussion, we will come to a more natural definition of an exactly self-similar process.
Let us first begin with some statements. The following establishes that the condition (2.5)
gives us an explicit form for the autocorrelation function.
Statement 1. Process X satisfies the condition (2.5) if and only if its autocorrelation function is

r(k) = (1/2)[(k+1)^{2−β} − 2k^{2−β} + (k−1)^{2−β}] ≜ g(k), 0 < β < 1, k ∈ I_1.        (2.7)
Proof. The claim that (2.7) implies (2.5) was given in [4]. To show that (2.5) implies (2.7), we can use induction over k with the evident initial induction point at k = 1. ∴
Fig. 1 shows the autocorrelation function g(k) as a function of k for two values of the
parameter β .
The following statement is easy to prove and shows that the slowly varying function L(k)
in (2.3) is indeed some explicitly given constant dependent on the parameter β .
Statement 2. For the autocorrelation function r(k) = g(k), the following limiting equation holds:

lim_{k→∞} r(k)/k^{−β} = (1/2)(2−β)(1−β) = H(2H−1)        (2.8)

where H = 1 − β/2, 0 < β < 1.
Now, it will be shown that condition (2.5), or equivalently (2.7), implies (2.6):
Statement 3. Process X satisfies the condition

r^{(m)}(k) = r(k), k ∈ I_0, m ∈ I_2 = {2, 3, ...}        (2.9)

if the condition (2.5) holds or if r(k) = g(k) for 0 < β < 1.
Proof. The claim that the equation r(k) = g(k) implies (2.9) was given in [4]. To show that (2.5) implies (2.9), we simply need to use Statement 1. ∴
Since in engineering studies we often work with the spectral density function instead of the autocorrelation, it will be demonstrated below that an explicit expression for the spectral density of an exactly (second-order) self-similar process can be given.
Statement 4 [14]. Process X satisfies the condition (2.5) if and only if its spectral density is

f(λ) = c |e^{2πiλ} − 1|^2 Σ_{l=−∞}^{∞} 1/|λ + l|^{3−β}, −1/2 ≤ λ ≤ 1/2,        (2.10)

where c is a constant given by the normalization ∫_{−1/2}^{1/2} f(λ) dλ = σ^2.
(By definition, f(λ) is linked with w(k) = E[x_t x_{t+k}] by the following equation: w(k) = ∫_{−1/2}^{1/2} e^{2πiλk} f(λ) dλ.)
As it follows from (2.10), the function f(λ) has a singularity of the type λ^{β−1} at λ = 0.
From the above discussion, we can now give the following definition of an exactly second-order self-similar process [4].
Definition. Process X is called exactly second-order self-similar with parameter H = 1 − β/2, 0 < β < 1, if its autocorrelation function is (2.7), i.e. if r(k) = g(k).
The other alternative definitions (for example, X is exactly self-similar if (2.5) holds) are not quite suitable for an extension of the definition to the asymptotically (second-order) self-similar processes presented below. Nevertheless, we see that a process is exactly (second-order) self-similar with parameter H = 1 − β/2, 0 < β < 1, if and only if
(i) its spectral density has the form (2.10),
or if and only if
(ii) it satisfies the condition (2.5).
Also, an exactly second-order self-similar process has the properties (2.6) and (2.8).
According to (2.8), the slowly varying function L(k) appearing in (2.3) does not play any role in the definition of an exactly self-similar process. This is good in applications, since otherwise it would have been necessary to include the function L(k) in the list of model parameters for identification and thus complicate the model.
In addition to the definition of second-order self-similarity, the unique definition of H-self-similarity (general, not second-order) for stationary sequences is widely accepted as a fixed point of the renormalization group (see, for example, [14], [16]). For H-self-similar sequences, both conditions (2.5) and (2.6) follow obviously from the definition and, because of Statement 1, a zero-mean Gaussian sequence is H-self-similar if and only if its autocorrelation function is g(k).
Now we are going to define and discuss asymptotically second-order self-similar
processes. As we will see later in Section 3, we can model such processes with a model which,
when used in the analysis of an ATM buffer, is amenable to analytical treatment.
Definition. Process X is called asymptotically second-order self-similar with parameter H = 1 − β/2, 0 < β < 1, if for all k ∈ I_1,

lim_{m→∞} r^{(m)}(k) = (1/2)[(k+1)^{2−β} − 2k^{2−β} + (k−1)^{2−β}] ≜ g(k).        (2.11)
To get insight into this definition, we will present some implications of equation (2.11) and a property of r(k) leading to (2.11).
Statement 5. If

lim_{k→∞} r(k)/k^{−β} = c, 0 < β < 1,        (2.12)

where 0 < c < ∞ is a constant, then

lim_{m→∞} var X^{(m)} / m^{−β} = σ^2 c / H(2H−1),        (2.13)

which is also a finite constant.
Proof. Because of stationarity [4], [21],

var X^{(m)} = m^{−2} E(Σ_{i=1}^{m} x_i)^2 = σ^2 m^{−1} + 2σ^2 m^{−2} Σ_{i=1}^{m} (m−i) r(i).

Under condition (2.12), we have

lim_{m→∞} σ^{−2} m^{β} [var X^{(m)}] = lim_{m→∞} 2c m^{β−2} ∫_1^m (m−x) x^{−β} dx = c / H(2H−1),

where we used the following equations:

lim_{m→∞} (Σ_{i=1}^{m} (m−i) r(i)) / (∫_1^m (m−x) x^{−β} dx) = c,

∫_1^m (m−x) x^{−β} dx = m^{2−β}[(1−β)^{−1} − (2−β)^{−1}] − m(1−β)^{−1} + (2−β)^{−1}. ∴
Statement 6. If (2.12) holds, then

lim_{m→∞} r^{(m)}(k) = g(k), k ∈ I_1.        (2.14)

Proof: Statement 6 follows directly from Statement 5 and the known general equation

r^{(m)}(k) = [(k+1)^2 V^{(k+1)m} − 2k^2 V^{km} + (k−1)^2 V^{(k−1)m}] / (2 V^m), k ∈ I_1,

where V^m = var X^{(m)}. ∴
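Statements 1 and 3 can be verified numerically through this general identity: substituting V^m = σ^2 m^{−β} from condition (2.5) must return r^{(m)}(k) = g(k) for every m. A small Python sketch (the values of β, k, and m are illustrative):

```python
def g(k, beta):
    # autocorrelation (2.7)
    return 0.5 * ((k + 1) ** (2 - beta) - 2 * k ** (2 - beta) + (k - 1) ** (2 - beta))

def r_m(k, m, beta, sigma2=1.0):
    """r^{(m)}(k) via the general identity, assuming (2.5): V^n = sigma^2 * n^{-beta}."""
    V = lambda n: 0.0 if n == 0 else sigma2 * n ** (-beta)
    return ((k + 1) ** 2 * V((k + 1) * m)
            - 2 * k ** 2 * V(k * m)
            + (k - 1) ** 2 * V((k - 1) * m)) / (2 * V(m))

beta = 0.7
for m in (1, 2, 10, 100):
    for k in (1, 2, 5):
        assert abs(r_m(k, m, beta) - g(k, beta)) < 1e-12   # independent of m
print("identity holds")
```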
Comparison of Statements 2, 5, and 6 shows that an exactly second-order self-similar process with parameter H = 1 − β/2, 0 < β < 1, must have r(k) ~ H(2H−1) k^{−β}, whereas a process is asymptotically second-order self-similar with parameter H = 1 − β/2, 0 < β < 1, if r(k) ~ c k^{−β} with some constant c which is not necessarily equal to H(2H−1).
Comparison with other definitions
Now we compare the above definition (2.11) and the definitions given in [11] and [10] for
asymptotically second-order self-similar (a.s.o.s.s.) process.
According to [10], X is a.s.o.s.s. with parameter H if, for all k large enough, r^{(m)}(k) → r(k) as m → ∞, with r(k) given by (2.3).
Also, according to [11], X is a.s.o.s.s. with parameter H if, for k ≥ 1, r^{(m)}(k) → g(k) as m → ∞ and r(k) satisfies (2.12).
The definition in [10] is not equivalent to the definition (2.11), since (as m → ∞ ) the
former requires the convergence of r ( m) (k) to r(k) for large k instead of the convergence to g(k)
for all k, as the latter requires.
It is questionable whether the definition in [11] is equivalent to the definition (2.11), since the former has the extra condition (2.12) for r(k). In fact, (2.12) is a sufficient condition for a process to be a.s.o.s.s. under the definition (2.11).
Our definition (2.11) is the same as in [4].
We conclude this Section with the next two statements, which are useful in applications where data traffic from many streams merges into one, as for example in a data multiplexer or in an ATM switch.
Statement 7. If X′ and X″ are uncorrelated processes such that r(k) ~ c_1 k^{−β_1}, k → ∞, for X′ and r(k) ~ c_2 k^{−β_2}, k → ∞, for X″, where c_i and β_i, i = 1, 2, are constants, 0 < c_i < ∞, 0 < β_i < 1, then X′ + X″ is an asymptotically second-order self-similar process with parameter H = 1 − β/2, where β = min(β_1, β_2).
Note that, according to Statement 6, both processes X′ and X″ in Statement 7 are asymptotically self-similar, X′ with H_1 = 1 − β_1/2, 0 < β_1 < 1, and X″ with H_2 = 1 − β_2/2, 0 < β_2 < 1. Thus merging asymptotically second-order self-similar data streams produces an asymptotically self-similar stream.
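A numeric illustration of Statement 7, with hypothetical constants c_1, c_2 and exponents β_1, β_2: for uncorrelated processes the covariance tails add, and the smaller exponent dominates the sum.

```python
c1, b1 = 1.0, 0.3      # tail of X':  r'(k) ~ c1 * k**(-b1)
c2, b2 = 5.0, 0.8      # tail of X'': r''(k) ~ c2 * k**(-b2)

def w_sum(k):
    # uncorrelated processes: covariance tails simply add
    return c1 * k ** (-b1) + c2 * k ** (-b2)

b = min(b1, b2)        # Statement 7: the sum has H = 1 - b/2
for k in (10**3, 10**5, 10**7):
    print(k, w_sum(k) / k ** (-b))   # approaches c1 = 1.0
```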
Statement 8. Let the uncorrelated processes X′ and X″ be exactly second-order self-similar, X′ with H_1 and X″ with H_2. If H_1 = H_2 = H, then X′ + X″ is exactly second-order self-similar with parameter H. If H_1 ≠ H_2, then X′ + X″ is not exactly second-order self-similar but is asymptotically second-order self-similar with H = max(H_1, H_2).
Proofs: The proofs of these statements are straightforward. ∴
Statement 8 is very important too, as it shows that multiplexing exactly second-order self-similar data streams may or may not result in a stream of the same type.
Finally, let us note in passing that the term "self-similar" may not be quite appropriate from the linguistic point of view, since everything is obviously self-similar to itself! We are not insisting, however, on changing that established term!
3. A Model for Self-Similar Traffic
In this Section, we present a model for self-similar input cell traffic to an ATM buffer. There are many known publications on self-similar models for processes arising in natural science and engineering [13], [20]. The recently published book [2] gives an excellent survey of models, data examples, a historical overview, discussion, and a list of about 800 references on long-memory processes. We shall make a comparison of our model with some models that are close to it.
Consider the following stationary random process
Y = (...,Y −1 ,Y0 ,Y1 ,...)
which is presented here as a mathematical model for ATM self-similar traffic. The traffic is
formed as an aggregation of packet sources each generating a cell stream during its active period.
Let us give a more specific presentation of the traffic Y .
At time t ∈ I_{−∞} = {..., −1, 0, 1, ...}, a random number of traffic sources, ξ_t, begin their active periods. The variables ξ_t, t ∈ I_{−∞}, are independent and Poissonian with mean λ_1. We choose ξ as generic symbol for ξ_t.
All sources are numbered by s ∈ I_{−∞} in the order of the beginnings of their active periods. Under this numbering, sources with active periods beginning at the same time are ordered randomly. The time of the beginning of active period s is denoted by ω_s. The active period s lasts for τ_s time units: ω_s, ..., ω_s + τ_s − 1. The variable τ_s is called the length of active period s. The variables τ_s, s ∈ I_{−∞}, are assumed to be random, independent, independent of ξ_t, ω_s, t ∈ I_{−∞}, s ∈ I_{−∞}, and identically distributed. They take values in the set I_1 = {1, 2, ...}. We choose τ as generic symbol for τ_s.
The source s generates ϑ_{ω_s}^{(s)}, ..., ϑ_{ω_s+τ_s−1}^{(s)} cells at times ω_s, ..., ω_s + τ_s − 1, respectively. At other times, the source s does not generate cells, i.e. ϑ_t^{(s)} = 0 if t < ω_s or t ≥ ω_s + τ_s.
We assume that

ϑ^{(s)} = (..., 0, ϑ_{ω_s}^{(s)}, ..., ϑ_{ω_s+τ_s−1}^{(s)}, 0, ...)        (3.1)

is formed from some random stationary sequence

ϑ̂^{(s)} = (..., ϑ̂_{−1}^{(s)}, ϑ̂_0^{(s)}, ϑ̂_1^{(s)}, ...)        (3.2)

in the natural way:

ϑ^{(s)} = (..., 0, ϑ̂_{ω_s}^{(s)}, ..., ϑ̂_{ω_s+τ_s−1}^{(s)}, 0, ...).        (3.3)

We assume also that the sequences ϑ̂^{(s)}, s ∈ I_{−∞}, are stationary, independent, and identically distributed and also independent of ξ_t, ω_s, τ_s, s ∈ I_{−∞}, t ∈ I_{−∞}. We choose ϑ as generic symbol for ϑ_t^{(s)} and ϑ̂ for ϑ̂_t^{(s)}.
The number of new cells Y_t that appear in traffic Y at time t is the sum of the numbers of cells generated by all sources active at time t,

Y_t = Σ_{s=−∞}^{∞} ϑ_t^{(s)}.        (3.4)
We give two special cases of such traffic:
Traffic with Poisson sources. In this example, the ϑ̂_t^{(s)} are independent (for all s and t) and Poissonian with mean λ_2. The intensity of this traffic is E Y_t = λ_1 λ_2 Eτ. However, the distribution of Y_t is not Poissonian (since the sum of a random number of Poisson random variables is not necessarily a Poisson variable). Moreover, the Y_t are dependent variables.
Traffic with sources of constant rate. In this example, ϑ̂_t^{(s)} = R ∈ I_1. The mean and variance of the traffic are

E Y_t = R (Eξ)(Eτ), E(Y_t − E Y_t)^2 = R^2 (Eξ)(Eτ).        (3.5)

The distribution of Y_t for given t is Poissonian.
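A short simulation sketch of the constant-rate special case (all parameter values are illustrative): Poisson numbers of sources start each slot, each emits R cells per slot for a random τ slots, and the sample mean and variance of Y_t should approach (3.5). A geometric τ is used so that Eτ is finite and known.

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, R, p, T = 0.5, 2, 0.5, 200_000   # tau ~ Geometric(p) on {1,2,...}, E[tau] = 1/p = 2

Y = np.zeros(T, dtype=np.int64)
for t in range(T):
    for _ in range(rng.poisson(lam1)): # xi_t sources start at slot t
        Y[t:t + rng.geometric(p)] += R # R cells per slot while active (slice truncates at T)

mean_th = R * lam1 * (1 / p)           # (3.5): R * E[xi] * E[tau] = 2.0
var_th = R**2 * lam1 * (1 / p)         # (3.5): R^2 * E[xi] * E[tau] = 4.0
print(Y.mean(), Y.var())               # close to 2.0 and 4.0
```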
Note that this process, but with only R = 1, was first introduced by Cox and Isham [6] for
modeling an immigration-death process and, in the general context of self-similarity, by Kosten
[9] in a multi-entry buffer consideration. It was also used in [12] for the modeling of ATM traffic.
In our modeling of self-similar ATM traffic with the general process Y, we do not give any real physical meaning to the above-described sources and their active periods. We consider that only the total traffic Y has physical meaning. As to the bursty sources, we need them only as mathematical building blocks in the construction of traffic Y. Similarly, we do not give any physical meaning to some internal parameters of Y (such as R and λ_2 in the special cases).
In this connection, it is interesting to know how to extract the parameter λ_1 and the probabilistic description of the sources from measurements of the process Y_t.
Parameter identification in the case of traffic with sources of constant rate. It will be shown that, if Y_t is the process with sources of constant rate R, then, to find λ_1 and Pr{τ ≥ l}, it is enough to measure only the autocovariance function of Y_t,

w(k) ≜ cov{Y_t, Y_{t+k}} = E{(Y_t − E Y_t)(Y_{t+k} − E Y_{t+k})}, k ∈ I_0.        (3.6)

λ_1 and Pr{τ ≥ l} can be expressed by the following equations:

λ_1 = [w(0) − w(1)] / R^2,        (3.7)

Pr{τ ≥ l + 1} = [w(l) − w(l + 1)] / (λ_1 R^2), l ∈ I_1.        (3.8)

To find the constant-rate parameter R, it is convenient to use the equation

R = min{Y_t : Y_t > 0, t ∈ I_{−∞}}.        (3.9)
To prove (3.7) and (3.8), let us denote by ∆_i(l) the number of sources that started their active periods with length τ = l at time i. We have

Y_t = R Σ_{l=1}^{∞} Σ_{i=t−l+1}^{t} ∆_i(l).        (3.10)

Let A_i be the set of source numbers for the sources arrived at i. The size of A_i is ξ_i, where ξ_i is Poissonian with parameter λ_1. Let δ_s(l) be the indicator of τ_s = l for source s,

δ_s(l) = 1 with probability Pr{τ_s = l}, and 0 with probability 1 − Pr{τ_s = l}.        (3.11)

The random variables δ_s(l), s ∈ A_i, are i.i.d. and independent of ξ_i. Also,

∆_i(l) = Σ_{s∈A_i} δ_s(l).        (3.12)

Similarly as in the continuous-time case [6], we have that

E ∆_i(l) = λ_1 Pr{τ = l},        (3.13)

E ∆_i(l) ∆_j(m) = λ_1^2 Pr{τ = l} Pr{τ = m} + λ_1 Pr{τ = l} δ_ij δ_lm,        (3.14)

cov{∆_i(l), ∆_j(m)} = λ_1 Pr{τ = l} δ_ij δ_lm.        (3.15)
Use of (3.6), (3.10), and (3.15) yields

w(l) = λ_1 R^2 Σ_{t=l+1}^{∞} Pr{τ ≥ t}, l ∈ I_0.        (3.16)

Equations (3.7) and (3.8) follow from (3.16).
These equations (3.8) and (3.16) not only allow us to extract λ_1 and Pr{τ ≥ l} from measurements of the process Y but they also show when Y is exactly self-similar.
Statement 9. The process Y with sources of constant rate R is exactly second-order self-similar with H = 1 − β/2, 0 < β < 1, if and only if

Pr{τ ≥ l + 1} = (Eτ/2)[3(l+1)^{2−β} − 3l^{2−β} − (l+2)^{2−β} + (l−1)^{2−β}] = −(Eτ/2) δ^3((l + 1/2)^{2−β}), l ∈ I_1,

where δ^3(f) denotes the central third difference operator applied to a function f [note that δ(f(x)) = f(x + 1/2) − f(x − 1/2)] and Eτ = (1 + (1/2) Σ_{l=1}^{∞} δ^3((l + 1/2)^{2−β}))^{−1}.
The distribution Pr{τ ≥ l + 1} is Pareto-type,

Pr{τ ≥ l + 1} ~ (Eτ/2) β(1−β)(2−β) l^{−(β+1)}, l → ∞.

Sufficiency of the condition of exact self-similarity claimed by Statement 9 was given in [4] for the case R = 1.
Fig. 3 plots the above probability as a function of l.
Practical use of identification results. When R = 1, Y is determined by three parameters, λ_1, β, Eτ, where Eτ can be measured with the help of the equation w(0) = λ_1 Eτ. We can thus identify these parameters from measurements of actual traffic and then use the model for the buffer dimensioning of an ATM multiplexer or switch, using the bounds given in Section 4 below.
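The identification equations (3.7)-(3.9) can be exercised on synthetic traffic. A sketch with assumed parameters (τ geometric, so the true answers are known): simulate Y, estimate the autocovariance (3.6), then recover R, λ_1, and Pr{τ ≥ 2}.

```python
import numpy as np

rng = np.random.default_rng(1)
lam1, R, p, T = 0.5, 2, 0.5, 400_000   # true values; tau ~ Geometric(p), Pr{tau >= 2} = 1 - p

Y = np.zeros(T, dtype=np.int64)
for t in range(T):
    for _ in range(rng.poisson(lam1)):
        Y[t:t + rng.geometric(p)] += R

def w_hat(k):
    # empirical autocovariance (3.6)
    y = Y - Y.mean()
    return (y[: len(y) - k] * y[k:]).mean() if k else (y * y).mean()

R_hat = int(Y[Y > 0].min())                                  # (3.9)
lam1_hat = (w_hat(0) - w_hat(1)) / R_hat**2                  # (3.7)
surv2_hat = (w_hat(1) - w_hat(2)) / (lam1_hat * R_hat**2)    # (3.8) with l = 1
print(R_hat, lam1_hat, surv2_hat)                            # near 2, 0.5, 0.5
```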
Following our above study of the special case of Y with sources of constant rate, we
return to the general process Y .
It is interesting to note that the traffic Y can be obtained as a limit of aggregation of
traffics of M independent individual sources, each of which, after being in active state, spends
some time in idle (non-active) state. Let us give a more specific description for this limit
transition.
There are M independent individual sources 1, ..., M. Each source, at any time t ∈ I_{−∞}, can be in one of two states, active or idle. For the source s, let active states begin at times ..., ω_{−1}^{(s)}, ω_0^{(s)}, ω_1^{(s)}, .... The interval [ω_l^{(s)}, ω_{l+1}^{(s)}] is parted into two, one with length τ_l^{(s)} and another with length ϕ_l^{(s)}, τ_l^{(s)} + ϕ_l^{(s)} = ω_{l+1}^{(s)} − ω_l^{(s)}. The interval [ω_l^{(s)}, ω_l^{(s)} + τ_l^{(s)}) is called the l-th active period of source s, and the interval [ω_l^{(s)} + τ_l^{(s)}, ω_{l+1}^{(s)}) is called the l-th idle period of source s.
The random variables τ_l^{(s)}, ϕ_l^{(s)}, s ∈ {1, ..., M}, l ∈ I_{−∞}, are assumed independent. The variables τ_l^{(s)} are identically distributed. The variables ϕ_l^{(s)} are also identically distributed. Let a_τ = Eτ, a_ϕ = Eϕ be the means of τ_l^{(s)} and ϕ_l^{(s)}, respectively.
At the time t, the source s generates ϑ_t^{(s)} cells. If t is in an idle period, ϑ_t^{(s)} = 0. During its l-th active period, the source s generates ϑ_{ω_l^{(s)}}^{(s)} = ϑ̂_{ω_l^{(s)}}^{(s)}, ..., ϑ_{ω_l^{(s)}+τ_l^{(s)}−1}^{(s)} = ϑ̂_{ω_l^{(s)}+τ_l^{(s)}−1}^{(s)} cells at times ω_l^{(s)}, ..., ω_l^{(s)} + τ_l^{(s)} − 1, respectively, where the ϑ̂_t^{(s)} are from the stationary process ϑ̂^{(s)} given by (3.2).
We consider the aggregate traffic of M individual sources. In this traffic, at any time t, ξ_t(M) active periods start and Σ_{s=1}^{M} ϑ_t^{(s)} cells appear. Let us now go to the limit when M → ∞, while λ_1 = M/(a_τ + a_ϕ) and Pr{τ ≤ t} remain unchanged but Pr{ϕ ≤ t} → 0 for any t < ∞. In this limit, we obtain the process presented above.
The considered limit transition (in that part where ξ_t(M), t ∈ I_{−∞}, tends to a sequence of independent Poisson variables) is nothing more than an illustration of the well-known fact that the superposition of renewal processes is, in the limit and under appropriate normalization, a Poisson point process. (For the necessary and sufficient condition for the superposition of arbitrary point processes to converge to a Poisson process, see [7].)
In Appendix A (see Statement A.1), we show that the process Y is asymptotically second-order self-similar if Pr{τ > t} ~ c t^{−α}, 1 < α < 2.
For the special case of process Y with sources of constant rate R, Statement A.1 in Appendix A was given in [4] for R = 1 and in [12] for any R. In this special case, Statement A.1 follows easily from (3.16) and Statement 6. Moreover, in this case, if Y is asymptotically second-order self-similar with r(k) satisfying (2.12), then Pr{τ > l} ~ c(Eτ)(α − 1) l^{−α}, α = β + 1, l → ∞, follows directly from (3.8). In less precise words and using Statement 9, we can say that the self-similarity (exact or asymptotic) of traffic Y is heavily linked with the Pareto-type distribution of τ.
Another model of an asymptotically self-similar process (of continuous time), which is a sum of short-range independent attenuated and weighted stationary processes, was given in [5].
In the next section, we consider a queueing system with input traffic modelled by the process Y with sources of constant rate R = 1. Our results in Section 4 are related to a more general input traffic than such Y. The more general traffic has generally distributed and independent numbers of arrived sources, ξ_t, instead of Poisson distributed and independent ξ_t as above.
4. An ATM Queue with Asymptotically Self-Similar Input Traffic Y
In this section, we first consider a finite-buffer queueing system with input cell traffic Y having sources of constant rate R = 1. We will obtain lower bounds to the stationary buffer-overflow and cell-loss probabilities, expressed in terms of λ_1, Pr{τ = n} and h, where h is the size of the buffer. The bound decays hyperbolically rather than exponentially fast with increasing h when τ has the Pareto-type distribution and Y is self-similar.
At the end of the section, we will give the stationary distribution of cell delay in the case of an infinite buffer and when the cells of different sources are distinguishable, so the source-by-source service discipline can be applied.
We use the usual Kendall's symbols to denote a discrete time queueing system. In the
notation G/D/1/ h , the symbol G represents the general input cell (of length 1) traffic, D
represents the constant service time 1 for a cell, 1 represents one server (channel), and h
represents the buffer size in cells. Additionally, the symbol GY1 at the position of G signifies
input traffic Y having independent and generally distributed ξ t , t ∈ I− ∞ and sources of constant
rate R = 1 . The symbol Y1 at the position of G means the traffic Y with Poissonian and
independent ξ t and with the sources of constant rate R = 1 .
4.1 Buffer Overflow Probability Lower Bound
Statement 10. If Eτ < ∞ and Pr{ξ = 0} < 1 , the queueing system GY1/D/1/ h with any
service discipline from D(h) (Appendix B) has the overflow probability Pover lowerbounded by
∞
1
Pover ≥
Pr{τ ≥ n} =ˆ G1
(4.1)
∑
(Eτ + Eκ )2 n = n1
h + b
where n1 = 
+ 2, [x] is the integer part of x , and
 a 
Eκ = (1 − Pr{ξ = 0})−1 − 1, a = (Eτ + Eκ )−1 ≤ 1, b = a + 1.
Proof: See Appendix B. ∴
We will apply the bound (4.1) to the most interesting case of the Pareto-type distribution, Pr{τ = n} = c n^{−α−1}, 1 < α < 2, and (according to Statement A.1 in Appendix A) self-similar traffic Y1.
Using the inequality

Σ_{n=n_0}^{∞} n^{−α} ≥ n_0^{−α+1}/(α − 1), n_0 > 1, α > 1,        (4.2)

and noting that, in this case,

Pr{τ ≥ n} ≥ (c/α) n^{−α},        (4.3)

we get

P_over ≥ (c/(α(α − 1)(Eτ + Eκ)^2)) ([(h + b)/a] + 2)^{−α+1} ≜ G_2        (4.4)

where

Eτ = c Σ_{n=1}^{∞} n^{−α}, c = (Σ_{n=1}^{∞} n^{−α−1})^{−1},        (4.5)

Eκ = (1 − e^{−λ_1})^{−1} − 1,        (4.6)

and a and b are given in Statement 10.
Asymptotically, when h is large, (4.4) gives the bound P_over ≥ c_1 h^{−α+1} = c_1 h^{−β} = c_1 h^{−2(1−H)}, where c_1 is a constant independent of h but dependent on λ_1 and α, and where we used the equation H = 1 − β/2 = (3 − α)/2 from Statement A.1. With this bound, we reach the important conclusion that, for the considered self-similar traffic Y1, the overflow probability P_over cannot decrease faster than hyperbolically with the growth of buffer size h.
A numerical example
To get a numerical result, let us consider, for example, the case of α = 1.5 (the Hurst parameter is H = (3 − α)/2 = 0.75) and λ_1 = 0.2. In this case, c = 0.745, Eτ = 1.95, Eκ = 4.51, and

P_over ≥ 0.0238 ([(h + 1.15)/0.15] + 2)^{−1/2}.        (4.7)

According to (4.4) and (4.7),

P_over ≥ 7.9 × 10^{−3} for h = 0,
P_over ≥ 2.8 × 10^{−3} for h = 10,
P_over ≥ 9.3 × 10^{−4} for h = 100,
P_over ≥ 3 × 10^{−4} for h = 10^3,
P_over ≥ 9.4 × 10^{−5} for h = 10^4.

The singular case of h = 0 was calculated since it is easy to find the exact value of P_over when h = 0: P_over = 1 − e^{−r} − r e^{−r} = 5.9 × 10^{−2}, r = λ_1 Eτ = 0.389.
Note that even for a buffer size h = 100,000, the bound is of the order of 10^{−5}, which is still too high an overflow value for an ATM buffer under the above traffic parameters. We can thus use the above results for the practical design of buffers in ATM multiplexers and switches.
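The numbers in this example can be reproduced directly; a Python sketch (the zeta-function sums in (4.5) are truncated numerically):

```python
import numpy as np

alpha, lam1 = 1.5, 0.2
n = np.arange(1, 10**6, dtype=float)
c = 1.0 / np.sum(n ** (-alpha - 1))            # (4.5): c = 1/zeta(2.5), about 0.745
Etau = c * np.sum(n ** (-alpha))               # c * zeta(1.5), about 1.95
Ekappa = 1.0 / (1.0 - np.exp(-lam1)) - 1.0     # (4.6), about 4.5
a = 1.0 / (Etau + Ekappa)
b = a + 1.0
const = c / (alpha * (alpha - 1) * (Etau + Ekappa) ** 2)   # about 0.0238

def bound(h):
    # (4.4): P_over >= const * ([(h + b)/a] + 2)^(1 - alpha)
    return const * (np.floor((h + b) / a) + 2) ** (1 - alpha)

for h in (0, 10, 100, 10**3, 10**4):
    print(h, bound(h))
```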
For large λ_1, the bounds (4.1) and (4.4) evidently lose their tightness, since P_over → 1 when λ_1 → ∞, whereas the bounds do not go to 1 with λ_1 → ∞. Specially for large λ_1, we can present a very primitive but asymptotically right bound, namely,

P_over ≥ (Σ_{n=0}^{∞} q_n n Pr{τ = n}) / (Eτ + Eκ),        (4.8)

where q_n = Pr{at each moment t ∈ [1, n), h + 1 or more sources arrive, and at time t = 0, h + 2 or more sources arrive including the initial source 1}. When λ_1 → ∞, we have Eκ → 0 and q_n → 1. This means that the right-hand side of (4.8) goes to 1, giving the asymptotically right value for the overflow probability.
4.2 Buffer Cell-Loss Probability Lower Bound
Bounds similar to those of Statements B.2 (Appendix B) and 10 can be obtained for the
cell-loss probability. To show that, we will first define this probability.
Let cells of the input traffic be numbered by m (for example, in order of arrivals of their sources and in order of cell arrivals within active periods). As usual, for the queueing system G/D/1/h with a given service discipline d ∈ D(h), the probability of cell-m loss, P_loss(m), is the probability that the cell m is discarded. The stationary loss probability, P_loss, is the limit of P_loss(m) as m → ∞, if this limit exists.
Similar to the overflow probability calculations, if P_loss exists, it does not depend on the service discipline d ∈ D(h), according to Statement B.1 in Appendix B.
The following major result is obtained:
Statement 11. If Eτ < ∞ and Pr{ξ = 0} < 1, the queueing system GY1/D/1/h with any service discipline from D(h) has the loss probability P_loss lower-bounded by (C.1) (Appendix C) and

P_loss ≥ C(Eτ + Eκ) G_1 = G_1 / ((Eξ)(Eτ))        (4.9)

and, in the case of input traffic Y1 and the Pareto-type distribution Pr{τ = n} = c n^{−α−1}, by

P_loss ≥ C(Eτ + Eκ) G_2 = G_2 / (λ_1 (Eτ))        (4.10)

where G_1 and G_2 are defined in (4.1) and (4.4) respectively.
The bound (4.10) shows that, as with the overflow probability, the cell-loss probability decreases no faster than hyperbolically with increasing buffer size h when Y is self-similar input traffic. These results mean that previous analyses and dimensioning of ATM buffers using traditional traffic models require major revisions in order to provide adequate quality of service (QoS) to self-similar traffic, e.g. the LAN traffic observed at Bellcore and VBR video traffic.
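The contrast between hyperbolic and exponential decay can be made concrete with a small sketch (the constants alpha and gamma are purely illustrative assumptions, not values from the paper): any exponential eventually falls far below any hyperbola, so a buffer dimensioned from a Markovian (exponential) model is optimistic at large h.

```python
from math import exp

alpha, gamma = 1.5, 0.01            # hypothetical decay parameters
for h in (10, 100, 1000, 10000):
    hyperbolic = h ** (1 - alpha)   # lower-bound decay rate, self-similar input
    exponential = exp(-gamma * h)   # decay under a traditional Markov model
    print(h, hyperbolic, exponential)
# Past some crossover h, exponential << hyperbolic.
```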
4.3 Buffer Cell Delay Distribution
Let us now consider the case when the cells of different sources in traffic GY1 are distinguishable, so that the source-by-source discipline (Appendix A) can be applied. The source-by-source discipline is FIFO for sources (not for cells): the sources are serviced in order of their arrivals, and the cells of each source are transmitted, during its transmission, in order of their generation. For this case, when the buffer is infinite, the traffic Y has generally distributed ξt, and the sources have constant rate R = 1, it is possible to find the probability distribution of the cell delay.
We define the cell delay in the following way. Let, at time t, a source additional to the traffic Y, called the test source, arrive and start its active period of random length τ′, independent of all τs and ξt. The test source is randomly ordered among the ξt + 1 sources arriving at t.
Definition. The cell delay δ is the time spent in the buffer by a cell randomly chosen
among the cells generated by the test source.
For our statement below, we need some notation. Let Pm, m ∈ I0, denote the solution of the steady-state equations

Pn = ∑_{m=0}^∞ Pm pmn,   ∑_{m=0}^∞ Pm = 1   (4.11)

with transition probabilities pmn, m ∈ I0, n ∈ I0, given by the equations

p00 = p0 + p1,
pmn = p_{n−m+1}, 0 ≤ m ≤ n + 1,
pmn = 0, m > n + 1,   (4.12)

where

pl = ∑_{j=0}^∞ Pr{ξ = j} W(l | j),   (4.13)

W(l | j) = ∑_{l1+...+lj=l} ∏_{i=1}^j Pr{τ = li}.   (4.14)
Equation (4.14) defines W(l | j) as the probability that the total number of cells in the active periods of j sources equals l, i.e. W(l | j) = Pr{τ1 + ... + τj = l}.
The transition probabilities (4.12) are those of the Markov chain νt, t ∈ I∞, where νt is the number of cells which were generated, or will be generated, by the sources that started their active periods at t − 1 or earlier but whose cells were not transmitted by t. We note that the number of cells in the buffer at t does not form a Markov chain, whereas νt does.
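A numerical sketch can make the chain defined by (4.11)-(4.14) concrete. The following is only an illustration under assumed distributions (Poisson ξ and geometric τ, with hypothetical parameters lam and q; the paper allows general distributions): it builds W(l | j) by convolution, forms the transition matrix (4.12), and solves (4.11) by truncation and power iteration.

```python
import numpy as np
from math import exp, factorial

# Illustrative assumptions, not from the paper:
# xi ~ Poisson(lam) sources per slot, tau ~ Geometric(q) on {1,2,...}.
lam, q, N, Jmax = 0.3, 0.5, 200, 30      # N = truncation level of the chain

tau_pmf = np.array([q * (1 - q) ** (n - 1) for n in range(1, N + 1)])

# W(l|j) = Pr{tau_1 + ... + tau_j = l} by repeated convolution, cf. (4.14)
W = [np.zeros(N + 1)]
W[0][0] = 1.0                            # W(l|0) = 1 iff l = 0
for j in range(1, Jmax + 1):
    W.append(np.convolve(W[-1], np.r_[0.0, tau_pmf])[: N + 1])

# p_l = sum_j Pr{xi = j} W(l|j), cf. (4.13)
p = sum(exp(-lam) * lam ** j / factorial(j) * W[j] for j in range(Jmax + 1))

# Transition matrix, cf. (4.12): p_{mn} = p_{n-m+1}, with p_{00} = p_0 + p_1
P = np.zeros((N, N))
for m in range(N):
    for n in range(max(m - 1, 0), N):
        P[m, n] = p[n - m + 1]
P[0, 0] = p[0] + p[1]

# Solve the steady-state equations (4.11) by power iteration
pi = np.ones(N) / N
for _ in range(5000):
    pi = pi @ P
    pi /= pi.sum()
# Balance check for this recursion: pi_0 * p_0 = 1 - lam*E[tau] = 0.4,
# so pi_0 comes out near 0.54 for these parameters.
print(pi[0])
```

The geometric/Poisson choice keeps W(l | j) computable by plain convolutions; any other pmf for τ can be dropped into `tau_pmf` unchanged.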
Statement 12. For the queueing system GY1/D/1/∞ with the source-by-source discipline, the stationary distribution of the cell delay is given by

Pr{δ = x} = ∑_{n∈I0, l∈I0, n+l=x} Pn [ ∑_{m=0}^∞ (Pr{ξ = m}/(m + 1)) ∑_{j=0}^m W(l | j) ],   x ∈ I0.   (4.15)
Proof. To derive (4.15), it is necessary to note that (a) the cell delay equals the delay of the first cell in the test-source active period, and (b) the delay of the first cell equals γt + νt. Here, γt is the number of cells in the active periods of those sources which arrive at the same time t as the test source and have to be served before it. The random variables γt and νt are independent. In (4.15), Pn = Pr{ν = n} is the steady-state distribution of νt, and the expression in square brackets is the steady-state distribution of γt. ∴
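The bracketed term in (4.15) is the steady-state law of γt and must itself sum to one over l, since every W(· | j) is a pmf and the 1/(m + 1) factors cancel. A small self-contained check (again assuming, purely for illustration, Poisson ξ and geometric τ):

```python
import numpy as np
from math import exp, factorial

lam, q, N, Jmax = 0.3, 0.5, 400, 30   # hypothetical parameters; N truncates l

tau_pmf = np.array([q * (1 - q) ** (n - 1) for n in range(1, N + 1)])

# W(l|j) as in (4.14), built once by repeated convolution
W = [np.zeros(N + 1)]
W[0][0] = 1.0
for j in range(1, Jmax + 1):
    W.append(np.convolve(W[-1], np.r_[0.0, tau_pmf])[: N + 1])

# g(l) = sum_m Pr{xi = m}/(m+1) * sum_{j=0}^m W(l|j): the law of gamma_t
g = np.zeros(N + 1)
inner = np.zeros(N + 1)               # running sum_{j<=m} W(l|j)
for m in range(Jmax + 1):
    inner += W[m]
    g += exp(-lam) * lam ** m / factorial(m) / (m + 1) * inner
print(g.sum())   # ~1: each W(.|j) sums to 1 and sum_m Pr{xi=m} = 1
```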
After the original submission of this paper, Anatharam [1] presented a result similar to Statement 12, and the papers [18] and [19] presented upper bounds to the overflow probability.
5. Conclusions
In this paper, we first gave some definitions and clarifications regarding second-order self-similar processes. We showed that some previous definitions contained redundancy. We also
gave some results on the merging of second-order self-similar processes. These results are
important for the performance analysis of ATM multiplexers and switches, since they give
conditions under which second-order self-similar streams merge into either an exactly or
asymptotically second-order self-similar stream. We then considered a broad class of
asymptotically second-order self-similar processes, to model self-similar input in an ATM queue.
This class includes the model of [4] for a self-similar process, namely Poisson-arriving bursts with a Pareto distribution of active periods and constant rate 1 inside these periods. We can use
results of this modeling for identification of parameters from actual traffic measurements. We then
derived a lower bound to the buffer overflow probability in a finite ATM buffer of size h cells. It
was shown that this probability cannot decrease faster than hyperbolically with the growth of
buffer size h, in contrast to the one in traditional Markov models that decreases exponentially with
increasing h. This is a direct consequence of the self-similarity of input traffic. We have also
established a lower bound to the buffer cell-loss probability and showed that it also decreases
hyperbolically with increasing h. These theoretical proofs, presented in the literature for the first time (in [12], we had given a conjecture on the asymptotics of the cell-loss probability as the
buffer size goes to infinity), firmly establish that ATM buffers, designed under traditional traffic
modeling and analysis, should be increased significantly, in order to provide adequate QoS to
traffic exhibiting self-similar features. Finally, considering a particular service discipline that we
called "source-by-source" service, we established the stationary distribution of the cell delay in an
infinite ATM buffer. We believe that these results will be most useful for the buffer dimensioning
and design of ATM switching systems, and will also stimulate further research in this fascinating
field.
Acknowledgement
The authors are grateful to the anonymous reviewers for their constructive comments and
suggestions.
References
[1] V. Anatharam, "On the Sojourn Time of Sessions at an ATM Buffer with Long-Range Dependent Input Traffic", Proc. 34th IEEE CDC, New Orleans, Dec. 1995.
[2] J. Beran, Statistics for Long-Memory Processes, New York, Chapman and Hall, 1994.
[3] J. Beran, R. Sherman, M. S. Taqqu and W. Willinger, "Long-Range Dependence in Variable-Bit-Rate Video Traffic", IEEE Trans. on Communications, Vol. 43, No. 2-4, pp. 1566-1579, 1995.
[4] D. R. Cox, "Long-Range Dependence: A Review", in Statistics: An Appraisal, H. A. David and H. T. David, eds., Ames, IA: The Iowa State University Press, pp. 55-74, 1984.
[5] D. R. Cox, "Long-Range Dependence, Non-Linearity and Time Irreversibility", Journal of Time Series Analysis, Vol. 12, No. 4, pp. 329-335, 1991.
[6] D. R. Cox and V. Isham, Point Processes, London and New York, Chapman and Hall, 1980.
[7] B. Grigelionis, "On the Convergence of Sums of Random Step Processes to a Poisson Process", Probability Theory and its Applications, Vol. 8, No. 2, pp. 177-182, 1963 (in Russian).
[8] A. N. Kolmogorov and Y. V. Prokhorov, "On Sums of a Random Number of Random Variables", Uspehi Matematicheskih Nauk, Vol. 4, No. 4, pp. 168-172, 1949 (in Russian).
[9] L. Kosten, "Stochastic Theory of a Multi-Entry Buffer", Delft Progress Report, 1:10-18, 1974.
[10] W. E. Leland, M. S. Taqqu, W. Willinger and D. V. Wilson, "On the Self-Similar Nature of Ethernet Traffic (Extended Version)", IEEE/ACM Trans. on Networking, Vol. 2, No. 1, pp. 1-15, 1994. (IEEE Award Winning Paper)
[11] W. E. Leland, M. S. Taqqu, W. Willinger and D. V. Wilson, "On the Self-Similar Nature of Ethernet Traffic", in Proc. ACM Sigcomm'93, San Francisco, CA, pp. 183-193, 1993.
[12] N. Likhanov, B. Tsybakov and N. D. Georganas, "Analysis of an ATM Buffer with Self-Similar ("Fractal") Input Traffic", Proc. IEEE INFOCOM'95, Boston, MA, pp. 985-992, 1995.
[13] I. Norros, "A Storage Model with Self-Similar Input", Queueing Systems, 16, pp. 387-396, 1994.
[14] Y. G. Sinai, "Automodel Probability Distributions", Probability Theory and its Applications, Vol. 21, No. 1, pp. 63-80, 1976 (in Russian).
[15] W. L. Smith, "Regenerative Stochastic Processes", Proc. Roy. Soc. Ser. A, No. 232, pp. 6-31, 1955.
[16] G. Samorodnitsky and M. S. Taqqu, Stable non-Gaussian Random Processes, New York, London, Chapman and Hall, 1995.
[17] B. Tsybakov and N. D. Georganas, "Self-Similar Traffic: Upper Bounds to Buffer-Overflow Probability in an ATM Queue", CCBR'97, The Canadian Conference on Broadband Research, April 16-17, 1997.
[18] B. Tsybakov and N. D. Georganas, "Overflow Probability in an ATM Queue with Self-Similar Traffic", ICC'97, 1997.
[19] B. Tsybakov and P. Papantoni-Kazakos, "The Best and the Worst Packet Transmission Policies", Problems of Information Transmission, Vol. 32, No. 4, 1996.
[20] M. Villen and J. Gamo, "Simple, Tentative Model for Explaining the Statistical Characteristic of LAN Traffic", COST-242 TD(94)28.
[21] G. U. Yule, "On a Method of Studying Time Series Based on their Internal Correlation", Journal of the Royal Statistical Society, Vol. 108, pp. 208-225, 1945.
APPENDIX A: Proof of Traffic-Model Self-Similarity
Statement A.1. If

t^α Pr{τ > t} → c, t → ∞, 1 < α < 2   (A.1)

(where c is a constant, 0 < c < ∞) and Eϑ̂ < ∞, Eϑ̂² < ∞, then Y = (Y1, Y2, ...) is asymptotically second-order self-similar with parameter H = (3 − α)/2 > 0.5.
Proof. We consider the active periods which begin in the interval [t − k + 1, t] and end at time t + 1 or later. We denote the set of these active-period numbers by Aχ and the size of this set simply by χ, χ = |Aχ|.
Also, we consider the active periods which begin in the interval [−∞, t − k] and end at time t + 1 or later. We denote the set of their numbers by Aη and its size by η = |Aη|.
Finally, we consider the active periods which begin in the interval [−∞, t − k] and end in the interval [t − k + 1, t]. We denote the set of their numbers by Aζ and its size by ζ = |Aζ|.
The random variables χ, η, ζ are mutually independent and have Poisson distributions [12].
We introduce the following random variables:

Zχ = ∑_{s∈Aχ} ϑt(s),   Zη = ∑_{s∈Aη} ϑt(s),   Zζ = ∑_{s∈Aζ} ϑt(s).   (A.2)

The traffic variables Yt and Yt−k can be expressed as

Yt = Zη + Zχ,   Yt−k = Zη + Zζ.   (A.3)

Since the sets Aχ, Aη, Aζ have no intersections, the autocorrelation function of the process Y has the form

r(k) = c1 (EZη² − (EZη)²)   (A.4)
where here and below the ci are constants independent of k.
Since ϑt(s), s ∈ Aη, are independent and identically distributed, and since they are independent of η, from (A.2)-(A.4) we obtain

r(k) = c2 {(Eη)[Eϑ̂² − (Eϑ̂)²] + (Eη²)(Eϑ̂)² − (Eη)²(Eϑ̂)²}.   (A.5)

If t^α Pr{τ > t} → c3, t → ∞, then k^{α−1} Eη → c4 and k^{α−1} Eη² → c4, k → ∞. Taking this into consideration and using (A.5) when Eϑ̂ < ∞, Eϑ̂² < ∞, we get k^{α−1} r(k) → c5, k → ∞. According to Statement 6, this means that Y is an asymptotically self-similar process with parameter H = (3 − α)/2. ∴
Note that Statement A.1 holds in the more general situation when ϑ̂ takes its values not in the set I0 but in a different set, for example, in a set of real numbers.
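Condition (A.1) can be illustrated numerically for a discrete Pareto-type law Pr{τ = n} = c n^{−α−1} (a sketch with an assumed α; c is simply the normalizing constant): the scaled tail n^α Pr{τ > n} settles near c/α.

```python
alpha, M = 1.5, 1_000_000       # assumed tail exponent; M truncates the support
pmf = [n ** (-alpha - 1) for n in range(1, M + 1)]
c = 1.0 / sum(pmf)              # normalizer of Pr{tau = n} = c * n^(-alpha-1)

# tails[n] = Pr{tau > n}, accumulated backwards in one pass
tails = [0.0] * (M + 1)
for k in range(M - 1, -1, -1):
    tails[k] = tails[k + 1] + c * pmf[k]

for n in (10, 100, 1000, 10000):
    print(n, n ** alpha * tails[n])   # approaches c/alpha, so (A.1) holds
```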
APPENDIX B: Derivation of the buffer-overflow probability lower bound
First, we consider the discrete-time (t ∈ I−∞) queueing system G/D/1/h. We define a general class D(h) of service disciplines [19]. Denote by yt the number of new cells delivered by the input traffic to the buffer at time t, and denote by zt the number of "old" cells which were in the buffer at t.
Definition. A service discipline d is said to be in class D(h) if it satisfies the following two conditions: (i) if yt + zt > 0, then some cell (out of the yt + zt total) definitely goes into service at t; (ii) if yt + zt ≤ h + 1, then no cells are discarded at time t, whereas if yt + zt > h + 1, then exactly yt + zt − h − 1 cells are discarded at t.
We underline that the choice of the cell to be served (transmitted) in case (i) and the choice of the yt + zt − h − 1 cells to be discarded in case (ii) are determined by the particular discipline and are not restricted at all.
The event {yt + zt > h + 1} is called a buffer overflow at time t.
We are interested in the stationary overflow probability, Pover. We will find a convenient service discipline for computing it, based on the following:
Statement B.1. For any given sample function of the input traffic and for any given time t, the occurrence of buffer overflow at t and the number of discarded cells at t do not depend on the service discipline d, if d ∈ D(h).
It follows from Statement B.1 that if, for the system G/D/1/h under a discipline d ∈ D(h), there exists a stationary probability of buffer overflow, Pover, then Pover exists and does not change for the same G/D/1/h under any other discipline from D(h). This means that the overflow probability depends only on the input traffic and the buffer size. The same conclusions are also true for the cell-loss probability introduced in 4.2.
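Statement B.1 reflects the fact that the definition of D(h) forces every discipline to discard the same number of cells per slot, so the buffer occupancy obeys a single recursion. A minimal simulation of that recursion (the arrival distribution below is an arbitrary illustrative choice, not the paper's traffic model):

```python
import random

random.seed(1)
h, T = 5, 100_000        # buffer size and number of simulated slots
z = 0                    # "old" cells in the buffer
overflows = 0
for _ in range(T):
    y = random.choices((0, 1, 2, 3), weights=(60, 25, 10, 5))[0]  # new cells
    total = y + z
    if total > h + 1:    # overflow: keep h in the buffer plus 1 in service
        overflows += 1
        total = h + 1
    z = max(total - 1, 0)  # one cell is transmitted per slot
print(overflows / T)     # empirical overflow frequency for this toy traffic
```

Which particular cells are kept never enters the recursion, which is exactly why Pover is discipline-independent within D(h).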
After the above general observation, we now consider the queueing system GY1/D/1/h. Our aim is to get a lower bound to the overflow probability Pover for GY1/D/1/h. In getting the bound, we consider GY1/D/1/h under a special service discipline which may not be realizable but certainly is in D(h). We need this discipline only to facilitate our derivation of the lower bound to Pover.
To present the discipline, we suppose that at time t = 0 the buffer is empty and a source (we will call it the initial source 1) arrives, that is, starts its active period of length denoted by τ1. The cells of the initial source 1 are transmitted in the interval [0, τ1), called the transmission interval T1.
After time τ1, let the first new source arrive at τ1 + κ1, with active-period length denoted by τ2. This source is called the initial source 2. The interval [τ1, τ1 + κ1) is called the intermission interval I1. The cells of the initial source 2 are transmitted in the transmission interval T2 = [τ1 + κ1, τ1 + τ2 + κ1).
In general, after the transmission of the initial source i (i ≥ 2) in the transmission interval Ti = [ti−1, ti−1 + τi), the intermission interval Ii = [ti−1 + τi, ti) follows without new source arrivals, where ti = ∑_{j=1}^i (τj + κj), τj is the active-period length of the initial source j, and κj is the length of the intermission interval Ij.
Suppose that some other sources (except the initial source i) arrive in Ti. We denote the set of these sources by Bi. At each time t ∈ Ti, the discipline puts the new cells of Bi-sources into the buffer if it has enough empty space, or discards some of the new and old cells and keeps exactly h cells in the buffer if it does not have enough empty space to accommodate all cells. In the latter case, the cells of Bj-sources can be discarded only if all cells of B1, ..., Bj−1-sources have been discarded or transmitted.
The cells of Bi-sources, if they were generated but not discarded, can be transmitted in the intermission intervals Ij, j ≥ i, according to some given discipline which we do not specify.
This concludes the presentation of our discipline; we now construct the lower bound to the overflow probability Pover.
Consider the random sequence jt, t ∈ I0, such that jt = 1 if t is an overflow moment and jt = 0 otherwise. We have Pover = lim_{t→∞} Pr{jt = 1}.
Since j0, j1, ... is a sequence of random variables with a complicated dependence structure, it is doubtful that a precise closed-form expression for Pover can be found. That is why we focus our attention on finding a lower bound to Pover. To construct the bound, we need to destroy some dependences in the sequence j0, j1, .... We will neglect (not account for) all overflow events in intermission intervals, as well as those overflow events in each transmission interval Ti at which the cells of Bi-sources are not discarded, i ∈ I1. In
doing so, we convert the sequence jt, t ∈ I0, into the sequence ĵt by substituting ĵt = 0 for all the neglected overflow events and not changing the other jt (i.e., ĵt = jt). In this new sequence ĵt: (1) the number of moments t with ĵt = 1 on Ti ∪ Ii evidently does not exceed the number of moments with jt = 1 on Ti ∪ Ii, where Ti ∪ Ii is the union of Ti and Ii; and (2) the segments of the sequence ĵt on the intervals Ti ∪ Ii become independent and identically distributed, so that the sequence ĵt can be considered as a regenerative process having its i-th regenerative cycle in the interval Ti ∪ Ii.
Now, with Smith's theorem [15], we can express the steady-state probability of overflow in the regenerative process in terms of Eρ and Eτ + Eκ, where ρ, τ, and κ are generic for ρi, τi, and κi, respectively. Here ρi is the number of overflow moments in Ti at which the cells of Bi-sources are discarded. The variables ρi, i ≥ 1, are identically distributed and independent. In particular, Pr{ρi = k} = Pr{ρ1 = k}, and Pr{ρ1 = k} relates to the case of an empty buffer at t = 0.
As a result, we have the following bound:

Pover ≥ Eρ / (Eτ + Eκ).   (B.1)

In (B.1), Eκ can be expressed as

Eκ = (1 − Pr{ξ = 0})^{−1} − 1.   (B.2)
We summarize with a statement.
Statement B.2. If Eτ < ∞ and Pr{ξ = 0} < 1, the queueing system GY1/D/1/h with any service discipline from D(h) has overflow probability Pover lower-bounded by (B.1).
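Equation (B.2) has a direct interpretation: κ is the number of further empty slots before the next source arrival, which is geometric with success probability 1 − Pr{ξ = 0}. A one-line check (assuming, for illustration only, Poisson ξ with rate λ1):

```python
from math import exp

lam1 = 0.1                          # hypothetical source arrival rate
p0 = exp(-lam1)                     # Pr{xi = 0} for Poisson arrivals
E_kappa = 1.0 / (1.0 - p0) - 1.0    # (B.2)
print(E_kappa)                      # equals p0/(1 - p0), the geometric mean count
```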
Now we want to find a lower bound to Eρ. Consider the transmission interval [0, τ1) and assume that its length is given, τ1 = n. We will find a lower bound to J(n), where J(n) denotes the average number of overflow moments in [1, n) given that the buffer is empty at t = 1. We denote by B1′ the set of sources that arrived (i.e. started their active periods) in the interval [1, n).
A moment t is called a generation moment of [1, n) if at least one B1′-source generates a cell at time t. Let J1(n) be the average number of generation moments in [1, n) given that the buffer is empty at t = 1. It is clear that Eρ ≥ EJ(τ) ≥ E[J1(τ)] − h. We want to find a lower bound to J2(n) = J1(n) − h.
The number of generation moments in [1, n) is not less than the number of generation moments made by the following sequence of sources. Source 1′ in the sequence is the one which arrives first after time 0. Let t1′ be the moment of arrival of source 1′ and τ1′ the length of its active period. Source 1′ makes τ1′ generation moments if all of them lie inside [1, n). Source 2′ in the sequence is the one which arrives first after time t1′ + τ1′ − 1. Let t2′ be the moment of arrival of source 2′ and τ2′ the length of its active period. Source 2′ makes τ2′ generation moments if t2′ + τ2′ ∈ [1, n). And so on: source i′ arrives at ti′ with active-period length τi′, and together with all previous sources 1′, ..., (i − 1)′ it gives τ1′ + ... + τi′ generation moments if ti′ + τi′ ∈ [1, n). Let π denote the number of sources in the sequence which end their active periods in (1, n]. We have
J2(n) ≥ max{(E ∑_{i=1}^π τi′) − h, 0}.   (B.3)
Since

∑_{i=1}^{π+1} (τi′ + κi′) ≥ n − 1,

where κ1′ = t1′ − 1 and κi′ = ti′ − (ti−1′ + τi−1′), and since the random variable π + 1 does not depend on τi′ + κi′ with i > π + 1, we have [8]

(Eτ + Eκ)E(π + 1) ≥ n − 1

and

Eπ ≥ max{(n − (Eτ + Eκ) − 1) / (Eτ + Eκ), 0}.   (B.4)

We noted in (B.4) and above that Eτi′ = Eτ and Eκi′ = Eκ.
Using the obvious inequality τi′ ≥ 1, we get the bound E ∑_{i=1}^π τi′ ≥ Eπ. (Here we cannot use the result of [8] to write E ∑_{i=1}^π τi′ ≥ (Eπ)(Eτ), since the random variable π does depend on τ′_{π+1}.) With the last bound, and also with (B.3) and (B.4), we have

J2(n) ≥ max{(n − (Eτ + Eκ) − 1) / (Eτ + Eκ) − h, 0}.   (B.5)
Before averaging over n, we adopt the bounds J(n) ≥ 0 for n ≤ h and J(n) ≥ [right-hand side of (B.5)] for n ≥ h + 1. From (B.5), we get

Eρ ≥ ∑_{n=h+1}^∞ max{(n − (Eτ + Eκ) − 1) / (Eτ + Eκ) − h, 0} Pr{τ = n} ≥ a ∑_{n=[(h+b)/a]+1}^∞ (n − (h + b)/a) Pr{τ = n}   (B.6)

where

a = 1 / (Eτ + Eκ) ≤ 1,   b = a + 1 ≥ 1,

[x] is the integral part of x, and we used the inequality (h + b)/a ≥ h + 1.
Substituting [(h + b)/a] + 1 for (h + b)/a in the summand and using the equation

∑_{n=n0+2}^∞ (n − n0 − 1) Pr{τ = n} = ∑_{n=n0+2}^∞ Pr{τ ≥ n},   n0 ≥ 0,

we obtain the lower bound (4.1) of Statement 10.
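The summation identity used in this last step holds for any pmf on {1, 2, ...}; a direct numerical check (with a hypothetical geometric τ, truncated at M):

```python
q, M, n0 = 0.4, 2000, 5             # assumed pmf parameter and an n0 >= 0
pmf = [0.0] + [q * (1 - q) ** (n - 1) for n in range(1, M + 1)]

suffix = [0.0] * (M + 2)            # suffix[n] = Pr{tau >= n}
for n in range(M, 0, -1):
    suffix[n] = suffix[n + 1] + pmf[n]

lhs = sum((n - n0 - 1) * pmf[n] for n in range(n0 + 2, M + 1))
rhs = sum(suffix[n] for n in range(n0 + 2, M + 1))
print(lhs, rhs)                     # equal: both sides count the same mass
```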
APPENDIX C: Derivation of Cell-Loss Probability Lower Bound
We first show that for GY1/D/1/h,

Ploss ≥ C Eρ   (C.1)

where

C^{−1} = (Eξ)(Eτ)[Eτ − 1 + (1 − Pr{ξ = 0})^{−1}] = (Eξ)(Eτ)(Eτ + Eκ)

and Eρ is defined as in (B.1). We will use the same service discipline d as in deriving (B.1).
For the sake of simplicity, we suppose, as above, that at time t = 0 the buffer is empty and an initial source 1 (Appendix B) arrives, with active period of length τ1. Let the cells generated at time t = 0 and after be numbered (again for simplicity) by m ∈ I1, in the same order as in 4.2. Consider the sequence of cells ordered according to their numbers, and partition this sequence into intervals L1, L2, ... such that the interval Lj contains only cells of sources that arrived in the interval Tj ∪ Ij introduced before Statement B.2.
Consider the random sequence im, m ∈ I1, such that im = 0 if cell m is (eventually) transmitted and im = 1 if cell m is (eventually) discarded, and note that Ploss = lim_{m→∞} Pr{im = 1}. Then we convert this sequence im into the sequence îm by putting îm = 0 for all cells in Lj which are not discarded or transmitted during Tj. We do this for each j ∈ I1. In this new sequence îm: (1) the number of cells with îm = 1 on Lj is not greater than the analogous number in the sequence im; and (2) the segments of the sequence îm on the intervals Lj become independent and identically distributed, so that the sequence îm can be considered as a regenerative process having its j-th regeneration cycle on the interval Lj.
The application of Smith's theorem [15] gives

Ploss ≥ (average number of îm = 1 on Lj) / (average length of Lj).
Since (1) the numerator is not less than Eρ (the average number of overflow moments in Tj at which the cells of Bj-sources are discarded) and (2) the denominator equals

E[τ1 + ∑_{v=1}^{ξ0*} τv(0) + ∑_{t=1}^{τ1−1} ∑_{v=1}^{ξt} τv(t)] = C^{−1}

(where ξ0* is the number of other sources arriving at t = 0 together with the initial source 1; τ1(0), ..., τξ0*(0) are the lengths of the active periods of these other sources; τ1 is the active-period length of the initial source 1; τ1(t), ..., τξt(t), for t > 0, are the lengths of the active periods of sources arriving at t; and the sums over v are 0 if ξt = 0 or ξ0* = 0), we get the bound (C.1).
The bounds (4.9) and (4.10) follow from (C.1) and the lower bound to Eρ given in Appendix B. Alternatively, (4.9) and (4.10) can be proven using a relation between Pover and Ploss.
Fig.1 The autocorrelation function of an exactly second-order self-similar process with Hurst parameter H = 1 − β/2.
Fig.2 The spectral density of an exactly second-order self-similar process with Hurst parameter H = 1 − β/2.
Fig.3 The probability that the active-period length is greater than l for an exactly second-order self-similar process Y with Hurst parameter H = 1 − β/2.