appeared in Proc. of 2nd European Conf. on Universal Multiservice Networks ECUMN'2002, April 2002
Using only Proportional Jitter Scheduling at the boundary of a
Differentiated Service Network: simple and efficient
Thu Ngo Quynh (*), Holger Karl (**), Adam Wolisz (**), Klaus Rebensburg (*)
(*) Interdepartmental Research Center for Networking and Multimedia Technology, PRZ / FSP-PV / TUBKOM
(**) Telecommunication Networks Group, Department of Electrical Engineering and Computer Science
Technical University of Berlin
Strasse des 17. Juni 136, 10623 Berlin, Germany
Email: {thu, klaus}@prz.tu-berlin.de, {wolisz, karl}@ee.tu-berlin.de
Abstract: There exist several studies of Proportional Delay Scheduling algorithms for Differentiated Services (DiffServ) networks, which schedule packets of different classes proportionally on the basis of their queuing delay. Traditionally, such proportional delay scheduling algorithms have to be implemented at all routers of the network in order to obtain proportional delay between different classes. In this paper, we propose a new and simple model that provides not proportional delay but proportional delay jitter between different classes. In addition, we point out that, to obtain proportional jitter in such a DiffServ network, it is only necessary to implement a proportional jitter scheduling algorithm at the boundary of the network. Furthermore, we propose several networks that use RJPS (Relative Jitter Packet Scheduling) and WTP (Waiting Time Priority) at different positions (core or egress) together with a playout buffer delay adjustment algorithm (the Concord algorithm) at the receiver. Finally, we compare their performance in terms of normalized end-to-end delay under different loss probabilities at the playout buffer. Our first results show that a proportional jitter DiffServ network achieves better performance in some cases while keeping the implementation cost lower.
1. Introduction
Currently, the Internet provides only a single service class – best effort – which requires no pre-specified quality of service (QoS) contracts and provides no minimum QoS guarantees for packet flows. To provide adequate service, that is, some level of quantitative or qualitative determinism, IP services must be supplemented, and that is what QoS protocols are designed to do. A number of QoS protocols have evolved to satisfy a variety of application needs: Integrated Services (IntServ) and Differentiated Services (DiffServ).
The DiffServ approach [1] is newer than the IntServ
approach and proposes a coarser notion of quality of
service, focusing primarily on aggregated flows in the core
routers, and intends to differentiate between service classes
rather than provide absolute per-flow QoS guarantees. In
particular, access routers process packets on the basis of finer traffic granularity, such as per-flow or per-organization, while core routers do not maintain fine-grained state but process traffic based on a small number of Per Hop Behaviours (PHBs) [4] encoded in the packet header.
Since late 1997, the DiffServ working group has discussed
several proposals for per-class relative QoS guarantees [2,
3]. With the exception of the Expedited Forwarding service
[4], proposals for relative per class QoS discussed within
the DiffServ context define the service differentiation
qualitatively, in the sense that some classes receive lower
delays and a lower loss rate than others, but without
quantifying the differentiation. Recently, research studies
have tried to strengthen the guarantees of relative per-class
QoS, and have proposed new buffer management and
scheduling algorithms, which can support a stronger notion
of relative QoS [5, 6, 7, 8]. Probably the best known such
effort is the proportional service differentiation model [5,
6], which attempts to enforce that the ratios of delays or
loss rates of successive priority classes be roughly
constant. For two priority classes such a service could
specify that the delays of packets from the higher priority
class be half of the delays of the lower priority class, but
without specifying an upper bound on the delays. The
Waiting Time Priority (WTP) scheduler [6] is one such algorithm; it implements a well-known scheduling discipline with time-dependent priorities.
Traditionally, to achieve proportional delay between different classes in a DiffServ network, such proportional delay scheduling algorithms, for example WTP, have to be implemented at all routers.
Our earlier work [9] showed that it is possible to meet the requirements of the Relative Proportional Differentiated Service model in terms of delay jitter by using a simple new scheduling algorithm called RJPS (Relative Jitter Packet Scheduling). That paper also examined the behavior of RJPS in different contexts, such as variable window size, variable packet size and variable load conditions, but only within a single hop.
In [10] we compared the performance of two networks that use only RJPS or only WTP at their routers. The results showed that our scheduler RJPS can produce smaller delays than WTP in some cases, but that the delay in an RJPS network fluctuates much more than in a WTP network, WTP being a very stable scheduler under different load conditions. Hence we proposed further topologies that use RJPS at the ingress or egress router while the others use WTP, in order to obtain proportional jitter between different classes and to minimize the delay at the receiver after the playout buffer.
It is important to note that we should not only examine the behavior of delay and jitter proportional schedulers in the same network to verify which produces the better network delay, but also the influence of these network delays on the playout buffer algorithm, because the playout buffer delay is adjusted according to the variation of the network delay (that is, the delay jitter) and the loss rate.
In this paper, we propose another model of a DiffServ network, one that does not provide proportional delay between different classes, but proportional jitter. We show that such a proportional jitter network requires proportional jitter scheduling algorithms only at the boundary of the network, which greatly decreases the implementation cost.
Whether a model of Relative Proportional Differentiated Service is implemented in terms of delay or of delay jitter, it is worth keeping in mind that the objective of our work is to improve the end-to-end quality of service, in this case the end-to-end delay (the sum of the network delay and the playout buffer delay).
We therefore focus our research on networks that can provide proportional delay and/or jitter between different classes, and we develop a new performance criterion, called the normalized end-to-end delay, for comparing their quality on the basis of end-to-end delay.
This paper is organized as follows. In Section 2 we give an overview of current work on relative per-class QoS guarantees and on playout buffer delay adjustment algorithms. In Section 3 we formally describe the Proportional Differentiated Service model. Section 4 models our network, proposes several network topologies and defines the performance criterion used to compare their quality. Detailed results are shown in Section 5, and in Section 6 we present brief conclusions.
2. Related Work
Proportional Jitter Scheduler: RJPS is a work-conserving packet scheduler that serves N queues, one for each class. The goal of this algorithm is to provide proportional jitter between different classes. Suppose that each router has a pre-specified number of jitter classes N. Each jitter class is served by a single first-in-first-out (FIFO) packet queue. Packets of a flow belonging to jitter class i are queued in the corresponding queue in each router that the flow passes through. All flows with the same jitter class specification share the same FIFO queue at the router. The goal of the RJPS scheduling algorithm is to serve the packets such that the short-term average jitter (calculated over a moving window) and the long-term average jitter (calculated from the beginning of the simulation) experienced by packets in a jitter class stay proportional for all pairs of classes. In [9] the behavior of RJPS in different contexts, such as variable window size, variable packet size and variable load conditions, is examined.
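To make the scheduling idea more concrete, the following is a minimal sketch of a work-conserving proportional-jitter scheduler. It is not the exact RJPS algorithm of [9] (whose details are given there); the queue-selection rule based on the weighted prospective jitter of the head-of-line packet is our own assumption, used only for illustration.

```python
from collections import deque

class JitterProportionalScheduler:
    """Hypothetical sketch of a work-conserving proportional-jitter scheduler.

    Assumption (not the exact RJPS rule of [9]): the jitter of a packet is the
    difference between its queuing delay and the queuing delay of the previously
    served packet of its class, and the backlogged class whose prospective
    jitter, weighted by its per-class parameter, is largest is served next.
    """

    def __init__(self, jitter_params):
        self.delta = jitter_params                      # per-class weights
        self.queues = [deque() for _ in jitter_params]  # one FIFO per class
        self.last_delay = [None] * len(jitter_params)   # delay of last served packet

    def enqueue(self, cls, pkt, arrival_time):
        self.queues[cls].append((pkt, arrival_time))

    def _priority(self, cls, now):
        _, arrival = self.queues[cls][0]
        delay_now = now - arrival                       # delay if served right now
        last = self.last_delay[cls]
        jitter_now = abs(delay_now - last) if last is not None else delay_now
        return jitter_now * self.delta[cls]             # weighted prospective jitter

    def dequeue(self, now):
        backlogged = [c for c, q in enumerate(self.queues) if q]
        if not backlogged:
            return None
        cls = max(backlogged, key=lambda c: self._priority(c, now))
        pkt, arrival = self.queues[cls].popleft()
        self.last_delay[cls] = now - arrival
        return cls, pkt
```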
Proportional Delay Scheduler: Early work by Dovrolis outlined the definition and main issues of relative service differentiation. It also introduced two scheduling
algorithms for delay differentiation, namely the Backlog
Proportional Rate Scheduler (BPR) and the Waiting Time
Priority scheduler (WTP) [6]. The BPR scheduler is a
version of Weighted Fair Queuing, whose rates can be
dynamically adapted to the load situation of the system in
order to maintain delay differentiation between classes of
service. Delay Differentiation under BPR is found to be
inaccurate in the case of moderate traffic load or
asymmetric load distribution between the classes of
service. In contrast, WTP can obtain more accurate delay
differentiation by decoupling delays from service rates.
Another proportional delay scheduler, called MDP, was proposed by the Timely group [8]. This scheduler converges better than WTP because it maintains a window of packets over time for calculating the average delay and priority of each class, at the expense of higher complexity.
We conclude that, among the proportional delay schedulers, WTP can achieve delay differentiation very accurately. The BPR scheduler, on the other hand, can support throughput differentiation and link sharing due to its load-controlled nature, but its performance in terms of delay differentiation is unpredictable. MDP is more complex and depends strongly on the traffic pattern. We therefore consider WTP the best of the proportional delay schedulers.
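For comparison, a minimal sketch of a WTP-style time-dependent-priority rule follows. The normalization of the head-of-line waiting time by the class weight follows the convention used later in Section 3 (a higher Δ_i marks a better class); this is our own rendering for illustration, not code from [6].

```python
from collections import deque

def wtp_dequeue(queues, delta, now):
    """Waiting Time Priority, sketched with time-dependent priorities.

    queues : list of deques of (packet, arrival_time), one FIFO per class
    delta  : per-class weights (a higher value marks a better class)
    The backlogged class whose head-of-line waiting time, weighted by its
    class parameter, is currently largest is served first.
    """
    backlogged = [c for c, q in enumerate(queues) if q]
    if not backlogged:
        return None
    cls = max(backlogged, key=lambda c: (now - queues[c][0][1]) * delta[c])
    pkt, _ = queues[cls].popleft()
    return cls, pkt
```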
Playout Buffer Delay Adjustment Algorithm: Even when such proportional schedulers are used at the routers in the Internet, a characteristic of the network is that the total delay experienced by each packet is a function of variable delays due to physical media access and relay queuing, in addition to fixed propagation delays. The result is that the time difference between transmitting any two packets at the source is unlikely to be the same as that observed between their arrivals at the destination.
This is a problem for a stream of multimedia packets, because the presence of delay variation can have an impact on the audio-visual quality as perceived by a human user.

[Figure 1. System Elements: sender, network, playout buffer (jitter removal) and receiver.]
To solve these synchronization problems, one must
introduce additional delay by buffering packets at or near
the point of presentation. Clearly, if every packet is
delayed in the buffer such that it suffers cumulatively (in
the network and buffer) a delay equal to the maximum
network delay, the receiver can reproduce a jitter-free playback.
Because the playout buffer delay is adjusted according to the variation of the network delay (that is, the delay jitter) and the loss rate, an interesting question is not only which of the delay and jitter proportional schedulers produces the better network delay in the same network, but also how these network delays influence the playout buffer algorithm. In our network, we use the playout buffer adjustment algorithm described in [11], called the Concord algorithm. This algorithm is described as follows.
Concord Algorithm: As shown in Figure 1, we have a
network N, over which sender S wishes to send stream M
to receiver R, with the packets in M being produced every
T seconds. All packets have sequence numbers.
For the network N the Concord algorithm constructs a
packet delay distribution (PDD), an estimate of the
probable delays suffered by packets in the network over a
time window. This PDD may draw on existing traffic conditions, history information or any negotiated service characteristics to derive estimates for the minimum, maximum and/or mean of the delay distribution.
[Figure 2. PDD Distribution: the cumulative packet delay distribution (0% to 100%) over delay, with the predefined loss rate L marked at the total end-to-end delay TED.]

In the example shown in Figure 2, a PDD is computed. From this distribution, given a predefined loss rate L, we can easily find the value of the total end-to-end delay TED. If we adjust the playout buffer delay of each packet dynamically such that the total end-to-end delay of every packet equals TED (that means a packet that arrives later than TED is thrown away, and every packet whose network delay $D_{network}$ is smaller than TED is buffered for $D_{Playoutbuffer} = TED - D_{network}$), we will obtain a loss rate of L%. We can write:

$$TED = f_{PDD}(w, J_{network}(t), L)$$

where L is the predefined loss rate, $w$ is the window of packets over which the packet delay distribution is calculated, and $J_{network}(t)$ is the actual variation of delay at time t. Hence we can rewrite $D_{Playoutbuffer}$ as follows:

$$D_{Playoutbuffer} = TED - D_{network} = f^{Playoutbuffer}_{PDD}(w, J_{network}(t), L)$$
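The following is a minimal sketch of this computation. The percentile-based estimate of TED from a window of observed network delays is our reading of the Concord idea, not the exact procedure of [11]; the numbers in the usage example are made up.

```python
def estimate_ted(delay_window, loss_rate):
    """Estimate TED = f_PDD(w, J_network(t), L) from a window of observed delays.

    delay_window : recent network delays (the window w over which the PDD is
                   built, e.g. the last 3000 packets as used in Section 5)
    loss_rate    : target fraction L of packets allowed to arrive later than TED
    Returns an empirical (1 - L) quantile of the observed delays.
    """
    ordered = sorted(delay_window)
    idx = min(int((1.0 - loss_rate) * len(ordered)), len(ordered) - 1)
    return ordered[idx]

def playout_buffer_delay(ted, network_delay):
    """D_playoutbuffer = TED - D_network; packets arriving after TED are dropped."""
    extra = ted - network_delay
    return extra if extra >= 0 else None   # None marks a discarded packet

# Usage example with assumed delays in milliseconds and a 5% target loss rate:
window = [100, 110, 95, 130, 160, 105, 120, 115, 140, 125]
ted = estimate_ted(window, 0.05)
print(ted, playout_buffer_delay(ted, 120))
```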
3. Formal Description of Proportional
Differentiated Service Model
The model for Relative Proportional
Differentiated Service: The proportional model
states that certain class performance metrics should be
proportional to the differentiation parameters that the
network operator chooses. A generic description of the
proportional differentiation model follows. Suppose that
$q_i(t, t+\tau)$ is the performance measure for class i in the time interval $(t, t+\tau)$, where $\tau > 0$ is the monitoring timescale. If differentiation over short timescales is desired, the value of $\tau$ should be relatively small. The proportional differentiation model imposes constraints of the following form for all pairs of classes and for all time intervals $(t, t+\tau)$ in which both $q_i(t, t+\tau)$ and $q_j(t, t+\tau)$ are defined:

$$\frac{q_i(t, t+\tau)}{q_j(t, t+\tau)} = \frac{c_i}{c_j}$$

where $c_1, c_2, \ldots, c_N$ are the generic Quality Differentiation Parameters (QDPs). The basic idea is that, even though the actual quality level of each class will vary with the class loads, the quality ratio between classes will remain fixed.
The model of Relative Proportional Differentiated Service for Delay: Based on the previous model, in the context of queuing delay, we can write:

$$\frac{d_i(t, t+\tau)}{d_j(t, t+\tau)} = \frac{\Delta_j}{\Delta_i}$$

where the parameters $\Delta_i$ are the Delay Differentiation Parameters and $d_i(t, t+\tau)$ is the average queuing delay of class i packets (class i is better than class j if $\Delta_i > \Delta_j$).
The model of Relative Proportional Differentiated Service for Jitter: In the context of queuing delay jitter, we created a new model of a Proportional Jitter Differentiated Service network, which provides proportional jitter between different classes. This model is described by the equation:

$$\frac{j_i(t, t+\tau)}{j_j(t, t+\tau)} = \frac{\Delta_j}{\Delta_i}$$

where the parameters $\Delta_i$ are the Jitter Differentiation Parameters and $j_i(t, t+\tau)$ is the average queuing delay jitter of class i packets (class i is better than class j if $\Delta_i > \Delta_j$).
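As a concrete illustration, using the class weights adopted later in Section 5 ($\Delta_0 = 2.0$ and $\Delta_1 = 1.0$), the jitter model requires

$$\frac{j_0(t, t+\tau)}{j_1(t, t+\tau)} = \frac{\Delta_1}{\Delta_0} = \frac{1.0}{2.0} = 0.5,$$

i.e. the higher-weighted class 0 should experience, on average, half the queuing delay jitter of class 1 over every monitoring interval.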
Property: To provide proportional jitter between different classes in a DiffServ network, it is only necessary to have one proportional jitter scheduler, at the egress router; the other routers can use FIFO or other scheduling algorithms. To provide proportional delay between different classes, it is necessary to have a proportional delay scheduler at all routers of the network.
It is easy to see that delay is accumulative: the delay of a class or flow through a network is the sum of the queuing delays at each router and the transmission times. Hence a proportional delay scheduler has to be implemented at every router of a DiffServ network to obtain proportional delay between different classes. Jitter, however, does not accumulate in this way, and it is only necessary to have a proportional jitter scheduler at the egress of the network to obtain proportional jitter between different classes.
This property points out that, to provide proportional jitter in a DiffServ network, it suffices to implement a proportional jitter scheduling algorithm (such as RJPS) at the egress, whereas to provide proportional delay between different classes in a DiffServ network it is necessary to implement proportional delay scheduling algorithms (such as WTP) in all routers of this network.
Furthermore, we should note that an optimal proportional delay scheduler, which produces precisely proportional delay between all classes at any time, is also a proportional jitter scheduler with the same differentiation parameters. Assume that the jitter of one packet in a queue is the difference between the queuing delay of this packet and that of the preceding packet of its class. It is then easy to see that when the current delays of two classes are proportional to their Delay Differentiation Parameters, their differences stay proportional to the Delay Differentiation Parameters as well.
In addition, following the work of the Differentiated Services Working Group of the IETF, all streams refer to unidirectional streams. Since WTP provides only average proportional delay between different classes and is not an optimal proportional delay scheduling algorithm, these observations lead us to the further idea of using WTP and RJPS at different positions in a network (egress or core routers) and comparing their performance to find out which topology is best under which conditions.
4. The Model of the Network
Suppose that the routers in our network use WTP or RJPS to schedule packets proportionally between the different classes i (i from 1 to N), and that each class i has a weight $\Delta_i$. Through our network, each packet number n of class i suffers a network delay $D_{network,i,n}$.
At the receiver, this packet belonging to class i is buffered, and its playout buffer delay is denoted $D_{Playoutbuffer,i,n}$. $D_{Playoutbuffer,i,n}$ depends on the delay variation $J_{network,i}(t)$, the loss rate $L_i$ and the window w over which the PDD is calculated, as defined by the Concord algorithm:

$$D_{Playoutbuffer,i,n} = f^{Playoutbuffer}_{PDD}(w, J_{network,i}(t), L_i)$$

The end-to-end delay of this packet is calculated as the sum of the network delay and the playout buffer delay:

$$TED = D_{endtoend,i,n} = D_{network,i,n} + D_{Playoutbuffer,i,n} = D_{network,i,n} + f^{Playoutbuffer}_{PDD}(w, J_{network,i}(t), L_i)$$
The average end-to-end delay of one class i is calculated as follows:

$$D_{endtoend,i} = \frac{\sum_{n} D_{endtoend,i,n}}{n}$$

where n is the number of received packets of class i.
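As a small illustration of these formulas, the per-class average could be computed as follows; this is a sketch, and the record structure is an assumption rather than anything taken from the paper.

```python
def average_end_to_end_delay(packets):
    """Average end-to-end delay per class (sketch, assumed record structure).

    packets: iterable of (class_id, network_delay, playout_buffer_delay)
             for the packets that were actually played out.
    Implements D_endtoend,i = sum_n (D_network,i,n + D_playoutbuffer,i,n) / n.
    """
    totals, counts = {}, {}
    for cls, d_net, d_buf in packets:
        totals[cls] = totals.get(cls, 0.0) + (d_net + d_buf)
        counts[cls] = counts.get(cls, 0) + 1
    return {cls: totals[cls] / counts[cls] for cls in totals}
```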
Performance Comparison Criteria: Because the
DiffServ philosophy is to realize Relative Differentiated
Service, which seeks to provide per-hop per-class relative
services, there are no absolute guarantees due to the lack of
admission control or resource reservations. Consequently,
the network cannot provide worst-case bounds for any
service metric. Instead, each router only guarantees that the
service invariant is locally maintained, even though the
absolute service might vary with network conditions. Whether a model of Relative Proportional Differentiated Service is implemented in terms of delay or of delay jitter, the objective of our work is a better end-to-end quality of service, in this case a better end-to-end delay (the sum of the network delay and the playout buffer delay). Hence we choose the end-to-end delay as the performance metric for comparing the quality of the different network topologies.
Suppose we have N classes, each with a weight $\Delta_i$, and that Network Topology number j produces the end-to-end delays $(D^{endtoend,0}_{NTj}, D^{endtoend,1}_{NTj}, \ldots, D^{endtoend,N-1}_{NTj})$ for the N classes when the loss rate is L%. Under the same conditions (the same load, the same loss rate, the same load distribution between the classes), Network Topology number k produces the end-to-end delays $(D^{endtoend,0}_{NTk}, D^{endtoend,1}_{NTk}, \ldots, D^{endtoend,N-1}_{NTk})$ for these N classes.
We believe that it is difficult to establish a single set of performance criteria for comparing the proposed network topologies, because the comparison depends on the pricing structure, on the importance of each class, and so on. One possibility is to introduce a cost for each byte in each class and to compare the resulting gain; one could also introduce a reward for decreasing the delay in the higher class and a corresponding loss for decreasing the delay in the lower class. For simplicity, we propose using the normalized end-to-end delay to compare the quality of these topologies.
For Network Topology number k, we define the normalized end-to-end delay $P_k$ of this network as:

$$P_k = \frac{\sum_{i=1}^{N} D^{endtoend,i}_{NTk} \cdot \Delta_i}{\sum_{i=1}^{N} \Delta_i}$$

We say that the network topology whose normalized end-to-end delay is smaller is the better one.
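As an illustration, the following minimal sketch computes $P_k$ from per-class average end-to-end delays; the delay values in the usage example are made-up numbers, not results from the paper.

```python
def normalized_end_to_end_delay(class_delays, delta):
    """P_k = sum_i D_NTk^{endtoend,i} * Delta_i / sum_i Delta_i."""
    return sum(d * w for d, w in zip(class_delays, delta)) / sum(delta)

# Hypothetical example with the class weights of Section 5 (2.0, 1.0, 1.5)
# and assumed per-class average end-to-end delays in seconds:
print(normalized_end_to_end_delay([16.9, 17.3, 17.1], [2.0, 1.0, 1.5]))
```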
Network Topologies: We examine the following network topologies and their performance in the same context. In Section 3 we concluded that, to obtain proportional jitter between different classes, it is only necessary to have RJPS at the egress router; the other routers can use FIFO, WTP or RJPS. To provide proportional delay between different classes in such a network, however, WTP is needed at all routers, and even with WTP at all routers in the network we cannot guarantee proportional delay at all times, because IP packets traverse the network over different routes with different numbers of hops, so their queuing delays do not stay proportional to each other. This is a major disadvantage of a Proportional Delay DiffServ network. Because of the limited length of this paper, we focus on the simple network topologies listed in Table 1.

Name   Topology of Network                     Property
NT1    Only RJPS                               Proportional jitter between different classes
NT2    Only WTP                                Proportional delay between different classes
NT3    WTP at core routers, RJPS at egress     Proportional jitter only between different classes
NT4    FIFO at core routers, RJPS at egress    Proportional jitter only between different classes
NT5    FIFO at core routers, WTP at egress
NT6    RJPS at core routers, WTP at egress
Table 1. Different Network Topologies

These network topologies are illustrated in Figure 3.

[Figure 3. Different Network Topologies: for each of NT1 to NT6, the chain of routers with the scheduler used at each hop, e.g. all RJPS for NT1 and FIFO at the core routers with WTP at the egress for NT5.]
5. Simulation
Our simulation study uses the ns-2.1b7a simulator [12]. The simulation model is as follows. The packet sources generate on-off traffic. The topology used is shown in Figure 4. The link capacities are 6 Mbps, 3 Mbps, 1.5 Mbps and 0.75 Mbps. There are 3 classes in total: 0, 1 and 2, with weights of 2.0, 1.0 and 1.5, respectively. We compare the end-to-end delays of the different topologies NT1 to NT6 listed in Table 1. The Concord algorithm is used at the receivers. Note that a window of packets must be maintained in order to calculate the PDD function; in our simulations, the PDD for the Concord algorithm is taken over a moving window of 3000 packets.
[Figure 4. Topology of the simulated network: sources of the different classes are connected through the routers R1, R2 and R3 to the receivers of the corresponding classes.]

We have simulated the different NTs from 1 to 6 with this simple topology of only 3 routers. At the receiver we measure the end-to-end delay, and Figure 5 shows the variation of the normalized delay P (in seconds) of these network topologies when the predefined loss rate at the receiver is 5%. From this graph, we draw the following conclusions:
• NT1, which uses RJPS at all routers, produces the smallest normalized end-to-end delay, and hence the best performance.
• The last topology, NT6, which uses WTP at the egress router and RJPS at the others, generates worse quality than NT1, NT3 and NT4, but better quality than NT2 and NT5.
• The quality of NT3 and NT4 lies between that of NT6 and NT1.
• The two other network topologies, NT2 and NT5, which use WTP at all routers or only at the egress router, show the largest normalized end-to-end delay; NT2 and NT5 thus give the worst performance in this case.

Table 2 reports the average normalized end-to-end delay calculated for NT1 to NT6 with loss probabilities from 5% to 15%, and Figure 6 plots the quality of these network topologies in order of decreasing quality. It is easy to see that in this case the quality of the networks that contain RJPS at their routers (NT1, NT3, NT4, NT6) increases when the loss rate increases and that, compared to the topologies that use WTP at their routers (NT2, NT5), they achieve higher performance. This leads us to the conclusion that, in certain cases, a network that uses RJPS only at its egress router achieves better performance than a network that uses WTP at all of its routers. In other words, implementing RJPS at the boundary of the network improves the quality of the network while greatly reducing the implementation cost.

It is very interesting to note that although NT4 requires RJPS only at its egress router, it generates better quality in this case than NT2, which implements WTP at all its routers. This allows us to say that implementing RJPS at the egress router of a DiffServ network improves the performance of the network while greatly reducing the implementation cost in some cases.
6. Conclusion
In this paper a new model of a DiffServ network that provides proportional jitter is proposed, and its performance is compared with other models of DiffServ networks that provide proportional delay. This comparison is based on our new performance criterion, the normalized end-to-end delay. Our first results show that the Proportional Jitter model can achieve better performance in some cases while greatly reducing the complexity of the network.
References:
[1] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475.
[2] D. Clark and W. Fang, "Explicit allocation of best-effort packet delivery service," IEEE/ACM Trans. on Networking, 6(4), August 1998, pp. 362-373.
[3] K. Nichols, V. Jacobson, and L. Zhang, "Two-bit differentiated services architecture for the Internet," RFC 2638, July 1999.
[4] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," RFC 2598, June 1999.
[5] C. Dovrolis, "Proportional Differentiated Services for the Internet," PhD thesis, University of Wisconsin-Madison, December 2000.
[6] C. Dovrolis and P. Ramanathan, "Proportional differentiated services, part II: Loss rate differentiation and packet dropping," in Proc. of IWQoS 2000, Pittsburgh, PA, June 2000, pp. 52-61.
[7] Y. Moret and S. Fdida, "A proportional queue control mechanism to provide differentiated services," in Proc. of the International Symposium on Computer and Information Systems (ISCIS), Belek, Turkey, October 1998, pp. 17-24.
[8] T. Nandagopal, N. Venkitaraman, R. Sivakumar, and V. Bharghavan, "Delay differentiation and adaptation in core stateless networks," in Proc. of IEEE INFOCOM 2000, Tel-Aviv, Israel, April 2000, pp. 421-430.
[9] T. Ngo-Quynh, H. Karl, A. Wolisz, and K. Rebensburg, "Relative Jitter Packet Scheduling for Differentiated Service," in Proc. of 9th IFIP Working Conference on Performance Modelling and Evaluation of ATM & IP Networks (IFIP ATM&IP 2001), 2001.
[10] T. Ngo-Quynh, H. Karl, A. Wolisz, and K. Rebensburg, "The Influence of Proportional Jitter and Delay on End to End Delay in Differentiated Service Network," in Proc. of IEEE International Symposium on Network Computing and Applications (NCA 01), Cambridge, MA, USA, October 2001.
[11] N. Shivakumar, C. J. Sreenan, B. Narendran, and P. Agrawal, "The Concord algorithm for synchronization of networked multimedia streams," in Proc. of International Conference on Multimedia Computing and Systems, 1995.
[12] UCB/LBNL/VINT Network Simulator ns (version 2), http://www-mash.cs.berkeley.edu/ns/ns.html
[Figure 5. Performance comparison between the different network topologies: normalized end-to-end delay (s) of NT1 to NT6 plotted over simulation time (s), for a predefined loss rate of 5%.]
          NT1         NT2         NT3         NT4         NT5         NT6
Loss 5%   17.070029   17.2595945  17.1555514  17.1347128  17.2633509  17.1870795
Loss 10%  15.1934032  16.8002956  15.2134038  16.1908581  17.2265406  16.3924606
Loss 15%  13.0381438  15.2202508  13.1752436  13.247668   17.2633509  14.8793064
Table 2. Performance of NT1 to NT6 with different loss probabilities (average normalized end-to-end delay in seconds)
[Figure 6. Performance comparison with different loss probabilities: normalized delay (s) of the topologies in order of decreasing quality (NT1, NT3, NT4, NT6, NT2, NT5) at loss rates of 5%, 10% and 15%.]