A Scheme for a Guaranteed Minimum Throughput Service Based on Packet Classes
Lluís Fàbrega, Teodor Jové, Pere Vilà, José Marzo
Institute of Informatics and Applications (IIiA), University of Girona
Campus Montilivi, 17071 Girona, Spain
phone: +34 972 418889 fax: +34 972 418098
e-mail: {fabrega, teo, perev, marzo}@eia.udg.es
(This work was partially supported by the Spanish Science and Technology Ministry under contract TIC2003-05567.)

Keywords – Quality of service, TCP, minimum throughput service, admission control, elastic traffic, packet classes.

Abstract – Flows from elastic applications such as web or ftp require a minimum network throughput and can benefit from an extra throughput thanks to their rate-adaptive algorithms (built into TCP). We propose a scheme for a guaranteed minimum throughput service that is able to provide different minimum throughputs to different users and protection against non-responsive sources. It is built using a small set of packet classes in a core-stateless network. At the ingress of the network each flow packet is marked as one of the set of classes, and within the network, each class is assigned a different discarding priority. The scheme includes Admission Control, using a method that is based on edge-to-edge flow measurements and that requires flows to send at a minimum rate. We evaluate the scheme through simulation using different traffic loads consisting of TCP flows that carry files of varying sizes. In the simulation, TCP uses a new algorithm that forces the source to keep the short-term sending rate above a desired minimum rate. We study the influence of several parameters on the performance in different network topologies. The results prove that the scheme guarantees the requested throughput to accepted flows and achieves a high utilization of network resources.

1. INTRODUCTION

Internet traffic is currently dominated by TCP connections carrying files generated by applications such as the web, ftp, or peer-to-peer file sharing [1]. The users of these applications expect no errors in the file transfer and the best possible response time. The source breaks up the file and sends a flow of packets at a certain sending rate, which the network delivers to the destination with some variable delays and some losses. Losses are detected and recovered by TCP through retransmission, which adds more delay (increasing the transfer time) and may also cause duplicated packets (which are discarded by the destination). From the point of view of the network, the decisive Quality of Service (QoS) parameter is the average receiving rate or network throughput (which includes the duplicates).

A basic feature of TCP flows is their elastic nature. Sources vary the sending rate (up to the capacity of the network input link) to match the maximum available throughput in the network path. Since the available throughput changes over time, TCP uses rate-adaptive algorithms that increase and decrease the sending rate in order to match these variations and minimize packet loss. TCP increases the sending rate if packets are correctly delivered and decreases it if packets are lost [2]. Another important feature of TCP flows is the heavy-tail behavior of the file size distribution observed in traffic measurements [1], that is, most elastic flows carry short files and a few flows carry very long files.

Although the traditional view is that elastic flows do not require a minimum throughput, unsatisfied users or higher-layer protocols impose a limit on the file transfer time, and if it is exceeded, the transaction is aborted. This situation implies a waste of resources, which can get even worse if the file transfer is tried again [3]. Moreover, in a commercial Internet, users will pay extra for the performance they require. Hence, there is a minimum throughput required or desired by users.

Elastic flows would be satisfactorily supported by a guaranteed minimum throughput service, which provides a minimum throughput and, if possible, an extra throughput. The service has an input traffic profile, based on some sending traffic parameters (average rate, burst size, etc.), which defines the desired minimum throughput. If the average sending rate is within the profile, packets have a guaranteed (minimum) delivery; otherwise packets have a possible (extra) delivery only when there are available resources, which are shared among flows according to a policy (best-effort, priority to short flows, etc.).

The delivery of services between the provider and the user (an end-user or a neighbor domain) is defined in a bilateral agreement (a Service Level Agreement or SLA). Regarding our service, the technical part of this agreement (the Service Level Specification or SLS) specifies that the user can ask for the service for each flow of an aggregation of any number of flows to any destination, and also for any flow's minimum throughput, as long as a contracted value is not exceeded. It also specifies the minimum percentage of requests the provider should satisfy (which is usually expected to be high).

We propose a scheme for a guaranteed minimum throughput service using Admission Control (AC), with simple and scalable mechanisms. The scheme does not need per-flow processing in the network core since it only uses a small set of packet classes. At the network ingress each flow packet is marked as one of the classes, and within the network, each class is assigned a different discarding priority. The AC method is based on edge-to-edge per-flow throughput measurements using the first packets of the flow, and it requires flows to send at a minimum rate. In this paper we develop this scheme, initially outlined in [4], and we present simulation results using statistical traffic variations to prove its validity as well as to study the influence of several parameters on its performance.

The paper is organized as follows. In section 2 we review the related work, in section 3 we describe our scheme including the AC method, and in section 4 we present simulation results using TCP flows in several network topologies. Finally we summarize the paper in section 5.
2. PREVIOUS WORK ON THROUGHPUT SERVICES IN THE INTERNET

The traditional network service in the Internet is the simple and robust best-effort service. This service, together with the TCP rate-adaptive algorithms [2], has the goal of providing a fair throughput service. This means that it provides a throughput equal to the sending rate when none of the links of the path are overloaded, and when overloading does occur it shares out the bottleneck link's bandwidth equally among the contending flows. A consequence is that if the number of flows in the bottleneck passes a certain limit, then none of the flows receive the desired throughput. Summing up, during a congestion situation (when users' demands exceed network resources) this scheme cannot provide a desired minimum throughput to any flow. Moreover, it cannot provide a different throughput to different users, and TCP sources are not protected against non-responsive sources (and therefore receive a smaller throughput).

The Assured Service [5] is a proportional throughput service able to provide different throughputs to different users and protection against non-responsive sources. It was defined in the Differentiated Services (Diffserv) architecture [6], which keeps per-flow state only at the edge but not in the core, making the network simpler and more scalable. At the edge of a Diffserv network, each flow packet is assigned to a class and a mark that identifies the class is written in the packet's header (the number of classes is small). Having classified each packet into a class, the routers then apply a different treatment to each class. The Assured Service uses two packet classes ("in" and "out"), which are assigned at the ingress to each flow packet by comparing the actual sending rate with the desired minimum throughput. If packets have to be discarded in queues, the "out" class has a higher discarding priority than the "in" class. The Assured Service provides the desired minimum throughput when the aggregated "in" traffic does not cause an overload in any of the links of the path. When an overload occurs, the throughput provided to each flow is a share of the bottleneck link's bandwidth that is proportional to (and smaller than) the desired one. Therefore the difference between the provided throughputs under congestion comes from the different desired throughputs of the flows. Summing up, like the best-effort service, under congestion the Assured Service cannot provide a desired minimum throughput to any flow.

Congestion could be avoided using resource overprovisioning, that is, dimensioning network resources so that they are never exceeded by users' demands. However, this is a highly wasteful solution. If more efficient provisioning is used, congestion can be dealt with by AC. Instead of sharing the resources among all flows so that none of them receives the desired service, AC allocates the resources to certain flows, which then receive the desired service, while the rest of the flows do not. The goal of AC is to provide a desired minimum throughput to the maximum possible number of flows during congestion. Therefore AC is actually better at satisfying more service requests, although it does make the network more complex.

A classical distributed AC method with hop-by-hop decision, based on per-flow state and signaling in the core routers, is not appropriate for elastic traffic because of its low scalability (the number of flows can be very high) and the high overhead of explicit signaling messages (most flows have a short lifetime). Therefore other kinds of AC methods have been proposed [7][8][9]. They have in common that they avoid the use of per-flow signaling, keep per-flow state only at the edge or do not need it at all, and are based on measurements, which, compared to parameter-based methods, provide a better estimate of the actual traffic load.

In the schemes proposed in [7][8] the provided throughput is the same for all accepted flows, where a flow is defined as a TCP connection. A single link is considered, and its current load is estimated from measurements of the queue occupancy. If the current load exceeds a certain threshold, new TCP connections are rejected by dropping the connection request packets or by sending connection reset packets. Therefore the scheme does not require per-connection state.

In the scheme proposed in [9] the provided throughput is the same for all accepted flows, where a flow is defined as a sequence of related packets (from a single file transfer) within a TCP connection. A network path is considered, and two measurement methods are proposed to estimate the available throughput: one measures the throughput of a phantom TCP flow, and the other measures the current loss rate of the path and converts it into a throughput estimate. Per-flow state is kept only at the edge, in a list of active flows updated in an implicit way, that is, the start of a flow is detected when the first packet is received, and its end is detected when no packet is received within some defined timeout interval.
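As an illustration (ours, not code from [9]), this implicit mechanism amounts to a small piece of bookkeeping at the edge; the timeout value below is a placeholder, not a value from the paper:

```python
FLOW_TIMEOUT = 2.0  # seconds without packets before a flow is considered ended (placeholder)

class ImplicitFlowList:
    """Implicit flow bookkeeping: a flow starts with its first packet and
    ends when no packet has been seen for FLOW_TIMEOUT seconds."""
    def __init__(self):
        self.last_seen = {}                    # flow_id -> time of the last packet

    def on_packet(self, flow_id, now):
        is_new = flow_id not in self.last_seen
        self.last_seen[flow_id] = now
        return is_new                          # True: a new flow has just started

    def expire(self, now):
        ended = [f for f, t in self.last_seen.items() if now - t > FLOW_TIMEOUT]
        for f in ended:
            del self.last_seen[f]              # end of flow detected implicitly
        return ended
```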
3. OUR PROPOSAL FOR A GUARANTEED MINIMUM THROUGHPUT SERVICE

Our scheme is similar to [9] in that a flow is defined as a sequence of related packets (from a single file transfer) within a TCP connection, a network path is considered, and per-flow state is kept only at the edge. However, our scheme is able to provide different throughputs to different flows as well as protection against non-responsive sources. Moreover, although it also uses measurements, it is built from the Assured Service scheme [5] and the set of AC methods that use end-to-end per-flow measurements [10][11]. The purpose is to add AC to the Assured Service scheme while also using a small set of packet classes. The scheme relies on the following assumptions:

• We consider a multidomain scenario, where each domain has a user-provider agreement (SLA) with each of its neighbor domains, and our scheme is used in one of the domains.

• A flow is a sequence of related packets (from a single file transfer) within a TCP connection (not a complete TCP connection, which usually has periods of inactivity).

• A list of active flows is maintained at the edge using an implicit mechanism to detect the start and end of flows [9] (therefore explicit flow signaling is not needed).

• The network architecture uses packet classes, such as, for example, Diffserv [6].

• The network architecture uses pre-established logical paths from ingress points to egress points, and each new flow arriving at the edge is assigned to one of them. An example of this mechanism is the Label Switched Paths of MPLS networks [12], when they are used without a long-term resource reservation. Instead, once a flow is accepted by our AC, resources in the logical path are reserved during the flow's lifetime. We use the logical paths to pin each flow to a route, so that all flow packets follow the same path where the reservation has been made.

• In the agreement each user specifies the desired minimum throughput of flows, e.g., the same value for all flows, a value according to the application type (ftp, web, etc.), or some other rule (this again avoids the use of explicit flow signaling).

Although we focus on a single network service, our view is a multiservice network where our scheme coexists with a low-jitter, no-loss service for real-time flows, using, for example, scheduling priority in the queues over the elastic flows of our scheme. In the next three sections we explain the intradomain operation of our scheme, and in the fourth one we discuss its needs related to the interconnection with neighbor domains.

3.1. The Intradomain Architecture of the Scheme

When the first packet of a new flow arrives at the network, the flow is assigned to a logical path and the list of active flows is updated at the ingress. The desired throughput of the flow is obtained from the corresponding user-provider agreement. The AC evaluates whether this minimum throughput requirement can be provided without losing the minimum throughput guaranteed to the already accepted flows. If the flow is accepted it receives a guaranteed minimum throughput service; otherwise, it receives a best-effort service.

The kind of AC we propose belongs to the methods based on end-to-end per-flow measurements [10][11], where end-points send a special probing flow and measure its experienced QoS to estimate whether the available resources in a path are enough to satisfy the flow's request. After this initial time period, called the AC phase, the AC decision is made. In our method [4], the first packets of the flow act as the special probing flow to test the network. The throughput is measured at the egress and then sent to the ingress, where it is compared with the requested minimum one to make the AC decision.

The whole scheme, including the AC method, uses five packet classes, each one with a different discarding priority. The classes are two A (Acceptance) classes, AIN and AOUT, two R (Requirement) classes, RIN and ROUT, and the BE (Best-Effort) class. The scheme is the following (Fig. 1):

• During the AC phase and at the ingress, packets are marked as R by a traffic meter and a marker: each flow packet is classified as "in" or "out", depending on whether the (measured) average sending rate is smaller or greater than the desired minimum throughput, and then "in" packets are marked as RIN and "out" packets as ROUT (see the sketch after this list).

• During the AC phase and at the egress, the throughput of the flow is measured.

• At the end of the AC phase and at the ingress, the measured throughput is compared to the requested minimum one: if it is smaller the flow is not accepted, and otherwise it is accepted.

• After the AC phase and at the ingress, if the flow has been accepted its packets are marked as A (AIN or AOUT, depending on the comparison between the average sending rate and the desired minimum throughput), and if it has been rejected its packets are marked as BE.

• In the output links of routers there is a single FIFO queue (to maintain packet ordering) with discarding priorities for each class in the order (from low to high) AIN, RIN, AOUT, ROUT and BE.
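As a rough illustration of the meter and marker blocks (the actual implementation lives in the Diffserv module of ns, see section 4.2; class and variable names here are ours), a token bucket representing the input traffic profile can drive the class decision:

```python
class TokenBucketMarker:
    """Marks flow packets with one of the five classes of section 3.1.
    A token bucket with rate `rate_bps` and depth `burst_bytes` represents
    the input traffic profile (the desired minimum throughput)."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # token refill rate in bytes/s
        self.depth = burst_bytes         # bucket depth in bytes
        self.tokens = burst_bytes
        self.last = 0.0                  # time of the previous packet

    def mark(self, now, pkt_bytes, ac_phase, accepted):
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        in_profile = self.tokens >= pkt_bytes    # "in" if within the profile
        if in_profile:
            self.tokens -= pkt_bytes
        if ac_phase:                     # probing packets use the R classes
            return "RIN" if in_profile else "ROUT"
        if accepted:                     # accepted flows use the A classes
            return "AIN" if in_profile else "AOUT"
        return "BE"                      # rejected flows get best-effort
```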
Figure 1. Functional block diagram of our scheme: at the ingress, a flow classifier, a traffic meter & marker, and an admission controller (fed by the desired throughput from the SLS and the measured throughput from the egress); in the core, a class classifier and a FIFO queue with priority discarding; at the egress, a flow classifier and a traffic meter. (Figure not reproduced.)

The priorities are chosen so that accepted flows are protected against flows that are in the AC phase. Since the class with the lowest discarding priority obtains resources first, the next class the remaining ones, and so on, this roughly means that the R class packets must be discarded before the A class packets (A < R). Specifically, the scheme needs the flows that are in the AC phase to be able to get a measurement of the unreserved throughput in the path (AIN < RIN < AOUT). Moreover, the remaining extra throughput is given to accepted flows rather than to flows that are in the AC phase (AOUT < ROUT).
3.2. The Measurement Process

The throughput of the flow is measured at the egress from the received data packets. A list of active flows that are being measured is maintained at the egress, that is, when a new flow arrives it is added to the list and when the measurement is finished it is removed. The measured throughput is simply the total bytes of the packets received during the measurement period divided by the measurement duration. Then signaling packets carry the throughput measurement to the corresponding ingress. Note that the AC phase duration is equal to the measurement duration plus the packet round-trip time.

An important point is to decide on the measurement duration. If it is too short the measurement could be incomplete, but if it is too long the method could tend to reject too many flows in some situations. We study the influence of the measurement duration through simulations in section 4.
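A minimal sketch of this egress measurement, under our own naming and with the 0.2 s duration that performs best in section 4.5:

```python
class EgressThroughputMeter:
    """Measures a new flow's throughput over a fixed window, as in section
    3.2: total received bytes divided by the measurement duration."""
    def __init__(self, duration_s=0.2):
        self.duration = duration_s
        self.flows = {}                     # flow_id -> (start_time, byte_count)

    def on_packet(self, flow_id, now, pkt_bytes):
        if flow_id not in self.flows:
            self.flows[flow_id] = (now, 0)  # measurement starts with the first packet
        start, count = self.flows[flow_id]
        count += pkt_bytes
        self.flows[flow_id] = (start, count)
        if now - start >= self.duration:
            del self.flows[flow_id]         # measurement finished, remove from the list
            return count * 8 / self.duration  # throughput in bps, signaled to the ingress
        return None                         # measurement still in progress
```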
3.3. The Requirement of a Minimum Sending Rate

In end-to-end measurement-based AC, per-flow reservations are based on "occupancy" instead of "state" (per-flow state is kept only at the edge). The reservation is set up by a probing flow (in our scheme, the first packets of the flow, marked as RIN), maintained as long as the flow transmits, and released when it stops, because the measurements will reflect all these situations. Per-flow processing in the core is not needed, but in order for the AC to work properly sources should never send below the desired minimum throughput. Otherwise, during the AC phase the measured throughput will certainly be smaller than the requested one, and after the AC phase, if an accepted flow does not use the guaranteed throughput, other flows in the AC phase could steal it, and the throughput allocated to them later on may be incorrect.

When TCP transfers a file and a minimum throughput is available in the network path, the long-term average of the sending rate during the flow's lifetime is kept approximately above this minimum throughput. However, the short-term fluctuations of the sending rate of a "standard" TCP source, above and below the average, might cause some inaccuracies that could reduce the performance of the scheme. In order to avoid this situation we propose to modify the sending algorithms of TCP to keep the short-term sending rate above a minimum value. The details are explained in section 4, where we use this modified TCP source for the evaluation of the scheme through simulation. Another option might be to apply some kind of traffic shaping to the input flow at the ingress, to add dummy packets, etc., to obtain a flow with the desired short-term minimum rate. In any case, a consequence of this requirement is that the ingress should check whether the flow's rate is always above the minimum throughput, because otherwise the service should be denied.
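A possible form of this ingress check, sketched under our own naming and an assumed window length:

```python
from collections import deque

class MinRateChecker:
    """Ingress check of section 3.3 (our illustration): verify over a sliding
    window that a flow keeps its short-term sending rate above the minimum;
    if it does not, the guaranteed service should be denied."""
    def __init__(self, min_rate_bps, window_s=1.0):   # window length is an assumption
        self.min_rate = min_rate_bps
        self.window = window_s
        self.samples = deque()                        # (time, bytes) of recent packets

    def on_packet(self, now, pkt_bytes):
        self.samples.append((now, pkt_bytes))
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()                    # keep only the last window
        rate = sum(b for _, b in self.samples) * 8 / self.window
        return rate >= self.min_rate                  # False -> deny the service
```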
3.4. The Interdomain Operation

In a multidomain scenario the end-to-end service is provided by the concatenation of the services provided by each of the domains of the followed path. Each domain uses its own way to provide the guaranteed minimum throughput service to the flow (e.g., our scheme) and has a user-provider relationship with its neighbor domains according to an agreement (SLA). In the previous sections we dealt with the intradomain operation of our scheme, and here we discuss the interdomain aspects and the details of the edge node operation.

For the interconnection of our domain O (the one using our scheme) and the downstream domain D (Fig. 2), we propose the use of signaling packets so that domain D notifies domain O whether or not it can provide a guaranteed minimum throughput service to the flow. This information is used in domain O in the following way: if the answer is "no", then the AC rejects the flow, while if the answer is "yes", then the AC decision depends on the throughput measurement. The need for this signaling can be justified by two reasons. Suppose that a flow is accepted in domain O but rejected in domain D, and that signaling is not used. Firstly, domain O would keep a useless reservation for the flow, wasting resources that might be used by other flows. Secondly, domain O would wrongly consider that the flow does receive the service and that the agreement with the upstream domain U is being satisfied.
Figure 2. Interdomain operation of a domain using our scheme with neighbor domains: the flow goes from the source through the upstream domain U (edge nodes IU, EU), our domain O (IO, EO, where the throughput is measured), and the downstream domain D (ID, ED, which returns the "yes/no" decision) to the destination, with SLAs at each domain boundary. (Figure not reproduced.)

Note that we are only discussing the interconnection of our domain with its neighbors, not the reverse; the neighbors will surely need their own specific operations for the interconnection with other domains, e.g., some kind of signaling, traffic conditioning, marking or other mechanisms.

Summing up, the operations performed at the ingress of our domain are the following:
• Implicit detection of the start and end of flows and the update of the list of active flows.

• Assignment of new flows to logical paths.

• Obtaining the desired flow's throughput from the agreement with the upstream domain.

• Checking whether the new request together with the currently accepted ones is within the contracted value specified in the agreement.

• The AC decision, after the reception of signaling packets from the egress carrying the throughput measurement (see the sketch after these lists).

• The corresponding traffic measurement and marking before and after the AC decision, as well as checking whether the flow's rate is always above the minimum throughput or not.

• Registering of provided services (accepted and rejected) to the upstream domain for SLS auditing.

• Other operations related to the interconnection needs of the upstream domain.

On the other hand, the operations performed at the egress of our domain are the following:

• Implicit detection of the start of flows and the update of the list of active flows.

• The measurement of the flow's throughput.

• Reception of the signaling packets with the "yes/no" decision from the downstream domain.

• Sending of signaling packets to the corresponding ingress with the throughput measurement and the "yes/no" decision.

• Registering of provided services (accepted and rejected) by the downstream domain for SLS auditing.

• Other operations related to the interconnection needs of the downstream domain.
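Condensing the decision logic of the two lists above into one place (an illustration of ours, not the ns implementation):

```python
def ac_decision(measured_bps, requested_bps, downstream_yes):
    """AC decision at the ingress (sections 3.1 and 3.4): a flow is accepted
    only if the downstream domain answered "yes" and the throughput measured
    during the AC phase reaches the requested minimum. Names are ours."""
    return downstream_yes and measured_bps >= requested_bps

# After the decision, packets of an accepted flow are re-marked from the
# R classes (RIN/ROUT) to the A classes (AIN/AOUT); packets of a rejected
# flow are marked BE.
```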
4. EVALUATION OF THE SCHEME THROUGH SIMULATION

Using the ns simulator [13], we have implemented our scheme for TCP flows. We use the modified TCP source that keeps the short-term sending rate above a minimum value. We study the influence of some parameters on the performance in several network topologies using different traffic loads consisting of TCP flows that carry files of varying sizes. We show results for the throughput obtained by the flows as well as for the efficiency of using the links' bandwidth.
4.1. The Modification of the TCP Source

TCP provides a reliable data transfer by detecting lost packets and then retransmitting them until they are received correctly. The usual circumstances for considering a packet lost, which trigger packet retransmission from the last acknowledged packet, are when the corresponding ACK packet is not received during a timeout period or when three duplicated ACKs of a previous packet are received [2]. TCP also uses pipelining and the sliding window mechanism (for flow control), a combination that allows the source to have multiple transmitted but yet-to-be-acknowledged packets, up to a maximum number equal to the value of the window.

On the other hand, TCP uses rate-adaptive algorithms to match the maximum available throughput in the path. When TCP detects that packets are lost the sending rate is decreased; otherwise it is increased. Loss detection and the consequent rate variations occur at the timescale of the round-trip time. Rate variations are done according to (congestion control) algorithms such as slow start, congestion avoidance and fast recovery [2]. These algorithms modify the value of the window (usually ranging from one to the receiver's advertised window), causing the desired rate variations, because the average sending rate can be seen as the actual window divided by the round-trip time.

We have modified the TCP sources of the ns simulator to send at a minimum rate and to avoid the short-term fluctuations of the sending rate. Instead of sending a burst of packets (the ones that would be allowed by the actual window) within approximately the period of one round-trip time, packets are sent smoothly within the same time period. That is, one packet is sent every ∆t time units, and ∆t is increased and decreased to cause the rate variations. The time ∆t has a maximum value that guarantees the desired minimum rate. The variations of ∆t follow proportionally the window variations of the TCP New-Reno algorithms implemented in the ns simulator. Moreover, retransmission of packets from the last acknowledged packet is triggered by the usual circumstances, and also by the following two situations: when the packet sequence number reaches the end value; when the corresponding ACK has not been received yet.

Finally, another feature we have added to the TCP sources is the users' impatience, that is, the source stops sending if the transfer time is too high.
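The pacing rule can be summarized as follows (our sketch, not the actual patch to the TCP New-Reno code of ns):

```python
def pacing_interval(cwnd_pkts, rtt_s, pkt_bits, min_rate_bps):
    """Packet pacing of the modified TCP source (section 4.1), as we read it:
    one packet is sent every dt seconds instead of in per-window bursts.
    dt follows the window (rate ~ cwnd / rtt) but is capped so that the
    short-term rate never falls below the desired minimum."""
    dt_window = rtt_s / max(cwnd_pkts, 1)   # spread the window over one RTT
    dt_max = pkt_bits / min_rate_bps        # largest dt that still gives min_rate
    return min(dt_window, dt_max)

# Example with the paper's values: 1000-byte packets and a 90 Kbps minimum
# rate give dt_max = 8000 / 90000, i.e. about 0.089 s between packets.
```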
4.2. The Functional Blocks of the Scheme

We have added the functional blocks of our scheme (Fig. 1) to the Diffserv module of the ns simulator. At the ingress routers there are a sending rate meter, a marker and an admission controller, and at the egress routers there is a throughput meter. There is a set of these blocks for each flow arriving at the network. Moreover, in the output links of all routers there is a single FIFO queue with priority discarding.

All the functional blocks have been designed for the traffic sent by the modified TCP source. The sending rate meter uses the token bucket algorithm, which is characterized by two parameters, the rate in bps and the burst size in bytes (which constitute the input traffic profile of the service). The marker uses five marks that identify the packet classes defined in section 3.1. The throughput meter starts to measure when the first packet of the flow arrives, then it simply counts the total received bytes during the chosen measurement duration, and finally it notifies the ingress router of the throughput measurement (with the corresponding packet delay). The FIFO queue with priority discarding for the five packet classes (according to section 3.1) is based on the drop-tail algorithm, that is, when the queue is full and a new packet arrives, a packet is discarded (the one with the highest discarding priority).
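A compact model of this queue (ours; the real one extends the ns Diffserv queue):

```python
CLASS_PRIORITY = {"AIN": 0, "RIN": 1, "AOUT": 2, "ROUT": 3, "BE": 4}  # low -> high discard priority

class PriorityDropTailQueue:
    """Single FIFO queue with priority discarding (sections 3.1 and 4.2):
    packets leave in arrival order, but when the queue overflows the queued
    packet with the highest discarding priority is dropped."""
    def __init__(self, capacity_pkts=50):      # 50 packets, as in the simulations
        self.capacity = capacity_pkts
        self.queue = []                        # FIFO of (class_name, packet)

    def enqueue(self, cls, pkt):
        self.queue.append((cls, pkt))
        if len(self.queue) > self.capacity:
            # drop the packet (possibly the arriving one) with the highest priority
            victim = max(range(len(self.queue)),
                         key=lambda i: CLASS_PRIORITY[self.queue[i][0]])
            del self.queue[victim]

    def dequeue(self):
        return self.queue.pop(0) if self.queue else None
```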
4.3. Traffic Model and Network Topologies

We generate TCP flows that carry a single file from an ingress point to an egress point through a logical path. Each flow is characterized by the file size and the starting time. File sizes are obtained from a Pareto distribution because it approximates reasonably well the heavy-tail behavior of the file size distribution observed in measurements [1]. In all simulations the Pareto constant (or tail parameter) is 1.1 and the minimum file size is 10 packets (the packet length is 1000 bytes). With these parameters the distribution has an infinite variance. After generating the values (about 10000), more than 50% are below 20 packets, the mean σ is about 74 packets and the maximum may even reach 19637 packets. On the other hand, the starting times are obtained from a Poisson arrival process, which is a simple and useful model that was proved to be valid for more realistic scenarios in [14]. This process is characterized by the parameter λ flows/s, i.e. the average number of arrivals per second, which we vary to study different loads. With these statistical distributions, the average offered traffic load from an ingress point to an egress point through a logical path is equal to λσ bps (with the mean file size σ expressed in bits).

We use several network topologies (Fig. 3). Topology 1 has a bottleneck link of 2 Mbps (the length of the output queue is 50 packets) and two logical paths between pairs of ingress-egress routers (e0-e3, e2-e3), both with the same packet round-trip time. Topology 2 is equal to topology 1 except that the packet round-trip times of the two logical paths are different. Topology 3 has four logical paths, e0-e6, e2-e6, e3-e6, and e5-e6, with similar packet round-trip times but different numbers of hops, and two bottleneck links of 2 and 4 Mbps. For each logical path we generate the same quantity of average offered traffic load, which we vary from 0.05 Mbps to 3 Mbps in steps of 0.3 Mbps, in order to study underloading and overloading situations in the links.

For all TCP flows we use the same values for the token bucket algorithm, a rate of 90 Kbps and a burst size of two packets, which represent the desired minimum throughput. The user's impatience is twice the desired file transfer time, which means getting a throughput of approximately 45 Kbps.
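Putting the traffic model together (a sketch with our own names): with a mean file size σ of about 74 packets of 1000 bytes, i.e. roughly 592,000 bits, an offered load of 1 Mbps corresponds to λ ≈ 1.7 flows/s.

```python
import random

PKT_BYTES = 1000
TAIL = 1.1          # Pareto tail parameter used in all simulations
MIN_PKTS = 10       # minimum file size in packets

def file_size_pkts(rng):
    """Pareto-distributed file size: inverse-CDF sampling with tail 1.1
    and a minimum of 10 packets."""
    u = 1.0 - rng.random()                  # u in (0, 1]
    return int(MIN_PKTS / u ** (1.0 / TAIL))

def generate_flows(rng, offered_bps, sigma_bits, horizon_s):
    """Poisson flow arrivals whose average offered load is lam * sigma bps."""
    lam = offered_bps / sigma_bits          # average arrivals per second
    t, flows = 0.0, []
    while t < horizon_s:
        t += rng.expovariate(lam)           # exponential inter-arrival times
        flows.append((t, file_size_pkts(rng)))
    return flows

rng = random.Random(1)
flows = generate_flows(rng, 1e6, 74 * PKT_BYTES * 8, 100.0)  # ~1 Mbps offered
```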
Figure 3. Network topologies 1, 2 and 3 (showing the links' bandwidth and delay and the queue length). In topologies 1 and 2 the bottleneck link is 2 Mbps with a 20 ms delay and a 50-packet queue, the access links are 20 Mbps with delays t1 and t2 (t1 = t2 = 20 ms in topology 1, t1 = t2 = 1 ms in topology 2), and the rest of the links are 20 Mbps with 20 ms delay. In topology 3 the two bottleneck links are 2 Mbps (1 ms, 50 packets) and 4 Mbps (20 ms, 50 packets), and the rest of the links are 20 Mbps with 20 ms delay. (Figure not reproduced.)

4.4. Performance Parameters

The goal of the AC method is to provide the desired minimum throughput to the maximum possible number of flows, and therefore we evaluate the utilization of resources as well as the throughput obtained by flows. First we calculate the throughput of each flow as the total received packets divided by the flow's lifetime. Then we consider the satisfied flows, defined as the ones that complete the file transfer and get at least 95% of the desired minimum throughput (85.5 Kbps). We obtain three performance parameters for an ingress-egress pair: the average "total" satisfied traffic load, which is the aggregated throughput of all satisfied flows, taking into account the minimum and the extra throughput; the average "minimum" satisfied traffic load, which takes into account only the minimum throughput; and the average throughput of satisfied flows (note that it will always be a value above 85.5 Kbps). The first parameter evaluates the total use of resources, the second one its reserved use, and the third one indicates how much extra throughput the satisfied flows get.

The average values of satisfied traffic load are obtained by averaging over the simulation time, without considering the initial transient phase of the simulation. We make 10 independent replications of each simulation, where independence is achieved by using different seeds for the random generator (proposed by L'Ecuyer, with code from [15]). The simulation length is chosen so that after the initial transient period a minimum of 10000 flows are generated. From the 10 results we estimate the mean value by computing the sample mean and the 95% confidence interval [15].
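The three parameters can be approximated from per-flow records as follows (our sketch; the "minimum" load here counts the reserved 90 Kbps during each satisfied flow's lifetime, an approximation of the definition above):

```python
MIN_BPS = 90_000                  # desired minimum throughput
SATISFIED_BPS = 0.95 * MIN_BPS    # 85.5 Kbps satisfaction threshold

def summarize(flows, sim_time_s):
    """Performance parameters of section 4.4 for one ingress-egress pair.
    Each flow is (completed, received_bits, lifetime_s); names are ours."""
    total = minimum = 0.0
    satisfied = []
    for completed, bits, life in flows:
        thr = bits / life                        # per-flow throughput in bps
        if completed and thr >= SATISFIED_BPS:
            satisfied.append(thr)
            total += bits / sim_time_s           # time-averaged satisfied load
            minimum += MIN_BPS * life / sim_time_s  # reserved (minimum) part only
    avg = sum(satisfied) / len(satisfied) if satisfied else 0.0
    return total, minimum, avg
```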
4.5. Simulation Results

We first compare our scheme with the best-effort service in topology 1 (the measurement duration is 0.2 s). In Fig. 4 we show the total and the minimum satisfied traffic, as well as the average throughput of satisfied flows, versus the average offered traffic load (λσ bps), for each ingress-egress pair. As expected, under underloading, when network resources are enough to satisfy all flows, our scheme achieves the same value for the total satisfied traffic as the best-effort service, which at the same time is equal to the offered traffic (the dashed line indicates equality). Under overloading, our scheme achieves a higher utilization, which stays almost constant for the entire range of offered traffic loads, while the utilization of the best-effort service tends to zero. On the other hand, the average throughput of satisfied flows decreases for high values of the offered traffic because the unreserved resources decrease. Note also that our scheme achieves almost the same results for the two ingress-egress pairs.

Figure 4. Comparison of our scheme (the measurement duration is 0.2 s) with the best-effort service, in topology 1: total satisfied traffic, minimum satisfied traffic and average throughput of satisfied flows versus offered traffic, for the e0-e3 and e2-e3 pairs. (Figure not reproduced.)
Second, we study in topology 1 the influence of the measurement duration on the performance of the scheme (Fig. 5). We have used four different measurement durations: 0.04, 0.2, 0.5 and 0.9 s. For clarity, we only show the results for the e0-e3 pair (the dashed line indicates equality), but they are similar for the e2-e3 pair. The best performance is achieved when the measurement duration is 0.2 s. When the measurement duration is too short (0.04 s) the measurements are wrong and the performance is very poor. Finally, an increase of the measurement duration (0.5 and 0.9 s) causes a worse performance.

The reason for the behavior of the curves in Fig. 5 is that our AC method rejects too many flows in situations where many flows meet simultaneously during the AC phase. Imagine the situation where the AC phases of a set of flows overlap in time, and these flows have a common bottleneck link, i.e. their total desired throughput is greater than the available one (the total RIN traffic exceeds the available throughput). Then this available throughput is shared among the flows proportionally to their desired throughputs, and the measured throughput of each flow is smaller than the desired one. The AC rejects all the flows, and therefore some decisions are erroneous. Specifically, the number of affected flows depends on how strong the overlapping of the AC phases is, since the more the flows overlap the more flows are erroneously rejected. An increase in the measurement duration causes an increase in the number of flows that overlap and also in the number of erroneous AC decisions. Moreover, this also happens when the flow arrival rate λ increases. Summing up, our scheme obtains a better performance using a short measurement duration, but there is a limit, because if it is too short the measurements are wrong.

Figure 5. The influence of the measurement duration on the performance of our scheme, in topology 1 (results for the e0-e3 pair with durations of 0.04, 0.2, 0.5 and 0.9 s). (Figure not reproduced.)
Third, we study the fairness of the sharing between the ingress-egress pairs, using topology 1 and a measurement duration of 0.2 s (Fig. 6). We keep the average offered traffic load of the e0-e3 pair constant at 1 Mbps, and vary the average offered traffic load of the e2-e3 pair from 0.05 to 3 Mbps. We expect the 2 Mbps of the bottleneck link to be shared between the two ingress-egress pairs proportionally to their respective average offered traffic loads (indicated by the dashed lines of Fig. 6). The simulation results are quite close to the expected ones.

Figure 6. Sharing between ingress-egress pairs in topology 1 (the measurement duration is 0.2 s). The average offered traffic load of e0-e3 is 1 Mbps. (Figure not reproduced.)
Fourth, we study the influence of the packet round-trip time (for hosts and for edge routers) on the performance of our scheme. We use topology 2, where the packet round-trip time of edge routers and of hosts for the ingress-egress pair e0-e3 is approximately twice the one for e2-e3. The measurement duration is 0.2 s. The simulation results (Fig. 7) show that the performance is very similar for the two ingress-egress pairs.

Figure 7. The influence of the packet round-trip time on the performance of our scheme, in topology 2 (the measurement duration is 0.2 s). (Figure not reproduced.)

Finally, we study the influence of the number of hops of the logical paths on the performance of our scheme, using topology 3 (Fig. 8). The measurement duration for the ingress-egress pairs e3-e6 and e5-e6 is 0.2 s (1 hop), while for e0-e6 and e2-e6 it is 0.4 s (2 hops). The results show that under congestion the ingress-egress pairs of one hop obtain better results than the ingress-egress pairs of two hops. However, this behavior is similar to that observed in classical hop-by-hop AC schemes or in end-to-end measurement-based AC schemes [11], where flows passing through multiple congested links experience a higher blocking probability.

Figure 8. The influence of the number of hops on the performance of our scheme, in topology 3 (pairs e0-e6, e2-e6, e3-e6 and e5-e6). (Figure not reproduced.)
5. CONCLUSIONS

We have proposed a new scheme for a network service that guarantees a minimum throughput to flows accepted by AC, suitable for TCP flows. It considers a network path and is able to provide different throughputs to different flows (users) as well as protection against non-responsive sources. The scheme is simple and scalable, because in the core it does not need per-flow processing and uses a small set of packet classes, each class having a different discarding priority. The AC method is based on edge-to-edge per-flow throughput measurements using the first packets of the flow. Moreover, it requires flows to send at a minimum rate, since reservations are based on "occupancy" rather than on "state". On the other hand, we have considered a scenario with multiple domains, where each domain has a user-provider agreement with each of its neighbor domains, and we have discussed the interconnection needs of our scheme and the details of the edge node operation.

We have evaluated the scheme through simulation in several network topologies. We have used different traffic loads of TCP flows carrying files of varying sizes, using a modified source that keeps the short-term sending rate above a minimum value. The results confirm that the scheme guarantees the requested minimum throughput to the accepted flows and achieves a high utilization of resources. We have seen that it provides the desired minimum throughput to many more flows than the best-effort service. We have also studied the influence of the measurement duration, and shown that the scheme obtains a better performance using a short measurement duration (but with a limit). We have also shown that it achieves fairness in the sharing between the ingress-egress pairs even when there are different round-trip times. Finally, we have seen that our scheme discriminates against multi-hop flows, but this is similar to the behavior observed in classical hop-by-hop or end-to-end AC schemes.
6. REFERENCES

[1] N. Brownlee, KC Claffy, 2002, "Understanding Internet Traffic Streams: Dragonflies and Tortoises", IEEE Communications Magazine.

[2] V. Jacobson, 1988, "Congestion Avoidance and Control", ACM Computer Communication Review.

[3] J. W. Roberts, L. Massoulié, 1999, "Arguments in Favour of Admission Control for TCP Flows", Proceedings of the 16th ITC.

[4] Ll. Fàbrega, T. Jové, Y. Donoso, 2003, "Throughput Guarantees for Elastic Flows using End-to-end Admission Control", Proceedings of the 2003 IFIP/ACM LANC.

[5] D. D. Clark, W. Fang, 1998, "Explicit Allocation of Best-effort Packet Delivery Service", IEEE/ACM Transactions on Networking.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, W. Weiss, 1998, "An Architecture for Differentiated Services", RFC 2475.

[7] R. Mortier, I. Pratt, C. Clark, S. Crosby, 2000, "Implicit Admission Control", IEEE Journal on Selected Areas in Communications.

[8] A. Kumar, M. Hegde, S.V.R. Anand, B.N. Bindu, D. Thirumurthy, A.A. Kherani, 2000, "Nonintrusive TCP Connection Admission Control for Bandwidth Management of an Internet Access Link", IEEE Communications Magazine.

[9] S. Ben Fredj, S. Oueslati-Boulahia, J.W. Roberts, 2001, "Measurement-based Admission Control for Elastic Traffic", Proceedings of the 2001 ITC 17.

[10] G. Bianchi, A. Capone, C. Petrioli, 2000, "Throughput Analysis of End-to-end Measurement-based Admission Control in IP", Proceedings of the 2000 IEEE INFOCOM.

[11] L. Breslau, E. Knightly, S. Shenker, I. Stoica, H. Zhang, 2000, "Endpoint Admission Control: Architectural Issues and Performance", Proceedings of the 2000 ACM SIGCOMM.

[12] E. Rosen, A. Viswanathan, R. Callon, 2001, "Multiprotocol Label Switching Architecture", RFC 3031.

[13] "UCB/LBL/VINT Network Simulator – ns (version 2)", http://www.isi.edu/nsnam/ns/.

[14] S. Ben Fredj, T. Bonald, A. Proutière, G. Régnié, J.W. Roberts, 2001, "Statistical Bandwidth Sharing: a Study of Congestion at Flow Level", Proceedings of the 2001 ACM SIGCOMM.

[15] A. M. Law, W. D. Kelton, 2000, "Simulation Modeling and Analysis", McGraw-Hill.