Achieving Per-Flow Weighted Fairness in a Core-Stateless Network: CORELITE
Eric Woods and Nitin Namjoshi
ECE 4894A

Outline

Introduction
Diffserv vs. Intserv QoS models
Corelite protocol structure
Corelite performance evaluation
Conclusion and Critique
Questions

Introduction

Corelite is a protocol developed at UIUC
Designed to allow core routers to provide QoS and adaptively handle congestion for different flow classes without the overhead of tag switching, per-flow QoS / quota management, MPLS, or similar schemes

Intserv QoS model

Integrated Services (Intserv)
  Designed to provide absolute performance guarantees (e.g. 5 us latency, 100 kB/s data throughput, 20 kB/s burst capacity, etc.)
  Modeled on circuit-switched networks; implemented with virtual circuits (VCs) on IP-based networks
  Does not scale well to large numbers of flows

Diffserv QoS model

Differentiated Services (Diffserv)
  Designed to provide relative performance parameters
  Limited to a few traffic classes
    Gold, Silver, Bronze – premium to best-effort
  Provides performance guarantees for one class of service vs. another, but no per-flow management
  Scales well; currently used on ISP IP backbone networks (e.g. DWDM SONET-based optical networks with aggregate throughput > 1 Tbps)

Corelite Protocol

Designed to have edge routers provide Intserv-type QoS while allowing core network routers to use Diffserv traffic parameters
  Edge routers provide traffic shaping, per-flow management, and traffic labeling
  Core routers route according to traffic class and flow label

Edge router duties

Traffic shaping
  Each flow is allowed a maximum flow rate
  Excess traffic is buffered and the sender is sent back-off messages
  Traffic / rate adaptation in response to feedback from core routers (see the sketch after this slide)
    A marker cache is created for feedback from core routers
    Once per epoch (a time unit chosen arbitrarily), the marker cache is checked
      If no markers are present, the maximum flow rate of each incoming flow is incremented by a fixed constant
      If markers have been received from core routers, flows are scaled back in proportion to the number of markers received
Marker / label insertion
  Each flow has markers inserted at a rate that corresponds to its maximum flow rate
    e.g. a flow allowed 100 kB/s gets 100 markers inserted every 1 s; a flow allowed 30 kB/s gets 30 markers
  Each marker carries source (edge) router information
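
The following is a minimal sketch of the edge-router behavior described above. The class names, the additive-increase constant, and the exact proportional scale-back rule are illustrative assumptions; the slides only state that rates are incremented by a fixed constant or scaled back in proportion to markers received.

```python
# Minimal sketch of the edge-router epoch loop described above. The class
# names, the additive-increase constant, and the scale-back rule are
# illustrative assumptions, not details taken from the Corelite paper.

class Flow:
    def __init__(self, flow_id, max_rate_kBps):
        self.flow_id = flow_id
        self.max_rate_kBps = max_rate_kBps       # currently allowed rate

class EdgeRouter:
    INCREASE_CONST_kBps = 5                      # fixed per-epoch increment (assumed)

    def __init__(self, router_id, flows):
        self.router_id = router_id
        self.flows = {f.flow_id: f for f in flows}
        self.marker_cache = []                   # markers bounced back by core routers

    def receive_marker(self, flow_id):
        """A core router returned one of our markers: cache it until the epoch ends."""
        self.marker_cache.append(flow_id)

    def end_of_epoch(self):
        """Once per epoch, adapt each flow's maximum rate to the core feedback."""
        if not self.marker_cache:
            # No congestion feedback: increment every flow by a fixed constant.
            for f in self.flows.values():
                f.max_rate_kBps += self.INCREASE_CONST_kBps
        else:
            # Scale each flow back in proportion to the markers returned for it
            # (one plausible reading of "in proportion to numbers of markers received").
            total = len(self.marker_cache)
            for f in self.flows.values():
                returned = self.marker_cache.count(f.flow_id)
                f.max_rate_kBps *= (1.0 - returned / total)
        self.marker_cache.clear()

    def markers_for_second(self, flow):
        """Marker / label insertion: a flow allowed N kB/s gets N markers per
        second, each carrying the source (edge) router's identity."""
        return [{"flow_id": flow.flow_id,
                 "edge_router": self.router_id,
                 "rate": flow.max_rate_kBps}
                for _ in range(int(flow.max_rate_kBps))]
```

With this sketch, a flow allowed 100 kB/s produces 100 markers per second, matching the example on the slide.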

Core router duties

Incipient congestion detection (sketched below)
  The aggregate packet queue size is monitored once every epoch
    The number of queues depends on the number of preset traffic classes
  Once the queue size exceeds a preset threshold, the router knows congestion could occur (is incipient) and begins sending markers back to the edge routers
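
A minimal sketch of the per-epoch queue check described above; the threshold value, the per-class queue representation, and the function name are assumptions.

```python
# Minimal sketch of incipient congestion detection as described above.
# The threshold value and the per-class queue representation are assumptions.

QUEUE_THRESHOLD_PKTS = 200       # preset threshold (illustrative value)

def detect_incipient_congestion(class_queues):
    """Run once per epoch. class_queues maps each preset traffic class to its
    aggregate packet queue; return the classes whose queue size exceeds the
    preset threshold, i.e. the classes where congestion is incipient."""
    return [traffic_class
            for traffic_class, queue in class_queues.items()
            if len(queue) > QUEUE_THRESHOLD_PKTS]

# For each congested class, the core router then begins sending cached
# markers back to the edge routers (see the next slide and its sketch).
```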

Core router duties

Marker queuing and feedback (sketched below)
  All markers received are kept in a cache with a preset persistence value – that is, markers are assigned a TTL and dropped once it expires
  When incipient congestion is detected, markers are randomly pulled from the cache and sent back to their source routers (i.e. the edge routers that sent them) so that traffic rates back off
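
A minimal sketch of the marker cache and random feedback described above. The TTL value, the marker fields, and the send_to_edge callback are illustrative assumptions.

```python
import random
import time

# Minimal sketch of the core router's marker cache and feedback path as
# described above. The TTL value, marker fields, and send_to_edge callback
# are illustrative assumptions.

MARKER_TTL_S = 1.0                       # preset persistence value (assumed)

class MarkerCache:
    def __init__(self):
        self._entries = []               # list of (arrival_time, marker) pairs

    def add(self, marker):
        self._entries.append((time.time(), marker))

    def expire(self):
        """Drop markers whose TTL has expired."""
        now = time.time()
        self._entries = [(t, m) for (t, m) in self._entries
                         if now - t < MARKER_TTL_S]

    def feedback(self, count, send_to_edge):
        """On incipient congestion, randomly pull up to `count` markers from
        the cache and send each back to the edge router that inserted it,
        so that the corresponding traffic rates back off."""
        self.expire()
        chosen = random.sample(self._entries, min(count, len(self._entries)))
        for entry in chosen:
            self._entries.remove(entry)
            _, marker = entry
            send_to_edge(marker["edge_router"], marker)
```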

Core router duties

The core router determines how many markers to send back using a complex function (illustrated below)
  The function is based on the assumption that edge router queues are M/M/1, plus a correction factor for the case where they are not
  The correction factor k is incremented only when the queue size continues to increase (i.e. when the packet arrival process is not Poisson)
  The function is readily replaceable – the protocol only requires that some method of congestion detection be present
  The congestion detection and fairness lines will intersect, and the protocol adjusts rates correspondingly very quickly
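
The slides do not reproduce the paper's actual function, so the sketch below only illustrates its stated structure: an M/M/1-based expectation for the queue length plus a correction factor k that grows only while the queue keeps increasing. The specific arithmetic is a placeholder, not the paper's formula.

```python
# Illustration only: the paper's exact marker-count function is not given in
# these slides. This placeholder mirrors the stated structure: compare the
# observed queue length with the M/M/1 expectation rho / (1 - rho), and add a
# correction factor k that is incremented only while the queue keeps growing
# (i.e. when the arrival process is evidently not Poisson).

def markers_to_send(queue_len, prev_queue_len, arrival_rate, service_rate, k):
    """Return (number_of_markers, updated_k) for this epoch (assumed form)."""
    rho = min(arrival_rate / service_rate, 0.99)    # utilization, capped below 1
    expected_mm1_len = rho / (1.0 - rho)            # mean M/M/1 queue length

    if queue_len > prev_queue_len:
        k += 1                                      # queue still growing: correct upward
    else:
        k = 0                                       # queue stable or shrinking: reset

    excess = max(0.0, queue_len - expected_mm1_len)
    return int(excess) + k, k
```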

Marker selection and feedback

CSFQ is not ideal because of its computational overhead
Queues are not required at core routers
A scheme that is better than CSFQ and needs no core router marker queue exists; it works as follows (sketched after this list):
  Edge routers include the normalized packet transmission rate in all marker packets
  The core router maintains an internal variable for the average flow rate rn and an internal 'deficit counter' variable that is reset each epoch
  The core router randomly selects marker packets
    If a marker is from a flow whose rate exceeds rn, the marker is sent back to the edge router it came from
    If a marker is from a flow whose rate is less than rn, the deficit counter is incremented
    If and only if the deficit counter is above zero, markers are examined sequentially and markers from flows with rates greater than rn are sent back to the edge routers they came from, with the deficit counter decremented until it is zero again
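
A minimal sketch of the deficit-counter scheme described above. The marker fields and the size of the random sample are assumptions; r_n stands for the core router's running estimate of the average flow rate.

```python
import random

# Minimal sketch of the deficit-counter marker selection scheme described
# above. Marker fields and the random sample size are assumptions; r_n is the
# core router's internal estimate of the average flow rate, and the deficit
# counter is reset each epoch.

def select_markers_for_feedback(markers, r_n, sample_size):
    """Return the markers to send back to their edge routers this epoch.
    Each marker carries the normalized transmission rate of its flow
    (marker["rate"]) and the edge router it came from."""
    deficit = 0                                  # deficit counter, reset each epoch
    to_send = []

    # Randomly select marker packets seen this epoch.
    sampled = random.sample(markers, min(sample_size, len(markers)))
    for m in sampled:
        if m["rate"] > r_n:
            to_send.append(m)                    # flow above the average rate: bounce it
        else:
            deficit += 1                         # flow below the average rate: bank a credit

    # If and only if the deficit counter is above zero, examine markers
    # sequentially and bounce markers from flows faster than r_n, spending one
    # unit of deficit per marker until the counter is zero again.
    if deficit > 0:
        for m in markers:
            if deficit == 0:
                break
            if m not in to_send and m["rate"] > r_n:
                to_send.append(m)
                deficit -= 1

    return to_send
```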

Corelite simulation details

Simulations were run using the ns-2 network simulator, version ns-2.1b4a
Topology included 3 congested links and flows that traverse different numbers of congested links
Links had widely varying delay, latency, and RTT
CSFQ was used for comparison purposes

Corelite simulation results

Multiple flows with different weights are introduced at different times, for a total of 20 flows
  The light-colored flows represent flows with a higher weight
  Some flows were introduced at 250 ms; rates adjusted (dropped rapidly) and reconverged after 250 ms
  Epochs had to pass so that counters would reset and feedback would start being sent, resulting in flow re-equalization after 250 ms

Corelite simulation results

Time t = 0 ms
  Flows with relative weights of 2 and 3 are introduced
Time t = 250 ms
  New flows are added
    Rates of the main flows decline (change in slope)
    Rates of the new flows appear constant
Time t = 500 ms
  Epochs have passed
  Back-off messages have been sent
  Rates all converge simultaneously (new flows get more priority over the initial interval, until back-off messages are sent)

CSFQ vs. Corelite – static network

[Figure: simulation plots for CSFQ and Corelite, static network scenario]

Corelite vs. CSFQ – static network simulation results

Flows completed slow-start and entered LIMD – the end results of both closely approximate ideal steady-state behavior, HOWEVER:
  CSFQ does not converge quickly because its fair-share measurement method is not quickly responsive
  Corelite converges almost immediately – note the 30+ second disparity between the convergence times shown on the previous slide
  The difference lies in the fact that Corelite does not send congestion notifications until flows are close to their respective fair-share rates, thus eliminating the need for re-entry into slow-start

Corelite vs. CSFQ – dynamic network

[Figure: simulation plots for CSFQ and Corelite, dynamic network scenario]

Corelite vs. CSFQ – dynamic network simulation results

Flows with different weights are introduced at different times
Many flows exit in the 50–70 s range after entering in the first 15 s
CSFQ results in short-lived flows getting more bandwidth, particularly those with higher weights
Corelite converges quickly and handles flow exits gracefully, without noticeable impact on transmission rates

Conclusions

Corelite converges much faster and more gracefully than CSFQ under the tested circumstances
Corelite is more robust in the face of congestion than CSFQ and demonstrates Intserv-style control at the edge routers and Diffserv scalability in the core – with only two different service classes

Critiques

The paper introduces a new flow marking scheme after going into the details of core router queues
  It is never clarified which scheme (core router queues or deficit-counter based) is used in the simulation results
The simulation covered a case with two classes of service, not more
The simulation covered only one dynamic network scenario, not multiple
Details were not provided on the TCP implementations used

Questions

How does Corelite selectively throttle misbehaving flows without maintaining per-flow state?
How is Corelite better/worse than CSFQ?
In Corelite, what two mechanisms allow core routers to provide weighted fair feedback without maintaining per-flow state?
What kind of QoS service model does Corelite adopt?