BGRP
(Border Gateway Reservation Protocol)
A Tree-Based Aggregation Protocol
for Inter-Domain Reservations
Ping Pan
Bell Labs + Columbia
Ellen Hahne
Bell Labs
Henning Schulzrinne
Columbia
1
Outline
• Resource Reservation
– Applications
– Architectures
– Challenges
• Protocol Scaling Issues
• BGRP Protocol
– Major Messages
– Performance
• Conclusions & Future Work
2
Resource Reservation
• Applications
• Old Architecture: Int Serv + RSVP
• Challenges
• New Architecture: Diff Serv + BGRP
3
Reservation Applications
• Real-Time QoS
– Voice over IP
– Video
• Virtual Private Networks
• Differentiated Services
– Better than Best Effort
• Traffic Engineering
– Offload congested routes
– Integration of ATM, Optical & IP (MPLS)
– Inter-Domain Agreements
4
Reservation Architectures
• Old Solution: Int Serv + RSVP
– End-to-end
– Per-flow
• Challenges
– Data Forwarding Costs
– Protocol Overhead
– Inter-Domain Administration
• New Solution: Diff Serv + BGRP
– Aggregated
– Scalable
– Manageable
5
Two Scaling Challenges
• Data Forwarding Costs
– Int Serv: per micro-flow
– Diff Serv: ~32 AF/EF Code Points
– Diff Serv solves that problem!
• Control Protocol Overhead
– RSVP: O(N²), N = # hosts
– BGRP: O(N), N = something much smaller
– Much more to say about this!
6
Protocol Scaling Issues
• Network Structure
• Network Size
• How much Aggregation?
• How to Aggregate?
7
Network Structure: Multiple Domains (AS)
[Figure: example inter-domain topology with multi-homed stub, single-homed stub, and transit domains; labels include AS1–AS6, hosts H1, H2, H5 and h1, h2, h3, sources S1–S3, and border routers R1–R8]
8
Current Network Size
• 10^8 (60,000,000) Hosts
• 10^5 (60,000) Networks
• 10^4 (6,000) Domains
9
Traffic Trace
(90-sec trace, 3 million IP packet headers, at MAE-West, June 1, 1999)
Granularity    # Sources        # Destinations           # Source-Destination Pairs
               (Addr + Port)    (Addr + Port + Proto)    (5-tuple)
Application    143,243          208,559                  339,245
Host           56,935           40,538                   131,009
Network        13,917           20,887                   79,786
Domain         2,244            2,891                    20,857
10
Traffic Trace
• Over 1-month span (May 1999) at MAE-West:
– 4,908 Source AS seen
– 5,001 Destination AS seen
– 7,900,362 Source-Destination pairs seen!
11
How many Reservations? (How much aggregation?)
• 1 reserv’n per source-dest’n pair?
– 10^16 host pairs
– 10^10 network pairs
– 10^8 domain pairs
• 1 reserv’n per source OR 1 reserv’n per dest’n?
– 10^8 hosts
– 10^5 networks
– 10^4 domains
• Router capacity: 10^4 < # Reserv’ns < 10^6
• Conclusion: 1 reserv’n per Network or Domain for each Diff Serv traffic class (see the check below)
12
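To make the slide's arithmetic concrete, here is a small back-of-the-envelope check (a sketch; the rounded counts and the 10^4–10^6 per-router capacity window are the figures quoted above):

```python
# Rounded counts from the slides: ~1e8 hosts, ~1e5 networks, ~1e4 domains.
counts = {"host": 10**8, "network": 10**5, "domain": 10**4}

CAPACITY_LOW, CAPACITY_HIGH = 10**4, 10**6   # reservations a router can hold (slide 12)

for granularity, n in counts.items():
    per_pair = n * n   # one reservation per source-destination pair
    per_sink = n       # one reservation per source OR per destination
    fits = CAPACITY_LOW <= per_sink <= CAPACITY_HIGH
    print(f"{granularity:8s} pairs={per_pair:.0e}  per-sink={per_sink:.0e}  fits router? {fits}")

# Only per-network and per-domain sink reservations land inside the
# 1e4..1e6 window, which is the slide's conclusion.
```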
Network Growth (1994-1999)
13
Growth Rates
• Graph has a Log Scale
• H (# Hosts): Exponential growth
• D (# Domains): Exponential growth
• Moore’s Law can barely keep up!
• Overhead of control protocols?
– O(H) or O(D): May be OK
– O(H²) or O(D²): Not OK!
14
How to Aggregate?
• Combine Reserv’ns from all Sources to 1 Dest’n for 1 Diff Serv class
• Data & Reserv’ns take BGP route to Dest’n
• BGP routes form Sink Tree rooted at Dest’n domain (no load balancing)
• Aggregated Reserv’ns form Sink Tree
• Where 2 branches meet, Sum Reserv’ns (see the sketch below)
15
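A minimal sketch of that aggregation rule: reservations from many source domains toward one destination follow the BGP routes, which form a sink tree, and the amounts are summed on every shared link. The AS paths and bandwidths below are invented for illustration, not taken from the figures.

```python
from collections import defaultdict

def aggregate_sink_tree(as_paths, demands):
    """Sum per-link reservations along BGP AS paths toward a single destination.

    as_paths: {source_AS: [source_AS, ..., destination_AS]}   # the BGP route
    demands:  {source_AS: requested bandwidth}
    Returns {(AS_a, AS_b): aggregate bandwidth} per inter-domain link.
    """
    link_reservation = defaultdict(int)
    for src, path in as_paths.items():
        for a, b in zip(path, path[1:]):            # every inter-domain hop on the route
            link_reservation[(a, b)] += demands[src]
    return dict(link_reservation)

# Hypothetical example: two branches of 6 and 3 units merge at AS3.
paths = {"AS1": ["AS1", "AS3", "AS4", "AS5"],
         "AS2": ["AS2", "AS3", "AS4", "AS5"]}
print(aggregate_sink_tree(paths, {"AS1": 6, "AS2": 3}))
# Links shared by both branches (AS3->AS4, AS4->AS5) carry 6 + 3 = 9 units.
```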
A Sink Tree rooted at S3
[Figure: the sink tree rooted at S3 in the example topology; branch reservations of 6 and 3 units are summed to 9 on the links they share toward the root]
16
How to handle end-user reservation?
[Figure: an end-to-end reservation request between a source in AS 100 and a destination in AS 400 is mapped at the user-edge routers into an aggregated reservation request carried by border routers (A–H) across AS 100, AS 200, AS 300, and AS 400]
17
BGRP Protocol
• Basic Operation
• Comparison with RSVP
• Enhancements
• Performance Evaluation
18
BGRP Basics
• Inter-Domain only
• Runs between Border Routers
• Follows BGP Routes
• Reserves for Unicast Flows
• Aggregates Reserv’ns into Sink Trees
• Delivers its Messages Reliably
• 3 Major Messages (see the sketch below)
– Probe: source to dest’n; reserv’n path discovery
– Graft: dest’n to source; reserv’n establishm’t & aggreg’n
– Refresh: adjacent routers; reserv’n maintenance
19
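The three message types can be pictured as simple records. This is only an illustrative sketch; the field names are assumptions, not fields from a protocol specification.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Probe:
    """Leaf toward root: discovers the reservation path; border routers
    append themselves to the route record but store nothing."""
    dest_prefix: str                              # destination (sink-tree root)
    bandwidth: int
    route_record: List[str] = field(default_factory=list)

@dataclass
class Graft:
    """Root toward leaf: follows the PROBE's route record in reverse and
    establishes the reservation, aggregated per sink tree at each hop."""
    tree_root: str
    bandwidth: int
    route_record: List[str] = field(default_factory=list)

@dataclass
class Refresh:
    """Exchanged periodically between adjacent BGRP routers; one bundled
    message refreshes every sink-tree reservation the pair shares."""
    reservations: Dict[str, int] = field(default_factory=dict)   # tree root -> bandwidth
```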
Tree Construction: 1st Branch
[Figure: tree construction, first branch; a 6-unit reservation is installed on each hop along the BGP route from AS1 toward the root S3 in AS5]
20
Tree Construction: 2nd Branch
[Figure: tree construction, second branch; a 3-unit branch joins the tree, and the links it shares with the first branch are increased from 6 to 9 (+3)]
21
Tree Construction: Complete
[Figure: tree construction complete; the finished sink tree carries 6 and 3 units on the separate branches and 9 units on the shared links toward S3]
22
PROBE Message
• Source (leaf) toward Destination (root)
• Finds reservation path
• Constructs Route Record:
– Piggybacks Route Record in message
– Checks for loops
– Checks resource availability
• Does not store path (breadcrumb) state
• Does not make reservation
23
GRAFT Message
• Destination (root) toward Source (leaf)
• Uses path from PROBE’s Route Record
• Establishes reservations at each hop
• Aggregates reservations into sink tree
• Stores reservation state per-sink tree
24
REFRESH Message
• Sent periodically
• Between adjacent BGRP hops
• Bi-directional
• Updates all reserv’n state in 1 message (see the combined sketch below)
25
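Putting slides 23–25 together, the per-router behaviour could be sketched as below. The class, its state layout, and the helper names are assumptions for illustration; only the division of labour between the three messages follows the slides.

```python
class BgrpBorderRouter:
    """Illustrative sketch of BGRP message handling at one border router."""

    def __init__(self, name):
        self.name = name
        # Reservation state is kept per sink tree (per destination root),
        # never per micro-flow: {tree_root: reserved bandwidth}.
        self.tree_state = {}

    def handle_probe(self, probe):
        """PROBE: check for loops, check resources, extend the route record.
        No path state is stored and no reservation is made yet."""
        if self.name in probe["route_record"]:
            return None                        # loop detected: drop the PROBE
        if not self.capacity_available(probe["bandwidth"]):
            return None                        # admission control fails
        probe["route_record"].append(self.name)
        return probe                           # caller forwards it toward the root

    def handle_graft(self, graft):
        """GRAFT: travelling back along the recorded route, fold the new
        bandwidth into the aggregate for this sink tree and store it."""
        root = graft["tree_root"]
        self.tree_state[root] = self.tree_state.get(root, 0) + graft["bandwidth"]
        # ...then forward the GRAFT to the previous hop in the route record.

    def handle_refresh(self, refresh):
        """REFRESH: a single bundled message from a neighbour re-arms the
        soft state of every sink tree the two routers share."""
        for root, bandwidth in refresh["reservations"].items():
            self.tree_state[root] = bandwidth
        # ...and the per-neighbour expiration timer would be restarted here.

    def capacity_available(self, bandwidth):
        return True                            # placeholder admission check
```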
Comparison of BGRP vs. RSVP
• Probing:
– BGRP PROBE vs. RSVP PATH
– Stateless vs. Stateful [O(N²)]
• Reserving:
– BGRP GRAFT vs. RSVP RESV
– State-light [O(N)] vs. Stateful [O(N²)]
– Aggregated vs. Shared
• Refreshing:
– Explicit vs. Implicit
– Bundled vs. Unbundled
26
BGRP Enhancements
Keeping Our
Reservation Tree
Beautiful Despite:
• Flapping leaves
• Rushing sap
• Broken branches
27
Problem: Flapping Leaves
• Over-reservation
• Quantization
• Hysteresis
28
Problem: Rushing Sap
• CIDR Labeling
• Quiet Grafting
29
Quiet Grafting: 1st Branch
[Figure: quiet grafting, first branch; the leaf requests 6 units, but 10 units are reserved on the hops toward the root S3, leaving spare capacity for later branches]
30
Quiet Grafting: 2nd Branch
[Figure: quiet grafting, second branch; a new 3-unit branch is absorbed into the existing 10-unit reservation without signalling all the way to the root]
31
Quiet Grafting: Complete
[Figure: quiet grafting complete; the branches carry 6 and 3 units while the shared links toward S3 keep their 10-unit reservation (see the sketch below)]
32
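One way to read the quiet-grafting figures above, as a rough sketch: a border router over-reserves toward the sink-tree root (10 units when only 6 are in use), so a later branch can be admitted locally without any message travelling to the root. The class and the fixed headroom value are assumptions made for illustration.

```python
class QuietGraftingRouter:
    """Sketch: over-reserve toward each sink-tree root so later branches
    can be grafted quietly, without signalling toward the root."""

    def __init__(self, headroom=4):
        self.reserved = {}      # tree_root -> bandwidth reserved toward the root
        self.in_use = {}        # tree_root -> bandwidth handed out to branches
        self.headroom = headroom

    def graft_branch(self, tree_root, bandwidth):
        """Return True if the branch was grafted quietly (no message to the root)."""
        reserved = self.reserved.get(tree_root, 0)
        in_use = self.in_use.get(tree_root, 0)
        if in_use + bandwidth <= reserved:
            self.in_use[tree_root] = in_use + bandwidth     # quiet graft
            return True
        # Not enough spare capacity: reserve toward the root, again with headroom.
        self.reserved[tree_root] = in_use + bandwidth + self.headroom
        self.in_use[tree_root] = in_use + bandwidth
        return False

r = QuietGraftingRouter()
print(r.graft_branch("S3", 6))   # False: first branch must signal toward S3 (10 units reserved)
print(r.graft_branch("S3", 3))   # True: 6 + 3 <= 10, the second branch grafts quietly
```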
Problem: Broken Branches
• Self-Healing
• Filtering Route Changes
33
Performance Evaluation
Show BGRP benefits
as function of:
• Region Size
• Topology
• Traffic Load
• Refresh Rate
• Quantum Size
34
Flow Counts vs. Region Size
Time Interval   Region Granularity   # Source-Destination Pairs   # Destinations   Ratio
                                     (For RSVP)                   (For BGRP)
90 sec.         Application          339,245                      208,559          1.6
90 sec.         Host                 131,009                      40,538           3.2
90 sec.         Network              79,786                       20,887           3.8
90 sec.         Domain               20,857                       2,891            7.2
1 month         Domain               7,900,362                    5,001            1,579.8
35
Flow Counts vs. Region Size
• Assume reserv’ns become popular.
• Aggregation is needed !
• Region-based aggregation works.
• BGRP helps most when:
– Aggregating Region is Large.
– Reserv’n Holding Time is Long.
• Theoretical “N vs. N²” problem is real!
36
Number of Flows
(broken down by BW)
37
BGRP / RSVP Gain for each BW Class
38
Modeling the Topological Distribution of Demand
[Figure: chain model of an inter-domain path with source domains s_0 … s_D, links l_0 … l_D, transit nodes n_0 … n_D, and destination domains d_0 … d_D]
3 distributions: Flat, Hierarchical, Selected Source
39
Reservation Count vs. Link Number
40
Reservation Count vs. Node Number
41
Gain: N_RSVP / N_BGRP
42
Reservation Count vs. Traffic Load
• Model for given hop H:
– P paths thru H
– T sink trees thru H
– ρ micro-flows per path (Poisson: λ, μ, ρ)
• # RSVP reserv’ns = (1 - e^(-ρ)) · P
• # BGRP reserv’ns = (1 - e^(-ρ·P/T)) · T
• BGRP helps most for large ρ
– Gain ~ P / T
• Graph: P = 100,000, T = 1,000 (see the check below)
43
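A quick numerical check of the two expressions above with the slide's P = 100,000 and T = 1,000 (the ρ values swept here are chosen only for illustration):

```python
from math import exp

P, T = 100_000, 1_000                       # paths and sink trees through the hop

def rsvp_reservations(rho):
    return (1 - exp(-rho)) * P              # at most one reservation per path

def bgrp_reservations(rho):
    return (1 - exp(-rho * P / T)) * T      # one per sink tree, which carries rho*P/T flows

for rho in (0.01, 0.1, 1.0, 10.0):
    n_rsvp, n_bgrp = rsvp_reservations(rho), bgrp_reservations(rho)
    print(f"rho={rho:5}: RSVP={n_rsvp:9.0f}  BGRP={n_bgrp:7.0f}  gain={n_rsvp / n_bgrp:6.1f}")

# As rho grows, the gain approaches P / T = 100.
```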
Reservation Count vs. Traffic Load
44
Message Rate vs. Refresh Rate
• Model for given hop H:
– P paths thru H
– T sink trees thru H
– ρ micro-flows per path (Poisson: λ, μ, ρ)
– η refresh rate
• RSVP msg rate = 3λ·P + 2η·P·(1 - e^(-ρ))
• BGRP msg rate = 3λ·P + 2η·T·(1 - e^(-ρ·P/T))
• BGRP helps most for η >> λ, ρ >> 1
– Gain ~ P / T
• Graph: P = 100K, T = 1,000, ρ = 10, λ = 0.001 (see the check below)
45
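The same kind of check for the message-rate model, using the parameters from the slide above (P = 100K, T = 1,000, ρ = 10, λ = 0.001) and sweeping the refresh rate η:

```python
from math import exp

P, T, rho, lam = 100_000, 1_000, 10.0, 0.001     # parameters from the slide above

def rsvp_msg_rate(eta):
    return 3 * lam * P + 2 * eta * P * (1 - exp(-rho))

def bgrp_msg_rate(eta):
    return 3 * lam * P + 2 * eta * T * (1 - exp(-rho * P / T))

for eta in (0.001, 0.01, 0.1, 1.0, 10.0):        # refresh rate
    r, b = rsvp_msg_rate(eta), bgrp_msg_rate(eta)
    print(f"eta={eta:6}: RSVP={r:11.1f}  BGRP={b:9.1f}  gain={r / b:5.1f}")

# Once the refresh rate dominates the flow arrival rate (eta >> lambda),
# the gain approaches P / T = 100.
```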
Message Rate vs. Refresh Rate
46
Message Reduction vs. Quantum Size
• Single hop H (tree leaf)
• ρ micro-flows on H (birth/death, Poisson)
• Each micro-flow needs 1 unit of BW
• H manages aggregate BW reserv’n
• Quantization: reserv’n must be a multiple k·Q
• Hysteresis: descent lags by Q (see the sketch below)
47
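A small sketch of those two rules at a single leaf: the aggregate reservation is always a multiple of Q, rises as soon as demand exceeds it, and is lowered only after demand has fallen a full quantum behind it. The event trace and the release-everything-at-zero behaviour are illustrative assumptions.

```python
import math

class QuantizedReservation:
    """Aggregate reservation held in multiples of Q with hysteresis on release."""

    def __init__(self, Q):
        self.Q = Q
        self.demand = 0       # active unit-bandwidth micro-flows
        self.reserved = 0     # current aggregate reservation (a multiple of Q)
        self.messages = 0     # signalling messages sent upstream

    def _set(self, level):
        if level != self.reserved:
            self.reserved = level
            self.messages += 1            # changing the reservation costs one message

    def arrive(self):
        self.demand += 1
        if self.demand > self.reserved:                      # quantization: round up to k*Q
            self._set(math.ceil(self.demand / self.Q) * self.Q)

    def depart(self):
        self.demand -= 1
        if self.demand == 0:
            self._set(0)                                     # assumed: release all when idle
        elif self.reserved - self.demand > self.Q:           # hysteresis: descent lags by Q
            self._set(self.reserved - self.Q)

r = QuantizedReservation(Q=3)
for _ in range(7):
    r.arrive()                # 7 arrivals: reservation steps 3 -> 6 -> 9
for _ in range(7):
    r.depart()                # 7 departures: reservation steps 6 -> 3 -> 0
print(r.messages)             # 6 messages instead of 14 per-flow updates
```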
Quantization with Hysteresis
[Figure: state transition diagram for Q = 3; each state (n, R) pairs the number of active micro-flows n with the quantized reservation R (0, 3, 6, 9, 12); arrivals occur at rate λ and departures at rate nμ, and R steps up or down by Q only at the boundary states, so most flow arrivals and departures trigger no signalling]
48
Message Reduction vs. Quantum Size
• Closed-form expression for state probabilities
• Quantization & Hysteresis cut the message rate by the factor:
  e^(-ρ) · Σ_{k=1..∞} [ ρ^(kQ) / Σ_{i=0..Q-1} ρ^i · (kQ - i)! ]
• E.g., ρ = 100 & Q = 10: message rate cut by 100 (see the check below)
• Multi-hop model with Quiet Grafting:
– Further improvement
– Approximate analysis
– Simulation
49
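A numerical check of the closed-form reduction factor as reconstructed above: for ρ = 100 and Q = 10 it evaluates to roughly 0.01, consistent with the 100-fold cut quoted on the slide.

```python
from fractions import Fraction
from math import exp, factorial

def reduction_factor(rho, Q, k_max=60):
    """Factor by which quantization (step Q) plus hysteresis cuts the
    message rate at a leaf with Poisson load rho, per the closed-form
    expression above (exact integer arithmetic, truncated at k_max)."""
    total = Fraction(0)
    for k in range(1, k_max + 1):
        denom = sum(rho**i * factorial(k * Q - i) for i in range(Q))
        total += Fraction(rho**(k * Q), denom)
    return exp(-rho) * float(total)

print(reduction_factor(100, 10))   # ~0.009, i.e. the message rate is cut ~100-fold
```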
Message Reduction vs. Load & Quantum Size
50
Conclusions
BGRP meets Challenges
• Scalable Protocol State
• Scalable Protocol Processing
• Scalable Protocol Bandwidth
• Scalable Data Forwarding
• Inter-Domain Administration
51
Future Work
• Detailed Protocol Specification
• Simulation
• Reference Implementation
• MPLS
• Lucent products
• Internet 2 (Q-bone)
• IETF: Draft; BOF; Working Group
52
Future Work: Bandwidth Broker Model
[Figure: bandwidth broker model; each regional ISP (A, B, C) and the backbone network has a bandwidth broker (BB), first-hop routers (RT) connect end users, and border routers (BR) link the domains over DiffServ trunks]
53