Tesseract*: A 4D Network Control Plane
Hong Yan, David A. Maltz, T. S. Eugene Ng,
Hemant Gogineni, Hui Zhang, Zheng Cai

*A tesseract is a 4-dimensional cube.
Ideally…

 Manage the network in a simple way
 Directly and explicitly apply policies to the network

[figure: the operator issues high-level commands such as "Split load between S5 and S6 on May 1" and "Shut down S6 for maintenance"; a decision maker with an accurate view of the network (switches S1-S6 and the Internet link) pushes the corresponding forwarding state to the switches]
Indirect Control - Fact #1:
Infer the network view by reverse engineering

 Probe routers to fetch configuration
 Monitor control traffic (e.g., LSAs, BGP updates)

[figure: the operator probes routers S1-S6 and guesses at the network view; every router's state is marked with a question mark]
Indirect Control - Fact #2:
Policies buried in box-centric configuration

 Many knobs to tune
 Trial and error

[figure: the operator issues configuration commands such as "Modify routing policies on S2, S3, S4…" and "Change OSPF link weights on S2, S3, S4…", while still probing routers S1-S6 and guessing at the network view]
Complex configuration is error-prone
and causes network outages

interface Ethernet0
ip address 6.2.5.14 255.255.255.128
interface Serial1/0.5 point-to-point
ip address 6.2.2.85 255.255.255.252
ip access-group 143 in
frame-relay interface-dlci 28
access-list 143 deny 1.1.0.0/16
access-list 143 permit any
route-map 8aTzlvBrbaW deny 10
match ip address 4
route-map 8aTzlvBrbaW permit 20
match ip address 7
ip route 10.2.2.1/16 10.2.1.7
router ospf 64
redistribute connected subnets
redistribute bgp 64780 metric 1 subnets
network 66.251.75.128 0.0.0.127 area 0
router bgp 64780
redistribute ospf 64 match route-map 8aTzlvBrbaW
neighbor 66.253.160.68 remote-as 12762
neighbor 66.253.160.68 distribute-list 4 in
Indirect Control - Fact #3:
Indirect control creates subtle dependencies

 Example:
   Policy #1: use C as the egress point for traffic from AS X
   Policy #2: enable ECMP for the A-C flow

[figure: topology with routers A, B, C, D and link weights between AS X and AS Y, showing the desired egress through C and an unexpected path change once ECMP is enabled]
Direct Control: A New World

 Express goals explicitly
   Security policies, QoS, egress point selection
   Do not bury goals in box-specific configuration
   Make policy dependencies explicit
 Design the network to provide a timely and accurate view
   Topology, traffic, resource limitations
   Give the decision maker the inputs it needs
 Decision maker computes and pushes the desired network state
   FIB entries, packet filters, queuing parameters
 Simplify router functionality
   Add new functions without modifying/creating protocols or upgrading routers
How can we get there? 4D

 Decision: computation service that generates table entries
 Dissemination: service that installs table entries
 Discovery
 Data plane: modeled as a set of tables
   Routing table, access control table, NAT table, tunnel table

[figure: the four planes stacked, with table entries flowing from the decision plane through the dissemination plane down into the data plane's tables]
Tesseract: A 4D System

[figure: two Decision Elements connected to routers R1 and R2 through the discovery and dissemination services]
Bootstrapping Dissemination

[figure: DE1's beacon floods hop by hop, each switch appending its own ID ("Beac1: DE1" → "Beac1: DE1 R3" → "Beac1: DE1 R3 R2" → "Beac1: DE1 R3 R2 R4" → "Beac1: DE1 R3 R2 R4 R5"), eventually reaching DE2]

 DE beacons establish the control topology
 LSAs flow back from routers over the control topology
 After a link/switch crash, the next beacon heals the topology
Making Decisions

 The DE's input includes TE goals and a reachability matrix
 The DE creates tables for each router (FIB, filters)
 Tables are source-routed to their destinations via the dissemination service

Example: R2's routing table
10.0.1/24: R3
10.0.2/24: R5
10.0.3/24: eth0
0/0: R5
Decision/Dissemination Interface

Calls between the decision plane and the dissemination plane:
• Flood (pkt)
• Send (pkt, dst)
• RegisterUpCall (*fun)
• LinkFailure (link)
• PreferredRoute (dst, route)

[figure: LSAs flow from router R1 up to DE1 over the dissemination plane]
Reusable Decision Algorithms

Code snippet: Floyd-Warshall (a cost of -1 means "no path")

for (unsigned k = 0; k < num; k++)
  for (unsigned i = 0; i < num; i++)
    for (unsigned j = 0; j < num; j++) {
      if (CostMatrix[i][k] != -1 && CostMatrix[k][j] != -1)
        if (CostMatrix[i][j] == -1 ||
            CostMatrix[i][j] > CostMatrix[i][k] + CostMatrix[k][j]) {
          CostMatrix[i][j] = CostMatrix[i][k] + CostMatrix[k][j];
          FirstHopMatrix[i][j] = FirstHopMatrix[i][k];
          LastHopMatrix[i][j] = LastHopMatrix[k][j];
        }
    }
DE Robustness

 All DEs send beacons
 Routers send state updates to all DEs on the network
 DEs can see each other's beacons
 The DE with the lowest ID is the only one that writes configs to routers
 If the active DE crashes, its beacons stop
   The next-highest-ranking DE takes over

[figure: DE1 beacons "DE1 is alive"; DE2 concludes "DE1 is boss" while the beacons keep arriving, then "heard DE1 too long ago, I'm becoming boss" once they stop]
Evaluation

 Emulab
 Topologies
   Rocketfuel backbone network (114 nodes, 190 links) with a maximum round-trip delay of 250 ms
   Production enterprise network (40 nodes, 60 links)
Routing Convergence Experiments

 On both backbone and enterprise topologies
 Failure scenarios
   Single link failures
   Single node failures
   Regional failures for the backbone (failing all nodes in one city)
   Link flapping
Tesseract versus Aggressively Tuned OSPF (Fast OSPF)

[figure: Tesseract vs. Fast OSPF, enterprise network, switch failures]

[figure: Fast OSPF vs. Tesseract, backbone network, switch failures]

[figure: Fast OSPF vs. Tesseract, backbone network, regional failures]
Microbenchmark Experiments

 A subset of Rocketfuel topologies with varying sizes
 Independently fail each link
 Measure:
   DE computation time
   Control traffic volume

[figure: DE computation time]

[figure: control traffic volume]
Tesseract Applications

 Joint control of packet routing and filtering
   Problem: dynamic routing but static packet-filter placement
   Solution: in addition to computing routes, the DE computes filter placement based on a reachability matrix
 Link-cost-driven Ethernet switching
   Problem: spanning-tree switching makes inefficient use of available links
   Solution: the DE computes both a spanning tree and shortest paths
Link Cost Driven Ethernet Switching: Multi-Tree

[figure: multi-tree switching topology]
Revisiting: Randomized Equal-Cost Shortest-Path Selection

The same Floyd-Warshall loop, extended with a random tie-break so that equal-cost shortest paths are chosen at random:

for (unsigned k = 0; k < num; k++)
  for (unsigned i = 0; i < num; i++)
    for (unsigned j = 0; j < num; j++) {
      if (CostMatrix[i][k] != -1 && CostMatrix[k][j] != -1)
        if (CostMatrix[i][j] == -1 ||
            CostMatrix[i][j] > CostMatrix[i][k] + CostMatrix[k][j] ||
            (CostMatrix[i][j] == CostMatrix[i][k] + CostMatrix[k][j]
             && rand() > RAND_MAX / 2)) {
          CostMatrix[i][j] = CostMatrix[i][k] + CostMatrix[k][j];
          FirstHopMatrix[i][j] = FirstHopMatrix[i][k];
          LastHopMatrix[i][j] = LastHopMatrix[k][j];
        }
    }
Link Cost Driven Ethernet Switching: Multi-Tree

[figure: multi-tree switching topology]

Throughput Comparison

[figure: throughput comparison]
Related Work

 Separation of forwarding elements and control elements
   IETF: ForCES, GSMP, GMPLS
   SoftRouter [Lakshman]
 Centralization of decision-making logic
   RCP [Feamster], SANE [Casado]
 Alternative frameworks for network control
   Tempest [Rooney], FIRE [Partridge]
Summary

 Direct control is desirable
   Makes sophisticated control policies easier to understand and deploy
   Simplifies router software
   Enables easy innovation
 Direct control is implementable
   Tesseract as proof of concept
   Sufficiently scalable
   Fast convergence
Future Work

 Formulate models that establish the bounds of Tesseract
   Scale, latency, stability, failure models, objectives
 Structuring decision logic
   Arbitrate among multiple, potentially competing objectives
   Unify control when some logic takes longer than others
 Better dissemination planes
   Protocol improvements
 Tesseract Router
 Deployment in today's networks
   Data center, enterprise, campus, backbone
Reality

 Indirect control with a primitive configuration interface

[figure: operators reverse-engineer the routing logic from TE/security policy, convert it to control-plane configuration, and push config commands; each router holds a configuration file driving EIGRP, BGP, and OSPF, which in turn populate its access control, NAT, tunnel, and forwarding tables]
Link Cost Driven Ethernet Switching: Mesh

[figure: mesh switching topology]

Effects of Switch Failure on Aggregated Throughputs

[figure: aggregated throughput under switch failures]