csci5211: Computer Networks and Data Communications
Overlay, End System Multicast, and i3
• General Concept of Overlays
• Some Examples
• End-System Multicast
– Rationale
– How to construct a “self-organizing” overlay
– Performance in supporting conferencing applications
• Internet Indirection Infrastructure (i3)
– Motivation and Basic ideas
– Implementation Overview
– Applications
Readings: read the required papers
Overlay Networks
[Figure: logical overlay links built on top of the physical network; the focus is at the application level]
Overlay Networks
• A logical network built on top of a physical
network
– Overlay links are tunnels through the underlying network
• Many logical networks may coexist at once
– Over the same underlying network
– Each providing its own particular service
• Nodes are often end hosts
– Acting as intermediate nodes that forward traffic
– Providing a service, such as access to files
• Who controls the nodes providing service?
– The party providing the service (e.g., Akamai)
– Distributed collection of end users (e.g., peer-to-peer)
Routing Overlays
• Alternative routing strategies
– No application-level processing at the overlay nodes
– Packet-delivery service with new routing strategies
• Incremental enhancements to IP
– IPv6
– Multicast
– Mobility
– Security
• Revisiting where a function belongs
– End-system multicast: multicast distribution by end
hosts
• Customized path selection
– Resilient Overlay Networks: robust packet delivery
IP Tunneling
• IP tunnel is a virtual point-to-point link
– Illusion of a direct link between two separated nodes
[Figure: logical view: hosts A–B, a “tunnel” link from B to E, then E–F; physical view: the tunnel between B and E actually traverses the routers of the underlying network]
• Encapsulation of the packet inside an IP
datagram
– Node B sends a packet to node E
– … containing another packet as the payload
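To make the encapsulation concrete, here is a minimal sketch of the idea in Python (my illustration, using a toy 8-byte header rather than a real IPv4 header):

```python
def make_packet(src: str, dst: str, payload: bytes) -> bytes:
    """Build a toy packet: 4-byte src, 4-byte dst, then payload.
    (Real IPv4 headers carry many more fields; this is only the idea.)"""
    def ip2bytes(ip):
        return bytes(int(x) for x in ip.split("."))
    return ip2bytes(src) + ip2bytes(dst) + payload

# Inner packet: A -> F, carrying application data
inner = make_packet("10.0.0.1", "10.0.3.4", b"hello")

# Tunnel entry B encapsulates: outer packet B -> E whose payload
# is the entire inner packet
outer = make_packet("10.0.1.2", "10.0.2.3", inner)

# Tunnel exit E decapsulates: strip the outer header (8 bytes here)
# and forward the recovered inner packet toward F
recovered = outer[8:]
assert recovered == inner
```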
6Bone: Deploying IPv6 over IPv4
[Figure: logical view: IPv6 nodes A–B–E–F with an IPv6-over-IPv4 tunnel between B and E; physical view: IPv4 routers C and D sit between B and E. A-to-B: a native IPv6 packet (Flow: X, Src: A, Dest: F, data). B-to-E: the same IPv6 packet carried inside an IPv4 packet (Src: B, Dest: E). E-to-F: native IPv6 again.]
MBone: IP Multicast
• Multicast
– Delivering the same data to many receivers
– Avoiding sending the same data many times
[Figure: unicast sends a separate copy of the data to each receiver; multicast sends one copy that is replicated inside the network]
• IP multicast
– Special addressing, forwarding, and routing schemes
– Not widely deployed, so MBone tunneled between nodes
End-System Multicast
• IP multicast still is not widely deployed
– Technical and business challenges
– Should multicast be a network-layer service?
• Multicast tree of end hosts
– Allow end hosts to form their own multicast tree
– Hosts receiving the data help forward to others
RON: Resilient Overlay Networks
Premise: by building an application-level overlay network, one can increase the performance and reliability of routing
[Figure: overlay nodes at Berkeley, Princeton, and Yale act as application-layer routers; a two-hop (application-level) Berkeley-to-Princeton route detours through an intermediate overlay node]
RON Can Outperform IP Routing
• IP routing does not adapt to congestion
– But RON can reroute when the direct path is congested
• IP routing is sometimes slow to converge
– But RON can quickly direct traffic through intermediary
• IP routing depends on AS routing policies
– But RON may pick paths that circumvent policies
• Then again, RON has its own overheads
– Packets go in and out at intermediate nodes
• Performance degradation, load on hosts, and financial cost
– Probing overhead to monitor the virtual links
• Limits RON to deployments with a small number of nodes
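As an illustration of the path-selection idea (a sketch, not RON's actual algorithm or code; the latencies are made up), probing each virtual link and picking the best one-hop detour looks like this:

```python
# Measured latency (ms) of each overlay "virtual link" (hypothetical numbers)
probe = {
    ("Berkeley", "Princeton"): 120,   # direct path is congested
    ("Berkeley", "Yale"): 40,
    ("Yale", "Princeton"): 35,
}

def link(a, b):
    return probe.get((a, b)) or probe.get((b, a)) or float("inf")

def best_path(src, dst, nodes):
    """Pick the direct path vs. the best one-hop detour
    (RON routes through at most one intermediary)."""
    best = ([src, dst], link(src, dst))
    for mid in nodes:
        if mid in (src, dst):
            continue
        cost = link(src, mid) + link(mid, dst)
        if cost < best[1]:
            best = ([src, mid, dst], cost)
    return best

print(best_path("Berkeley", "Princeton", {"Berkeley", "Yale", "Princeton"}))
# (['Berkeley', 'Yale', 'Princeton'], 75) -- the detour beats the direct 120 ms
```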
Secure Communication Over Insecure
Links
• Encrypt packets at entry and decrypt at exit
• Eavesdropper cannot snoop the data
• … or determine the real source and destination
Communicating With Mobile Users
• A mobile user changes locations frequently
– So, the IP address of the machine changes often
• The user wants applications to continue
running
– So, the change in IP address needs to be hidden
• Solution: fixed gateway forwards packets
– Gateway has a fixed IP address
– … and keeps track of the mobile’s address changes
[Figure: a mobile host reaches www.cnn.com through a fixed gateway, which forwards packets as the mobile host's IP address changes]
Unicast Emulation of Multicast
[Figure: end systems at Stanford, Gatech, CMU, and Berkeley connected by routers; the sender transmits a separate unicast copy to each receiver, so duplicate packets cross shared physical links]
IP Multicast
[Figure: routers with multicast support replicate packets inside the network toward receivers at Gatech, Stanford, CMU, and Berkeley]
• No duplicate packets
• Highly efficient bandwidth usage
Key Architectural Decision: Add support for multicast in the IP layer
Key Concerns with IP Multicast
• Scalability with number of groups
– Routers maintain per-group state
– Analogous to per-flow state for QoS guarantees
– Aggregation of multicast addresses is complicated
• Supporting higher level functionality is difficult
– IP Multicast: best-effort multi-point delivery service
– End systems responsible for handling higher level functionality
– Reliability and congestion control for IP Multicast complicated
• Deployment is difficult and slow
– ISPs are reluctant to turn on IP Multicast
End System Multicast
[Figure: end hosts Stan1 and Stan2 (Stanford), Berk1 and Berk2 (Berkeley), CMU, and Gatech form an overlay tree; hosts receiving the data forward it to others over unicast links]
Potential Benefits
• Scalability
– Routers do not maintain per-group state
– End systems do, but they participate in very few groups
• Easier to deploy
• Potentially simplifies support for higher level
functionality
– Leverage computation and storage of end systems
– For example, for buffering packets, transcoding, ACK aggregation
– Leverage solutions for unicast congestion control and reliability
Design Questions
• Is End System Multicast Feasible?
• Target applications with small and sparse
groups
• How to Build Efficient Application-Layer
Multicast “Tree” or Overlay Network?
– Narada: A distributed protocol for constructing efficient
overlay trees among end systems
– Simulation and Internet evaluation results to
demonstrate that Narada can achieve good performance
Performance Concerns
[Figure: two problem cases for an overlay tree over CMU, Stan1, Stan2, Berk1, Berk2, and Gatech: a poorly chosen tree increases the delay from CMU to Berk1, and duplicate copies of a packet on one physical link waste bandwidth]
What is an efficient overlay tree?
• The delay between the source and receivers is small
• Ideally,
– The number of redundant packets on any physical link is low
Heuristic used:
– Every member in the tree has a small degree
– Degree chosen to reflect the bandwidth of the member's connection to the Internet
[Figure: three candidate overlay trees over CMU, Stan1, Stan2, Berk1, Berk2, and Gatech: one with high latency, one with high degree (essentially unicast from the source), and an “efficient” overlay with both small degree and small delay]
Why is self-organization hard?
• Dynamic changes in group membership
– Members may join and leave dynamically
– Members may die
• Limited knowledge of network conditions
– Members do not know delay to each other when they join
– Members probe each other to learn network related information
– Overlay must self-improve as more information becomes available
• Dynamic changes in network conditions
– Delay between members may vary over time due to congestion
Narada Design
Step 1: “Mesh”: a richer overlay that may have cycles and includes all group members
• Members have low degrees
• Shortest-path delay between any pair of members along the mesh is small
Step 2: Source-rooted shortest-delay spanning trees of the mesh
• Constructed using well-known routing algorithms
– Members have low degrees
– Small delay from source to receivers
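To illustrate Step 2 (a sketch, not Narada's implementation; the mesh delays are invented), running Dijkstra's algorithm on the mesh yields a source-rooted shortest-delay tree:

```python
import heapq

# Mesh as an adjacency map: neighbor -> link delay in ms (made-up numbers)
mesh = {
    "CMU":   {"Stan1": 30, "Berk1": 35},
    "Stan1": {"CMU": 30, "Stan2": 5, "Berk1": 20},
    "Stan2": {"Stan1": 5, "Berk2": 25},
    "Berk1": {"CMU": 35, "Stan1": 20, "Berk2": 3, "Gatech": 40},
    "Berk2": {"Berk1": 3, "Stan2": 25},
    "Gatech": {"Berk1": 40},
}

def shortest_delay_tree(source):
    """Dijkstra over the mesh; returns each member's parent in the
    source-rooted shortest-delay spanning tree."""
    dist, parent = {source: 0}, {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue          # stale heap entry
        for v, w in mesh[u].items():
            if v not in dist or d + w < dist[v]:
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return parent

print(shortest_delay_tree("CMU"))
# data flows from each member's parent toward it, e.g. CMU -> Stan1 -> Stan2
```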
[Figure: left, the mesh over CMU, Stan1, Stan2, Berk1, Berk2, and Gatech; right, a source-rooted shortest-delay spanning tree extracted from it]
Narada Components
• Mesh Management:
– Ensures mesh remains connected in face of membership changes
• Mesh Optimization:
– Distributed heuristics for ensuring shortest path delay between
members along the mesh is small
• Spanning tree construction:
– Routing algorithms for constructing data-delivery trees
– Distance-vector routing and reverse-path forwarding
Optimizing Mesh Quality
[Figure: a poor overlay topology over CMU, Stan1, Stan2, Berk1, Gatech1, and Gatech2]
• Members periodically probe other members at
random
• New Link added if
Utility Gain of adding link > Add Threshold
• Members periodically monitor existing links
• Existing Link dropped if
Cost of dropping link < Drop Threshold
The terms defined
• Utility gain of adding a link based on
– The number of members to which routing delay improves
– How significant the improvement in delay to each member is
• Cost of dropping a link based on
– The number of members to which routing delay increases,
for either neighbor
• Add/Drop Thresholds are functions of:
– Member’s estimation of group size
– Current and maximum degree of member in the mesh
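The following sketch shows the shape of these rules; the utility and cost functions here are hypothetical simplifications of the forms defined in the Narada paper:

```python
def utility_gain(old_delay, new_delay):
    """Utility of adding a link: for each member whose routing delay
    improves, add the fractional improvement (hypothetical form)."""
    gain = 0.0
    for m, old in old_delay.items():
        new = new_delay.get(m, old)
        if new < old:
            gain += (old - new) / old
    return gain

def drop_cost(old_delay, new_delay):
    """Cost of dropping a link: number of members to which routing
    delay increases, evaluated for either endpoint."""
    return sum(1 for m, old in old_delay.items()
               if new_delay.get(m, old) > old)

# Delays (ms) from one member to the rest, without and with a candidate link
before = {"CMU": 80, "Stan1": 60, "Gatech1": 90}
after  = {"CMU": 40, "Stan1": 55, "Gatech1": 90}

ADD_THRESHOLD = 0.3   # in Narada this depends on group size and degree
if utility_gain(before, after) > ADD_THRESHOLD:
    print("add link")  # gain = 0.5 + 0.083 -> add the probed link
```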
Desirable properties of heuristics
• Stability: A dropped link will not be immediately re-added
• Partition Avoidance: A partition of the mesh is unlikely to
be caused as a result of any single link being dropped
[Figure: two probe examples: in the first, delay improves to Stan1 and CMU but only marginally, so the link is not added; in the second, delay improves to CMU and Gatech1 significantly, so the link is added]
[Figure: a link used by Berk1 only to reach Gatech2 (and vice versa) is dropped, yielding an improved mesh]
Performance Metrics
• Delay between members using Narada
• Stress, defined as the number of identical copies of a
packet that traverse a physical link
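For example, stress can be computed by mapping each overlay hop onto the physical links it crosses and counting copies per link (a sketch; the physical paths are hypothetical):

```python
from collections import Counter

# Physical route taken by each overlay hop (hypothetical paths)
physical_path = {
    ("CMU", "Stan1"):   ["cmu-r1", "core1", "stan-r1"],
    ("Stan1", "Berk1"): ["stan-r1", "core2", "berk-r1"],
    ("Stan1", "Berk2"): ["stan-r1", "core2", "berk-r1"],  # shares links
}

def link_stress(overlay_hops):
    """Stress of a physical link = number of identical packet copies
    crossing it = number of overlay hops routed over it."""
    stress = Counter()
    for hop in overlay_hops:
        path = physical_path[hop]
        for a, b in zip(path, path[1:]):
            stress[(a, b)] += 1
    return stress

print(link_stress(list(physical_path)))
# ('stan-r1', 'core2') and ('core2', 'berk-r1') each have stress 2
```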
[Figure: the example topology annotated with both metrics: the overlay increases the delay from CMU to Berk1, and two identical copies of a packet on one physical link give Stress = 2]
Factors affecting performance
• Topology Model
– Waxman Variant
– Mapnet: Connectivity modeled after several ISP backbones
– ASMap: Based on inter-domain Internet connectivity
• Topology Size
– Between 64 and 1024 routers
• Group Size
– Between 16 and 256
• Fanout range
– Number of neighbors each member tries to maintain in the mesh
ESM Conclusions
• Proposed in 1989, IP Multicast is not yet widely deployed
– Per-group state, control state complexity and scaling
concerns
– Difficult to support higher layer functionality
– Difficult to deploy, and to get ISPs to turn on IP Multicast
• Is IP the right layer for supporting multicast functionality?
• For small-sized groups, an end-system overlay approach
– is feasible
– has a low performance penalty compared to IP Multicast
– has the potential to simplify support for higher layer
functionality
– allows for application-specific customizations
Supporting Conferencing in ESM
[Figure: a 2 Mbps source feeds an overlay tree of hosts A–D; each overlay link runs unicast congestion control, and transcoding reduces the stream to 0.5 Mbps for a DSL-limited receiver]
• Framework
– Unicast congestion control on each overlay link
– Adapt to the data rate using transcoding
• Objective
– High bandwidth and low latency to all receivers along the overlay
Enhancements of Overlay Design
• Two new issues addressed
– Dynamically adapt to changes in network conditions
– Optimize overlays for multiple metrics
• Latency and bandwidth
• Study in the context of the Narada protocol
– Techniques presented apply to all self-organizing protocols
Adapt to Dynamic Metrics
• Adapt overlay trees to changes in network condition
– Monitor bandwidth and latency of overlay links
• Link measurements can be noisy
– Aggressive adaptation may cause overlay instability
[Figure: bandwidth of an overlay link over time: react to persistent changes, not to transient ones; the raw estimate, the smoothed estimate, and the discretized estimate are shown]
• Capture the long term performance of a link
– Exponential smoothing, Metric discretization
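A small sketch of both steps (the smoothing constant and discretization step are assumed values, not the paper's):

```python
def smooth(samples, alpha=0.1):
    """Exponential smoothing: est = (1 - alpha) * est + alpha * sample."""
    est = samples[0]
    for s in samples[1:]:
        est = (1 - alpha) * est + alpha * s
        yield est

def discretize(estimate, step=0.5):
    """Round the smoothed estimate to a coarse grid (Mbps) so small
    fluctuations do not trigger overlay changes."""
    return round(estimate / step) * step

raw = [2.0, 1.9, 2.1, 0.6, 2.0, 2.05, 1.95]   # one transient dip
for est in smooth(raw):
    print(discretize(est))
# prints 2.0 every time: the transient 0.6 Mbps sample never
# changes the discretized estimate, so the overlay stays stable
```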
Optimize Overlays for Dual Metrics
[Figure: receiver X can reach the 2 Mbps source over a 60 ms, 2 Mbps path or a 30 ms, 1 Mbps path; prioritizing bandwidth picks the 60 ms path]
• Prioritize bandwidth over latency
• Break tie with shorter latency
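Written as a sort key, the rule is simply (a sketch):

```python
# Candidate parents for a receiver: (name, bandwidth_mbps, latency_ms)
candidates = [("path-A", 2.0, 60), ("path-B", 1.0, 30)]

# Prioritize bandwidth; break ties with shorter latency.
best = max(candidates, key=lambda c: (c[1], -c[2]))
print(best)   # ('path-A', 2.0, 60): higher bandwidth wins despite latency
```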
Example of Protocol Behavior
[Figure: mean receiver bandwidth over time. All members join at time 0 (single sender, CBR traffic); the protocol first acquires network information and self-organizes, reaches a stable overlay, and later adapts to network congestion]
Evaluation Goals
• Can ESM provide application level
performance comparable to IP Multicast?
• What network metrics must be considered
while constructing overlays?
• What is the network cost and overhead?
Evaluation Overview
• Compare performance of ESM with
– Benchmark (IP Multicast)
– Other overlay schemes that consider fewer network
metrics
• Evaluate schemes in different scenarios
– Vary host set, source rate
• Performance metrics
– Application perspective: latency, bandwidth
– Network perspective: resource usage, overhead
Benchmark Scheme
• IP Multicast not deployed
• Sequential Unicast: an approximation
– Bandwidth and latency of unicast path from source to each
receiver
– Performance similar to IP Multicast with ubiquitous deployment
[Figure: Sequential Unicast: the source measures the unicast path to each of the receivers A, B, and C in turn]
Overlay Schemes
Choice of metrics used by each overlay scheme:
Overlay Scheme      Bandwidth   Latency
Bandwidth-Latency   yes         yes
Bandwidth-Only      yes         no
Latency-Only        no          yes
Random              no          no
Experiment Methodology
• Compare different schemes on the Internet
– Ideally: run different schemes concurrently
– Interleave experiments of the schemes
– Repeat the same experiments at different times of day
– Average results over 10 experiments
• For each experiment
– All members join at the same time
– Single source, CBR traffic
– Each experiment lasts for 20 minutes
Application Level Metrics
• Bandwidth (throughput) observed by each
receiver
• RTT between source and each receiver along
overlay
[Figure: the source sends data along the overlay to receivers A, B, C, and D; RTT is measured between the source and each receiver along the overlay data path]
These measurements include queueing and processing delays at end
systems
Performance of Overlay Scheme
[Figure: two runs (Exp1, Exp2) of the same scheme rooted at CMU produce different trees]
Different runs of the same scheme may produce different but “similar quality” trees
RTT by rank:
Rank   Exp1            Exp2
1      Harvard, 30ms   MIT, 32ms
2      MIT, 40ms       Harvard, 42ms
(take the Mean and Std. Dev. of each rank across experiments)
“Quality” of overlay tree produced by a scheme
• Sort (“rank”) receivers based on performance
• Take mean and std. dev. on performance of same rank across
multiple experiments
• Std. dev. shows variability of tree quality
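Using the numbers from the figure above, the rank statistics can be computed like this (a sketch):

```python
from statistics import mean, stdev

# RTTs (ms) per receiver for two runs of the same scheme
experiments = [
    {"Harvard": 30, "MIT": 40},   # Exp1
    {"MIT": 32, "Harvard": 42},   # Exp2
]

# Sort ("rank") receivers within each experiment by performance,
# then aggregate the same rank across experiments.
ranked = [sorted(exp.values()) for exp in experiments]
for rank, samples in enumerate(zip(*ranked), start=1):
    print(f"rank {rank}: mean={mean(samples):.1f} ms, "
          f"std={stdev(samples):.1f} ms")
# rank 1: mean=31.0 ms, std=1.4 ms
# rank 2: mean=41.0 ms, std=1.4 ms
```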
Factors Affecting Performance
• Heterogeneity of host set
– Primary Set: 13 university hosts in U.S. and Canada
– Extended Set: 20 hosts, which includes hosts in Europe,
Asia, and behind ADSL
• Source rate
– Fewer Internet paths can sustain higher source rate
– More intelligence required in overlay constructions
Three Scenarios Considered
Primary Set, 1.2 Mbps → Primary Set, 2.4 Mbps → Extended Set, 2.4 Mbps
(ordered from lower to higher “stress” on the overlay schemes)
• Does ESM work in different scenarios?
• How do different schemes perform under
various scenarios?
BW, Primary Set, 1.2 Mbps
[Figure: receiver bandwidth under each scheme; one dip is annotated as an Internet pathology]
The naïve scheme performs poorly even in a less “stressful” scenario
Scenarios Considered
Primary Set, 1.2 Mbps → Primary Set, 2.4 Mbps → Extended Set, 2.4 Mbps
(ordered from lower to higher “stress” on the overlay schemes)
• Does an overlay approach continue to work
under a more “stressful” scenario?
• Is it sufficient to consider just a single
metric?
– Bandwidth-Only, Latency-Only
BW, Extended Set, 2.4 Mbps
[Figure: receiver bandwidth under each scheme; the results show no strong correlation between latency and bandwidth]
Optimizing only for latency gives poor bandwidth performance
RTT, Extended Set, 2.4Mbps
[Figure: RTT under each scheme; Bandwidth-Only cannot avoid poor-latency links or long path lengths]
Optimizing only for bandwidth gives poor latency performance
Summary so far…
• For best application performance: adapt
dynamically to both latency and bandwidth
metrics
• Bandwidth-Latency performs comparably to IP
Multicast (Sequential-Unicast)
• What is the network cost and overhead?
Resource Usage (RU)
Captures the consumption of network resources by an overlay tree
• Overlay link RU = propagation delay
• Tree RU = sum of link RUs
[Figure: example with UCSD, CMU, and U. Pitt: an efficient tree (UCSD–CMU 40 ms, CMU–U. Pitt 2 ms) has RU = 42 ms; an inefficient tree with two 40 ms links from UCSD has RU = 80 ms]
Scenario: Primary Set, 1.2 Mbps (RU normalized to IP Multicast):
IP Multicast        1.0
Bandwidth-Latency   1.49
Random              2.24
Naïve Unicast       2.62
Protocol Overhead
Protocol overhead = total non-data traffic (in bytes) / total data traffic (in bytes)
• Results: Primary Set, 1.2 Mbps
– Average overhead = 10.8%
– 92.2% of overhead is due to bandwidth probe
• Current scheme employs active probing for
available bandwidth
– Simple heuristics to eliminate unnecessary probes
– Focus of our current research
Internet Indirection Infrastructure (i3)
Motivations
• Today's Internet is built around a unicast point-to-point communication abstraction:
– Send packet “p” from host “A” to host “B”
• This abstraction allows the Internet to be highly scalable and efficient, but…
• … it is not appropriate for applications that require other communication primitives:
– Multicast
– Anycast
– Mobility
– …
Why?
• Point-to-point communication implicitly assumes there is one sender and one receiver, and that they are placed at fixed and well-known locations
– E.g., a host identified by the IP address 128.32.xxx.xxx is located in Berkeley
IP Solutions
• Extend IP to support new communication
primitives, e.g.,
– Mobile IP
– IP multicast
– IP anycast
• Disadvantages:
– Difficult to implement while maintaining Internet’s scalability
(e.g., multicast)
– Require community-wide consensus, which is hard to achieve in practice
Application Level Solutions
• Implement the required functionality at
the application level, e.g.,
– Application level multicast (e.g., Narada, Overcast,
Scattercast…)
– Application level mobility
• Disadvantages:
– Efficiency hard to achieve
– Redundancy: each application implements the same
functionality over and over again
– No synergy: each application usually implements only one service, and services are hard to combine
Key Observation
• Virtually all previous proposals use
indirection, e.g.,
– Physical indirection point → mobile IP
– Logical indirection point → IP multicast
“Any problem in computer science can
be solved by adding a layer of indirection”
i3 Solution
Build an efficient indirection layer
on top of IP
• Use an overlay network to implement this layer
– Incrementally deployable; don’t need to change IP
[Figure: the protocol stack with an indirection layer inserted between the application/transport layers (TCP/UDP) and IP]
Internet Indirection
Infrastructure (i3): Basic Ideas
• Each packet is associated with an identifier id
• To receive packets with identifier id, receiver R inserts a trigger (id, R) into the overlay network
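A minimal sketch of this rendezvous-based service (my single-process illustration; real i3 stores triggers across a distributed overlay, and the names echo the API on the next slide):

```python
triggers = {}   # id -> receiver address

def insert_trigger(ident: str, receiver: str):
    """Receiver R expresses interest in packets with this identifier."""
    triggers[ident] = receiver

def send_packet(ident: str, data: bytes):
    """Senders address packets to an identifier, not to a host."""
    receiver = triggers.get(ident)
    if receiver is not None:
        deliver(receiver, data)        # forward (R, data) via plain IP

def deliver(receiver, data):
    print(f"to {receiver}: {data!r}")

insert_trigger("id-1", "R")
send_packet("id-1", b"hello")          # -> to R: b'hello'
```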
[Figure: the sender sends packet (id, data) into i3; the stored trigger (id, R) matches it and the packet is forwarded to receiver R as (R, data)]
Service Model
• API
– sendPacket(p);
– insertTrigger(t);
– removeTrigger(t) // optional
• Best-effort service model (like IP)
• Triggers are periodically refreshed by end-hosts
• ID length: 256 bits
Mobility
• Host just needs to update its trigger as it
moves from one subnet to another
[Figure: when the receiver moves from address R1 to R2, it updates its trigger from (id, R1) to (id, R2); the sender keeps sending to id unchanged]
Multicast
• Receivers insert triggers with same identifier
• Can dynamically switch between multicast and
unicast
[Figure: receivers R1 and R2 insert triggers (id, R1) and (id, R2) with the same identifier; a packet (id, data) is replicated and delivered as (R1, data) and (R2, data)]
Anycast
• Use longest prefix matching instead of exact
matching
– Prefix p: anycast group identifier
– Suffix s_i: encodes application semantics, e.g., location
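A sketch of anycast trigger selection by longest prefix match (short string IDs for readability; real i3 identifiers are 256 bits):

```python
triggers = {"p1|s1": "R1", "p1|s2": "R2", "p1|s3": "R3"}   # id -> receiver

def common_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def anycast_match(packet_id: str) -> str:
    """Deliver to the receiver whose trigger id shares the longest
    prefix with the packet id; exact match is a special case."""
    best = max(triggers, key=lambda t: common_prefix_len(t, packet_id))
    return triggers[best]

print(anycast_match("p1|s2"))   # -> R2: the suffix selects one group member
```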
[Figure: receivers R1, R2, R3 insert triggers p|s1, p|s2, p|s3 sharing the group prefix p; a packet sent to p|a is delivered to the single receiver whose trigger matches best, e.g., (R1, data)]
Service Composition: Sender
Initiated
• Use a stack of IDs to encode sequence of
operations to be performed on data path
• Advantages
– Don’t need to configure path
– Load balancing and robustness easy to achieve
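A sketch of forwarding with an ID stack (the trigger table and transform are hypothetical):

```python
triggers = {
    "id_T": "T",     # transcoder's trigger
    "id":   "R",     # receiver's trigger
}

def forward(id_stack, data):
    """Pop the top identifier and look up its trigger; if more IDs
    remain, the matched node processes the data and forwards the rest."""
    head, *rest = id_stack
    dest = triggers[head]
    if rest:                      # more operations remain on the path
        print(f"{dest} processes data, then forwards with stack {rest}")
        forward(rest, transform(dest, data))
    else:
        print(f"deliver {data!r} to {dest}")

def transform(node, data):
    return data.upper() if node == "T" else data

# Sender pushes [id_T, id]: transcode at T, then deliver to R
forward(["id_T", "id"], "data")
```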
[Figure: the sender sends (id_T, id, data); trigger (id_T, T) delivers it to transcoder T, which sends (id, data); trigger (id, R) then delivers the transcoded data to receiver R]
Service Composition: Receiver
Initiated
• Receiver can also specify the operations to be
performed on data
[Figure: receiver R inserts triggers (id, (id_F, R)) and the firewall's trigger (id_F, F) is in place; a packet (id, data) is first redirected through firewall F and then delivered to R as (R, data)]
Quick Implementation
Overview
• i3 is implemented on top of Chord
– But could easily use CAN, Pastry, Tapestry, etc.
• Each trigger t = (id, R) is stored on the
node responsible for id
• Use Chord recursive routing to find best
matching trigger for packet p = (id, data)
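A toy sketch of trigger placement on the circle (a small integer id space for readability; real Chord ids are far larger):

```python
nodes = sorted([3, 7, 20, 35, 41])        # i3 server ids on the Chord circle
ID_SPACE = 64                              # toy id space (real i3: 2**256)

def successor(ident: int) -> int:
    """The node responsible for ident is the first node clockwise."""
    for n in nodes:
        if n >= ident % ID_SPACE:
            return n
    return nodes[0]                        # wrap around the circle

print(successor(37))   # trigger (37, R) is stored on node 41
```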
Routing Example
• R inserts trigger t = (37, R); S sends packet p = (37, data)
• An end-host needs to know only one i3 node to use i3
– E.g., S knows node 3, R knows node 35
[Figure: a Chord circle with i3 nodes 3, 7, 20, 35, and 41, responsible for id ranges [4..7], [8..20], [21..35], [36..41], and [42..3]; R's trigger(37, R) enters at node 35 and is stored at node 41; S's send(37, data) enters at node 3, is routed to node 41, matches the trigger, and leaves as send(R, data)]
Optimization #1: Path Length
• The sender/receiver caches the i3 node storing a specific ID
• Subsequent packets are sent via this one i3 node
[Figure: after the first packet, the sender caches the node responsible for id 37 and sends (37, data) straight to it; that node forwards (R, data) to the receiver]
Optimization #2: Triangular Routing
• Use well-known trigger for initial rendezvous
• Exchange a pair of well-located (private) triggers
• Use private triggers to send data traffic
[Figure: the sender and receiver rendezvous through the public trigger (37, R), exchange the private trigger ids 2 and 30 stored at well-located i3 nodes, and then send data traffic via the private triggers (2, S) and (30, R)]
Example 1: Heterogeneous Multicast
• Sender not aware of transformations
[Figure: the MPEG sender sends (id, data); receiver R2 (MPEG) has a plain trigger (id, R2) and receives the original stream, while receiver R1 (JPEG) chains its trigger through the MPEG-to-JPEG transcoder S_MPEG/JPEG via (id, (id_MPEG/JPEG, R1)) and (id_MPEG/JPEG, S_MPEG/JPEG)]
Example 2: Scalable Multicast
• i3 doesn’t provide direct support for scalable multicast
– Triggers with same identifier are mapped onto the same i3 node
• Solution: have end-hosts build a hierarchy of triggers of bounded degree
[Figure: a trigger hierarchy for group identifier g: triggers on g reach R1, R2, and a second identifier x; triggers on x reach R3 and R4, so no single i3 node replicates to all receivers]
Example 2: Scalable Multicast
(Discussion)
Unlike IP multicast, i3
1. Implements only small-scale replication → allows the infrastructure to remain simple, robust, and scalable
2. Gives end-hosts control over routing → enables end-hosts to
– Achieve scalability, and
– Optimize tree construction to match their needs, e.g., delay, bandwidth
Example 3: Load Balancing
• Servers insert triggers with IDs that have random
suffixes
• Clients send packets with IDs that have random suffixes
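A sketch of the effect (IDs as bit strings; the matching rule is the longest-prefix match from the anycast slide):

```python
import random

triggers = {                 # server triggers: shared prefix + random suffix
    "1010 0010": "S1",
    "1010 0101": "S2",
    "1010 1010": "S3",
    "1010 1101": "S4",
}

def prefix_len(a, b):
    return next((i for i, (x, y) in enumerate(zip(a, b)) if x != y),
                min(len(a), len(b)))

def send(packet_id):
    """Longest-prefix match spreads clients across the server replicas."""
    best = max(triggers, key=lambda t: prefix_len(t, packet_id))
    return triggers[best]

suffix = "".join(random.choice("01") for _ in range(4))
print(send("1010 " + suffix))   # the random suffix picks one of S1..S4
```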
[Figure: servers S1–S4 insert triggers 1010 0010, 1010 0101, 1010 1010, and 1010 1101 (shared prefix, random suffixes); clients A and B send to IDs with the same prefix and their own random suffixes (1010 0110, 1010 1110), spreading requests across the servers]
Example 4: Proximity
• Suffixes of trigger and packet IDs encode the
server and client locations
[Figure: servers S1, S2, S3 insert triggers 1000 0010, 1000 1010, and 1000 1101, whose suffixes encode location; a client's send(1000 0011, data) matches the nearby server S1]
Outline
• Implementation
• Examples
• Security
• Applications
– Protection against DoS attacks
– Routing as a service
– Service composition platform
Applications: Protecting Against DoS
• Problem scenario: attacker floods the
incoming link of the victim
• Solution: stop attacking traffic before it
arrives at the incoming link
– Today: call the ISP to stop the traffic, and hope for the
best!
• Our approach: give end-hosts control over which packets they receive
– Enable end-hosts to stop the attacks in the network
Why End-Hosts (and not Network)?
• End-hosts can better react to an attack
– Aware of semantics of traffic they receive
– Know what traffic they want to protect
• End-hosts may be in a better position to
detect an attack
– Flash-crowd vs. DoS
Some Useful Defenses
1. White-listing: avoid receiving packets on
arbitrary ports
2. Traffic isolation:
– Contain the traffic of an application under attack
– Protect the traffic of established connections
3. Throttling new connections: control the rate
at which new connections are opened (per
sender)
1. White-listing
• Packets not addressed to open ports are dropped in
the network
– Create a public trigger for each port in the white list
– Allocate a private trigger for each new connection
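A sketch of the lookup discipline (the identifiers and helper names are hypothetical):

```python
public_triggers = {"ID_P": "R"}     # one public trigger per white-listed port
private_triggers = {}               # allocated per connection

def open_connection(sender_id, receiver_id):
    """Connection setup: each side allocates a private trigger; data
    then flows only through these, so unsolicited IDs are dropped."""
    private_triggers[receiver_id] = "R"
    private_triggers[sender_id] = "S"

def forward(ident, data):
    dest = private_triggers.get(ident) or public_triggers.get(ident)
    if dest is None:
        return "drop"               # not white-listed: dropped in the network
    return f"deliver to {dest}"

open_connection("ID_S", "ID_R")
print(forward("ID_R", b"payload"))  # deliver to R
print(forward("ID_X", b"flood"))    # drop
```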
[Figure: connection setup through the public trigger (ID_P, R): the sender announces its private trigger ID_S through the public trigger, the receiver replies with its private trigger ID_R, and data then flows through the private triggers (ID_S, S) and (ID_R, R)]
2. Traffic Isolation
• Drop triggers being flooded without affecting
other triggers
– Protect ongoing connections from new connection
requests
– Protect one service from an attack on another service
[Figure: victim V runs a transaction server and a web server behind separate triggers (ID1, V) and (ID2, V); the legitimate client C uses ID1 while attacker A floods ID2]
2. Traffic Isolation (cont’d)
[Figure: the flooded trigger (ID2, V) is dropped; the transaction server's traffic through (ID1, V) is protected from the attack on the web server]
3. Throttling New Connections
• Redirect new connection requests to a
gatekeeper
– Gatekeeper has more resources than victim
– Can be provided as a 3rd party service
[Figure: new connection requests arrive via the public trigger (ID_P, A) at gatekeeper A rather than at server S; approved requests are forwarded to S through a private trigger (X, S), and client C receives replies via (ID_C, C)]
Service Composition Platform
• Goal: allow third-parties and end-hosts to
easily insert new functionality on data path
– E.g., firewalls, NATs, caching, transcoding, spam filtering, intrusion detection, etc.
• Why i3?
– Make middle-boxes part of the architecture
– Allow end-hosts/third-parties to explicitly route through
middle-boxes
Example
• Use the Bro system to provide intrusion detection for end-hosts that want it
[Figure: client A and server B insert triggers (id_AB, A) and (id_BA, B); packets carry ID stacks (id_M:id_BA, data) and (id_M:id_AB, data) so that trigger (id_M, M) first routes them through the Bro middle-box M, which then forwards them to the final trigger]
Design Principles
1) Give hosts control over routing
– A trigger is like an entry in a routing table!
– Flexibility, customization
– End-hosts can
• Source-route
• Set up acyclic communication graphs
• Route packets through desired service points
• Stop flows in the infrastructure
• …
2) Implement data forwarding in the infrastructure
– Efficiency, scalability
Design Principles (cont’d)
Where the data plane and control plane live:
                                      Data plane       Control plane
Internet & infrastructure overlays    Infrastructure   Infrastructure
p2p & end-host overlays               Host             Host
i3                                    Infrastructure   Host
Conclusions
• Indirection – key technique to implement
basic communication abstractions
– Multicast, Anycast, Mobility, …
• This research
– Advocates building an efficient indirection layer on top of IP
– Explores the implications of changing the communication abstraction, as has already been done in other fields:
• Directly addressable vs. associative memories
• Point-to-point communication vs. tuple spaces (in distributed systems)