ATCN: Delay Tolerant Networks (II)
(24/11/08)
Thrasyvoulos Spyropoulos (Akis)
[email protected]
Routing with Scheduled Contacts
 Graph is disconnected and/or time-varying
 Set of contacts C: known
 Set of nodes V: known
[Figure: nodes A, B, C, D with their Tx ranges; scheduled contacts (B,D) = {10,12},{19,21} and (C,D) = {8,10},{15,17}]
2
Opportunistic Routing with Unknown Contacts
 Graph is disconnected and/or time-varying
 Set of contacts C: unknown!
 Set of nodes V: often unknown too!
[Figure: node A holds messages for D and asks "WHERE IS D?"; the contacts (B,D) = ?? and (C,D) = ?? are unknown]
3
Epidemic Routing
 Give a message copy to every node encountered
- essentially: flooding in a disconnected context
[Figure: source A gives a copy of the message for D to every node it encounters (B, C); these relays in turn copy it to further nodes (E, F)]
4
Epidemic Routing: Message Vectors
 Node A encounters node B
Message Vector of A              Message Vector of B
Dest ID    Seq. Num.             Dest ID    Seq. Num.
D          0                     D          0
G          1                     E          0
F          0                     F          0
                                 F          1

B requests (G,1) from A; A requests (E,0) and (F,1) from B
5
Epidemic Routing: Message Vectors (2)
 After message exchange
Message Vector of A              Message Vector of B
Dest ID    Seq. Num.             Dest ID    Seq. Num.
D          0                     D          0
E          0                     E          0
F          0                     F          0
F          1                     F          1
G          1                     G          1
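The exchange above can be sketched in code. This is a minimal illustration (not from the slides), assuming each message is identified by its (Dest ID, Seq. Num.) pair; the class and function names are hypothetical.

```python
# Hypothetical sketch of epidemic routing's summary-vector exchange.
# Each message is identified by a (dest_id, seq_num) pair, as in the tables above.

class MessageStore:
    def __init__(self, messages):
        # messages: dict mapping (dest_id, seq_num) -> payload
        self.messages = dict(messages)

    def summary_vector(self):
        # The summary vector lists only the message IDs, not the payloads.
        return set(self.messages)

    def request_missing(self, remote_summary):
        # IDs the remote node has that we do not.
        return remote_summary - self.summary_vector()

    def receive(self, new_messages):
        self.messages.update(new_messages)


def epidemic_exchange(a, b):
    """When A encounters B, each side pulls the messages it is missing."""
    a_missing = a.request_missing(b.summary_vector())   # e.g. (E,0) and (F,1)
    b_missing = b.request_missing(a.summary_vector())   # e.g. (G,1)
    a.receive({mid: b.messages[mid] for mid in a_missing})
    b.receive({mid: a.messages[mid] for mid in b_missing})


# Example matching the tables above
A = MessageStore({("D", 0): "...", ("G", 1): "...", ("F", 0): "..."})
B = MessageStore({("D", 0): "...", ("E", 0): "...", ("F", 0): "...", ("F", 1): "..."})
epidemic_exchange(A, B)
assert A.summary_vector() == B.summary_vector()
```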
6
Epidemic Routing Performance
 How many transmissions?
- At least N
- All nodes receive the message
 What is the delay?
- Minimum among all possible routing schemes
- If NO resource constraints (bandwidth, buffer space)
7
Epidemic Routing Performance
 Redundant copies reduce delay
 But: too much redundancy is wasteful and often disastrous (due to contention)
[Figure: "Transmissions for Epidemic Routing" (total transmissions) and "Delay for Epidemic Routing" (delivery delay, time units) vs. increasing traffic, epidemic vs. optimal — too many transmissions; plagued by contention]
8
Reducing the Overhead of Epidemic
Randomized Flooding (“Gossiping”)
 Give the message to a neighbor with probability p ≤ 1 (see the sketch below)
 p = 1 => epidemic
 p = 0 => direct transmission
+ fewer transmissions
- given enough time, transmissions = O(N)
Other flooding-based variants:
 each node forwards up to Kmax times
 self-limiting epidemic (SLEF)
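A minimal sketch (not from the slides) of the gossiping forwarding decision, with the Kmax-limited variant folded in; the parameter names are illustrative.

```python
import random

# Hypothetical gossiping relay: forward to an encountered neighbor with
# probability p, and (optional variant) forward each message at most k_max times.

def should_forward(p: float, forwards_so_far: int, k_max=None) -> bool:
    """Return True if the message should be copied to the encountered neighbor."""
    if k_max is not None and forwards_so_far >= k_max:
        return False              # Kmax-limited variant: forwarding budget exhausted
    return random.random() < p    # p = 1 behaves like epidemic, p = 0 like direct transmission

# Example: plain gossiping with p = 0.5
print(should_forward(0.5, forwards_so_far=0))
```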
9
Direct Transmission
 Source carries message until it meets the destination
 Transmissions?
- Just one!
 Delay?
- Largest possible
- Finite?
 Throughput?
- See homework exercise
10
2-hop Scheme: Version 1
 When the message is created at the source
- Forward to the destination if it is within range
- Forward to a neighbor relay if the destination is not in range
 Relay: forward only to the destination
 Transmissions per message
- At most 2
 Delay?
- Finite if each node meets every other node… eventually
 Throughput?
11
2-hop Scheme: Version 2
 Source gives a copy to any relay encountered
 Relays can only give copy to destination
[Figure: source Src hands a copy of the message for Dst to every relay it encounters (B, C, E, F); relay C cannot forward to B, but relay C can forward to Dst]
E[Transmissions per msg] = (N-1)/2
12
Controlled Replication (“Spraying”)
 2-hop scheme uses (N-1)/2 copies (on average)
- Still a lot! Only half of epidemic
 Limit the number of copies to L (small, fixed)
- Only L transmissions!
 L = 2 => achieves O(1) per-node capacity, circumventing Gupta and Kumar's result (capacity → 0) (Grossglauser et al. '01)
 L > 2 and L = O(1) (constant L):
- Retain the capacity gain
- Transmissions << N
- Multi-path diversity to reduce delay (Spray & Wait)
13
Spray and Wait (Binary Spraying)
 Use forwarding tokens; SRC starts with L tokens
 When L = 1, can only forward to DST
[Figure: Src starts with L = 4 tokens; after meeting B it keeps L = 2 and gives B a copy with L = 2; further handovers leave nodes with L = 1 copies, which may only be forwarded directly to Dst]
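A minimal sketch (not from the slides) of the binary-spraying rule: a node holding L forwarding tokens hands half of them to a newly met relay, and a node left with L = 1 may only forward to the destination; the names are illustrative.

```python
# Hypothetical binary Spray and Wait logic for a single message copy.

class Copy:
    def __init__(self, tokens: int):
        self.tokens = tokens   # L: how many further copies this node may still "spray"

def on_encounter(copy: Copy, neighbor_is_destination: bool):
    """Decide what to hand over when meeting another node.
    Returns the Copy to transfer to the neighbor, or None."""
    if neighbor_is_destination:
        return Copy(1)                     # deliver (allowed in both spray and wait phases)
    if copy.tokens > 1:
        handed = copy.tokens // 2          # binary spraying: give away half the tokens
        copy.tokens -= handed              # keep the other half
        return Copy(handed)
    return None                            # L = 1: wait phase, only the destination gets it

# Example: source starts with L = 4 tokens and meets relay B
src = Copy(4)
to_b = on_encounter(src, neighbor_is_destination=False)
print(src.tokens, to_b.tokens)   # 2 2
```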
14
Benefits of Replication
[Figure: "Delay of Spray and Wait" — delivery delay (time units) vs. L (# of copies) for source spray and wait (simulation and analysis), binary spray and wait, and the optimal scheme; scenario: 100x100 grid, 100 nodes, random walk mobility]
1. 10x fewer transmissions => only 2x increase in delay
2. Law of diminishing returns
15
Spray and Wait: A “good” scenario
[Figure: source S sprays copies to relays; the areas covered by Relay 1 and Relay 2 are largely disjoint and together sweep the region containing destination D]
 Relays are highly mobile
 Relays' routes are uncorrelated
16
Spray & Wait: How Many Copies?
 L copies (e.g. L = 10) might be
- Too few if the number of nodes M is large (e.g. 10000) => almost 2-hop v.1
- Too many if the number of nodes M is small (e.g. 20) => almost epidemic
 Analytical expression for S&W delay as a function of L/M
- Choose the desired L (trade off delay vs. transmissions)
 What if the number of nodes is unknown?
- Estimate the number of nodes online
- Sample inter-meeting times
- Estimator M̂ based on the sampled inter-meeting times
17
Opportunistic Routing = “Random” Routing
 So far all schemes are random: no assumptions about relays
- all relays equally fast, equally capable, similar mobility
- epidemic, random flooding, 2-hop, spray & wait, etc.
 Is real life that random and homogeneous???
- Nodes have different capabilities (PDA, sensor, laptop, BS)
- Nodes move differently (vehicles vs. pedestrians, 1st-year student vs. PhD)
- Nodes have social relations
  - Same affiliation => same building, floor
  - Friends => meet more often than others
 Learn and exploit the patterns => better routing
18
Predicting Future Encounters
1. Based on past encounter statistics
- Mobility pattern non-random, but unknown
- Essentially non-parametric learning/prediction
- Per-contact vs. end-to-end statistics
2. Model-based prediction
- Assume an abstract mobility model
- Use past encounters to "fit" the model parameters and predict future encounters
19
Past Encounters: Age of Last Encounter
 tX(Y): time since X last saw Y
[Figure: node A (tA(D) = 138) meets node B (tB(D) = 100); other nodes carry timers t(D) = 0 (D itself), 26, 68, 218 depending on how recently they met D]
 Indirect location information
- tB(D) = 100 < tA(D) = 138
 Last-encounter timers are diffused with node mobility
- smaller timer => closer distance
- for most mobility models
20
Prophet Routing
 Like Epidemic routing, but maintains a probability of delivery for
each node pair p(i,D)
 Node i copies message to j only if p(j,D) > p(i,D)
 Algorithm:
Per-hop delivery probability updates at node i:
- Contact with destination D: p(i,D) = 1
- No contact with D for time t (aging): p(i,D) <- p(i,D) * γ^t, with γ < 1
- Contact with node j (transitivity): p(i,D) <- f(p(i,D), p(j,D))
The path probability towards D is built from these per-hop probabilities.
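A minimal sketch (not from the slides) of these update rules; the transitivity function f and the constants are placeholders, not the exact Prophet parameters.

```python
# Hypothetical delivery-predictability table for one node i, following the
# three update rules above. GAMMA and the transitivity rule f() are illustrative.

GAMMA = 0.98   # aging factor per time unit, gamma < 1

class DeliveryPredictability:
    def __init__(self):
        self.p = {}   # p[d] ~ probability that this node delivers to destination d

    def age(self, elapsed: float):
        """No contact: p(i,D) <- p(i,D) * gamma^t for every destination D."""
        for d in self.p:
            self.p[d] *= GAMMA ** elapsed

    def met_destination(self, d):
        """Contact with destination D: p(i,D) = 1."""
        self.p[d] = 1.0

    def met_node(self, other: "DeliveryPredictability"):
        """Contact with node j: p(i,D) <- f(p(i,D), p(j,D)).
        Here f is a simple placeholder: keep the larger of the two estimates,
        discounted by one hop."""
        for d, p_jd in other.p.items():
            self.p[d] = max(self.p.get(d, 0.0), 0.5 * p_jd)

    def should_forward_to(self, other: "DeliveryPredictability", d) -> bool:
        """Copy the message to j only if p(j,D) > p(i,D)."""
        return other.p.get(d, 0.0) > self.p.get(d, 0.0)
```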
21
Past Encounters: Encounter Frequency
 The last encounter is not necessarily representative
 Consider:
- Node A meets D every 10 min, last saw D 5 min ago
- Node B meets D every 1 h, last saw D 1 min ago
 Use frequency: p(i,D) = # encounters(i,D) / Timewindow
 Consider:
- Node A meets D every 10 min, for 1 sec each time
- Node B meets D every 20 min, for 2 min each time
 Use total contact duration: p(i,D) = Timeconnected / Timewindow
 Consider:
- Node A meets D every 10 min
- Node B meets D in bursts: average = 10 min, average during a burst = 1 min, last meeting 30 sec ago
Prediction becomes complex!
22
Past Encounters: Encounter Frequency (cont’d)
 Predicting next-hop delivery probability
- Look at the next hop's p(j,D)
- Forward if p(j,D) is high enough
 Predicting path probability
- Try to estimate the probability of the whole path
- Look ahead one hop: p(j,D) = Σ_k p(j,k) * p(k,D) (see the sketch below)
- Can look ahead multiple hops
- How do we estimate p(k,D)? Maintain and distribute a database of p(i,j)
- Remember the MEED algorithm? (Lecture 1)
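A minimal sketch (not from the slides) of the one-hop look-ahead above, treating the pairwise probabilities p(i,j) as a matrix; the database of probabilities is assumed to be given.

```python
import numpy as np

# Hypothetical look-ahead: P[i, j] ~ p(i, j), the probability that i delivers
# directly to j. Estimate the probability that node j reaches destination d
# through one intermediate hop k.

def lookahead_probability(P: np.ndarray, j: int, d: int) -> float:
    """p(j,d) with one hop of look-ahead: sum_k p(j,k) * p(k,d)."""
    return float(P[j, :] @ P[:, d])

def lookahead_matrix(P: np.ndarray, hops: int = 1) -> np.ndarray:
    """Look ahead multiple hops via repeated matrix products (P, P^2, ...)."""
    return np.linalg.matrix_power(P, hops + 1)

# Example with 4 nodes; row i holds node i's (assumed known) contact probabilities
P = np.array([[0.0, 0.5, 0.2, 0.1],
              [0.5, 0.0, 0.4, 0.2],
              [0.2, 0.4, 0.0, 0.6],
              [0.1, 0.2, 0.6, 0.0]])
print(lookahead_probability(P, j=1, d=3))   # sums over all intermediate nodes k
```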
23
Predicting Future Encounters
1. Based on past encounter statistics
- Mobility pattern non-random, but unknown
- Essentially non-parametric learning/prediction
- Per-contact vs. end-to-end statistics
2. Model-based prediction
- Assume an abstract mobility model
- Use past encounters to "fit" the model parameters and predict future encounters
- Parametric learning/prediction
24
Model-based Prediction
[Figure: locations (Office, Cafeteria, Class, Library, Restaurant 1, Restaurant 2, Rest of Area) annotated with the fraction of time a node spends at each (50%, 20%, 15%, 5%, 5%, 3%, 2%)]
 Mobility Profile: Each node visits specific locations more often
25
Mobility Profile based Prediction (1)
 Separate network into K locations
 Represent each user as a K-dimensional vector Mn
- Binary (e.g. 1 = visits location i often): [1 0 1 … 0 1 0 1]
- Scalar (e.g. visits location i with probability pi): [p1 p2 … pK]
 "Distance" metric between nodes n and m: |Mn – Mm|
- E.g. Euclidean distance: |Mn – Mm| = sqrt( Σ_{i=1…K} (Mn(i) – Mm(i))² )
 Encounter probability p(i,j) = f(|Mn – Mm|)
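A minimal sketch (not from the slides) of comparing two mobility-profile vectors; the mapping f from distance to encounter probability is a placeholder choice.

```python
import math

# Hypothetical mobility profiles: M_n[i] = fraction of time node n spends at location i.
M_n = [0.50, 0.20, 0.15, 0.05, 0.05, 0.03, 0.02]   # e.g. Office, Class, Cafeteria, ...
M_m = [0.10, 0.40, 0.05, 0.25, 0.10, 0.05, 0.05]

def profile_distance(a, b):
    """Euclidean distance |Mn - Mm| = sqrt( sum_i (Mn(i) - Mm(i))^2 )."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def encounter_probability(a, b):
    """Placeholder f(): similar profiles => higher encounter probability."""
    return math.exp(-profile_distance(a, b))   # illustrative choice of f, not from the slides

print(profile_distance(M_n, M_m), encounter_probability(M_n, M_m))
```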
26
Mobility Profile based Prediction (2)
STEP 1: Decide on a model
- e.g. how many locations K
- K too large: little overlap (overfitting)
- K too small: prediction too crude
STEP 2: "Learn" the model parameters
- Node n must estimate its vector Mn
- Mn(i) = Time at location i / Timewindow
- e.g. track the associated AP, GPS, etc.
STEP 3: Use the estimated model to predict future encounters
- p(i,j) = f(|Mn – Mm|): which function f()?
- Per-contact vs. path probability
27
Mobility Profile-based Prediction (3)
[Figure: Markov chain over locations (Office, Cafeteria, Class, Library, Restaurant 1, Restaurant 2, Rest of Area) with transition rates λij between them, e.g. λ12, λ21, λ13, λ24, λ42, λ34, λ45, λ56]
 Markov Chain model: Next location depends on current location
 Better prediction than before; BUT more parameters to estimate
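A minimal sketch (not from the slides) of this Markov-chain idea, with transition probabilities estimated from an observed sequence of location visits; all values are illustrative.

```python
from collections import Counter, defaultdict

# Hypothetical location trace of one node (could come from AP associations or GPS).
trace = ["Office", "Cafeteria", "Office", "Class", "Office", "Cafeteria", "Office"]

# STEP 2: "learn" the transition matrix P[current][next] from observed transitions.
counts = defaultdict(Counter)
for cur, nxt in zip(trace, trace[1:]):
    counts[cur][nxt] += 1
P = {cur: {nxt: c / sum(cnt.values()) for nxt, c in cnt.items()}
     for cur, cnt in counts.items()}

# STEP 3: predict the next location given the current one.
def predict_next(current: str) -> str:
    return max(P[current], key=P[current].get)

print(P["Office"])            # e.g. {'Cafeteria': 0.67, 'Class': 0.33}
print(predict_next("Office"))
```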
28
Social-profile based Prediction
 Nodes carried (driven?) by humans
 Humans have social relations that govern their mobility patterns
- Tend to see your "friends" more often
- More frequent encounters with colleagues
 Model social relations between nodes
- Identify "links" between specific nodes
- Identify node "communities"
- Explicit vs. implicit friends
 Forward to a node with strong links to the destination
- E.g. forward to a node of the same affiliation
 More sophisticated algorithms: Last lecture on Social Networks
29
A Complete DTN Routing Scheme
 Prediction-based schemes are more efficient than random
MAXIM 1: Use a prediction-based algorithm to forward a copy or to decide whether to create one more
 Using a single copy is not enough
- Can get stuck in local maxima
- Delay/delivery variance too high (depending on the model)
MAXIM 2: Use multiple but few copies (e.g. spray) and do prediction-based routing for each copy in parallel
30
Hybrid DTN-MANET Scenarios
 What if the network is not very sparse?
- Big clusters of connected nodes, but no connectivity between clusters
[Figure: source S and destination D sit in different connected clusters; use regular routing inside a cluster (e.g. DSR, OLSR) and DTN routing outside/between clusters]
31
Hybrid DTN-MANET Scenarios (cont’d)
 How to route between partitions (the DTN part)?
 Depending on how stable the clusters are, there are a number of options
Option 1: If the destination is outside the cluster, revert to DTN routing until delivery
Option 2: Construct/estimate the clusters, find the shortest cluster path, and at each cluster find the best node to carry the message to the next-hop cluster
Option 3: Routing-table approach (obsolete)
32
Epidemic Routing: Alternative Ways to Reduce Overhead
Anti-packets
 Epidemic routing hands over a copy to every node encountered…
- Even after the message has been delivered!
 After one relay finds the destination
- Remove the message from the buffer
- Inform other nodes to stop forwarding
 Can do this with different levels of aggressiveness
- Immune
- Vaccine
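A minimal sketch (not from the slides) of how an anti-packet (a list of already-delivered message IDs) can suppress further copies; the names are illustrative.

```python
# Hypothetical anti-packet handling: nodes exchange the set of delivered
# message IDs and drop / stop forwarding anything in it.

class Node:
    def __init__(self):
        self.buffer = {}        # message_id -> payload
        self.delivered = set()  # "anti-packet": IDs known to be delivered

    def on_delivery(self, message_id):
        """A relay just handed the message to its destination."""
        self.delivered.add(message_id)
        self.buffer.pop(message_id, None)   # remove the local copy

    def merge_anti_packet(self, remote_delivered):
        """On contact, learn about deliveries the other node knows of."""
        self.delivered |= remote_delivered
        for mid in list(self.buffer):
            if mid in self.delivered:
                del self.buffer[mid]        # recovered: stop storing/forwarding it

    def may_forward(self, message_id) -> bool:
        return message_id not in self.delivered
```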
33
An Example: IMMUNE-TX
 Propagate the anti-packet to already-infected nodes
[Figure: the destination (dst) has received msg (S,D,0) and is recovered; a recovered node deletes its local copy, no new copy is handed to recovered nodes, and a transmission is avoided]
Tradeoff: aggressiveness vs. anti-packet overhead
34
Reducing the overhead of epidemic: Network Coding
 So far we were not changing packets’ content
- Replication
- Forwarding
- Drops
 Coding may combine one or more packets
[Figure: store-and-forward — packets x1, x2, x3 arrive on the incoming links and leave unchanged on the outgoing links]
35
Reducing the overhead of epidemic: Network Coding (2)
 So far we were not changing packets' content
- Replication
- Forwarding
- Drops
 Coding may combine one or more packets
[Figure: network coding — packets x1, x2, x3 arrive on the incoming links; the outgoing link carries a combination f(x1,x2,x3)]
36
Coding Packets: A simple example
 XOR: The simplest combination:
f(x1, x2) = x1 ⊕ x2 (bitwise XOR)

msg x1:      1 0 1 1
msg x2:      1 1 0 1
f(x1,x2):    0 1 1 0
37
De-coding Packets: A simple example
 Assume the node that sent x1 receives the coded packet f(x1,x2); it recovers x2 = x1 ⊕ f(x1,x2)

msg x1:      1 0 1 1
f(x1,x2):    0 1 1 0
msg x2:      1 1 0 1
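A minimal sketch (not from the slides) of the XOR coding and decoding above, operating on byte strings; the bit values are the same 4-bit example padded into a byte.

```python
# XOR-combine two equal-length packets and recover one from the other.

def xor_packets(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

x1 = b"\x0b"                          # 0000 1011
x2 = b"\x0d"                          # 0000 1101
coded = xor_packets(x1, x2)           # f(x1, x2) = x1 XOR x2 = 0000 0110
assert xor_packets(x1, coded) == x2   # the node holding x1 decodes x2
assert xor_packets(x2, coded) == x1   # the node holding x2 decodes x1
```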
38
Network Coding for Wireless
 Broadcast nature of medium: natural ground for network coding
[Figure: A and B exchange packets x1 and x2 through relay C (they are out of each other's range). Without coding, C must forward x1 to B and x2 to A separately: 4 transmissions in total. No coding: delay = 4]
39
Network Coding for Wireless
 Broadcast nature of medium: natural ground for network coding
[Figure: A sends x1 to C, B sends x2 to C, and C broadcasts x1 ⊕ x2; A recovers x2 and B recovers x1 from the single broadcast: 3 transmissions in total. Coding: delay = 3]
40
Linear Network Coding
 m packets
 n linear combinations
b1 = a11x1+ a12x2+…+ a1mxm
b2 = a21x1+ a22x2+…+ a2mxm
……………………………….
bn = an1x1+ an2x2+…+ anmxm
 To decode, need ≥ m independent linear combinations
- Centralized choice of coefficients => can always decode
 Distributed: coefficients aij chosen randomly and independently
- => decode (prob. → 1)
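A minimal sketch (not from the slides) of random linear network coding over GF(2): m packets are mixed into n random combinations, and a receiver decodes once it has m independent ones; sizes and helper names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(packets: np.ndarray, n: int):
    """Form n random linear combinations (over GF(2)) of the m packets.
    packets: (m, packet_len) array of bits. Returns (coeffs, coded)."""
    m = packets.shape[0]
    coeffs = rng.integers(0, 2, size=(n, m))      # b_i = sum_j a_ij * x_j (mod 2)
    coded = coeffs @ packets % 2
    return coeffs, coded

def decode(coeffs: np.ndarray, coded: np.ndarray, m: int):
    """Gaussian elimination over GF(2); returns the m packets or None if rank < m."""
    A = np.concatenate([coeffs, coded], axis=1) % 2
    row = 0
    for col in range(m):
        pivot = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            return None                            # not enough independent combinations
        A[[row, pivot]] = A[[pivot, row]]          # move the pivot row up
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]                     # eliminate this column elsewhere (XOR)
        row += 1
    return A[:m, m:]

# Example: m = 3 packets of 8 bits, n = 5 random combinations received
packets = rng.integers(0, 2, size=(3, 8))
coeffs, coded = encode(packets, n=5)
recovered = decode(coeffs, coded, m=3)
if recovered is not None:
    assert np.array_equal(recovered, packets)      # all m packets decoded
else:
    print("combinations were not independent; collect more")
```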
41
End-to-end vs. hop-by-hop decoding
1) Decoding of messages at end nodes
- This is what we were looking at so far
- Issues with generation management
- Potentially long/unbounded delays
2) Opportunistic network (de-)coding
- Keep track of the neighbors' messages
- Code only if the next hop can decode
[Figure: a node holding x1, x2, x3 sends a combination f(x1,x2,x3) only to neighbors that already hold enough of x1, x2, x3 to decode it]
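A minimal sketch (not from the slides) of the "code only if the next hop can decode" rule: before XOR-combining packets for a neighbor, check that the neighbor already holds all but one of them; the names are illustrative.

```python
# Hypothetical opportunistic coding check: XOR several packets together only if
# the intended next hop can decode the result, i.e. it already knows all but
# the one packet it is missing.

def can_decode(coded_ids: set, neighbor_known: set) -> bool:
    """True if the neighbor is missing at most one of the packets in the XOR."""
    return len(coded_ids - neighbor_known) <= 1

neighbor_known = {"x1", "x3"}                            # learned from the neighbor's message vector
print(can_decode({"x1", "x2", "x3"}, neighbor_known))    # True: only x2 is unknown
print(can_decode({"x2", "x4"}, neighbor_known))          # False: both x2 and x4 unknown
```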
42