Gossip Scheduling for Periodic Streams in Ad-hoc WSNs
Ercan Ucan, Nathanael Thompson, Indranil Gupta
Department of Computer Science
University of Illinois at Urbana-Champaign
Distributed Protocols Research Group: http://dprg.cs.uiuc.edu
Gossip in Ad-hoc WSNs
Useful for broadcast applications:
- Broadcast of queries [TinyDB]
- Routing information spread [HHL06]
- Failure detection, topology discovery, and membership [LAHS05]
- Code propagation / sensor reprogramming [Trickle]
Canonical Gossip
- Gossip (or epidemic) = probabilistic forwarding of broadcasts [BHOXBY99, DHGIL87]
- In sensor networks: forward a broadcast to neighbors with probability p [HHL06]
- Compared to flooding, gossip:
  - Reliability is high (but probabilistic) if p > 0.7
  - Saves energy
  - Has comparable latency
Canonical Gossip [HHL06]
GOSSIP (per stream)
  p = gossip probability
  loop:
    for each new message m do
      r = random number between 0.0 and 1.0
      if (r < p) then
        broadcast message m
    sleep gossip period
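As an illustration, here is a minimal Python sketch of this per-stream loop; the new_messages and broadcast helpers, and the default values for p and the gossip period, are assumptions standing in for the real TinyOS radio interfaces.

import random
import time

def canonical_gossip(new_messages, broadcast, p=0.7, gossip_period=1.0):
    # new_messages(): messages received since the last round (assumed helper)
    # broadcast(m): transmit m to all one-hop neighbors (assumed helper)
    while True:
        for m in new_messages():
            if random.random() < p:      # forward each message with probability p
                broadcast(m)
        time.sleep(gossip_period)        # idle until the next gossip round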
Target Setting
Our target setting:
- Multiple broadcast streams, each initiated by a separate publisher node
- Each stream has a fixed stream period: the source initiates updates/broadcasts periodically
- Different streams can have different periods
Canonical gossip doesn't work!
- It treats each stream individually
  - Overhead grows as the sum of the per-stream overheads
First-cut idea: for periodic streams, combine gossips
Piggybacking
- Combine multiple streams into one
- Piggybacking: create a gossip message containing the latest updates from multiple streams
- Basically, each node gossips a "combined" stream
  - Generates fewer messages and allows longer idle/sleep periods
Piggyback Gossip
PIGGYBACK GOSSIP
  p = gossip probability
  loop:
    for each constituent stream s
      for each new message m in s do
        r = random number between 0.0 and 1.0
        if (r < p) then
          add m to piggyback gossip message b
    broadcast message b
    sleep gossip period
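A minimal Python sketch of the piggyback loop, under the same assumptions as the canonical-gossip sketch above (new_messages and broadcast are hypothetical helpers):

import random
import time

def piggyback_gossip(streams, broadcast, p=0.7, gossip_period=1.0):
    # streams: objects exposing new_messages() for each constituent stream (assumed)
    # broadcast(b): transmit the combined piggyback message b to all neighbors (assumed)
    while True:
        b = []                              # piggyback gossip message
        for s in streams:
            for m in s.new_messages():
                if random.random() < p:
                    b.append(m)             # add m to the piggyback message
        broadcast(b)                        # one packet carries all selected updates
        time.sleep(gossip_period)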
Gossip Scheduling
- Basic piggybacking does not work if:
  - Streams have different periods
  - The network packet payload size is finite
- Solution: create a gossip schedule
  - Determines which streams are piggybacked/packed into which gossip messages
  - Runs asynchronously at each node
Static Scheduling Problem
- Solved centrally, then followed by all nodes
- Given a set of periodic streams, satisfy two requirements:
  I. A new piggyback message must not exceed the maximum network payload size
  II. Maintain the reliability, scalability, and latency of canonical gossiping on the individual streams
- Approach:
  - Create groups of streams (the k constraint)
  - For each stream group, send a gossip with period = min(periods of the streams in that group)
  - The gossip contains the latest updates from each stream in the group
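A one-function sketch of the per-group period rule above; periods and group are hypothetical names for the stream-period map and a single stream group.

def group_gossip_period(group, periods):
    # The group's gossip goes out at the rate of its fastest (smallest-period) stream.
    return min(periods[s] for s in group)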
Stream Groups: k constraint
- Each stream group contains <= k streams
- k is determined by:
  1. The limit on network packet payload size (for TinyOS, 28 B)
  2. The update message size per stream (assumed the same for all streams)
- E.g., for a 28 B payload and 5 B update messages, k = 5
- The k constraint specifies the maximum number of streams in one piggyback gossip message
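A minimal sketch of how k falls out of the two limits above (the 28 B and 5 B figures come from the slide's example):

def k_constraint(payload_bytes=28, update_bytes=5):
    # Maximum number of per-stream updates that fit into one packet payload.
    return payload_bytes // update_bytes

# k_constraint() == 5, matching the 28 B payload / 5 B update example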
Relatedness Metric
- For two streams with similar stream periods, combining them maximizes the utilization of the piggyback message
  [Figure: gossip message containing Pub3 and Pub4 sent every 6 sec (k = 2)]
- Relatedness metric for each pair of streams i, j with periods t_i and t_j:
  R(i,j) = min(t_i, t_j) / max(t_i, t_j)
  (note that 0 < R(i,j) <= 1.0)
- Two streams are related if they have a high R value
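The metric translates directly into code; a minimal sketch:

def relatedness(t_i, t_j):
    # R(i,j) = min(t_i, t_j) / max(t_i, t_j); always in (0, 1].
    return min(t_i, t_j) / max(t_i, t_j)

# relatedness(6, 7)  -> ~0.86 (similar periods, good candidates to combine)
# relatedness(2, 20) -> 0.1   (dissimilar periods, poor candidates)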
Scheduling using Relatedness
- In the gossip schedule, highly related streams should be combined
- Yet the k constraint must still be satisfied
- Express the relatedness between streams in a semblance graph
Semblance Graph
[Figure: example stream workload and its semblance graph]
- Each gossip stream is a vertex in a complete graph
- Edge weights represent the relatedness R(i,j) between streams
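A minimal sketch of building the semblance graph as a weighted edge dictionary; the periods map and publisher names are hypothetical.

from itertools import combinations

def semblance_graph(periods):
    # Complete graph: one edge per pair of streams, weighted by relatedness R(i,j).
    return {(i, j): min(periods[i], periods[j]) / max(periods[i], periods[j])
            for i, j in combinations(sorted(periods), 2)}

periods = {"Pub1": 2, "Pub2": 3, "Pub3": 6, "Pub4": 7}   # hypothetical workload
edges = semblance_graph(periods)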
Semblance Graph Sub-problem
Formally: partition the semblance graph into groups
- Each group has size no larger than k
- Minimize the sum of inter-group edge weights
  - i.e., maximize the sum of intra-group edge weights (a helper for this objective is sketched below)
- Greedy construction heuristics, based on the classical minimum-spanning-tree algorithms:
  I. Prim-like
  II. Kruskal-like
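A small helper, under the same assumptions as the semblance-graph sketch above, for the quantity the heuristics try to maximize:

def intra_group_weight(groups, edges):
    # groups: list of sets of stream ids; edges: {(i, j): R(i,j)} as built above.
    total = 0.0
    for (i, j), w in edges.items():
        # An edge contributes only if both endpoints land in the same group.
        if any(i in g and j in g for g in groups):
            total += w
    return total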
I. Prim-like Algorithm
- Scheduled set of groups S = Ø
- Initialize S with a single group consisting of one randomly selected vertex
- Iteratively (a sketch follows below):
  - Among all edges from S to V-S, select the maximum-weight edge e
  - Suppose e goes from a vertex in group g (in S) to some vertex v (in V-S)
  - Bring v into S:
    - If |g| < k, then add v to group g
    - Otherwise, create a new group g' in S containing the single vertex v
- Time complexity = O(V^2 log V)
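A minimal Python sketch of the Prim-like heuristic, assuming the vertex set V and the edges dictionary from the semblance-graph sketch above:

import random

def prim_like(V, edges, k):
    def w(a, b):
        return edges.get((a, b), edges.get((b, a), 0.0))

    remaining = set(V)
    start = random.choice(sorted(remaining))   # one randomly selected vertex
    remaining.discard(start)
    groups = [{start}]                         # S starts as a single group
    group_of = {start: 0}
    while remaining:
        # Maximum-weight edge crossing from S to V - S.
        u, v = max(((a, b) for a in group_of for b in remaining),
                   key=lambda e: w(*e))
        g = group_of[u]
        if len(groups[g]) < k:                 # room left: v joins u's group
            groups[g].add(v)
            group_of[v] = g
        else:                                  # group full: open a new group for v
            groups.append({v})
            group_of[v] = len(groups) - 1
        remaining.discard(v)
    return groups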
II. Kruskal-like Algorithm
- Each vertex is initially in its own group (size = 1)
- Sort edges in decreasing order of weight
- Iteratively consider edges in that order (a sketch follows below):
  - Try to add the edge
    - May combine two existing groups into one group
    - May be an edge within an existing group
  - If adding the edge would cause a group to exceed size k, drop the edge
- Time complexity: O(E log E + E) = O(V^2 log V)
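A matching sketch of the Kruskal-like heuristic, under the same assumptions (V is the vertex set, edges the weighted edge dictionary):

def kruskal_like(V, edges, k):
    groups = {v: {v} for v in V}               # each vertex starts in its own group
    for (i, j), _w in sorted(edges.items(), key=lambda e: e[1], reverse=True):
        gi, gj = groups[i], groups[j]
        if gi is gj:
            continue                           # edge already lies within one group
        if len(gi) + len(gj) <= k:             # merge only if the k constraint still holds
            merged = gi | gj
            for v in merged:
                groups[v] = merged
        # otherwise the edge is dropped
    # Collapse the per-vertex references into a list of distinct groups.
    return list({id(g): g for g in groups.values()}.values())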
Comparison of Heuristics
- Simulated both algorithms on 5000 semblance graphs
  - Stream periods selected from an interval within [0,1] of size (1 - homogeneity)
- Kruskal-like is better on the majority of inputs
  - This holds across the number of streams, the homogeneity, and k
Network Simulation
- Canonical Gossip vs. Piggybacked Gossip
- TinyOS simulator

  Parameter           Value
  Network size        225 nodes
  Publishers          24
  k constraint        7
  Gossip probability  70%
Evaluation
- The total number of messages sent will decrease
- What are the effects on:
  - Energy consumption?
  - Reliability?
  - Latency?
Energy Savings
("Flood" = Canonical Gossip; "PgFlood" = Piggybacked Gossip-Scheduled Gossip)
- Power consumption based on the mote datasheet
- Gossip scheduling reduces energy consumption by 40%
Reliability
("Flood" = Canonical Gossip; "PgFlood" = Piggybacked Gossip-Scheduled Gossip)
- Reliability is reasonable up to 10% failures, and then degrades gracefully
- Slightly worse than canonical gossip, due to update buffering at nodes
Latency
("Flood" = Canonical Gossip; "PgFlood" = Piggybacked Gossip-Scheduled Gossip)
- Gossip scheduling delays delivery at some nodes
- But it has lower latency in most cases, and a lower median and average latency
- Gossip scheduling pushes some updates out quickly
Conclusion and Open Directions
- Canonical gossip is inefficient when multiple publishers send out periodic broadcast streams
- Use gossip scheduling to efficiently piggyback different streams at nodes:
  - Satisfies network packet size constraints
  - Retains reliability (compared to canonical gossip)
  - Improves latency
  - Lowers energy consumption
- Open directions:
  - Dynamic version: adding/deleting streams, varying periods
  - Distributed scheduling
Distributed Protocols Research Group: http://dprg.cs.uiuc.edu