Overlay Network
[Figure: a physical layer of routers (R) and end nodes (N), with an overlay layer of logical links between the nodes built on top of the physical topology]
Overlay Network
• Problem of IP Multicast (Physical Layer)
  • Key distribution via IP multicast requires routers to provide a specific multicast function.
  • It is costly to replace or upgrade every router so that it supports multicast.
Overlay Network
• An overlay network is a computer network which is built on top of another network. Nodes in the overlay can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. (Wikipedia)
• Advantage: multicast group members self-organize into efficient structures for delivering data without requiring any support from the existing network infrastructure.
Confidential data delivery
• Previous work
  • Security mechanisms can be provided efficiently by using symmetric-key cryptographic algorithms, which in turn require all participants to share a secret key (the group key), distributed using IP multicast (a minimal sketch follows below).
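As a rough illustration of this idea (not taken from the paper), the sketch below encrypts a payload once with a shared symmetric group key, so the same ciphertext can be delivered to every member. The use of Python's cryptography package (Fernet) and the function names are illustrative assumptions.

```python
# Illustrative sketch only: a shared symmetric group key for confidential delivery.
# Assumes the third-party `cryptography` package; the slides do not prescribe a cipher.
from cryptography.fernet import Fernet

# The key server generates one group key and shares it with all current members
# (how to share and refresh it securely is exactly the key-distribution problem here).
group_key = Fernet.generate_key()

def sender_encrypt(payload: bytes) -> bytes:
    """Encrypt once with the group key; the same ciphertext is multicast to all members."""
    return Fernet(group_key).encrypt(payload)

def member_decrypt(ciphertext: bytes) -> bytes:
    """Any member holding the current group key can decrypt."""
    return Fernet(group_key).decrypt(ciphertext)

ciphertext = sender_encrypt(b"confidential multicast data")
assert member_decrypt(ciphertext) == b"confidential multicast data"
```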
Problems with previous work
1. To use IP multicast, existing routers may have to be replaced with new routers that support IP multicast → incurs router-replacement cost.
2. While a few recent works have considered key dissemination over overlays, they rely on analysis or simulations with synthetic workloads and do not consider issues such as resilient key delivery.
3. There is no real implementation or Internet experiment in an overlay context.
Solution & Contribution
• Conduct a systematic performance evaluation of strategies for key dissemination in the context of an overlay broadcasting system on the PlanetLab testbed, using real traces of join/leave dynamics.
• Consider resilient key dissemination on an overlay network.
• Explore the design space for disseminating data and keys using a decoupled architecture.
Why do we consider this?
• Key dissemination is needed for network security.
• Network bandwidth is limited.
• Frequent rekeying slows down the network.
Key management algorithm
• Centralized key management schemes
  • Rely on a single key server.
  • Batch rekeying → several group membership changes are accumulated into a single rekey operation (a sketch follows below).
  • Choice of rekey period:
    • Short rekey period → frequent rekeying, high overhead.
    • Long rekey period → makes the scheme more vulnerable to violations of security properties.
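The following sketch (an assumption-laden illustration, not the paper's implementation) shows the batch-rekeying pattern: membership changes are queued and a single new group key is issued once per rekey period, making the period length the knob that trades overhead against the exposure window.

```python
import os

class BatchRekeyServer:
    """Sketch of a centralized key server that batches membership changes per rekey period."""

    def __init__(self, rekey_period_s: float):
        # Short period -> frequent rekeying, high overhead.
        # Long period  -> departed members keep a valid key longer (weaker security).
        self.rekey_period_s = rekey_period_s
        self.group_key = os.urandom(32)
        self.members: set[str] = set()
        self.pending: list[tuple[str, str]] = []   # membership changes since the last rekey

    def on_join(self, member_id: str) -> None:
        self.pending.append(("join", member_id))

    def on_leave(self, member_id: str) -> None:
        self.pending.append(("leave", member_id))

    def rekey_tick(self) -> None:
        """Called once per rekey period: apply all batched changes, then issue one new key."""
        if not self.pending:
            return
        for op, member in self.pending:
            (self.members.add if op == "join" else self.members.discard)(member)
        self.pending.clear()
        self.group_key = os.urandom(32)   # the new key must now be disseminated to all members
```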
Key management algorithm
• Two key management algorithms (their encryption costs are contrasted in the sketch below):
  1. Key-star
    • Encrypts the new group key separately for every member when performing a rekey operation.
    • Requires O(N) encrypted messages, where N is the group size.
  2. Marking
    • A variant of the LKH protocol that uses subgroup keys to reduce encryption cost → members that have left are not considered.
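To make the cost difference concrete, here is a back-of-the-envelope comparison (my simplification, assuming a balanced binary key tree, not the paper's exact Marking algorithm): key-star encrypts the new group key under every member's individual key, while an LKH-style scheme encrypts each replaced key under a small number of subtree keys.

```python
import math

def keystar_rekey_cost(n_members: int) -> int:
    """Key-star: the new group key is encrypted once per member -> O(N) encryptions."""
    return n_members

def lkh_rekey_cost(n_members: int) -> int:
    """LKH-style tree (Marking is a variant of LKH): after a single membership change,
    the keys on the path from the affected leaf to the root are replaced, and each new
    key is encrypted under its at most two child keys -> O(log N) encryptions.
    Simplified count for a balanced binary key tree."""
    depth = math.ceil(math.log2(max(n_members, 2)))
    return 2 * depth

for n in (16, 1_024, 65_536):
    print(f"N={n:>6}: key-star={keystar_rekey_cost(n):>6}, LKH~={lkh_rekey_cost(n):>3}")
```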
Resilient key dissemination
• Losing rekey packets can be severe.
• Focus on minimizing the loss of rekey packets. Strategies considered (see the sketch below):
  • Naïve Unicast: the source uses an individual TCP connection to each member.
  • Tree-TCP, Tree-UDP: rekey messages are forwarded along the overlay multicast tree.
  • Tree-Unicast
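The sketch below only illustrates where reliability is applied in these options; the topology and the `send` callback are made-up assumptions. Naïve Unicast sends from the source to every member over its own connection, whereas Tree-TCP and Tree-UDP forward rekey messages hop by hop along the overlay tree over a reliable or best-effort per-hop transport, respectively.

```python
from typing import Callable, Dict, List

# Hypothetical overlay tree, parent -> children.
OVERLAY_TREE: Dict[str, List[str]] = {
    "source": ["A", "B"],
    "A": ["C", "D"],
    "B": ["E"],
    "C": [], "D": [], "E": [],
}

def naive_unicast(members: List[str], rekey_msg: bytes,
                  send: Callable[[str, bytes], None]) -> None:
    """Naive Unicast: the source sends the rekey message to every member individually."""
    for member in members:
        send(member, rekey_msg)

def tree_forward(node: str, rekey_msg: bytes,
                 send: Callable[[str, bytes], None]) -> None:
    """Tree-TCP / Tree-UDP: each overlay node forwards the rekey message to its children.
    Whether `send` uses a reliable (TCP) or best-effort (UDP) hop is the only difference."""
    for child in OVERLAY_TREE[node]:
        send(child, rekey_msg)
        tree_forward(child, rekey_msg, send)

# Example: print instead of a real socket send.
tree_forward("source", b"new-group-key-blob", lambda dst, msg: print("forward to", dst))
```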
Key and data dissemination: coupling strategies
Fig. 1. a) An LKH key tree.
b) An overlay structure optimized for data delivery. Intermediate nodes are positioned by their network characteristics. New keys are sent to all nodes.
c) An overlay structure optimized for key delivery. Intermediate nodes are positioned by their ID. New keys are sent only to nodes that need them.
Key and data dissemination: coupling strategies
• Coupled-Data Optimized: sub-optimal for key delivery, high overhead.
• Coupled-Key Optimized:
  • May reduce rekeying overhead.
  • Can violate the saturation degree of nodes when bandwidth-demanding broadcasting applications are considered.
• Decoupled: two specialized dissemination structures (sketched below).
  • Advantage: good data-delivery performance and reduced overhead for disseminating key messages.
  • Drawback: the source must maintain two structures, which adds complexity and overhead to maintain the extra structure.
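A rough sketch of the decoupled idea under my own simplifying assumptions: the source keeps two structures, a data-delivery ordering based on a measured network characteristic (bandwidth here, purely illustrative) and a key-delivery ordering based on node ID, mirroring Fig. 1(b) and 1(c).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Member:
    node_id: int
    bandwidth_kbps: float      # measured network characteristic (illustrative placeholder)

@dataclass
class DecoupledOverlays:
    """Sketch: the source maintains two separate dissemination structures."""
    data_order: List[Member] = field(default_factory=list)   # optimized for data delivery
    key_order: List[Member] = field(default_factory=list)    # optimized for key delivery

    def add_member(self, member: Member) -> None:
        # Data structure: position members by network characteristics (Fig. 1b).
        self.data_order.append(member)
        self.data_order.sort(key=lambda m: -m.bandwidth_kbps)
        # Key structure: position members by ID, so a rekey message can be routed
        # only toward the subtrees that actually need it (Fig. 1c).
        self.key_order.append(member)
        self.key_order.sort(key=lambda m: m.node_id)

overlays = DecoupledOverlays()
for node_id, bw in [(7, 800.0), (3, 2500.0), (12, 1200.0)]:
    overlays.add_member(Member(node_id, bw))
print([m.node_id for m in overlays.data_order])   # ordered for data delivery: [3, 12, 7]
print([m.node_id for m in overlays.key_order])    # ordered for key delivery:  [3, 7, 12]
```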
Evaluation goals
• Resilient key dissemination:
  • Consider loss or delay of data and keys.
  • Which algorithm is best?
• Key & data coupling:
  • How much overhead is reduced?
  • Are the benefits significant under real workloads?
Test
• Based on real-world Internet workloads, classified into 5 types:
  • Conference 1
  • Conference 2
  • Portal
  • Competition
  • Rally
• Each type has a different data-transmission pattern, i.e., the number of joins/leaves varies.
Result
• Choice of rekey period: Marking algorithm
Result
• Choice of resilient key dissemination: Tree-TCP, Tree-Unicast
Result
• Coupling strategies: Decoupled
  • The overhead of key messages is reduced by 50%–67% relative to that incurred with Coupled-Data Optimized.
  • Even though maintaining the separate key-delivery structure adds overhead, the reduction in total overhead is still significant.
  • The reduction is especially remarkable where the overhead of key messages is the major component, as in the “Rally” type.
Limitations of the proposed solution
• Only single-source broadcasting applications are considered.
• There may be many multi-source broadcasting systems in the real world.
• Multi-source operation may impose a large rekeying overhead on each sub-group → bottleneck.