Peer-to-Peer Networks: Unstructured and Structured
• What is a peer-to-peer network?
• Unstructured Peer-to-Peer Networks
– Napster
– Gnutella
– KaZaA
– BitTorrent
• Distributed Hash Tables (DHT) and Structured Networks
– Chord
– Pros and Cons
Readings: do the required readings, and the optional readings if interested
Peer-to-Peer Networks: How Did it Start?
• A killer application: Napster
– Free music over the Internet
• Key idea: share the content, storage and bandwidth of
individual (home) users
Model
• Each user stores a subset of files
• Each user has access to (can download) files from all users in the system
Main Challenge
• Find where a particular file is stored
[Figure: peers A-F; peer A issues the query "E?" to find which peer stores file E]
Other Challenges
• Scale: up to hundreds of thousands or millions of machines
• Dynamicity: machines can come and go at any time
Peer-to-Peer Networks: Napster
• Napster history: the rise
– January 1999: Napster version 1.0
– May 1999: company founded
– September 1999: first lawsuits
– 2000: 80 million users
(Shawn Fanning, Northeastern freshman)
• Napster history: the fall
– Mid 2001: out of business due to lawsuits
– Mid 2001: dozens of P2P alternatives that were harder to touch, though these have gradually been constrained
– 2003: growth of pay services like iTunes
• Napster history: the resurrection
– 2003: Napster reconstituted as a pay service
– 2006: still lots of file sharing going on
Napster Technology: Directory Service
• User installs the software
– Download the client program
– Register name, password, local directory, etc.
• Client contacts Napster (via TCP)
– Provides a list of music files it will share
– … and Napster’s central server updates the directory
• Client searches on a title or performer
– Napster identifies online clients with the file
– … and provides IP addresses
• Client requests the file from the chosen supplier
– Supplier transmits the file to the client
– Both client and supplier report status to Napster
Napster
• Assume a centralized index system that
maps files (songs) to machines that are
alive
• How to find a file (song)
– Query the index system → returns a machine that stores the required file
• Ideally this is the closest/least-loaded machine
– ftp the file
• Advantages:
– Simplicity, easy to implement sophisticated search
engines on top of the index system
• Disadvantages:
– Robustness, scalability (?)
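
To make the centralized-index model concrete, here is a minimal sketch (names and structure are invented for illustration, not Napster's actual protocol) of a directory that maps songs to the machines currently sharing them; the file transfer itself happens directly between peers:

# Minimal sketch of a Napster-style central directory (illustrative only).

class Directory:
    def __init__(self):
        self.index = {}            # song title -> set of machine addresses

    def register(self, machine, songs):
        """A client reports the list of songs it is willing to share."""
        for song in songs:
            self.index.setdefault(song, set()).add(machine)

    def unregister(self, machine):
        """Remove a machine that went offline."""
        for machines in self.index.values():
            machines.discard(machine)

    def lookup(self, song):
        """Return the machines currently sharing the song (empty if none)."""
        return list(self.index.get(song, set()))

# Usage: clients register, a searcher picks one supplier from the answer and
# downloads the file directly from it; the server only holds the index.
d = Directory()
d.register("m5", ["E", "F"])
d.register("m1", ["A"])
print(d.lookup("E"))   # ['m5']
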
Napster: Example
[Figure: the central index maps files A-F to machines m1-m6; a client asks "E?", the index returns "m5", and the client downloads E directly from m5]
Napster Technology: Properties
• Server’s directory continually updated
– Always know what music is currently available
– Point of vulnerability for legal action
• Peer-to-peer file transfer
– No load on the server
– Plausible deniability for legal action (but not enough)
• Proprietary protocol
– Login, search, upload, download, and status operations
– No security: cleartext passwords and other vulnerabilities
• Bandwidth issues
– Suppliers ranked by apparent bandwidth & response time
Napster: Limitations of Central Directory
• Single point of failure
• Performance bottleneck
• Copyright infringement
File transfer is decentralized, but locating content is highly centralized
• So, later P2P systems were more
distributed
Peer-to-Peer Networks: Gnutella
• Gnutella history
– 2000: J. Frankel & T. Pepper released Gnutella
– Soon after: many other clients (e.g., Morpheus, Limewire, Bearshare)
– 2001: protocol enhancements, e.g., “ultrapeers”
• Query flooding
– Join: contact a few nodes to become neighbors
– Publish: no need!
– Search: ask neighbors, who ask their neighbors
– Fetch: get file directly from another node
Gnutella
• Distribute file location
• Idea: flood the request
• How to find a file:
– Send request to all neighbors
– Neighbors recursively multicast the request
– Eventually a machine that has the file receives the
request, and it sends back the answer
• Advantages:
– Totally decentralized, highly robust
• Disadvantages:
– Not scalable; the entire network can be swamped with requests (to alleviate this problem, each request has a TTL)
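
A sketch of TTL-bounded flooding over the overlay graph (simplified and synchronous; in the real protocol each peer forwards the query to its own neighbors and a QueryHit travels back along the reverse path):

# Sketch of Gnutella-style query flooding with a TTL (illustrative only).
# neighbors: dict mapping each node to the list of its overlay neighbors
# files:     dict mapping each node to the set of file names it stores

def flood_query(neighbors, files, start, wanted, ttl=7):
    hits = [start] if wanted in files.get(start, set()) else []
    seen = {start}                      # suppress duplicate forwarding
    frontier = [start]
    while frontier and ttl > 0:
        next_frontier = []
        for node in frontier:
            for nbr in neighbors[node]:
                if nbr in seen:
                    continue
                seen.add(nbr)
                if wanted in files.get(nbr, set()):
                    hits.append(nbr)    # a QueryHit would travel back
                                        # along the reverse path
                next_frontier.append(nbr)
        frontier = next_frontier
        ttl -= 1                        # each hop consumes one TTL unit
    return hits
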
Gnutella
• Ad-hoc topology
• Queries are flooded for a bounded number of hops
• No guarantees on recall
[Figure: a query for “xyz” is flooded through the ad-hoc overlay; the peers storing xyz answer]
Gnutella: Query Flooding
• Fully distributed
– No central server
• Public-domain protocol
• Many Gnutella clients implement the protocol
• Overlay network: graph
– Edge between peers X and Y if there is a TCP connection between them
– All active peers and edges form the overlay network
– A given peer will typically be connected to < 10 overlay neighbors
Gnutella: Protocol
• Query messages are sent over existing TCP connections
• Peers forward Query messages
• QueryHit messages are sent back over the reverse path
• File transfer: HTTP
• Scalability: limited-scope flooding
[Figure: a Query propagates peer to peer through the overlay; the QueryHit returns along the reverse path; the file itself is transferred over HTTP]
Gnutella: Peer Joining
• Joining peer X must find some other peer in
Gnutella network: use list of candidate peers
• X sequentially attempts to make TCP connections with peers on the list until a connection is set up with some peer Y
• X sends Ping message to Y; Y forwards Ping
message.
• All peers receiving Ping message respond with Pong
message
• X receives many Pong messages. It can then set up additional TCP connections
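
A sketch of that joining handshake (illustrative pseudocode; connect() and send_ping() stand in for the real TCP setup and Gnutella Ping/Pong messages):

# Sketch of Gnutella peer joining (illustrative only).

def join(candidate_peers, connect, send_ping, max_neighbors=10):
    """candidate_peers: list of known addresses; connect() tries a TCP setup."""
    # 1. Find one reachable peer Y from the candidate list.
    for y in candidate_peers:
        if connect(y):
            break
    else:
        raise RuntimeError("no reachable peer")
    # 2. Send a Ping to Y; Y forwards it, and every peer that sees it
    #    answers with a Pong carrying its own address.
    pongs = send_ping(y)
    # 3. Open additional TCP connections to some of the peers that answered.
    neighbors = [y]
    for peer in pongs:
        if len(neighbors) >= max_neighbors:
            break
        if connect(peer):
            neighbors.append(peer)
    return neighbors
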
Gnutella: Pros and Cons
• Advantages
– Fully decentralized
– Search cost distributed
– Processing per node permits powerful search semantics
• Disadvantages
– Search scope may be quite large
– Search time may be quite long
– High overhead; nodes come and go often
Peer-to-Peer Networks: KaZaA
• KaZaA history
– 2001: created by Dutch
company (Kazaa BV)
– Single network called
FastTrack used by
other clients as well
– Eventually the protocol
changed so other
clients could no longer
talk to it
• Smart query flooding
– Join: on start, the client contacts a super-node (and may later become one)
– Publish: client sends list of files to its super-node
– Search: send query to super-node, and the super-nodes flood queries among themselves
– Fetch: get file directly from peer(s); can fetch from multiple peers at once
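
A minimal sketch of this two-tier search (illustrative only; FastTrack itself is proprietary, so the class and method names here are invented):

# Sketch of super-node style publish/search (not the FastTrack protocol).

class SuperNode:
    def __init__(self):
        self.index = {}          # file name -> set of ordinary peers
        self.peers = set()       # other super-nodes we flood queries to

    def publish(self, client, file_list):
        """An ordinary peer uploads the list of files it shares."""
        for name in file_list:
            self.index.setdefault(name, set()).add(client)

    def search(self, name, seen=None):
        """Answer from the local index, then flood among super-nodes."""
        seen = seen or set()
        seen.add(self)
        hits = set(self.index.get(name, set()))
        for sn in self.peers:
            if sn not in seen:
                hits |= sn.search(name, seen)
        return hits
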
KaZaA: Exploiting Heterogeneity
• Each peer is either a
group leader or
assigned to a group
leader
– TCP connection between
peer and its group leader
– TCP connections between
some pairs of group leaders
• Group leader tracks
the content in all its
children
[Figure legend: ordinary peer; group-leader peer; neighboring relationships in the overlay network]
KaZaA: Motivation for Super-Nodes
• Query consolidation
– Many connected nodes may have only a few files
– Propagating query to a sub-node may take more time than
for the super-node to answer itself
• Stability
– Super-node selection favors nodes with high up-time
– How long you’ve been on is a good predictor of how long
you’ll be around in the future
Peer-to-Peer Networks: BitTorrent
• BitTorrent history and motivation
– 2002: B. Cohen debuted BitTorrent
– Key motivation: popular content
• Popularity exhibits temporal locality (Flash Crowds)
• E.g., Slashdot effect, CNN Web site on 9/11, release
of a new movie or game
– Focused on efficient fetching, not searching
• Distribute same file to many peers
• Single publisher, many downloaders
– Preventing free-loading
BitTorrent: Simultaneous Downloading
• Divide large file into many pieces
– Replicate different pieces on different peers
– A peer with a complete piece can trade with
other peers
– Peer can (hopefully) assemble the entire file
• Allows simultaneous downloading
– Retrieving different parts of the file from
different peers at the same time
BitTorrent Components
• Seed
– Peer with entire file
– File is fragmented into pieces
• Leecher
– Peer with an incomplete copy of the file
• Torrent file
– Passive component
– Stores summaries of the pieces to allow peers
to verify their integrity
• Tracker
– Allows peers to find each other
– Returns a list of random peers
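
The “summaries” are per-piece hashes (SHA-1 in the original BitTorrent design). A sketch of splitting a file into pieces and letting a leecher verify a downloaded piece (simplified; real .torrent files are bencoded and carry more metadata):

import hashlib

PIECE_SIZE = 256 * 1024          # typical piece sizes are powers of two

def piece_hashes(path, piece_size=PIECE_SIZE):
    """Split a file into pieces and record the SHA-1 of each piece."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            piece = f.read(piece_size)
            if not piece:
                break
            hashes.append(hashlib.sha1(piece).digest())
    return hashes

def verify_piece(data, index, hashes):
    """A leecher checks a downloaded piece against the torrent's hash list."""
    return hashlib.sha1(data).digest() == hashes[index]
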
BitTorrent: Overall Architecture
[Figure: a downloader (“US”) learns about the torrent from a web server, contacts the tracker to obtain a list of peers, and then exchanges pieces directly with peers A, B, and C (a seed and two leeches)]
Free-Riding Problem in P2P Networks
• Vast majority of users are free-riders
– Most share no files and answer no queries
– Others limit # of connections or upload speed
• A few “peers” essentially act as servers
– A few individuals contributing to the public good
– Making them hubs that basically act as a server
• BitTorrent prevents free-riding
– Allow the fastest peers to download from you
– Occasionally let some free loaders download
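
A sketch of that unchoking policy (simplified; real clients also rotate choices every few seconds, distinguish seeding from downloading, etc.):

import random

def choose_unchoked(peers, upload_rate, k=4):
    """Unchoke the k peers that upload to us fastest (tit-for-tat), plus one
    random 'optimistic unchoke' so newcomers and free-loaders occasionally
    get data and can prove themselves."""
    fastest = sorted(peers, key=lambda p: upload_rate[p], reverse=True)[:k]
    others = [p for p in peers if p not in fastest]
    optimistic = random.sample(others, 1) if others else []
    return fastest + optimistic
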
Distributed Hash Tables (DHTs)
• Abstraction: a distributed hash-table
data structure
– insert(id, item);
– item = query(id); (or lookup(id);)
– Note: item can be anything: a data object, document,
file, pointer to a file…
• Proposals
– CAN, Chord, Kademlia, Pastry, Tapestry, etc
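
A toy sketch of this abstraction, assuming every node knows the full node list (real DHTs such as Chord avoid exactly that assumption by routing in O(log n) hops):

import hashlib

class ToyDHT:
    """Toy DHT: every node knows the full node list (real DHTs do not)."""
    def __init__(self, nodes):
        self.nodes = sorted(nodes)
        self.store = {n: {} for n in self.nodes}

    def _node_for(self, key):
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def insert(self, key, item):
        self.store[self._node_for(key)][key] = item

    def query(self, key):
        return self.store[self._node_for(key)].get(key)

dht = ToyDHT(["n1", "n2", "n3"])
dht.insert("song.mp3", "pointer-to-file-on-m5")
print(dht.query("song.mp3"))
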
DHT Design Goals
• Make sure that an item (file) that is in the system is always found
• Scales to hundreds of thousands of nodes
• Handles rapid arrival and failure of nodes
Structured Networks
• Distributed Hash Tables (DHTs)
• Hash table interface: put(key, item), get(key)
• O(log n) hops
• Guarantees on recall
[Figure: put(K1, I1) stores item I1 at the node responsible for key K1; get(K1) is routed to that node and returns I1]
Chord
• Associate with each node and item a unique id in a one-dimensional space 0..2^m - 1
• Key design decision
– Decouple correctness from efficiency
• Properties
– Routing table size O(log(N)) , where N is the total
number of nodes
– Guarantees that a file is found in O(log(N)) steps
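
Node and item ids are typically produced by hashing (e.g., SHA-1 of a node's address or of a file name) truncated to m bits; a tiny sketch under that assumption (m = 6 here, matching the example ring on the next slides):

import hashlib

m = 6                                   # id space 0 .. 2^m - 1

def chord_id(name, m=m):
    """Map a node address or item key to an m-bit identifier."""
    h = int(hashlib.sha1(name.encode()).hexdigest(), 16)
    return h % (2 ** m)

print(chord_id("128.95.4.112:8000"))    # a (made-up) node address
print(chord_id("song.mp3"))             # an item key
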
Identifier to Node Mapping Example
• Node 8 maps [5,8]
• Node 15 maps [9,15]
• Node 20 maps [16,20]
• …
• Node 4 maps [59,4]
• Each node maintains a pointer to its successor
[Figure: identifier ring with nodes 4, 8, 15, 20, 32, 35, 44, 58]
Lookup
• Each node maintains its successor
• Route a packet (ID, data) to the node responsible for ID using successor pointers
[Figure: lookup(37) is forwarded along successor pointers around the ring and resolves to node 44]
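
A sketch of this basic successor-only lookup on the example ring (with only successor pointers a lookup can take O(N) hops; the finger tables introduced later cut this to O(log N)):

# Successor-only lookup on the example ring (illustrative sketch; the id
# space is 0..63, i.e. m = 6, and node ids are taken from the figure).

ring = [4, 8, 15, 20, 32, 35, 44, 58]
successor = {n: ring[(i + 1) % len(ring)] for i, n in enumerate(ring)}

def responsible_for(node, key):
    """True if key lies in the arc (predecessor(node), node] on the ring."""
    pred = ring[(ring.index(node) - 1) % len(ring)]
    if pred < node:
        return pred < key <= node
    return key > pred or key <= node      # the arc wraps past 2^m - 1

def lookup(start, key):
    node = start
    while not responsible_for(node, key):
        node = successor[node]            # one hop per successor pointer
    return node

print(lookup(4, 37))                      # -> 44, as in the figure
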
Joining Operation
• Each node A periodically sends a stabilize()
message to its successor B
• Upon receiving a stabilize() message, node B
– returns its predecessor B’=pred(B) to A by sending a notify(B’)
message
• Upon receiving notify(B’) from B,
– if B’ is between A and B, A updates its successor to B’
– otherwise, A does nothing
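
The rules above, written as a sketch (simplified: direct method calls instead of messages; the slides fold B's predecessor update into its handling of stabilize(), while here it happens in notify(), roughly as in the published Chord pseudocode):

# Sketch of Chord stabilization (illustrative pseudocode).

def between(x, a, b):
    """True if x lies on the ring strictly between a and b (exclusive)."""
    if a < b:
        return a < x < b
    return x > a or x < b              # interval wraps around the ring

class Node:
    def __init__(self, ident):
        self.id = ident
        self.succ = None
        self.pred = None

    def stabilize(self):
        """Periodically ask our successor for its predecessor."""
        b = self.succ
        b_pred = b.pred                # in the real protocol, B returns
                                       # pred(B) in a notify() reply
        if b_pred is not None and between(b_pred.id, self.id, b.id):
            self.succ = b_pred         # someone joined between us and B
        self.succ.notify(self)         # tell our successor about us

    def notify(self, a):
        """A node A claims it might be our predecessor."""
        if self.pred is None or between(a.id, self.pred.id, self.id):
            self.pred = a
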
Joining Operation
• Node with id=50 joins the ring
• Node 50 needs to know at least one node already in the system
– Assume the known node is 15
[Figure: ring with nodes 4, 8, 15, 20, 32, 35, 44, 58; the joining node 50 has succ=nil, pred=nil; node 58 has succ=4, pred=44; node 44 has succ=58, pred=35]
Joining Operation
• Node 50: sends join(50) to node 15
• Node 44: returns node 58
• Node 50 updates its successor to 58
[Figure: join(50) is forwarded to node 44, which returns 58; node 50 now has succ=58, pred=nil]
Joining Operation
• Node 50: sends stabilize() to node 58
• Node 58:
– updates its predecessor to 50
– sends notify() back
[Figure: node 50 (succ=58, pred=nil) sends stabilize() to node 58, which changes its predecessor from 44 to 50]
Joining Operation (cont’d)
• Node 44 sends a stabilize() message to its successor, node 58
• Node 58 replies with a notify() message
• Node 44 updates its successor to 50
[Figure: node 44 changes its successor from 58 to 50; node 58 has succ=4, pred=50]
Joining Operation (cont’d)
• Node 44 sends a stabilize() message to its new successor, node 50
• Node 50 sets its predecessor to node 44
[Figure: node 50 now has succ=58, pred=44; node 44 has succ=50, pred=35]
Joining Operation (cont’d)
• This completes the joining operation!
[Figure: final state: node 50 has succ=58, pred=44; node 44 has succ=50; node 58 has pred=50]
Achieving Efficiency: finger tables
Finger table at node 80 (say m=7):

i      ft[i]
0      96
1      96
2      96
3      96
4      96
5      112
6      20

Example: (80 + 2^6) mod 2^7 = 16, and the first peer with id >= 16 is 20, so ft[6] = 20.
[Figure: identifier ring (m=7) with nodes 20, 32, 45, 80, 96, 112; arcs from node 80 mark the targets 80 + 2^0 through 80 + 2^6]
The ith entry at the peer with id n is the first peer with id >= (n + 2^i) mod 2^m.
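
A small sketch that reproduces the table above from this rule (node ids taken from the figure; purely illustrative):

# Compute finger tables from the rule: the ith entry at node n is the first
# peer with id >= (n + 2^i) mod 2^m.

m = 7
nodes = sorted([20, 32, 45, 80, 96, 112])      # node ids from the figure

def successor(ident):
    """First node clockwise from ident on the ring 0 .. 2^m - 1."""
    for n in nodes:
        if n >= ident:
            return n
    return nodes[0]                             # wrap around the ring

def finger_table(n):
    return [successor((n + 2 ** i) % 2 ** m) for i in range(m)]

print(finger_table(80))    # -> [96, 96, 96, 96, 96, 112, 20]
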
Achieving Robustness
• To improve robustness, each node maintains its k (> 1) immediate successors instead of only one successor
• In the notify() message, node A can send
its k-1 successors to its predecessor B
• Upon receiving notify() message, B can
update its successor list by concatenating
the successor list received from A with A
itself
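
A sketch of the successor-list update just described (illustrative pseudocode; K corresponds to the k above):

# Sketch: keep a list of K immediate successors instead of a single one.

K = 3

def on_notify(b, a, a_successor_list):
    """Node B received notify() from its successor A, carrying A's own
    successor list (A's k-1 successors). B's list becomes A followed by
    A's successors, truncated to K entries."""
    b.successor_list = ([a] + list(a_successor_list))[:K]
    b.succ = b.successor_list[0]
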
Chord Optimizations
• Reduce latency
– Choose the finger that reduces the expected time to reach the destination
– Choose the closest node in the range [N + 2^(i-1), N + 2^i) as the successor
• Accommodate heterogeneous systems
– Multiple virtual nodes per physical node
DHT Conclusions
• Distributed Hash Tables are a key
component of scalable and robust overlay
networks
• Chord: O(log n) state, O(log n) distance
• Both can achieve stretch < 2
• Simplicity is key
• Services built on top of distributed hash
tables
– persistent storage (OpenDHT, Oceanstore)
– p2p file storage, i3 (chord)
– multicast (CAN, Tapestry)