CSE 30264
Computer Networks
Prof. Aaron Striegel
Department of Computer Science & Engineering
University of Notre Dame
Lecture 25 – April 15, 2010
Today’s Lecture
• Project 4
– Overview / Discussion
• Applications – Part 1
– DNS
– SMTP
– HTTP
– P2P
[Figure: protocol stack – Application, Transport, Network, Data, Physical]
Project 4
• Implementation
– Multi-threaded server
– Performance will count
• Experimentation
– Latency
– Loss
Due: Thursday, April 29th, 5 PM
Premise
• Spaceship racing game in alpha development
– C# / WPF / Windows
– Up to 4 players
– Test network functionality
• Your task
– Write a multi-threaded server that
allows clients to connect and relays
messages back and forth
• Twist
– The protocol is changing
• Need to make your server agnostic with respect
to message content
Operation
• Client starts up
• Client sets its player name to a unique value
– Responsibility of client, not server
• Client connects to the server
– TCP, port defined in the client
• Client periodically sends data
– Send status / information to server
– Server relays that information to all other clients
Client App
• Provided the alpha version
– Core functionality
• Will also be provided a beta version (next week)
– Additional polish → different message contents
• Near-final version
– Different set of messages
– Only for evaluation of your code
Alpha, beta → provided to you
Your Task - Server
• Write a multi-threaded server
– Listen for new TCP connections on a particular port
– Receive messages / relay to all other clients
• Receive and forward
• No inspection or modification
• What is different
– Server needs to be robust
• Client quits, error detection, etc.
– Thread safety
• Potential for clients to exit, arrive while receiving / sending messages
• Mutexes
– Performance
• Speedy or else the clients will lag
Notes - Implementation
• Console input
– Queries / status
• Thread – listen / accept
• Thread – each client
• Global object
– List of sockets / etc.
– Guard on write
– Guard on modifying the list
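A minimal sketch of the pattern these notes describe — one accept loop, one thread per client, and a lock guarding the shared socket list — written in Python for brevity. The actual project may dictate a different language, port, and message framing; the port number below is just an example.

```python
# Sketch of a multi-threaded relay server: accept loop in the main thread,
# one thread per client, global client list guarded by a lock.
import socket
import threading

clients = []                        # shared list of connected client sockets
clients_lock = threading.Lock()     # guards writes and list modification

def handle_client(conn):
    """Receive from one client and relay each message to all others."""
    try:
        while True:
            data = conn.recv(4096)
            if not data:            # client closed the connection
                break
            with clients_lock:      # guard the shared socket list
                for other in clients:
                    if other is not conn:
                        try:
                            other.sendall(data)   # relay bytes untouched
                        except OSError:
                            pass    # a dead peer is cleaned up by its own thread
    finally:
        with clients_lock:
            clients.remove(conn)
        conn.close()

def main(port=9000):                # example port, not the project's
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", port))
    listener.listen(5)
    while True:
        conn, _addr = listener.accept()
        with clients_lock:
            clients.append(conn)
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```

Relaying raw bytes without inspecting them is what keeps the server agnostic to the message content, which is the point of the "twist" above.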
Experimentation
• Human experimenting
– Alpha version – raw racing field
– Beta version – collision detection / etc.
• Tinker with delay / loss
– Slider bar → inject loss
• Fails to report a new status
– Slider bar → inject delay
• Slightly queues the status report
– Where does it start to seem off?
Miscellaneous
Outline
Domain Name System
Peer-to-Peer Networks
Name Service
• Names versus addresses
• Location transparent versus location-dependent
• Flat versus hierarchical
• Resolution mechanism
• Name server
• DNS: domain name system
Examples
• Hosts
cheltenham.cs.princeton.edu → 192.12.69.17
192.12.69.17 → 80:23:A8:33:5B:9F
• Files
/usr/llp/tmp/foo → fileid
Examples (cont)
• Mailboxes
Domain Naming System
• Hierarchy
[Figure: DNS naming hierarchy – top-level domains edu, com, gov, mil, org, net, uk, fr; second-level domains such as princeton, mit, cisco, yahoo, nasa, nsf, arpa, navy, acm, ieee; subdomains cs, ee, physics under princeton; hosts ux01, ux04]
• Name
wizard.cse.nd.edu
Name Servers
• Partition hierarchy into zones
• Each zone implemented by two or more name servers
[Figure: the naming hierarchy partitioned into zones, each served by name servers – root name server, Princeton name server, Cisco name server, CS name server, EE name server]
Resource Records
• Each name server maintains a collection of resource records
(Name, Value, Type, Class, TTL)
• Name/Value: not necessarily host names to IP addresses
• Type
– A: IP addresses
– NS: value gives domain name for host running name server that
knows how to resolve names within specified domain.
– CNAME: value gives canonical name for a host; used to define
aliases.
– MX: value gives domain name for host running mail server that
accepts messages for specified domain.
• Class: allows other entities to define types
• TTL: how long the resource record is valid
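A toy illustration of resource records as (Name, Value, Type, Class, TTL) tuples, reusing names that appear on the server slides below; the TTL values are made up.

```python
# Resource records as simple tuples and a lookup by (name, type).
records = [
    ("cs.princeton.edu", "optima.cs.princeton.edu", "NS", "IN", 3600),
    ("optima.cs.princeton.edu", "192.12.69.5", "A", "IN", 3600),
    ("che.cs.princeton.edu", "cheltenham.cs.princeton.edu", "CNAME", "IN", 3600),
    ("cs.princeton.edu", "optima.cs.princeton.edu", "MX", "IN", 3600),
]

def lookup(name, rrtype):
    """Return the values of all records matching (name, type)."""
    return [value for (n, value, t, _cls, _ttl) in records
            if n == name and t == rrtype]

print(lookup("cs.princeton.edu", "NS"))   # ['optima.cs.princeton.edu']
```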
Root Server
(princeton.edu, cit.princeton.edu, NS, IN)
(cit.princeton.edu, 128.196.128.233, A, IN)
(cisco.com, thumper.cisco.com, NS, IN)
(thumper.cisco.com, 128.96.32.20, A, IN)
…
Princeton Server
(cs.princeton.edu, optima.cs.princeton.edu, NS, IN)
(optima.cs.princeton.edu, 192.12.69.5, A, IN)
(ee.princeton.edu, helios.ee.princeton.edu, NS, IN)
(helios.ee.princeton.edu, 128.196.28.166, A, IN)
(jupiter.physics.princeton.edu, 128.196.4.1, A, IN)
(saturn.physics.princeton.edu, 128.196.4.2, A, IN)
(mars.physics.princeton.edu, 128.196.4.3, A, IN)
(venus.physics.princeton.edu, 128.196.4.4, A, IN)
CS Server
(cs.princeton.edu, optima.cs.princeton.edu, MX, IN)
(cheltenham.cs.princeton.edu, 192.12.69.60, A, IN)
(che.cs.princeton.edu, cheltenham.cs.princeton.edu, CNAME, IN)
(optima.cs.princeton.edu, 192.12.69.5, A, IN)
(opt.cs.princeton.edu, optima.cs.princeton.edu, CNAME, IN)
(baskerville.cs.princeton.edu, 192.12.69.35, A, IN)
(bas.cs.princeton.edu, baskerville.cs.princeton.edu, CNAME, IN)
Name Resolution
• Strategy
• Local server
– need to know root at only one
place (not each host)
– site-wide cache
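Hosts typically act only as stub resolvers: they hand the query to the configured local name server and get back a finished answer. A minimal sketch using the OS resolver; the host name is just an example.

```python
# Ask the local resolver (which talks to the site's name server) for A records.
import socket

def resolve(hostname):
    """Return the IPv4 addresses the local resolver finds for hostname."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return sorted({sockaddr[0] for _, _, _, _, sockaddr in infos})

print(resolve("www.nd.edu"))   # actual addresses will vary
```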
Electronic Mail
• RFC 822: header and body
• MIME: Multi-purpose Internet Mail Extensions
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary=“-------417CA6E2DE4ABCAFBC5”
From: Alice Smith [email protected]
To: [email protected]
Subject: look at the attached image!
Date: Mon, 07 Sep 1998 19:45:19 -0400
-------417CA6E2DE4ABCAFBC5
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Bob,
here’s the jpeg image I promised.
-- Alice
-------417CA6E2DE4ABCAFBC5
Content-Type: image/jpeg
Content-Transfer-Encoding: base64
[unreadable encoding of a jpeg figure]
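A sketch of building a message like the one above with Python's standard email package; the addresses and the picture.jpg file name are placeholders, not taken from the slide.

```python
# Construct a multipart/mixed message: a text part plus a base64 JPEG part.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

msg = MIMEMultipart("mixed")            # Content-Type: multipart/mixed
msg["From"] = "alice@example.com"       # placeholder address
msg["To"] = "bob@example.com"           # placeholder address
msg["Subject"] = "look at the attached image!"

msg.attach(MIMEText("Bob,\nhere's the jpeg image I promised.\n-- Alice"))
with open("picture.jpg", "rb") as f:    # assumes some JPEG file exists on disk
    msg.attach(MIMEImage(f.read(), _subtype="jpeg"))   # base64-encoded part

print(msg.as_string()[:400])            # headers plus the start of the MIME body
```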
SMTP
• Mail reader, mail daemon, mail gateway
• SMTP messages: HELO, MAIL, RCPT, DATA,
QUIT; server responds with code.
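A hedged sketch of the same command sequence driven through Python's smtplib; it assumes a mail daemon is listening on localhost port 25, and the addresses are placeholders.

```python
# Walk the SMTP dialogue explicitly; each call returns a (code, message) pair,
# e.g. (250, b'OK').
import smtplib

with smtplib.SMTP("localhost", 25) as server:   # assumed local mail daemon
    server.helo("client.example.com")               # HELO
    server.mail("alice@example.com")                # MAIL FROM
    server.rcpt("bob@example.com")                  # RCPT TO
    server.data(b"Subject: test\r\n\r\nhello\r\n")  # DATA (terminating '.' added)
    # QUIT is sent automatically when the with-block exits.
```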
World Wide Web
• URL: uniform resource locator
http://www.cse.nd.edu
• HTTP:
START_LINE <CRLF>
MESSAGE_HEADER <CRLF>
<CRLF>
MESSAGE_BODY <CRLF>
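A sketch of that message format on the wire: a GET request written as raw bytes over a TCP socket. The host comes from the URL above, but whether it still answers on port 80 is an assumption.

```python
# START_LINE, MESSAGE_HEADERs, and a blank CRLF line, sent over a plain socket.
import socket

host = "www.cse.nd.edu"
request = (
    "GET / HTTP/1.1\r\n"        # START_LINE <CRLF>
    f"Host: {host}\r\n"         # MESSAGE_HEADER <CRLF>
    "Connection: close\r\n"
    "\r\n"                      # blank line ends the headers
)
with socket.create_connection((host, 80)) as s:
    s.sendall(request.encode("ascii"))
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
print(reply.split(b"\r\n", 1)[0])   # status line, e.g. 200 OK or a redirect
```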
HTTP
• Request:
– GET: fetch specified web page
– HEAD: fetch status information about specified page
GET http://www.cse.nd.edu/index.html HTTP/1.1
• Response:
– HTTP/1.1 202 Accepted
– HTTP/1.1 404 Not Found
– HTTP/1.1 301 Moved Permanently
Location: http://www.nd.edu/cs/index.html
HTTP
• HTTP 1.0: separate TCP connection for each
request (each data item).
• HTTP 1.1: persistent connections
• Caching (see the conditional-GET sketch below):
– client: faster retrieval of web pages
– server: reduced load
– location: client, sitewide cache, ISP, etc.
– EXPIRES header field (provided by server)
– IF-MODIFIED-SINCE header field (issued by cache)
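A sketch of a cache revalidating a page with IF-MODIFIED-SINCE using Python's http.client; the URL and the stored date are placeholders, and a real cache would use the Last-Modified/Expires values from its earlier response.

```python
# Conditional GET: 304 Not Modified -> serve the cached copy, 200 OK -> replace it.
import http.client

conn = http.client.HTTPConnection("www.cse.nd.edu")   # example host
conn.request("GET", "/index.html",
             headers={"If-Modified-Since": "Mon, 07 Sep 1998 19:45:19 GMT"})
resp = conn.getresponse()
print(resp.status, resp.reason)
conn.close()
```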
SNMP
• Request/reply protocol (on top of UDP)
• 2 main operations:
– GET: retrieve state info from hosts
– SET: set new state on host
• Relies on Management Information Base (MIB)
– system: uptime, name, …
– interfaces: physical address, packets sent/received, …
– address translation: ARP (contents of table)
– IP: routing table, number of forwarded datagrams, reassembly statistics, dropped packets, …
– TCP: number of passive and active opens, resets, timeouts, …
– UDP: number of packets sent/received, …
P2P
• Overview:
– centralized database: Napster
– query flooding: Gnutella
– intelligent query flooding: KaZaA
– swarming: BitTorrent
– unstructured overlay routing: Freenet
– structured overlay routing: Distributed Hash Tables
Napster
• Centralized Database:
– Join: on startup, client contacts central server
– Publish: reports list of files to central server
– Search: query the server => return someone that stores
the requested file
– Fetch: get the file directly from peer
Gnutella
• Query Flooding:
– Join: on startup, client contacts a few other
nodes; these become its “neighbors”
– Publish: no need
– Search: ask neighbors, who ask their
neighbors, and so on... when/if found, reply to
sender.
– Fetch: get the file directly from peer
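A toy sketch of query flooding over a neighbor graph with a hop limit (TTL); the graph, file placement, and TTL are invented for illustration.

```python
# Breadth-first flood: ask neighbors, who ask their neighbors, until TTL runs out.
def flood_search(graph, files, start, wanted, ttl=3):
    """Return the set of peers holding `wanted`, reached within `ttl` hops."""
    found, visited, frontier = set(), {start}, [start]
    for _ in range(ttl):
        next_frontier = []
        for peer in frontier:
            for neighbor in graph[peer]:
                if neighbor in visited:
                    continue          # don't re-ask a peer already reached
                visited.add(neighbor)
                if wanted in files.get(neighbor, ()):
                    found.add(neighbor)
                next_frontier.append(neighbor)
        frontier = next_frontier
    return found

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
         "D": ["B", "C", "E"], "E": ["D"]}
files = {"D": {"song.mp3"}, "E": {"song.mp3"}}
print(flood_search(graph, files, "A", "song.mp3"))   # {'D'} with ttl=2, {'D','E'} with ttl=3
```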
KaZaA (Kazaa)
• In 2001, Kazaa was created by the Dutch company KaZaA BV.
• Single network called FastTrack used by other clients as
well: Morpheus, giFT, etc.
• Eventually protocol changed so other clients could no
longer talk to it.
• 2004: 2nd most popular file sharing network, 1–5 million users at any given time, about 1000 downloads per minute (June 2004: average 2.7 million users, compared to BitTorrent's 8 million)
KaZaA
• “Smart” Query Flooding:
– Join: on startup, client contacts a “supernode” ... may at some
point become one itself
– Publish: send list of files to supernode
– Search: send query to supernode, supernodes flood query amongst
themselves.
– Fetch: get the file directly from peer(s); can fetch simultaneously
from multiple peers
KaZaA
[Figure: two-level overlay – ordinary peers connect to “super nodes”, which connect amongst themselves]
KaZaA: File Insert
[Figure: a peer at 123.2.21.23 publishes “I have X!” by sending insert(X, 123.2.21.23) to its supernode]
KaZaA: File Search
[Figure: a peer asks “Where is file A?”; the query search(A) floods among the supernodes, and replies return peers holding A, e.g. 123.2.0.18 and 123.2.22.50]
KaZaA: Fetching
• More than one node may have the requested file...
• How to tell?
– must be able to distinguish identical files
– not necessarily same filename
– same filename not necessarily same file...
• Use Hash of file
– KaZaA uses UUHash: fast, but not secure
– alternatives: MD5, SHA-1
• How to fetch?
– get bytes [0..1000] from A, [1001...2000] from B
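A sketch of both ideas: identify a file by a content hash (SHA-1 here, standing in for KaZaA's UUHash) and split the fetch into byte ranges, one per peer; the peer count and file size are examples.

```python
# Content hashing plus byte-range splitting for a multi-peer fetch.
import hashlib

def file_id(path):
    """Content hash, so identical files match even under different names."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def byte_ranges(size, n_peers):
    """Split [0, size) into one contiguous (start, end) range per peer."""
    step = -(-size // n_peers)          # ceiling division
    return [(lo, min(lo + step, size) - 1) for lo in range(0, size, step)]

print(byte_ranges(2001, 2))   # [(0, 1000), (1001, 2000)] – as on the slide
```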
KaZaA
• Pros:
– tries to take into account node heterogeneity:
• bandwidth
• host computational resources
– rumored to take into account network locality
• Cons:
– mechanisms easy to circumvent
– still no real guarantees on search scope or search time
BitTorrent
• In 2002, B. Cohen debuted BitTorrent
• Key motivation:
– popularity exhibits temporal locality (flash crowds)
– e.g., Slashdot effect, CNN on 9/11, new movie/game release
• Focused on efficient Fetching, not Searching:
– distribute the same file to all peers
– files split into pieces (typically 250 kB)
– single publisher, multiple downloaders
– each downloader becomes a publisher (while still downloading)
• Has some “real” publishers:
– Blizzard Entertainment uses it to distribute betas of its new games
BitTorrent
• Swarming:
– Join: contact centralized “tracker” server, get a list of
peers.
– Publish: run a tracker server.
– Search: out-of-band, e.g., use Google to find a tracker
for the file you want.
– Fetch: download chunks of the file from your peers.
Upload chunks you have to them.
BitTorrent: Publish/Join
[Figure: peers contacting the centralized tracker to join the swarm]
BitTorrent: Fetch
BitTorrent: Sharing Strategy
• Employ “Tit-for-tat” sharing strategy
– “I’ll share with you if you share with me”
– be optimistic: occasionally let freeloaders download
• otherwise no one would ever start!
• also allows you to discover better peers to download from when they
reciprocate
• Approximates Pareto Efficiency
– game theory: “No change can make anyone better off without
making others worse off”
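A toy sketch of the selection rule: reciprocate with the peers that upload fastest to us, plus one random "optimistic" slot so newcomers get a chance; the slot count and rates are invented.

```python
# Tit-for-tat peer selection: top uploaders plus one optimistic unchoke.
import random

def choose_unchoked(upload_rate_to_me, regular_slots=3):
    """Return the set of peers we will upload to this round."""
    ranked = sorted(upload_rate_to_me, key=upload_rate_to_me.get, reverse=True)
    unchoked = set(ranked[:regular_slots])        # reciprocate best uploaders
    others = [p for p in ranked if p not in unchoked]
    if others:
        unchoked.add(random.choice(others))       # optimistic unchoke
    return unchoked

rates = {"peerA": 120.0, "peerB": 80.0, "peerC": 5.0, "peerD": 0.0, "peerE": 50.0}
print(choose_unchoked(rates))   # top 3 uploaders plus one random other peer
```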
BitTorrent
• Pros:
– works reasonably well in practice
– gives peers incentive to share resources; avoids
freeloaders
• Cons:
– central tracker server needed to bootstrap swarm
Freenet
• In 1999, I. Clarke started the Freenet project
• Basic idea:
– employ Internet-like routing on the overlay network to
publish and locate files
• Additional goals:
– provide anonymity and security
– make censorship difficult
FreeNet
• Routed Queries:
– Join: on startup, client contacts a few other nodes it knows about;
gets a unique node id
– Publish: route file contents toward the file id. File is stored at node
with id closest to file id
– Search: route query for file id toward the closest node id
– Fetch: when query reaches a node containing file id, it returns the
file to the sender
Distributed Hash Tables DHT
• In 2000-2001, academic researchers said “we want to play too!”
• Motivation:
– Frustrated by popularity of all these “half-baked” P2P apps :)
– We can do better! (so we said)
– Guaranteed lookup success for files in the system
– Provable bounds on search time
– Provable scalability to millions of nodes
• Hot Topic in networking ever since
DHT
• Abstraction: a distributed “hash-table” (DHT) data
structure:
– put(id, item);
– item = get(id);
• Implementation: nodes in system form a distributed data
structure
– Can be Ring, Tree, Hypercube, Skip List, Butterfly Network, ...
DHT
• Structured Overlay Routing:
– Join: On startup, contact a “bootstrap” node and integrate yourself
into the distributed data structure; get a node id
– Publish: Route publication for file id toward a close node id along
the data structure
– Search: Route a query for file id toward a close node id. Data
structure guarantees that query will meet the publication.
– Fetch: Two options:
• Publication contains actual file => fetch from where query stops
• Publication says “I have file X” => query tells you 128.2.1.3 has X,
use IP routing to get X from 128.2.1.3
DHT Example: Chord
• Associate to each node and file a unique id in a one-dimensional space (a ring); see the hashing sketch below
– E.g., pick from the range [0...2^m]
– Usually the hash of the file or of the IP address
• Properties:
– Routing table size is O(log N), where N is the total number of nodes
– Guarantees that a file is found in O(log N) hops
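A sketch of the id assignment: hash a node address or file name and truncate to an m-bit identifier space. Here m = 8, the strings are examples, and the use of SHA-1 is an assumption about the hash function.

```python
# Map names into the m-bit Chord ring by hashing.
import hashlib

M = 8                                  # identifier space is [0 .. 2^m - 1]

def chord_id(key: str) -> int:
    """Hash a node address or file name into the m-bit ring."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

print(chord_id("128.2.1.3:4000"))      # node id from an IP:port
print(chord_id("some-file.mp3"))       # file id from a file name
```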
DHT: Consistent Hashing
[Figure: circular ID space with nodes N32, N90, N105 and keys K5, K20, K80; notation: “Key 5” is written K5, “Node 105” is written N105]
A key is stored at its successor: node with next higher ID
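A sketch of that successor rule with the node ids from the figure, assuming a 7-bit ring: sort the node ids and take the first one at or past the key, wrapping around.

```python
# Consistent hashing: each key is owned by the first node clockwise from it.
import bisect

RING = 2 ** 7                       # assumed 7-bit identifier space
nodes = sorted([32, 90, 105])       # node ids from the figure

def successor(key: int) -> int:
    """Node responsible for key: next node clockwise, with wrap-around."""
    i = bisect.bisect_left(nodes, key % RING)
    return nodes[i % len(nodes)]    # wrap to the smallest node id if needed

print(successor(5), successor(20), successor(80))   # 32 32 90
```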
DHT: Chord Basic Lookup
[Figure: nodes N10, N32, N60, N90, N105, N120 on the ring; N10 asks “Where is key 80?”, the query follows successor pointers around the ring, and the answer “N90 has K80” is returned]
DHT: Chord Finger Table
[Figure: node N80 with fingers spanning 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128 of the ring]
• Entry i in the finger table of node n is the first node that succeeds or equals n + 2^i
• In other words, the i-th finger points 1/2^(n−i) of the way around the ring (for n-bit identifiers)
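A sketch of building such a finger table; the node set and identifier size are made up, and successor() is the same rule as above.

```python
# Finger table of node n: entry i is successor(n + 2^i) on an m-bit ring.
import bisect

M = 7
RING = 2 ** M
nodes = sorted([10, 32, 60, 80, 90, 105, 120])   # example node ids

def successor(ident: int) -> int:
    i = bisect.bisect_left(nodes, ident % RING)
    return nodes[i % len(nodes)]

def finger_table(n: int):
    """List of (i, (n + 2^i) mod 2^m, finger node) for i = 0 .. m-1."""
    return [(i, (n + 2 ** i) % RING, successor(n + 2 ** i)) for i in range(M)]

for row in finger_table(80):
    print(row)      # (0, 81, 90), (1, 82, 90), ..., (6, 16, 32)
```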
DHT: Chord Join
• Assume an identifier space [0..7]
• Node n1 joins
Succ. table of n1:
  i   id+2^i   succ
  0   2        1
  1   3        1
  2   5        1
[Figure: ring with positions 0–7, only node 1 present]
DHT: Chord Join
• Node n2 joins
Succ. table of n1:
  i   id+2^i   succ
  0   2        2
  1   3        1
  2   5        1
Succ. table of n2:
  i   id+2^i   succ
  0   3        1
  1   4        1
  2   6        1
[Figure: ring with positions 0–7, nodes 1 and 2 present]
DHT: Chord Join
• Nodes n0, n6 join
Succ. table of n0:
  i   id+2^i   succ
  0   1        1
  1   2        2
  2   4        6
Succ. table of n1:
  i   id+2^i   succ
  0   2        2
  1   3        6
  2   5        6
Succ. table of n2:
  i   id+2^i   succ
  0   3        6
  1   4        6
  2   6        6
Succ. table of n6:
  i   id+2^i   succ
  0   7        0
  1   0        0
  2   2        2
[Figure: ring with positions 0–7, nodes 0, 1, 2, 6 present]
DHT: Chord Join
• Nodes: n1, n2, n0, n6
• Items: f7, f1
Succ. table of n0 (stores item f7):
  i   id+2^i   succ
  0   1        1
  1   2        2
  2   4        6
Succ. table of n1 (stores item f1):
  i   id+2^i   succ
  0   2        2
  1   3        6
  2   5        6
Succ. table of n2:
  i   id+2^i   succ
  0   3        6
  1   4        6
  2   6        6
Succ. table of n6:
  i   id+2^i   succ
  0   7        0
  1   0        0
  2   2        2
[Figure: ring with positions 0–7; item f7 is stored at n0 (its successor), item f1 at n1]
DHT: Chord Routing
• Upon receiving a query for item id, a node:
– checks whether it stores the item locally
– if not, forwards the query to the largest node in its successor table that does not exceed id
[Figure: the ring from the previous slide (nodes 0, 1, 2, 6 with their successor tables); a query(7) is forwarded node to node until it reaches node 0, which stores item 7]
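A sketch of this routing rule on the toy ring from the join slides (identifier space [0..7], nodes 0, 1, 2, 6, items 1 and 7); the starting node of the example query is arbitrary, and falling back to the immediate successor when no table entry precedes the item id is an assumption about the tie-break.

```python
# Route a query along successor tables until it reaches the node storing the item.
RING = 8
nodes = [0, 1, 2, 6]
items = {7: 0, 1: 1}            # item id -> node storing it (its successor)

def successor(x):
    """First node at or clockwise after position x on the ring."""
    return min(nodes, key=lambda m: (m - x) % RING)

def succ_table(n):
    """Successor-table nodes of n: successor(n + 2^i) for i = 0, 1, 2."""
    return [successor((n + 2 ** i) % RING) for i in range(3)]

def lookup(start, item_id):
    """Stop when the current node stores the item; otherwise forward to the
    farthest successor-table node that does not pass item_id (falling back
    to the immediate successor)."""
    n, path = start, [start]
    while items.get(item_id) != n:
        table = succ_table(n)
        in_range = [m for m in table if (m - n) % RING <= (item_id - n) % RING]
        n = max(in_range, key=lambda m: (m - n) % RING) if in_range else table[0]
        path.append(n)
    return path

print(lookup(1, 7))             # query(7) issued at node 1 -> [1, 6, 0]
```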
DHT
• Pros:
– Guaranteed Lookup
– O(log N) per node state and search scope
• Cons:
– No one uses them? (only one file sharing app)
– Supporting non-exact match search is hard
P2P Summary
• Many different styles; remember pros and cons of each
– centralized, flooding, swarming, unstructured and structured routing
• Lessons learned:
– Single points of failure are very bad
– Flooding messages to everyone is bad
– Underlying network topology is important
– Not all nodes are equal
– Need incentives to discourage freeloading
– Privacy and security are important
– Structure can provide theoretical bounds and guarantees