PPCast: A Peer-to-Peer Based Video Broadcast Solution
Presented by Shi Lu
Feb. 28, 2006
Outline
Introduction
Motivation
Related work
Challenges
Our contributions
The p2p streaming framework
Overview
Peer control overlay
Data transfer protocol
Peer local optimization
Topology optimization
The streaming framework: design and implementation
Receiving peer software components
The look-up service
Deployment
Experiments
Empirical experience in CUHK
Experiments on PlanetLab
Conclusion
Motivation
Video over Internet is pervasive today
New challenge: online TV broadcasting fails on the traditional client-server architecture
 At 500 Kbps per stream, the theoretical limit for a server with a 100 Mbps uplink is 200 concurrent users
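The arithmetic behind this limit is just the server uplink divided by the per-stream rate:
100 \text{ Mbps} / 500 \text{ Kbps} = 100000 \text{ Kbps} / 500 \text{ Kbps} = 200 \text{ concurrent streams}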
Is there any efficient way to support video
broadcasting to a large group of network users?
Why client/server fails
The traditional client/server solution is not scalable
Three bottlenecks:
 Server load
The server bandwidth is the major bottleneck
 Edge capacity
One connection per client
The connection may degrade
 End-to-end bandwidth
[Figure: a video streaming server connects directly to every client; server capacity is the major bottleneck, which raises the scalability question.]
Related work
Peer-to-peer file sharing systems
 BitTorrent, eMule, DC++
 Peers collaborate with each other
 Little or no need for dedicated resources
Why are they not suitable for video broadcasting?
 They have no in-bound rate requirement
 They have no real-time requirement
Related work
Content Distribution Networks (CDN)
 Install many dedicated servers at the edge of the Internet
 Requests are directed to the best server
 Very high cost of purchasing servers
Tree-based overlays
 CoopNet, NICE
 Rigid structure, not robust to node failures and network condition changes
Other mesh-based systems
 CoolStreaming, PPLive, PPStream
Challenges
Bandwidth
 In-bound data bandwidth no less than the video rate
 In-bound bandwidth should not have large fluctuation
Network dynamics
 Network bandwidth and latency may change
 Peer nodes may leave and join at any time
 Peer nodes may fail or shut down
Real-time requirement
 All media packets must be fetched before their playback deadlines
Goals
For each peer:
 Provide satisfactory in-bound bandwidth
 Distribute its traffic in a balanced and fair manner
For the whole network:
 Keep it in one piece while peers may fail or leave
 Keep the shape of the overlay from degrading
 Keep the radius small
Outline
Introduction
Motivation
Related work
Challenges
Our contributions
The p2p streaming framework
Overview
Peer control overlay
Data transfer protocol
Peer local optimization
Topology optimization
The streaming framework: design and implementation
Source peer
Receiving peer software components
The look-up service
Deployment
Experiments
Empirical experience in CUHK
Experiments on PlanetLab
Conclusion
P2P based solution: Overview
Collaboration between client peers
 The load on the source server is alleviated
 Better scalability
 Better reliability
[Figure: the source peer takes one stream out of the video streaming server; a new peer retrieves a peer list from the lookup service and finds neighbors among the participant peers.]
Infrastructure
Video source
 Windows Media Encoder, RealProducer, …
Source peer
 Takes content from the video streaming server and feeds it to the p2p network
 Wraps packets and adds sequence numbers (sketched below)
Look-up service (tracker)
 Tracks the peers viewing each channel
 Helps new peers join the broadcast
Participant peers (organized into a random graph)
 Schedule and dispatch the video packets
 Adapt to network conditions
 Optimize performance
 Support the local video player
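A minimal sketch of how the source peer might wrap raw media data into sequence-numbered packets. The class name and on-wire layout are illustrative assumptions, not the actual PPCast code:

    import java.nio.ByteBuffer;

    // Hypothetical wrapper: one media packet = header (sequence number,
    // payload length) followed by the raw bytes read from the encoder stream.
    public class MediaPacket {
        public final long seqNo;      // monotonically increasing sequence number
        public final byte[] payload;  // raw media data from the video streaming server

        public MediaPacket(long seqNo, byte[] payload) {
            this.seqNo = seqNo;
            this.payload = payload;
        }

        // Serialize into the on-wire form sent to neighbor peers.
        public byte[] toBytes() {
            ByteBuffer buf = ByteBuffer.allocate(8 + 4 + payload.length);
            buf.putLong(seqNo);
            buf.putInt(payload.length);
            buf.put(payload);
            return buf.array();
        }
    }

The source peer would read fixed-size chunks from the encoder output (the experiments use 32 KB packets) and assign consecutive sequence numbers.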
Peer join
Peer join steps:
 (1) Retrieve a peer list from the lookup service
 (2) Find neighbors and establish connections
 (3) Register its own service with the lookup service
Neighborhood selection criteria:
 Random
 IP matching
 History info
 Peer depth
 Peer performance
[Figure: a new peer retrieves a peer list from the lookup service, connects to several participant peers, and registers its own service.]
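A minimal sketch of the three-step join, assuming hypothetical LookupClient and Transport interfaces rather than the real PPCast classes:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical interfaces standing in for the real lookup and transport code.
    interface LookupClient {
        List<String> getPeerList(String channelId);        // step (1)
        void register(String channelId, String selfAddr);  // step (3)
    }

    interface Transport {
        boolean connect(String peerAddr);                   // step (2), true on success
    }

    public class PeerJoin {
        private final List<String> neighbors = new ArrayList<>();

        public void join(LookupClient lookup, Transport transport,
                         String channelId, String selfAddr) {
            // (1) Retrieve a peer list from the lookup service.
            for (String candidate : lookup.getPeerList(channelId)) {
                // (2) Try to establish a connection; keep the ones that succeed.
                if (transport.connect(candidate)) {
                    neighbors.add(candidate);
                }
            }
            // (3) Register this peer's own service so later joiners can find it.
            lookup.register(channelId, selfAddr);
        }
    }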
The connection pool
“No one is reliable”
 A peer tries its best for better performance
 A peer maintains a pool of active connections to other peers
 A peer keeps trying new peers while the target incoming bandwidth is not reached
 After that, it keeps trying new connections, but at a slower pace
 It updates its peer list from the lookup service
 Other peers may also establish new connections to it
[Figure: a participant peer maintains a connection pool of neighbors; it receives an updated peer list from the lookup service and sends connection requests to new neighbors.]
Connection pool maintenance
For each connection, define a connection utility from:
 Recent bandwidth (I/O)
 Recent latency
 Peer depth (distance to source)
 Peer's recent progress
When there are more connections than needed
 Drop several bad connections, taking the out-rate bound and peer depth into account
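A minimal sketch of utility-based pool maintenance. The weights and the ConnectionStats fields are illustrative assumptions; the slides list the factors but not the formula:

    import java.util.Comparator;
    import java.util.List;

    // Illustrative per-connection measurements kept by the connection pool.
    class ConnectionStats {
        double recentBandwidth;  // recent I/O bandwidth on this connection
        double recentLatencyMs;  // recent request-to-delivery latency
        int    peerDepth;        // neighbor's distance to the source
        double recentProgress;   // how far the neighbor's fetching progress has advanced
    }

    public class PoolMaintenance {
        // Hypothetical utility: reward bandwidth and progress, penalize latency and depth.
        static double utility(ConnectionStats s) {
            return s.recentBandwidth + 100.0 * s.recentProgress
                 - 0.5 * s.recentLatencyMs - 50.0 * s.peerDepth;
        }

        // When the pool holds more connections than needed, drop the worst ones.
        static void prune(List<ConnectionStats> pool, int maxConnections) {
            pool.sort(Comparator.comparingDouble(PoolMaintenance::utility));
            while (pool.size() > maxConnections) {
                pool.remove(0);  // lowest-utility connection goes first
            }
        }
    }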
The peer control overlay
Random-graph shape
 Evolving with time
 Radius: will not degrade, since every peer tries to minimize its own depth
 Integrity: the overlay will not be broken into pieces
Data transfer
 The data transfer path is determined from the control
overlay
 Each data packet is transferred along a tree
 Determined just-in-time from the control overlay
Data transfer protocol
Receiver driven
 While exchanging data, data availability information is also exchanged
 The data receiver determines which block to fetch from which neighbor
Driven by data distribution information
 Each peer knows where the missing data is
 Peers issue data requests to their neighbors
Peer synchronization
 Content fetching progress
Data transfer protocol
Data status bitmap (DSB)
 Describes the data packet availability information
 <start offset + 1110011000……> (1 = available, 0 = absent)
 Foreign data status: <available, non-available>
Each peer holds a connection to each of its neighbors
 The DSBs of the neighbors are embedded with the connections
 The DSBs of the neighbors are frequently updated
[Figure: each neighbor's DSB is kept alongside its connection in the participant peer's connection pool.]
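A minimal sketch of a data status bitmap following the <start offset + bitmap> description above; the field names and the window-advance policy are assumptions:

    import java.util.BitSet;

    // Data status bitmap: availability of packets starting at startOffset.
    // Bit i set means packet (startOffset + i) is available locally.
    public class DataStatusBitmap {
        private long startOffset;           // sequence number of the first tracked packet
        private final BitSet bits = new BitSet();

        public DataStatusBitmap(long startOffset) {
            this.startOffset = startOffset;
        }

        public void markAvailable(long seqNo) {
            if (seqNo >= startOffset) {
                bits.set((int) (seqNo - startOffset));
            }
        }

        public boolean isAvailable(long seqNo) {
            return seqNo >= startOffset && bits.get((int) (seqNo - startOffset));
        }

        // Slide the window past packets that have already been played back,
        // so the bitmap exchanged with neighbors stays small (policy assumed).
        public void advanceTo(long newStart) {
            if (newStart <= startOffset) return;
            int shift = (int) Math.min(newStart - startOffset, bits.length());
            BitSet remaining = bits.get(shift, bits.length());
            bits.clear();
            bits.or(remaining);
            startOffset = newStart;
        }
    }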
Data transfer protocol
Data transfer load scheduling
Several factors:
 Latency
 Bandwidth
 Data availability
Connection status (busy, standby)
 Data request issued (-> busy)
 Data arrived (-> standby)
Scheduling time: on data arrival
 Get a packet that has recently become available
 Estimate the latency
 Check that the playback deadline is not before the expected latency
[Figure: the participant peer schedules requests over the neighbor connections in its connection pool.]
Data transfer protocol
Data transfer load scheduling
Packet local status
 <available, request-not-issued, request-issued>
Critical packets
 Data packets that are still unavailable
 and whose remaining time before the playback deadline is less than t
When the in-bound bandwidth is OK
 Check for critical packets
 Issue their requests to the fastest link
When the in-bound bandwidth is not OK
 Do not check for critical packets
Data transfer protocol
Connection status
 busy <==> standby
 data request issued <==> data arrived
When data arrives:
 Measure the latency of the previous packet
 Compute a weighted delay estimate
 Find a packet whose playback deadline is OK for that delay
 Issue the request (the connection becomes busy)
[Figure: connections in the pool alternate between busy and standby; critical packets are marked in the local buffer.]
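A minimal sketch of the on-arrival scheduling loop described above. The smoothing factor, the data structures, and the method names are illustrative assumptions, not the measured PPCast parameters:

    import java.util.List;

    // Hypothetical view of one neighbor connection as seen by the scheduler.
    class NeighborLink {
        double smoothedDelayMs = 200.0;  // weighted delay estimate
        boolean busy = false;            // busy: request issued, standby: data arrived

        // Called when a requested packet arrives on this link.
        void onDataArrived(double measuredLatencyMs) {
            // Exponentially weighted delay (smoothing factor assumed).
            smoothedDelayMs = 0.7 * smoothedDelayMs + 0.3 * measuredLatencyMs;
            busy = false;
        }
    }

    // A packet this peer still needs, together with its playback deadline.
    class MissingPacket {
        long seqNo;
        long deadlineMs;    // absolute playback deadline
        boolean requested;  // request-issued vs. request-not-issued
    }

    public class Scheduler {
        // On data arrival: pick the next packet whose deadline this link can still meet.
        static void schedule(NeighborLink link, List<MissingPacket> missing, long nowMs) {
            for (MissingPacket p : missing) {
                boolean deadlineOk = p.deadlineMs >= nowMs + link.smoothedDelayMs;
                if (!p.requested && deadlineOk) {
                    p.requested = true;
                    link.busy = true;  // issue the request; the connection becomes busy
                    // the actual network send to the neighbor is omitted here
                    return;
                }
            }
        }
    }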
Data transfer protocol
Multicast tree
 Each data packet does not pass through a peer twice
 For each data packet, a multicast tree is constructed
 The tree is built just-in-time, to adapt to the transient properties of the network links
 Different data packets may follow different trees
[Figure: the same overview diagram as before; each packet effectively traces its own distribution tree from the source peer over the participant peers.]
Neighborhood management
Performance monitoring:
 Measured once every interval (e.g. 10 sec.)
 Avg. in-bound data rate
 Avg. out-bound data rate
 Avg. data packet latency
Neighborhood goodness
Neighborhood number
 Lower bound of Nb
 Upper bound of Nb
Peer control protocol
Data and performance
While the in-bound data rate is not enough, or the neighbor number is lower than the lower bound
 Establish new connections
While the neighbor number is higher than the upper bound
 Discard the worst connection
 The benefit is two-fold: both sides release a bad connection
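A minimal sketch of this periodic control rule; the bounds, the target rate, and the PeerState interface are illustrative assumptions:

    // Hypothetical interface for the peer's local measurements and actions.
    interface PeerState {
        double avgInboundRate();          // measured over the last interval
        int neighborCount();
        void establishNewConnection();
        void discardWorstConnection();
    }

    public class NeighborMaintenance {
        static final int LOWER_BOUND = 4;         // assumed lower bound on neighbor count
        static final int UPPER_BOUND = 12;        // assumed upper bound on neighbor count
        static final double TARGET_RATE = 450e3;  // bits/sec, the stream rate used in the experiments

        // Invoked once per monitoring interval (e.g. every 10 seconds).
        static void maintain(PeerState peer) {
            if (peer.avgInboundRate() < TARGET_RATE || peer.neighborCount() < LOWER_BOUND) {
                peer.establishNewConnection();   // grow the pool
            }
            if (peer.neighborCount() > UPPER_BOUND) {
                peer.discardWorstConnection();   // both sides drop a bad connection
            }
        }
    }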
Local media player support
When the incoming rate is satisfactory
 The media buffer size determines how much data is held
 Enough buffered data keeps playback smooth
When a packet is not ready
 Skip it and continue
 Lose some quality
 How gracefully this degrades is up to the video codec
Data smoothness
 For one peer, over an interval
 The ratio of packets that can be fetched before their playback deadlines
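Written as a formula (the notation is ours; the slides define smoothness only in words):
smoothness_i(T) = \frac{\#\{\text{packets fetched before their deadlines during } T\}}{\#\{\text{packets with deadlines in } T\}}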
Overlay integrity
Peers may leave or fail
Normal leave: notify the neighbors and the look-up service
Abnormal leave: others will find out through time-outs
 If one neighbor quits, a peer can still get content from the other connections in the connection pool
 It establishes new connections if the incoming bandwidth is less than expected
 The buffer size helps absorb the interruption
Overlay integrity
Peers may leave or fail
The overlay shall not be broken into pieces
Maintain the connectivity of the overlay
Solution: peers try to connect to neighbors with lower depth
Since each peer tries to lower its depth, the probability that (articulation) critical points occur becomes small
depth_i = \min_{j \in neighbor_i} depth_j + 1
[Figure: the overlay rooted at the source peer (depth = 0); peers connect to inner, lower-depth peers, which removes potential articulation points.]
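A minimal sketch of the depth rule above, assuming neighbor depths are exchanged over the existing connections:

    import java.util.Collection;

    public class DepthRule {
        // depth_i = min over neighbors j of (depth_j + 1); the source peer has depth 0.
        static int updateDepth(Collection<Integer> neighborDepths) {
            int min = Integer.MAX_VALUE;
            for (int d : neighborDepths) {
                min = Math.min(min, d);
            }
            return (min == Integer.MAX_VALUE) ? Integer.MAX_VALUE : min + 1;
        }

        // "Connect to inner peers": prefer a candidate with lower depth than the
        // current worst neighbor, which keeps the radius small and makes
        // articulation points less likely.
        static boolean preferable(int candidateDepth, int worstNeighborDepth) {
            return candidateDepth < worstNeighborDepth;
        }
    }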
Overlay integrity
Playback progress difference:
 A higher depth difference leads to a higher playback progress difference
 The attempt to reduce the local peer depth may also reduce that progress difference
Behind-NAT problem
Some peers run behind NAT (Network Address Translation)
 Other peers may not be able to actively connect to them
 Port mapping helps, but needs to be set up by the router admin
Behind-NAT peers need some restrictions
 Can only connect to peers whose depth is higher
 Need to actively connect to other peers
 Actively help others once their own performance is OK
Open question: direct communication between behind-NAT peers?
[Figure: a gateway with public address 137.189.4.4 translating for several private 192.168.90.x hosts.]
Outline
Introduction
Motivation
Related work
Challenges
Our contributions
The p2p streaming framework
Overview
Peer control overlay
Data transfer protocol
Peer local optimization
Topology optimization
The streaming framework: design and implementation
Receiving peer software components
The look-up service
Deployment
Experiments
Empirical experience in CUHK
Experiments on PlanetLab
Conclusion
Receiving peer software components
[Figure: block diagram of one receiving peer, showing connection requests from other peers, new connections being established, and heart-beat messages to the tracker]
Components:
 Connection Manager
 Connection server
 Status reporter
 Packet Scheduler
 Performance Monitor
 Local packet I/O Manager
 Local streaming server
 Local media player
The look-up service
Centralized server
Register new video channels
Register new peers
Receive peer reports from time to time
Provide peer list to new peers
Deployment
LAN deployment
 Pure Java-based software
 Video broadcast source (Windows Media Encoder)
 Client software on each participant machine
 Look-up service
 Source peer
Experiment
 Successfully deployed in the CSE department LAN
 PlanetLab experiment
Benefits
Low cost:
 Without any extra hardware expenditure
 Pure software solution
Scalable: can support a theoretically unlimited number of users
Reliable and resilient to user join/leave
 Multiple connections
 Intelligent content scheduling between peers
 Adapts quickly to changes in network conditions
Solution for:
 Low-cost, large-scale live broadcast
Limitations
Upstream capability
 Peers need some upstream capacity to support each other
 ADSL users are a problem (limited upload capacity)
 LAN users are just fine
 Solution: a higher ratio of LAN users
Backbone stress
 The traffic may put stress on backbone links
 Example
Source in China
10,000 peer connections from China to the US
The backbone needs to support 10,000 connections
 Solution: traffic localization
1 stream from China to the US
US peers support each other
Outline
Introduction
Motivation
Related work
Challenges
Our contributions
The p2p streaming framework
Overview
Peer control overlay
Data transfer protocol
Peer local optimization
Topology optimization
The streaming framework: design and implementation
Receiving peer software components
The look-up service
Deployment
Experiments
Empirical experience in CUHK
Experiments on PlanetLab
Conclusion
Experiments
PlanetLab
 300+ nodes deployed worldwide
Performance test
 450 kb/s streaming
 Data packets: 32 KB each
 Static performance
Start-up latency
Data smoothness
 Dynamic experiment
Peer sojourn time is exponentially distributed (all peers are unstable)
Overlay size: 50, 80, 120, 150, 180, ……
Experiments
Data smoothness
 The ratio of the data packets that can be fetched before their playback deadlines
 Determines the quality of the video played on each client peer
Influencing factors
 Peer buffer size
 Peer stability
Static performance
[Figure: the impact of overlay size and peer buffer size]
Dynamic performance
[Figure: the impact of peer stability]
Peer sojourn times are exponentially distributed
The larger the mean sojourn time, the more stable the peers are
Observations
More users, better performance
Resilient in dynamic network environments
Robust to peer join/leave
More stable peers, better performance
A bigger buffer improves performance
Conclusion
In this presentation, we have:
 Introduced the challenges in p2p streaming systems
 Defined several goals that a p2p system must meet to support real-time video broadcast
 Proposed a solution for a large-scale p2p video streaming service
 Discussed the design, implementation, and experiments of the system
Future work
 Distributed look-up service
 Content copyright protection (Identity authentication, data
encryption)
 Overlay topology control (Traffic localization)
 Possible VOD system based on this infrastructure
 Deal with firewalls
Q&A
Thank you!