Prof. Jean Walrand, EE228a “Communication Networks”, Class Project Fall 2001
Video Streaming over DiffServ Networks
Wei Wei, graduate student, EE, UC Berkeley
Email: [email protected]
Abstract:
In this project, we explore techniques for real-time video transmission over DiffServ networks. We first introduce a general framework for video streaming in DiffServ networks, in which a video stream is classified into different categories according to its loss and delay sensitivity, and the categories are then mapped to different DS level classes of the DiffServ network. The packet forwarding scheme and buffer management of the relative DiffServ scheme are applied in the framework.
Further, we propose a scheme that allows a video streaming application to adapt to the time-varying conditions of the network. We also propose a loose end-to-end admission control scheme for video streaming applications in DiffServ networks that maintains application performance without sacrificing the scalability and flexibility of DiffServ.
1. Introduction
The current Internet provides only Best-Effort service, with no specific performance
guarantees for individual applications. Over the past several years, there has been an
increasing demand for streaming multimedia applications over the Internet. Because many
streaming applications need guarantees on their performance, the current Best-Effort
Internet cannot meet their demands. How to provide quality of service (QoS) to different
applications has therefore been a focus of research in the past few years. The IETF
(Internet Engineering Task Force) has proposed two approaches: integrated services
(IntServ) and differentiated services (DiffServ).
The IntServ approach [1] focuses on providing QoS for individual packet flows. In this
approach, each flow can request specific levels of service from the network, typically
quantified as a maximum tolerable end-to-end delay or loss rate. The network grants or
rejects flow requests based on the availability of resources. However, the IntServ
approach has two main problems: scalability and manageability. Because IntServ requires
routers to maintain control and forwarding state for all flows passing through them, it is
very difficult, if not impossible, to implement RSVP (Resource Reservation Protocol) and
IntServ in wide-area networks. Moreover, the IntServ architecture makes the management
and accounting of IP networks significantly more complicated.
The DiffServ approach [2][3] was proposed more recently than the IntServ approach.
Its main goal is to provide a more scalable and manageable architecture for service
differentiation in IP networks. The DiffServ architecture offers a framework within which
a service provider can offer each customer a range of network services differentiated on
the basis of performance. The initial premise was that, rather than focusing on individual
packet flows, the goal can be achieved by focusing on traffic aggregates: large sets of
flows with similar service requirements.
How to build fine-grained DiffServ-aware video applications is still an open issue. In
this project, we first introduce a general framework [4] for video streaming in DiffServ
networks, in which one video stream is decomposed into multiple substreams of different
importance and each substream is mapped to a DS level class. We then propose a scheme
to stream video adaptively when the available bandwidth varies. We also propose an
end-to-end admission control scheme for video streaming applications in DiffServ
networks that maintains application performance without sacrificing the scalability and
flexibility of DiffServ.
The rest of this report is organized as follows. Section 2 briefly introduces background
on DiffServ and the properties of video streaming applications. The framework for video
streaming in DiffServ is described in Section 3. Section 4 proposes the adaptive video
streaming scheme, and Section 5 proposes the end-to-end admission control scheme. The
report is concluded in Section 6.
2. Background
2.1 A brief introduction to DiffServ
The approach adopted by the DiffServ model is to classify individual microflows at the
edge of the network into one of several service classes, and then apply a per-class
forwarding service at each router hop in the middle of the network. The classification
occurs at the network ingress based on an analysis of one or more fields in the packet.
The packet is then marked as belonging to a particular service class and injected into the
network. The core routers that forward the packet examine the code point in the packet
header to determine how the packet should be handled, without maintaining per-flow state.
There are two directions in DiffServ research. One is absolute service differentiation,
which tries to meet the same goals as IntServ, but without per-flow state in the backbone
routers and with only semi-static resource reservations instead of a dynamic resource
reservation protocol. Stoica et al. propose a stateless-core architecture that uses QoS
parameters carried in the packet header to provide fair queuing [5] or guaranteed delay
[6]. The other direction is relative service differentiation [7][8], which provides
assurances for the relative quality ordering between classes, rather than for the actual
service level in each class. In relative DiffServ networks, the traffic is grouped into N
service classes, ordered by their packet forwarding quality:
Class i is better than class (i-1) for 1 < i <= N, in terms of per-hop behavior for queuing
delays and packet losses.
In the absolute DiffServ model, an admitted user is assured of its requested performance
level, somewhat as in IntServ. But to guarantee sufficient end-to-end QoS, a simple
requirement is that the per-hop behavior at each node satisfies the end-to-end
requirement, so admission control and route pinning are necessary for absolute service
differentiation. This adds complexity and can also cause scalability problems. Another
disadvantage of the absolute DiffServ model is that it is less flexible than the relative
DiffServ model: an application is rejected if the required resources are not available and
the network cannot provide the requested assurance, even if the application could obtain
acceptable performance with the currently available resources.
In the relative DiffServ model, on the other hand, the only assurance from the network
is that a higher class will receive better service than a lower class. The amount of service
received by a class, and the resulting quality of service perceived by an application,
depend on the current network load in each class. Applications are expected to adapt
their needs based on the observed performance level in their class. The relative DiffServ
model is more flexible than the absolute one and can accommodate more applications in
the network; it is also easier to deploy and more scalable. One problem with relative
DiffServ is that when the network load is very heavy, applications cannot obtain enough
quality of service no matter how they adapt to the varying network resources. We
therefore propose a loose end-to-end admission control scheme to alleviate this problem
without sacrificing the flexibility and scalability of relative DiffServ.
2.2 Characteristics of video streaming applications
In this part, we briefly introduce several characteristics of video streams that will help
us understand how to transmit video more efficiently over the network.
a) Variable bit rate:
Generally, video streams have significant bit rate variability [9], depending on the
encoding system used, and require high network bandwidth. To obtain performance
guarantees when transmitting a video stream over an absolute DiffServ network, the
network has to allocate a fixed-bandwidth channel equal to the video's peak rate, which
wastes a large amount of bandwidth.
b) Unequal QoS requirements for different packets:
Different packets in a video stream have different QoS requirements. This property
allows us to use multiple traffic classes to transmit video streams in DiffServ networks:
video packets can be classified at the sender according to their delay and loss
sensitivities, and then mapped to different DiffServ classes. There are several approaches
to classifying a video stream. One is scalable coding [10], which encodes the video
stream into layers of different importance. Another is to split an MPEG/H.263 video
stream directly into substreams with different delay and loss sensitivities [4][11].
Moreover, most video streaming applications tolerate occasional delay/loss violations
and do not require tight delay/loss bounds; an application can still achieve acceptable
performance when some unimportant packets are lost. These two characteristics suggest
that the relative DiffServ model is a better choice for most video streaming applications.
c) Relatively long duration:
Most video streaming applications last a relatively long time. This makes our proposed
end-to-end admission control scheme, which needs some pre-play buffer delay,
reasonable.
3. A general framework for video streaming in DiffServ Networks[4][11]
Figure 1 shows a diagram of the general framework for video streaming in DiffServ
networks. The relative DiffServ architecture is used, and service differentiation is
expressed in terms of the loss/delay associated with the forwarding queues. Each video
flow of a user application is classified by its loss/delay preference, and each packet is
assigned a Relative Priority Index (RPI). The RPI-tagged packets are categorized into
intermediate DS categories in a fine-grained manner. The pre-marked (i.e.,
RPI-categorized) packets are then conveyed to a DiffServ-aware node, which may be
located at the end user itself or at a DiffServ boundary node, for QoS mapping to the DS
levels of DiffServ. The criterion for the QoS mapping of a video application is to
maximize end-to-end video quality under a given cost constraint. At the DiffServ
boundary node, the packets are classified, conditioned, and re-marked to certain network
DS levels according to the traffic profile based on the SLA and the current network
status. Finally, the packets are forwarded toward the destination through a packet
forwarding mechanism composed mainly of queue management and a scheduling scheme.
[Figure 1 (diagram): end users connect through access networks to DiffServ domains A
and B, which are linked under an SLA. Boundary nodes (BN) perform classification,
traffic conditioning, and queue management and scheduling; interior nodes (IN) perform
queue management and scheduling without traffic conditioning. IN: Interior Node;
BN: Boundary Node; SLA: Service Level Agreement.]
Figure 1. Framework for video streaming over DiffServ Networks
3.1 Video packet categorization
In a video streaming application, the different delay demands are connected with the
layered coding of video compression. For example, the I, P, and B frames of MPEG have
different demands on delay as well as loss; in fact, I and P frames can tolerate extra delay
corresponding to one video frame compared with B frames [11].
One method to categorize video packets according to loss sensitivity is proposed in [4],
in which several video factors are taken into account when computing the RLI (Relative
Loss Index) of each packet. First, the magnitude and direction of the motion vector of
each macroblock (MB) are included, to reflect the loss strength and the temporal loss
propagation due to motion. Then, encoding types (intra, intra-refreshed, inter, etc.) are
considered; the refreshing effect is captured by counting the number of intra-coded MBs
in the packet. Lastly, the initial error due to packet loss is considered. The RLI of a packet
combines these video factors by summing the normalized video factors with appropriate
weights:

RLI_i = \sum_{n=1}^{NVF} W^n (VF_i^n / m_i^n)    (1)
where NVF is the number of video factors considered, W^n the corresponding weight
factor, VF_i^n the magnitude of video factor n for the i-th packet, and m_i^n the
sampling mean of VF^n for the i-th packet.
Finally, the RLIs are quantized into K DS categories to enable mapping to a limited
number of network DS levels.
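As a minimal sketch of this step, the following Python computes the per-packet RLI of Eq. (1) and then quantizes the RLIs into K categories. The equal-population (rank-based) quantization rule is our own assumption; [4] does not fix a particular bucketing scheme here.

```python
def rli(video_factors, weights, factor_means):
    """Relative Loss Index of one packet (Eq. 1): a weighted sum of its
    normalized video factors. All three arguments are length-NVF sequences."""
    return sum(w * vf / m for w, vf, m in zip(weights, video_factors, factor_means))

def categorize(rli_values, k):
    """Rank-quantize per-packet RLIs into k DS categories (0 = most important),
    with roughly equal numbers of packets per category (an assumed rule)."""
    order = sorted(range(len(rli_values)), key=lambda i: rli_values[i], reverse=True)
    per_cat = max(1, -(-len(rli_values) // k))  # ceiling division
    cats = [0] * len(rli_values)
    for rank, i in enumerate(order):
        cats[i] = min(rank // per_cat, k - 1)
    return cats
```

A packet with a large motion-vector factor relative to its sampling mean thus gets a high RLI and lands in a low (more protected) category index.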
Another, more direct, method to categorize a video stream is scalable coding, in which
the video stream is encoded into different layers, each contributing differently to the
quality of the stream. We can then map the layers that contribute more to the video
quality to higher DS level classes, and the layers that contribute less to lower ones.
3.2 Mapping video categories to DS levels of DiffServ
QoS mapping of the relatively prioritized packets to DS levels can be formulated as an
optimization problem. Each packet of video category k, mapped to a certain network DS
level q(k) \in {0, 1, ..., Q-1}, experiences an average packet loss rate l_{q(k)} at unit
price p_{q(k)}. Achieving the best end-to-end quality while satisfying the total cost
constraint P can be formulated as minimizing the quality degradation QD:

min_{q_k} QD = min_{q_k} \sum_{k=0}^{K-1} QD_k = min_{q_k} \sum_{k=0}^{K-1} RLI_k \cdot l_{q(k)} \cdot n_k    (2)

subject to

\sum_{k=0}^{K-1} p_{q(k)} \cdot n_k \le P    for    \sum_{k=0}^{K-1} n_k = N
where RLI_k is the average loss effect of a packet belonging to category k, l_{q(k)} is the
average loss rate of category k, n_k is the number of packets in category k, and p_{q(k)}
is the unit price paid by category k. A QoS mapping is denoted by
q_k = {q(0), q(1), ..., q(K-1)}, where q(k) is the DS level to which category k is mapped.
This constrained optimization problem can be solved by finding the QoS mapping q_k^*
that minimizes the Lagrangian

J_k(\lambda) = [RLI_k \cdot l_{q(k)} + \lambda \cdot p_{q(k)}] \cdot n_k    (3)
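For small K and Q, the constrained problem of Eq. (2) can also be solved by direct enumeration, which makes the objective and constraint concrete. The sketch below is our own illustration (not the Lagrangian method of [4]); all parameter names are ours.

```python
from itertools import product

def best_mapping(rli, n, loss, price, budget):
    """Exhaustive QoS mapping for small K categories and Q DS levels (Eq. 2).
    rli[k], n[k]: average loss index and packet count of category k;
    loss[q], price[q]: loss rate and unit price of DS level q.
    Returns the mapping (q(0), ..., q(K-1)) minimizing total quality
    degradation subject to the total price budget P."""
    K, Q = len(rli), len(loss)
    best, best_qd = None, float("inf")
    for qk in product(range(Q), repeat=K):
        cost = sum(price[q] * n[k] for k, q in enumerate(qk))
        if cost > budget:               # violates the price constraint
            continue
        qd = sum(rli[k] * loss[q] * n[k] for k, q in enumerate(qk))
        if qd < best_qd:
            best, best_qd = qk, qd
    return best, best_qd
```

With two categories and two DS levels, the search correctly assigns the high-RLI category to the low-loss (expensive) level when the budget allows it.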
3.3 Relative DiffServ scheme in the framework
There are many possible implementations of a relative DiffServ scheme, e.g., a RED
with In and Out packets (RIO) mechanism, or a pushout mechanism. In this framework,
the proportional DiffServ model [8] is chosen for its ability to control the relative QoS of
different traffic classes.
The proportional DiffServ model states that certain class performance metrics should
be proportional to the differentiation parameters of the classes. Let q_i(t, t+\tau) be the
observed QoS, such as packet loss rate or queuing delay, of traffic class i during the time
interval (t, t+\tau), where \tau > 0 is the length of the observation period. The model
states that for any two traffic classes i and j:

q_i(t, t+\tau) / q_j(t, t+\tau) = c_i / c_j    (4)

where c_i and c_j are the specified quality parameters of class i and class j, respectively.
To achieve loss differentiation, a proportional loss-rate dropper is used. When a queue
is full and a new packet arrives, one packet is dropped from the class with the lowest
normalized loss rate:

k = arg min_i  l_i(t, t+\tau) / \sigma_i    (5)

where l_i and \sigma_i are the observed loss rate and the loss quality parameter of class i.
To achieve delay differentiation, a waiting time priority (WTP) scheduler is used. The
packet p^* with the largest normalized waiting time among the set Q of all queued
packets is chosen for transmission:

p^* = arg max_{p \in Q}  w(p) / \delta(p)    (6)

where w(p) is the time packet p has spent in the queue, and \delta(p) is the delay quality
parameter of the traffic class to which p belongs.
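The two mechanisms of Eqs. (5) and (6) can be sketched together in a few lines of Python. The loss-rate bookkeeping (counting drops against arrivals) is our own simplification of the windowed estimate in the equations; since queues are FIFO, comparing only head-of-line packets suffices for the WTP rule.

```python
class ProportionalScheduler:
    """Sketch of a proportional loss-rate dropper (Eq. 5) and a
    waiting-time-priority scheduler (Eq. 6) over per-class FIFO queues."""

    def __init__(self, sigma, delta):
        self.sigma = sigma                    # loss quality parameter per class
        self.delta = delta                    # delay quality parameter per class
        self.dropped = [0] * len(sigma)
        self.arrived = [0] * len(sigma)
        self.queues = [[] for _ in sigma]     # per class: (arrival_time, packet)

    def enqueue(self, cls, now, pkt):
        self.arrived[cls] += 1
        self.queues[cls].append((now, pkt))

    def drop_class(self):
        """Eq. (5): the class with the minimum normalized loss rate."""
        def norm_loss(i):
            rate = self.dropped[i] / self.arrived[i] if self.arrived[i] else 0.0
            return rate / self.sigma[i]
        return min(range(len(self.queues)), key=norm_loss)

    def next_packet(self, now):
        """Eq. (6): serve the head-of-line packet with the maximum
        normalized waiting time w(p) / delta of its class."""
        heads = [i for i, q in enumerate(self.queues) if q]
        if not heads:
            return None
        i = max(heads, key=lambda i: (now - self.queues[i][0][0]) / self.delta[i])
        return self.queues[i].pop(0)[1]
```

A class with a large delta waits longer before being served, which is exactly the relative ordering the proportional model promises.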
4. Streaming video adaptively in DiffServ Networks
In the relative DiffServ model, the only assurance from the network is that a higher
class will receive better service than a lower class; in other words, the network cannot
guarantee the bandwidth available to applications. Thus, a scheme that allows a video
streaming application to adapt to the time-varying conditions of the network is necessary
for transmitting video successfully over DiffServ networks. Although quite a few
researchers are working on video streaming over DiffServ, how best to transmit video in
DiffServ networks is still an open research issue.
In this project, we propose a scheme, under the framework of Section 3, to transmit
video adaptively in DiffServ. First, the available bandwidth of each class for each
application is estimated at the end user and fed back to the sender. The sender then
adjusts its sending rate in each class according to the current network condition.
4.1 Available bandwidth estimation
We use the TCP throughput model proposed in [12] to estimate the available
bandwidth of a service class. The bandwidth B_i of class i can be expressed as

B_i = MSS / ( RTT_i \sqrt{2 p_i / 3} + t_{RTO_i} \min(1, 3 \sqrt{3 p_i / 8}) p_i (1 + 32 p_i^2) )    (7)

where MSS, the maximum segment size, is the number of bytes by which the TCP
sending window increases per RTT when there is no loss; RTT_i is the round-trip time
between sender and receiver in class i; p_i is the loss rate of class i; and t_{RTO_i} is the
TCP retransmission timeout value.
We can use this equation to estimate the available bandwidth of each DS level class for
the application because the network can be viewed as divided into several virtual pipes,
in each of which every UDP video application shares the bandwidth fairly. The TCP
throughput model thus estimates the fair bandwidth share of each video application in
each DS level class.
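Eq. (7) translates directly into code. The sketch below assumes MSS in bytes and RTT and t_RTO in seconds, and returns an unbounded estimate when no loss has been observed (a boundary case the model itself leaves open).

```python
from math import sqrt

def tcp_friendly_bandwidth(mss, rtt, p, t_rto):
    """TCP throughput model of [12] (Eq. 7), used to estimate the fair
    share of a UDP video flow inside one DS level class.
    mss in bytes, rtt and t_rto in seconds, p = observed loss rate."""
    if p <= 0:
        return float("inf")   # no observed loss: the model gives no bound
    denom = (rtt * sqrt(2 * p / 3)
             + t_rto * min(1.0, 3 * sqrt(3 * p / 8)) * p * (1 + 32 * p * p))
    return mss / denom        # bytes per second
```

As expected, the estimate decreases as the class loss rate p_i grows, so a sender using this feedback naturally backs off in congested classes.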
4.2 Adaptive video transmission in DiffServ
The video stream is first classified into k substreams (s_1, s_2, ..., s_k), ordered by
decreasing importance, according to their delay/loss sensitivities. The bit rate of each
substream can be precomputed and stored at the sender as R_1, R_2, ..., R_k. After the
available bandwidths B_1, B_2, ..., B_q of classes C_1, C_2, ..., C_q are estimated from
the feedback information, the problem is how to map the k substreams to the q DS levels
with the lowest distortion for the video application under the bandwidth constraints. We
assume that if i < j, the service level of Class i is higher than that of Class j, and that the
video stream is finely scalably encoded, so that k >= q.
The mapping process is as follows.
Step 1: find the number m_1 of substreams to be mapped to Class 1:

m_1 = arg max_m ( \sum_{i=1}^{m} R_i \le B_1 )    (8)

Then map substreams 1 to m_1 to Class 1, and compute a mapping ratio for substream
m_1+1:

p_{m_1} = ( B_1 - \sum_{i=1}^{m_1} R_i ) / R_{m_1+1}    (9)

Substream m_1+1 is mapped to Class 1 with probability p_{m_1}, and to Class 2 with
probability (1 - p_{m_1}).
...
Step n: find the number m_n of substreams to be mapped to Class n:

m_n = arg max_m ( (1 - p_{m_{n-1}}) R_{m_{n-1}+1} + \sum_{i=m_{n-1}+2}^{m} R_i \le B_n )    (10)

Then map the remaining part of substream m_{n-1}+1 and substreams m_{n-1}+2 to m_n
to Class n, and compute a mapping ratio for substream m_n+1:

p_{m_n} = ( B_n - (1 - p_{m_{n-1}}) R_{m_{n-1}+1} - \sum_{i=m_{n-1}+2}^{m_n} R_i ) / R_{m_n+1}    (11)

Substream m_n+1 is mapped to Class n with probability p_{m_n}, and to Class n+1 with
probability (1 - p_{m_n}).
The process continues until the available bandwidth of every class has been fully
allocated; any packets left over are simply dropped.
The scheme estimates the available bandwidth once per interval and then adjusts the
mapping parameters m_1, ..., m_q and p_{m_1}, ..., p_{m_q} accordingly. When the
channel condition does not change much, only the ratios p_{m_1}, ..., p_{m_q} need
adjustment, which helps the video stream be transmitted smoothly.
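The mapping steps above amount to a greedy water-filling of substream rates into class bandwidths. The following Python sketch (our own rendering of Eqs. (8)-(11), with hypothetical argument names) records, for each substream, which class carries which fraction of it.

```python
def map_substreams(rates, bandwidths):
    """Greedily map substream rates R_1..R_k (most important first) to class
    bandwidths B_1..B_q: each class is filled with whole substreams, and the
    substream straddling the boundary is split between the current class and
    the next one (Eqs. 8-11). Returns, per substream, a list of
    (class_index, fraction); substreams that fit nowhere get an empty list
    and are dropped."""
    mapping = [[] for _ in rates]
    i, carried = 0, 0.0          # carried = fraction of substream i already mapped
    for c, b in enumerate(bandwidths):
        while i < len(rates) and b > 0:
            need = rates[i] * (1.0 - carried)
            if need <= b:        # the rest of substream i fits entirely in class c
                mapping[i].append((c, 1.0 - carried))
                b -= need
                i, carried = i + 1, 0.0
            else:                # split: fraction b/R_i stays in class c
                frac = b / rates[i]
                mapping[i].append((c, frac))
                carried += frac
                b = 0.0
    return mapping
```

In a real sender the fraction would be realized probabilistically per packet, as in the text; here it is recorded deterministically for clarity.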
4.3 Comparison between adaptive scheme and non-adaptive scheme
In the non-adaptive scheme, if the bandwidth requirement cannot be satisfied by the
network, the routers simply drop some packets at random, whether they belong to a
higher or a lower service class, so a more important packet can receive worse service
than a less important one. This is clearly suboptimal.
Because the application knows its own needs much better than the network does, it can
adjust its sending scheme appropriately as network conditions change. Even when the
available bandwidth cannot satisfy the application's demand, the adaptive scheme can
maintain acceptable video performance by dropping some unimportant packets while
ensuring that the packets most important to video quality are transmitted with a small
loss rate. Qualitatively, this scheme is therefore better than the non-adaptive one.
5. End-to-end loose admission control for video streaming applications
The traditional relative DiffServ architecture has no admission control component,
partly because conventional admission control would greatly reduce the scalability and
flexibility of the relative DiffServ model. But without admission control, when the
network load is very heavy, applications cannot maintain sufficient quality no matter how
adaptive they are. We therefore propose a simple, loose, end-to-end admission control
scheme that helps video applications maintain at least acceptable quality without
compromising the flexibility, scalability, and simplicity of the relative DiffServ
architecture.
The objective of the loose end-to-end admission control scheme is to accept as many
video applications as possible while maintaining at least acceptable quality for those
already in the network. The scheme works as follows. When a new application flow
arrives, it simply begins transmission, while storing a flag on the end-user side to record
that it is in a temporary sending status. Other users then notice the change in available
bandwidth and adjust their sending rates in the different DS level classes accordingly,
using the scheme described in Section 4. After a short period (for a video application,
this can be the pre-play buffer time of the receiver), if the average performance of the
new application is acceptable, the application changes its temporary status to a formal
sending status. Otherwise, this indicates not only that the new application cannot obtain
acceptable performance, but also that it is degrading other applications' performance; the
new application therefore stops sending packets and rejects the receiver's request.
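The end-user side of this probe-then-decide logic can be sketched as a tiny state machine. The quality metric, threshold, and sample-based averaging below are our own assumptions; the text only requires that the decision use the average performance over roughly one pre-play buffer period.

```python
TEMPORARY, FORMAL, REJECTED = "temporary", "formal", "rejected"

class LooseAdmission:
    """End-user side of the proposed loose admission control: start sending
    immediately in a temporary state, then promote or abort after one
    pre-play buffer period based on observed average performance."""

    def __init__(self, probe_seconds, min_quality):
        self.probe_seconds = probe_seconds   # e.g. receiver pre-play buffer time
        self.min_quality = min_quality       # application-chosen quality threshold
        self.state = TEMPORARY
        self.samples = []

    def report(self, t, quality):
        """Feed one receiver feedback sample at time t; returns current state."""
        if self.state != TEMPORARY:
            return self.state
        self.samples.append(quality)
        if t >= self.probe_seconds:
            avg = sum(self.samples) / len(self.samples)
            # promote if acceptable; otherwise stop sending and reject
            self.state = FORMAL if avg >= self.min_quality else REJECTED
        return self.state
```

Nothing in the network has to change: the decision is made entirely from receiver feedback at the sender.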
Compared with the current centralized admission control architecture, the Bandwidth
Broker, the proposed loose end-to-end admission control scheme has the following
advantages. First, like the IP network itself, it is highly scalable. A Bandwidth Broker, in
contrast, must maintain the topology and the state of all nodes in the network; as the
network grows, it must store more and more information and perform more computation
per decision, so the current Bandwidth Broker scheme is not very scalable.
Second, our scheme maintains the flexibility of the relative DiffServ model. When a
new application joins the network and the available resources are insufficient, all other
applications release some of their resources, under the constraint of maintaining
acceptable quality. The scheme thus helps the network serve as many applications as
possible; this is the meaning of the word "loose" in the scheme's name. In contrast, a
Bandwidth Broker, as a strict admission control scheme, will not allow an application to
enter the network if there are not enough resources for it.
Third, it is very simple. The scheme places no burden on the routers and needs no
additional node in the network. All an end user needs to do is send some trial traffic,
learn the network condition from the feedback, and adjust its behavior appropriately in
cooperation with the other end users. It requires no change to the current network
architecture and is very easy to deploy. In contrast, a Bandwidth Broker adds
considerable complexity to the relative DiffServ architecture: to set up a flow across a
domain, the domain's Bandwidth Broker must check whether there are enough resources
between the two end points of the flow across the domain; if so, the request is granted
and the Bandwidth Broker's database is updated accordingly.
The disadvantage of our scheme is the additional delay it incurs when rejecting
requests, since the application must send trial traffic before a decision is made. But most
video applications have long durations, so a delay of several seconds is tolerable for the
receivers.
6. Conclusion
In this report, we first introduced a general framework for video streaming in DiffServ
networks, in which a video stream is classified into different categories according to its
loss and delay sensitivity, and the categories are mapped to different DS level classes of
the DiffServ network. The packet forwarding scheme and buffer management of the
relative DiffServ scheme are used in the framework.
Further, we proposed a scheme that allows a video streaming application to adapt to
the time-varying conditions of the network: when the available bandwidth varies, the
scheme adjusts the application's sending rate in the different classes and keeps at least
acceptable quality for the application. We also proposed a loose end-to-end admission
control scheme for video streaming applications in DiffServ networks that maintains
application performance without sacrificing the scalability and flexibility of DiffServ.
Both proposed schemes address the problem from the network's point of view: all
applications must obey the same rules in order to obtain good performance together.
How to punish "dishonest" applications so as to protect the performance of the "honest"
ones is therefore an issue to explore in the future.
Further work also includes simulations in NS and experiments on a testbed to verify
the schemes proposed here.
References
[1] P. P. White, "RSVP and Integrated Services in the Internet: A Tutorial," IEEE Communications Magazine, May 1997, pp. 100-106.
[2] S. Blake et al., "An Architecture for Differentiated Services," IETF RFC 2475, Dec. 1998.
[3] K. Nichols et al., "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," IETF RFC 2474, Dec. 1998.
[4] J. Shin, J. Kim, and C.-C. J. Kuo, "Quality-of-Service Mapping Mechanism for Packet Video in Differentiated Services Network," IEEE Transactions on Multimedia, June 2001, pp. 219-231.
[5] I. Stoica, S. Shenker, and H. Zhang, "Core-stateless fair queueing: Achieving approximately fair bandwidth allocations in high speed networks," in Proc. ACM SIGCOMM, Vancouver, BC, Canada, Sept. 1998.
[6] I. Stoica and H. Zhang, "Providing guaranteed services without per flow management," in Proc. ACM SIGCOMM, Boston, MA, Sept. 1999, pp. 81-94.
[7] C. Dovrolis, D. Stiliadis, and P. Ramanathan, "Proportional differentiated services: Delay differentiation and packet scheduling," in Proc. ACM SIGCOMM, Boston, MA, Sept. 1999.
[8] C. Dovrolis and P. Ramanathan, "A case for relative differentiated services and the proportional differentiation model," IEEE Network Magazine, Sept. 1999, pp. 26-34.
[9] M. Furini and D. Towsley, "Real-Time Traffic Transmissions Over the Internet," IEEE Transactions on Multimedia, Mar. 2001, pp. 33-40.
[10] R. Aravind, M. R. Civanlar, and A. R. Reibman, "Packet Loss Resilience of MPEG-2 Scalable Video Coding Algorithms," IEEE Transactions on Circuits and Systems for Video Technology, Oct. 1996, pp. 426-435.
[11] W. Tan and A. Zakhor, "Packet Classification Schemes for Streaming MPEG Video over Delay and Loss Differentiated Networks," in Proc. Packet Video Workshop 2001, Kyongju, Korea, April 2001.
[12] J. Padhye, V. Firoiu, D. Towsley, et al., "Modeling TCP throughput: A simple model and its empirical validation," in Proc. ACM SIGCOMM, Aug. 1998.