NCCU.MCLab
A New TCP Congestion Control Mechanism over Mobile Ad Hoc Networks by Router-Assisted Approach
Student: Ho-Cheng Hsiao
Advisor: Yao-Nan Lien
2006.10.5
Outline
• Introduction
• Related work
• Our router-assisted approach
• Performance evaluation
• Conclusion
Introduction
• TCP Congestion Control (see the sketch below)
  – Trial-and-error based flow control to control congestion
    • Connection-oriented
    • End-to-end
    • Reliable
  – Slow Start
  – Congestion Avoidance
  – Fast Retransmit and Fast Recovery
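For illustration, the standard behavior just listed can be condensed into a few event handlers. This is a minimal sketch (window in MSS units), not any particular stack's implementation:

```python
# Minimal sketch of classic TCP congestion control, for illustration only.
# cwnd and ssthresh are in units of MSS; real stacks track much more state.

def on_ack(cwnd, ssthresh):
    """Window growth: exponential in Slow Start, linear in Congestion Avoidance."""
    if cwnd < ssthresh:
        return cwnd + 1          # Slow Start: +1 MSS per ACK, doubling each RTT
    return cwnd + 1.0 / cwnd     # Congestion Avoidance: ~+1 MSS per RTT

def on_triple_dup_ack(cwnd):
    """Fast Retransmit / Fast Recovery: halve the window and continue."""
    ssthresh = max(cwnd // 2, 2)
    return ssthresh, ssthresh    # (new cwnd, new ssthresh)

def on_timeout(cwnd):
    """Retransmission timeout: drop back to Slow Start from cwnd = 1."""
    ssthresh = max(cwnd // 2, 2)
    return 1, ssthresh           # (new cwnd, new ssthresh)
```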
Introduction
[Figure: congestion window over time (RTT), showing the Slow Start and Congestion Avoidance phases, the threshold (ssthresh), the reaction to 3 duplicate ACKs, and the reaction to a timeout]
Introduction
• Objectives of TCP congestion control
  – Utilize resources as much as possible
  – Dissolve congestion
  – Avoid congestive collapse
    • Generally occurs at a bottleneck point
• However, the nature of MANETs exposes the weaknesses of TCP
  – Lack of infrastructure
  – Unstable medium
  – Mobility
  – Limited bandwidth
  – Difficulty distinguishing loss due to congestion from loss due to link failure
Introduction
• Analysis of TCP problems over MANET (1)
  – Slow Start
    • It takes several RTTs for slow start to probe the maximum available bandwidth in a MANET
    • Connections spend most of their time in the Slow Start phase due to frequent timeouts
      – up to 40% of connection time
    • Slow start tends to generate too many packets
      – Network overloaded!!
  Slow start always overshoots and causes periodic packet loss
Introduction
• Analysis of TCP problems over MANET (2)
  – Loss-based congestion indication
    • Packet loss = congestion (in a regular network)
      – signaled by three duplicate ACKs or a timeout
    • Packet losses in a MANET can be classified into
      – Congestion loss
      – Random loss
        » Link failure or route change (in most cases)
        » Transmission error
  Not every loss is due to congestion
Introduction
• Analysis of TCP problems over MANET (3)
  – AIMD (Additive Increase, Multiplicative Decrease)
    • Additive Increase
      – Slow convergence to the full available bandwidth
    • Multiplicative Decrease
      – Unnecessary decrease of the congestion window on detecting packet losses
    – This scheme deals well with congestion in a regular network
      » Not a good scheme in a MANET!
  Avoiding unnecessary congestion window drops is the key to better performance
Introduction
[Figure: route failure and random loss falsely trigger congestion control, while Slow Start overshoots; end-to-end congestion control is unaware of the network condition, leading to under- or over-utilization and performance degradation]
What happens if we have more information about the network condition?
Introduction
• Explicit router-assisted techniques
  – Explicit router feedback can indicate the internal network condition
  – Several kinds of explicit information provided by routers can enhance the performance of a transport protocol
    • Available bandwidth (with respect to a path)
    • Queue length
    • Queue size
    • Loss rate
Introduction
• Our approach
  – Design a new TCP congestion control mechanism that is aware of the network condition over a MANET
  – The protocol dynamically responds to different situations according to explicit information from routers
Outline
• Introduction
• Related work
• Our router-assisted approach
• Performance evaluation
• Conclusion
Related Work
• Router-assisted congestion control
  – TCP-F (TCP-Feedback)
  – TCP-ELFN (Explicit Link Failure Notification)
  – ATCP (Ad Hoc TCP)
• Other proposals
  – Adaptive CWL (Congestion Window Limit)
Related Work
• TCP-F (TCP-Feedback)
  – The sender is able to distinguish route failure from network congestion
  – Network components detect the route failure and notify the sender with an RFN (Route Failure Notification) packet
  – The sender then freezes all variables (RTO and cwnd) until it receives an RRN (Route Recovery Notification) packet
Related Work
• TCP-ELFN (Explicit Link Failure Notification)
  – This scheme is based on DSR (Dynamic Source Routing)
  – The ELFN message is similar to the "host unreachable" ICMP message. It contains:
    • Sender and receiver addresses
    • Ports
    • Packet sequence number
  – The sender disables its retransmission timer and enters "standby mode" after receiving an ELFN
  – The sender then keeps probing the network by sending a small packet until the route is restored (exploiting the nature of DSR)
Related Work
• ATCP (Ad Hoc TCP)
  – A layer called ATCP is inserted between the TCP and IP layers of the source node
  – ATCP listens to the network state via
    • ECN (Explicit Congestion Notification) messages – congestion!!
    • ICMP "Destination Unreachable" messages – network partitioning!!
  – The sender can be put into 3 states:
    • Persist state – triggered by an ICMP message
    • Congestion control state – triggered by an ECN message
    • Retransmit state – triggered by packet loss without the ECN flag
      • Note: after receiving three duplicate ACKs, the sender does not invoke congestion control; it puts TCP in the persist state and quickly retransmits the lost packet from the buffer (multipath routing or channel loss)
  – The congestion window size is recomputed after route re-establishment
Related Work
• Adaptive CWL (Congestion Window Limit)
  – If the congestion window grows beyond an upper bound, TCP performance degrades
  – Find the BDP (Bandwidth-Delay Product) of a path in the MANET
  – This BDP upper bound is used to dynamically adjust TCP's maximum window size (see the sketch below)
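To make the CWL idea concrete, here is a hedged sketch. The inputs (a bottleneck-bandwidth estimate and an RTT estimate) and the helper names are illustrative assumptions, not the cited paper's exact estimator:

```python
# Hypothetical sketch of the adaptive-CWL idea: cap TCP's maximum window
# at the path's bandwidth-delay product. How bottleneck_bw_bps and rtt_s
# are obtained is left open here (an assumption, not the paper's method).

def bdp_packets(bottleneck_bw_bps, rtt_s, mss_bytes=1460):
    """Bandwidth-delay product of the path, expressed in packets."""
    return (bottleneck_bw_bps * rtt_s) / (8 * mss_bytes)

def clamp_cwnd(cwnd, bottleneck_bw_bps, rtt_s):
    """Never let the congestion window exceed the path's BDP."""
    return min(cwnd, max(1, int(bdp_packets(bottleneck_bw_bps, rtt_s))))

# Example: a 2 Mb/s bottleneck with a 100 ms RTT gives a BDP of ~17
# packets, so a window of 32 would be clamped down to 17.
print(clamp_cwnd(32, 2_000_000, 0.1))
```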
Related Work
• [29], [31], and [33] show that TCP with a small congestion window (e.g., 1 or 2) tends to outperform TCP with a large congestion window in wireless multihop networks
• In [29], Fu et al. report that there exists an optimal TCP window size W* at which TCP achieves its best throughput
  – However, TCP operates at an average window size much larger than W* (overshooting)
Outline
• Introduction
• Related work
• Our router-assisted approach
• Performance evaluation
• Conclusion
Design Philosophy
• Fully utilize the available bandwidth along the path
• Reduce the chance of congestion
• Distinguish congestion loss from random loss
Design Procedure
• Estimation of the available bandwidth
• Dynamic adjustment of the sender's sending rate according to router feedback
• Recovery of randomly lost packets
Objective
• Allow the sender to reach an appropriate sending rate quickly
• Maintain throughput
• Dissolve congestion
• Provide fairness with other TCP variants
TCP Muzha
• Window-based router-assisted congestion control
• Sender function
  – Modification of Slow Start
• Router function
  – Estimation of available bandwidth
  – Computation of the DRAI (Data Rate Adjustment Index)
  – Handling of random loss
• Receiver function
  – Return the ACK with the DRAI back to the sender
TCP Muzha
• Modification of slow start
  – The sender dynamically adjusts its sending rate according to the router feedback collected by the receiver
    • Without causing periodic packet loss
    • Avoids the overshooting problem
  – Router feedback
    • We use available bandwidth
TCP Muzha
• Router function
  – Estimation of available bandwidth
    • Most traffic flows tend to pass through the routers that have more bandwidth
    • We assume each router is aware of:
      – Incoming and outgoing traffic state
      – Aggregate bandwidth state
    – Routers have more precise information regarding the TCP flows of a bottleneck node
TCP Muzha
• How should the available bandwidth be used?
  – Direct publication of the actual available bandwidth?
    • The published value is shared by 5 connections
    • It would seduce greedy TCP senders
    • Bandwidth fluctuation
• Our approach
  – Routers compute a fuzzified index according to the available bandwidth as a guideline for senders to adjust their sending rate
[Figure: 5 senders and 5 destinations sharing one router; the advertised bandwidth value would be shared by all 5 connections]
TCP Muzha
• Why an index?
  – Consistency of bandwidth computation
  – Simplicity
  – Ease of implementation
  – The simplest case
    • ECN – a bi-level data rate adjustment index (0, 1)
  – With the router's assistance, the sender can control its sending rate more precisely
    • A multi-level data rate adjustment index
Data Rate Adjustment Index Conversion - Multi-level (1/6)
• How many levels are appropriate?
  – Currently there is no research or investigation on this topic
    • We use simulation to find a workable setting adapted to the nature of MANETs
  – Our leveling concept
    • Avoid congestion
    • Fine, delicate adjustment of the data rate
    • Maintain throughput
Data Rate Adjustment Index Conversion - Multi-level (2/6)
• Our previous study
[Figure: data-rate scale ranging from Aggressive Deceleration through Moderate Acceleration/Deceleration to Aggressive Acceleration]
Data Rate Adjustment Index Conversion - Multi-level (3/6)

Fixed parameters:
  Parameter            | Range
  Node number          | 4 ~ 32
  Bandwidth            | 2 Mbps
  MAC                  | 802.11
  Receiver window size | 4, 8, 32

Simulated parameters: number of levels, setting for each level, level range, number of nodes, bandwidth, ……

[Figure: chain topology (Sender - router - router - Receiver); average throughput (kbps, 0~200) vs. number of levels (3~7) for rwnd = 4, 8, and 32]
Data Rate Adjustment Index Conversion - Multi-level (4/6)

Data Rate Adjustment Index:
  Index 1: Aggressive Deceleration
  Index 2: Moderate Deceleration
  Index 3: Stabilizing
  Index 4: Moderate Acceleration
  Index 5: Aggressive Acceleration
Data Rate Adjustment Index Conversion - Multi-level (5/6)
• Each router estimates its available bandwidth and converts it to a DRAI (as sketched below)
  – Total bandwidth / number of flows → convert
• Each router then compares and marks every passing packet with its DRAI
  – If the router's value is greater than the one carried in the packet, the packet's DRAI stays untouched
  – Otherwise, it is replaced with the new DRAI
• The smallest DRAI value along the path
  – Minimal Data Rate Adjustment Index (MRAI)
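A minimal sketch of this router-side step follows. Only the min-marking rule (packets accumulate the path's MRAI) is taken from the slides; the five ratio thresholds are illustrative assumptions, since the thesis derives its level boundaries by simulation:

```python
# Sketch of router-side DRAI computation and packet marking.
# The ratio thresholds below are assumptions for illustration.

def fair_share(available_bw_bps, num_flows):
    """Per-flow share: total available bandwidth / number of flows."""
    return available_bw_bps / max(num_flows, 1)

def to_drai(share_bps, current_rate_bps):
    """Fuzzify the fair share into a 5-level index (assumed thresholds)."""
    ratio = share_bps / max(current_rate_bps, 1.0)
    if ratio >= 2.0:
        return 5   # aggressive acceleration
    if ratio >= 1.2:
        return 4   # moderate acceleration
    if ratio >= 0.8:
        return 3   # stabilizing
    if ratio >= 0.5:
        return 2   # moderate deceleration
    return 1       # aggressive deceleration

def mark(packet_drai, router_drai):
    """If the router's index is greater, the packet stays untouched;
    otherwise it is overwritten, so the packet ends up carrying the
    minimum index along the path (the MRAI)."""
    return min(packet_drai, router_drai)
```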
Data Rate Adjustment Index Conversion - Multi-level (6/6)
[Figure: a path from sender to receiver over links of 11 Mbps, 1 Mbps, 5 Mbps, 1 Mbps, and 11 Mbps; routers mark packets with DRAI 5 on the fast links and DRAI 1 at the 1 Mbps bottleneck links, so the packet arrives carrying MRAI = 1]
TCP Muzha
• Handling of random loss
  – Effects of random loss
    • Retransmission
    • Timeout
    • Reduction of the congestion window
  – Original indication of packet loss
    • 3 duplicate ACKs
  – Our approach
    » 3 duplicate ACKs with a deceleration marking → congestion
    » 3 duplicate ACKs with an acceleration marking or no marking → random loss
TCP Muzha
• We simplify the three phases of TCP NewReno into two phases (see the sketch below)
  – Congestion Avoidance (CA)
  – Fast Retransmit and Fast Recovery (FF)
[Figure: two-state sender diagram; from Start the sender enters CA, where it sends packets and updates CWND by MRAI; a triple duplicate ACK moves it to FF, and a new ACK or a timeout returns it to CA]
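A sketch of the sender side of this two-phase design: update_cwnd follows the DRAI table in the appendix slides, the loss classification follows the random-loss slide above, and the surrounding glue is an illustrative assumption:

```python
# Sketch of the simplified two-phase TCP Muzha sender (CA and FF).
# The DRAI->cwnd rules come from the table in the appendix slides;
# the control flow around them is an illustrative assumption.

def update_cwnd(cwnd, mrai):
    """Apply the 5-level adjustment carried back to the sender (MRAI)."""
    rules = {
        5: cwnd * 2,    # aggressive acceleration
        4: cwnd + 1,    # moderate acceleration
        3: cwnd,        # stabilizing
        2: cwnd - 1,    # moderate deceleration
        1: cwnd / 2,    # aggressive deceleration
    }
    return max(rules[mrai], 1)

def on_triple_dup_ack(cwnd, mrai):
    """FF phase: use the marking to tell congestion from random loss."""
    if mrai is not None and mrai <= 2:   # deceleration marking: congestion
        return max(cwnd / 2, 1)          # back off
    return cwnd                          # random loss: retransmit only
```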
Outline
• Introduction
• Related work
• TCP Muzha
• Performance evaluation
• Conclusion
Performance Evaluation
• Parameters
  – Chain topology (4~32 hops)
  – Link layer: IEEE 802.11 MAC
  – Bandwidth: 2 Mb/s
  – Transmission radius: 250 m
  – Routing: AODV
[Figure: single TCP flow over a chain topology of nodes 1 through 9]
Performance Evaluation
• Evaluation metrics
  – Congestion window size change
  – Throughput
  – Retransmission
  – Fairness
Performance Evaluation
• Metric: congestion window size change
[Figures: congestion window size over time for each TCP variant]
Performance Evaluation
[Figure: throughput (kbps, 0~250) vs. number of hops (4~32) for NewReno, SACK, Vegas, and Muzha; receiver window = 4]
Performance Evaluation
[Figure: throughput (kbps, 0~250) vs. number of hops (4~32) for NewReno, SACK, Vegas, and Muzha; receiver window = 8]
Performance Evaluation
[Figure: throughput (kbps, 0~250) vs. number of hops (4~32) for NewReno, SACK, Vegas, and Muzha; receiver window = 32]
Performance Evaluation
[Figure: number of retransmissions (0~35) vs. number of hops (4~32) for NewReno, SACK, Vegas, and Muzha; receiver window = 4]
Performance Evaluation
[Figure: number of retransmissions (0~70) vs. number of hops (4~32) for NewReno, SACK, Vegas, and Muzha; receiver window = 8]
Performance Evaluation
[Figure: number of retransmissions (0~80) vs. number of hops (4~32) for NewReno, SACK, Vegas, and Muzha; receiver window = 32]
Performance Evaluation
• Fairness test 1
  – Cross topology
  – 4 hops, 6 hops, and 8 hops
  – Bandwidth: 2 Mb/s
  – Simulation time: 50 s
  – Two sets
    • TCP Vegas + TCP NewReno
    • TCP Muzha + TCP NewReno
• Fairness test 2
  – Throughput dynamics
[Figure: cross topology with TCP flow 1 and TCP flow 2 crossing at the center]
Performance Evaluation
• Fairness index
  – Jain's fairness index [ ] (computed in the sketch below):

    $J = \dfrac{\left(\sum_{i=1}^{n} \chi_i\right)^2}{n \sum_{i=1}^{n} \chi_i^2}$

    n: number of flows
    χi: throughput of the i-th flow
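A minimal computation of the index as defined above:

```python
# Jain's fairness index: J = (sum x_i)^2 / (n * sum x_i^2).
# J = 1.0 means all n flows receive equal throughput.

def jain_index(throughputs):
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    return (total * total) / (n * sum_sq) if sum_sq else 0.0

# Example with the 4-hop Muzha + NewReno throughputs from the appendix
# tables (106.0 and 102.0 kbps): the index is ~1.00, consistent with the
# ~0.99 reported on the slides.
print(round(jain_index([106.0, 102.0]), 3))
```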
Performance Evaluation
• Fairness comparison - throughput
[Figures: Fairness Test 1 (cross topology), average throughput (kbps, 0~250) at 4, 6, and 8 hops; one chart for TCP Vegas vs. TCP NewReno and one for TCP Muzha vs. TCP NewReno, each with the aggregate]
Performance Evaluation
• Fairness comparison - Jain's index
[Figures: fairness index (0~1.2) at 4, 6, and 8 hops; one chart for Vegas vs. NewReno and one for Muzha vs. NewReno]
Performance Evaluation
• Fairness test 1
  – Cross topology
  – 4 hops, 6 hops, and 8 hops
  – Bandwidth: 2 Mb/s
  – Two sets
    • TCP Vegas + TCP NewReno
    • TCP Muzha + TCP NewReno
• Fairness test 2
  – Throughput dynamics
  – Three flows, each starting at a different time
Performance Evaluation
[Figures: throughput dynamics shown as congestion window size (pkts) vs. time (0~28 s) for three flows starting at different times; one panel each for Muzha, SACK, Vegas, and NewReno]
Outline
• Introduction
• Related work
• TCP Muzha
• Performance evaluation
• Conclusion
Conclusion
• TCP is still the dominant transport-layer protocol for conventional and emerging applications
• We proposed a new TCP scheme over MANETs that uses a router-assisted approach to improve the performance of TCP
• With the routers' assistance, our scheme achieves about 5%~10% throughput improvement and fewer retransmissions in a MANET
• Our proposed protocol provides fair service to different flows while coexisting with other TCP variants
• Future work includes further investigation of the DRAI function and of other kinds of explicit information from routers
Discussion
• AIMD
• LIMD
• MIMD
Discussion
• Estimation of available bandwidth
  – Moving average of link-utilization measurements (utilization sampled at time t), as sketched below
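The slide names the technique only; as a hedged illustration, an exponentially weighted moving average over periodic utilization samples could look like this (the gain ALPHA and the sampling interface are assumptions):

```python
# Hypothetical moving-average estimator for link utilization; ALPHA and
# the sampling scheme are assumptions, since the slide only names the idea.

ALPHA = 0.125  # smoothing gain (same order as TCP's RTT estimator)

def smooth_utilization(prev_estimate, sample_at_t):
    """Blend the utilization measured at time t into the running average."""
    return (1 - ALPHA) * prev_estimate + ALPHA * sample_at_t

def available_bandwidth(link_bw_bps, utilization):
    """Available bandwidth is the unused fraction of the link."""
    return link_bw_bps * (1.0 - utilization)
```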
NCCU.MCLab
Q&A
Discussion
• Model?
  – BDP (Bandwidth-Delay Product)?
    • The TCP receiver estimates the optimal window size according to the router feedback and sets the CWL via awnd (the advertised window)
• Use of pacing?
Summary
• A MANET is a very unstable network
• Most versions of TCP suffer severe performance degradation over such networks
• A router-assisted approach may not be easy to implement in a WAN
• However, due to the unique characteristics of a MANET (each host plays a hybrid role) and the ease of modification, a router-assisted approach is a realistic way to improve the performance of TCP over MANETs
TCP Muzha Ver. 2
• Modifications of TCP Muzha
  – Congestion control
    • Based on TCP NewReno
  – New Data Rate Adjustment Index (DRAI)
  – Link error and packet loss handling
Conclusion
• TCP is still the dominant transport-layer protocol for conventional and emerging applications
• We proposed a new TCP scheme over MANETs that uses a router-assisted approach to improve the performance of TCP
• Future work: consideration of mobility
Related Work
• TCP Reno, NewReno, and SACK
  – TCP Reno
    • Slow Start
    • AIMD (Additive Increase, Multiplicative Decrease)
    • Fast retransmit and fast recovery
      – Only a single packet drop within one window can be recovered
      – Long retransmission timeout in case of multiple packet losses
  – NewReno
    • Deals with multiple losses within a single window
    • Retransmits one lost packet per RTT until all lost packets from the same window are recovered
  – SACK
    • Deals with the same problem as NewReno
    • Uses the SACK option field, which contains a number of SACK blocks
    • Lost packets in a single window can be recovered within one RTT
Related Work
• TCP Vegas
  – Vegas measures RTT to calculate the amount of packets the sender can transmit
  – The congestion window can only be doubled every other RTT, and is reduced by 1/8 to keep the proper amount of packets in the network
    • Smooths the change of data rate
Related Work
• TCP Veno
  – A combination of TCP Vegas and Reno
  – Veno uses the Vegas mechanism to determine the type of packet loss
    • Random loss or actual congestion
  – Veno modifies Reno with a less aggressive sending rate
    • Prevents unnecessary throughput degradation
Related Work
• ECN (Explicit Congestion Notification)
  – A RED extension that marks packets to signal congestion
  – ECN must be supported by both TCP senders and receivers
  – ECN-compliant TCP senders initiate their congestion avoidance algorithm after receiving marked ACK packets from the TCP receiver
Related Work
• Anti-ECN
  – Routers set a bit in the packet header to indicate an under-utilized link
  – Allows the sender to increase as fast as slow start over an uncongested path
Related Work
• Quick Start
  – Slow start requires a significant number of RTTs and a large amount of data to fully use the available bandwidth
  – Quick Start allows the sender to use a higher sending rate according to explicit permission requested from the routers along the path
    • If the routers are underutilized, they may approve the sender's request for a higher sending rate
Related Work
• XCP (eXplicit Control Protocol)
  – XCP generalizes ECN
    • Instead of a one-bit congestion indication, XCP routers inform senders about the degree of congestion at the bottleneck
  – Decouples utilization control from fairness control
Related Work
• TCP Muzha
  – Router-assisted approach
  – Finds where the bottleneck is and obtains information about it
  – Multi-level Data Rate Adjustment Index
    • Fuzzy multilevel congestion notification
Our Approach
• AIMD vs. AIAD vs. MIAD vs. MIMD
  – AI: 1, 2, 3
  – AD: 1, 2, 3
  – MI: 1.125, 1.19, 1.25
  – MD: 0.5, 0.65, 0.8
• Final four settings (written out as code below)
  – AIMD (1, 0.8)
  – AIAD (1, 3)
  – MIMD (1.125, 0.8)
  – MIAD (1.125, 3)
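Written out as per-event window-update rules, the four final settings could look like this (a sketch; pairing the increase with ACK-per-RTT and the decrease with loss is the standard reading of these parameters, not stated on the slide):

```python
# The four final settings from this slide as window-update rules
# (increase applied per RTT without loss, decrease applied on loss).

def aimd(cwnd, loss):                 # AIMD (1, 0.8)
    return cwnd * 0.8 if loss else cwnd + 1

def aiad(cwnd, loss):                 # AIAD (1, 3)
    return max(cwnd - 3, 1) if loss else cwnd + 1

def mimd(cwnd, loss):                 # MIMD (1.125, 0.8)
    return cwnd * 0.8 if loss else cwnd * 1.125

def miad(cwnd, loss):                 # MIAD (1.125, 3)
    return max(cwnd - 3, 1) if loss else cwnd * 1.125
```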
Performance Evaluation
[Figures: average delay (ms) vs. number of hops (4~32) for NewReno, SACK, Vegas, and Muzha; one panel each for receiver window = 4, 8, and 32]
Performance Evaluation
[Figure: one panel per TCP variant - Muzha, NewReno, SACK, and Vegas]
Introduction
• TCP overshooting problem
[Figure: a vicious cycle - the network is overloaded by overshooting → MAC loss due to contention → route failure → route recovery (generates more traffic) → TCP connection failure, then timeout → TCP restart (slow start again)]
Related Work
• TCP-BuS (Buffering capacity and Sequence information)
  – Based on the source-initiated on-demand ABR routing protocol
  – If a route fails, the pivoting node notifies the source
    • Explicit Route Disconnection Notification (ERDN)
      – Carries the sequence number of the TCP segment at the head of the queue
  – During route re-establishment, packets already sent are buffered and the RTO is doubled
  – When a route is discovered, the receiver sends the sender the last sequence number it has successfully received
  – The sender then selectively retransmits only the lost packets, and the intermediate node starts sending the buffered packets
  – Reliable retransmission of the control messages (ERDN/ERSN)
    • Sense the channel after sending the control message
    • Retransmit if sending fails
  – Contribution
    • Packets buffered; reliable transmission of control packets
TCP Muzha

DRAI | Meaning                 | Change of CWND
5    | Aggressive Acceleration | cwnd = cwnd * 2
4    | Moderate Acceleration   | cwnd = cwnd + 1
3    | Stabilizing             | cwnd = cwnd
2    | Moderate Deceleration   | cwnd = cwnd - 1
1    | Aggressive Deceleration | cwnd = cwnd * 1/2
Performance Evaluation
• Index comparison (4 hops)

                        | TCP NewReno | TCP Vegas | TCP Muzha
Avg. throughput (kbps)  | 175         | 214       | 186
Avg. retransmissions    | 14          | 6         | 10
Fairness (with NewReno) | -           | 0.925     | 0.99
Performance Evaluation
• Index comparison (6 hops)

                        | TCP NewReno | TCP Vegas | TCP Muzha
Avg. throughput (kbps)  | 54          | 89        | 65
Avg. retransmissions    | 16          | 6         | 10
Fairness (with NewReno) | -           | 0.74      | 0.99
Performance Evaluation
• Index comparison (8 hops)

                        | TCP NewReno | TCP Vegas | TCP Muzha
Avg. throughput (kbps)  | 34          | 35        | 37
Avg. retransmissions    | 25          | 9         | 18
Fairness (with NewReno) | -           | 0.89      | 0.99
Performance Evaluation
• 4-hop cross topology

Vegas + NewReno  | Throughput (kbps) | Delay (ms)
Flow 1 (Vegas)   | 35.7              | 72.4
Flow 2 (NewReno) | 118.0             | 100.4
Aggregate        | 153.7             |
Fairness         | 0.72              |

Muzha + NewReno  | Throughput (kbps) | Delay (ms)
Flow 1 (Muzha)   | 106.0             | 63.9
Flow 2 (NewReno) | 102.0             | 65.3
Aggregate        | 208               |
Fairness         | 0.99              |
Performance Evaluation
• 6-hop cross topology

Vegas + NewReno  | Throughput (kbps) | Delay (ms)
Flow 1 (Vegas)   | 23.2              | 264.0
Flow 2 (NewReno) | 89.9              | 101.6
Aggregate        | 113.1             |
Fairness         | 0.74              |

Muzha + NewReno  | Throughput (kbps) | Delay (ms)
Flow 1 (Muzha)   | 65.3              | 119.5
Flow 2 (NewReno) | 53.6              | 112.4
Aggregate        | 118.9             |
Fairness         | 0.99              |
Performance Evaluation
• 8-hop cross topology

Vegas + NewReno  | Throughput (kbps) | Delay (ms)
Flow 1 (Vegas)   | 20.2              | 91.9
Flow 2 (NewReno) | 41.7              | 168.0
Aggregate        | 61.9              |
Fairness         | 0.89              |

Muzha + NewReno  | Throughput (kbps) | Delay (ms)
Flow 1 (Muzha)   | 43.7              | 97.2
Flow 2 (NewReno) | 36.8              | 112.4
Aggregate        | 80.5              |
Fairness         | 0.99              |