Using SCTP to Improve QoS and Network Fault-Tolerance of DRE Systems
Yamuna Krishnamurthy and Irfan Pyarali (OOMWorks LLC, Metuchen, NJ)
Craig Rodrigues and Prakash Manghwani (BBN Technologies, Cambridge, MA)
Gautam Thaker (LM ATL, Camden, NJ)
Real-Time and Embedded Systems Workshop
July 12-15, 2004
Reston, Virginia, USA
Transport Protocol Woes in DRE Systems
• UDP
  – Unreliable data delivery
  – Unordered data delivery
  – No network fault-tolerance
• TCP
  – Non-deterministic congestion control
  – No control over key parameters like retransmission timeout
  – No network fault-tolerance
Stream Control Transmission Protocol (SCTP)
• IP-based transport protocol originally designed for telephony signaling
• SCTP supports features found to be useful in TCP or UDP
  – Reliable data transfer (TCP)
  – Congestion control (TCP)
  – Message boundary conservation (UDP)
  – Path MTU discovery and message fragmentation (TCP)
  – Ordered (TCP) and unordered (UDP) data delivery
• Additional features in SCTP
  – Multi-streaming: multiple independent data flows within one association
  – Multi-homing: a single association runs across multiple network paths
  – Security and authentication: checksum, tagging, and a security cookie mechanism to prevent SYN-flood attacks
• Multiple types of service
  – SOCK_SEQPACKET – message oriented, reliable, ordered/unordered
  – SOCK_STREAM – TCP-like, byte oriented, reliable, ordered
  – SOCK_RDM – UDP-like, message oriented, reliable, unordered
• Control over key parameters like (see the socket-level sketch below)
  – Retransmission timeout
  – Number of retransmissions
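The multi-homing and parameter control listed above are exposed through the SCTP sockets extension API. The sketch below is an assumption-laden illustration (Linux host with lksctp-tools, link with -lsctp; the port and addresses are placeholders borrowed from the flow-spec example later in the deck), not code from the paper.

// Minimal sketch: create a one-to-many SCTP socket and bind it to two local
// interfaces so the resulting associations are multi-homed.
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>
#include <arpa/inet.h>
#include <cstring>
#include <cstdio>
#include <unistd.h>

int main()
{
  int fd = socket(AF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);
  if (fd < 0) { perror("socket"); return 1; }

  sockaddr_in addrs[2];
  std::memset(addrs, 0, sizeof(addrs));
  for (int i = 0; i < 2; ++i) {
    addrs[i].sin_family = AF_INET;
    addrs[i].sin_port = htons(5000);                        // placeholder port
  }
  inet_pton(AF_INET, "192.168.1.3",  &addrs[0].sin_addr);   // primary path
  inet_pton(AF_INET, "192.168.10.3", &addrs[1].sin_addr);   // secondary path

  // Bind both local addresses to the same endpoint (multi-homing).
  if (sctp_bindx(fd, reinterpret_cast<sockaddr*>(addrs), 2, SCTP_BINDX_ADD_ADDR) < 0) {
    perror("sctp_bindx");
    return 1;
  }

  listen(fd, 5);   // accept associations from peers
  // ... sctp_recvmsg()/sctp_sendmsg() would follow here ...
  close(fd);
  return 0;
}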
Integration of SCTP in DRE Systems
• Issues in integrating SCTP directly
  – Possible re-design of the system
  – Accidental and incidental errors
  – Evolving API
  – Multiple SCTP implementations
• Solution
  – Integrate SCTP via COTS middleware
Proof of Concept
• Integration of SCTP in the UAV application
  – UAV is a reconnaissance image data streaming DRE application
• Via CORBA COTS middleware
  – SCTP was added as a transport protocol for GIOP messages (SCIOP)
  – SCTP was added as a transport protocol to the CORBA Audio/Video Streaming Service
SCTP Inter-ORB Protocol (SCIOP)
• Extension to GIOP that leverages the features of SCTP (OMG standardization completed May 2003)
• Makes CORBA objects resilient to network failures
• Implementation in TAO
  – By LM ATL
  – Goals
    • SCTP added as a pluggable protocol in TAO, conforming to the OMG SCIOP standard (OMG TC Document mars/2003-05-03)
    • Demonstrate bounded-time recovery of CORBA object interactions after a network failure
CORBA AVStreams Overview
• StreamCtrl
  – interface for controlling the stream
• MMDevice
  – interface for a logical or physical device
• StreamEndPoint
  – network-specific aspects of a stream
• VDev
  – properties of the stream
Integration of SCTP in CORBA AVStreams
• Added as a transport protocol to the pluggable protocol framework
• Extended the flow specification entry (a string-assembly sketch follows)
  – To include secondary endpoint addresses
  – FlowName\IN\SCTP_SEQ\192.168.1.3,192.168.10.3..\192.168.1.4,192.168.10.4…
• Used the ACE Wrapper Façade designed and implemented by LM ATL to
  – Set up SCTP associations using multi-homed addresses
  – Send/receive data using SCTP
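Since the extended flow specification is just a backslash-delimited string, building an entry with primary and secondary addresses is plain string assembly. The sketch below is illustrative only; the flow name and addresses echo the slide's example and are not taken from the actual test code.

// Illustrative only: assemble an AVStreams flow specification entry of the form
//   FlowName\IN\SCTP_SEQ\<sender addresses>\<receiver addresses>
// where each address field lists a primary address followed by secondaries.
#include <string>
#include <vector>
#include <iostream>

static std::string join(const std::vector<std::string>& addrs)
{
  std::string out;
  for (std::size_t i = 0; i < addrs.size(); ++i)
    out += (i ? "," : "") + addrs[i];
  return out;
}

int main()
{
  std::vector<std::string> sender   = {"192.168.1.3", "192.168.10.3"};
  std::vector<std::string> receiver = {"192.168.1.4", "192.168.10.4"};

  std::string entry = std::string("FlowName") + "\\IN\\SCTP_SEQ\\" +
                      join(sender) + "\\" + join(receiver);

  // Prints: FlowName\IN\SCTP_SEQ\192.168.1.3,192.168.10.3\192.168.1.4,192.168.10.4
  std::cout << entry << "\n";
  return 0;
}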
AVStreams Pluggable Protocol Framework
[Diagram: flow protocol components (RTP, RTCP, SFP, Flow Protocol Factory) layered over flow transport components (TCP, UDP, ATM AAL5, NULL, SCTP, Flow Transport Factory)]
• SCTP added as a transport protocol
Distributed UAV Image Dissemination
[Diagram: mission requirements, QoS dimensions, and adaptation strategies across the UAV control, video processing/ATR, and command decision components]
• Mission requirements
  – Piloting: requires an out-of-the-window view of imagery
  – Surveillance: requires high-fidelity imagery
  – Command/Control: must not lose any important imagery
• QoS dimensions: video capture and processing; platform steering, navigation, weapon targeting and delivery; bandwidth; data; network management; control signals; CPU; image display; fault tolerance
• Adaptation strategies
  – Load balancing: migrating tasks to less loaded hosts
  – Network management: DiffServ, reservation
  – CPU management: scheduling, reservation
  – Data management: filtering; tiling and compression; scaling
OEP Scenario
• End-to-end delivery of video (and other forms of sensor) data, with control feeding back based upon data content and requirements
• Simultaneously, various forms of control stations receive real-time video tailored to their needs, display it, process it, and disseminate it
• Users act upon UAV information and interact with UAVs and other subsystems in real time
High-Level View of UAV OEP Architecture
[Diagram: UAV Hosts 1-3 run MPEG/video source and filter/scale/compress processes under throughput and quality contracts; Video Distributor Processes 1-3 (Host 4) apply bandwidth management over wired and wireless Ethernet; Control Station Hosts 5-7 display the streams and run ATR under throughput and ATR contracts; all video flows use the CORBA A/V Streaming Service]
Experiments
Protocol Analysis and Comparisons
• Several different experiments were performed
  – Paced invocations
  – Throughput tests
  – Latency tests
• Several different protocols were used (see the usage sketch below)
  – TCP-based IIOP
    • -ORBEndpoint iiop://host:port
  – UDP-based DIOP
    • -ORBEndpoint diop://host:port
  – SCTP-based SCIOP
    • -ORBEndpoint sciop://host1+host2:port
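For context, selecting one of these protocols in a TAO application is only a matter of the endpoint option consumed by ORB_init. The sketch below is a minimal illustration, assuming TAO is built with the corresponding pluggable protocol; the host names and port are placeholders.

// Illustrative sketch: selecting SCIOP in a TAO application via -ORBEndpoint.
// The option is normally supplied on the command line, e.g.
//   ./server -ORBEndpoint sciop://host1+host2:5000
#include "tao/corba.h"

int main(int argc, char* argv[])
{
  // ORB_init() consumes -ORB* options, so the sciop endpoint given above is
  // enough to make this server accept multi-homed SCTP associations.
  CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

  // ... resolve the RootPOA, activate servants, then orb->run() ...

  orb->destroy();
  return 0;
}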
Primary Configuration
[Diagram: Sender connected to Receiver through a Traffic Shaping Node over two links (addresses .1.1/.1.2 and .11.1/.11.2); timelines show link 1 and link 2 up/down timing, each up for 4 seconds and down for 2 seconds]
• Both links are up at 100 Mbps
• 1st link has 1% packet loss
• Systematic link failures
  – Up 4 seconds, down 2 seconds
  – One link is always available
• Host configuration
  – RedHat 9 with OpenSS7 SCTP 0.2.19 in kernel (BBN-RH9-SS7-8)
  – RedHat 9 with LKSCTP 2.6.3-2.1.196 in kernel (BBN-RH9-LKSCTP-3)
  – Pentium III, 850 MHz, 1 CPU, 512 MB RAM
  – ACE+TAO+CIAO version 5.4.1+1.4.1+0.4.1
  – gcc version 3.2.2
Secondary Configuration
[Diagram: Sender connected to a Distributor and then to the Receiver, each hop passing through a Traffic Shaping Node over two links (addresses .1.1/.1.2 and .11.1/.11.2 on the first hop, .2.1/.2.2 and .22.1/.22.2 on the second)]
• Results as expected:
  – Roundtrip latency about doubled
  – Throughput more or less the same
  – Little effect on paced invocations
• SCTP failures, maybe related to the 4 network cards
• Sender, Distributor, and Receiver were normal CORBA applications
• Similar results were also obtained when AVStreaming was used to transfer data between these components
Modified SCTP Parameters

Parameter                  Default   OpenSS7   LKSCTP
RTO initial                3000      0         10
RTO min                    1000      0         10
RTO max                    60000     0         10
Heartbeat interval         30        1         10
SACK delay max             200       0         ?
Path retrans max           5         0         0
Association retrans max    10        25        25
Initial retries            8         25        25

• LKSCTP was less reliable, had higher latency and lower throughput relative to OpenSS7 in almost every test
• LKSCTP also had errors when sending large frames (a per-socket tuning sketch follows)
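The table lists the protocol defaults and the values used with each stack. The experiments tuned these system-wide (a set_sctp_params.sh script was run on each node, per the Emulab configuration later in the deck), but for illustration the hedged sketch below shows how comparable per-socket values could be set with the standard Linux SCTP socket options; the specific values mirror the LKSCTP column and are an assumption, not the experiment's script.

// Hedged sketch: overriding RTO and retransmission limits on one SCTP socket
// using the standard SCTP socket options (times in milliseconds).
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>
#include <cstring>
#include <cstdio>

static bool tune_sctp(int fd)
{
  sctp_rtoinfo rto;
  std::memset(&rto, 0, sizeof(rto));
  rto.srto_assoc_id = 0;      // applies to future associations on this socket
  rto.srto_initial  = 10;     // RTO initial (ms)
  rto.srto_min      = 10;     // RTO min (ms)
  rto.srto_max      = 10;     // RTO max (ms)
  if (setsockopt(fd, IPPROTO_SCTP, SCTP_RTOINFO, &rto, sizeof(rto)) < 0) {
    perror("SCTP_RTOINFO");
    return false;
  }

  sctp_assocparams assoc;
  std::memset(&assoc, 0, sizeof(assoc));
  assoc.sasoc_assoc_id = 0;
  assoc.sasoc_asocmaxrxt = 25;   // association retransmission maximum
  if (setsockopt(fd, IPPROTO_SCTP, SCTP_ASSOCINFO, &assoc, sizeof(assoc)) < 0) {
    perror("SCTP_ASSOCINFO");
    return false;
  }
  return true;
}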
Paced Invocations
• Emulating rate-monotonic systems and audio/visual applications
• Three protocols: IIOP, DIOP, SCIOP
• Frame size was varied from 0 to 64k bytes
  – DIOP frame size was limited to a maximum of 8k bytes
• Invocation rate was varied from 5 to 100 Hertz
• IDL interface
  – oneway void method(in octets payload)
• Experiment measures (a client-side sketch follows this list)
  – Maximum inter-frame delay at the server
  – Number of frames that were received at the server
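To make the pacing concrete, here is a hedged sketch of what a paced client loop might look like. The real tests measure inter-frame delay at the server and send a CORBA oneway call over IIOP/DIOP/SCIOP; the callback, rate, and frame count below are illustrative placeholders.

// Hedged sketch of a paced-invocation client: invoke send_frame() at a fixed
// rate and count client-side deadline overruns.
#include <chrono>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

int main()
{
  using clock = std::chrono::steady_clock;

  const double rate_hz = 100.0;                       // 5..100 Hz in the experiments
  const std::size_t frame_size = 64 * 1024;           // 0..64k bytes in the experiments
  const int total_frames = 1000;
  const auto period = std::chrono::duration_cast<clock::duration>(
      std::chrono::duration<double>(1.0 / rate_hz));

  std::vector<char> payload(frame_size, 0);
  // Placeholder for the CORBA oneway invocation (method(in octets payload)).
  std::function<void(const std::vector<char>&)> send_frame =
      [](const std::vector<char>&) { /* stub->method(payload) */ };

  int missed = 0;
  auto next_deadline = clock::now() + period;
  for (int i = 0; i < total_frames; ++i) {
    send_frame(payload);
    if (clock::now() > next_deadline)
      ++missed;                                       // invocation overran its period
    std::this_thread::sleep_until(next_deadline);
    next_deadline += period;
  }
  std::printf("missed %d of %d deadlines\n", missed, total_frames);
  return 0;
}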
Protocol = DIOP, Experiment = Paced Invocations
Network = Both links are up
[Chart: Frames Dropped (%) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Since network capacity was not exceeded, no DIOP packets were dropped
• Cannot measure missed deadlines when using DIOP because the client always succeeds in sending the frame, though the frame may not reach the server
Protocol = DIOP, Experiment = Paced Invocations
Network = Both links are up
[Chart: Max Inter-Frame Delay (msec) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Very low inter-frame delay
• DIOP is a good choice for reliable links or when low latency is more desirable than reliable delivery
• Drawback: frame size limited to about 8k
Protocol = IIOP, Experiment = Paced Invocations
Network = Both links are up
[Chart: Missed Deadlines (%) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Under normal conditions, no deadlines were missed
Protocol = IIOP, Experiment = Paced Invocations
Network = Both links are up
[Chart: Max Inter-Frame Delay (msec) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Very low inter-frame delay
• IIOP is a good choice in most normal situations
Protocol = SCIOP, Experiment = Paced Invocations
Network = Both links are up
[Chart: Missed Deadlines (%) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Under normal conditions, very comparable to DIOP and IIOP
Protocol = SCIOP, Experiment = Paced Invocations
Network = Both links are up
[Chart: Max Inter-Frame Delay (msec) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Under normal conditions, very comparable to DIOP and IIOP
Summary of Experiments under Normal Network Conditions
• Under normal conditions, performance of DIOP, IIOP, and SCIOP is quite similar
• No disadvantage of using SCIOP under normal conditions
Protocol = DIOP, Experiment = Paced Invocations
Network = 1% packet loss on 1st link
[Chart: Frames Dropped (%) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Packet loss introduced at the Traffic Shaping Node causes frames to be dropped
• 1% to 7% of frames were dropped – higher loss for bigger frames
Protocol = DIOP, Experiment = Paced Invocations
Network = 1% packet loss on 1st link
[Chart: Max Inter-Frame Delay (msec) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Increase in inter-frame delay due to lost frames
• Slowest invocation rate has highest inter-frame delay
Protocol = IIOP, Experiment = Paced Invocations
Network = 1% packet loss on 1st link
[Chart: Missed Deadlines (%) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Packet loss did not have significant impact on smaller frames or slower invocation rates since there is enough time for the lost packets to be retransmitted
• Packet loss did impact larger frames, especially the ones being transmitted at faster rates
  – 6% of 64k-byte frames at 100 Hz missed their deadlines
Protocol = IIOP, Experiment = Paced Invocations
Network = 1% packet loss on 1st link
[Chart: Max Inter-Frame Delay (msec) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• IIOP does not recover well from lost packets
• 720 msec delay for some frames (only 13 msec under normal conditions)
Protocol = SCIOP, Experiment = Paced Invocations
Network = 1% packet loss on 1st link
[Chart: Missed Deadlines (%) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• SCIOP was able to use the redundant link during packet loss on the primary link
• No deadlines were missed
Protocol = SCIOP, Experiment = Paced Invocations
Network = 1% packet loss on 1st link
[Chart: Max Inter-Frame Delay (msec) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Inter-frame delay in this experiment is very comparable to the inter-frame delay under normal conditions (26 msec vs. 14 msec)
Summary of Experiments under 1% Packet Loss on 1st Link
• Client was able to meet all of its invocation deadlines when using SCIOP
• DIOP dropped up to 7% of frames
• IIOP missed up to 6% of deadlines
Protocol = DIOP, Experiment = Paced Invocations
Network = Systemic link failure
[Chart: Frames Dropped (%) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Link was down for 33% of the time
• 33% of frames were dropped
Protocol = DIOP, Experiment = Paced Invocations
Network = Systemic link failure
[Chart: Max Inter-Frame Delay (msec) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Link was down for 2 seconds
• Inter-frame delay was about 2 seconds
Protocol = IIOP, Experiment = Paced Invocations
Network = Systemic link failure
[Chart: Missed Deadlines (%) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Link failure has significant impact on all invocation rates and frame sizes
• Impact is less visible for smaller frames at slower rates because IIOP is able to buffer packets, thus allowing the client application to make progress
• Up to 58% of deadlines are missed for larger frames at faster rates
Protocol = IIOP, Experiment = Paced Invocations
Network = Systemic link failure
[Chart: Max Inter-Frame Delay (msec) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• IIOP does not recover well from temporary link loss
• Maximum inter-frame delay approaching 4 seconds
Protocol = SCIOP, Experiment = Paced Invocations
Network = Systemic link failure
[Chart: Missed Deadlines (%) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• SCIOP was able to use the redundant link during link failure
• No deadlines were missed
Protocol = SCIOP, Experiment = Paced Invocations
Network = Systemic link failure
[Chart: Max Inter-Frame Delay (msec) vs. frame size (0-64k bytes) at 5, 25, 50, 75, and 100 Hertz]
• Inter-frame delay never exceeded 40 msec
Summary of Experiments under Systemic Link Failure
• Client was able to meet all of its invocation deadlines when using SCIOP
• DIOP dropped up to 33% of frames
• IIOP missed up to 58% of deadlines
Throughput Tests
• Emulating applications that want to get bulk data from one machine to another as quickly as possible
• Two protocols: IIOP, SCIOP
  – DIOP not included because it is unreliable
• Frame size was varied from 1 to 64k bytes
• Client was sending data continuously
• IDL interface
  – oneway void method(in octets payload)
  – void twoway_sync()
• Experiment measures (a measurement sketch follows this list)
  – Time required by the client to send a large amount of data to the server
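As an illustration of the measurement (not the actual test code), a throughput client might push oneway frames as fast as possible, issue an occasional two-way call to keep the sender from racing ahead of the transport, and report Mbps. The helper names, frame count, and sync interval below are placeholders.

// Hedged sketch of the throughput measurement: stream oneway frames, call
// twoway_sync() periodically for flow control, and report Mbps.
#include <chrono>
#include <cstdio>
#include <vector>

static void send_frame(const std::vector<char>&) { /* oneway method(payload) */ }
static void twoway_sync()                        { /* blocking round trip    */ }

int main()
{
  const std::size_t frame_size = 64 * 1024;     // 1..64k bytes in the experiments
  const int total_frames = 10000;
  const int sync_every = 100;                   // occasional twoway to flush the pipe
  std::vector<char> payload(frame_size, 0);

  auto start = std::chrono::steady_clock::now();
  for (int i = 1; i <= total_frames; ++i) {
    send_frame(payload);
    if (i % sync_every == 0)
      twoway_sync();
  }
  twoway_sync();                                // make sure everything arrived
  double elapsed = std::chrono::duration<double>(
      std::chrono::steady_clock::now() - start).count();

  double mbps = (static_cast<double>(total_frames) * frame_size * 8) / (elapsed * 1e6);
  std::printf("%.1f Mbps for %zu-byte frames\n", mbps, frame_size);
  return 0;
}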
Experiment = Throughput
Network = Both links are up
[Chart: Throughput (Mbps) vs. frame size (1k-64k bytes) for IIOP and SCIOP]
• IIOP peaks around 94 Mbps
• SCIOP is up to 28% slower for smaller frames
• SCIOP is able to utilize both links for a combined throughput up to 122 Mbps
Experiment = Throughput
Network = 1% packet loss on 1st link
[Chart: Throughput (Mbps) vs. frame size (1k-64k bytes) for IIOP and SCIOP]
• 1% packet loss reduces maximum IIOP bandwidth to 87 Mbps (an 8% drop)
• IIOP outperforms SCIOP for smaller frames
• SCIOP maintains high throughput for larger frames, maxing out at 100 Mbps
Experiment = Throughput
Network = Systemic link failure
[Chart: Throughput (Mbps) vs. frame size (1k-64k bytes) for IIOP and SCIOP]
• Link failure causes maximum IIOP throughput to drop to 38 Mbps (a 60% drop)
• SCIOP outperforms IIOP for all frame sizes
• SCIOP maxes out at 83 Mbps
Latency Tests
• Emulating applications that want to send a message and get a reply as quickly as possible
• Two protocols: IIOP, SCIOP
  – DIOP not included because it is unreliable
• Frame size was varied from 0 to 64k bytes
• Client sends data and waits for the reply
• IDL interface
  – void method(inout octets payload)
• Experiment measures (a timing sketch follows this list)
  – Time required by the client to send to and receive a frame from the server
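A hedged sketch of the round-trip measurement, with the two-way invocation replaced by a placeholder stub:

// Hedged sketch: time a single two-way invocation (method(inout payload)) and
// report the round-trip latency.
#include <chrono>
#include <cstdio>
#include <vector>

static void method(std::vector<char>&) { /* twoway CORBA call over IIOP/SCIOP */ }

int main()
{
  std::vector<char> payload(64 * 1024, 0);        // 0..64k bytes in the experiments

  auto start = std::chrono::steady_clock::now();
  method(payload);                                // send frame, wait for the echoed reply
  double msec = std::chrono::duration<double, std::milli>(
      std::chrono::steady_clock::now() - start).count();

  std::printf("roundtrip latency: %.3f msec\n", msec);
  return 0;
}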
Experiment = Latency
Network = Both links are up
[Chart: Roundtrip Latency (msec, log scale) vs. frame size (0-64k bytes) for IIOP and SCIOP]
• Mean IIOP latency comparable to SCIOP
• For larger frames, maximum latency for SCIOP is 15 times the maximum latency for IIOP
Experiment = Latency
Network = 1% packet loss on 1st link
[Chart: Roundtrip Latency (msec, log scale) vs. frame size (0-64k bytes) for IIOP and SCIOP]
• 1% packet loss causes maximum IIOP latency to reach about 1 second
• SCIOP outperforms IIOP in both average and maximum latency for all frame sizes
Experiment = Latency
Network = Systemic link failure
[Chart: Roundtrip Latency (msec, log scale) vs. frame size (0-64k bytes) for IIOP and SCIOP]
• Link failure causes maximum IIOP latency to reach about 4 seconds
• SCIOP outperforms IIOP in both average and maximum latency for all frame sizes
Conclusions
• SCTP combines the best features of TCP and UDP and adds several new useful features
• SCTP can be used to improve network fault tolerance and improve QoS
• Under normal network conditions, SCTP doesn't seem to have substantial overhead and compares well with TCP and UDP
  – In addition, it can utilize redundant links to provide higher effective throughput
• Under packet loss and link failures, SCTP provides automatic failover to redundant links, providing superior latency and bandwidth relative to TCP and UDP
• Integrating SCTP as a pluggable protocol into middleware allows effortless and seamless integration for DRE applications
• SCTP is available when using TAO, CIAO, and AVStreaming
• We can continue to use other network QoS mechanisms, such as DiffServ and IntServ, with SCTP
• Both the OpenSS7 and (especially) the LKSCTP implementations need improvement to reduce node wedging
• Emulab provides an excellent environment for testing SCTP
Issues in SCTP Integration
• SCTP implementations not robust
– Crashes during failover
– Differences in conformance to SCTP specification
– Limited technical support
Future
• Load-sharing in SCTP
  – Concurrent Multipath Transfer
• Adaptive failover
• Ongoing research at the Protocol Engineering Laboratory, University of Delaware
Emulab Network Simulation (NS) Configuration for SCTP Tests
# Create a simulator object
set ns [new Simulator]
source tb_compat.tcl
# The nodes and hardware
set sender [$ns node]
set distributor [$ns node]
set receiver [$ns node]
# Special hardware setup instructions
tb-set-hardware $sender pc850
tb-set-hardware $distributor pc850
tb-set-hardware $receiver pc850
# Special OS setup instructions
tb-set-node-os $sender BBN-RH9-SS7-8
tb-set-node-os $distributor BBN-RH9-SS7-8
tb-set-node-os $receiver BBN-RH9-SS7-8
# SCTP setup
tb-set-node-startcmd $sender "sudo /groups/pces/uav_oep/oomworks/openss7debug/TAO/performance-tests/Protocols/set_sctp_params.sh"
tb-set-node-startcmd $distributor "sudo /groups/pces/uav_oep/oomworks/openss7debug/TAO/performance-tests/Protocols/set_sctp_params.sh"
tb-set-node-startcmd $receiver "sudo /groups/pces/uav_oep/oomworks/openss7debug/TAO/performance-tests/Protocols/set_sctp_params.sh"
# The links
set link1 [$ns duplex-link $sender $distributor 100Mb 0ms DropTail]
set link2 [$ns duplex-link $sender $distributor 100Mb 0ms DropTail]
set link3 [$ns duplex-link $distributor $receiver 100Mb 0ms DropTail]
set link4 [$ns duplex-link $distributor $receiver 100Mb 0ms DropTail]
set link5 [$ns duplex-link $sender $receiver 100Mb 0ms DropTail]
set link6 [$ns duplex-link $sender $receiver 100Mb 0ms DropTail]
# The addresses
tb-set-ip-link $sender $link1 10.1.1.1
tb-set-ip-link $sender $link2 10.1.11.1
tb-set-ip-link $sender $link5 10.1.3.1
tb-set-ip-link $sender $link6 10.1.33.1
tb-set-ip-link $distributor $link1 10.1.1.2
tb-set-ip-link $distributor $link2 10.1.11.2
tb-set-ip-link $distributor $link3 10.1.2.1
tb-set-ip-link $distributor $link4 10.1.22.1
tb-set-ip-link $receiver $link3 10.1.2.2
tb-set-ip-link $receiver $link4 10.1.22.2
tb-set-ip-link $receiver $link5 10.1.3.2
tb-set-ip-link $receiver $link6 10.1.33.2
# Since the links are full speed, emulab will not place traffic
# shaping nodes unless we trick it to do so.
$ns at 0 "$link1 up"
$ns at 0 "$link2 up"
$ns at 0 "$link3 up"
$ns at 0 "$link4 up"
$ns at 0 "$link5 up"
$ns at 0 "$link6 up"
$ns run
Emulab Network Simulation (NS) Configuration for DiffServ Tests
# Create a simulator object
set ns [new Simulator]
source tb_compat.tcl
# Topology creation and agent definitions, etc. here
# The nodes
set sender1 [$ns node]
set sender2 [$ns node]
set sender3 [$ns node]
set router [$ns node]
set receiver [$ns node]
# Special hardware setup instructions
tb-set-hardware $sender1 pc850
tb-set-hardware $sender2 pc850
tb-set-hardware $sender3 pc850
tb-set-hardware $router pc850
tb-set-hardware $receiver pc850
# Special OS setup instructions
tb-set-node-os $sender1 BBN-RH9-SS7-8
tb-set-node-os $sender2 BBN-RH9-SS7-8
tb-set-node-os $sender3 BBN-RH9-SS7-8
tb-set-node-os $router FBSD48-ALTQ
tb-set-node-os $receiver BBN-RH9-SS7-8
# The links
set sender1-router [$ns duplex-link $sender1 $router 100Mb 0ms DropTail]
set sender2-router [$ns duplex-link $sender2 $router 100Mb 0ms DropTail]
set sender3-router [$ns duplex-link $sender3 $router 100Mb 0ms DropTail]
set router-receiver [$ns duplex-link $router $receiver 100Mb 0ms DropTail]
# The addresses (braces are needed because the link names contain '-')
tb-set-ip-link $sender1 ${sender1-router} 10.1.1.1
tb-set-ip-link $router ${sender1-router} 10.1.1.2
tb-set-ip-link $sender2 ${sender2-router} 10.1.2.1
tb-set-ip-link $router ${sender2-router} 10.1.2.2
tb-set-ip-link $sender3 ${sender3-router} 10.1.3.1
tb-set-ip-link $router ${sender3-router} 10.1.3.2
tb-set-ip-link $router ${router-receiver} 10.1.9.1
tb-set-ip-link $receiver ${router-receiver} 10.1.9.2
# Run the simulation
$ns rtproto Static
$ns run
Toggling Network Links in Emulab
use Time::localtime;
use Time::Local;

# Emulab project/experiment used by the tevc event client
$project = "pces/sctp";

# Commands that toggle the two links via the Emulab event system
$L1_DOWN = "tevc -e $project now link1 down &";
$L1_UP   = "tevc -e $project now link1 up &";
$L2_DOWN = "tevc -e $project now link2 down &";
$L2_UP   = "tevc -e $project now link2 up &";

# Amount of delay between toggling the links (seconds)
$sleeptime = 2;
$phasetime = 3;

# Remember when the run started so messages are stamped relative to it
$now = localtime;
$start_seconds = timelocal ($now->sec, $now->min, $now->hour, $now->mday,
                            $now->mon, $now->year);

# Initial offset before the toggling begins
select undef, undef, undef, $phasetime;

# Each link is up for 4 seconds and down for 2 seconds; one link is always up
for (;;) {
    print_with_time ("L2 UP");
    system ($L2_UP);
    select undef, undef, undef, $sleeptime / 2;

    print_with_time ("L1 DOWN");
    system ($L1_DOWN);
    select undef, undef, undef, $sleeptime;

    print_with_time ("L1 UP");
    system ($L1_UP);
    select undef, undef, undef, $sleeptime / 2;

    print_with_time ("L2 DOWN");
    system ($L2_DOWN);
    select undef, undef, undef, $sleeptime;
}
sub print_with_time
{
$now = localtime;
$now_seconds = timelocal ($now->sec, $now->min, $now->hour, $now->mday,
$now->mon, $now->year);
print "@_ at ", $now_seconds - $start_seconds, "\n";
}