Virtualizing a Wireless Network:
The Time-Division Approach
Suman Banerjee, Anmol Chaturvedi, Greg Smith, Arunesh Mishra
Contact email: [email protected]
http://www.cs.wisc.edu/~suman
Department of Computer Sciences
University of Wisconsin-Madison
Wisconsin Wireless and NetworkinG Systems (WiNGS) Laboratory
Virtualizing a wireless network
• Virtualize resources of a node
• Virtualize the medium
– Particularly critical in wireless environments
• Approaches
– Time
– Frequency
– Space
– Code
[Figure, courtesy ORBIT: experiments Expt-1, Expt-2, Expt-3 multiplexed along the Time axis and across the Space, Frequency, and Code dimensions]
TDM-based virtualization
• Need synchronous behavior between node interfaces
– Between transmitter and receiver
– Between all interferers and receiver
[Figure: node pairs A–B time-sharing Expt-1 and Expt-2, with nearby nodes C and D as potential interferers]
Problem statement
To create a TDM-based virtualized wireless environment as an intrinsic capability in GENI
• This work is in the context of TDM virtualization of ORBIT
Current ORBIT schematic
[Figure: the Controller runs the UI and a single nodeHandler, which drives a nodeAgent on each grid node]
• Manual scheduling
• Single experiment on grid
Our TDM-ORBIT schematic
[Figure: the Controller runs the UI, the Master Overseer, and one nodeHandler per experiment; each node runs a Node Overseer plus one VM per experiment, each VM hosting a nodeAgent. VM = User-Mode Linux]
• Virtualization: abstraction + accounting
• Fine-grained scheduling for multiple expts on grid
• Asynchronous submission
Overseers
Master overseer: policy-maker that governs the grid
[Figure: the UI submits experiments to the Master Overseer on the Controller, which queues and schedules them and hands them to per-experiment handlers; handlers multicast commands to the nodes, and a monitor collects feedback]
Node overseer: mostly mechanism, no policy
– Add/remove experiment VMs
– Swap experiment VMs
– Monitor node health and experiment status
Virtualization
• Why not process-level virtualization?
– No isolation
• Must share FS, address space, network stack, etc.
– No cohesive “schedulable entity”
• What other alternatives are there?
– Other virtualization platforms (VMware, Xen, etc.)
TDM: Virtualization
• Virtualization
– Experiment runs inside a User-Mode Linux VM
• Wireless configuration
– Guest has no way to read or set wifi config!
– Wireless extensions in the virtual driver relay ioctls to the host kernel
[Figure: iwconfig in the guest VM issues an ioctl() against virt_net in the UML kernel, which tunnels it to net_80211 in the host kernel]
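For concreteness, here is a minimal user-space sketch of the kind of wireless-extensions ioctl that iwconfig issues inside the guest; the interface name "wlan0" is an assumption, and the virt_net-to-host relay is kernel-side, so it is not shown.

/* Minimal sketch: read the ESSID via a wireless-extensions ioctl,
 * as iwconfig does inside the guest VM. In the TDM-ORBIT design this
 * ioctl is intercepted by the virtual driver and relayed to the host
 * kernel; that relay is kernel-side and not shown here.
 * The interface name "wlan0" is an assumption for illustration. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/wireless.h>

int main(void)
{
    char essid[IW_ESSID_MAX_SIZE + 1] = {0};
    struct iwreq wrq;
    int sock = socket(AF_INET, SOCK_DGRAM, 0);  /* any socket serves as an ioctl handle */
    if (sock < 0) { perror("socket"); return 1; }

    memset(&wrq, 0, sizeof(wrq));
    strncpy(wrq.ifr_name, "wlan0", IFNAMSIZ - 1);
    wrq.u.essid.pointer = essid;
    wrq.u.essid.length  = IW_ESSID_MAX_SIZE;

    if (ioctl(sock, SIOCGIWESSID, &wrq) < 0) {
        perror("SIOCGIWESSID");
        close(sock);
        return 1;
    }
    printf("ESSID: %s\n", essid);
    close(sock);
    return 0;
}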
TDM: Routing ingress
[Figure: experiment traffic arriving on the node's eth interface, addressed to 192.169.x.y, passes through iptables and the routing table, which DNAT it (192.169 -> 192.168) to the right VM at 192.168.x.y; nodeHandler commands multicast on 10.10.x.y are forwarded by mrouted to all VMs in the mcast group; the wifi channel carries the experiment's own traffic]
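The DNAT step above is a simple address rewrite: the externally visible per-VM address 192.169.x.y maps to the VM's internal address 192.168.x.y. A small illustrative sketch follows; on the node this rewrite is done by the iptables rule in netfilter, not by user-space code like this.

/* Sketch of the address mapping the node's iptables DNAT rule performs:
 * traffic addressed to 192.169.x.y is rewritten to the corresponding
 * VM address 192.168.x.y (second octet 169 -> 168). Illustration only. */
#include <arpa/inet.h>
#include <stdio.h>

/* Map an external 192.169.x.y address to the internal 192.168.x.y VM address. */
static int dnat_map(const char *ext, char *out, socklen_t outlen)
{
    struct in_addr a;
    if (inet_pton(AF_INET, ext, &a) != 1) return -1;
    unsigned char *o = (unsigned char *)&a.s_addr;
    if (o[0] != 192 || o[1] != 169) return -1;  /* only 192.169/16 is mapped */
    o[1] = 168;                                 /* DNAT: 192.169 -> 192.168 */
    return inet_ntop(AF_INET, &a, out, outlen) ? 0 : -1;
}

int main(void)
{
    char vm_addr[INET_ADDRSTRLEN];
    if (dnat_map("192.169.1.2", vm_addr, sizeof(vm_addr)) == 0)
        printf("192.169.1.2 -> %s\n", vm_addr);  /* prints 192.168.1.2 */
    return 0;
}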
Synchronization challenges
• Without tight synchronization, experiment packets might be dropped or misdirected
• Host: VMs should start/stop at exactly the same time
– Time spent restoring wifi config varies
– Operating system is not an RTOS
– Ruby is interpreted and garbage-collected
• Network latency for overseer commands
– Mean: 3.9 ms, Median: 2.7 ms, Std-dev: 6 ms
• Swap time between experiments
Synchronization: Swap time I
• Variables involved in swap time
– Largest contributor: wifi configuration time
• More differences in wifi configuration = longer config time
– Network latency for master commands
– Ruby latency in executing commands
Synchronization: Swap Time II
• We can eliminate wifi config latency and reduce the effects of network and Ruby latencies
• "Swap gaps"
– A configuration timing buffer
– VMs not running, but incoming packets are still received and routed to the right place (see the sketch below)
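A minimal sketch of how a node overseer might implement swap gaps, assuming fixed slice and gap lengths; stop_vm, restore_wifi_config, and start_vm are hypothetical placeholders, not the actual ORBIT code. Because VMs start and stop at absolute deadlines, the gap absorbs the variable wifi-configuration latency as long as it fits within the gap.

/* Sketch of a node-overseer swap loop with a "swap gap". Incoming
 * packets during the gap are handled by the host's routing
 * (iptables/mrouted), so nothing here needs to touch them. */
#include <time.h>

#define SLICE_SEC 30   /* length of one experiment time slice (assumed) */
#define GAP_SEC    2   /* swap gap: budget for wifi reconfiguration (assumed) */

extern void stop_vm(int expt);               /* hypothetical placeholders */
extern void restore_wifi_config(int expt);   /* variable-latency step */
extern void start_vm(int expt);

static void sleep_until(struct timespec *t)
{
    while (clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, t, NULL))
        ;  /* retry if interrupted by a signal */
}

void swap_loop(const int *expts, int n)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    for (int i = 0; ; i = (i + 1) % n) {
        start_vm(expts[i]);
        deadline.tv_sec += SLICE_SEC;
        sleep_until(&deadline);                   /* run the slice to its boundary */
        stop_vm(expts[i]);
        restore_wifi_config(expts[(i + 1) % n]);  /* do this inside the gap */
        deadline.tv_sec += GAP_SEC;
        sleep_until(&deadline);                   /* absorb config-time variance */
    }
}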
Ruby Network Latency
• Inside a VM, Ruby shows anomalous network latency
– Example below: tcpdump output (IP lines) interleaved with a simple Ruby recv loop ("received" lines); some packets take 24+ seconds to reach Ruby
– No delays with C
– Cause yet unknown
00.000 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
00.035 received 30 bytes
01.037 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 30
01.065 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 56
01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 40
01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 44
11.018 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
12.071 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
23.195 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
24.273 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
26.192 received 30 bytes
34.282 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
35.332 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
40.431 received 56 bytes
40.435 received 40 bytes
40.438 received 45 bytes
40.450 received 44 bytes
40.458 received 30 bytes
40.462 received 45 bytes
40.470 received 30 bytes
40.476 received 45 bytes
40.480 received 30 bytes
40.484 received 45 bytes
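Since the slide reports no delays with C, a minimal C counterpart of the Ruby recv loop might look like the sketch below; the group 224.4.0.1 and port 9006 are taken from the trace above.

/* Minimal C multicast receive loop, mirroring the Ruby loop from the
 * slide; run inside the VM to compare arrival times against tcpdump. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9006);                  /* port from the trace */
    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    struct ip_mreq mreq;                          /* join the multicast group */
    inet_pton(AF_INET, "224.4.0.1", &mreq.imr_multiaddr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    char buf[2048];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n < 0) break;
        printf("received %zd bytes\n", n);        /* same format as the trace */
    }
    close(sock);
    return 0;
}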
UI screen shots
[Screenshots: the UI at time slice 1 and at time slice 2]
Performance: Runtime Breakdown
• Booting a VM is fast
• Each phase slightly longer in new system
– Ruby network delay causes significant variance in data set
– Handler must approximate sleep times
Performance: Overall Duration
• Advantages
– Boot duration (VMs boot quickly)
• Disadvantages
– Swap gaps
Future work: short term
• Improving synchrony between nodes
– More robust protocol
– Porting Ruby code to C, where appropriate
• Dual interfaces
– Nodes equipped with two cards
– Switch between them during swaps, so that interface configuration can be preloaded at zero cost
Dual interfaces
[Figure: three experiment VMs (each running a nodeAgent) sit behind routing logic that the Node Overseer points at the "current card": wifi0 (ESSID "expA", mode B, channel 6) or wifi1 (ESSID "expB", mode G, channel 11)]
Future work: long term
• Greater scalability
– Allow each experiment to use, say, 100s of nodes to emulate 1000s of nodes
– Intra-experiment TDM virtualization
– Initial evaluation is quite promising
Intra-experiment TDM
Any communication topology can be modeled as a graph
Intra-experiment TDM
We can emulate all communication on the topology accurately, as long as we can emulate the reception behavior of the node with the highest degree
Intra-experiment TDM
Time-share of different logical nodes to physical facility nodes
[Figure: the logical topology mapped onto a testbed of 8 nodes, shown over Time Units 1, 2, and 3]
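To make the time-sharing concrete, here is a minimal sketch of the simplest possible assignment: round-robin over the 8 physical nodes. The actual scheduler is a mapping problem (see the next slide); the logical-node count and the round-robin policy here are assumptions, chosen only to illustrate the idea.

/* Sketch of the simplest time-sharing map: logical node i runs on
 * physical node i % P during time unit i / P, giving ceil(L/P) time
 * units overall. */
#include <stdio.h>

#define PHYS 8   /* testbed of 8 nodes, as in the slides */

int main(void)
{
    int logical = 20;  /* number of logical nodes to emulate (assumed) */
    int units = (logical + PHYS - 1) / PHYS;  /* ceil(L/P) time units */
    printf("%d logical nodes on %d physical nodes -> %d time units\n",
           logical, PHYS, units);
    for (int i = 0; i < logical; i++)
        printf("logical node %2d -> physical node %d, time unit %d\n",
               i, i % PHYS, i / PHYS);
    return 0;
}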
Some challenges
• How to perform the scheduling?
– A mapping problem
• How to achieve the right degree of synchronization?
– Use of a fast backbone and real-time approaches
• What are the implications of slowdown?
– Bounded by the number of partitions
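As an illustrative example (numbers assumed, not from the talk): emulating 1000 logical nodes on a 100-node facility needs ceil(1000/100) = 10 partitions, so an experiment runs at most 10x slower than real time.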
Conclusions
• Increased utilization through sharing
• More careful tuning needed for smaller time slices
– Need chipset vendor support for very small time slices
• Non-real-time apps, or apps with coarse real-time needs, are best suited to this virtualization approach