Lecture 2 Addendum:
Software Platforms
Anish Arora
CIS788.11J
Introduction to Wireless Sensor Networks
Lecture uses slides from tutorials
prepared by authors of these platforms
Outline
• Platforms (contd.)
   - SOS (slides from UCLA)
   - Virtual machines (Maté) (slides from UCB)
   - Contiki (slides from Uppsala)
2
References
• SOS Mobisys paper
• SOS webpage
• Mate: A Virtual Machine for Sensor Networks (ASPLOS)
• Mate webpage
• Contiki EmNets paper
• Contiki webpage
3
SOS: Motivation and Key Feature
• Post-deployment software updates are necessary to
   - customize the system to the environment
   - upgrade features
   - remove bugs
   - re-task system
• Remote reprogramming is desirable
• Approach: remotely insert binary modules into the running kernel
   - software reconfiguration without interrupting system operation
   - no stop and re-boot, unlike differential patching
• Performance should be superior to virtual machines
4
Architecture Overview
(Diagram: dynamically loadable modules (e.g. tree routing, application, light sensor) run above the static kernel, which comprises kernel services (dynamic memory, scheduler, function pointer control blocks, sensor manager), device drivers (timer, serial framer, comm. stack), and a hardware abstraction layer (clock, UART, ADC, SPI, I2C).)

Static Kernel
• Provides hardware abstraction & common services
• Maintains data structures to enable module loading
• Costly to modify after deployment

Dynamically Loadable Modules
• Drivers, protocols, and applications
• Inexpensive to modify after deployment
• Position independent
5
SOS Kernel
• Hardware Abstraction Layer (HAL)
   - Clock, UART, ADC, SPI, etc.
• Low layer device drivers interface with HAL
   - Timer, serial framer, communications stack, etc.
• Kernel services
   - Dynamic memory management
   - Scheduling
   - Function control blocks
6
Kernel Services: Memory Management
• Fixed-partition dynamic memory allocation
   - Constant allocation time
   - Low overhead
• Memory management features
   - Guard bytes for run-time memory overflow checks
   - Ownership tracking
   - Garbage collection on completion

pkt = (uint8_t*)ker_malloc(hdr_size + sizeof(SurgeMsg), SURGE_MOD_PID);
7
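To make the fixed-partition idea above concrete, here is a minimal, self-contained C sketch of a fixed-block allocator with guard bytes and ownership tracking. It is an illustration, not the SOS kernel code: the block count, block size, and guard value are arbitrary assumptions.

#include <stdint.h>
#include <stddef.h>

#define NUM_BLOCKS  16
#define BLOCK_SIZE  32
#define GUARD_BYTE  0xAA

typedef struct block {
    struct block *next;          /* free-list link, valid only while the block is free */
    uint8_t owner;               /* pid of the module that owns the block */
    uint8_t data[BLOCK_SIZE];
    uint8_t guard;               /* checked at free time to detect overflow */
} block_t;

static block_t pool[NUM_BLOCKS];
static block_t *free_list;

void pool_init(void) {
    for (int i = 0; i < NUM_BLOCKS; i++)
        pool[i].next = (i + 1 < NUM_BLOCKS) ? &pool[i + 1] : NULL;
    free_list = &pool[0];
}

void *blk_malloc(uint8_t size, uint8_t owner_pid) {
    if (size > BLOCK_SIZE || free_list == NULL) return NULL;
    block_t *b = free_list;      /* constant time: pop the free list */
    free_list = b->next;
    b->owner = owner_pid;
    b->guard = GUARD_BYTE;
    return b->data;
}

int8_t blk_free(void *ptr, uint8_t caller_pid) {
    block_t *b = (block_t *)((uint8_t *)ptr - offsetof(block_t, data));
    if (b->owner != caller_pid) return -1;   /* ownership tracking */
    if (b->guard != GUARD_BYTE) return -2;   /* guard byte trampled: overflow detected */
    b->next = free_list;                     /* constant time: push back onto the free list */
    free_list = b;
    return 0;
}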
Kernel Services: Scheduling
• SOS implements non-preemptive priority scheduling via priority queues
• Event served when there is no higher priority event
   - Low priority queue for scheduling most events
   - High priority queue for time critical events, e.g., h/w interrupts & sensitive timers
• Prevents execution in interrupt contexts

post_long(TREE_ROUTING_PID, SURGE_MOD_PID, MSG_SEND_PACKET,
          hdr_size + sizeof(SurgeMsg), (void*)packet, SOS_MSG_DYM_MANAGED);
8
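As a companion to the scheduling slide, the loop below shows two-level, non-preemptive, run-to-completion dispatch in plain C. The queue layout, post_msg(), and dispatch() are illustrative assumptions, not the SOS implementation.

#include <stdint.h>
#include <stddef.h>

typedef struct msg {
    struct msg *next;
    uint8_t dst_pid;                  /* destination module */
    uint8_t type;                     /* e.g. a timer or packet event */
    void *data;
} msg_t;

typedef struct { msg_t *head, *tail; } queue_t;

static queue_t high_q;                /* h/w interrupts & sensitive timers */
static queue_t low_q;                 /* most other events */

void post_msg(queue_t *q, msg_t *m) { /* what a post_*() call boils down to */
    m->next = NULL;
    if (q->tail) q->tail->next = m; else q->head = m;
    q->tail = m;
}

static msg_t *dequeue(queue_t *q) {
    msg_t *m = q->head;
    if (m) { q->head = m->next; if (!q->head) q->tail = NULL; }
    return m;
}

static void dispatch(msg_t *m) {
    (void)m;                          /* look up the handler for m->dst_pid and run it (elided) */
}

void scheduler_loop(void) {
    for (;;) {
        msg_t *m = dequeue(&high_q);  /* an event is served only when no higher priority event waits */
        if (m == NULL) m = dequeue(&low_q);
        if (m != NULL) dispatch(m);   /* non-preemptive: the handler runs to completion */
        /* else: sleep until an interrupt posts the next message */
    }
}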
Modules
• Each module is uniquely identified by its ID or pid
• Has private state
• Represented by a message handler with the prototype:
  int8_t handler(void *private_state, Message *msg)
• Return value follows errno
   - SOS_OK for success; -EINVAL, -ENOMEM, etc. for failure
9
Kernel Services: Module Linking
• Orthogonal to module distribution protocol
• Kernel stores new module in a free block located in program memory and critical information about the module in the module table
• Kernel calls initialization routine for module
• Publish functions for other parts of the system to use
  char tmp_string[] = {'C', 'v', 'v', 0};
  ker_register_fn(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string, (fn_ptr_t)tr_get_header_size);
• Subscribe to functions supplied by other modules
  char tmp_string[] = {'C', 'v', 'v', 0};
  s->get_hdr_size = (func_u8_t*)ker_get_handle(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string);
• Set initial timers and schedule events
10
Module–to–Kernel Communication
(Diagram: Module A issues system calls through the system jump table into the HW-specific API of the SOS kernel; hardware interrupts and system messages reach the module via a high priority message buffer.)

• Kernel provides system services and access to hardware
  ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
  ker_led(LED_YELLOW_OFF);
• Kernel jump table re-directs system calls to handlers
   - upgrade kernel independent of the module
• Interrupts & messages from kernel dispatched by a high priority message buffer
   - low latency
   - concurrency safe operation
11
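A tiny generic C illustration of the jump-table indirection described above (not the actual SOS table layout or API): modules always call through a fixed slot, so the kernel can replace the handler behind the slot without relinking any module.

#include <stdint.h>

typedef int8_t (*syscall_fn)(void *arg);

static int8_t led_handler_v1(void *arg) { (void)arg; /* drive the LED pin, original kernel */ return 0; }
static int8_t led_handler_v2(void *arg) { (void)arg; /* replacement from a kernel upgrade  */ return 0; }

/* The jump table sits at a fixed, well-known location in the kernel image. */
static syscall_fn jump_table[] = { led_handler_v1 };

/* What a module-side stub such as ker_led() boils down to: an indirect call
   through slot 0; the module binary never names led_handler_v1 directly. */
int8_t sys_led(void *arg) { return jump_table[0](arg); }

/* A kernel upgrade only rewrites the table entry; loaded modules stay untouched. */
void kernel_upgrade(void) { jump_table[0] = led_handler_v2; }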
Inter-Module Communication
(Diagram: Module A calls Module B through the module function pointer table (indirect function call); Module A posts to Module B through the message buffer.)

Inter-Module Function Calls
• Synchronous communication
• Kernel stores pointers to functions registered by modules
• Blocking calls with low latency
• Type-safe runtime function binding

Inter-Module Message Passing
• Asynchronous communication
• Messages dispatched by a two-level priority scheduler
• Suited for services with long latency
• Type safe binding through publish / subscribe interface
12
Synchronous Communication
(Diagram: one module registers a function in the module function pointer table (1); another subscribes (2) and later calls through it (3).)

• Module can register function for low latency blocking call (1)
• Modules which need such function can subscribe to it by getting function pointer pointer (i.e. **func) (2)
• When service is needed, module dereferences the function pointer pointer (3)
13
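The self-contained C sketch below walks through the register / subscribe / dereference steps (1)-(3) using the pointer-to-function-pointer pattern; the names and the stub-redirect behavior are illustrative, not the actual SOS data structures.

#include <stdio.h>
#include <stdint.h>

static uint8_t real_get_hdr_size(void) { return 8; }   /* provider module's implementation */
static uint8_t stub_get_hdr_size(void) { return 0; }   /* kernel stub used when no provider exists */

/* One slot of the "module function pointer table", owned by the kernel. */
static uint8_t (*fn_table_slot)(void) = stub_get_hdr_size;

int main(void)
{
    /* (1) the provider module registers its function */
    fn_table_slot = real_get_hdr_size;

    /* (2) the subscriber stores a pointer to the slot, i.e. a **func */
    uint8_t (**get_hdr_size)(void) = &fn_table_slot;

    /* (3) each call dereferences twice, so the kernel can retarget the slot
       (for example back to the stub) if the provider module is unloaded */
    printf("header size = %u\n", (unsigned)(*get_hdr_size)());

    fn_table_slot = stub_get_hdr_size;                  /* provider module removed */
    printf("header size = %u\n", (unsigned)(*get_hdr_size)());
    return 0;
}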
Asynchronous Communication
(Diagram: Module A, Module B, the network, a message queue, and a send queue; numbered steps (1)-(5) are referenced below.)

• Module is active when it is handling the message (2)(4)
• Message handling runs to completion and can only be interrupted by hardware interrupts
• Module can send message to another module (3) or send message to the network (5)
• Message can come from both network (1) and local host (3)
14
Module Safety
• Problem: modules can be remotely added, removed, & modified on deployed nodes
• Accessing a module
   - If module doesn't exist, kernel catches messages sent to it & handles dynamically allocated memory
   - If module exists but can't handle the message, then module's default handler gets message & kernel handles dynamically allocated memory
• Subscribing to a module's function
   - Publishing a function includes a type description that is stored in a function control block (FCB) table
   - Subscription attempts include type checks against corresponding FCB
   - Type changes/removal of published functions result in subscribers being redirected to system stub handler function specific to that type
   - Updates to functions w/ same type assumed to have same semantics
15
Module Library
• Some applications created by combining already written and tested modules
• SOS kernel facilitates loosely coupled modules
   - Passing of memory ownership
   - Efficient function and messaging interfaces
(Diagram: "Surge Application with Debugging" composed of the Surge, Photo Sensor, Memory Debug, and Tree Routing modules.)
16
Module Design
• Uses standard C
• Programs created by "wiring" modules together

#include <module.h>

typedef struct {
    uint8_t pid;
    uint8_t led_on;
} app_state;

DECL_MOD_STATE(app_state);
DECL_MOD_ID(BLINK_ID);

int8_t module(void *state, Message *msg) {
    app_state *s = (app_state*)state;
    switch (msg->type) {
    case MSG_INIT: {
        s->pid = msg->did;
        s->led_on = 0;
        ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
        break;
    }
    case MSG_FINAL: {
        ker_timer_stop(s->pid, 0);
        break;
    }
    case MSG_TIMER_TIMEOUT: {
        if (s->led_on == 1) {
            ker_led(LED_YELLOW_ON);
        } else {
            ker_led(LED_YELLOW_OFF);
        }
        s->led_on++;
        if (s->led_on > 1) s->led_on = 0;
        break;
    }
    default:
        return -EINVAL;
    }
    return SOS_OK;
}
17
Sensor Manager
(Diagram: Module A uses periodic access and Module B polled access to the sensor manager, which signals "data ready"; a device-specific MagSensor driver implements getData/dataReady and controls the ADC and I2C.)

• Enables sharing of sensor data between multiple modules
• Presents uniform data access API to diverse sensors
• Underlying device specific drivers register with the sensor manager
• Device specific sensor drivers control
   - Calibration
   - Data interpolation
• Sensor drivers are loadable: enables
   - post-deployment configuration of sensors
   - hot-swapping of sensors on a running node
18
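A generic C sketch of the registration idea behind the sensor manager (the function names and table layout are assumptions for illustration, not the SOS sensor API):

#include <stdint.h>
#include <stddef.h>

typedef struct {
    int8_t (*get_data)(void);            /* driver starts a sample, e.g. via the ADC or I2C */
    void   (*data_ready)(uint16_t val);  /* manager signals subscribed modules */
} sensor_driver_t;

#define MAX_SENSORS 4
static sensor_driver_t sensors[MAX_SENSORS];

/* A loadable, device-specific driver plugs itself in at run time. */
int8_t sensor_register(uint8_t sid, sensor_driver_t drv) {
    if (sid >= MAX_SENSORS) return -1;
    sensors[sid] = drv;
    return 0;
}

/* The uniform access call used by client modules, independent of the device. */
int8_t sensor_get_data(uint8_t sid) {
    if (sid >= MAX_SENSORS || sensors[sid].get_data == NULL) return -1;
    return sensors[sid].get_data();
}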
Application Level Performance
Comparison of application performance in SOS, TinyOS, and Mate VM
(Graphs: Surge Tree Formation Latency, Surge Forwarding Delay, Surge Packet Delivery Ratio)

Platform               ROM        RAM
SOS Core               20464 B    1163 B
Dynamic Memory Pool    -          1536 B
TinyOS with Deluge     21132 B    597 B
Mate VM                39746 B    3196 B
Memory footprint for base operating system with the ability to distribute and update node programs.

System     Active Time (in 1 min)   Active Time (%)   Overhead relative to TOS (%)
TinyOS     3.31 sec                 5.22%             NA
SOS        3.50 sec                 5.84%             5.70%
Mate VM    3.68 sec                 6.13%             11.00%
CPU active time for surge application.
19
Reconfiguration Performance

Module Name     Code Size (Bytes)
sample_send     568
tree_routing    2242
photo_sensor    372
Energy (mJ): 2312.68     Latency (sec): 46.6
Module size and energy profile for installing surge under SOS

System     Code Size (Bytes)   Write Cost (mJ/page)   Write Energy (mJ)
SOS        1316                0.31                   1.86
TinyOS     30988               1.34                   164.02
Mate VM    NA                  NA                     NA
Energy cost of light sensor driver update

System     Code Size (Bytes)   Write Cost (mJ/page)   Write Energy (mJ)
SOS        566                 0.31                   0.93
TinyOS     31006               1.34                   164.02
Mate VM    17                  0                      0
Energy cost of surge application update

• Energy trade offs
   - SOS has slightly higher base operating cost
   - TinyOS has significantly higher update cost
   - SOS is more energy efficient when the system is updated one or more times a week
20
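The write-energy column above follows from code size and per-page write cost. As a rough check, assuming the ATmega128's 256-byte flash pages (a page size not stated on the slide):

#include <stdio.h>

int main(void) {
    int sos_pages    = (566   + 255) / 256;    /* = 3 pages   */
    int tinyos_pages = (31006 + 255) / 256;    /* = 122 pages */
    printf("SOS surge update:    %d pages * 0.31 mJ/page = %.2f mJ\n", sos_pages,    sos_pages    * 0.31);
    printf("TinyOS surge update: %d pages * 1.34 mJ/page = %.2f mJ\n", tinyos_pages, tinyos_pages * 1.34);
    /* prints 0.93 mJ and 163.48 mJ, close to the 0.93 mJ and ~164 mJ in the table */
    return 0;
}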
Platform Support
Supported microcontrollers
• Atmel ATmega128
   - 4 KB RAM
   - 128 KB FLASH
• Oki ARM
   - 32 KB RAM
   - 256 KB FLASH
Supported radio stacks
• Chipcon CC1000
   - BMAC
• Chipcon CC2420
   - IEEE 802.15.4 MAC (NDA required)
21
Simulation Support
• Source code level network simulation
   - Pthread simulates hardware concurrency
   - UDP simulates perfect radio channel
   - Supports user defined topology & heterogeneous software configuration
   - Useful for verifying the functional correctness
• Instruction level simulation with Avrora
   - Instruction cycle accurate simulation
   - Simple perfect radio channel
   - Useful for verifying timing information
   - See http://compilers.cs.ucla.edu/avrora/
• EmStar integration under development
22
Mate: A Virtual Machine for Sensor Networks

Why VM?
• Large number (100's to 1000's) of nodes in a coverage area
• Some nodes will fail during operation
• Change of function during the mission

Related Work
• PicoJava: assumes Java bytecode execution hardware
• K Virtual Machine: requires 160 - 512 KB of memory
• XML: too complex and not enough RAM
• Scylla: VM for mobile embedded system
24
Mate features
• Small (16 KB instruction memory, 1 KB RAM)
• Concise (limited memory & bandwidth)
• Resilience (memory protection)
• Efficient (bandwidth)
• Tailorable (user defined instructions)
25
Mate in a Nutshell
• Stack architecture
• Three concurrent execution contexts
• Execution triggered by predefined events
• Tiny code capsules; self-propagate into network
• Built-in communication and sensing instructions
26
When is Mate Preferable?
• For small number of executions
   - GDI example: bytecode version is preferable for a program running less than 5 days
• In energy constrained domains
• Use Mate capsule as a general RPC engine
27
Mate Architecture
• Stack based architecture
   - Operand stack, return stack, PC
• Single shared variable
   - gets/sets
• Three events:
   - Message reception (receive)
   - Message send (send)
   - Clock timer (clock)
• Hides asynchrony
   - Simplifies programming
   - Less prone to bugs
(Diagram: code capsules, i.e. subroutines 0-3 plus the receive, send, and clock event handlers, execute over the shared heap (gets/sets), an operand stack, a return stack, and a PC.)
28
Instruction Set
• One byte per instruction
• Three classes: basic, s-type, x-type
   - basic: arithmetic, halting, LED operation
   - s-type: messaging system
   - x-type: pushc, blez
• 8 instructions reserved for users to define
• Instruction polymorphism
   - e.g. add(data, message, sensing)
29
Code Example (1)
• Display counter to LED

gets      # Push heap variable on stack
pushc 1   # Push 1 on stack
add       # Pop twice, add, push result
copy      # Copy top of stack
sets      # Pop, set heap
pushc 7   # Push 0x0007 onto stack
and       # Take bottom 3 bits of value
putled    # Pop, set LEDs to bit pattern
halt
30
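To show how a stack machine executes the capsule above, here is a toy C interpreter sketch. The opcode values, the operand byte following pushc, and the heap layout are illustrative assumptions, not the real Mate encoding.

#include <stdio.h>
#include <stdint.h>

enum { OP_GETS, OP_SETS, OP_COPY, OP_ADD, OP_AND, OP_PUTLED, OP_HALT, OP_PUSHC };

static int16_t stack[16];
static uint8_t sp;                     /* operand stack pointer */
static int16_t heap_var;               /* the single shared variable */

static void push(int16_t v) { stack[sp++] = v; }
static int16_t pop(void)    { return stack[--sp]; }

static void run(const uint8_t *code)
{
    for (uint8_t pc = 0;;) {
        switch (code[pc++]) {
        case OP_GETS:   push(heap_var);                          break;
        case OP_SETS:   heap_var = pop();                        break;
        case OP_COPY:   { int16_t v = pop(); push(v); push(v); } break;
        case OP_ADD:    push(pop() + pop());                     break;
        case OP_AND:    push(pop() & pop());                     break;
        case OP_PUSHC:  push(code[pc++]);  /* operand byte follows the opcode */ break;
        case OP_PUTLED: printf("LEDs = %d\n", pop() & 0x7);      break;
        case OP_HALT:   return;
        }
    }
}

int main(void)
{
    /* gets; pushc 1; add; copy; sets; pushc 7; and; putled; halt */
    const uint8_t capsule[] = { OP_GETS, OP_PUSHC, 1, OP_ADD, OP_COPY, OP_SETS,
                                OP_PUSHC, 7, OP_AND, OP_PUTLED, OP_HALT };
    for (int i = 0; i < 3; i++) run(capsule);   /* as if the clock event fired three times */
    return 0;
}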
Code Capsules
• One capsule = 24 instructions
• Fits into a single TOS packet
• Atomic reception
• Code capsule carries type and version information
   - Type: send, receive, timer, subroutine
31
Viral Code
• Capsule transmission: forw
• Forwarding other installed capsules: forwo (use within clock capsule)
• Mate checks the version number on reception of a capsule -> if it is newer, install it
• Versioning: 32-bit counter
• Disseminates new code over the network
32
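A hedged C sketch of the capsule layout and the install-if-newer rule from the two slides above; the struct fields and the per-type slot table are illustrative assumptions, not Mate's actual wire format.

#include <stdint.h>
#include <string.h>

#define CAPSULE_LEN 24

enum capsule_type { CAPSULE_SEND, CAPSULE_RECEIVE, CAPSULE_TIMER, CAPSULE_SUBROUTINE,
                    NUM_CAPSULE_TYPES };

typedef struct {
    uint8_t  type;                    /* send, receive, timer, or subroutine */
    uint32_t version;                 /* 32-bit version counter */
    uint8_t  code[CAPSULE_LEN];       /* one byte per instruction; fits a single TOS packet */
} capsule_t;

static capsule_t installed[NUM_CAPSULE_TYPES];

/* On reception: install only if the version is newer; the clock capsule's
   forw/forwo instructions then re-broadcast it to keep spreading the code. */
void capsule_received(const capsule_t *c)
{
    if (c->type < NUM_CAPSULE_TYPES && c->version > installed[c->type].version)
        memcpy(&installed[c->type], c, sizeof(capsule_t));
}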
Component Breakdown
• Mate runs on mica with 7286 bytes code, 603 bytes RAM
33
Network Infection Rate
• 42 node network in 3 by 14 grid
• Radio transmission: 3 hop network
• Cell size: 15 to 30 motes
• Every mote runs its clock capsule every 20 seconds
• Self-forwarding clock capsule
34
Bytecodes vs. Native Code
• Mate IPS: ~10,000
• Overhead: every instruction executed as a separate TOS task
35
Installation Costs
• Bytecodes have computational overhead
• But this can be compensated by using small packets on upload (to some extent)
36
Customizing Mate
• Mate is a general architecture; user can build a customized VM
• User can select bytecodes and execution events
• Issues:
   - Flexibility vs. efficiency: customizing increases efficiency w/ cost of changing requirements
   - Java's solution: general computational VM + class libraries
   - Mate's approach: more customizable solution -> let user decide
37
How to …
• Select a language -> defines VM bytecodes
• Select execution events -> execution context, code image
• Select primitives -> beyond language functionality
38
Constructing a Mate VM
• This generates a set of files -> which are used to build the TOS application and to configure the script program
39
Compiling and Running a Program
(Flow: write programs in the scripter -> VM-specific binary code -> send it over the network to a VM)
40
Bombilla Architecture
• Once context: performs operations that only need single execution
• 16 word heap shared among the contexts; setvar, getvar
• Buffer holds up to ten values; bhead, byank, bsorta
41
Bombilla Instruction Set
• basic: arithmetic, halt, sensing
• m-class: access message header
• v-class: 16 word heap access
• j-class: two jump instructions
• x-class: pushc
42
Enhanced Features of Bombilla
• Capsule injector: programming environment
• Synchronization: 16-word shared heap; locking scheme
• Provides a synchronization model: handlers, invocations, resources, scheduling points, sequences
• Resource management: prevent deadlock
• Random and selective capsule forwarding
• Error state
43
Discussion
• Compared to the traditional VM concept, is Mate platform independent? Can we have it run on heterogeneous hardware?
• Security issues: how can we trust the received capsule? Is there a way to prevent a version number race with an adversary?
• In viral programming, is there a way to forward messages other than flooding? After a certain number of nodes are infected by the new version capsule, can we forward based on need?
• Bombilla has some sophisticated OS features. What is the size of the program? Does a sensor node need all those features?
44
Contiki
Dynamic loading of programs (vs. static)
Multi-threaded concurrency managed execution (in addition to event driven)
Available on MSP430, AVR, HC12, Z80, 6502, x86, ...
Simulation environment available for BSD/Linux/Windows
45
Key ideas
• Dynamic loading of programs
   - Selective reprogramming
   - Static vs dynamic linking
• Concurrency management mechanisms
   - Events and threads
   - Trade-offs: preemption, size
46
Contiki size (bytes)

Module                     Code (MSP430)   Code (AVR)   RAM
Kernel                     810             1044         10 + e + p
Program loader             658             -            8
Multi-threading library    582             678          8 + s
Timer library              60              90           0
Memory manager             170             226          0
Event log replicator       1656            1934         200
µIP TCP/IP stack           4146            5218         18 + b
47
Loadable programs
One-way dependencies
• Core resident in memory
   - Language run-time, communication
• Programs "know" the core
   - Statically linked against core
• Individual programs can be loaded/unloaded
(Diagram: loadable programs stacked on top of the resident core)
48
Loadable programs (contd.)
• Programs can be loaded from anywhere
   - Radio (multi-hop, single-hop), EEPROM, etc.
• During software development, usually change only one module
49
How well does it work?
• Works well
   - Program typically much smaller than entire system image (1-10%)
      - Much quicker to transfer over the radio
      - Reprogramming takes seconds
• Static linking can be a problem
   - Small differences in core means module cannot be run
   - We are implementing a dynamic linker
50
Revisiting Multi-threaded Computation
(Diagram: several threads running on top of the kernel)
• Threads blocked, waiting for events
• Kernel unblocks threads when event occurs
• Thread runs until next blocking statement
• Each thread requires its own stack
   - Larger memory usage
51
Event-driven vs multi-threaded

Event-driven                    Multi-threaded
- No wait() statements          + wait() statements
- No preemption                 + Preemption possible
- State machines                + Sequential code flow
+ Compact code                  - Larger code overhead
+ Locking less of a problem     - Locking problematic
+ Memory efficient              - Larger memory requirements

How to combine them?
52
Contiki: event-based kernel with threads
• Kernel is event-based
   - Most programs run directly on top of the kernel
• Multi-threading implemented as a library
• Threads only used if explicitly needed
   - Long running computations, ...
• Preemption possible
   - Responsive system with running computations
53
Responsiveness
(Figure: computation in a thread)
54
Threads implemented atop an event-based kernel
(Diagram: the kernel dispatches events; threads run on top of the event-based kernel)
55
Implementing preemptive threads 1
(Diagram: thread, event handler, timer IRQ)
56
Implementing preemptive threads 2
(Diagram: event handler, yield())
57
Memory management
• Memory allocated when module is loaded
   - Both ROM and RAM
   - Fixed block memory allocator
• Code relocation made by module loader
   - Exercises flash ROM evenly
58
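For reference, a minimal sketch of fixed-block allocation using Contiki's memb library; the header path and macro usage reflect common Contiki releases and may differ between versions.

#include <stdint.h>
#include "lib/memb.h"                  /* Contiki's fixed-block memory allocator */

struct neighbor {
    uint8_t addr;
    uint8_t hops;
};

/* Statically reserves RAM for 8 neighbor blocks at compile time. */
MEMB(neighbor_mem, struct neighbor, 8);

void neighbors_init(void)
{
    memb_init(&neighbor_mem);
}

struct neighbor *neighbor_add(uint8_t addr)
{
    struct neighbor *n = memb_alloc(&neighbor_mem);   /* NULL when the pool is full */
    if (n != NULL) {
        n->addr = addr;
        n->hops = 0;
    }
    return n;
}

void neighbor_remove(struct neighbor *n)
{
    memb_free(&neighbor_mem, n);
}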
Protothreads: light-weight stackless threads
• Protothreads: mixture between event-driven and threaded
   - A third concurrency mechanism
• Allows blocked waiting
• Requires no per-thread stack
• Each protothread runs inside a single C function
• 2 bytes of per-protothread state
59
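To make the stackless idea concrete, below is a self-contained toy version of the protothread mechanism (a simplified imitation of Contiki's pt.h, not the real library). The only per-protothread state is a 2-byte line number, so a blocked wait keeps no stack and lives entirely inside one C function.

#include <stdio.h>

typedef unsigned short lc_t;                 /* the 2 bytes of per-protothread state */

#define PT_BEGIN(lc)         switch (*(lc)) { case 0:
#define PT_WAIT_UNTIL(lc, c) *(lc) = __LINE__; case __LINE__: if (!(c)) return 0
#define PT_END(lc)           } *(lc) = 0; return 1

static lc_t lc;
static int counter;

/* Returns 0 while blocked, 1 when finished; resumes right after the wait point. */
static int example(void)
{
    PT_BEGIN(&lc);
    printf("waiting for counter to reach 3\n");
    PT_WAIT_UNTIL(&lc, counter >= 3);        /* "blocks" without keeping a stack */
    printf("done, counter = %d\n", counter);
    PT_END(&lc);
}

int main(void)
{
    while (!example())                       /* the event loop keeps re-invoking the protothread */
        counter++;
    return 0;
}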