Concurrency and
Real-Time
Programming Support
in Java, Ada, POSIX
From
Tutorial for TOOLS-USA 2001
August 1, 2001
Santa Barbara, CA
Presented by Robert Dewar
Programming Languages
Fall 2002, NYU
Topics
Concurrency issues
Basic model / lifetime
Mutual exclusion
Coordination / communication
Asynchrony
Interactions with exception handling
Real-time issues
Memory management / predictability
Scheduling and priorities (priority inversion
avoidance)
Time / periodic activities
Java approach
Java language specification
Real-Time Specification for Java (Real-Time for
Java Expert Group)
Core Extensions to Java (J-Consortium)
Ada 95 approach
Core language
Systems Programming and Real-Time Annexes
POSIX approach
Pthreads (1003.1c)
Real-time extensions (1003.1b)
For each issue, we present / compare the
languages’ approaches
-1-
Concurrency Granularity / Terminology
“Platform”
Hardware + OS + language-specific run-time library
“Process”
Unit of concurrent execution on a platform
• Communicates with other processes on the same
platform or on different platforms
Communication / scheduling managed by the OS (same
platform) or CORBA etc (different platforms)
Concurrency on a platform may be true parallelism
(multi-processor) or multiplexed (uniprocessor)
Per-process resources include stack, memory,
environment, file handles, ...
Switching/communication between processes is
expensive
“Thread” (“Task”)
Unit of concurrent execution within a process
• Communicates with other threads of same process
Shares per-process resources with other threads in
the same process
Per-thread resources include PC, stack
Concurrency may be true parallelism or multiplexed
Communication / scheduling managed by the OS or by
language-specific run-time library
Switching / communication between threads is cheap
Our focus: threads in a uniprocessor environment
-2-
Summary of Issues
Concurrency
Basic model / generality
Lifetime properties
• Creation, initialization, (self) termination, waiting
for others to terminate
Mutual exclusion
• Mechanism for locking a shared resource,
including control over blocking/awakening a task
that needs the resource in a particular state
Coordination (synchronization) / communication
Asynchrony
• Event / interrupt handling
• Asynchronous Transfer of Control
• Suspension / resumption / termination (of / by
others)
Interactions with exception handling
Libraries and thread safety
Real-Time
Predictability (time, space)
Scheduling policies / priority
• Range of priority values
• Avoidance of “priority inversion”
Clock and time-related issues and services
• Range/granularity, periodicity, timeout
Libraries and real-time programming
-3-
Overview of Java Concurrency Support (1)
Java Preliminaries
Smalltalk-based, dynamic, safety-sensitive OO
language with built-in support for concurrency,
exception handling
Dynamic data model
• Aggregate data (arrays, class objects) on heap
• Only primitive data and references on stack
• Garbage Collection required
Two competing proposals for real-time extensions
• Sun-sponsored Real-Time for Java Expert Group
• J-Consortium
Basic concurrency model
Unit of concurrency is the thread
• A thread is an instance of the class
java.lang.Thread or one of its subclasses
• run() method = algorithm performed by each
instance of the class
Programmer either extends Thread, or implements
the Runnable interface
• Override/implement run()
All threads are dynamically allocated
• If implementing Runnable, construct a Thread
object passing a Runnable as parameter
-4-
Overview of Java Concurrency Support (2)
Example of simple thread
public class Writer extends Thread{
final int count;
public Writer(int count){this.count=count;}
public void run(){
for (int i=1; i<=count; i++){
System.out.println("Hello " + i);
}
}
public static void main( String[] args )
throws InterruptedException{
Writer w = new Writer(60);
w.start(); // New thread of control invokes w.run()
w.join();
// Wait for w to terminate
}
}
Lifetime properties
Constructing a thread creates the resources that
the thread needs (stack, etc.)
“Activation” is explicit, by invoking start()
Started thread runs “concurrently” with parent
Thread terminates when its run method returns
Parent does not need to wait for children to
terminate
• Restrictions on “up-level references” from inner
classes prevent dangling references to parent
stack data
-5-
Overview of Java Concurrency Support (3)
Mutual exclusion
Shared data (volatile fields)
synchronized blocks/methods
Thread coordination/communication
Pass data to new thread via constructor
Pulsed event - wait() / notify()
Broadcast event - wait() / notifyAll()
join() suspends caller until the target thread
completes
Asynchrony
interrupt() sets a bit that can be polled
Asynchronous termination
• stop() is deprecated
• destroy() is discouraged
suspend() / resume() have been deprecated
RTJEG, J-C proposals include event / interrupt
handling, ATC, asynchronous termination
Interaction with exception handling
No asynchronous exceptions
Various thread-related exceptions
Thread propagating an unhandled exception
• Terminates silently, but first calls
uncaughtException
Other functionality
Thread group, dæmon threads, thread local data
-6-
Overview of Ada Concurrency Support (1)
Ada 95 preliminaries
Pascal-based ISO Standard OO language with built-in
support for packages (modules), concurrency, exception
handling, generic templates, ...
Traditional data model (“static” storage, stack(s), heap)
• Aggregate data (arrays, records) go on the stack
unless dynamically allocated
• Implementation not required to supply Garbage
Collection
“Specialized Needs Annexes” support systems
programming, real-time, several other domains
Basic concurrency model
Unit of concurrency (thread) is the task
• Task specification = interface to other tasks
• Often simply just the task name
• Task body = implementation (algorithm)
• Comprises declarations, statements
• Task type serves as a template for task objects
performing the same algorithm
Tasks and task types are declarations and may appear in
“global” packages or local scopes
• Tasks follow normal block structure rules
• Each task has own stack
• Task body may refer (with care :-) to data in outer
scopes, may declare inner tasks
Task objects may be declared or dynamically allocated
-7-
Overview of Ada Concurrency Support (2)
Example of declared task object
with Ada.Text_IO;
procedure Example1 is
Count : Integer := 60;
task Writer; -- Specification
task body Writer is -- Body
begin
for I in 1..Count loop
Ada.Text_IO.Put_Line( "Hello" & Integer'Image(I));
delay 1.0; -- Suspend for at least 1.0 second
end loop;
end Writer;
begin
-- Writer activated
null;
-- Main procedure suspended until Writer terminates
end Example1;
Lifetime properties
Declared task starts (is activated) implicitly at the
begin of parent unit
Allocated task starts at the point of allocation
Task statements execute “concurrently” with
statements of parent
Task completes when it reaches its end
“Master” is suspended when it reaches its end, until
each child task terminates
• Prevents dangling references to local data
No explicit mechanism (such as Java’s join()) to
wait for another task to terminate
-8-
Overview of Ada Concurrency Support (3)
Example of task type / dynamic allocation
with Ada.Text_IO;
procedure Example2 is
task type Writer(Count : Natural);
-- Specification
type Writer_Ref is access Writer;
Ref : Writer_Ref;
task body Writer is -- Body
begin
for I in 1..Count loop
Ada.Text_IO.Put_Line( "Hello" & I'Img);
delay 1.0; -- Suspend for at least 1.0 second
end loop;
end Writer;
begin
Ref := new Writer(60); -- activates new Writer task object
-- Main procedure suspended until Writer object terminates
end Example2;
Mutual exclusion
Shared data, pragma Volatile / Atomic
Protected objects / types
• Data + “protected” operations that are executed
with mutual exclusion
“Passive” task that sequentializes access to a data
structure via explicit communication (rendezvous)
Explicit mutex-like mechanism (definable as
protected object/type) that is locked and unlocked
-9-
Overview of Ada Concurrency Support (4)
Coordination / communication
Pass data to task via discriminant or rendezvous
Suspension_Object
• Binary semaphore with 1-element “queue”
Rendezvous
• Explicit inter-task communication
Implicit wait for dependent tasks
Asynchrony
Event handling via dedicated task, interrupt
handler
Asynch interactions subject to “abort deferral”
• abort statement
• Asynchronous transfer of control via timeout
or rendezvous request
• Hold / Continue procedures (suspend / resume)
Interaction with exception handling
No asynchronous exceptions
Tasking_Error raised at language-defined points
Task propagating an (unhandled) exception
terminates silently
Other functionality
Per-task attributes
Restrictions for high-integrity / efficiency-sensitive applications
• Ravenscar Profile
-10-
Overview of POSIX Concurrency Support (1)
Basic concurrency model
A thread is identified by an instance of (opaque) type
pthread_t
Threads may be allocated dynamically or declared
locally (on the stack) or statically
Program creates / starts a thread by calling
pthread_create, passing an “attributes” structure,
the function that the thread will be executing, and
the function’s arguments
• Thread function takes and returns void*
• Return value passed to “join”ing thread
Example
Notation: POSIX call in upper-case is a macro whose
expansion includes querying the error return code
#include <pthread.h>
#include <stdio.h>
void *tfunc(void *arg){ // thread function
int count = *( (int*)arg );
int j;
for (j=1; j <= count; j++){
printf("Hello %d\n", j);
}
return NULL;
}
int main(int argc, char *argv[]){ // main thread
pthread_t pthread;
int pthread_arg = 60;
PTHREAD_CREATE( &pthread, NULL,
tfunc, (void*)&pthread_arg);
PTHREAD_JOIN( pthread, NULL );
}
-11-
Overview of POSIX Concurrency Support (2)
Lifetime properties
Thread starts executing its thread function as
result of pthread_create, concurrent with creator
Termination
• A thread terminates via a return statement or by
invoking pthread_exit
• Both deliver a result to a “join”ing thread, but
pthread_exit also invokes cleanup handlers
• A terminated thread may continue to hold
system resources until it is recycled
Detachment and recycling
• A thread is detachable if
• It has been the target of a pthread_join or
a pthread_detach (either before or after it
has terminated), or
• it was created with its detachstate attribute
set
• A terminated detachable thread is recycled,
releasing all system resources not released at
termination
No hierarchical relationship among threads
• Created thread has a pointer into its creator’s
memory, hence a danger of dangling references
Main thread is special in that when it returns it
terminates the process, killing all other threads
• To avoid this mass transitive threadicide, main
thread can pthread_exit rather than return
-12-
Overview of POSIX Concurrency Support (3)
Mutual exclusion
Shared (volatile) data
Mutexes (pthread_mutex_t type) with
lock/unlock functions
Coordination / communication
Condition variables (pthread_cond_t type) with
pulsed and broadcast events
Semaphores
Data passed to thread function at
pthread_create, result delivered to “joining”
thread at return or pthread_exit
Asynchrony
Thread cancellation with control over immediacy
and ability to do cleanup
Interaction with exception handling
Complicated relationship with signals
Consistent error-return conventions
• The result of each pthread function is an int
error code (0 normal)
• If the function needs to return a result, it
does so in an address (“&”) parameter
• No use of errno
Other
Thread-specific data area
“pthread once” functions
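A rough sketch of these last two facilities (set_thread_log / get_thread_log are illustrative names; error checking omitted): pthread_once runs an initialization exactly once however many threads race to it, and a key gives each thread its own copy of a datum.

#include <pthread.h>
#include <stdlib.h>

static pthread_once_t init_once = PTHREAD_ONCE_INIT;
static pthread_key_t  log_key;                /* per-thread data key (illustrative) */

static void init_key(void){                   /* executed exactly once, by one thread */
    pthread_key_create(&log_key, free);       /* destructor "free" runs at thread exit */
}

void set_thread_log(void *buf){
    pthread_once(&init_once, init_key);       /* safe to call from every thread */
    pthread_setspecific(log_key, buf);        /* value visible only to the calling thread */
}

void *get_thread_log(void){
    return pthread_getspecific(log_key);
}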
-13-
Comparison: Basic Model / Lifetime
Points of difference
Nature of unit of concurrency: class, task,
function
Implicit versus explicit activation
How parameters are passed / how result
communicated
Methodology / reliability
Ada and Java provide type checking, prevent
dangling references
Flexibility / generality
All three provide roughly the same expressive
power
POSIX allows a new thread to be given its
parameters explicitly on thread creation
POSIX allows a thread to return a value to a
“join”ing thread
Efficiency
Ada requires run-time support to manage task
dependence hierarchy
-14-
Mutual Exclusion in Ada via Shared Data
Example:
One task repeatedly updates an integer value
Another task repeatedly displays it
with Ada.Text_IO;
procedure Example3 is
Global : Integer := 0;
pragma Atomic( Global );
-- Note: the assignment statement (Global := Global+1) is not atomic
task Updater;
task Reporter;
task body Updater is
begin
loop
Global := Global+1;
delay 1.0; -- 1 second
end loop;
end Updater;
task body Reporter is
begin
loop
Ada.Text_IO.Put_Line( Global'Img );
delay 2.0; -- 2 seconds
end loop;
end Reporter;
begin
null;
end Example3;
Advantage
Efficiency
Need pragma Atomic to ensure that
Integer reads/writes are atomic
Optimizer does not cache Global
Drawbacks
Methodologically challenged
Does not scale up (e.g. aggregate data, more than
one updating task)
-15-
Mutual Exclusion in Java via Shared Data
Java version of previous example
public class Example4{
static volatile int global = 0;
public static void main(String[] args){
Updater u = new Updater();
Reporter r = new Reporter();
u.start();
r.start();
}
}
class Updater extends Thread{
public void run(){
while(true){
Example4.global++;
... sleep( 1000 ); ... // try block omitted
}
}
}
class Reporter extends Thread{
public void run(){
while(true){
System.out.println(Example4.global);
... sleep( 2000 ); ... // try block omitted
}
}
}
Comments
Same advantages and disadvantages as Ada
version
Need volatile to prevent hostile optimizations
-16-
Mutual Exclusion in Ada via Protected Object
with Ada.Integer_Text_IO;
procedure Example5 is
type Position is record
X, Y : Integer := 0;
end record;
protected Global is                 -- Interface
procedure Update;
function Value return Position;
private                             -- Implementation
Data : Position;
end Global;
protected body Global is
procedure Update is                 -- Executed with mutual exclusion
begin
Data.X := Data.X+1; Data.Y := Data.Y+1;
end Update;
function Value return Position is   -- Executed with mutual exclusion
begin
return Data;
end Value;
end Global;
task Updater;
task Reporter;
task body Updater is
begin
loop
Global.Update;
delay 1.0; -- 1 second
end loop;
end Updater;
task body Reporter is
P : Position;
begin
loop
P := Global.Value;
Ada.Integer_Text_IO.Put (P.X);
Ada.Integer_Text_IO.Put (P.Y);
delay 2.0; -- 2 seconds
end loop;
end Reporter;
begin
null;
end Example5;
-17-
Basic Properties of Ada Protected Objects
A protected object is a data object that is
shared across multiple tasks but with mutually
exclusive access via a (conceptual) “lock”
The rules support “CREW” access (Concurrent
Read, Exclusive Write)
Form of a protected object declaration
protected Object_Name is
{ protected_operation_specification ; }
[ private
{ protected_component_declaration } ]
end Object_Name;
Data may only be
in the private part
Encapsulation is enforced
Client code can only access the protected
components through protected operations
Protected operations illustrated in Example5
Procedure may “read” or “write” the components
Function may “read” the components, not “write”
them
The protected body provides the implementation
of the protected operations
Comments on Example5
Use of protected object ensures that only one of
the two tasks at a time can be executing a
protected operation
Scales up if we add more accessing tasks
Allows concurrent execution of reporter tasks
-18-
Mutual Exclusion in Java via
Synchronized Blocks
class Position{
int x=0, y=0;
}
[Figure: object diagram - main’s “global” and the fields “pu” (in Updater) and “pr” (in Reporter) all reference the same Position object, with fields x and y]
public class Example6{
public static void main(String[] args){
Position global = new Position();
Updater u = new Updater( global );
Reporter r = new Reporter( global );
u.start();
r.start();
}
}
class Updater extends Thread{
private final Position pu;
Updater( Position p ){
pu=p;
}
public void run(){
while(true){
synchronized(pu){
pu.x++;
pu.y++;
}
... sleep( 1000 ); ...
}
}
}
class Reporter extends Thread{
private final Position pr;
Reporter( Position p ){
pr=p;
}
public void run(){
while(true){
synchronized(pr){
System.out.println(pr.x);
System.out.println(pr.y);
}
... sleep( 2000 ); ...
}
}
}
-19-
Semantics of Synchronized Blocks
Each object has a lock
Suppose thread t executes synchronized(p){...}
In order to enter the {...} block, t must
acquire the lock associated with the object
referenced by p
If the object is currently unlocked, t acquires
the lock and sets the lock count to 1, and then
proceeds to execute the block
If t currently holds the lock on the object,
t increments its lock count for the object by 1,
and proceeds to execute the block
If another thread holds the lock on the object, t
is “stalled”
Leaving a synchronized block (either normally or
“abruptly”)
t decrements its lock count on the object by 1
If the lock count is still positive, t proceeds in
its execution
If the lock count is zero, the threads “locked
out” of the object become eligible to run, and t
stays eligible to run
• But this is not an official scheduling point
If each thread brackets its accesses inside a
synchronized block on the object, mutually
exclusive accesses to the object are ensured
No need to specify volatile
-20-
Mutual Exclusion in Java via
Synchronized Methods
class Position{
private int x=0, y=0;
public synchronized void incr(){
x += 1;
y += 1;
}
public synchronized int[] value(){
return new int[]{x, y};
}
}
[Figure: object diagram as before - “global”, “pu”, and “pr” all reference the same Position object]
public class Example7{
public static void main(String[] args){
Position global = new Position();
Updater u = new Updater( global );
Reporter r = new Reporter( global );
u.start();
r.start();
}
}
class Updater extends Thread{
private final Position pu;
Updater( Position p ){
pu=p;
}
public void run(){
while(true){
pu.incr();
... sleep( 1000 ); ...
}
}
}
class Reporter extends Thread{
private final Position pr;
Reporter( Position p ){
pr=p;
}
public void run(){
while(true){
int[] arr = pr.value();
System.out.println(arr[0]);
System.out.println(arr[1]);
... sleep( 2000 ); ...
}
}
}
-21-
Comments on Synchronized Blocks / Methods
Effect of synchronized instance method is as though
body of method was in a synchronized(this) block
Generally better to use synchronized methods
versus synchronized blocks
Centralizes mutual exclusion logic
For efficiency, have a non-synchronized method
with synchronized(this) sections of code
Synchronized accesses to static fields
A synchronized block may synchronize on a class
object
• The “class literal” Foo.class returns a reference
to the class object for class Foo
• Typical style in a constructor that needs to
access static fields
class MyClass{
private static int count=0;
MyClass(){
synchronized(MyClass.class){ count++; }
...
}
}
A static method may be declared as synchronized
Constructors are not specified as synchronized
Only one thread can be operating on a given object
through a constructor
Invoking obj.wait() releases lock on obj
All other blocking methods (join(), sleep(),
blocking I/O) do not release the lock
-22-
Mutual Exclusion in POSIX via Mutex
A mutex is an instance of type pthread_mutex_t
Initialization determines whether a pthread can
successfully lock a mutex it has already locked
PTHREAD_MUTEX_INITIALIZER (“fast mutex”)
• Attempt to relock will fail
PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP
(“recursive mutex”)
• Attempt to relock will succeed
Operations on a mutex
pthread_mutex_lock(&mutex)
• Blocks caller if mutex locked
• Deadlock condition indicated via error code
pthread_mutex_trylock(&mutex)
• Does not block caller
pthread_mutex_unlock(&mutex)
• Release waiting pthread
pthread_mutex_destroy(&mutex)
• Release mutex resources
• Can reuse mutex if reinitialize
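A minimal sketch of these operations in use, protecting a shared counter (counter and function names are illustrative; error checking omitted):

#include <pthread.h>

static pthread_mutex_t counter_mutex = PTHREAD_MUTEX_INITIALIZER;  /* "fast" mutex */
static int counter = 0;                       /* the shared resource (illustrative) */

void increment(void){
    pthread_mutex_lock(&counter_mutex);       /* blocks while another thread holds the lock */
    counter++;                                /* critical section */
    pthread_mutex_unlock(&counter_mutex);     /* releases a waiting thread, if any */
}

int try_increment(void){
    if (pthread_mutex_trylock(&counter_mutex) != 0)
        return 0;                             /* lock busy: give up without blocking */
    counter++;
    pthread_mutex_unlock(&counter_mutex);
    return 1;
}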
-23-
Monitors
In most cases where mutual exclusion is required
there is also a synchronization* constraint
A task performing an operation on the object needs
to wait until the object is in a state for which the
operation makes sense
Example: bounded buffer with Put and Get
• Consumer calling Get must block if buffer is empty
• Producer calling Put must block if buffer is full
The monitor is a classical concurrency mechanism that
captures mutual exclusion + state synchronization
Encapsulation
• State data is hidden, only accessible through
operations exported from the monitor
• Implementation must guarantee that at most one
task is executing an operation on the monitor
Synchronization is via condition variables local to the
monitor
• Monitor operations invoke wait/signal on the
condition variables
• A task calling wait is unconditionally blocked (in a
queue associated with that condition variable),
releasing the monitor
• A task calling signal awakens one task waiting for
that variable and otherwise has no effect
Proposed/researched by Dijkstra, Brinch-Hansen,
Hoare in late 1960s and early 1970s
* “Synchronization” in the correct (versus Java) sense
-24-
Monitor Example: Bounded Buffer
monitor Buffer
export Put, Get, Size;
const
Max_Size = 10;
var
Data : array[1..Max_Size] of Whatever;
Next_In, Next_Out : 1..Max_Size;
Count : 0..Max_Size;
NonEmpty, NonFull : condition;
procedure Put(Item : Whatever);
begin
if Count=Max_Size then
Wait( NonFull );
Data[Next_In] := Item;
Next_In := Next_In mod Max_Size + 1;
Count := Count + 1;
Signal( NonEmpty );
end {Put};
procedure Get(Item : var Whatever);
begin
if Count=0 then
Wait( NonEmpty );
Item := Data[Next_Out];
Next_Out := Next_Out mod Max_Size + 1;
Count := Count - 1;
Signal( NonFull );
end {Get};
function Size : Integer;
begin
Size := Count;
end {Size};
begin
Count := 0;
Next_In := 1;
Next_Out := 1;
end {Buffer};
[Figure: snapshot of the buffer’s data structures after inserting 5 elements and removing 1 (Count = 4; Next_Out marks the oldest element, Next_In the next free slot)]
-25-
Monitor Critique
Semantic issues
If several tasks waiting for a condition variable,
which one is unblocked by a signal?
• Longest-waiting, highest priority, unspecified, ...
Which task (signaler or unblocked waiter) holds the
monitor after a signal
• Signaler?
• Unblocked waiter?
• Then when does signaler regain the monitor
• Avoid problem by requiring signal either to
implicitly return or to be the last statement?
• Depending on semantics, may need while vs if in
the code that checks the wait condition
Advantages
Encapsulation
Efficient implementation
Avoids some race conditions
Disadvantages
Sacrifices potential concurrency
• Operations that don’t affect the monitor’s state
(e.g. Size) still require mutual exclusion
Condition variables are low-level / error-prone
• Programmer must ensure that monitor is in a
consistent state when wait/signal are called
Nesting monitor calls can deadlock, even without
using condition variables
-26-
Monitors and Java
Every object is a monitor in some sense
Each object obj has a mutual exclusion lock, and
certain code is executed under control of that lock
• Blocks that are synchronized on obj
• Instance methods on obj’s class that are
declared as synchronized
• Static synchronized methods for obj if obj is a
class
But encapsulation depends on programmer style
Non-synchronized methods, and accesses to non-private data from client code, are not subject to
mutual exclusion
No special facility for condition variables
Any object (generally the one being accessed by
synchronized code) can be used as a condition
variable via wait() / notify()
But that means that there is only one condition
directly associated with the object
To invoke wait() or notify() on an object, the
calling thread needs to hold the lock on the object
Otherwise throws a run-time exception
The notifying thread does not release the lock
Waiting threads thus generally need to do their
wait in a while statement versus a simple if
No guarantee which waiting thread is awakened by a
notify
-27-
Bounded Buffer in Java
public class BoundedBuffer{
public static final int maxSize=10;
private final Object[] data = new Object[maxSize];
private int nextIn=0, nextOut=0;
private volatile int count=0;
public synchronized void put(Object item)
throws InterruptedException{
while (count == maxSize) { this.wait(); }
data[nextIn] = item;
nextIn = (nextIn + 1) % maxSize;
count++;
this.notify(); // a waiting consumer, if any
}
public synchronized Object get()
throws InterruptedException{
while (count == 0) { this.wait(); }
Object result = data[nextOut];
data[nextOut] = null;
nextOut = (nextOut + 1) % maxSize;
count--;
this.notify(); // a waiting producer, if any
return result;
}
public int size(){ // not synchronized
return count;
}
}
Notes
Essential for each wait() condition to be in a
while loop and not simply an if statement
Using the buffer object for both conditions
works since there is no way for both a producer
and a consumer thread to be in the object’s wait
set at the same time
-28-
Monitors and Ada Protected Objects
Encapsulation enforced in both
Data components are inaccessible to clients
Mutual exclusion enforced in both
All accesses are via protected operations, which are
executed with mutual exclusion (“CREW”)
Condition variables
A protected entry is a protected operation guarded
by a boolean condition (“barrier”) which, if false,
blocks the calling task
Barrier condition can safely reference the
components of the protected object and also the
“Count attribute”
• E'Count = number of tasks queued on entry E
• Value does not change while a protected operation
is in progress (avoids race condition)
Barrier expressions are Ada analog of condition
variables, but higher level (wait and signal implicit)
• Caller waits if the barrier is False (and releases
the lock on the object)
• Barrier conditions for non-empty queues are
evaluated at the end of protected procedures and
protected entries
• If any are True, queuing policy establishes which
task is made ready
Protected operations (unlike monitor operations) are
non-blocking
Allows efficient implementation of “lock”
-29-
Bounded Buffer in Ada
package Bounded_Buffer_Pkg is
Max_Length : constant := 10;
type W_Array is
array(1 .. Max_Length) of Whatever;
protected Bounded_Buffer is
entry Put( Item : in Whatever );
entry Get( Item : out Whatever );
function Size return Natural;
private
Next_In, Next_Out : Positive := 1;
Count : Natural := 0;
Data : W_Array;
end Bounded_Buffer;
end Bounded_Buffer_Pkg;
package body Bounded_Buffer_Pkg is
protected body Bounded_Buffer is
entry Put( Item : in Whatever ) when Count < Max_Length is
begin
Data(Next_In) := Item;
Next_In := Next_In mod Max_Length + 1;
Count := Count+1;
end Put;   -- barriers evaluated here
entry Get( Item : out Whatever ) when Count > 0 is
begin
Item := Data(Next_Out);
Next_Out := Next_Out mod Max_Length + 1;
Count := Count-1;
end Get;   -- barriers evaluated here
function Size return Natural is
begin
return Count;
end Size;
end Bounded_Buffer;
end Bounded_Buffer_Pkg;
-30-
Monitors and POSIX:
Mutex + Condition Variables
POSIX supplies type pthread_cond_t for condition
variables
Always used in conjunction with a mutex
• Avoids race conditions such as a thread calling
wait and missing a signal that is issued before
the thread is enqueued
May be used to simulate a monitor, or simply as an
inter-thread coordination mechanism
Initialized via PTHREAD_COND_INITIALIZER or via
pthread_cond_init function
Operations
Signaling operations
• pthread_cond_signal( &cond_vbl )
• Pulsed event
• No guarantee which waiter is awakened
• pthread_cond_broadcast (&cond_vbl )
• Broadcast event
Waiting operations
• pthread_cond_wait( &cond_vbl, &mutex )
• pthread_cond_timedwait(&cond_vbl, &mutex,
&timeout)
Initialization
• pthread_cond_init( &cond_vbl, NULL )
Resource release
• pthread_cond_destroy( &cond_vbl )
-31-
Bounded Buffer in POSIX (*)
#include <pthread.h>
#define MAX_LENGTH 10
#define WHATEVER float
typedef struct{
pthread_mutex_t mutex;
pthread_cond_t non_full;
pthread_cond_t non_empty;
int next_in, next_out, count;
WHATEVER data[MAX_LENGTH];
} bounded_buffer_t;
void put( WHATEVER item, bounded_buffer_t *b ){
PTHREAD_MUTEX_LOCK(&(b->mutex));
while (b->count == MAX_LENGTH){
PTHREAD_COND_WAIT(&(b->non_full), &(b->mutex));
}
... /* Put data in buffer, update count and next_in */
PTHREAD_COND_SIGNAL(&(b->non_empty));
PTHREAD_MUTEX_UNLOCK(&(b->mutex));
}
void get( WHATEVER *item, bounded_buffer_t *b ){
PTHREAD_MUTEX_LOCK(&(b->mutex));
while (b->count == 0){
PTHREAD_COND_WAIT(&(b->non_empty), &(b->mutex));
}
... /* Get data from buffer, update count and next_out */
PTHREAD_COND_SIGNAL(&(b->non_full));
PTHREAD_MUTEX_UNLOCK(&(b->mutex));
}
int size( bounded_buffer_t *b ){
int n;
PTHREAD_MUTEX_LOCK(&(b->mutex));
n = b->count;
PTHREAD_MUTEX_UNLOCK(&(b->mutex));
return n;
}
/* Initializer function also required */
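One possible shape for that initializer (a sketch, not part of the original example; error checking omitted):

void bounded_buffer_init( bounded_buffer_t *b ){
    pthread_mutex_init(&(b->mutex), NULL);     /* NULL = default attributes */
    pthread_cond_init(&(b->non_full), NULL);
    pthread_cond_init(&(b->non_empty), NULL);
    b->next_in = b->next_out = b->count = 0;
}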
(*) Based on example in Burns & Wellings, Real-Time Systems and
Programming Languages, pp. 253-254
-32-
Comparison of Mutual Exclusion Approaches
Points of difference
Expression of mutual exclusion in program
• Explicit code markers in POSIX (lock/unlock
mutex)
• Either explicit code marker (synchronized block)
or encapsulated (synchronized method) in Java
• Encapsulated (protected object) in Ada
No explicit condition variables in Java
Blocking prohibited in protected operations (Ada)
Locks are implicitly recursive in Java and Ada,
programmer decides whether “fast” or recursive in
POSIX
Methodology / reliability
All provide necessary mutual exclusion
Ada entry barrier is higher level than condition
variable
Absence of condition variable from Java can lead to
clumsy or obscure style
Main reliability issue is interaction between mutual
exclusion and asynchrony, described below
Flexibility / generality
Ada restricts protected operations to be non-blocking
Efficiency
Ada provides potential for concurrent reads
Ada does not require queue management
-33-
Coordination / Communication Mechanisms
Pulsed Event
Waiter blocks unconditionally
Signaler awakens exactly one waiter (if one or
more), otherwise event is discarded
Broadcast Event
Waiter blocks unconditionally
Signaler awakens all waiters (if one or more),
otherwise event is discarded
Persistent Event (Binary Semaphore)
Signaler allows one and only one task to proceed
past a wait
• Some task that has already called wait, or the
next task that subsequently calls it
Counting semaphore
A generalization of binary semaphore, where the
number of occurrences of signal are remembered
Simple 2-task synchronization
Persistent event with a one-element queue
Direct inter-task synchronous communication
Rendezvous, where the task that initiates the
communication waits until its partner is ready
-34-
Pulsed Event
Java
Any object can serve as a pulsed event via wait()
/ notify()
Calls on these methods must be in code
synchronized on the object
• wait() releases the lock, notify() doesn’t
wait() can throw InterruptedException
An overloaded version of wait() can time out,
but no direct way to know whether the return
was normal or via timeout
Ada
Protected object can model a pulsed event
protected Pulsed_Signal is
entry Wait;
procedure Signal;
private
Signaled : Boolean := False;
end Pulsed_Signal;
protected body Pulsed_Signal is
entry Wait when Signaled is
begin
Signaled := False;
end Wait;
procedure Signal is
begin
Signaled := Wait'Count>0;
end Signal;
end Pulsed_Signal;
Can time out on any entry via select statement
Can’t awaken a blocked task other than via abort
POSIX
Condition variable can serve as pulsed event
-35-
Broadcast Event
Java
Any object can serve as a broadcast event via
wait() / notifyAll()
Calls on these methods must be in code
synchronized on the object
Ada
Protected object can model a broadcast event
protected Broadcast_Signal is
entry Wait;
procedure Signal;
private
Signaled : Boolean := False;
end Broadcast_Signal;
protected body Broadcast_Signal is
entry Wait when Signaled is
begin
Signaled := Wait'Count>0;
end Wait;
procedure Signal is
begin
Signaled := Wait'Count>0;
end Signal;
end Broadcast_Signal;
Protected object can model more general forms,
such as sending data with the signal, to be
retrieved by each awakened task
Locking protocol / barrier evaluation rules
prevent race conditions
POSIX
Condition variable can serve as broadcast signal
-36-
Semaphores (Persistent Event)
Binary semaphore expressible in Java
public class BinarySemaphore {
private boolean signaled = false;
public synchronized void await() throws InterruptedException{
while (!signaled) { this.wait(); }
signaled=false;
}
public synchronized void signal(){
signaled=true;
this.notify();
}
}
J-Consortium spec includes binary and counting
semaphores
Binary semaphore expressible in Ada
protected type Binary_Semaphore is
entry Wait;
procedure Signal;
private
Signaled : Boolean := False;
end Binary_Semaphore;
protected body Binary_Semaphore is
entry Wait when Signaled is
begin
Signaled := False;
end Wait;
procedure Signal is
begin
Signaled := True;
end Signal;
end Binary_Semaphore;
POSIX
Includes (counting) semaphores, but intended for
inter-process rather than inter-thread
coordination
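A sketch of the POSIX.1b counting-semaphore interface (<semaphore.h>), used here between the threads of a single process (names illustrative; error checking omitted):

#include <semaphore.h>

sem_t items;                                   /* counting semaphore (illustrative) */

void setup(void){
    sem_init(&items, 0, 0);                    /* pshared=0: threads of this process; count=0 */
}

void producer(void){
    /* ... produce something ... */
    sem_post(&items);                          /* signal: count++ */
}

void consumer(void){
    sem_wait(&items);                          /* blocks until count > 0, then count-- */
    /* ... consume it ... */
}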
-37-
Simple Two-Task Synchronization
Java, POSIX
No built-in support
Ada
Type Suspension_Object in package
Ada.Synchronous_Task_Control
• Procedure Suspend_Until_True(SO) blocks caller
until SO becomes true, and then resets SO to false
• Procedure Set_True(SO) sets SO’s state to true
• “Bounded error” if a task calls
Suspend_Until_True(SO) while another task is
waiting for the same SO
with Ada.Synchronous_Task_Control; use Ada.Synchronous_Task_Control;
procedure Proc is
task Setter;
task Retriever;
SO : Suspension_Object;
Data : array (1..1000) of Float;
task body Setter is
begin
... -- Initialize Data
Set_True(SO);
...
end Setter;
task body Retriever is
begin
Suspend_Until_True(SO);
... -- Use data
end Retriever;
begin
null;
end Proc;
-38-
Direct Synchronous Inter-Task
Communication (1)
Calling task (caller)
Requests action from another task (the callee),
and blocks until callee is ready to perform the
action
Called task (callee)
Indicates readiness to accept a request from a
caller, and blocks until a request arrives
Rendezvous
Performance of the requested action by callee,
on behalf of a caller
Parameters may be passed in either or both
directions
Both caller and callee are unblocked after
rendezvous completes
T1
“T2, do action A”
• Wait for T2 to start action A
Rendezvous
• (T2 does action A)
• Wait for T2 to complete action A
T2
“Accept request for action A [from T1]”
• Wait for request for action A [from T1]
• Do action A
• Awaken caller
Java
No direct support
Can model via wait / notify, but complicated
POSIX
Same comments as for Java
-39-
Direct Synchronous Inter-Task
Communication (2)
Ada
“Action” is referred to as a task’s entry
• Declared in the task’s specification
• Caller makes entry call, similar syntactically to
a procedure call
• Callee accepts entry call via an accept
statement
Caller identifies callee but not vice versa
• Many callers may call the same entry, requiring
a queue
Often callee is a “server” that sequentializes
access to a shared resource
• Sometimes protected object is not sufficient,
e.g. if action may block
• In most cases the server can perform any of
several actions, and the syntax needs to
reflect this flexibility
• Also in most cases the server is written as an
infinite loop (not known in advance how many
requests will be made) so termination is an
issue
• Ada provides special syntax for a server to
automatically terminate when no further
communication with it is possible
Caller and/or callee may time out
• Timeout canceled at start of rendezvous
-40-
Direct Synchronous Inter-Task
Communication (3)
Ada example
task Sequentialized_Output is
entry Put_Line( Item : String );
entry Put( Item : String );
end Sequentialized_Output;
task body Sequentialized_Output is
begin
loop
select
accept Put_Line( Item : String ) do
Ada.Text_IO.Put_Line( Item );
end Put_Line;
or
accept Put( Item : String ) do
Ada.Text_IO.Put( Item );
end Put;
or
terminate;
end select;
end loop;
end Sequentialized_Output;
task Outputter1;
task body Outputter1 is
begin
...
Sequentialized_Output.
Put("Hello");
...
end Outputter1;
task Outputter2;
task body Outputter2 is
begin
...
Sequentialized_Output.
Put("Bonjour");
...
end Outputter2;
-41-
Comparison of Coordination/Communication
Mechanisms
Points of difference
Different choice of “building blocks”
• Ada: Suspension_Object, protected object,
rendezvous
• Java, POSIX: pulsed/broadcast events
Java allows “interruption” of blocked thread
Methodology / reliability
Ada’s high-level feature (rendezvous) supports
good practice
Potential for undetected bug in Ada if a task
calls Suspend_Until_True on a
Suspension_Object that already has a waiting
task
Flexibility / generality
Major difference among the languages is that
Ada is the only one to provide rendezvous as
built-in communication mechanism
Efficiency
No major differences in implementation
efficiency for mechanisms common to the three
approaches
Ada’s Suspension_Object has potential for
greater efficiency than semaphores
-42-
Asynchrony Mechanisms
Setting/Polling
Setting a datum in a task/thread that is polled
by the affected task/thread
Asynchronous Event Handling
Responding to asynchronous events generated
internally (by application threads) or externally
(by interrupts)
Resumptive: “interrupted” thread continues at
the point of interruption, after the handler
completes
Combine with polling or ATC to affect the
interrupted thread
Asynchronous Termination
Aborting a task/thread
Immediacy: are there regions in which a task /
thread defers requests for it to be aborted?
ATC
Causing a task to branch based on an
asynchronous occurrence
Immediacy: are there regions in which a task /
thread defers requests for it to have an ATC?
Suspend/resume
Causing a thread to suspend its execution, and
later causing the thread to be resumed
Immediacy: are there regions in which a task /
thread defers requests for it to be suspended?
-43-
Setting / Polling
Not exactly asynchronous (since the affected
task/thread checks synchronously)
But often useful and arguably better than
asynchronous techniques
Ada
No built-in mechanism, but can simulate via
protected object or pragma Atomic variable
global to setter and poller
Java
t.interrupt() sets interruption status flag in
the target thread t
Static Thread method boolean interrupted()
returns current thread’s interruption status flag
and resets it
Boolean method t.isInterrupted() returns
target thread’s interruption status flag
If t.interrupt() is invoked on a blocked thread
t, t is awakened and an InterruptedException (a
checked exception) is thrown
Each of the methods thr.join(),
Thread.sleep(), and obj.wait() has a “throws
InterruptedException” clause
POSIX
No built-in mechanism, but can simulate via
volatile variable global to setter and poller
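A minimal sketch of that simulation (names illustrative):

#include <signal.h>
#include <stddef.h>

volatile sig_atomic_t stop_requested = 0;      /* set by one thread, polled by another */

void *worker(void *arg){
    while (!stop_requested){
        /* ... main processing ... */
    }
    /* ... pre-shutdown actions ... */
    return NULL;
}

/* setter, in some other thread:  stop_requested = 1;  */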
-44-
Asynchronous Event Handling
Ada
No specific mechanism for asynch event handling
Interrupt handlers can be modeled by specially
identified protected procedures, executed (at
least conceptually) by the hardware
Other asynch event handlers modeled by tasks
Java (RTSJ)
Classes AsyncEvent (“AE”), AsyncEventHandler
(“AEH”) model asynchronous events, and handlers
for such events, respectively
• Programmer overrides one of the AEH
methods to define the handler’s action
Program can register one or more AEHs with any
AE (listener model)
An AEH is a schedulable entity, like a thread (but
not necessarily a dedicated thread)
When an AE is fired, all registered handlers are
scheduled based on their scheduling parameters
• Program needs to manage any data queuing
• Methods allow dealing with event bursts
Scales up to large number of events, handlers
POSIX
Messy interaction between signals (originally a
process-based mechanism) and threads
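One common way to tame that interaction (a sketch, not from the tutorial): block the signals of interest in every thread and let one dedicated thread accept them synchronously with sigwait, turning an asynchronous signal into an ordinary event; signal_catcher and install_catcher are illustrative names.

#include <pthread.h>
#include <signal.h>
#include <stddef.h>

static sigset_t handled;                       /* signals reserved for the catcher thread */

void *signal_catcher(void *arg){               /* illustrative dedicated handler thread */
    int sig;
    for (;;){
        sigwait(&handled, &sig);               /* blocks until one of the signals arrives */
        /* ... turn it into an application event, e.g. signal a condition variable ... */
    }
    return NULL;
}

void install_catcher(void){
    pthread_t catcher;
    sigemptyset(&handled);
    sigaddset(&handled, SIGINT);
    pthread_sigmask(SIG_BLOCK, &handled, NULL);  /* mask inherited by threads created later */
    pthread_create(&catcher, NULL, signal_catcher, NULL);
}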
-45-
Asynchronous Termination (1)
Ada
Abort statement sets the aborted task’s state to
abnormal, but this does not necessarily terminate
the aborted task immediately
For safety, certain contexts are abort-deferred; e.g.
• Accept statements
• Protected operations
Real-Time Annex requires implementation to
terminate an abnormal task as soon as it is outside an
abort-deferred region
Java Language Spec
No notion of abort-deferred region
Invoke t.stop(Throwable exc) or t.stop()
• Halt t asynchronously, and throw exc or
ThreadDeath object in t
• Then effect is as though propagating an
unchecked exception
• Deprecated (data may be left in an inconsistent
state if t stopped while in synchronized code)
Invoke t.destroy()
• Halt t, with no cleanup and no release of locks
• Not (yet :-) deprecated but can lead to deadlock
Invoke System.exit(int status)
• Terminates the JVM
• By convention, nonzero status indicates
abnormal termination
-46-
Asynchronous Termination (2)
Java Language Spec (cont’d.)
Recommended style is to use interrupt()
class Boss extends Thread{
Thread slave;
Boss(Thread slave){ this.slave=slave; }
public void run(){
...
if (...){
slave.interrupt(); // abort slave
}
...
}
}
class PollingSlave extends Thread{
public void run(){
while (!Thread.interrupted()){
... // main processing
}
... // pre-shutdown actions
}
}
Main issue is latency
RTSJ
Synchronized code, and methods that do not
explicitly have a throws clause for AIE, are
abort deferred
To abort a thread, invoke t.interrupt() and
have t do its processing in an asynchronously
interruptible method
-47-
Asynchronous Termination (3)
J-Consortium
abort() method aborts a thread
Synchronized code is not necessarily abort-deferred
• May need to terminate a deadlocked thread that
is in synchronized code
Synchronized code in objects that implement the
Atomic interface is abort deferred
POSIX
A pthread can set its cancellation state (enabled or
disabled) and, if enabled, its cancellation type
(asynchronous or deferred)
• pthread_setcancelstate(newstate, &oldstate)
• PTHREAD_CANCEL_DISABLE
• PTHREAD_CANCEL_ENABLE
• pthread_setcanceltype(newtype, &oldtype)
• PTHREAD_CANCEL_ASYNCHRONOUS
• PTHREAD_CANCEL_DEFERRED
• Default setting: enabled, deferred cancellation
Deferred cancel at next cancellation point
• Minimal set of cancellation points defined by
standard, others can be added by
implementation
pthread_cancel( pthr ) sends a cancellation request
Cleanup handlers give the cancelled thread the
opportunity to consistentize data, unlock mutexes
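A sketch of the cleanup-handler protocol under the default settings (enabled, deferred); the names worker, unlock_m, and worker_id are illustrative, and error checking is omitted:

#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void unlock_m(void *arg){                /* cleanup handler */
    pthread_mutex_unlock((pthread_mutex_t*)arg);
}

void *worker(void *arg){
    for (;;){
        pthread_mutex_lock(&m);
        pthread_cleanup_push(unlock_m, &m);     /* runs if the thread is cancelled in here */
        /* ... update the shared data ... */
        pthread_testcancel();                   /* an explicit cancellation point */
        pthread_cleanup_pop(1);                 /* 1 = also run the handler now (unlocks m) */
    }
    return NULL;
}

/* elsewhere:  pthread_cancel(worker_id);  pthread_join(worker_id, NULL);  */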
-48-
Asynchronous Transfer of Control (“ATC”)
What is it
A mechanism whereby a triggering thread
(possibly an async event handler) can cause a
target thread to branch unconditionally, without
any explicit action from the target thread
Controversial facility
Triggering thread does not know what state the
target thread is in when the ATC is initiated
Target thread must be coded carefully in
presence of ATC
Implementation cost / complexity
Interaction with synchronized code
Why included in spec
User community requirement
Useful for certain idioms
• Time out of long computation when partial
result is acceptable
• Abort an iteration of a loop
• Terminate a thread
ATC may have shorter latency than polling
-49-
Asynchronous Transfer of Control (1)
Ada
Allow controlled ATC, where the effect is
restricted to an explicit syntactic context
Restrict the ATC triggering conditions
• Time out
• Acceptance of an entry call
Defer effect of ATC until affected task is
outside abort-deferred region
function Eval(Interval : Duration) return Float is
X : Float := 0.0;
begin
select
delay Interval;   -- 1: triggering alternative (timeout)
return X;         -- 3a: taken if the timeout expires first
then abort
while ... loop    -- 2: abortable computation
... X := ...; ...
end loop;
end select;
return X;           -- 3b: taken if the loop completes before the timeout
end Eval;
Java (RTSJ)
ATC based on model of asynchronous exceptions,
thrown only at threads that have explicitly
enabled them
ATC deferred in synchronized code and in
methods that lack a “throws AIE” clause
Timeout is a specific kind of AIE
-50-
Asynchronous Transfer of Control (2)
abstract class Func{
abstract double f(double x) throws AIE;
volatile double current; // assumes atomic
}
class MyFunc extends Func{
double f(double x) throws AIE {
current = ...;
while(...){ ... current = ...; }
return current;
}
}
class SuccessiveApproximation{
static boolean finished;
static double result;   // fields, so the anonymous inner class below can assign them
static double calc(final Func func, final double arg, long ms){
new Timed( new RelativeTime(ms, 0) ).doInterruptible(
new Interruptible(){
public void run(AIE e) throws AIE{
result = func.f(arg);
finished = true;
}
public void interruptAction(AIE e){
result = func.current;
finished = false;
}
});
return result;
}
public static void main(String[] args){
MyFunc mf = new MyFunc();
double answer = calc(mf, 100.0, 1000);
// run mf.f(100.0) for at most 1 second
System.out.println(answer);
System.out.println("calc completed? " + finished );
}
}
-51-
Suspend / Resume
Ada
Real-Time Annex defines a package
Ada.Asynchronous_Task_Control with
procedures Hold, Continue
Hold(T) conceptually sets T’s priority less than
that of the idle task
• Effect deferred during protected operations,
rendezvous
Continue(T) restores T’s pre-held priority
Java
t.suspend() suspends t, without releasing locks
t.resume() resumes t
These methods have been deprecated
• If a thread t is suspended while holding a lock
required by the thread responsible for
resuming t, the threads will deadlock
• Arguably this programming bug should not
have caused the methods to be deprecated
POSIX
Not supported
-52-
Comparison of Asynchrony Mechanisms
Points of difference
Ada attempts a minimalist approach, whereas the
real-time Java specs (and to some extent
POSIX) provide more general models
Methodology / reliability
Asynchronous operations are intrinsically
dangerous, the goal is to minimize / localize the
code that needs to be sensitive to disruption
Regular Java’s interrupt mechanism, though
requiring polling, is a reasonable approach
Java RTSJ has nice model for asynchronous
event handling
POSIX cancellation semantics allows thread
owning a mutex to cleanly deal with cancellation
request
Ada ATC constrains the effect of an
asynchronous request to a clearly identified
syntactic region, and defines orderly cleanup
POSIX signal interactions are messy
Flexibility / generality
Java RTSJ offers a general ATC model based on
asynchronous exceptions
Efficiency
ATC may incur distributed overhead in Java
RTSJ (check on method returns)
-53-
Scheduling and Priorities: Introduction
Scheduler decides which ready task to run
(“dispatching”), which task to unblock when a
resource with a queue of waiters is available
Variety of dispatching policies, including:
Priority-based fixed priority(*), FIFO within
priority
• Run until blocked (non-preemptive)
• Run until blocked or preempted
• Run until blocked or preempted or timeslice
expires
Priority-based non-fixed priority
• Priority aging
• Earliest deadline first
Variety of queue service policies, such as:
FIFO ignoring priorities
FIFO within priorities
Unspecified
Finer levels of detail also arise
When thread is preempted, or when its priority
is modified, where in its ready queue is it placed?
Scheduling policies affect predictability and
throughput, goals which are in conflict
Real-time programs generally require
predictability at expense of throughput
(*) “Fixed priority” scheduler does not implicitly change a thread’s priority
except to avoid priority inversions; program can change a thread’s priority
-54-
Priority Inversion
What is a “priority inversion”?
A higher-priority thread is blocked / stalled while a
lower-priority thread is running
It is sometimes necessary
When the lower priority thread holds a lock that is
needed by the higher priority thread
Scheduling policy affects worst case blocking time
A high priority thread may be blocked (stalled on a
lock) during execution of a lower-priority thread
not holding the lock - “unbounded priority inversion”
• Mars Pathfinder mission in 1997
Priority Inheritance and Highest Lockers (Priority
Ceiling) considerably reduce worst-case blocking
time, at expense of throughput
Priority inheritance
When a thread H attempts to acquire a lock that is
held by a lower-priority thread L, L inherits H’s
priority as long as it is holding the lock
Applied transitively if L is waiting for a lock held
by a yet-lower-priority thread
Highest lockers (Priority ceiling)
While holding a lock, a thread executes at a priority
higher than or equal to that of any thread that
needs the lock
-55-
Priority Inversion Example
[Figure: execution timeline for high-priority thread H, medium-priority thread M, and low-priority thread L, following the steps below]
H is a high-priority thread, M a medium priority
thread, and L a low-priority thread
L awakens and starts to run (the other two threads
are blocked, waiting for the expiration of delays)
L starts to use a mutually-exclusive resource
• Enters a monitor, locks a mutex
H awakens and preempts L
H tries to use the resource held by L and is
blocked, thus allowing L to resume
• This priority inversion is necessary
M awakens and preempts L
• This “unbounded” priority inversion is evil, since
M is indirectly preventing H from running
M completes, and L resumes
L releases the mutually exclusive resource and is
preempted by H, which can then use the resource
H releases the resource
H completes execution, allowing L to resume
L completes execution
-56-
Priority Inheritance
[Figure: execution timeline - while holding the lock, L runs at H’s inherited priority, so M cannot preempt it]
L awakens and starts to run at priority L
L starts to use a mutually-exclusive resource
H awakens, preempts L and runs at priority H
H tries to use the resource held by L and is
blocked, thus allowing L to resume
• At this point L inherits H’s priority (H)
M awakens but does not preempt L
• This avoids the unbounded priority inversion
L releases the mutually exclusive resource,
reverts to its pre-inheritance priority L, and is
preempted by H, which can then use the resource
H releases the resource
H completes execution, allowing M (the higher
priority of the two ready threads) to execute
M completes, allowing L to resume
L completes execution
Effect of Priority Inheritance
A thread holding a lock executes at the maximum
priority of all threads currently requiring that
lock
-57-
Priority Ceilings (Highest Lockers)
[Figure: execution timeline - while holding the lock, L and then H run at the ceiling priority H´]
L awakens and starts to run at priority L
L starts to use a mutually-exclusive resource with
ceiling H' > H, and runs at priority H'
• This will prevent unbounded priority inversion
H awakens but does not preempt L
M awakens but does not preempt L
L releases the mutually exclusive resource, reverts
to its pre-ceiling priority L, and is preempted by H
(the higher-priority of the two ready tasks) which
then runs at priority H
H starts to use the resource with ceiling H' > H,
and runs at priority H'
H releases the resource and reverts to priority H
H completes execution, allowing M (the higher
priority of the two ready threads) to execute
M completes, allowing L to resume
L completes execution
Effect of Priority Ceiling
A thread holding a lock executes at a priority
higher than that of any thread that might need the
lock
-58-
Priority Inversion Avoidance Techniques
Priority Inheritance
Supported by many RTOSes
Only change priority when needed (thus no cost in
common case when resource not in use)
Thread may be blocked once for each lock that it
needs (“chained blocking”)
Implementation may be expensive
• Thread’s priority is being changed as a result of
an action external to the task
Ceiling Priorities
If no thread can block while holding the lock on a
given shared object, then a queue is not needed for
that object
In effect, the processor is the lock
Prevents deadlock (on uniprocessor)
Ensures that a thread is blocked only once each
period, by one lower priority thread holding the lock
Fixed ceilings not appropriate for applications where
priorities need to change dynamically
Requires check and priority change at each call
• Overhead even if object not locked
• But this is inconsequential in the queueless case
If the ceiling is high, the effect is to disable thread switching
Both sacrifice responsiveness for predictability
A thread may be prevented from running in order to
guarantee that deadlines are met overall
-59-
Java for Real-Time Programming:
Language Features and Issues
Scheduling/priorities
sleep(millis) suspends the calling thread
Priority is in range 1..10
Thread can change or interrogate its own or
another thread’s priority
yield() gives up the processor
Thread model
Priority range (1..10) too narrow
Priority semantics are implementation dependent
and fail to prevent unbounded priority inversion
Relative sleep() not sufficient for periodicity
Memory management
Predictable, efficient garbage collection
appropriate for real-time applications is not (yet)
in the mainstream
Java lacks stack-based objects (arrays and class
instances)
Heap used for exceptions thrown implicitly as an
effect of other operations
Run-time semantics
Dynamic class loading is expensive, and it is not
easy to see when it will occur
Array initializers imply run-time code
OOP for real-time programming?
Dynamic binding complicates analyzability
Garbage Collection defeats predictability
-60-
Regular Java Semantics for Scheduling
Section 17.12 of the Java Language Specification
“Every thread has a priority. When there is
competition for processing resources, threads
with higher priority are generally executed in
preference to threads with lower priority. Such
preference is not, however, a guarantee that the
highest priority thread will always be running,
and thread priorities cannot be used to reliably
implement mutual exclusion.”
Problems for real-time applications
This rule makes it impossible to guarantee that
deadlines will be met for periodic threads
No guarantee that priority is used for selecting a
thread to unblock when a lock is released
• No prevention of priority inversion
• High priority thread may be blocked for
longer than desired when it is waiting to
acquire a lock
No guarantee that priority is used for selecting
which thread is awakened by a notify(), or
which thread awakened by notifyAll() is
selected to run
-61-
Garbage Collection and
Real-Time Programming
No Garbage Collection
Require that all allocations be performed at
system initialization
Common in many kinds of real-time applications
Difficult in Java since all non-primitive data are
dynamically allocated
Real-Time Garbage Collector
Techniques exist that have predictable /
bounded costs
• Incremental or concurrent, vs. mark-sweep
But programmer still needs to ensure that
allocation rate does not exceed rate at which GC
can reclaim space
Also, in the absence of specialized hardware,
such techniques tend to introduce high latencies
• GC needs to run at high priority or with the
heap locked, to prevent an application thread
from referencing an inconsistent heap
Hybrid approach
For low latency, allow a thread to preempt GC if
the thread never references the heap
• In absence of optimization, need run-time
check on each heap reference
Allow a thread to allocate objects in a scope-associated area
• Area flushed at end of scope/thread
-62-
Real-Time Specification for Java:
Scheduling and Priority Support (1)
Basics
Class RealtimeThread extends java.lang.Thread
Flexible scheduling framework + default scheduler +
priority inversion avoidance
Memory management
Garbage-Collected heap
Kinds of memory areas
Immortal memory
Scoped memory
Assignment rules prevent dangling references
NoHeapRealtimeThread can preempt GC
Initial default scheduler
At least 28 distinct priority values, beyond the 10
for regular Java threads
Fixed-priority preemptive, FIFO within priority
Implementation defines where in ready queue a
preempted thread goes
User may replace with a different scheduler
General concept of schedulable object
Classes RealtimeThread, NoHeapRealtimeThread,
AsyncEventHandler
Constructors for these classes take different kinds
of “parameters” objects
• SchedulingParameters (priority, importance)
• ReleaseParameters (cost, deadline, period, ...)
• MemoryParameters (memory area, ...)
-63-
Real-Time Specification for Java:
Scheduling and Priority Support (2)
Priority Inversion avoidance
Priority inheritance protocol by default for
synchronization locks
Priority ceiling emulation (with queuing) also available
Programmer can set monitor control either locally (per
object) or globally
Synchronization between no-heap real-time threads and
regular Java threads needs some care
• Use non-blocking queues
Support for feasibility analysis
Implementation can use data in “parameters” objects to
determine if a set of schedulable objects can satisfy
some constraint
• Example: Rate-Monotonic Analysis
Methods to add/remove a schedulable object to/from
feasibility analysis
Implementation not required to support feasibility
analysis
Flexibility
Implementation can install arbitrary scheduling
algorithms and feasibility analysis
Users can replace these dynamically, can have different
schedulers for different schedulable objects
-64-
J-Consortium’s Real-Time Core Extensions:
Scheduling and Priority Support
Concurrency
Class CoreTask, whose work() method plays the role of Thread.run()
Fixed-priority preemptive scheduler + priority
inversion avoidance
Memory management
GC heap for baseline objects, non-GC “allocation
contexts” for Core objects
Per-task allocation context, implicitly freed
On-the-fly allocation contexts, explicitly freed
Stackable objects
Base scheduler
128 task priorities, above the 10 from regular Java
Fixed-priority, preemptive dispatching
Timeslicing allowed within highest priority
Priority inversion avoidance
Priority Inheritance for regular synchronized code
Priority Ceiling (without queues) for synchronization
on objects whose classes implement the PCP
interface (blocking not allowed)
Priority Inheritance for Mutex objects, which can
be locked and unlocked around code that needs
mutually exclusive access to some resource
Queue management
A task t goes to head of ready queue for its
priority when it is preempted by a higher-priority
task, or when it loses inherited priority
-65-
Ada Scheduling / Priority Support
(Real-Time Annex)
Priorities
Priority range must include at least 30 values, and
at least one higher value for interrupt handlers
Dynamic_Priorities package
• Concepts of base versus active priority
• Subprograms to set / get a task’s base priority
• Deferral of priority changes in certain contexts
Scheduling-related policies - per partition (program)
pragma Task_Dispatching_Policy(policy-id) affects
selection of which ready task to run
• FIFO_Within_Priorities
• Run until blocked or preempted
• Implies Ceiling_Locking locking policy
• Preempted task, or task which loses inherited
priority, or task whose timeslice expires,
goes to head of ready queue
• Default dispatching policy not specified
pragma Locking_Policy(policy-id) for priority
inversion avoidance on protected objects
• Ceiling_Locking
• Default locking policy implementation defined
pragma Queuing_Policy(policy-id) for entry queues
• FIFO_Queuing (default)
• Priority_Queuing
Implementation may add further policies
“delay 0.0;” yields the processor (scheduling point)
-66-
POSIX Scheduling / Priority Support
Real-time scheduling is optional facility
Check if _POSIX_THREAD_PRIORITY_SCHEDULING is
defined
If so, then struct sched_param structure is
provided, with at least a sched_priority member
Scheduling policies
SCHED_FIFO run until blocked or higher priority
thread is ready, FIFO within highest priority
SCHED_RR similar to SCHED_FIFO but with time
slice (“round robin” within highest priority)
SCHED_OTHER implementation defined
Basic properties
Priority range is implementation defined
Set a thread’s scheduling policy / priority on
creation (via attribute) and/or dynamically
When creating a thread, set the inheritsched
attribute to control whether scheduling
properties are inherited from creator
With SCHED_FIFO or SCHED_RR, priority dictates
which ready thread runs, including after a mutex
is unlocked or a condition variable is signaled or
broadcast
Other properties
pthread_yield voluntarily relinquishes processor
Contention scope: system vs process
Allocation domain: relevant for multiprocessors
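A minimal C sketch of the attribute calls above, assuming the _POSIX_THREAD_PRIORITY_SCHEDULING option and an application-supplied worker start routine; error checking is omitted:

#include <pthread.h>
#include <sched.h>

extern void *worker(void *arg);   /* application-supplied start routine (assumed) */

pthread_t create_fifo_thread(void)
{
    pthread_t          tid;
    pthread_attr_t     attr;
    struct sched_param sp;

    pthread_attr_init(&attr);
    /* Do not inherit scheduling attributes from the creating thread */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    /* Fixed-priority, run until blocked or preempted */
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    /* Priority range is implementation defined; query it */
    sp.sched_priority = sched_get_priority_min(SCHED_FIFO) + 10;
    pthread_attr_setschedparam(&attr, &sp);
    /* System-wide contention scope, where supported */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    pthread_create(&tid, &attr, worker, NULL);
    pthread_attr_destroy(&attr);
    return tid;
}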
-67-
Priority Inversion Avoidance in POSIX
Optionally provided support for priority ceiling and
priority inheritance protocols, for mutexes
Set protocol in an attribute that is passed to a
mutex creation function
Priority Ceiling Protocol
Available if _POSIX_THREAD_PRIO_PROTECT defined
Set priority ceiling in attribute passed to mutex
creation function
• Ceiling should be >= priority of any locker
Locker at priority <= ceiling runs at ceiling priority
while holding lock
Locker at priority > ceiling runs at own priority but
may get priority inversion
Ceiling can be reset dynamically
Priority Inheritance Protocol
Available if _POSIX_THREAD_PRIO_INHERIT defined
A mutex locker’s priority is boosted dynamically to
the priority of a higher priority thread that
attempts to lock the mutex, and is reset when the
mutex is unlocked
Transitive if the lock holder is itself blocked on
another mutex
These protocols apply only to mutexes and not to
condition variables or semaphores
No “owner” of a condition variable or semaphore
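A minimal C sketch of selecting these protocols through a mutex attributes object, assuming both options are provided; the ceiling value is chosen by the application:

#include <pthread.h>

pthread_mutex_t pi_mutex;   /* priority inheritance */
pthread_mutex_t pc_mutex;   /* priority ceiling     */

void init_mutexes(int ceiling)
{
    pthread_mutexattr_t ma;
    pthread_mutexattr_init(&ma);

    /* Priority Inheritance Protocol */
    pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&pi_mutex, &ma);

    /* Priority Ceiling Protocol: ceiling should be >= priority of any locker */
    pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&ma, ceiling);
    pthread_mutex_init(&pc_mutex, &ma);

    pthread_mutexattr_destroy(&ma);
    /* The ceiling can be changed later with pthread_mutex_setprioceiling() */
}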
-68-
Clock- and Time-Related Features (1)
Time and clock (range, granularity)
Java
• JLS
• System.currentTimeMillis() returns
milliseconds (long) since epoch
• Range is epoch (00:00:00 UTC, 1/1/1970) ± 2^63 milliseconds
• RTSJ
• HighResolutionTime measured in
(long milliseconds, int nanoseconds) and
subclasses for AbsoluteTime (relative to
epoch), RelativeTime, RationalTime
• Support for multiple clocks
• J-Consortium
• Time represented as long (nanoseconds)
relative to most recent system start
Ada
• Ada.Real_Time.Time reflects monotonically non-decreasing time values since an implementation-defined origin (“epoch”)
• Range of time values must be at least from
program start to 50 years later
• Clock tick ≤ 1 msec, time unit ≤ 20 µsec
POSIX
• Time value structure: seconds and nanosec
• Realtime clock resolution required to be at most 20 msec
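For POSIX, a minimal C sketch of reading the realtime clock and its resolution through the timespec structure (seconds and nanoseconds):

#include <time.h>

void read_realtime_clock(void)
{
    struct timespec res, now;

    clock_getres(CLOCK_REALTIME, &res);    /* implementation's resolution       */
    clock_gettime(CLOCK_REALTIME, &now);   /* seconds + nanoseconds since Epoch */
    /* now.tv_sec and now.tv_nsec hold the current time */
}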
-69-
Clock- and Time-Related Features (2)
Delay / sleep
Java
• JLS
• Relative sleep methods Thread.sleep(),
taking a long (millis) or a long (millis) and
an int (nanos)
• RTJEG
• Overloadings of sleep() taking a
HighResolutionTime (which may be
absolute)
• J-Consortium
• Absolute sleepUntil(Time time) method
Ada
• delay expr; relative delay, where expr is of
type Duration
• delay until expr; absolute delay, where
expr is of a time type
POSIX
• Relative delay via
unsigned int sleep(unsigned int seconds)
which suspends for seconds seconds
• Returns 0 if suspended for the specified
duration, else the time remaining (if awakened
by a signal)
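The POSIX.1b extensions also provide nanosleep() for finer-grained relative delays; a minimal C sketch of a half-second delay that, if interrupted by a signal, resumes for the remaining time:

#include <time.h>
#include <errno.h>

void delay_half_second(void)
{
    struct timespec req = { 0, 500000000L };   /* 0.5 second relative delay */
    struct timespec rem;

    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
        req = rem;   /* interrupted: sleep for the remainder */
}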
-70-
Clock- and Time-Related Features (3)
Timeout
Java
• Timeouts allowed on wait, join (but not on
entering synchronized code)
Ada
• Timeouts (including “conditional” calls that
check and continue without blocking) allowed
on entry calls, but not for acquiring a lock
POSIX
• Timeouts on wait, join, and mutex lock
Periodic / sporadic real-time tasks / threads
Java
• RTJEG
• Via release parameters for real-time
thread constructor, with control over
deadline miss / budget overrun
• J-Consortium
• Via event handlers
Ada
• Via loop on absolute delay (or rendezvous from
dispatching task)
POSIX
• Via loop on relative sleep method
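A minimal C sketch of the POSIX idioms above: a condition wait with an absolute two-second timeout, and a periodic loop built on a relative sleep (the names ready, action, and period are placeholders):

#include <pthread.h>
#include <time.h>
#include <errno.h>

pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
int ready = 0;                            /* placeholder condition */

int wait_with_timeout(void)               /* returns 0 or ETIMEDOUT */
{
    struct timespec deadline;
    int rc = 0;

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 2;                 /* absolute deadline: now + 2 s */

    pthread_mutex_lock(&m);
    while (!ready && rc != ETIMEDOUT)
        rc = pthread_cond_timedwait(&cv, &m, &deadline);
    pthread_mutex_unlock(&m);
    return ready ? 0 : rc;
}

/* Periodic activity via a loop on a relative sleep (drift accumulates) */
void periodic_loop(void (*action)(void), const struct timespec *period)
{
    for (;;) {
        action();
        nanosleep(period, NULL);
    }
}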
-71-
Periodic RealtimeThread in Real-Time
Specification for Java
import javax.realtime.*;   // RTSJ classes: RealtimeThread, PriorityParameters, ...

class Position{ double x, y; }

class Sensor extends RealtimeThread{
  final Position ps;

  Sensor(Position p){
    super(
      new PriorityParameters(
        PriorityScheduler.instance().getMinPriority() + 15),
      new PeriodicParameters(
        null,                     // when to start (null means now)
        new RelativeTime(100, 0), // 100 ms period
        new RelativeTime(20, 0),  // 20 ms cost
        new RelativeTime(90, 0),  // 90 ms deadline
        null,                     // no overrun handler
        null));                   // no miss handler
    ps = p;
  }

  public void run(){
    while ( true ){
      double x = InputPort.read(1);              // application class
      double y = InputPort.read(2);              // application class
      synchronized(ps){ ps.x = x; ps.y = y; }    // update position
      try { this.waitForNextPeriod(); }
      catch (InterruptedException e) { return; }
    }
  }
}

class Test{
  public static void main(String[] args){
    Position p = new Position();
    Sensor s = new Sensor(p);
    s.start();
    ...
    s.interrupt(); // terminate s
  }
}
-72-
Periodic Task in Ada
-- Assumes appropriate context clauses in the enclosing unit, e.g.
--   with System; with Ada.Real_Time; use Ada.Real_Time;
--   with Ada.Dynamic_Priorities; use Ada.Dynamic_Priorities;

type Proc_Ref is access procedure;

task type Periodic is
   entry Init (Prio   : System.Priority;
               Period : Ada.Real_Time.Time_Span;
               Action : Proc_Ref;
               Start  : Ada.Real_Time.Time);
end Periodic;

task body Periodic is
   Prio      : System.Priority;
   Period    : Ada.Real_Time.Time_Span;
   Action    : Proc_Ref;
   Next_Time : Ada.Real_Time.Time;
begin
   accept Init (Prio   : System.Priority;
                Period : Ada.Real_Time.Time_Span;
                Action : Proc_Ref;
                Start  : Ada.Real_Time.Time) do
      -- The formals hide the task-body locals, so qualify with the task name
      Periodic.Prio   := Prio;
      Periodic.Period := Period;
      Periodic.Action := Action;
      Next_Time       := Start;
   end Init;
   Set_Priority (Prio);             -- from Ada.Dynamic_Priorities
   loop
      delay until Next_Time;        -- absolute delay avoids cumulative drift
      Action.all;                   -- invoke the periodic action
      Next_Time := Next_Time + Period;
   end loop;
end Periodic;
-73-
Other Real-Time Support
Java
RTJEG
• Access to raw memory, physical memory
J-Consortium
• Low-Level I/O
• Unsigned integer conversions / comparisons
Ada
Storage management
• Not an issue as in Java, since GC not required
• Programmer can arrange reclamation via
Unchecked_Deallocation or memory pools
• Controlled types (user-defined finalization)
possible but may compromise predictability
Restrictions that facilitate more efficient or
high-integrity run-time library
POSIX
Control over thread contention scope (per process or
per system)
Process-oriented concurrency mechanisms
-74-
Comparison of Real-Time Support
Points of difference
Real-time scheduling support is optional for a
POSIX implementation
RTSJ provides an extensible framework
J-Consortium spec provides flexible scheduling
options
Both sets of real-time Java extensions need to
cope with storage management, i.e., what to do
about garbage collection
Methodology / reliability
“Absolute” delay in Ada and the two RT Java
specs helps meet deadlines
Flexibility / generality
Ada is more restrictive than POSIX and both of
the real-time Java specs
• Policies are partition-wide
• Bias toward priority ceiling protocol
Efficiency
Queueless protected objects can be implemented
efficiently
-75-
Conclusions - Ada
Advantages
Software engineering
• Portability / standardization
• Encapsulation
• Abort-deferred region
Flexibility
• Comprehensive / general set of features
• Only one of the three languages to include
rendezvous
Practical concerns
• Implementations exist
• Efficiency
Disadvantages
Ada not as popular as other languages
Some run-time error conditions not required to
be detected
Common idioms should be in standard
Conservative mechanisms may be restrictive
• Per-partition scheduling policies
• Non-blocking protected operations
-76-
Conclusions - Java
Advantages
Language popularity
Applicable to dynamic real-time domains
RTJEG
• Flexible, dynamic scheduling framework
• Support for periodic activities with overrun /
miss detection and handling, async events
• Control over memory areas
J Consortium
• Good performance
• Certain constructs require analyzable code
Disadvantages
Not a standard
Real-Time support not yet implemented
Performance questions
Requires programmer to pay attention to memory
allocations
RTJEG
• ATC is complex
J Consortium
• Model is not easy to grasp (kernel-like
facilities external to Java Virtual Machine)
• Relationship to the Java language not clear
-77-
Conclusions - POSIX and Recommendations
Advantages
Language independent, in principle
Implementations exist
Attention to resource cleanup
Flexible approach to thread cancellation
C-based spec has large potential audience
Disadvantages
Many opportunities for undetected errors
• Dangling references
• Type mismatches (casts to/from void*)
Nonportabilities
• Implementation dependences
• Optional or incompatibly supported features
Clash of process and thread oriented features
Bottom line
If you need something that works today: Ada or
POSIX
If you need something that reduces the
likelihood of undetected programmer error: Ada
or Java
If you need something in wide use: POSIX (and
perhaps some day one of the Java RT specs)
If you need code portability: Ada or Java
If you need something flexible / dynamic: Java
(especially the RTSJ)
-78-
References (1)
General
Best overall resource
A. Burns and A. Wellings; Real-Time Systems and
Programming Languages (3rd ed.); Addison
Wesley, 2001; ISBN 0-201-72988-1
Comparison Papers
B. Brosgol and B. Dobbing; “Real-Time
Convergence of Ada and Java”; to be presented
at SIGAda 2001 Conference, Minneapolis, MN;
October 2001
B. Brosgol; “A Comparison of the Concurrency and
Real-Time Features of Ada and Java”; Proc. of
Ada UK Conference, Bristol, UK; October 1998.
Ada
Ada 95 Reference Manual, International
Standard ANSI/ISO/IEC-8652:1995; Jan. 1995
Ada 95 Rationale (The Language, The Standard
Libraries); January 1995
J. Barnes; Programming in Ada 95 (2nd ed.);
Addison-Wesley, 1998; ISBN 0-201-34293-6
Current research reported in proceedings of
annual ACM SIGAda and Ada Europe Conferences
General Ada Web site: www.acm.org/sigada
-79-
References (2)
Java
J. Gosling, B. Joy, G. Steele, G. Bracha; The Java
Language Specification (2nd ed.); Addison Wesley,
2000; ISBN 0-201-31008-2.
S. Oaks and H. Wong; Java Threads (2nd edition);
O’Reilly, 1999; ISBN 1-56592-418-5.
D. Lea; Concurrent Programming in Java (2nd ed.);
Addison Wesley; 2000; ISBN 0-201-31009-0
G. Bollella, J. Gosling, B. Brosgol, P. Dibble, S. Furr,
D. Hardin, M. Turnbull; The Real-Time
Specification for Java; Addison Wesley, 2000;
ISBN 0-201-70323-8
International J Consortium Specification; Real-Time Core Extensions, Draft 1.0.14, September
2000. Available at www.j-consortium.org
POSIX
ISO/IEC 9945-1: 1996 (ANSI/IEEE Standard
1003.1, 1996 Edition); POSIX Part 1: System
Application Program Interface (API) [C Language]
D. Butenhof; Programming with POSIX Threads;
Addison Wesley, 1997; ISBN 0-201-63392-2
-80-