COMP 493
SENIOR PROJECT REPORT
SPRING 2006
IMPLEMENTATION OF VYRD+
Driving Refinement and Atomicity Checking with Java PathFinder
Project Students
Selçuk ATLI
Nihal DİNDAR
Under Supervision of
Assist. Prof. Serdar TAŞIRAN
TABLE OF CONTENTS
1. Abstract
2. Introduction
2.1 Project Description
2.2 Scope and Limitations of the Project
3. Background Information
3.1 Correctness Criteria
3.1.1 Atomicity Checking
3.1.1.1 Definition of Atomicity
3.1.1.2 Atomizer Algorithm
3.1.2 Refinement Checking
3.2 JPF
3.2.1 JPF Listeners
3.2.2 JVM
3.2.3 Java Bytecodes
3.2.4 Search Mechanisms
4. VyrdPlus
4.1 Configuring VyrdPlus
4.2 Logging
4.2.1 Instruction log item
4.2.2 Commit log item
4.2.3 Backtrack log item
4.2.4 State log item
4.3 Replaying
4.3.1 Log tracing
4.3.2 Managed JVM
4.3.3 Backtracking
4.3.4 Checkpointing
4.4 Checkers
4.4.1 Interacting with Checkers
4.4.2 Refinement Checker
4.4.2.1 Checker
4.4.2.2 Checkpointing
4.4.3 Atomicity Checker
4.4.3.1 Design & Implementation
4.4.3.2 Data Structures
4.4.3.3 Checkpointing
4.5 Debugger
4.5.1 Features
4.5.2 Architecture
5. Evaluation
5.1 Experiments with VyrdPlus
5.2 Performance results
5.2.1 Performance results for Refinement Checker
5.2.2 Performance results for Atomicity Checker
6. Conclusions
7. How to Run
8. Acknowledgement
9. References
1 - ABSTRACT
This report describes VyrdPlus, a project that provides modular tools for detecting
concurrency bugs. VyrdPlus provides tools for checking atomicity and refinement criteria for
concurrently-accessed software components. Essentially, VyrdPlus combines runtime
verification and model checking, and thereby automates the tracking and analysis of
concurrent executions. It supports a spectrum of correctness criteria, including atomicity, I/O
refinement and view refinement.
We have implemented VyrdPlus so that it verifies individual software components,
each of which is a separate Java class called a target. The main task of VyrdPlus is to check
the implementation of the public methods of the target against a spectrum of correctness
criteria.
VyrdPlus incorporates a number of dynamic error checking algorithms and allows new
algorithms to be integrated into the system in a modular manner. Each of these algorithms is
implemented as a concurrency checker. VyrdPlus is a fully automated toolbox that drives the
target software through all the concurrency checkers in parallel. During the checking process,
VyrdPlus drives the target through all execution states in the coverage and verifies them
against the checking criteria.
The main working mechanism of VyrdPlus is as follows: it first records the execution
traces of the target, driving it along all execution states in the coverage area. The main
mechanism then re-traces this execution, and as the trace is replayed, the concurrency
checkers verify the target in parallel.
The project is built on the open source code of Java PathFinder (JPF). JPF is a system
to verify executable Java bytecode programs. In its basic form, it is a Java Virtual Machine
(JVM) used as an explicit-state software model checker, systematically exploring all potential
execution paths of a program to find violations of properties like deadlocks or unhandled
exceptions. VyrdPlus uses and is extensively dependent on JPF.
2 – INTRODUCTION
Many aspects of our lives depend on large-scale software systems whose components
interact with each other concurrently. Consider a banking application that handles all of its
clients’ account transactions concurrently. Every client needs to assume that they are the only
client using the system at that instant. However, the system should handle the clients
concurrently, and the resulting state of the accounts should be as if all clients had made their
transactions sequentially.
Performance requirements of programs like databases, file systems and many
industrial and non-industrial applications create the need for mechanisms that enable
concurrent access to their data structures. However, the highly concurrent nature of these
applications makes them prone to functional errors, and makes errors difficult to detect and
fix due to the interleaving of the threads that share the data. The criticality of the failure of
such systems leads to the necessity of software verification tools.
One method of software verification is model based checking. In model checking, one
specifies the system in a formal language, such as SMV or PROMELA, along with formal
specifications that indicate desirable properties of the system that one wants verified. In SMV,
properties are specified in a temporal logic called CTL. The model checker then determines
whether the properties hold under all possible execution traces. Essentially, model-checking
exhaustively (but intelligently) considers the complete execution tree.
Another method of software verification is execution-based checking. With this
approach, execution possibilities are traced at run time and the properties are checked against
each trace. However, such systems have limited applicability due to state-space explosion
problems.
The software verification method that this project is based on is a new scalable
runtime analysis technique called refinement checking.
2.1 - Project Description
Our project is the implementation of an extensible verification framework, Vyrd+, on
top of JPF that provides modular tools for detecting concurrency bugs in component-based
software. Vyrd+ is designed to verify individual software components, each of which is
implemented as a separate Java class, so that the correctness of the entire system can be
tested. Each component to be verified is called a target of our tool. Vyrd+ checks the
implementation of the public methods of the target with respect to a spectrum of correctness
criteria, which will be explained in detail in the background information section. Generally
speaking, these criteria ensure the absence of common concurrency errors such as wrong
synchronization of accesses to shared variables.
The key feature of Vyrd+ is that it gathers different concurrency checkers around
the same framework, combining them with the runtime checking technique. A model checker
supporting partial-order methods is used to drive the test programs and generate all
qualitatively distinct execution traces of the programs due to thread interleaving. This makes
our runtime technique achieve improved coverage over pure testing. We currently use Java
PathFinder (JPF) [3], which is an explicit-state model checker that executes Java programs
as-is in a customized Java virtual machine (JVM). Details of JPF and its usage in VyrdPlus
will be explained in the background information section.
In addition to this, Vyrd+ includes a range of dynamic error checking algorithms and
also allows new algorithms to be integrated in a modular way. One of the most important
parts of Vyrd+ is its set of separate modules called concurrency checkers. These checkers
implement separate algorithms to perform the appropriate checks. Vyrd+ standardizes and
provides mechanisms for the instrumentation, annotation and runtime monitoring tasks
required by these modules. Vyrd+ also automates the application of the concurrency checkers
to automatically generated execution traces of the target, which makes Vyrd+ very modular.
The current version of Vyrd+ supports checking race freedom using the Eraser algorithm,
atomicity using the Atomizer algorithm, and I/O and view refinement using the algorithms
introduced by Elmas et al. These algorithms provide thorough global monitoring and
improved observability over pure testing, giving our tool significantly improved potential for
catching concurrency errors.
Besides the concurrency checkers, Vyrd+ provides a user-friendly debugger. Thus, the
user can observe the execution trace of the concurrency checkers and analyze the test
application easily.
During the model checking of a test program, the entire execution tree of the program
is recorded in a sequential log. By reading from this log, Vyrd+ dispatches runtime events to
the concurrency checkers and provides the runtime information necessary for them to run
their verification procedures. The communication between the modules is accomplished by
sharing a common log. For example, the model checker and the concurrency checkers can be
run simultaneously, or the latter can be run offline after the model checking terminates.
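The log-based dispatching described above can be sketched as follows. This is an illustrative, self-contained sketch in plain Java; the class and method names are ours and do not correspond to the actual Vyrd+ internals, which store richer log items (see the Logging section).

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (names are ours) of the shared-log idea: the model
// checker appends runtime events to a sequential log, and each concurrency
// checker consumes the same log independently, online or offline.
public class SharedLogSketch {
    record LogItem(String kind, String detail) {}   // e.g. "instruction", "commit"

    interface Checker { void consume(LogItem item); }

    private final List<LogItem> log = new ArrayList<>();

    void append(String kind, String detail) { log.add(new LogItem(kind, detail)); }

    /** Replays the whole log into a checker; can run after model checking ends. */
    void dispatch(Checker checker) { log.forEach(checker::consume); }
}
```

Because the checkers only read the log, any number of them can consume the same trace, either while it is still being written or offline once model checking has finished.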
2.2 - Scope and Limitations of the Project
The main use of Vyrd+ is to detect concurrency bugs, using the strength of the
combination of runtime verification and model checking to automate the monitoring and
analysis of concurrent executions with improved coverage. Vyrd+ is implemented on top of
JPF and uses its JVM. Therefore, Vyrd+ depends on the execution of JPF at least during the
logging phase, and any bugs in JPF can significantly affect it. In addition, Vyrd+ has only
been tested on the Windows XP and Linux operating systems, and only with small programs.
It may produce unexpected output on different platforms.
3 – BACKGROUND INFORMATION
3.1 – The Correctness Criteria
VyrdPlus incorporates a number of dynamic error checking mechanisms and
algorithms against which our target program is verified. We refer to these algorithms as
concurrency checkers. There are two concurrency checkers currently embedded in VyrdPlus:
the Refinement Checker and the Atomicity Checker. In order for the test software to pass the
correctness criteria, all the execution states traversed must be successfully verified by all the
embedded concurrency checkers. The two concurrency checking algorithms currently
embedded in VyrdPlus are explained in the following subsections. VyrdPlus is extensible, so
that other concurrency checkers can be developed and embedded into the system modularly.
3.1.1 - Atomicity Checking
3.1.1.1 Definition of Atomicity
Atomicity is a non-interference property. A method (or, in general, a code block) is
atomic if for every (arbitrarily interleaved) program execution, there is an equivalent
execution with the same overall behavior in which the atomic method is executed serially,
that is, the method’s execution is not interleaved with actions of other threads. Figure 1 is an
example of an atomic block: the thread first acquires a lock, then accesses a race-free variable
and releases the lock. The execution of the thread cannot interleave with actions of other
concurrent threads.
public void inc()
{
    int t;
    synchronized(this)
    {
        t = i;
        i = t + 1;
    }
}
Figure 1: Example of atomic method
3.1.1.2 – The Atomizer Algorithm
Vyrd+ uses the Atomizer algorithm defined in [7] to detect atomicity violations. The
Atomizer algorithm synthesizes Lipton’s theory of reduction and Eraser’s Lockset algorithm,
so these two algorithms are reviewed in the following subsections.
3.1.1.2.1 Reduction
The theory of reduction is based on the concept of right-mover and left-mover actions.
If a path through a code block contains a sequence of right-movers, followed by at most one
non-mover action and then a sequence of left-movers, this path can be reduced to an
equivalent serial execution with the same behavior. This implies that the path can be executed
without any interference from concurrent threads.
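The reducibility condition above can be phrased as a simple pattern check over a block's actions: right-movers and both-movers, then at most one non-mover, then left-movers and both-movers. The following self-contained sketch (ours, not part of Atomizer) checks that pattern:

```java
import java.util.List;

// Illustrative sketch (not part of Vyrd+): checks whether a sequence of
// mover classifications matches the reducible pattern R* [N] L*,
// where both-movers (B) may appear anywhere.
public class Reducibility {
    enum Mover { RIGHT, LEFT, BOTH, NON }

    static boolean isReducible(List<Mover> path) {
        boolean seenNonOrLeft = false;  // true once we pass the commit point
        for (Mover m : path) {
            switch (m) {
                case BOTH:  break;                            // commutes both ways
                case RIGHT: if (seenNonOrLeft) return false; break;
                case NON:   if (seenNonOrLeft) return false;  // at most one non-mover
                            seenNonOrLeft = true; break;
                case LEFT:  seenNonOrLeft = true; break;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // acquire (R), two protected accesses (B), release (L): reducible
        System.out.println(isReducible(List.of(
            Mover.RIGHT, Mover.BOTH, Mover.BOTH, Mover.LEFT)));   // true
        // acquire after a release inside the block: not reducible
        System.out.println(isReducible(List.of(
            Mover.RIGHT, Mover.LEFT, Mover.RIGHT, Mover.LEFT)));  // false
    }
}
```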
Definitions
First, consider the definitions of right-mover, left-mover, both-mover and non-mover
actions. An action b is a right-mover if, for every execution where the action b performed by
one thread is immediately followed by an action c of a concurrent thread, swapping the
actions b and c does not change the resulting state, as shown in figure 2. The lock acquire
operation is a right-mover action.
Figure 2: Commuting actions. Swapping the adjacent actions b and c of two different
threads leads from the initial state ∑0, through either ∑1 or ∑1’, to the same final state ∑2.
An action c is a left-mover if, whenever c immediately follows an action b of a
different thread, swapping the actions b and c does not change the resulting state. Each lock
release operation is classified as a left-mover action.
For reads and writes of a shared variable: if the variable is protected by some lock that
is held on each access to it, then two threads can never access the variable at the same time,
and each access to that variable is classified as a both-mover. If the variable is not
consistently protected by some lock, then each access to it is classified as a non-mover.
Consider the atomic method of figure 1 as an example of how reduction makes it
possible to verify atomicity. In the inc method, the thread acquires a lock (a right-mover),
then reads and writes protected variables (both-movers) and releases the lock (a left-mover).
Therefore, there exists an equivalent serial execution in which the operations of this path are
not interleaved with operations of other threads, so the execution path is atomic.
∑0 acq(this) ∑1 b1 ∑2 t = i ∑3 b2 ∑4 i = t + 1 ∑5 b3 ∑6 rel(this) ∑7
reduces to the equivalent serial execution
∑0 b1 ∑1’ acq(this) ∑2’ t = i ∑3’ i = t + 1 ∑4’ rel(this) ∑5’ b2 ∑6’ b3 ∑7
Figure 3: Reduced execution sequence of the inc() method in figure 1
3.1.1.2.2 Lockset Algorithm
The Lockset algorithm is used to detect race conditions. It enforces the simple locking
discipline that every shared variable is protected by some lock, which must be held by a
thread whenever it accesses the variable. Since it is impossible to know in advance which
locks protect which variables, this information is inferred from the execution history as
follows. In the remainder of this section, the Eraser Lockset algorithm is described.
Let’s begin with the first version of the Lockset algorithm:
For each shared variable v, C(v) is the set of candidate locks of v.
For each thread t, locks_held(t) is the set of locks held by thread t.
For each v, initially C(v) is the set of all locks.
On each access to v by thread t:
    set C(v) := C(v) ∩ locks_held(t);
    if C(v) = {} then issue a warning.
The simple locking discipline gives false race alarms in the following cases:
1) Initialization. Shared variables are frequently initialized without holding a lock.
2) Read-shared data. Some shared variables are written during initialization and are read-only
thereafter. It is safe to access them without a lock.
3) Read-write locks. Read-write locks allow multiple readers to access a shared variable, but
allow only a single writer.
The Eraser Lockset algorithm is extended with a state transition mechanism to avoid false
alarms in the initialization and read-shared data cases.
Figure 4: State transition of Eraser Lockset algorithm.
Virgin: When a variable is first allocated, its state is Virgin, which means that the data is new
and no thread has accessed it.
Exclusive: When the data is first accessed, it enters the Exclusive state, which means that the
data is accessed by only one thread; therefore, read and write operations on that variable by
this thread do not change the variable’s state, and C(v) is not updated. This prevents false
alarms due to the initialization issue.
Shared: When the data is read by a second thread, its state changes to Shared, which means
that the data was written during initialization only and is read-only thereafter. In this state,
C(v) is updated, but data races are not reported even if C(v) becomes empty. This prevents
false alarms due to the read-shared data issue, for it is safe to read a read-only variable
without a lock.
Shared-Modified: When the data is written by another thread in either the Shared or the
Exclusive state, its state changes to Shared-Modified, which means that the data is read and
written by multiple threads. In this state, C(v) is updated and races are reported.
The Eraser Lockset algorithm is further extended to avoid false alarms due to read-write
locks. The checking differs slightly from the simple Lockset algorithm once a variable enters
the Shared-Modified state.
For each thread t, locks_held(t) is the set of locks held in any mode by thread t.
For each thread t, write_locks_held(t) is the set of locks held in write mode by thread t.
For each v, initially C(v) is the set of all locks.
On each read access to v by thread t:
    set C(v) := C(v) ∩ locks_held(t);
    if C(v) = {} then issue a warning.
On each write access to v by thread t:
    set C(v) := C(v) ∩ write_locks_held(t);
    if C(v) = {} then issue a warning.
Thus, the Eraser Lockset algorithm avoids false race alarms due to initialization,
read-shared data and read-write locks.
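The complete Eraser discipline, with the state machine of figure 4 layered over the candidate-set intersection, can be sketched for a single shared variable as follows. This is an illustrative sketch; the class name and the representation of locks as strings are our own simplifications, and "C(v) initially all locks" is modelled lazily by initializing the set on the first intersecting access:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the Eraser Lockset discipline for one shared
// variable (names are ours, not Vyrd+ internals).
public class LocksetSketch {
    enum State { VIRGIN, EXCLUSIVE, SHARED, SHARED_MODIFIED }

    State state = State.VIRGIN;
    Long firstThread = null;        // id of the first accessing thread
    Set<String> candidates = null;  // C(v); null stands for "all locks"

    /** Returns true if this access raises a race warning. */
    boolean access(long thread, boolean isWrite, Set<String> locksHeld) {
        switch (state) {
            case VIRGIN:
                state = State.EXCLUSIVE;    // first access ever
                firstThread = thread;
                return false;
            case EXCLUSIVE:
                if (thread == firstThread) return false;  // no C(v) update
                state = isWrite ? State.SHARED_MODIFIED : State.SHARED;
                break;
            case SHARED:
                if (isWrite) state = State.SHARED_MODIFIED;
                break;
            default:
                break;                      // stays SHARED_MODIFIED
        }
        // C(v) := C(v) ∩ locks_held(t)
        if (candidates == null) candidates = new HashSet<>(locksHeld);
        else candidates.retainAll(locksHeld);
        // Warn only once the variable is read and written by multiple threads.
        return state == State.SHARED_MODIFIED && candidates.isEmpty();
    }
}
```

In this sketch a second thread reading the variable moves it to Shared without a warning, and a later unsynchronized write moves it to Shared-Modified and empties C(v), which triggers the race report.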
3.1.1.2.3 Atomizer Algorithm
Definition
Φ is a function (t → {InRight, InLeft}) ∪ (x → 2^Lock), where t ranges over threads and x
over variables. When thread t is inside an atomic block, Φ(t) indicates whether the thread is
in the right-mover or the left-mover part of that atomic block; initially Φ0(t) = InRight for all
threads t. Φ(x) stores the candidate locks for x. As described in the Lockset algorithm
section, each variable has a pair of lock sets:
i) the access-protecting lock set, which contains the locks held on every access to that
field; initially Φ0_A(x) = all locks for every variable x;
ii) the write-protecting lock set, which contains the locks held on every write to that
field; initially Φ0_W(x) = all locks for every variable x.
For a thread t, the function H(t, σ) denotes the set of locks held by thread t in state σ.
σ(P(x)) = t means that thread t holds the appropriate lock to access the shared variable x.
For a thread t, the function A gives the number of atomic blocks that are currently
active. This count is set to zero in the initial state, increased by one when the thread enters an
atomic block and decreased by one when it exits an atomic block. A(∏(t)) is equal to 0 when
thread t is not inside an atomic block, and nonzero when thread t is inside an atomic block.
Each state of the instrumented program is represented by the triple (σ, Φ, ∏), and the
transition labelled with an action a of thread t denotes the instrumentation store obtained
whenever thread t performs operation a.
Rules
There are generally four types of actions performed by threads: acquire lock, release lock,
enter atomic block and access shared variable. The rules for each case are described
separately.
1) Acquire Lock: The acquire instruction is generally a right-mover action, as stated in the
Reduction section. On the other hand, if a lock is used by only a single thread, or an object is
first initialized by one thread and then the ownership of both the object and its associated
lock is transferred to another thread, acquires and releases of that lock are both-movers.
When a lock is used by multiple threads, a lock acquire inside an atomic block is allowed
when the thread is in the right-mover part of the block. Outside an atomic block, a lock
acquire does not change the state of the thread: see [INS ACQUIRE]
Figure 5: Rule for lock acquire
When a thread is inside an atomic block and in the left-mover part of the block, a lock acquire
operation violates atomicity, since it is a right-mover operation and may not come after a
left-mover operation: see [WRONG ACQUIRE]
Figure 6: Rule for lock acquire that causes atomicity violation
2) Release Lock: The release instruction is a left-mover action, as stated in the Reduction
section. Inside an atomic block, a lock release is allowed in each case: when the thread is in
the right-mover part of the atomic block, the lock release changes the state of the thread from
right to left and is a valid operation, and when the thread is in the left-mover part of the
block, the lock release does not change the state. Outside an atomic block, a lock release does
not change the state of the thread: see [INS RELEASE]
Figure 7: Rule for lock release
3) Enter Atomic Block: When a thread enters an atomic block, its state is set to InRight: see
[INS ENTER]
Figure 8: Rule for enter atomic block
4) Access Variable: Since one of the base algorithms is the Eraser Lockset algorithm, each
shared variable has a lock set pair consisting of a read lock set and a write lock set. Read and
write actions are handled separately.
a. Read access: A read action is considered a both-mover if the thread holds the
appropriate read locks for the variable. Otherwise, the read access is a non-mover.
When the thread is not inside an atomic block, or its state is InRight, a non-mover
access changes the state of the thread to InLeft. When the thread is inside an atomic
block and its state is InLeft, the read operation violates atomicity.
b. Write access: A write action is considered a both-mover if the thread holds the
appropriate write locks for the variable. Otherwise, the write access is a non-mover.
When the thread is not inside an atomic block, or its state is InRight, a non-mover
access changes the state of the thread to InLeft. When the thread is inside an atomic
block and its state is InLeft, the write operation violates atomicity.
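The four rules can be collected into a small per-thread state machine. The sketch below is our own condensation of the rules, not Atomizer's actual implementation; whether an access is race-free (a both-mover) or not (a non-mover) is assumed to be decided by the Lockset component and is passed in as a flag:

```java
// Illustrative per-thread sketch of the Atomizer phase machine described
// above (class and method names are ours).
public class AtomizerThread {
    enum Phase { IN_RIGHT, IN_LEFT }

    private int atomicDepth = 0;        // A: nesting depth of atomic blocks
    private Phase phase = Phase.IN_RIGHT;

    void enterAtomic() { atomicDepth++; phase = Phase.IN_RIGHT; }  // [INS ENTER]
    void exitAtomic()  { atomicDepth--; }

    /** Lock acquire is a right-mover: illegal in the left-mover phase. */
    boolean acquire() {                 // returns true on atomicity violation
        return atomicDepth > 0 && phase == Phase.IN_LEFT;  // [WRONG ACQUIRE]
    }

    /** Lock release is a left-mover: always allowed, flips the phase. */
    void release() {
        if (atomicDepth > 0) phase = Phase.IN_LEFT;        // [INS RELEASE]
    }

    /** A race-free access is a both-mover; otherwise it is a non-mover. */
    boolean access(boolean raceFree) {  // returns true on atomicity violation
        if (raceFree || atomicDepth == 0) return false;
        if (phase == Phase.IN_LEFT) return true;  // non-mover in the left part
        phase = Phase.IN_LEFT;          // the non-mover acts as the commit point
        return false;
    }
}
```

For example, acquiring a lock after a release inside the same atomic block is flagged, because the thread has already entered the left-mover phase.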
3.1.2 – I/O and View Refinement
Refinement checking works by repeated comparisons between the concurrent
execution trace of a program and its atomic execution trace. The problem and its solution are
in fact very similar to the bank account application described in the introduction. Refinement
checking considers two versions of the execution trace: the concurrent execution trace (the
implementation) and the atomized execution trace (the specification). The implementation is
the actual concurrent execution trace of the test class, in which the instructions of different
transactions are interleaved.
The specification execution trace simulates how the test class would have executed if
the calls to the transactions had been sequential. A commit point is specified for each
transaction. When we come across the commit point of a specific transaction in the
implementation’s execution trace, we call that transaction in the specification. We therefore
have a concurrent execution trace and a sequential execution trace whose behavior
approximates the concurrent version of the test program. Checking then proceeds by
comparing the implementation and the specification at commit points.
There are two checking methods used in the comparison between the implementation
and the specification, both of which are applied. One of them is I/O checking and the other is
view checking. With I/O checking, we compare the resulting values of the same transaction at
its commit point in both the implementation and the specification. In a correct concurrent
execution of a test program, the resulting values of each transaction should be equal in both
execution traces.
In addition, view checking is done at commit points. There are two representations:
ViewI (for the implementation) and ViewS (for the specification). The view variable is the
canonical representation of the abstract data structure’s state at the moment the commit
action is taken. View checking enables early detection of any deviation between the
implementation and the specification, even when the difference would go undetected by the
I/O checking method.
3.1.3 – The Vyrd Algorithm
Methodologically, what is done in refinement checking is that the specification and the
implementation are driven together by the information extracted from the execution of the
test program. At the commit point of a transaction, the return values of that transaction in the
specification and the implementation are compared for equality. Also at the commit point, the
abstract states of the specification and the implementation are compared for equivalence. If
the program is verified against these two criteria successfully, then the transaction passes
refinement checking.
Vyrd+ communicates with the tested program through a logging mechanism, which
will be explained in more detail in the following sections. Put simply, the log is the data
structure into which the execution information of the test program is put for later re-tracing.
The replaying mechanism of Vyrd+ sequentially reads the execution trace of the
implementation execution, then applies the same actions on the specification, transaction by
transaction. Vyrd+ achieves this by making use of the commit points (see the previous
section) that are logged for each transaction during the execution of the implementation. At
each commit point, the transaction that the commit point belongs to is executed on the
specification side. In this way, a transaction-by-transaction execution that follows the order
of the commit points in the implementation execution is obtained in the specification.
Just as described in the previous section, at each of these commit points there are two
checking criteria: view checking and I/O checking. The abstract states provided by an
abstraction function are compared for equivalence (view checking), and the return values of
the transaction that the commit point belongs to are compared for equality (I/O checking). If
both of these criteria are passed, the state is deemed correct for refinement checking in
Vyrd+.
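At a commit point, the two checks can be sketched as follows. The Component interface and all names here are illustrative stand-ins, not the Vyrd+ API: run() replays one transaction on the atomized specification, and view() is the abstraction function that produces the canonical state representation:

```java
import java.util.Objects;

// Illustrative sketch of the two checks Vyrd+ performs at a commit point
// (interfaces and names are ours, not the actual Vyrd+ API).
public class RefinementCheck {
    interface Component {
        Object run(String method, Object... args);  // replay one transaction
        Object view();                              // canonical abstract state
    }

    /** At a commit point, replay the transaction on the specification and
     *  compare both the return values (I/O checking) and the canonical
     *  states (view checking). */
    static boolean checkCommit(Component spec, String method, Object[] args,
                               Object implResult, Object implView) {
        Object specResult = spec.run(method, args);
        boolean ioOk = Objects.equals(implResult, specResult);
        boolean viewOk = Objects.equals(implView, spec.view());
        return ioOk && viewOk;
    }
}
```

A transaction passes refinement checking only when both comparisons succeed; a view mismatch can flag an error even while the return values still agree.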
3.2 - The Java Pathfinder Model Checker (JPF)
As previously stated in the abstract, VyrdPlus is heavily dependent on Java PathFinder
(JPF). In fact, the code for the tool is immersed in the source code of JPF. In particular, the
JVM (Java Virtual Machine) implementation in the JPF source code, which we will refer to
as JJVM, and similar structures in JPF have been used extensively in the project, and some
required alterations to the source code of JPF were made.
JPF is a system to verify executable Java bytecode programs. In its basic form, it is a
Java Virtual Machine (JVM) that is used as an explicit state software model checker. JPF
executes the given program not just once (like a normal VM), but theoretically in all possible
ways, checking for property violations like deadlocks or unhandled exceptions along all
potential execution paths. If it finds an error, JPF reports the whole execution that leads to it.
Figure 9 – The JPF mechanism depicted as a diagram
As can be seen from figure 9, when running JPF we create a configuration object that
is composed of the name of the main class to be tested, the arguments to the test class, and
the type of search strategy that we want JPF to follow. JPF has various search strategies to
traverse the state space of the test program, such as depth-first search, path search, random
search and simulation. The search strategies that are used in VyrdPlus will be explained in
the Search Mechanisms subsection below.
After the configuration object is specified, JPF is run. JPF passes this information to
the JVM code inside JPF (JJVM), and an execution of the test program occurs within JPF. At
the end of the execution trace, JPF prints out relevant information or reports any errors that
occurred.
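For current jpf-core distributions, the same three pieces of configuration information are typically supplied in an application-properties file; the 2006-era JPF used in this project passed them as command-line arguments instead, so the file below is illustrative of the idea rather than of the exact interface used here, and the target class name is hypothetical:

```properties
# Hypothetical .jpf application-properties file for a current jpf-core build.
target = mypackage.BankAccountTest
target.args = 4,100
search.class = gov.nasa.jpf.search.DFSearch
```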
Any tool or application developed on top of JPF can make use of two interfaces that
JPF provides: the VMListener and SearchListener interfaces. In the following subsection
these two listener interfaces are explained in more detail.
3.2.1 - JPF Listeners:
The tool uses the VMListener interface to obtain execution information from JPF on
the fly. Among the notifier methods that VMListener provides, the following were used:
public void gcEnd(Vm vm): JPF notifies its listener with this method when the
execution trace is complete. Within the arguments it provides the JJVM from which
we can obtain further information.
public void instructionExecuted(Vm vm): JPF notifies its listener with this method
when an instruction (Java byte code) is executed inside JJVM. Within the arguments it
provides the JJVM from which we can obtain further information.
The tool also uses the SearchListener interface to listen to the state search events
during the execution of the code in JPF. SearchListener instances are used to monitor the state
space search process, e.g. to create graphical representations of the state-graph. They provide
notification methods for all major Search actions. The notifiers that are being used extensively
are listed below:
public void stateProcessed(Search search): JPF notifies its listener with this method
when a new execution state is processed. Within the argument information like the
current state id is provided.
public void stateBacktracked(Search search): JPF notifies its listener with this
method when the search algorithm of JPF backtracks to a previous state. The
information on the state that is backtracked to is provided in the argument.
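The way a tool plugs into these notifications can be sketched with simplified stand-ins for the two interfaces. The interfaces below are our own reductions of JPF's VMListener and SearchListener (the real interfaces declare more methods and richer argument types), modelling only the notifiers listed above:

```java
// Self-contained sketch of how a tool hooks into JPF-style notifications.
public class ListenerSketch {
    interface VMListener {
        void instructionExecuted(Object vm);  // one bytecode was executed
        void gcEnd(Object vm);                // execution trace completed
    }
    interface SearchListener {
        void stateProcessed(int stateId);     // a new state was processed
        void stateBacktracked(int stateId);   // search backtracked to a state
    }

    // A counting listener in the spirit of VyrdPlus' logging phase.
    static class CountingListener implements VMListener, SearchListener {
        int instructions, states, backtracks;
        public void instructionExecuted(Object vm) { instructions++; }
        public void gcEnd(Object vm) { /* flush the log here */ }
        public void stateProcessed(int stateId) { states++; }
        public void stateBacktracked(int stateId) { backtracks++; }
    }
}
```

VyrdPlus registers such listeners with JPF, so every executed instruction, processed state and backtrack becomes an event the tool can log.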
3.2.2 – JJVM: The JPF-specific Java Virtual Machine
As we have stated in the previous sections, JPF has a built in JVM package that we
refer to as JJVM. The tool uses JJVM and Java Byte Codes extensively, therefore a discussion
of these tools and how our project uses them is beneficial.
JJVM is the portion of JPF’s code that simulates the behavior of the actual Java
Virtual Machine. The Java virtual machine is an abstract computing machine; it knows
nothing of the Java programming language, only of a particular binary format, the class file
format. A class file contains Java virtual machine instructions (or bytecodes) and a symbol
table, as well as other ancillary information.
The following diagram depicts how JPF and JJVM components can be reached from
each other through a tree diagram:
Figure 10 – The reachability of JJVM components from JPF
I derived the diagram in figure 10 by debugging an execution of JPF and examining
the source code, and found it useful while working with the code. The diagram depicts how
the main components in JPF and JJVM can be reached from each other through the code.
Although there are some other connections (for instance, KernelState can also be reached
from ThreadList), one way of reaching each component is depicted in the figure. Short
descriptions of the components in the figure are as follows:
JPF: The main component of Java Path Finder. It encapsulates all of the components
and is responsible of running the sample code on Java Path Finder.
JJVM: Although depicted as JVM in the source code, we refer to it as JJVM for a
clean description. This component represents the virtual machine.
SystemState: It encapsulates not only the current execution state of the VM, but also
some part of its history and some annotations that can be used to control the search.
KernelState: This component represents the stack of JJVM.
DynamicArea: This component is the heap, the runtime data area from which
memory for all class instances and arrays is allocated. Therefore, the garbage
collection mechanism resides here.
Static Area: This component is the memory area for static fields.
ThreadList: It contains the list of threads.
ThreadInfo: This represents a thread. It contains the state of the thread, static
information about the thread, and the stackframes.
Stack: A Java virtual machine stack stores frames. It holds local variables and partial
results, and plays a part in method invocation and return.
StackFrame: A stack frame is used to store data and partial results, to perform
dynamic linking, to return values for methods, and to dispatch exceptions. A new frame is
created each time a method is invoked. A frame is destroyed when its method
invocation completes. Each frame has its own array of local variables, its own operand
stack, and a reference to the runtime constant pool of the class of the current method.
3.2.3 – Java Bytecodes
Every method to be executed are composed of instructions that can be
interleaved in a concurrent execution of the program. In our case, these instructions are called
Java Bytecodes. We use Java bytecode instructions one by one, in our implementation
execution (see section 2.2) portion of the project. Therefore a brief discussion of Java
bytecodes is required.
The class file binary format consists of Java bytecode instructions, an assembly-like
instruction set for the JVM. Basic instructions such as LOAD, STORE, AND, POP, NEW,
RETURN and INVOKE are all represented as Java bytecodes. For each bytecode, the JJVM
code contains a class that handles the actions needed to simulate that instruction. The
following are some Java bytecodes, relevant to our project, and their descriptions:
Load and Store Instructions:
Load a variable: iload, lload, fload, dload, aload
Store a variable: istore, lstore, fstore, dstore, astore
Load a constant: bipush, sipush, iconst, fconst, ...
The first letter of these instructions indicates the type family for which the instruction
was implemented. For instance, “iload” loads an integer, while “dstore” stores a
double.
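This naming convention can be captured with a small helper. The class below is our own illustration, not part of JPF or the JVM, and it only decodes the single-letter prefixes listed above:

```java
// Hypothetical helper: decodes the type-family prefix of a typed
// load/store mnemonic, mirroring the convention described above.
class BytecodePrefix {
    static String typeFamily(String mnemonic) {
        switch (mnemonic.charAt(0)) {
            case 'i': return "int";
            case 'l': return "long";
            case 'f': return "float";
            case 'd': return "double";
            case 'a': return "reference";
            default:  return "unknown";   // e.g. bipush, sipush
        }
    }
}
```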
Arithmetic Instructions:
Add: iadd, ladd, fadd, dadd.
Subtract: isub, lsub, fsub, dsub.
Multiply: imul, lmul, fmul, dmul.
Divide: idiv, ldiv, fdiv, ddiv.
Negate: ineg, lneg, fneg, dneg.
Bitwise OR: ior, lor.
Bitwise AND: iand, land.
Bitwise exclusive OR: ixor, lxor.
Local variable increment: iinc.
The arithmetic instructions compute a result that is typically a function of two values
on the operand stack, pushing the result back on the operand stack.
Object Creation and Manipulation instructions:
Create a new class instance: new
Create a new array: newarray, anewarray, multianewarray
Access fields or classes: getfield, putfield, getstatic, putstatic
Although both class instances and arrays are objects, the Java virtual machine creates
and manipulates class instances and arrays using distinct sets of instructions.
Operand Stack Management instructions:
Some instructions: pop, pop2, dup, dup2, dup_x1, dup2_x1, dup_x2, dup2_x2, swap
A number of instructions are provided for the direct manipulation of the operand stack.
Method Invocation instructions:
invokevirtual: invokes an instance method of an object, dispatching on the type of the
object. This is the normal method dispatch in the Java programming language.
invokeinterface: invokes a method that is implemented by an interface, searching the
methods implemented by the particular runtime object to find the appropriate method.
invokespecial: invokes an instance method requiring special handling, whether an
instance initialization method, a private method, or a superclass method.
invokestatic: invokes a class (static) method in a named class
An important point to consider when using invoke instructions is the following. The
first parameter in any method invocation except invokestatic is the reference number
of the object upon which the method is called. There is no such parameter in
invokestatic because it is a static method.
Method Return instructions:
Some instructions: ireturn, lreturn, freturn, dreturn, areturn, return
Again, the first letter indicates the type that the instruction returns. In addition, the
plain return instruction is used to return from methods declared void, from instance
initialization methods, and from class or interface initialization methods.
Method invocation and method return instructions were especially instrumental for our
project, because we needed to catch when certain methods in the specification were executed
and when the resulting values for these methods were returned. Therefore, we needed to
check each executed instruction (bytecode) and handle the case appropriately when the
instruction was an invoke instruction or a return instruction.
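The check just described can be sketched on mnemonic strings alone. The real implementation inspects JPF Instruction objects; this standalone version, with names of our own choosing, only shows the classification logic:

```java
// Sketch of the invoke/return classification we perform while replaying.
// Works on mnemonic strings; the real code works on JPF Instruction objects.
class InstructionKind {
    static boolean isInvoke(String mnemonic) {
        // covers invokevirtual, invokeinterface, invokespecial, invokestatic
        return mnemonic.startsWith("invoke");
    }
    static boolean isReturn(String mnemonic) {
        // covers ireturn, lreturn, freturn, dreturn, areturn and plain return
        return mnemonic.endsWith("return");
    }
}
```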
3.2.4 – Search Mechanisms
There are several state search algorithms implemented in JPF, on which our tool
depends. Using these search algorithms, JPF goes through the execution states of the test
program in several ways. The search algorithms implemented in JPF are DFS search, path
search and random search. Our tool depends mainly on DFS and path searches:
DFS Search: This is the conventional method of depth first search. It forms a tree like
execution path containing all the states that can be traversed in the coverage area.
These states are traversed using a depth first search algorithm.
Path Search: This is not actually a search mechanism. When it is used, the execution
is traced linearly, just as a normal VM would execute it.
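The DFS traversal over an execution-state tree can be sketched as follows. This is a minimal stand-in, with our own names, for what JPF's depth-first search does over states; it is not JPF's API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;

// Minimal depth-first traversal over an execution-state tree, analogous
// to the way JPF's DFS search visits states (names are illustrative).
class DfsSketch {
    static List<Integer> dfs(Map<Integer, List<Integer>> children, int root) {
        List<Integer> visitOrder = new ArrayList<>();
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            int state = stack.pop();
            visitOrder.add(state);
            List<Integer> kids = children.getOrDefault(state, List.of());
            // push in reverse so the first child is explored first
            for (int i = kids.size() - 1; i >= 0; i--) stack.push(kids.get(i));
        }
        return visitOrder;
    }
}
```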
4 – VyrdPlus
Figure 11: Vyrd+ diagram (labels: JPF running the Tests and Target, Log Writer, log, Log Reader, Replay & Search Engine, Debug, Vyrd+ sending notification events to the checkers (Checker 1, Checker 2, ...), and the VyrdVM holding the Target Class and Spec Class)
The diagram above displays the general execution of Vyrd+. As stated before, Vyrd+
is built on JPF and uses JPF as a model checking tool in order to create the log. First, the test
classes are executed by JPF and a sequential log of the execution is created. After that, the log
is read by the log reader. Each time the log reader reads a log item, the item is sent to Vyrd+.
Vyrd+ notifies the corresponding checkers and waits for any violation they report. In case of
debugging, the debug module displays the execution of the checkers on a per-thread basis,
according to the sequential log read by the log reader. The VyrdVM is used to re-execute the
application up to a selected line.
The key feature of VyrdPlus is that it combines different concurrency checking
algorithms around a framework that uses a runtime checking technique. We can basically
separate the application into two modules: the testing module and the verification module.
During the model checking of the test program, its entire execution tree (all
possible execution states traversed) is recorded into a sequential log to be replayed during
verification. This process, logging, is the fundamental function of the testing module. The
testing module of Vyrd+ feeds the test program execution into JPF and drives it through all
possible thread interleavings. Vyrd+ then logs this execution making use of JPF’s listener
interfaces.
The driving of the test program according to the sequential log, and its verification
against the checking criteria, is the fundamental function of the verification module
(replaying). The task of the verification module is to read the execution trace from the log and
dispatch this information to the concurrency checkers embedded in the system. Vyrd+ also
replays the instructions stored in the log on a separate JVM. After the replay of each
instruction, or when a special event like thread creation occurs, the required
information is supplied to the concurrency checkers. The concurrency checkers then
check against their individual checking criteria.
The following sections describe several mechanisms that have been used and that form
the Vyrd+ tool.
4.1 - Configuring VyrdPlus
The Vyrd+ tool is configured through a configuration file. The configuration file
mainly provides Vyrd+ with information about the target class, the internal and external log
files, the test class, and the list of checkers and their status. In order to set up the whole
system, the paths of the target class, the spec of the target class and the tester are needed.
Besides these, the paths of the internal and external log files are also set here, so that the user
can reach the logs easily and change their location. In addition, the test class path is given
here, which makes it possible to test the application with different testers easily. Finally, the
checkers are listed together with their status (enabled or disabled), so that the user can check
an application against the desired checkers separately; this also provides modularity by
making the integration of new checkers into the whole system easy.
With the help of the configuration file, the JPF arguments are created and JPF is
initialized, along with the checkers, log files and the VyrdVM. The user can modify the
configuration file from the Vyrd+ panel, create a new configuration file, or load an existing one.
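A configuration file of this kind could be read with `java.util.Properties`. The key names below are illustrative only; the actual Vyrd+ configuration format and keys may differ:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Sketch of reading a Vyrd+-style configuration with java.util.Properties.
// Keys such as target.class / checker.refinement are assumptions, not the
// real Vyrd+ key names.
class ConfigSketch {
    static Properties load(String text) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(text));   // parse key=value lines
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return p;
    }
}
```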
4.2 – Logging
During the model checking of a test program, in the testing module, the entire
execution tree of the program, consisting of interleaved sequences of Java bytecode level
instructions, is recorded to a sequential log as LogItem objects. This is what makes the testing
module function. Vyrd+ uses Java object serialization to save and retrieve LogItem objects
to/from a binary file, which we call the log.
Vyrd+ drives the test class through all possible execution states of the program. The
tool forms a tree of execution states and records this execution tree onto a sequential log.
Logging enables the communication between the framework and the checkers through a
common log.
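The serialization round trip can be sketched as follows. `LogItem` here is a simplified stand-in for the real class hierarchy, and the byte array stands in for the log file:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch of the Java object-serialization round trip used for the log.
class LogSerializationSketch {
    static class LogItem implements Serializable {
        final int threadId;
        LogItem(int threadId) { this.threadId = threadId; }
    }

    static byte[] writeLog(LogItem[] items) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             ObjectOutputStream out = new ObjectOutputStream(bos)) {
            for (LogItem it : items) out.writeObject(it);   // serial append
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) { throw new RuntimeException(e); }
    }

    static LogItem readFirst(byte[] log) {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(log))) {
            return (LogItem) in.readObject();   // items come back in order
        } catch (Exception e) { throw new RuntimeException(e); }
    }
}
```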
Figure 12 – Sequential representation of the execution state tree (left panel: the tree representation of the execution space; right panel: how the execution state space is represented on the sequential log, with backtrack items such as BT1 and BT4)
The log keeps execution information Java bytecode by Java bytecode (instruction by
instruction). These instructions belong to an execution in which the threads are interleaved.
In addition to instruction information, some extra information like commit log items and
backtrack log items is recorded in the log. These log item types and what they keep are
explained in the following subsections.
Figure 13 – Diagram depicts the working mechanism of Log Writing
4.2.1 – Instruction LogItem
The instruction log item keeps all the information we need to replay an individual
instruction. Therefore, when logging instruction by instruction, we keep each instruction's
information in an instruction log item and write these items serially onto the log.
The Instruction object built in the JJVM is not serializable and has many
connections to other structures in the system, so we cannot put an Instruction object as-is into
the LogItem. However, we can store some values from which we can obtain the Instruction
object through JPF at any later time. In order to construct a LogItem from an instruction we
listen to, we need the following data:
Thread id: We need to store the index of the thread in which the instruction is executed. By
using this thread id, when doing the re-execution of the log we will know in which thread to
execute this instruction.
Class name: We need to store the name of the class where the instruction resides. By using
JPF structures we can obtain the ClassInfo (see section 2.6) using the class name. Among
other uses this will be beneficial for obtaining the MethodInfo.
Unique Method name: We need to store the unique method name of the method in which the
instruction executes. Having the ClassInfo at hand, we can use this method name to obtain the
MethodInfo.
Instruction offset number: We need to store the instruction offset number. This number
represents the number of instructions between this instruction and the first instruction in the
method, or simply the order rank of the instruction in the method. Having the MethodInfo at
hand we can use the offset number to obtain the specific Instruction object that we need.
The data explained above is kept for each instruction log item, so that the encapsulated
instruction can be replayed in the verification module.
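The four pieces of data above map naturally onto a small serializable class. This is a sketch with field names of our own choosing, not the verbatim Vyrd+ class:

```java
import java.io.Serializable;

// Sketch of an instruction log item holding exactly the four replay fields
// described above (field names are illustrative).
class InstructionLogItem implements Serializable {
    final int threadId;        // thread in which the instruction executed
    final String className;    // class containing the instruction
    final String methodName;   // unique method name within that class
    final int offset;          // rank of the instruction within the method

    InstructionLogItem(int threadId, String className,
                       String methodName, int offset) {
        this.threadId = threadId;
        this.className = className;
        this.methodName = methodName;
        this.offset = offset;
    }
}
```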
4.2.2 – Commit LogItem
A commit log item represents the commit points determined for each transaction (see
section 3.2). When the execution of the test class is logged, at the commit point of each
transaction a commit log item is inserted into the log; it keeps information that is useful to
some concurrency checker modules, such as the Refinement Checker.
A commit log item keeps the depth and return value information of the transaction to
which it belongs. The very existence of a commit log item helps the concurrency checkers
replay the interleaved execution transaction by transaction, in the order of the commit log
items read. Another use of the commit log item is that it keeps the return value of
the transaction it belongs to. This information is used to compare the return values
of the specification and the implementation in the I/O checking of the Refinement
Checker. This will be thoroughly explained in the Refinement Checker subsection.
4.2.3 – Backtrack LogItem
The backtrack log item is instrumental in flattening the tree structure of the execution
path into a sequential log. As we can see in figure 12, an execution state tree forms while the
execution of the test program is driven and logged. We need a way to store this
execution tree onto a sequential log, and our method of achieving this is the
backtrack log item.
JPF uses a depth first search algorithm to drive our test program through its
execution states, and that is how an execution state tree is formed. Whenever we move
upwards in the execution state tree, we mark the sequential log with a backtrack log item, in
which we store the information required to backtrack to the node from which we will resume
running.
Therefore, we keep the depth and the id of the execution state that we are going to
backtrack to. How we make use of this backtrack log item is explained in the following
sections.
4.2.4 – State LogItem
The state log item keeps the execution state we are currently in. Each time our
execution state changes, we write a state log item that encapsulates the current state id and the
depth of our position in the execution state tree.
4.3 – The Replaying Mechanism
As previously explained, the replaying mechanism is basically the verification module
of the Vyrd+ tool. Its fundamental functionality is that the stored log
is read log item by log item, and the test program is re-driven according to this information.
For this reason, the re-driving is exactly the same execution as the one previously
logged. Then, as we re-execute the test program log item by log item, at each step we
make sure that the program is in accordance with our checking criteria. For this purpose, at
each step we get feedback from the embedded concurrency checkers in parallel. Below you
find explanations of several mechanisms used in the replaying functionality.
4.3.1 – Log Tracing
In the replaying mechanism, as explained previously, we obtain the information
necessary for the replay from the log. Therefore, we needed a mechanism that
reads through the sequential log, parses the log items and acts accordingly. We use the Java
object serialization mechanism to read and write log item objects to the log file. Thus,
when tracing the log we read the log file log item by log item and inform the main Vyrd+
module about the log items read.
Vyrd+ receives each log item read and treats each log item type accordingly (see
section 4.2). When we come across an instruction log item, we execute it via the
Managed JVM (see section 4.3.2), inform the concurrency checkers about the
instruction we executed, and let them process it.
If the received log item is a state log item, then we extract the execution state tree
depth and the state id that the state log item encapsulates. Vyrd+ needs this information
because it follows the tree that the execution states form; the information is also needed
by the backtracking and checkpointing mechanisms that are explained in the
following sections.
If the received log item is a backtrack log item, then the backtracking mechanism is
started and the execution state is backtracked to the state whose information is encapsulated in
the log item. The backtracking mechanism will be explained thoroughly in the sections below.
4.3.2 – Managed JVM
At certain points a concurrency checker might need to perform additional
operations on the JJVM (see section 3.2), such as creating objects or calling methods of
existing objects. Vyrd+ provides an abstraction layer over the JJVM with an easy-to-use
mechanism for common operations like method invocations and object creation on the
JJVM. This is called the Managed JVM in Vyrd+.
The Managed JVM encapsulates a JJVM, plus several procedures that form an abstraction
layer over it. Below you find the kinds of operations the Managed JVM provides to its
users.
You can initialize the JVM through a simple call to the Managed JVM. You can execute
an instruction on the Managed JVM by providing the instruction and the thread on which it
executes. You can obtain an Instruction object by providing the Managed JVM with the
instruction’s class name, method name and instruction offset number (see section 3.2.2).
You can create an object by providing its class name. You can execute a method by
providing the object reference of the owner object, the method name and the arguments of the
method being called. You can get a class by providing its name, and you can get an
object by providing its object reference number.
Among the other operations provided by the Managed JVM are cloning an
object, executing the equals operation between two objects, restarting the JVM, and
checkpointing operations (see section 4.3.4) such as storing and loading checkpoints.
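The operation set above can be summarized as a facade interface. The method names and signatures below are illustrative; the real Vyrd+ code works directly against JPF's VM classes:

```java
// Sketch of the Managed JVM facade; the method set mirrors the operations
// listed above, but every signature here is an assumption, not the real API.
interface ManagedJvm {
    void initialize();
    void executeInstruction(Object instruction, int threadId);
    Object getInstruction(String className, String methodName, int offset);
    int createObject(String className);             // returns an object reference
    Object executeMethod(int objRef, String methodName, Object[] args);
    int cloneObject(int objRef);
    boolean objectsEqual(int refA, int refB);       // deep equals between objects
    void storeCheckpoint(int stateId);
    void loadCheckpoint(int stateId);
}
```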
4.3.3 – Backtracking
Just as we proposed a mechanism to store the execution state tree in a sequential
manner in the logging module, we also need a mechanism to unfold this sequential log back
into an execution state tree to be followed. At this point the backtracking mechanism comes
into the picture. As explained in the logging section, when we decrease our tree depth in
the depth-first traversal of the execution state tree, we put a backtrack log item in the log,
which in a sense encapsulates the information of the state we are backtracking to.
And as explained in section 4.3.1, while tracing the log we may come across a
backtrack item. At this point there are two mechanisms Vyrd+ can use to backtrack to the
given state. The first is the standard rollback algorithm explained here. The other,
more efficient, algorithm is restoring a checkpointed state, which is explained in the
section below. Let us confine ourselves to the standard backtrack algorithm in this section.
A very basic explanation of the rollback mechanism: when we come across a
backtrack log item, we simply restart the program and drive it up to the point we
want to backtrack to. But there are certain details and points that we should be careful
about when doing the rollback.
When we do the rollback, we restart the Vyrd+ replay mechanism and temporarily
disable the checking mechanism (because we have already checked up to the point we
are backtracking to), then start moving towards the backtrack point (state). When we
reach that point, we skip the log until the state we backtracked from and re-enable
checking. Put simply: we moved to the start, moved to the backtrack destination without
checking, then skipped until the point we backtracked from. The figure below demonstrates
this process more clearly:
Figure 13 – Diagram depicting the basic mechanism of rollback (A: start; B: backtrack destination; C: backtrack from; D: backtrack from + 1). In process I, the log is read from A to C; at C we come across a backtrack log item. In process II, we restart the log and go to A. In process III, we disable checking and move towards B, our backtrack destination. Next, in process IV, we skip until the state we backtracked from plus one. The rollback process is now accomplished.
An extra detail that we should be careful about in the algorithm is keeping track of the
backtrack information. This caused us some problems while developing the algorithm,
and we solved them by recording the backtrack information. The problem is that, because we
repeat the backtrack process many times, we may come across many backtrack log items
along the way that were used in previous backtrack operations.
Before we do a backtrack, we record its information: we keep a hash map that maps the
state ids from which we backtrack to the log item ids to which we backtrack. When we record
the backtrack information, we update this structure in the following way:
- If we have not stored any backtrack information for this state, we simply store the new
backtrack information in the structure.
- If we have stored backtrack information for this state, we check the following:
  - If the stored backtrack leads to a point that is before the point we now want to
backtrack to, we store the new backtrack information over the previously stored
information.
  - If the stored backtrack leads to a point that is after the point we now want to
backtrack to, we do not store anything.
So, when backtracking, after we have restarted the log and are moving
towards the backtrack destination from the start, if we come across a state log item whose
state is saved in the data structure above, we skip to the state we had backtracked
from. This is how we move in the tree while taking previously backtracked tree nodes into account.
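The update rule above can be sketched as a map from the state id we backtrack from to the latest log position we ever jumped to. Class and method names here are our own, not the Vyrd+ sources:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the backtrack bookkeeping rule: an entry is replaced only when
// the new target lies after the stored one; earlier targets are ignored.
class BacktrackInfo {
    private final Map<Integer, Integer> targetByState = new HashMap<>();

    void record(int fromStateId, int targetLogItemId) {
        Integer stored = targetByState.get(fromStateId);
        if (stored == null || targetLogItemId > stored) {
            targetByState.put(fromStateId, targetLogItemId);
        }
    }

    Integer targetFor(int fromStateId) {
        return targetByState.get(fromStateId);   // null if never backtracked
    }
}
```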
Although the rollback mechanism always works, it is a very demanding process:
at each branching of the execution tree this process is called, and each call traverses the log
from the start. Therefore, a helper mechanism that trades memory for time is used in our
backtracking system. This checkpointing mechanism is explained in
the following section.
4.3.4 – Checkpointing
Checkpointing is a mechanism we developed that achieves backtracking
without the overhead of a rollback operation; with checkpointing, the overhead moves to
extra memory use. The idea is that, instead of walking the log again
to bring Vyrd+ back to a previous state, we can store in memory the whole memory-block
information of states that might come in handy. This way, when we need to backtrack to
a state, we can simply load the previously saved state instead of moving back and forth in
the log.
However, just as rollback is CPU demanding, checkpointing is memory
demanding. Therefore, we needed a mechanism to store a limited number of checkpoint
states. We achieved this behaviour by implementing a circular data set. This structure
exhibits the properties of a set combined with those of a bounded queue: it
keeps memory states like a set, but only retains the latest entries (e.g. the latest 20).
In order to achieve this structure, we keep a set that contains the checkpointed
states saved in memory. As we store checkpoints, we insert this memory state data
into our set. In parallel, we keep a queue that contains at most a
certain number of elements (e.g. 20). Whenever we add to the set, we add to the queue
as well. When the number of elements in the queue exceeds that limit, we dequeue
from the queue and, in parallel, remove the dequeued element from the set as well.
This is how we keep a certain number of the latest checkpoint entries, and thereby
parameterize how much memory the checkpointing operation consumes.
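The set-plus-bounded-queue structure just described can be sketched as follows (class and method names are ours):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Sketch of the "circular data set": a set of checkpointed state ids paired
// with a bounded queue, so only the latest `capacity` entries survive.
class CheckpointCache {
    private final Set<Integer> states = new HashSet<>();
    private final Deque<Integer> order = new ArrayDeque<>();
    private final int capacity;

    CheckpointCache(int capacity) { this.capacity = capacity; }

    void store(int stateId) {
        if (states.add(stateId)) {          // set semantics: ignore duplicates
            order.addLast(stateId);
            if (order.size() > capacity) {
                states.remove(order.removeFirst());   // evict the oldest entry
            }
        }
    }

    boolean contains(int stateId) { return states.contains(stateId); }
}
```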
So, when backtracking, if the state we are going to backtrack to has previously been
stored in our set structure, we use the checkpointing mechanism to restore it.
However, if the state we want to backtrack to has no stored checkpoint, we fall back to the
rollback mechanism described in the previous section. Thus, if we
increase the parameter described in the previous paragraph, we decrease the CPU time
consumed in backtracking while increasing memory consumption; if we decrease the
parameter, the overhead moves from memory to CPU.
Where we do the checkpointing is also important. We store checkpoints at branching
points in the execution state tree, because these are the points we need to
backtrack to. So, after every rollback operation, we store a checkpoint.
Another important checkpointing issue is synchronizing the checkpointing
operation among the concurrency checkers. Every concurrency checker keeps different
kinds of information as execution state, and what to store and restore is left to the
concurrency checker implementers. We refer to the checkers as checkpoint listeners and
communicate with them through a CheckpointListener interface.
When we need to perform a store-checkpoint operation, Vyrd+ checks with all the
checkpoint listeners to see whether they are all available to make a checkpoint. If any of them
is unavailable, the checkpoint storage operation is skipped in order to keep the system
consistent. If all the listeners are available, a store-checkpoint command is delivered to all
of them with the required data. Likewise, when a checkpoint needs to be restored, all the
checkpoint listeners are sent a restore-checkpoint command.
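The all-or-nothing protocol above can be sketched as follows. The interface method names are illustrative; the real CheckpointListener interface in Vyrd+ may declare different signatures:

```java
import java.util.List;

// Sketch of the checkpoint coordination protocol: a checkpoint is stored
// only if every listener reports it is available.
interface CheckpointListener {
    boolean canCheckpoint();
    void storeCheckpoint(int stateId);
    void restoreCheckpoint(int stateId);
}

class CheckpointCoordinator {
    static boolean tryStore(List<CheckpointListener> listeners, int stateId) {
        for (CheckpointListener l : listeners) {
            if (!l.canCheckpoint()) return false;   // skip to stay consistent
        }
        for (CheckpointListener l : listeners) l.storeCheckpoint(stateId);
        return true;
    }
}
```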
Every checker has a different method of checkpointing according to their own
algorithms. These checkpointing methods will be explained together with the algorithms in
section 4.4.
4.4 – Checkers
The sole purpose of Vyrd+ is to check the concurrent execution of test programs
against our concurrency checking criteria. In the verification module of the project, while
Vyrd+ replays the execution, the execution is checked by communicating the execution
states to the concurrency checkers. The concurrency checkers verify each execution state and
instruction against their own checking criteria. This verification process is driven in parallel
by the core Vyrd+ module. Below you can find how this interaction occurs, together with
the design of the currently embedded concurrency checkers of Vyrd+: the refinement
checker and the atomicity checker.
4.4.1 – Interacting with Checkers
The main Vyrd+ module interacts with the concurrency checkers through several
interfaces. One of them is the CheckpointListener interface, which was explained in section
4.3.4. The other, which is the main communication mechanism between the main module
and the concurrency checkers, is the EventListener interface. All concurrency checkers
implement this interface.
When the execution state tree is replayed by the main module, several notable events
occur that the concurrency checkers need to know about. The checkers are notified,
with the relevant and required information, when each instruction is executed, when each
object is created or released, when each thread is started or ended, when a method call or
return occurs, when an exception is thrown, when a class is loaded, and when each log item is
read. The corresponding concurrency checkers handle these events according to their
checking criteria and algorithms.
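The event set just listed can be made concrete as an interface sketch. The method names and parameter types below are assumptions of ours, not the verbatim Vyrd+ EventListener:

```java
// Sketch of the event interface through which the main module drives the
// checkers; one method per event listed above, with illustrative signatures.
interface EventListener {
    void instructionExecuted(Object instruction, int threadId);
    void objectCreated(int objRef);
    void objectReleased(int objRef);
    void threadStarted(int threadId);
    void threadEnded(int threadId);
    void methodInvoked(int threadId, String className, String methodName);
    void methodReturned(int threadId, Object returnValue);
    void exceptionThrown(int threadId, String exceptionClass);
    void classLoaded(String className);
    void logItemRead(Object logItem);
}
```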
4.4.2 – Refinement Checker
You can find the theoretical explanation of refinement checking and the I/O and view
refinement methodology in section 3.1.2. In this section the technical details of the
Refinement Checker implemented in Vyrd+ are explained. As previously explained in the
background information section, a specification (spec) and an implementation (imp) of the
test program are driven in parallel. The TargetClass encapsulates these two, and the
Refinement Checker drives them in parallel.
Figure 14 – Example Interaction between Implementation, Specification and Checker at commit points
4.4.2.1 – Checking
The implementation contains the interleaved multithreaded execution of the test
program, and it is driven, instruction by instruction, by the sequence of instruction log items
read from the log. As previously explained, there are commit points logged for each
transaction (method). While driving the implementation, at each commit point we
call the transaction to which that commit point belongs on the specification side. This can be
done cleanly using the Managed JVM described in section 4.3.2. Therefore, a transaction-
by-transaction execution, in the order of the commit points of the implementation, is
maintained on the specification side.
The refinement checker creates two instances of the target class, one for the specification
and one for the implementation, with the help of the Managed JVM. The refinement checker
then drives these two in parallel using the methods the Managed JVM provides. The two
instances are present in the same VM as two objects.
The refinement checker needs to keep all the method- and thread-related information
of the implementation side in structures that can later be compared with the
specification side. Whenever a method invocation event reaches the Refinement
Checker, the relevant information about the method to be executed is encapsulated inside a
MethodExec object. This object keeps refinement checking information such as whether the
method has committed yet, and method-specific information such as parameters, return type
and return values. It also keeps the ids of the states in which the method was called and
committed. How we make use of these values is explained later.
The refinement checker needs to keep the method execution information of the
implementation side in an organized and clean manner in order to check it against the
specification side. Therefore, the refinement checker keeps a hash map with thread ids as
keys and ThreadState objects as values, giving us one ThreadState per thread. Each
ThreadState essentially acts like a stack that contains MethodExec objects as stack frames;
it is as if we keep a call stack for each thread.
For example, when a method invocation event is passed to the checker, we create a
MethodExec from the information at hand and push it onto the ThreadState that
corresponds to that thread. When a commit event or a method return event arrives, we can
easily find the right MethodExec with only the thread id at hand:
we use the hash map to obtain the ThreadState for the given thread id, and the MethodExec
on top of that ThreadState’s stack is the one we need. Just
as a method stack is manipulated, we push and pop the
ThreadState at method calls and returns.
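The bookkeeping just described can be sketched with a map of per-thread stacks. Class and field names are simplified stand-ins for the real MethodExec/ThreadState classes:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of the per-thread bookkeeping: a map from thread id to a stack of
// in-flight method executions.
class RefinementBookkeeping {
    static class MethodExec {
        final String methodName;
        boolean committed = false;
        Object returnValue;
        MethodExec(String methodName) { this.methodName = methodName; }
    }

    private final Map<Integer, Deque<MethodExec>> threadStates = new HashMap<>();

    void onInvoke(int threadId, String methodName) {
        threadStates.computeIfAbsent(threadId, k -> new ArrayDeque<>())
                    .push(new MethodExec(methodName));
    }

    MethodExec onCommit(int threadId, Object returnValue) {
        MethodExec top = threadStates.get(threadId).peek();  // current method
        top.committed = true;
        top.returnValue = returnValue;
        return top;
    }

    MethodExec onReturn(int threadId) {
        return threadStates.get(threadId).pop();             // method completed
    }
}
```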
While we keep these structures for the implementation side, we keep a specification
instance of the test program in the Managed VM (see section 4.3.2). Whenever a method
return occurs on the implementation side, the method information required to call the
method on the specification side is obtained from the MethodExec kept for the
implementation, and with that information the method is executed on the
Managed VM.
Once we know that the transaction has executed on both the specification and the
implementation sides, two criteria are checked: first the I/O refinement, then the
view refinement. In I/O refinement we compare the return values of the transaction on
the specification and the implementation sides. When we execute the method on the Managed
VM, we directly obtain the return value that we need for the specification. At this
point, the MethodExec belonging to the implementation side already contains the return
value of the implementation execution. This is because the commit log item read from the
log contains the return value of the implementation execution (remember that during logging
we run an interleaved execution, which is precisely the implementation execution). So when
the refinement checker receives a commit event, it receives the return value of the transaction
and puts it into the MethodExec that the commit belongs to.
So when the I/O comparison takes place, the return value returned from the
method call on the Managed VM (specification) and the return value residing in the
MethodExec (implementation) are compared for equality.
Then the view refinement takes place. In the managed VM, two instances of
the target class are present: the specification and the implementation, as two objects. After the
I/O refinement, we perform the view refinement. The specification and the implementation each
have a method named “abstractFunction()”, which returns the current abstract state of the test
program as an object. The function is called on both the specification and the implementation
sides with the help of the managed VM, and the returned abstract states are then compared for
equality using the managed VM’s object comparison functionality.
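The view refinement step can be sketched similarly. The tiny multiset below and all its names are illustrative stand-ins; only the abstractFunction() convention follows the report.

```java
import java.util.Set;
import java.util.TreeSet;

// Illustrative only: an object whose abstract state is the set of contained values.
interface Abstractable {
    Object abstractFunction();   // returns the current abstract state
}

class TinyMultiset implements Abstractable {
    private final Set<Integer> elems = new TreeSet<>();
    void insert(int x) { elems.add(x); }
    public Object abstractFunction() { return new TreeSet<>(elems); }
}

public class ViewRefinementSketch {
    // View refinement: the abstract states of the specification and the
    // implementation instances must be equal.
    static boolean viewRefinementHolds(Abstractable spec, Abstractable impl) {
        return spec.abstractFunction().equals(impl.abstractFunction());
    }
}
```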
If both of these criteria hold, we say that the transaction passes the refinement checking
criteria. We repeat this verification for all public methods called on the target class. If all of
the verifications pass, the test program is said to be correct according to our refinement
checking criteria.
4.4.2.2 – Checkpointing
We explained the checkpointing idea and its main functionality in section 4.3.4.
Here the individual checkpointing mechanism of the refinement checker is explained.
The current version of the refinement checker always replies to the main module that it is ok
to store a checkpoint when it is asked for acknowledgement. It also does nothing when it is
told to store a checkpoint, because no extra storage is required for it.
For each MethodExec, the state ids at which the MethodExec was created and at which it
was committed are stored. So, when a request arrives to restore the checkpoint with a certain
id, the following actions take place for all the thread states kept within the refinement
checker.
We keep removing (popping) methods from the ThreadState while the call id (the state id
at which the MethodExec was invoked) is larger than the state id we want to restore to. If the
state id we want to restore to lies between the call id and the commit id (the state id at which
the MethodExec was committed), the MethodExec is altered so that it is no longer marked
committed (because we want to restore to a time when the MethodExec was not yet committed).
And if the state id we want to restore to is greater than the commit state id, we do nothing
and stop. By completing these operations we restore the ThreadStates to the state with the
given state id.
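A sketch of this restore loop, assuming a per-thread stack of simplified MethodExec records; the class name Exec and its fields are ours, chosen to mirror the description above.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative per-thread record: callId/commitId are the state ids at which the
// method was invoked and committed; commitId is Integer.MAX_VALUE until committed.
class Exec {
    final int callId;
    int commitId;
    boolean committed;
    Exec(int callId, int commitId, boolean committed) {
        this.callId = callId; this.commitId = commitId; this.committed = committed;
    }
}

public class RestoreSketch {
    // Restores one thread's stack to the situation at state id 'target'.
    static void restore(Deque<Exec> stack, int target) {
        // Invoked after the target state: the method did not exist yet, drop it.
        while (!stack.isEmpty() && stack.peek().callId > target) {
            stack.pop();
        }
        // Remaining entries, top to bottom: anything committed after the target
        // must be marked not committed again; older commits are left alone.
        for (Exec e : stack) {
            if (e.commitId > target) e.committed = false;
            else break;
        }
    }
}
```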
Figure 15 – Depiction of the checkpoint restore algorithm of the refinement checker. Three
cases are shown for the state to restore: after the commit id (done, nothing changes), between
the call id and the commit id (the MethodExec is set not committed), and before the call id
(the method is popped).
4.4.3 - Atomicity Checker
The Atomicity Checker uses the Atomizer algorithm, which detects atomicity violations in
given code blocks and reports whether the blocks are atomic. These blocks are delimited
by calls to the static methods beginCheck() and endCheck() of AtomicityCaller.java.
AtomicityCaller.java has two static empty methods; they are used only to mark the borders of
atomic blocks.
public final class AtomicityCaller {
    public static void beginCheck() {
        return;
    }
    public static void endCheck() {
        return;
    }
}
As an example, consider the following code, which contains two nested atomic blocks:

AtomicityCaller.beginCheck()      // Block 1 begins
    ...
    AtomicityCaller.beginCheck()  // Block 2 begins
    ...
    AtomicityCaller.endCheck()    // Block 2 ends
    ...
AtomicityCaller.endCheck()        // Block 1 ends

Figure 16: Explanation of atomic blocks
The Atomicity Checker listens to the concurrency-checker interface and is notified at each
instruction executed. To implement the rules stated in section 3.1.1.2, only some
types of instructions are required: lock acquire and lock release operations, shared-variable
read and write operations, thread creation operations, and atomic-block enter and exit
operations. After catching these operations, the Atomicity Checker applies the appropriate rule
to each of them and checks whether atomicity is violated. In the case of an atomicity violation,
it sends an appropriate error message to Vyrd+.
4.4.3.1 - Design & Implementation
The Atomicity Checker can be explained in three main parts:
1) Filtering the required instructions from among all instructions and forwarding them to
the corresponding parts of the program (AtomicityChecker performs this action).
2) Applying the atomicity rules (AtomizerRuleApplier performs this action).
3) Creating the corresponding error messages and the information about the atomic block in
which the violation happened (AtomicBlockCollection and AtomicityViolation perform these actions).
4.4.3.1.1 - Atomicity Checker
AtomicityChecker listens to EventListener, the Concurrency Checker and the Checkpoint
interface. The main duty of AtomicityChecker is filtering the instructions executed by VyrdVM in
order to check the atomicity property of the application, and creating and restoring
checkpoints when requested by VyrdVM. The filtering of instructions is explained in the next
section; the checkpointing mechanism of the Atomicity Checker is explained in the
Checkpointing section.
Filtering of Instructions
The Atomicity Checker class filters the instructions related to the following events and
stores the thread, lock and shared-variable information in three different hash tables.
• Thread creation: After any thread starts, EventListener notifies the Atomicity
Checker. The Atomicity Checker then creates a thread-state-info object, needed to apply
the Atomizer algorithm, and stores it in the thread-state-info hash table.
ThreadStateInfo will be explained in detail in the AtomicityChecker data
structures section.
• Lock acquire and release operations: AtomicityChecker learns that a lock acquire or
release has been performed by a thread with the help of EventListener, which notifies
AtomicityChecker each time an instruction is executed. The JPF source code is modified
at the lock() and unlock() methods of the Monitor class, so that the Atomicity Checker
learns which thread acquires or releases which lock. As the lock, threads store the
reference of the object on which they are locked. On a lock operation, if the lock is held
for the first time, a LockStateInfo is created and added to the LockStateInfo hash table;
otherwise, the existing lock state info is updated. Details of the lock state info are
given in the AtomicityChecker data structures section.
• Shared variable access: Shared-variable accesses are again detected with the help of the
instructionExecuted method of EventListener. The JPF source code is modified in the
putfield and getfield instruction classes, so that the Atomicity Checker learns which
thread accesses which variable. When a variable is accessed by a thread for the first
time, a VarStateInfo is created and stored in the VarStateInfo hash table. Details of the
shared-variable state information are given in the AtomicityChecker data structures
section.
• AtomicityCaller method calls: The Atomicity Checker needs to know when the
beginCheck() and endCheck() methods of AtomicityCaller are called, in order to
determine the start and end of atomic blocks. EventListener’s methodCalled method
sends the corresponding information to AtomicityChecker, which notifies the
AtomizerListener whenever a thread enters or exits an atomic block.
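The filtering described in the four items above can be sketched as a simple dispatch over event kinds. The enum and method below are illustrative; they do not reproduce the actual JPF or Vyrd+ listener interfaces.

```java
// Illustrative event filter in the spirit of AtomicityChecker: only the event
// kinds the Atomizer rules need are forwarded, everything else is dropped.
enum Event {
    THREAD_START, LOCK_ACQUIRE, LOCK_RELEASE,
    FIELD_READ, FIELD_WRITE, BLOCK_ENTER, BLOCK_EXIT,
    OTHER
}

public class FilterSketch {
    // Returns true for the instruction kinds the atomicity rules care about.
    static boolean relevant(Event e) {
        switch (e) {
            case THREAD_START:
            case LOCK_ACQUIRE: case LOCK_RELEASE:
            case FIELD_READ: case FIELD_WRITE:
            case BLOCK_ENTER: case BLOCK_EXIT:
                return true;
            default:
                return false;   // all other instructions are ignored
        }
    }
}
```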
4.4.3.1.2 - AtomizerRuleApplier
AtomizerRuleApplier applies the rules described in section 3.1.1.2.3 in its update()
method. There are two main types of update: shared-variable access operations and lock
operations. If an atomicity violation occurs, the appropriate error is created and the atomic
blocks are notified. A wrong race or a wrong lock acquire can cause an atomicity violation:
in the case of a wrong race a WrongRaceError, and in the case of a wrong acquire a
WrongAcquireError, is created and sent to the AtomicBlockCollection class.
4.4.3.1.3 - AtomicBlockCollection, AtomicBlock
AtomicBlockCollection stores AtomicBlock objects and implements AtomizerListener.
It keeps the AtomicBlock objects in an arraylist called atomicblocks. First, the fields of
the AtomicBlock class are described, to explain the execution of the code clearly.
signature (String): Stores the classname.methodname of the atomic block, such as
“AtomicRaceFree.run”.
enterLineNumber (int): The line number where the atomic block begins.
leaveLineNumber (int): The line number where the atomic block ends; initially
equal to Integer.MAX_VALUE. Both enterLineNumber and leaveLineNumber are
used to decide whether a given error occurred in this atomic block.
threads (HashSet): The set of indexes of the threads that are currently in the atomic block.
errors (ArrayList): Stores the errors that occurred in this atomic block; initially null.
isAtomic (boolean): Indicates whether the block is atomic. Initially all blocks are atomic.
When any thread enters an atomic block, AtomicBlockCollection is notified; if this
atomic block is not already in its atomicblocks list, it is added. When a thread exits an
atomic block, the block’s leaveLineNumber is updated. To handle nested atomic blocks,
AtomicBlockCollection keeps a stack: when a new atomic block is added to the atomicblocks
list, its enterLineNumber is pushed onto the stack, and when an atomic block exits, the
stack is popped.
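A condensed sketch of AtomicBlock and the collection logic described above, including the line-number-based error attribution. The class shapes are illustrative, not the Vyrd+ sources.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Condensed AtomicBlock with the fields listed above (threads set omitted).
class Block {
    final String signature;
    final int enterLineNumber;
    int leaveLineNumber = Integer.MAX_VALUE;   // set when the block is exited
    boolean isAtomic = true;                   // all blocks start out atomic
    final List<String> errors = new ArrayList<>();

    Block(String signature, int enterLine) {
        this.signature = signature; this.enterLineNumber = enterLine;
    }
    boolean contains(int line) {
        return enterLineNumber <= line && line <= leaveLineNumber;
    }
}

public class BlockCollectionSketch {
    final List<Block> blocks = new ArrayList<>();
    final Deque<Block> open = new ArrayDeque<>();   // stack handles nesting

    void enter(String sig, int line) {
        Block b = new Block(sig, line);
        blocks.add(b);
        open.push(b);
    }
    void exit(int line) {
        open.pop().leaveLineNumber = line;          // innermost open block ends
    }
    // Attribute an error to every block whose line range covers it.
    void reportError(String msg, int line) {
        for (Block b : blocks)
            if (b.contains(line)) { b.isAtomic = false; b.errors.add(msg); }
    }
}
```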
When an error occurs, AtomicBlockCollection is notified and the corresponding error is sent
by the AtomizerRuleApplier. AtomicBlockCollection then finds the atomic block in which the
error occurred by checking the line-number range of each block against the line number of the
error. After finding the non-atomic block, it sets that block’s isAtomic field to false.
Consequently, when the analysis ends, AtomicBlockCollection reports the errors to the
AtomizerRuleApplier.
4.4.3.2 - Data Structures of the Atomicity Checker
ThreadStateInfo
As its name suggests, ThreadStateInfo stores the thread information that is
necessary to apply the rules in section 3.1.1.2.3. The following data is required.
threadIndex (int): The thread index is stored in order to know which thread the
information belongs to.
threadLocks (HashSet): The locks held by the thread must be stored to apply the rules. A
HashSet is used for this purpose because it supports the required intersection and
union operations efficiently.
transitionState (int): Each thread has a transition state, either InRIGHT = 0 or InLEFT =
1, according to the reduction algorithm.
numberOfAtomicBlock (int): Used to track the atomic-block nesting level of the thread. It
is initially 0; it is incremented when the thread enters an atomic block and decremented
when the thread exits one.
ss (String): Used to create user-friendly error messages; it stores which class,
which method and which line number of the test class is being executed by the thread.
A ThreadStateInfo object is created with threadIndex and ss as inputs when
AtomicityChecker is notified that a thread has started.
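A compact sketch of ThreadStateInfo with the fields listed above; the field names follow the report, while the methods are our additions for illustration.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of ThreadStateInfo; shape illustrative, field names from the report.
public class ThreadStateInfoSketch {
    static final int IN_RIGHT = 0, IN_LEFT = 1;   // reduction-phase markers

    final int threadIndex;
    final Set<Integer> threadLocks = new HashSet<>(); // references of held locks
    int transitionState = IN_RIGHT;
    int numberOfAtomicBlock = 0;   // nesting depth of open atomic blocks
    String ss;                     // class/method/line info for error messages

    ThreadStateInfoSketch(int threadIndex, String ss) {
        this.threadIndex = threadIndex; this.ss = ss;
    }
    // A new transaction starts in the right-mover phase (our simplification).
    void enterBlock() { numberOfAtomicBlock++; transitionState = IN_RIGHT; }
    void exitBlock()  { numberOfAtomicBlock--; }
    boolean inAtomicBlock() { return numberOfAtomicBlock > 0; }
}
```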
LockStateInfo
As its name suggests, LockStateInfo stores the lock information that is essential to
apply the rules in section 3.1.1.2.3. The following data is required.
lock (int): The lock is stored in order to know which lock the information belongs to.
threads (HashSet): The threads that hold this lock must be stored in order to determine the
lock’s situation: 1) the lock is used by only one thread, 2) the lock is initialized by one
thread but later transferred to a second thread, or 3) the lock is used by many threads. A
HashSet is again used because it supports intersection and union operations efficiently.
state (String): Each lock has a state. It can be “Thread-local”, meaning the lock is used
only by one thread; “Thread-local2”, meaning an object is first initialized by one thread,
which then transfers ownership of both the object and its lock to another thread; or
“Shared-Modified”, meaning the lock is used by many threads. It is stored to implement the
ACQ operation described in section 3.1.1.2.3.
owner (int): The index of the thread that held this lock first; required for changing the
state.
secondthread (int): The index of the thread that held this lock second; also required for
changing the state.
ss (String): Same purpose as described in ThreadStateInfo.
action (int): Either acquire = 1 or release = 0; again used to create user-friendly error
messages.
lastSS (String): Also used to create user-friendly error messages; when an error occurs, it
provides the last valid access line for that lock.
lastAction (int): Either acquire = 1 or release = 0; used for user-friendly error messages
as well. When an error occurs, it provides the last valid action on that lock.
A LockStateInfo object is created with lock and ss as inputs when the
AtomizerItemAnayzer meets the acquire operation of a lock for the first time. LockStateInfo
objects update themselves when the AtomizerItemAnayzer analyzes a LockAcquireItem or a
LockReleaseItem.
VarStateInfo
As its name suggests, VarStateInfo stores the variable information that is needed to
apply the rules in section 3.1.1.2.3. The following data is required.
varCode (String): A unique code that identifies the variable. For static field accesses it
is constructed by concatenating the class hash code, “.”, and the field index; for instance
field accesses, the element index is used instead of the class hash code.
state (String): Each variable has state information, used to implement the Eraser lockset
algorithm described in section 3.1.1.2.3. The initial state of a variable is "Virgin".
ss (String): Same purpose as described in ThreadStateInfo.
action (int): Either READ = 1 or WRITE = 0. As described in the Eraser lockset algorithm,
different lock sets must be maintained for read and write accesses to a variable.
firstThread (int): As the name suggests, the first thread that accessed the variable; used
to implement the state diagram of the Eraser lockset algorithm.
currThread (int): The thread currently accessing the variable; again used to implement
that state diagram.
readLocks (HashSet): Stores the read locks, as described in the Eraser lockset algorithm.
writeLocks (HashSet): Stores the write locks, as described in the Eraser lockset algorithm.
report (boolean): Records whether a race has been detected.
exState (String), lastAcessSS (String), lastThread (int), lastAct (int): These fields are
used to store information about the last access to the variable.
A VarStateInfo object is created with varCode as input when AtomicityChecker meets an
access to a shared variable for the first time. VarStateInfo objects update themselves
according to the atomicity rules in the update method of the AtomizerRuleApplier class.
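The varCode construction described for VarStateInfo can be illustrated as follows; this is a hypothetical helper written for this report, not part of the Vyrd+ sources.

```java
// Illustrative construction of the varCode key: static fields use the class
// hash code, instance fields the element (object) index, each joined with "."
// to the field index.
public class VarCodeSketch {
    static String staticFieldCode(int classHashCode, int fieldIndex) {
        return classHashCode + "." + fieldIndex;
    }
    static String instanceFieldCode(int elementIndex, int fieldIndex) {
        return elementIndex + "." + fieldIndex;
    }
}
```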
4.4.3.3 - Checkpointing
We explained the checkpointing idea and its main functionality in section 4.3.4.
Here the individual checkpointing mechanism of the atomicity checker is explained. The
current version of the atomicity checker always replies to the main module that it is ok to
store a checkpoint when it is asked for acknowledgement, since there is no unsafe state in
which the atomicity checker cannot be checkpointed.
When told to store a checkpoint, the atomicity checker makes copies of its variable-state,
thread-state and lock-state hash tables, by requesting each entry to make its own copy. These
three tables are enough to define the state of the atomicity checker. After a state id is
attached to the copied state, the checkpoint is stored in a hash table under that state id.
When told to restore a checkpoint, the atomicity checker checks whether the wanted state
exists. If the state was stored in the state table, the stored copies are used as the
thread-state, shared-variable and lock-state tables. If the state does not exist, all current
tables are cleared and execution starts from the beginning.
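A sketch of this store/restore scheme, with the per-entry state objects reduced to plain strings for brevity; all names below are ours, not Vyrd+’s.

```java
import java.util.HashMap;
import java.util.Map;

// Checkpointing sketch: the three state tables are copied entry by entry and
// filed under a state id; restoring an unknown id clears everything.
public class AtomicityCheckpointSketch {
    Map<Integer, String> varState = new HashMap<>();
    Map<Integer, String> threadState = new HashMap<>();
    Map<Integer, String> lockState = new HashMap<>();

    private static class Checkpoint {
        final Map<Integer, String> var, thread, lock;
        Checkpoint(Map<Integer, String> v, Map<Integer, String> t, Map<Integer, String> l) {
            var = v; thread = t; lock = l;
        }
    }

    private final Map<Integer, Checkpoint> checkpoints = new HashMap<>();

    void store(int stateId) {
        // The three tables fully define the checker's state; copy each one.
        checkpoints.put(stateId, new Checkpoint(
                new HashMap<>(varState), new HashMap<>(threadState), new HashMap<>(lockState)));
    }

    void restore(int stateId) {
        Checkpoint cp = checkpoints.get(stateId);
        if (cp == null) {
            // State was never stored: clear and start from the beginning.
            varState.clear(); threadState.clear(); lockState.clear();
        } else {
            varState = new HashMap<>(cp.var);
            threadState = new HashMap<>(cp.thread);
            lockState = new HashMap<>(cp.lock);
        }
    }
}
```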
4.5 – The Debugger Mechanism
Vyrd+ has a debugging facility, so that the user can observe the execution trace of the
concurrency checkers and analyze the test application in an easy way. The debugging tool of
Vyrd+ is still in progress and not complete. In the following sections, the features and
architecture of the debugging tool of Vyrd+ are explained.
4.5.1 – Features
The main feature of the debugging tool of Vyrd+ is giving the user the opportunity to
observe the execution trace. Besides functionality, we also aimed to display the execution
trace in a user-friendly way, since Vyrd+ also supports backtracking. The big design problem
we faced was displaying the tree structure of each thread’s execution in an intelligible way.
Our solution was to display the tree structure horizontally instead of vertically, with
branches that can be collapsed; thus the user can expand only the wanted information while
still seeing the big picture. In addition to observing the execution, the user will also be
able to execute the program up to a selected node by clicking it. This is a future plan, and
the current version of the debugging tool does not provide this facility yet.
Figure 17. Interface of Debugging Tool
4.5.2 - Architecture
The debugging tool starts working when the debug button is clicked in the user interface.
VyrdmcDebugger then reads the log information with the help of the log reader and creates
debug items, which become the nodes of the execution trees. Each thread has its own execution
tree; as each new node is created, it is added to the execution tree of its thread. This is a
hierarchical tree whose root displays which thread the tree belongs to. Each node is itself a
tree, where the root displays the method name, the children display lines, and the leaves
display bytecode instructions. This recursive tree structure is implemented using JTree,
since it is easy to manipulate and to attach an action listener to each leaf, which makes
programming the user action for a leaf click very easy.
5. EVALUATION
We have performed several tests and experiments with VyrdPlus, testing a multiset
program and a ray-tracing program. We observed that tracing the whole execution space
(within the coverage area) can be very demanding in terms of CPU consumption. The
checkpointing mechanism we implemented decreases CPU time, but it introduces a memory
overhead. We believe that the stateless search algorithm currently being developed will
greatly increase the efficiency of the tool. The following sections describe the experiments
performed on Vyrd+.
5.1 - Experiments with VyrdPlus
5.1.1 – Testing Vyrd+
We used a multiset class and a ray-tracing program for testing the refinement
checker. The ray-tracing program was a very demanding application (the conventional
ray-tracing algorithm has very high complexity), and it ran out of memory before we saw any
result. We believe that the implementation of stateless search will solve such problems,
because tracing all possible execution states and storing the memory data related to them
becomes a problem when dealing with very resource-demanding algorithms like ray tracing.
The testing of the multiset application was completed successfully both with and without
checkpointing. We also performed several tests on checkpointing, increasing and decreasing
the parameter that controls the maximum number of checkpoints kept. The following section
explains the tests made on the multiset test program and the results observed. The test logs
can be found in .txt format in the “test results” folder provided together with the project
report.
5.2 Performance results
In the following subsections you can find the performance tests and their results
explained and analyzed. The tests and observations explained in 5.2.1 were performed by
Selçuk Atlı and the tests and observations explained in 5.2.2 were performed by Nihal Dindar.
5.2.1 – Performance Results for Refinement Checker
Test Program   Log/Replay   Checkpointed   CP Size   Checkers     Execution time
Multiset       Log          –              –         none         734 sec
Multiset       Replay       No             –         none         47750 sec
Multiset       Replay       Yes            20        none         41422 sec
Multiset       Replay       Yes            50        none         39031 sec
Multiset       Replay       Yes            70        none         38375 sec
Multiset       Replay       Yes            100       none         35406 sec
Multiset       Replay       No             –         refinement   77187 sec
Multiset       Replay       Yes            20        refinement   53641 sec
Multiset       Replay       Yes            50        refinement   50454 sec
Multiset       Replay       Yes            70        refinement   49094 sec
Multiset       Replay       Yes            100       refinement   48750 sec

Figure 18 – The performance results explained below. The first column is the test program
tested; the next shows whether the logging or the replaying was timed; the next indicates
whether the checkpoint mechanism was used; the next shows the maximum number of checkpoints
kept in memory at a single instance; the next shows the concurrency checkers attached to the
execution; and the last shows the execution time.
We performed several tests that turned out to be beneficial for the analysis of the
Vyrd+ tool. We first executed the logging facility and timed it. Then we started timing the
verification facility.
We first timed the verification module without attaching any concurrency checkers to
the tool, thereby timing the sole execution of the main Vyrd+ module: first without the
checkpointing mechanism, then with it, doing experiments while sequentially increasing the
maximum checkpoint storage size through 20, 50, 70 and 100 elements.
Then we timed the verification module with the Refinement Checker attached as a
concurrency checker. We followed the same methodology: without checkpointing, then with
checkpointing at maximum checkpoint sizes of 20, 50, 70 and 100.
The performance results show that checkpointing provides a significant performance
increase: we observed up to 20% when the checkpoint size was set to 100. As we increased the
maximum checkpoint size, the performance continually improved, as the graphs below show.
Figure 19 – Execution time in seconds (y axis) against the maximum checkpoint (CP) size
(x axis) when no concurrency checkers are attached.
Figure 20 – Execution time in seconds (y axis) against the maximum checkpoint (CP) size
(x axis) when the refinement concurrency checker is attached.
46
This is in accordance with our theoretical approach. The sole purpose of checkpointing
was to shift the CPU overhead to memory. A lot of overhead is incurred whenever a previous
state is backtracked to. As explained in the previous sections, there are two methods of
backtracking: rollback and checkpoint restore. A rollback restarts the log and replays it up
to the backtrack point, which is a very demanding process. A checkpoint restore, in contrast,
simply loads the stored memory state of the point we are backtracking to, which is not very
CPU demanding.
The reason why the performance increases as we increase the maximum checkpoint size is
also straightforward. Our algorithm uses a checkpoint restore whenever possible and falls
back to the rollback method otherwise. Since keeping checkpoint states consumes memory, we
constrain the maximum number of checkpoint states kept with a parameter; increasing this
parameter naturally leads to checkpoint restores being used more often instead of rollbacks.
Because of this shift in overhead, the CPU performance increases, just as we expected it
would.
Another observation comes from comparing the tests with no checkers attached to those
with the refinement checker attached. As the result table above shows, there is not a very
large performance difference between the two. This shows that multiple checkers can be
embedded in Vyrd+ without causing a very large performance overhead.
5.2.2 – Performance Results for Atomicity Checker
Test Program   Checkpointed   Atomic Block   Atomic/NonAtomic   CP Size   Checkers    Execution time
Multiset       No             No             –                  –         Atomicity   56530.0
Multiset       No             Yes            NonAtomic          –         Atomicity   82641.0
Multiset       No             Yes            Atomic             –         Atomicity   55360.0
Multiset       Yes            No             –                  20        Atomicity   39937.0
Multiset       Yes            No             –                  50        Atomicity   39719.0
Multiset       Yes            No             –                  70        Atomicity   38844.0
Multiset       Yes            No             –                  100       Atomicity   37922.0
Multiset       Yes            Yes            NonAtomic          20        Atomicity   82719.0
Multiset       Yes            Yes            Atomic             20        Atomicity   39688.0
Multiset       Yes            Yes            NonAtomic          50        Atomicity   79390.0
Multiset       Yes            Yes            Atomic             50        Atomicity   39297.0
Multiset       Yes            Yes            NonAtomic          70        Atomicity   77782.0
Multiset       Yes            Yes            Atomic             70        Atomicity   38625.0
Multiset       Yes            Yes            NonAtomic          100       Atomicity   77141.0
Multiset       Yes            Yes            Atomic             100       Atomicity   37844.0

Figure 21 – The performance results explained below. The first column is the test program
tested; the next indicates whether the checkpoint mechanism was used; the next shows whether
an atomic block exists, and the next whether the tested block is atomic or non-atomic; the
next shows the maximum number of checkpoints kept in memory at a single instance; the next
shows the concurrency checkers attached to the execution; and the last shows the execution
time.
In order to test the performance of the Atomicity Checker, the same test class was run
under different conditions. In our tests we tried to measure the effects of checkpointing, of
the checkpoint size, and of the atomicity property of the tested class. To visualize the
effect of checkpointing, we did experiments while sequentially increasing the maximum
checkpoint storage size through 20, 50, 70 and 100 elements.
The performance results show that checkpointing provides a significant performance
increase, as it does for the refinement checker. The results also show that the most
important factor in the performance of the atomicity checker is the status of the block:
non-atomic blocks cause a big performance loss. The reason is that once the Atomizer
algorithm fails, it continues to fail on each subsequent execution; this problem was fixed
with a boolean variable that stops the Atomizer check as soon as an atomicity violation
occurs. The performance results also show that there is almost no difference between
executing atomic blocks and having no block to test at all. The reason is that even when the
atomicity checker does not meet any beginCheck() call, it still executes the algorithm and
updates the thread, variable and lock information; it simply does not report anything. Only
when a thread enters an atomic block does the algorithm report.
Figure 22 – Performance results for the Atomicity Checker: execution time (y axis) against
the checkpointing size (x axis) for the non-atomic block, atomic block, and no-atomic-block
cases.
6. CONCLUSIONS
In this project, we developed a runtime concurrency checking tool in which we applied
the software verification methods of refinement checking and atomicity checking. We built
several mechanisms to ensure the modular execution of several verification mechanisms in
parallel under the roof of Vyrd+. Among the mechanisms explained in detail above are the
logging mechanism, the log tracing mechanism, the concurrency-checker interactions, the
refinement checker, the atomicity checker, the debugger, the managed JVM, and the
checkpointing and backtracking mechanisms. We developed these modules, put them together and
built their interactions with each other successfully. The experimental observations we made
on the tool show that Vyrd+ works successfully.
A difficulty in building the tool was overcoming certain problems caused by the
underlying design of Java PathFinder (JPF). As previously discussed in the background
section, JPF was designed to execute a given test program without any outside interference.
This design decision created some difficulties while developing the project, but we overcame
them. For example, we built the Managed VM, in which we abstracted the interaction with JPF
into our own mechanism.
Another challenging part of the project was understanding and fully interpreting the
source code of a very large project like JPF. Unfortunately, although JPF is an open-source
project, it is not very well documented or commented. There was no javadoc available for
Java PathFinder, so we needed to read and interpret most of the code ourselves. Debugging
executions of JPF, learning about the mechanisms of the actual JVM, and trial and error were
the most useful approaches to solving the problems we came across.
In this project we learnt to use a previously developed project to develop another one,
depending on some of its components. Another accomplishment was understanding and digesting
theoretical approaches such as refinement checking and atomicity checking, and building them
into a tool while coming up with some innovative solutions.
We also needed to come up with our own mechanisms and algorithms to solve some of the
problems we came across and to increase the performance of the tool; the checkpointing
mechanism is one example of such a process. Because traversing all possible execution states
of a test program is a very demanding process, we were seriously concerned about performance
issues, and we came up with solutions to such problems. The innovative part of the project
was large: at many points during the project we ran into a problem and had to come up with a
creative solution to make our way around the obstacle.
In conclusion, we gained a lot of experience in research, in creating and applying
solutions to problems, and in making software design decisions. We therefore think that
developing this project was very beneficial for both our research and our software
development skills.
7. HOW TO RUN
The following figure is a screenshot of Vyrd+. Although the interface can be understood
easily, we briefly explain the function of each button.
Print target information: Prints the information of the target class on the screen.
Run JPF offline: Initializes JPF, runs it and creates the log file.
Print log: Prints the log created by the log writer.
Replay execution: This button is not active.
Run VyrdMC offline: Initializes the checkers that are added and enabled in the configuration file.
Save output to file: Saves the output of the execution to a file.
Print internal log: Prints the internal log on the screen.
Debug: Starts debugging.
Exit: Exits the program.
Load Configuration: Loads a configuration file.
Edit Configuration: Edits a configuration file.
New Configuration: Creates a new configuration file.
Figure 23. User interface of Vyrd+
8. Acknowledgement
We would like to thank our advisors, Dr. Serdar Taşıran and Tayfun Elmas, for their
guidance, advice, and encouragement toward successful completion of this project.
9. REFERENCES
[1]
T. Elmas, S. Tasiran. VyrdMC: Driving Runtime Refinement Checking with Model
Checkers. Fifth Workshop on Runtime Verification (RV’05). The University of
Edinburgh, Scotland, UK. July 12, 2005.
[2]
Tayfun Elmas, Serdar Tasiran, Shaz Qadeer. VYRD: VerifYing Concurrent Programs
by Runtime Refinement-Violation Detection. ACM SIGPLAN 2005 Conference on
Programming Language Design and Implementation (PLDI), Chicago, Illinois, USA,
June 12-15, 2005.
[3]
P. C. Mehlitz, W. Visser and J. Penix. The JPF runtime verification System.
http://javapathfinder.sourceforge.net/JPF.pdf
[4]
Information about java virtual machine; definitions and history
http://java-virtual-machine.net/index.html
[5]
R. Simmons. Formal Verification of Autonomous Systems Overview. An overview for types of formal
verification of autonomous systems.
http://www.cs.cmu.edu/afs/cs/user/reids/www/verification/verification.html
[6]
T. Lindholm, F. Yellin. The Structure of the Java Virtual Machine. Chapter 3 of the Java Virtual
Machine Specification, Second Edition.
http://java.sun.com/docs/books/vmspec/2nd-edition/html/Overview.doc.html#
[7]
C. Flanagan and S. N. Freund. Atomizer: A dynamic atomicity checker for multithreaded programs. In
Proceedings of the ACM Symposium on the Principles of Programming Languages, 2004.
[8]
S. Savage, M. Burrows, G. Nelson, P. Sobalvarro, and T. E. Anderson. Eraser: A dynamic data race
detector for multi-threaded programs. ACM Transactions on Computer Systems, 15(4):391–411, 1997.
[9]
C. Flanagan, A. Qadeer and S. N. Freund. Atomicity for reliable Concurrent Software
http://www.soe.ucsc.edu/~cormac/talks/tutorial-pldi05-atomicity1a.ppt
[10]
S. Atlı, N. Dindar, T. Elmas, S. Tasiran. Vyrd+: A Toolbox for Runtime Verification of
Concurrent Software. Tool presentation paper.