On an affordable, static, component-based software verification system
Agustín Cernuda del Río, José Emilio Labra Gayo, Juan Manuel Cueva Lovelle
Abstract: The Itacio component model intends to bring a method of verifying software systems made up of components. This verification is done statically, without the need of executing (or even building) the program, and is based on the available knowledge about the involved components. The verification process is founded on a simple inference mechanism; Itacio-based tools can be easily built since they rely on widely available technologies and do not involve complex theories.
The model itself is deliberately simple and flexible; this method can be applied at different levels of abstraction, and to different facets of the software development process, without the need of a deep training in formal methods. This flexibility has been tested by applying Itacio to a variety of problems for which it had not been specifically designed.
Keywords: Software components, component model, technology transfer, verification, evolution, reuse.
I. Introduction
Building software upon components allows reducing development costs and shortening time-to-market. But the problem of verifying whether two components will interact properly is not completely solved.
Three main levels of interoperability between objects (and this may be extended to components) have been recognized [1]:
- The signature level: names and signatures of operations.
- The protocol level: relative order between exchanged messages and blocking conditions.
- The semantic level: "meanings" of operations.
Current component technologies (such as COM, CORBA, JavaBeans) usually solve cross-platform or low-level technical problems to communicate components. Automatic checking in such environments is usually restricted to the level of signature matching; the construction, evolution and maintenance of systems can lead to problems as active restrictions are unintentionally violated. Many software defects arise which are not caused by a malfunction of any particular component, but by the very combination of them, which (even indirectly) can break the working requirements of some component.
Unfortunately, system testing cannot guarantee that all such defects have been detected; however, many of them could be predicted if all knowledge available a priori were taken into account and properly analyzed and matched.
Analytic and formal techniques have been developed in order to describe and examine software. Many of these techniques are criticized for being overly complex in practice, and they are considered to require a remarkable training effort for the average developer. Part of the formal methods community recognizes this to be true to some extent [2], [3], [4], although they find the results worth the effort. Nevertheless, the software development industry is reluctant to try them, be that approach right or not, so technology transfer is problematic. Even where formal methods do not apply well for technical reasons (such as computability limits), the knowledge of the developers about their software would be invaluable. This knowledge is usually lost in documents and does not play any role in automatic verification.
(Authors' affiliation: Department of Computing Science, University of Oviedo, Spain.)
In this paper, we briefly describe a component- and knowledge-based, static verification model which has been conceived under strict requirements of accessibility and flexibility. The model is simple and general, so that it can be applied to very different scopes and problems. It is also easy to support with widely available and well-known technologies. These self-imposed restrictions intend to make it useful for the average developer without highly specialised training, and to make it useful for many different facets of software development, promoting easy adoption and maximizing the return on investment.
This model is contrasted with other possibly useful techniques. The prototype tools are briefly presented to support our claim of viability, and the experiences in applying the model to different problems are described; conclusions and future work are also proposed.
II. The Itacio Component Model
As said above, the main advantages of the Itacio component model [5] are that no execution of the program is needed for verification, and the influence of components (even indirect or side-effect influence) is easily taken into account without techniques like data flow analysis. Also, the specification system is fully modular and, in addition, it can be easily supported by a Constraint Logic Programming system. Finally, this model can be applied at different levels of abstraction. The model deliberately avoids binding the user to a specific semantic notion of component, so that a general verification framework can be applied to a very wide spectrum of problems.
A precise description of this model can be found in [6]. The central idea of the Itacio model is a flexible definition of a component. A component C is an entity which has a frontier F(C) and a set of restrictive expressions E(C). F(C) is a finite set whose elements are called connection points; these connection points can be sources (whose set is denoted by S(C)) or sinks (whose set is denoted by K(C)). Informally stated, sources carry information out of a component (e.g., a function call) and sinks introduce information into a component (e.g., a function's entry point).
Restrictive expressions are divided into two disjoint subsets. The set of requirements R(C) contains restrictive expressions that are Horn clauses (a special form of first-order logic predicates) over the sinks. The set of guarantees G(C) contains Horn clauses over both sinks and sources. In addition, there is a one-to-one correspondence between the sinks and the requirements (there is one requirement predicate associated to each sink, although this predicate can refer to more than one sink). Requirements do not refer to sources because this system intends to verify the composition of components, not the internal behaviour of a component; it is assumed that we have control over the behaviour of the component itself and do not need to restrict our own outputs. Maybe another component will (in its restrictions over its own inputs, or sinks).
A system Σ = {Ω, Γ, L} is a graph whose nodes (the set Ω of components) and edges (the set Γ of source/sink pairs) come together with a set L of auxiliary predicates called the library. In other words, a system is built by taking components and connecting sources with sinks, and adding some auxiliary predicates. The first requirement for a system (the so-called topological correctness) establishes that there will be no isolated connection points.
It can be seen that the concepts involved in the Itacio model are fairly simple. In order to apply Itacio to some problem, all that is needed is to perform an instantiation process, matching these basic concepts to the elements of the problem domain. Then, the components (whatever they may be) can be described and the verification process can take place.
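The component and system definitions above, together with the knowledge-base construction described in Section III, can be condensed into a minimal sketch. This is an illustrative re-encoding in Python, not the actual implementation: the real prototypes emit Horn clauses for the ECLiPSe CLP engine, and all names here (`Component`, `build_kb`, the `rate(...)` predicate) are ours.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class Component:
    name: str
    sources: Set[str]                                  # S(C)
    sinks: Set[str]                                    # K(C)
    requirements: Dict[str, str] = field(default_factory=dict)  # R(C): one per sink
    guarantees: List[str] = field(default_factory=list)         # G(C)

def isolated_points(components, connections):
    """Topological correctness: no isolated connection points.
    Returns the points not touched by any (source, sink) edge."""
    used = {p for edge in connections for p in edge}
    points = set()
    for c in components:
        points |= c.sources | c.sinks
    return points - used

def build_kb(components, connections):
    """Build K(Sigma) from the raw base K_r(Sigma): concatenate every
    restrictive expression, then rewrite each connected source/sink
    pair into one fresh, shared atom name."""
    rename = {}
    for n, (src, snk) in enumerate(sorted(connections)):
        rename[src] = rename[snk] = f"atom{n}"   # fresh, unique atom
    def subst(expr):
        for old, new in rename.items():
            expr = expr.replace(old, new)
        return expr
    kb = []
    for c in components:
        for sink, req in c.requirements.items():
            # the link between each requirement and its sink is preserved
            kb.append(("req", rename.get(sink, sink), subst(req)))
        for g in c.guarantees:
            kb.append(("gua", None, subst(g)))
    return kb

# Two toy components: a generator source feeding a filter sink.
gen  = Component("gen",  {"gen.out"}, set(),
                 guarantees=["rate(gen.out, 44100)"])
filt = Component("filt", set(), {"filt.in"},
                 requirements={"filt.in": "rate(filt.in, 44100)"})
conn = {("gen.out", "filt.in")}
assert isolated_points([gen, filt], conn) == set()
kb = build_kb([gen, filt], conn)
# Both expressions now mention the same fresh atom, so an inference
# engine can discharge the requirement against the guarantee.
```

Because the substitution is purely textual, the knowledge base keeps being modular per component until the very last step, which mirrors why tool support is cheap to build.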
III. The Verification Model
We define the raw knowledge base K_r(Σ) = {p | (p ∈ R(C) ∨ p ∈ G(C)), C ∈ Ω}; it is the concatenation of all the restrictive expressions of the components of the system. From its definition, it can be seen that K_r(Σ) does not depend on the connection set Γ, so it does not contain any information about connections. The knowledge base K(Σ) is built by taking K_r(Σ) and following an iterative substitution process (detailed in [6]) over all the source and sink names so that, if some s_i ∈ S(C_m) and k_j ∈ K(C_n) are connected, a new, unique atom name a is generated, and all the occurrences of either s_i or k_j in K_r(Σ) are substituted by a. The knowledge base K(Σ) resulting from this process implicitly contains the information about the topology of the system. The building process also ensures that the relationship between each resulting requirement and its associated sink is not lost.
Finally, the verification model relies on an inference process over K(Σ). The system is considered to be correct if each and every requirement of K(Σ) is proven to be true. Also, since each requirement in K(Σ) is related to one sink, if some requirement is not fulfilled it is possible to know exactly which connection point is failing and why.

IV. Implementation and Use
This schema can be easily implemented. First-order logic predicates in the form of Horn clauses are the chosen carrier for restrictive expressions; the computerized handling of Horn clauses was achieved long ago in Logic Programming. For a basic implementation, a Prolog-based inference engine could suffice; but Constraint Logic Programming (CLP for short) is a much more powerful tool. It allows the developer to face complex problems that are beyond the scope of Prolog. For instance, the unification process between terms can easily handle ranges and domains, and the specifier can use sophisticated problem-solving libraries and algorithms (such as linear programming). In our case, the ECLiPSe CLP system was used.
Implementing the generation of a system's knowledge base is also simple; it is almost a matter of proper string substitution and atom name generation. So Itacio-based tools should be easy to develop.
Our experience developing prototypes supports that claim. A prototype for this model, Itacio-SEDA, was implemented as an extension of a proprietary CASE diagramming tool developed by Seresco [5]. After that, a web-based Java/XML/VML prototype was built [6]. The third prototype, Itacio-XDB, makes use of VB, XML, ASP, etc. for the GUI, but information about components and systems is stored in an ODBC database. ECLiPSe is the inference engine in all cases.
Regarding use, thanks to CLP, writing restrictive expressions becomes relatively easy. Logic Programming and related techniques (such as Artificial Intelligence) are usually well covered in computing curricula at universities [7], and many developers can become fluent with a limited training effort (other authors agree on the potential role of Logic Programming in this field [8]). The underlying component model, based on components, sources and sinks, is clearly simple and easy to understand.

V. Differences With Other Existing Techniques
There are many techniques related with some of the problems stated here; in our opinion, none of them targets the same set of problems as Itacio. Here we will offer a very brief survey of them and remark where their respective goals differ from ours.

A. Architectural Styles and ADLs
Several research groups have developed classifications of architectural styles (such as pipe-and-filter, object-oriented, main program/subroutines, event-based, and the like) and studied the properties of systems combining different styles. Architectural Description Languages, or ADLs for short (such as Wright, Darwin, Rapide, UniCon), offer notations to describe systems and their architectural components/connectors structure. Some ADLs also offer verification models.
There is a partial coincidence in the goals, but ADLs and architectural styles are conceived mostly for a high abstraction level: large, coarse-grained systems [9]. Also, they are not a general-purpose technique.
B. Formal Methods
There are many different formal methods for different purposes. Basically, formal methods offer a rigorous notation for specifying or describing software systems (such as the Z notation; VDM or B delve also into the development process). These methods aim at proving that a specific implementation is a refined version of the original specification. Formal Description Techniques such as SDL, Estelle or LOTOS have been successfully applied to protocol specification.
Technology transfer issues have been described in Section I. Using formal notations for simple purposes may be costly. Also, component modelling is a recent addition to most formal methods [3], and tool support for easy automatic verification is limited.
C. Contracts
There are several approaches to software contracts, but probably the most influential one has been Bertrand Meyer's. This one is also partially useful for our purposes, but these contracts are usually built as executable statements; static analysis may be difficult (or impossible) to achieve, and it is usually not intended.
D. Component Platforms
It has already been pointed out in this paper that commercial component platforms offer a verification system usually limited to the signature level. In addition, they do not offer a general method for other notions of component.
E. Process Specification
Several kinds of process algebra have been developed which are the foundation of interesting initiatives in the verification of protocols, such as CSP and derivatives, or the π-calculus. These methods are good at formally describing and verifying processes, but they are targeted mostly to this specific problem.
F. Static Analysis and Abstract Interpretation
Obtaining conclusions about the behaviour of a program without executing it (program analysis) can be a very difficult task; the potential state space that a program can reach is huge, and there are even absolute limits to computability. Abstract interpretation leverages the fact that the semantics of a program can be more or less precise, depending on the chosen observation level. It is possible, then, to observe a program under a less precise semantics that in turn is computable.
Although these techniques have been around for more than two decades, and have been incorporated into code optimization and error detection in compilers, they have not been widely applied to the verification of general-purpose programs. Their underlying theoretical model requires a certain mathematical background to be understood. Also, the developed analysis algorithms solve specific problems, and no general-purpose verification method is offered.
G. OCL, Catalysis and Other Analysis Notations
These techniques are well suited for describing and modelling components and as analysis tools, but their goal is not the automatic verification process pursued by Itacio. The list of benefits derived from using Catalysis, for instance [10, p. 40], does not show much connection with many of our goals.
VI. Case Studies
A. Microcomponents
The first level at which Itacio was applied was microcomponents. The worst problems in software development, involving budget overruns or project failures, are considered to lie at high abstraction levels (requirements management, analysis or architectural design); development tasks at lower abstraction levels are usually left to the programmer's ability.
However, small errors can also have an enormous impact on quality. They are easier to correct once detected than requirement mishandlings, but they can be more difficult to detect and remain unnoticed until production time. And one such defect can bring down an entire system.
The use of a component model can be a step towards software that is correct by construction. Very small components (such as language operators or library functions, with their associated restrictive expressions) could be used to build a program, so that the divide operator would statically require a non-zero denominator, and so on. If there is no guarantee that these requirements are fulfilled, the verification system would pinpoint the error (without the need for these "potential" errors to become "real" at runtime in some test case).
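The divide-operator example can be made concrete with a small sketch. This is a hypothetical illustration in Python of the static check (the prototype expressed it as Horn clauses over value ranges): a guarantee supplies the range an input can take, and the requirement of the `/` microcomponent rejects any denominator range containing zero, before any code runs.

```python
# Hypothetical sketch of a microcomponent requirement: the divide
# operator statically requires a denominator range excluding zero.

def check_divide(num_range, den_range):
    """Requirement of the '/' microcomponent: the guaranteed range of
    the denominator input must not contain zero."""
    lo, hi = den_range
    if lo <= 0 <= hi:
        return f"error: denominator range [{lo}, {hi}] may be zero"
    return "ok"

# A literal 5 guarantees the range [5, 5] and is accepted; an input
# only guaranteed to lie in [0, 100] is rejected statically.
assert check_divide((1, 1), (5, 5)) == "ok"
assert "error" in check_divide((1, 1), (0, 100))
```

Note that the error surfaces without the "potential" division by zero ever becoming "real" in a test case, which is exactly the point made above.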
A small system based on these ideas was developed over the Itacio-SEDA prototype. It was able to generate small C programs that performed math calculations. If the working requirements of some microcomponent were not statically fulfilled, the system raised an error.
We soon noticed that for this schema to become fully functional, the code generation system had to be much better defined and developed, and this was beyond the scope of the Itacio project.
B. Reuse Contracts
The idea of contracts applied to software has been explored under dierent interpretations [11], [12], [13] (see
subsection V-C). For applying Itacio, the contract model
by Carine Lucas [12] was chosen, since it allowed for a static
analysis (the contents of this section are explained in detail
in [14]). In Lucas' contract model, a reuse contract is a set
of participants, each with a unique name, an acquaintance
clause and an interface. An acquaintance clause denotes
whether a participant "knows about" other. The interface
of a participant is basically a set of operation descriptions,
each with a name and a list of the operations it calls, so the
calling structure between several collaborating components
can be modelled. Lucas oers well-formedness criteria, so
that an individual contract can be morphologically veried.
A reuse contract can be modied over time; for instance,
participants or operations can be added or removed. Lucas
identied eight basic, atomic operators that can be combined in higher-order ones. Applying an operator to a contract produces a new contract. A chain of modications
to a contract can lead to errors or inconsistencies as the
system evolves. Lucas studied the potentially illegal combinations of operators so that invalid modications could
be avoided in advance.
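The evolution problem can be illustrated with a rough sketch of Lucas-style contracts. The encoding below is our own simplification (participant and operation names are invented): an operator that removes an operation can leave dangling calls, and a generic consistency check finds them without enumerating illegal operator combinations by hand.

```python
# Rough sketch (our naming) of reuse contracts: each participant has
# operations, and each operation lists the operations it calls.

def remove_operation(contract, participant, operation):
    """An operator in Lucas' sense: produce a NEW contract without
    the given operation (contracts are never mutated in place)."""
    new = {p: dict(ops) for p, ops in contract.items()}
    new[participant].pop(operation, None)
    return new

def dangling_calls(contract):
    """Inconsistencies: calls to operations that no longer exist."""
    known = {(p, op) for p, ops in contract.items() for op in ops}
    return {(p, op, callee)
            for p, ops in contract.items()
            for op, callees in ops.items()
            for callee in callees if callee not in known}

# participant -> {operation: [called (participant, operation) pairs]}
base = {"Document": {"save": [("Router", "route")], "load": []},
        "Router":   {"route": []}}
evolved = remove_operation(base, "Router", "route")

assert dangling_calls(base) == set()
assert dangling_calls(evolved) == {("Document", "save", ("Router", "route"))}
```

In the Itacio instantiation described next, this kind of check is what the inference engine performs over contract-level components, rather than a hand-written Python function.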
Itacio was applied to this schema to model the modifications of a system over time. The obvious instantiation would be to consider each participant as an Itacio component, and its operations as sources and sinks. But in this case the goal was to verify system evolution, not inner structure, so the notion of component is applied at a higher level of abstraction. Each contract (with all its participants "inside") is considered a component, which has only one source: the contract name, which offers the retrieval of any information about it. Operators have several sinks (parameters) and one source. To verify system evolution, all that is needed is to put a final verificator component. With this schema, inconsistencies can be detected by the inference process, without the need of explicitly listing illegal combinations as in Lucas' work.
As an example, a case of the fragile base class problem in a framework like MFC [15] can be modelled. The class library is modelled as a contract, and modifications made to the class library are described as several operations, as described in Fig. 1. If the next version of the class library alters its contract, some modification may become inconsistent, and this is pointed out by the system as an invalid connection. An advantage of this approach is that the original contract model can be arbitrarily enriched with additional restrictions about specific properties.
C. Remote Diagnostics of the Configuration of Personal Computers
The software present in personal computers (especially in the Windows operating systems family) is usually a combination of different programs, and many problems arise related to installation, configuration, versioning, etc. In some environments, the workstations must meet a certain standard configuration of applications and versions for the daily work to be done. Diagnosing these kinds of problems may involve operations too advanced for the average user (such as verifying library versions, registry values, configuration files, etc.). So support personnel has to move around and make the same verifications over and over again.

Fig. 1. Modification chain in MFC modelled as reuse contracts and operators. Both of them are in turn modelled as Itacio components (graph generated by prototype Itacio-XDB).

A component model like Itacio can be used to model the correct configuration of a machine; then, the verification process could go one step further than usual, and not be limited to theoretic knowledge about a system, but collect the real data. All that is needed is to link the inference statements to native code; in this case, a Windows DLL has done the work. In addition, this DLL plays the role of a proxy, so that it collects the predicate values in a remote PC over a network; the diagnosed PC does not have Itacio installed, but only a small stub program running (Fig. 2). Itacio can pinpoint the problem and offer explanations (Fig. 3).
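The stub/proxy scheme just described can be sketched in a few lines. This toy uses Python sockets and a made-up line protocol, whereas the real system used a Windows DLL proxy; the fact dictionary, query syntax and function names are all our own invention.

```python
# Sketch of the stub/proxy idea: the diagnosed PC runs only a tiny
# stub that answers predicate queries about the local configuration;
# the inference engine elsewhere treats the answers as predicate values.
import socket
import threading

FACTS = {"dll_version(mfc42)": "6.0"}   # pretend local inspection result

def stub(server_sock):
    """One-shot stub: answer a single predicate query and exit."""
    conn, _ = server_sock.accept()
    with conn:
        query = conn.recv(1024).decode().strip()
        conn.sendall((FACTS.get(query, "unknown") + "\n").encode())

def remote_predicate(host, port, query):
    """Proxy side: fetch a predicate value from the remote stub."""
    with socket.create_connection((host, port)) as s:
        s.sendall((query + "\n").encode())
        return s.makefile().readline().strip()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=stub, args=(srv,), daemon=True).start()

result = remote_predicate("127.0.0.1", port, "dll_version(mfc42)")
print(result)  # 6.0
```

The important property is that the verified machine needs no inference engine at all, only the stub; the knowledge base and the reasoning stay on the diagnosing side.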
D. WaveX: A Component-Based Real-Time Sound Processing System
The previous examples are cases of applying Itacio to problems which are usually not considered by traditional component-oriented environments. Of course, applying Itacio to a typical software component concept [16] is also possible.
The WaveX sound processing system (see Fig. 4) intends to bring an affordable, software-based real-time sound processing system. It is built as a set of independent components, implemented as Windows DLLs with a standard interface; each DLL processes a stream of digitized sound (for instance, adding distortion, echo, delay, reverberation, reducing noise and so on). The user of WaveX describes a topology: a set of component instances (each with its parameter set) and their interconnection scheme. WaveX loads this specification and creates the DLL structure,
Fig. 2. TCP-based remote diagnostics. The Itacio component (in this case a model of the MFC runtime library) has a predicate about its version, which is verified remotely on a real DLL in a real (remote) PC.
Fig. 3. A PC diagnostics to verify that a certain machine follows the standard software configuration reveals that there are potential interoperability problems with the PostScript viewer. The system signals it with a big square on the offending connection point. Clicking on it, the system gives more detailed explanations (Itacio-XDB screen capture, detail).
Fig. 4. The WaveX sound processing system in action.
Fig. 5. Itacio prototype working on a WaveX model. The system simply inverts left and right channels of a stereo signal, but the recorder is recording in mono mode. In addition, it produces a 22050 Hz signal, whereas the player expects a 44100 Hz signal.
putting it to work, so the user can use a personal computer as a real-time digital sound processor (built-in ones
are usually expensive).
With improper parameters or topologies, the resulting processor can have many potential inconsistencies able to bring down the system. Itacio has been successfully applied to modeling the WaveX components and detecting such inconsistencies (Fig. 5, [17]).
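The kind of mismatch shown in Fig. 5 can be expressed as a simple format check. The component descriptions below are hypothetical (the actual WaveX restrictions were Horn clauses over sources and sinks): each component guarantees the format of its output stream and requires a format on its input, and mismatches are reported before any DLL is loaded.

```python
# Sketch of a WaveX-style static check on stream formats.

def verify_links(links):
    """links: (guaranteed_format, required_format, where) triples.
    Returns a human-readable error per mismatched property."""
    errors = []
    for out_fmt, in_fmt, where in links:
        for key in in_fmt:
            if out_fmt.get(key) != in_fmt[key]:
                errors.append(
                    f"{where}: {key} {out_fmt.get(key)} != {in_fmt[key]}")
    return errors

recorder_out = {"rate": 22050, "channels": 1}   # mono, 22050 Hz
player_in    = {"rate": 44100, "channels": 2}   # expects stereo, 44100 Hz
errs = verify_links([(recorder_out, player_in, "recorder->player")])
assert len(errs) == 2   # both the rate and the channel count mismatch
```

As in the divide-operator case, the point is that the inconsistency is caught from the descriptions alone, before the real-time processor is ever assembled.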
E. A Component Reliability Model by Hamlet et al.
One of the interesting properties to study about a component system is its reliability. Hamlet, Woit and Mason [18] have proposed an underlying theory for the technical quality information to appear on a software component data sheet, so as to enable the designer to make reliability calculations. These data are statistical, obtained by random testing. The reliability data are not directly useful, since the operational profile in which the component will be used may differ significantly from the testing profile; so Hamlet et al. introduce a profile mapping for computing the reliability based on the actual operational profile at which the component will perform.
Itacio was also applied to this model, in order to test whether reliability requirements could be incorporated into the verification process. The result was successful, although the work by Hamlet et al. is still in progress.
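The profile-mapping idea admits a small numerical sketch. This is our simplification of the scheme in [18], with invented numbers: failure rates measured per input subdomain under random testing are re-weighted by the operational profile of the system that will actually use the component.

```python
# Simplified sketch of profile mapping: per-subdomain failure rates
# from testing, re-weighted by the operational usage profile.

def mapped_failure_rate(subdomain_rates, operational_profile):
    """Expected failure rate under the operational profile."""
    return sum(operational_profile[d] * r
               for d, r in subdomain_rates.items())

# Random testing found the "large inputs" subdomain far less reliable...
rates   = {"small": 0.001, "large": 0.05}
# ...and this particular system uses large inputs 80% of the time.
profile = {"small": 0.2, "large": 0.8}
print(mapped_failure_rate(rates, profile))  # ~ 0.0402
```

A requirement such as "mapped failure rate below 0.01" can then be attached to a sink and checked by the same inference process as any other restrictive expression.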
F. A Protocol Compatibility Model by Yellin and Strom
Throughout this section, several facets of the verification of component compatibility (mostly in a functional sense) have been approached. Protocol compatibility verification is also possible by applying Itacio.
There are several protocol modeling techniques (see Section V-E); we have chosen to work on a model due to Yellin and Strom [19]. This model is, in our opinion, simpler and easier to understand than others, so it fits our technology transfer goals better.
Yellin and Strom present:
- A way of describing protocols (and testing their compatibility).
- A method for automatically creating adaptors when protocols do not match.
We are only interested in the first goal. Basically, Yellin and Strom describe protocols as finite state machines. Each state transition is bound to a message, sent or received. Their model is deterministic (two transitions are not allowed to begin on the same state and be bound to the same message) and it has synchronous semantics, since reasoning about asynchronous systems is more problematic.
The dynamic behaviour of a component can then be modeled as a protocol. If two components must interoperate, problems may arise involving protocol compatibility.

Fig. 6. Protocols for a file and a file reader. It may be difficult to tell whether they are compatible.
Unspecified receptions occur when a component is in a state in which it receives a message from its collaboration mate for which it has no transition. Deadlocks occur when computation will not continue, since both components expect a message that the other party cannot send. Two protocols are compatible if the potential state combinations that they can reach are never affected by unspecified receptions or deadlocks. Yellin and Strom offer a simple algorithm that, given two protocol specifications, computes the problematic state pairs.
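The compatibility check can be sketched as an exploration of the reachable state pairs of the two machines. This is an illustrative encoding in the spirit of Yellin and Strom, not their algorithm verbatim: the transition-table format, the `'!'`/`'?'` markers for send/receive, and the state names are all our own.

```python
# Sketch of protocol compatibility: explore reachable state pairs of
# two synchronous FSMs; flag unspecified receptions and deadlocks.

def problematic_pairs(p, q, start):
    """p, q map state -> {('!'|'?', msg): next_state}.
    '!' marks a sent message, '?' a received one."""
    bad, seen, todo = set(), set(), [start]
    while todo:
        pair = todo.pop()
        if pair in seen:
            continue
        seen.add(pair)
        sp, sq = pair
        moves, errors = [], []
        for (kind, msg), nxt in p[sp].items():      # messages sent by p
            if kind == '!':
                tgt = q[sq].get(('?', msg))
                if tgt is None:
                    errors.append((pair, "unspecified reception", msg))
                else:
                    moves.append((nxt, tgt))
        for (kind, msg), nxt in q[sq].items():      # messages sent by q
            if kind == '!':
                tgt = p[sp].get(('?', msg))
                if tgt is None:
                    errors.append((pair, "unspecified reception", msg))
                else:
                    moves.append((tgt, nxt))
        # transitions exist but nobody can make progress: deadlock
        if not moves and not errors and (p[sp] or q[sq]):
            errors.append((pair, "deadlock", None))
        bad.update(errors)
        todo.extend(moves)
    return bad

# 'file' expects an open before any read; 'badFileReader' reads at once.
file_p = {"s0": {('?', 'open'): "s1"}, "s1": {('?', 'read'): "s1"}}
reader = {"t0": {('!', 'read'): "t0"}}
bad = problematic_pairs(file_p, reader, ("s0", "t0"))
# flags state pair ('s0', 't0'): unspecified reception of 'read'
```

An empty result set means the two protocols are compatible; each flagged pair identifies exactly which combined state fails and why, which matches the error-reporting style Itacio aims for.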
The instantiation of Itacio used here is similar to that of subsection VI-B. Sources and sinks of the component could be identified with sent and received messages for verifying individual message requirements, but there will be a source in the component with information about the whole protocol. A component that models the collaboration can verify protocol compatibility. The algorithm for this verification is implemented as part of a library and appended to the system. As an example, the protocols in Fig. 6 and Fig. 8 are verified in Fig. 7.

Fig. 7. Verification of a collaboration between file and file reader components. badFileReader is not compatible with file, as shown by the Itacio-XDB prototype (screen capture).
Fig. 8. Protocol for a file reader that does not open the file.

VII. Conclusions and Future Work
In this paper, a component model oriented to verification has been presented. The main goal of this model is a static, automatic verification of a component assembly by matching and putting in relation all the information available about the components, but always considering the technology transfer problem. It has been designed to be easy to understand, easy to apply and easy to develop tool support for (so as to be readily available), and also to be useful in very different scenarios (so as to maximize return on investment). These design guidelines have led us to a constant effort to reduce the number of concepts involved to a minimum, to choose well-known computing resources and techniques, and to prune the complexity wherever it was possible.
In order to test our assumptions, we have developed several prototypes making use of common technological resources, and this process has proved the development of tool support to be a clearly achievable goal. Also, we have challenged the model to be used in very different, unrelated, unplanned problems of component verification, including a spectrum of functional issues and also protocol compatibility. In every attempt, the model behaved as expected. In the case of microcomponents, some more work in the field of code generation should be done before making a real test of Itacio, but this comes as no surprise, since the use of microcomponents involves a deep change in the way code is built, and goes way beyond the scope of Itacio.
This approach has some problems, also. Integrating restrictive expressions from several parties may be problematic (because of inconsistent naming and reasoning schemes). The design of efficient and consistent knowledge structures is not a trivial task.
Another issue is that the restrictive expressions may describe incorrectly how a component behaves; the real code may not match them. This, however, is a deliberate trade-off; the main goal of Itacio is to avoid the interoperability problems that can be predicted with the available knowledge. With or without Itacio, it is possible that a program does not match its specifications; if this is not acceptable, other techniques (maybe formal methods) must be used against that particular aspect of the problem.
In the future, the Itacio project could approach the following issues:
- Knowledge engineering. Regarding knowledge consistency and efficiency, incorporating knowledge engineering techniques into the model itself could be very useful.
- Correct restrictive expressions. Maybe relating Itacio to program derivation techniques, so that restrictive expressions are proven to match the component they represent.
- Tool development. Of course, integrating this model with the development process and developing production tools is an important step.
- Semi-automatic design. If the topological correctness rule is relaxed, Itacio could handle unfinished systems, with unconnected components; the system could infer what restrictive expressions must be fulfilled, suggesting a description of the missing component or even selecting it from a repository.

References
[1] Antonio Vallecillo, Juan Hernández, and José M. Troya, "WOI'00: New issues in object interoperability," in ECOOP'2000 Workshop Reader, pp. 256-269, Springer-Verlag, LNCS No. 1964, 2000.
[2] Jonathan P. Bowen and Michael G. Hinchey, "Ten commandments of formal methods," IEEE Computer, vol. 28, no. 4, pp. 56-63, 1995.
[3] Edmund M. Clarke, Jeannette M. Wing, et al., "Formal methods: state of the art and future directions," ACM Computing Surveys, vol. 28, no. 4, pp. 626-643, 1996.
[4] David Lorge Parnas, "Mathematical methods: What we need and don't need," IEEE Computer, vol. 29, no. 4, pp. 28-29, Apr. 1996.
[5] Agustín Cernuda del Río, José Emilio Labra Gayo, and Juan Manuel Cueva Lovelle, "Itacio: A component model for verifying software at construction time," in Third International Workshop on Component-Based Software Engineering, 22nd International Conference on Software Engineering, Limerick, Ireland, June 2000.
[6] Agustín Cernuda del Río, José Emilio Labra Gayo, and Juan Manuel Cueva Lovelle, "A model for integrating knowledge into component-based software development," in Proceedings 4th International ICSC Symposium on Soft Computing and Intelligent Systems for Industry, Paisley, Scotland, June 2001, ICSC Academic Press.
[7] Allen B. Tucker, Bruce H. Barnes, Robert M. Aiken, Keith Barker, Kim B. Bruce, J. Thomas Cain, Susan E. Conry, Gerald L. Engel, Richard G. Epstein, Doris K. Lidtke, Michael C. Mulder, Jean B. Rogers, Eugene H. Spafford, and A. Joe Turner, Computing Curricula 1991, Report of the ACM/IEEE-CS Joint Curriculum Task Force, ACM Press, New York, 1991.
[8] Kung-Kiu Lau, "The role of logic programming in next-generation component-based software development," in Proceedings of the Workshop on Logic Programming and Software Engineering, Gopal Gupta and I. V. Ramakrishnan, Eds., London, July 2000.
[9] Ahmed Abd-Allah and Barry Boehm, "Reasoning about the composition of heterogeneous architectures," Tech. Rep. USC-CSE-95-503, Center for Software Engineering, Computer Science Department, University of Southern California, 1995.
[10] Desmond Francis D'Souza and Alan Cameron Wills, Objects, Components, and Frameworks with UML: The Catalysis Approach, Object Technology Series, Addison-Wesley, 1999.
[11] Ian Holland, The Design and Representation of Object-Oriented Components, Ph.D. thesis, College of Computer Science, Northeastern University, 1992.
[12] Carine Lucas, Documenting Reuse and Evolution with Reuse Contracts, Ph.D. thesis, Vrije Universiteit Brussel, Belgium, Sept. 1997.
[13] Bertrand Meyer, Object-Oriented Software Construction, Prentice-Hall, 2nd edition, 1997.
[14] Agustín Cernuda del Río, José Emilio Labra Gayo, and Juan Manuel Cueva Lovelle, "Verifying reuse contracts with a component model," in VI Jornadas de Ingeniería del Software y Bases de Datos, Oscar Díaz, Arantza Illarramendi, and Mario Piattini, Eds., Almagro (Ciudad Real), Spain, Nov. 2001, Universidad de Castilla-La Mancha, pp. 405-418, Grupo Alarcos.
[15] Jeff Prosise, Programming Windows with MFC, Microsoft Press, 2nd edition, May 1999.
[16] Clemens Szyperski, Component Software: Beyond Object-Oriented Programming, ACM Press and Addison-Wesley, New York, NY, 1998.
[17] Agustín Cernuda del Río, José Emilio Labra Gayo, and Juan Manuel Cueva Lovelle, "Applying the Itacio verification model to a component-based real-time sound processing system," in Workshop on Constraint Logic Programming and Software Engineering (CLPSE), Seventeenth International Conference on Logic Programming, Paphos, Cyprus, Dec. 2001.
[18] Dick Hamlet, Dave Mason, and Denise Woit, "Theory of software component reliability," in 23rd International Conference on Software Engineering (ICSE 2001), Toronto, Canada, May 2001.
[19] Daniel M. Yellin and Robert E. Strom, "Protocol specifications and component adaptors," ACM Transactions on Programming Languages and Systems, vol. 19, no. 2, pp. 292-333, Mar. 1997.