Clocks, Dice and Processes
Taolue Chen
Copyright © 2009, Taolue Chen, Amsterdam. All rights reserved.
ISBN 978-90-8659-339-2
IPA dissertation series 2009-17
Typeset by LaTeX 2ε
Printed by Ponsen & Looijen B.V., Wageningen
Cover designed by Tingting Han, images © 2009 Creative kids

Promotion committee:
Prof. dr. W.J. Fokkink (promotor), Vrije Universiteit Amsterdam
Prof. dr. J.C. van de Pol (promotor), Universiteit Twente
Prof. dr. L. Aceto, Reykjavík University
Prof. dr. J.A. Bergstra, Universiteit van Amsterdam
Prof. dr. J.-P. Katoen, RWTH Aachen University
Prof. dr. J.W. Klop, Vrije Universiteit Amsterdam
Dr. S.P. Luttik, Technische Universiteit Eindhoven
Dr. M.I.A. Stoelinga, Universiteit Twente
The work reported in this dissertation has been carried out at the Centrum
voor Wiskunde en Informatica (CWI) in Amsterdam, under the auspices of
the Instituut voor Programmatuurkunde en Algoritmiek (IPA). The research
has been supported by the BRICKS project (Basic Research in Informatics for
Creating the Knowledge Society) funded by the Besluit Subsidies Investeringen
Kennisinfrastructuur (BSIK).
VRIJE UNIVERSITEIT
Clocks, Dice and Processes
ACADEMISCH PROEFSCHRIFT
ter verkrijging van de graad van doctor aan
de Vrije Universiteit Amsterdam,
op gezag van de rector magnificus
prof.dr. L.M. Bouter,
in het openbaar te verdedigen
ten overstaan van de promotiecommissie
van de faculteit der Exacte Wetenschappen
op maandag 21 september 2009 om 13:45 uur
in de aula van de universiteit,
De Boelelaan 1105
door
Taolue Chen
geboren te Taizhou, Jiangsu, China
promotoren:
prof.dr. W.J. Fokkink
prof.dr. J.C. van de Pol
Contents

Preface

1 Introduction
  1.1 Axiomatizability of Process Algebras
  1.2 Verification of Probabilistic Real-time Systems
  1.3 Overview of the Dissertation
    1.3.1 Part I: Axiomatizability of Process Algebras
    1.3.2 Part II: Verification of Probabilistic Real-time Systems
  1.4 Origins of the Chapters and Credits
  1.5 Suggested Way of Reading

2 Preliminaries
  2.1 Background for Part I
    2.1.1 Labeled Transition Systems
    2.1.2 The Linear Time – Branching Time Spectrum
    2.1.3 Universal Algebra and Equational Logic
    2.1.4 Process Algebras BCCSP and BCCS
    2.1.5 Two Proof Techniques
  2.2 Background for Part II
    2.2.1 Preliminaries for Probability Theory
    2.2.2 Discrete-time Markov Chains
    2.2.3 Timed Automata

I Axiomatizability of Process Algebras

3 Meta-theories for Axiomatizability
  3.1 Introduction
  3.2 From Preorder to Equivalence
    3.2.1 An Algorithm for Producing Equational Axiomatizations
    3.2.2 Correctness Proof of the Algorithm
    3.2.3 Applying the Algorithm to Weak Semantics
    3.2.4 A Generalization to Infinite Processes
    3.2.5 Concluding Remark
  3.3 From Concrete to Weak Semantics
    3.3.1 Applications
  3.4 Inverted Substitutions
  3.5 Related and Future Work

4 On Finite Alphabets and Infinite Bases
  4.1 Introduction
  4.2 Basic Facts
  4.3 Failures
  4.4 Failure Traces
  4.5 From Ready Pairs to Possible Worlds
    4.5.1 Cover Equations
    4.5.2 Cover Equations a1 Xn + Θn ≈ Θn for n ≥ |A|
  4.6 Simulation
  4.7 Completed Simulation
  4.8 Ready Simulation
  4.9 Conclusion

5 Impossible Futures
  5.1 Introduction
  5.2 Ground-Completeness
    5.2.1 Concrete Impossible Futures Preorder
    5.2.2 Weak Impossible Futures Preorder
    5.2.3 Weak Impossible Futures Equivalence
    5.2.4 Concrete Impossible Futures Equivalence
  5.3 ω-Completeness
    5.3.1 Infinite Alphabet
    5.3.2 Finite Alphabet
  5.4 n-Nested Impossible Futures
  5.5 Conclusion

6 Priority
  6.1 Introduction
  6.2 Preliminaries
    6.2.1 The Language BCCSPΘ
  6.3 |Act| < ∞
  6.4 |Act| = ∞
  6.5 Axiomatizing Priority over an Infinite Action Set, Conditionally
    6.5.1 A Negative Result
    6.5.2 Positive Results
  6.6 Conclusion

II Verification of Probabilistic Real-time Systems

7 Model Checking of CTMCs Against DTA Specifications
  7.1 Introduction
  7.2 Preliminaries
    7.2.1 Continuous-time Markov Chains
    7.2.2 Deterministic Timed Automata
    7.2.3 Piecewise-deterministic Markov Processes
  7.3 Model Checking DTA Specifications
    7.3.1 Deterministic Markovian Timed Automata
    7.3.2 Product DMTAs
    7.3.3 Region Construction for DMTA
    7.3.4 Characterizing Reachability Probabilities
    7.3.5 Approximating Reachability Probabilities
  7.4 Single-clock DTA Specifications
  7.5 Conclusion

8 Probabilistic Time-abstracting Bisimulations for PTA
  8.1 Introduction
  8.2 Preliminaries
    8.2.1 Probabilistic Timed Automata
    8.2.2 Probabilistic Timed Structures
    8.2.3 Obtaining a PTS from a PTA
  8.3 Probabilistic Time-abstracting Bisimulations
    8.3.1 Region Equivalences
  8.4 Minimization of PTA
    8.4.1 Partition Refinement
    8.4.2 Bisimulation Quotienting Algorithm
  8.5 Verification of Branching-time Properties
  8.6 Conclusion

References

Summary

Nederlandse Samenvatting

Curriculum Vitae
Preface
This dissertation can be viewed as an end point of my journey of being at CWI as an onderzoeker in opleiding (researcher in training).
First, I would like to offer some very general remarks regarding my scientific career and the contents of the dissertation. The decision to work on it four years ago was driven mostly by my strong interest in mathematics and natural science in general. This interest has been persistent, starting when I was a kid in primary school. I cannot recall exactly which subject held a great fascination for me at that time, but what I do realize in hindsight is that it has shaped my life. One witness might be that, throughout my education, it was always the mathematics-oriented subjects that allured me in the end. Whether this is a gift or a curse is left as an open problem. My conjecture (or hope) is that it is a gift.
The particular subjects in this dissertation are of a purely theoretical nature, although I hope (and believe) that at least part of them can somehow be applied in practice. I study axiomatization problems of process algebras because of the beauty and elegance they bring me, while the work on probabilistic systems partially satisfies my long-term quest into the nature of randomness.
However, on top of these reasons I have to admit that an important one is that, fortunately, they fall within the scope of my expertise. The studies are largely problem-driven, on which I would also like to comment. It is well recognized that there are two main axes to mathematical research: (1) refining and building upon existing mathematical theories, e.g. trying to prove or disprove remaining conjectures in well-explored branches of mathematics (Andrew Wiles's proof of Fermat's Last Theorem is a very typical case of such efforts); and (2) developing mathematical theories for new areas of interest. In my view, mathematicians of the second type are a little more appreciated in the community. This might be biased, but it seems true at least in the theoretical computer science community. The list of winners of the Turing Award, which is generally recognized as the “Nobel Prize of computing”, serves as strong support for my opinion. I fully understand this situation, and I hold contributions of the second type in the highest respect. Nevertheless, I found that being a mathematician of the first type gratifies my need for self-actualization more. In other words, my main scientific interest lies in trying to solve open questions, rather than in developing new theories. This tendency determines the style of the current dissertation, and explains the diversity of topics it contains. One might criticize that this leads to fragmented accounts instead of a coherent story of the
PhD study. This is probably true; however, I am proud of the fact that the
problems solved here are in general of a complicated nature, which is the virtue
I cherish most.
Acknowledgements. This dissertation would not have been accomplished
without the help (input) of many people.
First of all, I would like to thank my supervisor and promotor Wan Fokkink,
who directed my research over the last four years. His rigorous research attitude and profound professional skills have deeply impressed me, and his high standards of integrity and easygoing manner have influenced me in ways that will surely last throughout my whole life. On the one hand, Wan worked closely with me, always willing to discuss my work and ready to give enlightening suggestions. On the other hand, he gave me the greatest liberty to follow my own research interests. It is an honor to have spent such precious time with him, and I am deeply indebted to him.
I would like to extend my sincere thanks to my other promotor, Jaco van de Pol. Jaco was the head of our group until the end of 2007, when he became a full professor at the University of Twente. I very much enjoyed the fruitful discussions in his office and the pleasant conversations during lunch. His excellent ideas, brightness, enthusiasm and great care, both personally and professionally, remain an enjoyable memory of my PhD study.
I am very grateful to all my co-authors for their productive and pleasant cooperation. Apart from Wan and Jaco, they are Luca Aceto, Jasper Berendsen,
Rob van Glabbeek, Tingting Han, Anna Ingólfsdóttir, David Jansen, Joost-Pieter Katoen, Bas Luttik, Alexandru Mereacre, Sumit Nain, Bas Ploeger,
Yanjing Wang and Tim Willemse. I enjoyed their contributions and the fruitful discussions a lot. Although I could not fit all the results here, they have
certainly influenced the state of my mind. I learned a lot from them, thanks!
Apart from my co-authors, many people also contributed to this dissertation
(or my PhD study in general) in different ways. In particular, for Chapter 3, the
second meta-theorem was inspired by a comment from an anonymous referee of
[CFvG09]. For Chapter 4, Luca Aceto and Anna Ingólfsdóttir provided stimulating discussions, and an anonymous referee for [CFLN08] suggested many improvements. For Chapter 5, the research was initiated by a question from Jos Baeten,
and Bas Luttik, Marc Voorhoeve and Yanjing Wang provided very useful feedback. For Chapter 6, Jaco van de Pol, Vincent van Oostrom and the other participants in the PAM seminar at CWI provided useful comments. For Chapter 7,
Boudewijn Haverkort, Jeremy Sproston, Mariëlle Stoelinga and an anonymous
referee of [CHKM09b] provided fruitful discussions and/or insightful comments.
For Chapter 8, Jeremy Sproston provided useful suggestions for improvements.
Moreover, for some other works (not presented here), many people, in particular, Eric Allender, Christel Baier, Patricia Bouyer, Krishnendu Chatterjee,
Flavio Corradini, Luca de Alfaro, Kousha Etessami, Jan Friso Groote, Marcin
Jurdziński, Orna Kupferman, Kim Larsen, Nicolas Markey, Gordon Plotkin and
Moshe Vardi provided inspiring comments and suggestions. I thank all of them!
I am grateful to the members of the reading committee, Luca Aceto, Joost-Pieter Katoen, Jan Willem Klop and Mariëlle Stoelinga for their extremely
careful reviews of the dissertation, constructive comments and inspiring questions; to Jan Bergstra and Bas Luttik as well for their willingness to take part
in the opposition.
Collective and individual thanks are also owed to all my (former) colleagues
and friends at CWI or VU, in particular, Bahareh Badban, Rena Bakhshi, Jens
Calamé, Susanne van Dam, Natalia Ioustinova, Bert Lisser, Bas Luttik, Simona
Orzan, Mohammad Torabi Dashti, Yaroslav Usenko, Yanjing Wang, Michael
Weber, Anton Wijs, Mike Zonsveld. I also wish to extend my gratitude to
other (Chinese and non-Chinese) friends, in particular, Chao Li, Xirong Li, Bo
Gao, Fangbin Liu, Huiye Ma, Qian Ma, Alexandru Mereacre, MohammadReza
Mousavi, Jun Pang, Bas Ploeger, Judi Romijn, Meng Sun, Mengxiao Wu, Ping
Yu, . . . (I have a truly long list which this place is too narrow to contain :-) I
enjoyed the great time I had with them and thank them for sharing many sides
of life with me, both scientifically and personally.
My family in China, especially my parents, deserve my endless thanks for
their unconditional love and support.
Last but not least, I reserve my greatest thanks to Tingting for everything
she did for me . . .
Taolue Chen
Amsterdam, April 2009
P.S. I would like to explain the cover of the dissertation briefly. Basically, it contains two cartoons, which illustrate two stories corresponding to the two parts of the dissertation. The hero of both is a mouse named “Process Algebra” (let us call him Mr. Process Algebra). He is originally from China, with the Chinese name “Shu Buqing” (meaning “countless” in English, “Shu” indicating the species in Chinese), and now he lives in the Netherlands. Of course, Mr. Process Algebra likes cheese very much¹ and he prefers a specific brand, “Axiomatization”. The first cartoon shows that, since Mr. Process Algebra has a good appetite for “Axiomatization” cheese, he always tries his best to seek axiomatizations, preferably finite ones. Unfortunately, this task is not very easy, since he has to go through a maze, which is quite complicated to him. Can he succeed? This is the problem I addressed in Part I of the dissertation. The second cartoon, on the back cover, shows that, YES, our cute mouse is enjoying his favorite cheese! He is very smart, since he managed to go through the maze within 12 min and 27 sec, with the help of a die by which he can make a random choice at the crossroads. So we can claim that “mouse |= P>0 (F<15min cheese)”. This suggests the contents of Part II of the dissertation.
¹ Scientific research reveals that mice do not have a particular interest in cheese, but let us follow the folklore here.
Chapter 1
Introduction
This dissertation summarizes a large part of my PhD study on concurrency
theory in the past four years. Concurrency theory, in general, aims to develop
mathematical accounts of behaviors of concurrent systems, which typically consist of a number of components, each of which evolves simultaneously with the
others, subject to (typically frequent) interaction amongst the components. In
practice, it is fair to say that concurrency theory is mainly concerned with the
modeling and verification of concurrent systems. This is the main objective of
the research field of formal methods, which serves as the general context of this
dissertation as well. In general, formal methods refer to a collection of notations
and techniques for describing and analyzing systems. A formal method typically
consists of a formalism to model a system, a specification language to express
the desired properties of the system, a formal semantics to interpret both the
system and the properties, and verification techniques to check whether these
properties are satisfied by the system.
To obtain a more down-to-earth feeling for what “concurrency theory” concentrates on, one of the best avenues is to go through the themes listed by WG1.8 “Concurrency Theory” of IFIP TC1 (see www.ru.is/luca/IFIPWG1.8/), which, in my opinion, encompass most aspects of concurrency theory and its applications. These themes
include (1) process algebras and calculi; (2) expressiveness of formalisms for
concurrency; (3) modal and temporal logics for concurrency and their extensions; (4) resource-sensitive approaches to concurrency and their developments;
(5) tools for verification and validation of concurrent systems; (6) reactive models for real-time and hybrid systems; (7) calculi and typing systems for mobile
processes and global computing; (8) stochastic and probabilistic models of concurrent processes; (9) behavioral relations for processes; and (10) decidability
and complexity issues in concurrency theory.
The current dissertation mostly concerns (1), (6), (8) and (9); it presents results obtained along two lines of research: the axiomatizability of process algebras and the verification of probabilistic real-time systems. This also explains the title, since the contents can be divided into two distinct parts: “clocks”
and “dice”, which form a metaphor for time and randomness; and “processes”,
which reminds of process algebra specifically and denotes the object of the dissertation in general. As a matter of fact, in this dissertation I do not pursue any
particular link between these two parts, although conceptually they are closely
related indeed. The only obvious link is that I carried out both of the research
lines during the last four years.
Below I shall offer some general, but more technical introductions regarding
these two topics.
1.1 Axiomatizability of Process Algebras
Process algebra, or process theory, constitutes an attempt to reason about “behaviors of systems” in a mathematical framework. Generally speaking, process algebras are prototype specification languages for reactive systems – that is, for systems that
compute by reacting to stimuli from their environment. There are a plethora of
applications of process algebras (see, e.g. [Bae90]), but mostly they are applied
to prove the correctness of system behaviors (see, e.g. [Fok07, AILS07]). In a
nutshell, they enable us to express (un)desirable properties of the behavior of
a system in an abstract way, and to deduce by mathematical manipulations
whether or not the behavior satisfies such a property.
To put it more concretely, a process algebra usually consists of a collection of basic operations for constructing new system descriptions from existing
ones, together with some facility for the recursive definition of system behaviors. This “algebraic” flavor somewhat explains its name – process algebra, which
was coined by Bergstra and Klop in 1982 [BK82]. (See also the essay [Lut06].)
Well-known examples of such languages are CCS [Mil80, Mil89a], CSP [Hoa85]
and ACP [BK84, BW90]. Starting from a syntax for a process algebra, each
syntactic object is supplied with some kind of behavior, which is usually described by labeled transition systems (LTSs, [Kel76]). In general, LTSs are a
fundamental formalism for the description of concurrent computation, which
is widely used in light of its flexibility and applicability. In process algebras,
LTSs model processes by explicitly describing their states and the transitions
from state to state, together with the actions that produce them. In particular,
they underlie Plotkin’s structural operational semantics (SOS, [Plo04a, Plo04b])
and, following Milner’s pioneering work on CCS, LTSs are by now the standard
formalism for describing the operational semantics of various process algebras.
Since this LTS view of process behaviors is very detailed, several notions of
behavioral semantics, in terms of equivalences and preorders, have been proposed
for LTSs. They are largely surveyed by van Glabbeek, forming the celebrated
linear time – branching time spectrum [vG90, vG93b]. The aim of such behavioral semantics is to identify those (states of) LTSs that afford the same
behaviors in some appropriate technical sense. These preorders and equivalences are mostly interesting when they are precongruences and congruences
respectively, i.e., they are closed under the operators of the considered process
algebras. In light of this, one may define intuitively appealing semantic models
for a process algebra as quotients of the collection of LTSs modulo some behavioral (pre)congruence. Notable examples of semantic equivalences include the two extreme cases (in the sense of coarsest and finest), namely trace equivalence and concrete (a.k.a. strong) bisimulation equivalence [Par81], together with two weak
versions of concrete bisimulation equivalence, i.e. weak [Mil89a] and branching
[vGW96] bisimulation equivalences.
Process algebra has become a full-fledged branch of theoretical computer science (TCS). Instead of supplying an exhaustive introduction, I refer the readers
to [Bae05] for a historical account, [Hoa85, Hen88, Mil89a, BW90, Fok00] for
excellent textbooks, [Mil90, BV95] for representative tutorials, and [BPS01] as
the encyclopedia.
After this very condensed introduction to process algebra, let us focus on
the main topic of the first part of the dissertation, axiomatizability. Typically,
process algebra expresses a plethora of preorders and equivalences in axioms, or
(in)equational laws. Axiom systems (or axiomatizations²) arise from the desire to isolate the features that are common to a collection of algebraic structures, namely, their semantic models. One requires that a set of axioms is sound (i.e.,
if two behaviors can be equated, then they are semantically related), and one
desires that it is complete (i.e., if two behaviors are semantically related, then
they can be equated). As a matter of fact, having defined a semantic model
for a process algebra in terms of LTSs, it is natural to study its (in)equational
theory, that is, the collection of (in)equations that are valid in the given model.
The key questions here are:
• Are there reasonably informative sound and complete axiom systems for
the chosen semantic model?
• Does the algebra of LTSs modulo the chosen notion of behavioral semantics
afford a finite (in)equational axiomatization?
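To give these two questions a concrete flavor, here is a standard illustration (anticipating the formal introduction of BCCSP in Section 2.1.4, where + denotes alternative composition and 0 the deadlocked process): the following four well-known axioms constitute a sound and ground-complete axiomatization of bisimulation equivalence, the finest semantics in the spectrum, over BCCSP:

x + y ≈ y + x
(x + y) + z ≈ x + (y + z)
x + x ≈ x
x + 0 ≈ x

Soundness means that any two closed terms equated by these axioms are indeed bisimilar; ground-completeness means that any two bisimilar closed terms can be proved equal from them.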
A sound and complete axiomatization of a behavioral congruence (resp. precongruence) yields a purely syntactic characterization, independent of LTSs and
of the actual details of the definition of the chosen behavioral equivalence (resp.
preorder), of the semantics of the process algebra. This bridge between syntax
and semantics plays an important role in both the practice and the theory of
process algebras. From the point of view of practice, these proof systems can
be used to perform system verifications in a purely syntactic way using general-purpose theorem provers or proof checkers, and form the basis of purpose-built axiomatic verification tools such as PAM [Lin95]. A positive answer to the first
basic question raised above is therefore not just theoretically pleasing, but has
potential practical applications. From the theoretical point of view, complete
axiomatizations of behavioral equivalences (resp. preorders) capture the essence
of different notions of semantics for processes in terms of a basic collection of
identities, and this often allows one to compare semantics which might have
been defined in very different styles and frameworks.
² Both terminologies will be used interchangeably throughout the dissertation.
In universal algebra [BS81], a sound and complete axiomatization is referred
to as a basis for the (in)equational theory of the algebra it axiomatizes. The
existence of a finite basis for an (in)equational theory is a classic topic of study in
universal algebra (see, e.g. [MMT87]), dating back to Lyndon [Lyn51]. Murskiĭ
[Mur75] proved that “almost all” finite algebras (namely all quasi-primal ones)
are finitely based, while in [Mur65] he presented an example of a three-element
algebra that has no finite basis. Henkin [Hen77] showed that the algebra of
naturals with addition and multiplication is finitely based, while Gurevič [Gur90]
showed that after adding exponentiation the algebra is no longer finitely based.
McKenzie [McK96] settled Tarski’s Finite Basis Problem in the negative, by
showing that the general question whether a finite algebra is finitely based is
undecidable.
Process algebra, as a branch of universal algebra and equational logic, naturally features the study of results pertaining to the existence or non-existence
of finite bases for algebras modulo given semantics. After nearly thirty years
of intensive research on this topic, numerous problems still remain open (see
e.g. [Ace03, AFIL05, AI07] for surveys). The first part of the dissertation is
devoted to solving some of them (see Section 1.3.1 for a summary).
1.2 Verification of Probabilistic Real-time Systems
Embedded software is now omnipresent: it controls telephone switches and satellites, drives our climate control, runs pacemakers, and makes our cars and TVs
work. It permanently interacts with its – mostly physical – environment via
sensors and actuators.
Embedded applications often feature systems which exhibit both probabilistic
and real-time behaviors: Embedded software must robustly and autonomously
operate under highly unpredictable environmental conditions, which are typically modeled by randomness. It should also react promptly to stimuli from its environment (timeliness), which requires quantitative information about elapsing time to be treated explicitly in the modeling. In general,
systems exhibiting real-time and probabilistic aspects are referred to as probabilistic real-time systems in the current dissertation. The increasing reliance
on embedded systems in diverse fields has led to growing interest in obtaining formal guarantees of system correctness. In light of this, the second part
of the dissertation concentrates on applying formal verification techniques to
probabilistic real-time systems.
Model checking is one of the most successful verification techniques for hardware and software.³ In a nutshell, it is an automatic method for guaranteeing that a mathematical model of a system satisfies a formula representing a desired property. Usually the system is modeled by transition systems (typically, Kripke structures or labeled transition systems), and the formula is written in a temporal logic, typically computation tree logic (CTL, [CES86]), a branching-time logic due to Clarke and Emerson, or linear temporal logic (LTL, [Pnu77]), a linear-time logic due to Pnueli and Manna. For textbooks on model checking, see [CGP99, BK08]; [GV08] presents state-of-the-art surveys. Soon after the birth of the flourishing research area of model checking in the early eighties (which largely focused on finite Kripke structures), researchers started to apply this technique extensively to real-time systems, with much attention given to the formalism of timed automata, and to probabilistic systems, which are basically finite automata equipped with probabilities. A brief introduction now follows.
³ The pioneers of model checking received the ACM Turing Award in 1996 (Amir Pnueli) and 2007 (Edmund M. Clarke, E. Allen Emerson and Joseph Sifakis).
Real-time systems. Timed automata (TA, [AD94]), proposed by Alur and
Dill, are a well-established model for real-time systems. This formalism extends
classical automata with a set of real-valued variables – called clocks – that increase synchronously with time. Moreover, each transition is associated with a guard, specifying when, i.e., for which values of the clocks, the transition can be performed, and with a reset operation to be applied when the transition is performed.
Thanks to these clocks, it becomes possible to express constraints over delays
between two transitions.
As for their formal verification, one of the fundamental properties of TA is
that, though in general there are (uncountably) infinitely many possible system
configurations, many verification problems are still decidable, e.g., reachability
and safety properties, untimed ω-regular properties, and branching real-time
temporal properties. This mainly goes back to a powerful region construction [AD94], yielding a finite-state abstraction for a TA, referred to as the region automaton.
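To make the region construction slightly more concrete, here is a small, purely illustrative sketch (hypothetical helper names, not code from the dissertation, assuming one maximal constant per clock) of the Alur-Dill equivalence on clock valuations that underlies it; the region automaton is obtained by quotienting the clock space under this equivalence.

from math import floor

def region_equivalent(v, w, cmax):
    """Check Alur-Dill region equivalence of two clock valuations.

    v, w : dicts mapping clock names to non-negative reals
    cmax : dict mapping each clock to the maximal constant it is compared against
    """
    frac = lambda r: r - floor(r)
    relevant = [x for x in cmax if v[x] <= cmax[x] or w[x] <= cmax[x]]
    for x in relevant:
        # both values must be "small", with equal integer parts and
        # agreeing on whether the fractional part is zero
        if v[x] > cmax[x] or w[x] > cmax[x]:
            return False
        if floor(v[x]) != floor(w[x]):
            return False
        if (frac(v[x]) == 0) != (frac(w[x]) == 0):
            return False
    # the orderings of the fractional parts of the "small" clocks must agree
    for x in relevant:
        for y in relevant:
            if (frac(v[x]) <= frac(v[y])) != (frac(w[x]) <= frac(w[y])):
                return False
    return True

For instance, with cmax = {'x': 2}, the valuations {'x': 0.3} and {'x': 0.7} are region-equivalent, whereas {'x': 0.3} and {'x': 1.3} are not.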
Various timed temporal logics [AH93] have been proposed, which extend
classical untimed temporal logics by enriching modal operators with timing
constraints. Examples are a timed extension of CTL called TCTL [ACD93],
timed extensions of LTL such as M(I)TL [Koy90, AFH96] and TPTL [AH94],
and a timed extension of µ-calculus called Lµ [LLW95]. Region abstraction is
also pivotal to the decidability of model checking a TA against TCTL [ACD93];
for the decidability of M(I)TL, see [OW08] for a survey.
TA are a special class of hybrid automata [ACHH92], proposed by Alur et al.,
which are a formal model for hybrid systems. A hybrid system is a dynamic system which exhibits both continuous and discrete dynamic behaviors – a system
that can both flow (described by a differential equation) and jump (described
by a difference equation). In general, a hybrid system can be described by a few
pieces of information. The state of the system consists of vector signals, which
can change according to dynamic laws in the system data. The data includes
a flow equation, which describes the continuous dynamics, a flow set, in which
flow is permitted, a jump equation, which describes the discrete dynamics, and
a jump set, in which discrete state evolution is permitted. Hybrid automata
provide an elegant and economical way to model hybrid systems. In contrast
to TA, they are equipped with variables that evolve continuously with time according to dynamical laws, instead of simple clocks. Hybrid automata are very
expressive, and model checking them is mostly undecidable [Hen96]. To facilitate the verification task, numerous subclasses of hybrid automata have been identified [HKPV98], including rectangular hybrid automata and linear hybrid
automata, to name a few.
There is a sea of publications on TA. The readers are referred to, among
others, [AH91, AH97, Hen98, Alu99, BY03, AM04, BL08, Bou09] for surveys
from different perspectives.
Probabilistic systems. To model random phenomena, transition systems
are enriched with probabilities. This can be done in different ways. In discrete-time Markov chains (DTMCs), all choices are probabilistic. Roughly speaking,
DTMCs are transition systems with probability distributions for the successors
of each state. That is, instead of a nondeterministic choice, the next state is
chosen probabilistically. DTMCs are not appropriate for modeling randomized
distributed systems, since they cannot model the interleaving behavior of the
concurrent processes in an adequate manner. For this purpose, Markov decision
processes (MDPs, [Put94]) are used. In MDPs, both nondeterministic and probabilistic choices co-exist. To put it in a nutshell, MDPs are transition systems in
which in any state a nondeterministic choice between probability distributions
exists. Once a probability distribution has been chosen nondeterministically,
the next state is selected in a probabilistic manner as in DTMCs. Any DTMC
is thus an MDP in which in any state the probability distribution is uniquely
determined.
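As a minimal illustration (a sketch with made-up names, assuming a finite DTMC given as a row-stochastic matrix), the one-step behavior of a DTMC is just matrix-vector multiplication; an MDP would, in every state, additionally offer a nondeterministic choice among several such rows.

def step(dist, P):
    """One step of a DTMC: push a distribution through the stochastic matrix P."""
    n = len(P)
    return [sum(dist[s] * P[s][t] for s in range(n)) for t in range(n)]

def distribution_after(dist, P, k):
    """Distribution over states after k steps, starting from dist."""
    for _ in range(k):
        dist = step(dist, P)
    return dist

# A three-state example: from state 0, move to 1 or 2 with probability 1/2 each;
# states 1 and 2 are absorbing.
P = [[0.0, 0.5, 0.5],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
print(distribution_after([1.0, 0.0, 0.0], P, 3))   # [0.0, 0.5, 0.5]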
The verification of probabilistic systems can be focused on either quantitative
properties or qualitative properties (or both). Quantitative properties typically
put constraints on the probability or expectation of certain events. Example
instances of quantitative properties are the requirement that the probability of
delivering a message within the next t time units is at least 0.98, or that the
expected number of unsuccessful attempts to find a leader in a concurrent system
is at most seven. Qualitative properties, on the other hand, typically assert that
a certain (good) event will happen almost surely, i.e., with probability one, or
dually, that a certain (bad) event almost never occurs, i.e., with zero probability.
Typical qualitative properties for Markov models are reachability, persistence
(does a property hold permanently from some point on?), and repeated reachability
(can certain states be reached repeatedly?).
One way to specify properties over DTMCs and MDPs is to use LTL, or
alternatively ω-regular properties [Tho97]. The quantitative model-checking
problem is then to compute the probability for the set of paths in the model
that satisfy the property. The readers are referred to [Var85, CY95] for original
work. On the other hand, probabilistic computation tree logic (PCTL, [HJ94]),
proposed by Hansson and Jonsson, is a quantitative variant of CTL where the
path quantifiers ∀ and ∃ are replaced by a probabilistic operator PJ (ϕ) that
specifies lower and/or upper probability bounds (given by J) for the event ϕ.
PCTL model checking for finite DTMCs relies on the standard CTL model-checking procedure in combination with methods for computing reachability
probabilities [HJ94]. When interpreting PCTL on MDPs, the formula PJ(ϕ) quantifies over all schedulers (a.k.a. strategies, policies, adversaries). The PCTL
model-checking problem for MDPs is reducible to the reachability problem, as
shown by Bianco and de Alfaro [BdA95].
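As a rough sketch of the reachability computations that such model checking reduces to (hypothetical helper names; exact methods solve the corresponding linear equation system instead), the probability of eventually reaching a set of target states in a DTMC is the least solution of a simple equation system, which can be approximated by iterating from below. For MDPs one additionally takes, in each state, the maximum or minimum over the enabled distributions, depending on the bound in PJ.

def reachability_probabilities(P, targets, iterations=1000):
    """Approximate, for every state s, Pr(eventually reach `targets`) in a DTMC.

    Iterates x[s] = 1 for target states and x[s] = sum_t P[s][t] * x[t]
    otherwise; starting from 0, the iterates converge to the least solution,
    which is the reachability probability.
    """
    n = len(P)
    x = [1.0 if s in targets else 0.0 for s in range(n)]
    for _ in range(iterations):
        x = [1.0 if s in targets else sum(P[s][t] * x[t] for t in range(n))
             for s in range(n)]
    return x

# In the small DTMC from the previous sketch, state 1 is reached from
# state 0 with probability 0.5.
P = [[0.0, 0.5, 0.5],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
print(reachability_probabilities(P, {1})[0])   # approximately 0.5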
Probabilistic real-time systems. Unsurprisingly, a number of models encompassing both probabilistic and real-time aspects have already appeared in
the literature. In particular, here I mention two classes of them:
• Probabilistic timed automata (PTA) extend both TA and MDPs. In contrast to TA, the edge relation of PTA is both nondeterministic and probabilistic in nature. More precisely, in a location one nondeterministically
chooses amongst the set of enabled discrete probability distributions, each
of which is defined over a finite set of locations. Once a selection has been
made, the next location is randomly determined according to the selected
distribution. This formalism has proved particularly useful for the study
of timed randomized algorithms, such as the IEEE 1394 FireWire standard
[KNS03]. PTA have been verified against quantitative probabilistic timed
properties [KNSS02, KNSW07], which refer to the satisfaction of a property with a certain probability. An example of such a property is “with
probability at least 0.99, a request will be followed by a response within 5
milliseconds”. Model checking a PTA against a probabilistic extension of
TCTL (i.e., PTCTL) is decidable using a small adaptation of the region
construction for TA [KNSS02]. Recently, a natural extension of PTA, i.e.,
priced PTA (PPTA) has been defined, and a semi-decidable algorithm has
been provided for cost-bounded probabilistic reachability [BJK06].
• Continuous-time stochastic models. The aforementioned probabilistic models are discrete, as the notion of time involved is of a discrete nature: each
transition represents a single time step. This contrasts with continuous-time models where state residence times are determined by some probability distribution like a normal, uniform, or negative exponential one.
One of the most fundamental such models is the continuous-time Markov chain (CTMC). CTMCs can be seen as generalizations of DTMCs, obtained by enhancing them with negative exponential state residence time distributions. With a similar extension of MDPs, one can obtain continuous-time Markov
decision processes (CTMDPs). CTMCs play an essential role in performance and dependability analysis. They are exploited in a broad range of
applications, and constitute the underlying semantical model of a plethora
of modeling formalisms for probabilistic real-time systems such as Markovian queueing networks, stochastic Petri nets, stochastic variants of process algebras, and, more recently, calculi for systems biology.
CTMC model checking has been focused on the temporal logic continuous
stochastic logic (CSL, [ASSB00, BHHK03]), a variant of TCTL where the
CTL path quantifiers are replaced by a probabilistic operator as in the
case of PCTL. CSL model checking proceeds – like CTL model checking
– by a recursive descent over the parse tree of the formula. One of the
key ingredients is that the reachability probability for a time-bounded
until-formula can be characterized as the least solution of a system of
integral equations and approximated arbitrarily closely by a reduction to
transient analysis in CTMCs. This results in a polynomial-time algorithm
that has been realized in model checkers such as PRISM [HKNP06] and
MRMC [KKZ05]. (A small numerical sketch of this transient-analysis step follows directly below, after this list.)
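The following is a rough numerical sketch of that transient-analysis reduction (made-up names, assuming a CTMC given by a rate matrix without self-loops; this is only an illustration, not the algorithm developed in Chapter 7): goal states are made absorbing, and the transient probability of sitting in a goal state at time t, computed by uniformization, then equals the probability of having reached it within t.

from math import exp

def time_bounded_reachability(R, targets, t, terms=200):
    """Approximate, per state, Pr(reach `targets` within time t) in a CTMC.

    R[s][u] is the rate of the transition s -> u (no self-loops assumed).
    Target states are made absorbing; transient probabilities are computed
    by uniformization with a truncated Poisson sum.
    """
    n = len(R)
    R = [[0.0] * n if s in targets else list(R[s]) for s in range(n)]
    exit_rates = [sum(row) for row in R]
    q = max(exit_rates) or 1.0                            # uniformization rate
    P = [[R[s][u] / q + ((1.0 - exit_rates[s] / q) if s == u else 0.0)
          for u in range(n)] for s in range(n)]           # uniformized DTMC
    y = [1.0 if s in targets else 0.0 for s in range(n)]  # indicator of targets
    result = [0.0] * n
    weight = exp(-q * t)                                  # Poisson(0; q*t)
    for k in range(terms):
        result = [result[s] + weight * y[s] for s in range(n)]
        y = [sum(P[s][u] * y[u] for u in range(n)) for s in range(n)]
        weight *= q * t / (k + 1)
    return result

# One state with rate 2 to an absorbing target: Pr(reach within 1) = 1 - e^(-2).
R = [[0.0, 2.0],
     [0.0, 0.0]]
print(time_bounded_reachability(R, {1}, 1.0)[0])   # about 0.8647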
Numerous (more general or incomparable) models do exist. They include,
for instance, probabilistic timed I/O automata [ML07], generalized semi-Markov
processes (studied in [ACD91a, ACD91b] almost twenty years ago and still a very
active area of research [AB06, BA07]), stochastic transition systems [CSKN05],
labeled Markov processes [DEP02], to name a few. To give an exhaustive survey
of research on probabilistic (real-time) systems is far beyond the scope of the
dissertation. For further information, the readers are referred to [BHH+04] for general tutorials (see also the references therein). Moreover, a leisurely introduction to (discrete-time) probabilistic models can be found in [Sto02]; [Hav00] offers a tutorial for Markovian models, [Kat08a, Kat08b] provide state-of-the-art surveys regarding probabilistic verification, and [Spr04, KNPS08] supply
extensive introductions to probabilistic real-time systems.
1.3 Overview of the Dissertation
This section lists the results that are achieved in this dissertation.
1.3.1 Part I: Axiomatizability of Process Algebras
Chapters 3 – 6 concentrate on the axiomatizability of process algebras, mostly
on two basic process algebras BCCSP and BCCS (see Section 2.1.4 for their
formal accounts).
Chapter 3: Meta-theories for Axiomatizability. This chapter mainly
presents two meta-theorems regarding axiomatizability. The first one concerns
the relationship between preorders and equivalences. We show that the same
algorithm proposed by Aceto et al. and de Frutos Escrig et al. for concrete semantics, which transforms an axiomatization for a preorder into one for the
corresponding equivalence, applies equally well to weak semantics. This makes
it applicable to all 87 preorders surveyed in the “linear time – branching time
spectrum II” that are at least as coarse as the ready simulation preorder. We
also extend the scope of the algorithm to infinite processes, by adding recursion
constants. As an application of both extensions, we provide a ground-complete
axiomatization of the CSP failures equivalence for BCCS processes with divergence. The second meta-theorem concerns the relationship between concrete
and weak semantics. For any semantics which is not finer than failures or impossible futures semantics, we provide an algorithm to transform an axiomatization
for the concrete version into one for the weak counterpart. As an application
of this algorithm, we derive ground- and ω-complete axiomatizations for the
the weak failure, weak completed trace, and weak trace preorders. Finally, we provide
an adaptation of Groote’s inverted substitution technique from equivalence to
preorder.
Chapter 4: On Finite Alphabets and Infinite Bases. Van Glabbeek
(1990) presented the “linear time – branching time spectrum I” of concrete
semantics. He studied these semantics in the setting of the process algebra
BCCSP, and gave finite, sound and ground-complete axiomatizations for most
of these semantics. Groote (1990) proved for some of van Glabbeek’s axiomatizations that they are ω-complete, meaning that an equation can be derived
if (and only if) all of its closed instantiations can be derived. We settle the
remaining open questions for all the semantics in the linear time – branching
time spectrum I, either positively by giving a finite sound and ground-complete
axiomatization that is ω-complete, or negatively by proving that such a finite
basis for the equational theory does not exist. We show that in case of a finite alphabet with at least two actions, failure semantics affords a finite basis,
while for ready simulation, completed simulation, simulation, possible worlds,
ready trace, failure trace and ready semantics, such a finite basis does not exist.
Completed simulation semantics also lacks a finite basis in case of an infinite
alphabet of actions.
Chapter 5: Impossible Futures. This chapter studies the (in)equational
theories of concrete and weak impossible futures semantics over the process
algebras BCCSP and BCCS. We present a finite, sound, ground-complete axiomatization for BCCSP modulo the concrete impossible futures preorder, which
implies a finite, sound, ground-complete axiomatization for BCCS modulo the
weak impossible futures preorder. By contrast, we prove that no finite, sound
axiomatization for BCCS modulo the weak impossible futures equivalence is
ground-complete, and this negative result carries over to the concrete case. If
the alphabet of actions is infinite, then the aforementioned ground-complete axiomatizations are shown to be ω-complete. However, if the alphabet is finite and
non-empty, we prove that the inequational (resp. equational) theories of BCCSP
and BCCS modulo the impossible futures preorder (resp. equivalence) lack such
a finite basis. Finally, we show that the negative result regarding impossible
futures equivalence extends to all n-nested impossible futures equivalences for
n ≥ 2, and to all n-nested impossible futures preorders for n ≥ 3.
Chapter 6: Priority. This chapter studies the equational theory of bisimulation equivalence over the process algebra BCCSPΘ , i.e., BCCSP extended
with the priority operator Θ of Baeten et al. It is proven that, in the presence of
an infinite set of actions, bisimulation equivalence has no finite, sound, ground-complete axiomatization over that language. This negative result applies even
if the syntax is extended with an arbitrary collection of auxiliary operators, and
motivates the study of axiomatizations using equations with action predicates
as conditions. In the presence of an infinite set of actions, it is shown that,
in general, bisimulation equivalence has no finite, sound, ground-complete axiomatization consisting of equations with action predicates as conditions over
BCCSPΘ . Finally, sufficient conditions on the priority structure over actions
are identified that lead to a finite, sound, ground-complete axiomatization of
bisimulation equivalence using equations with action predicates as conditions.
1.3.2 Part II: Verification of Probabilistic Real-time Systems
Chapters 7 and 8 concentrate on the verification of probabilistic real-time systems.
Chapter 7: Model Checking of CTMCs Against DTA Specifications.
This chapter investigates the following problem: given a continuous-time Markov
chain (CTMC) C, and a linear real-time property provided as a deterministic
timed automaton (DTA) A, what is the probability of the set of paths of C that
are accepted by A (C satisfies A)? It is shown that this set of paths is measurable and computing its probability can be reduced to computing the reachability
probability in a piecewise deterministic Markov process (PDP). The reachability probability is characterized as the least solution of a system of integral
equations and is shown to be approximated by solving a system of partial differential equations. For the special case of single-clock DTA, the system of
integral equations can be transformed into a system of linear equations where
the coefficients are solutions of ordinary differential equations.
Chapter 8: Probabilistic Time-abstracting Bisimulation for PTA.
This chapter focuses on probabilistic timed automata (PTA), an extension of
timed automata with discrete probabilistic branchings. As the region construction of these automata often leads to an exponential blow-up over the size of
the original automata, reduction techniques are of the utmost importance. In this
chapter, we investigate probabilistic time-abstracting bisimulation (PTaB), an
equivalence notion that abstracts from exact time delays. PTaB is proven to
preserve probabilistic computation tree logic. The region equivalence is a (very
refined) PTaB. Furthermore, we provide a non-trivial adaptation of the traditional partition-refinement algorithm to compute the quotient under the PTaB.
This algorithm is symbolic in the sense that equivalence classes are represented
as polyhedra.
1.4 Origins of the Chapters and Credits
Chapter 3 is based on [CFvG08] and [CFvG09]:
[CFvG08] Taolue Chen, Wan Fokkink, and Rob J. van Glabbeek. Ready
to preorder: The case of weak process semantics. Inf. Process. Lett.,
109(2):104–111, 2008.
[CFvG09] Taolue Chen, Wan Fokkink, and Rob J. van Glabbeek. On
finite bases for weak semantics: Failures versus impossible futures. Proc.
of SOFSEM, LNCS 5404, pages 167–180. Springer, 2009.
In 2007, Wan Fokkink, Rob van Glabbeek and I were studying the axiomatization for weak failures equivalence. We found that the elegant algorithm
given in [AFI07] can be applied equally well to weak semantics, which gives
rise to the results in [CFvG08], and now to the first meta-theorem in this chapter. The second meta-theorem was inspired by an anonymous referee's comment
for [CFvG09]. Thanks to this result, part of the proofs in [CFvG09] can be
simplified considerably.
Chapter 4 is based on [CFLN08]:
[CFLN08] Taolue Chen, Wan Fokkink, Bas Luttik, and Sumit Nain. On
finite alphabets and infinite bases. Inf. Comput., 206(5):492–519, 2008.
It subsumes a series of “on finite alphabets and infinite bases” papers [FN04,
FN05, CFN06, CF06] trying to solve the open problem posed by Rob van
Glabbeek dating back to 1990. The results on failures, ready pairs, ready traces
and possible worlds presented in [FN04, FN05] are due to Wan Fokkink and
Sumit Nain and were finished before I started the PhD study, although in the
end I did contribute to writing the journal version [CFLN08], including rewriting these parts. Nevertheless, I am not supposed to get any credit regarding
these results. However, I decided to include them here solely for the sake of
completeness, since without them the chapter would have become fragmented.
[CFN06, CF06] are:
[CFN06] Taolue Chen, Wan Fokkink, and Sumit Nain. On finite alphabets
and infinite bases II: Completed and ready simulation. Proc. of FoSSaCS,
LNCS 3921, pages 1–15. Springer, 2006.
[CF06] Taolue Chen and Wan Fokkink. On finite alphabets and infinite
bases III: Simulation. Proc. of CONCUR, LNCS 4137, pages 421–434.
Springer, 2006.
Chapter 5 is based on two conference papers [CF08] and [CFvG09]:
[CF08] Taolue Chen and Wan Fokkink. On the axiomatizability of impossible futures: Preorder versus equivalence. In LICS, pages 156–165.
IEEE Computer Society, 2008.
[CFvG09] Taolue Chen, Wan Fokkink, and Rob J. van Glabbeek. On
finite bases for weak semantics: Failures versus impossible futures. Proc.
of SOFSEM, LNCS 5404, pages 167–180. Springer, 2009.
In 2007, spurred by a question from Jos Baeten, Wan Fokkink and I started to
investigate the impossible futures semantics, which can be viewed as a continuation of the work in [CFLN08]. To our great surprise, this semantics enjoys a
special status (at least in the aforementioned linear time – branching time spectrum)
in the sense that it affords a ground-complete axiomatization for the preorder
but lacks one for the equivalence. Our results regarding the concrete impossible
futures semantics were reported in [CF08]. After that, we tried to push the work to the weak setting and realized that, surprisingly, weak impossible futures semantics resembles failures semantics in its definition, but differs considerably in its axiomatizability.
Together with Rob van Glabbeek, we obtained a first result regarding weak
impossible futures and failures semantics, reported in [CFvG09]. As said, after
that, inspired by a referee’s comment, we found that [CFvG09] can be simplified,
which led to the current formulation of the results.
Chapter 6 is based on [ACFI08], which is the extended version of [ACFI06]:
[ACFI06] Luca Aceto, Taolue Chen, Wan Fokkink, and Anna Ingólfsdóttir.
On the axiomatizability of priority. Proc. of ICALP (2), LNCS 4052, pages
480–491. Springer, 2006.
[ACFI08] Luca Aceto, Taolue Chen, Wan Fokkink, and Anna Ingólfsdóttir.
On the axiomatisability of priority. Mathematical Structures in Computer
Science, 18(1):5–28, 2008.
The question of the axiomatization for the priority operator appears as an open
problem in [AFIN06]. In 2005, Wan Fokkink brought this question to my attention, and together with Luca Aceto, Anna Ingólfsdóttir and him, I finally managed to solve it to a large extent.
Chapter 7 is based on [CHKM09b]:
[CHKM09b] Taolue Chen, Tingting Han, Joost-Pieter Katoen, and Alexandru Mereacre. Quantitative model checking of continuous-time Markov
chains against timed automata specifications. Proc. of LICS. IEEE Computer Society, 2009.
This work was inspired by a question I posed, during a visit to RWTH Aachen in August 2008, on how to verify linear real-time properties of CTMCs. While the initial attempt on MTL specifications failed in the end, together with Tingting Han, Joost-Pieter Katoen and Alexandru Mereacre, I found an approach to tackle DTA specifications, which gave rise to the results presented
in this chapter.
Chapter 8 is based on [CHK08]:
[CHK08] Taolue Chen, Tingting Han, and Joost-Pieter Katoen. Time-abstracting bisimulation for probabilistic timed automata. Proc. of TASE,
pages 177–184. IEEE Computer Society, 2008.
The question was posed by Joost-Pieter Katoen in 2007. Together with Tingting
Han and him, I worked out the definition of PTaB over PTA, and a zone-based
algorithm for computing it.
Besides the aforementioned work, I also studied some other problems. For
instance, together with Bas Ploeger, Jaco van de Pol and Tim Willemse, I
studied equivalence checking for infinite systems using parameterized boolean
equation systems, which is reported in [CPvdPW07]. Together with Jaco van
de Pol and Yanjing Wang, I studied the propositional dynamic logic over accelerated labeled transition systems, including the model checking, satisfiability
checking and axiomatization problems, which is reported in [CvdPW08]. Together with Jasper Berendsen and David Jansen, I proved the undecidability
of the cost-bounded probabilistic reachability problem for PPTA, which is reported
in [BCJ09]. Together with Tingting Han, Joost-Pieter Katoen and Alexandru
Mereacre, I studied LTL model checking for inhomogeneous CTMCs, which is
reported in [CHKM09a]. However, these studies are not included in the current
dissertation. The references are, in order:
[CPvdPW07] Taolue Chen, Bas Ploeger, Jaco van de Pol, and Tim A. C.
Willemse. Equivalence checking for infinite systems using parameterized
boolean equation systems. Proc. of CONCUR, LNCS 4703, pages 120–
135. Springer, 2007.
[CvdPW08] Taolue Chen, Jaco van de Pol, and Yanjing Wang. PDL over
accelerated labeled transition systems. Proc. of TASE, pages 193–200.
IEEE Computer Society, 2008.
[BCJ09] Jasper Berendsen, Taolue Chen, and David N. Jansen. Undecidability of cost-bounded reachability in priced probabilistic timed automata. Proc. of TAMC, LNCS 5532, pages 128–137. Springer, 2009.
[CHKM09a] Taolue Chen, Tingting Han, Joost-Pieter Katoen, and Alexandru Mereacre. LTL model checking of time-inhomogeneous Markov chains.
Proc. of ATVA, to appear in LNCS. Springer, 2009.
1.5 Suggested Way of Reading
This dissertation is written as a collection of independent chapters. Each chapter (from Chapter 3 to Chapter 8) comes with its own introduction and road
map. Related and future work and conclusions, when relevant, appear for each
chapter. These chapters are, however, not entirely self-contained. Some preliminary notions and notations are given in Chapter 2. (Note that only (very)
basic notions which will be addressed in multiple chapters later are collected in
this chapter, while those pertaining to an individual chapter are included in the
preliminary section of the corresponding chapter.) Together with them, each
chapter can be read separately. If the reader is interested in the first part, I suggest reading Chapter 3 first, since it gives a flavor of the whole of Part I, and
contains results which suggest the later developments.
Due to the diversity of the topics and heavy usage of symbols in the dissertation, I found it difficult to unify the notations for all the chapters. In other
words, overloading is unavoidable. However, for each chapter, the notation is
consistent and unambiguous, so readers are kindly requested to take a local instead of a global view of notations throughout this dissertation. Moreover, since
most of the results presented here are fruits of collaborations (see Section 1.4), to
emphasize this fact, I shall use “we” instead of “I” in the remainder of this
dissertation. (Note that this already applies to Section 1.3.)
Chapter 2
Preliminaries
2.1 Background for Part I
2.1.1 Labeled Transition Systems
We assume a non-empty, countable set A of concrete (a.k.a. observable, external, visible) actions, not containing the distinguished symbol τ. Following Milner, the symbol τ will be used to denote a hidden (a.k.a. unobservable, internal, invisible) action of a system. We define Aτ = A ∪ {τ}, and use a, b, . . . to range over A and α, β to range over Aτ.
Definition 2.1.1 (Labeled transition system) A labeled transition system
(LTS) consists of
• A set of states S, with typical element s; and
• A transition relation → ⊆ S × L × S, where L ⊆ Aτ is a set of labels
ranged over by ℓ.
ℓ
We write s −−→ s′ if the triple (s, ℓ, s′ ) is an element of −−→. Let ℓ1 · · · ℓk be a
ℓ ···ℓk
sequence of labels; we write s −−1−−→
s′ if there exist states s0 , . . . , sk such that
ℓk
ℓ1
′
s = s0 −−→ · · · −−→ sk = s .
Given any X ⊆ Aτ, we refer to an X-labeled transition system if the label set L is X. For the case that τ ∉ X, we define I(s) = {ℓ ∈ X | s −ℓ→ s′ for some state s′}. For the case that τ ∈ X, we write ⇒ for the reflexive-transitive closure of −τ→, i.e., ⇒ = (−τ→)*, and define I(s) = {ℓ ∈ A | s ⇒−ℓ→ s′ for some state s′}.
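To illustrate these notions, the following small Python sketch (not part of the original text; the transition system, state names and action names are invented for the example) represents a finite Aτ-labeled transition system as a set of triples and computes both readings of I(s).

```python
# A hypothetical finite LTS over A_tau, given as a set of triples (s, label, s').
TAU = 'tau'
transitions = {
    ('s0', TAU, 's1'), ('s0', 'a', 's2'),
    ('s1', 'b', 's2'), ('s2', 'a', 's2'),
}

def strong_initials(s):
    """I(s) in the tau-free reading: labels of the outgoing transitions of s."""
    return {l for (p, l, q) in transitions if p == s}

def tau_closure(s):
    """All states reachable from s via the reflexive-transitive closure of tau, i.e. s => ."""
    reached, frontier = {s}, [s]
    while frontier:
        p = frontier.pop()
        for (p1, l, q) in transitions:
            if p1 == p and l == TAU and q not in reached:
                reached.add(q)
                frontier.append(q)
    return reached

def weak_initials(s):
    """I(s) in the tau-full reading: concrete labels enabled after s => ."""
    return {l for p in tau_closure(s)
              for (p1, l, q) in transitions if p1 == p and l != TAU}

# strong_initials('s0') == {'tau', 'a'}, while weak_initials('s0') == {'a', 'b'}.
```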
2.1.2 The Linear Time – Branching Time Spectrum
A large number of notions of behavioral semantics have been proposed, with the
aim to identify those states of LTSs that afford the same behaviors. The diversity
of the proposals is mainly due to the lack of consensus on what constitutes
the most appropriate notion of observable behaviors for reactive systems. Van
Glabbeek presented in [vG90, vG01] the linear time – branching time spectrum
I of behavioral semantics for finitely branching, concrete, sequential processes.
In this section, we first define the preorder and equivalence relations in this
spectrum.
First we define six semantics based on decorated versions of execution traces.
Note that here we only consider finite (decorated) traces. However, it suffices
for the later development, since in this dissertation, we only tackle finite process
algebras which give rise to tree-like LTSs (see Section 2.1.4).
Definition 2.1.2 (Decorated traces) Assume an A-labeled transition system.
• A sequence a1 · · · ak, with k ≥ 0, is a trace of a state s if there is a state s′ such that s −a1···ak→ s′. It is a completed trace of s if moreover I(s′) = ∅. We write T(s) (resp. CT(s)) for the set of traces (resp. completed traces) of s.
• A pair (a1 · · · ak, B), with k ≥ 0 and B ⊆ A, is a ready pair of a state s0 if there is a sequence of transitions s0 −a1→ · · · −ak→ sk with I(sk) = B. It is a failure pair of s0 if there is such a sequence with I(sk) ∩ B = ∅.
• A sequence B0 a1 B1 . . . ak Bk, with k ≥ 0 and B0, . . . , Bk ⊆ A, is a ready trace of a state s0 if there is a sequence of transitions s0 −a1→ · · · −ak→ sk with I(si) = Bi for i = 0, . . . , k. It is a failure trace of s0 if there is such a sequence with I(si) ∩ Bi = ∅ for i = 0, . . . , k.
We write s ≲□ s′ with □ ∈ {T, CT, R, F, RT, FT} if the traces, completed traces, ready pairs, failure pairs, ready traces or failure traces, respectively, of s are included in those of s′.
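As an illustration of Definition 2.1.2, here is a minimal Python sketch; it assumes a finite, acyclic (tree-like) A-labeled LTS, as used throughout Part I, and the concrete transition set is invented for the example.

```python
# A hypothetical acyclic (tree-like) A-labeled LTS over A = {'a', 'b'}.
trans = {('p0', 'a', 'p1'), ('p0', 'a', 'p2'), ('p1', 'b', 'p3')}

def steps(s):
    return [(l, q) for (p, l, q) in trans if p == s]

def initials(s):
    return {l for (l, _) in steps(s)}

def traces(s):
    """T(s): all finite traces of s (the LTS is acyclic, so the recursion terminates)."""
    return {()} | {(l,) + t for (l, q) in steps(s) for t in traces(q)}

def completed_traces(s):
    """CT(s): traces that end in a state without outgoing transitions."""
    if not steps(s):
        return {()}
    return {(l,) + t for (l, q) in steps(s) for t in completed_traces(q)}

def is_failure_pair(s, trace, B):
    """(trace, B) is a failure pair of s if some state reached via trace refuses all of B."""
    if not trace:
        return initials(s).isdisjoint(B)
    return any(l == trace[0] and is_failure_pair(q, trace[1:], B)
               for (l, q) in steps(s))

# traces('p0') == {(), ('a',), ('a', 'b')}
# completed_traces('p0') == {('a',), ('a', 'b')}
# is_failure_pair('p0', ('a',), {'b'}) is True, via the branch leading to p2.
```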
Next we define five semantics based on simulation.
Definition 2.1.3 (Simulations) Assume an A-labeled transition system.
• A binary relation R on states is a simulation if s0 R s1 and s0 −a→ s′0 imply s1 −a→ s′1 for some state s′1 with s′0 R s′1.
• A simulation R is a completed simulation if s0 R s1 and I(s0) = ∅ imply I(s1) = ∅.
• A simulation R is a ready simulation if s0 R s1 and a ∉ I(s0) imply a ∉ I(s1).
• A simulation R is a 2-nested simulation if R⁻¹ is included in a simulation.
• A bisimulation is a symmetric simulation.
We write s ≲□ s′ with □ ∈ {S, CS, RS, 2N} if there exists a simulation, completed simulation, ready simulation or 2-nested simulation R, respectively, with s R s′. We write s ↔ s′ if there exists a bisimulation R with s R s′.
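On a finite LTS the largest simulation can be computed by the textbook greatest-fixpoint refinement; the sketch below is purely illustrative (the LTS is invented, and the algorithm is the standard refinement, not one taken from this dissertation), but it shows how the simulation-based preorders can be decided in principle.

```python
# Two hypothetical processes: q0 = a.b.0 + a.0 and r0 = a.b.0, as an A-labeled LTS.
trans = {('q0', 'a', 'q1'), ('q0', 'a', 'q2'), ('q1', 'b', 'q3'),
         ('r0', 'a', 'r1'), ('r1', 'b', 'r2')}
states = {p for (p, _, _) in trans} | {q for (_, _, q) in trans}
labels = {l for (_, l, _) in trans}

def succ(s, a):
    return {q for (p, l, q) in trans if p == s and l == a}

def greatest_simulation():
    rel = {(s, t) for s in states for t in states}   # start from the full relation
    changed = True
    while changed:
        changed = False
        for (s, t) in set(rel):
            # (s, t) survives iff every step of s can be matched by t inside rel
            ok = all(any((s1, t1) in rel for t1 in succ(t, a))
                     for a in labels for s1 in succ(s, a))
            if not ok:
                rel.discard((s, t))
                changed = True
    return rel

sim = greatest_simulation()
print(('q0', 'r0') in sim)   # True: plain simulation ignores the deadlocked branch q2
print(('r0', 'q0') in sim)   # True as well
```

Note that a completed or ready simulation would reject the pair (q0, r0), since the deadlocked state q2 can only be matched by r1, which still offers b; this is exactly the kind of distinction the finer simulation semantics make.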
Finally, we define three semantics based on (im)possible futures and on possible worlds.
Definition 2.1.4 ((Im)Possible futures/worlds) Assume an A-labeled transition system.
• A pair (a1 · · · ak, X), with k ≥ 0 and X ⊆ A∗, is a possible future of a state s0 if there is a sequence of transitions s0 −a1→ · · · −ak→ sk where X is the set of traces of sk.
• A pair (a1 · · · ak, X), with k ≥ 0 and X ⊆ A∗, is an impossible future of a state s0 if there is a sequence of transitions s0 −a1→ · · · −ak→ sk for some state sk with T(sk) ∩ X = ∅.
• A state s is deterministic if for each a ∈ I(s) there is exactly one state s0 such that s −a→ s0, and moreover s0 is deterministic. A state s is a possible world of a state s0 if s is deterministic and s R s0 for some ready simulation R.
We write s ≲□ s′ with □ ∈ {PF, IF, PW} if the possible futures, impossible futures or the possible worlds, respectively, of s are included in those of s′.
In general, we write s ≃□ s′ if both s ≲□ s′ and s′ ≲□ s, for □ ∈ {T, CT, R, F, RT, FT, S, CS, RS, 2N, PF, IF, PW}.
Fig. 2.1 depicts the linear time – branching time spectrum I,¹ where an arrow
from one semantics to another means that the source of the arrow is finer than
the target.
In contrast, in [vG93b], 155 weak semantics, which take into account the
hidden action τ , are surveyed. They constitute the “linear time – branching
time spectrum II” for finitely branching, abstract, sequential processes. To give
a complete description of this spectrum falls outside the scope of the current
dissertation. Below we only select some of the semantics in the spectrum which
will be used in the later chapters. We first define weak versions of decorated
traces.
Definition 2.1.5 (Weak decorated traces) Assume an Aτ -labeled transition system.
• A sequence a1 · · · ak, with k ≥ 0, is a weak trace of a state s if there is a state s′ such that s ⇒−a1→⇒ · · · ⇒−ak→⇒ s′. It is a weak completed trace of s if moreover I(s′) = ∅. We write WT(s) (resp. WCT(s)) for the set of weak traces (resp. weak completed traces) of s.
¹ Note that the completed simulation and impossible futures semantics were missing in the
original spectrum I [vG90, vG01], but are included here.
[Figure 2.1: The linear time – branching time spectrum I. The diagram relates bisimulation, 2-nested simulation, ready simulation, possible worlds, possible futures, completed simulation, ready traces, failure traces, simulation, readies, failures, impossible futures, completed traces and traces; an arrow from one semantics to another means that the source is finer than the target.]
• A pair (a1 · · · ak, B), with k ≥ 0 and B ⊆ A, is a weak failure pair of a state s0 if there is a sequence of transitions s0 ⇒−a1→⇒ · · · ⇒−ak→⇒ s′ with I(s′) ∩ B = ∅.
• A pair (a1 · · · ak, X), with k ≥ 0 and X ⊆ A∗, is a weak impossible future of a state s if there is a sequence of transitions s ⇒−a1→⇒ · · · ⇒−ak→⇒ s′ with WT(s′) ∩ X = ∅.
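The weak decorated traces can be computed by first saturating the step relation with ⇒. The small sketch below is only an illustration on an invented acyclic BCCS-like LTS, not code from the thesis.

```python
# A hypothetical acyclic A_tau-labeled LTS; 'tau' plays the role of the hidden action.
TAU = 'tau'
trans = {('t0', TAU, 't1'), ('t1', 'a', 't2'), ('t0', 'b', 't3')}

def tau_closure(s):
    closed, todo = {s}, [s]
    while todo:
        p = todo.pop()
        for (p1, l, q) in trans:
            if p1 == p and l == TAU and q not in closed:
                closed.add(q)
                todo.append(q)
    return closed

def weak_steps(s):
    """All (a, s') with s => -a-> => s' and a a concrete action."""
    return {(l, q2)
            for p in tau_closure(s)
            for (p1, l, q) in trans if p1 == p and l != TAU
            for q2 in tau_closure(q)}

def weak_traces(s):
    return {()} | {(a,) + t for (a, q) in weak_steps(s) for t in weak_traces(q)}

# weak_traces('t0') == {(), ('a',), ('b',)}: the initial tau is abstracted away.
```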
As in the concrete case, the notion of weak decorated traces naturally induces
weak (linear-time) semantics, as follows.
Definition 2.1.6
• We write s ≲□ s′ with □ ∈ {WT, WCT, WF, WIF} if the weak traces, weak completed traces, weak failure pairs, or weak impossible futures, respectively, of s are included in those of s′.
• Weak trace preorder ⊑WT, weak completed trace preorder ⊑WCT and weak failures preorder ⊑WF are defined as: for □ ∈ {WT, WCT, WF}, s ⊑□ s′ iff (1) s ≲□ s′ and (2) s −τ→ implies that s′ −τ→.
• Weak impossible futures preorder ⊑WIF is defined as follows: s ⊑WIF s′ iff (1) s ≲WIF s′; (2) s −τ→ implies that s′ −τ→; and (3) WT(s) = WT(s′).
• Weak trace equivalence ≃WT, weak completed trace equivalence ≃WCT, weak failures equivalence ≃WF and weak impossible futures equivalence ≃WIF are defined as: for □ ∈ {WT, WCT, WF, WIF}, ≃□ = ⊑□ ∩ (⊑□)⁻¹.
2.1.3 Universal Algebra and Equational Logic
The aim of this section is to present a suitable general framework within which
the necessary technical algebraic background can be described. This framework
consists of the classic fields of universal algebra and equational logic. We therefore begin by introducing the basic notions from these areas of mathematical
research that will be used throughout the first part of the dissertation. Most
of material here is excerpted from [AFIL05]. We state at the outset that we
shall not need very deep results or constructions from universal algebra in what
follows, and that much more on it could be found in, e.g., the classic reference
[BS81, MMT87] (see also [DW02] for applications in TCS). A self-contained
presentation from a computer science perspective of the topics we now proceed
to introduce may be found in [Hen88].
Σ-algebras We start from a countably infinite set V of variables, with typical
elements x, y, w, z. A signature Σ consists of a set of operation symbols, disjoint
from V , together with a function arity that assigns a natural number to each
operation symbol. The set of terms over Σ is the least set such that
• Each x ∈ V is a term.
• If f is an operation symbol of arity n, and t1 , . . . , tn are terms, then
f (t1 , . . . , tn ) is also a term.
An operation symbol f of arity 0 will be often called a constant symbol, and
the term f () will be abbreviated as f .
We write 𝕋(Σ) for the set of all terms over Σ, and use t, u, v, possibly subscripted and/or superscripted, to range over terms. A term is closed (a.k.a.
ground) if it contains no occurrences of variables. We denote by T(Σ) the set
of closed terms over Σ. A substitution, denoted by ρ, σ, is a mapping from
variables to terms. A substitution is closed if it maps variables to closed terms.
For every term t and substitution σ, the term obtained by replacing every occurrence of a variable x in t with the term σ(x) will be written σ(t). Note that
σ(t) is closed if σ is.
The collection of terms over a signature Σ yields a language. The semantics
of this language can be defined canonically once we equip the set of intended
denotations with the structure of a Σ-algebra. A Σ-algebra is a structure
A = (A, {f^A | f ∈ Σ}),
where A is a non-empty set (often called the carrier of the algebra), and
f^A : A^n → A
for each operation symbol f ∈ Σ of arity n. Note that if f is a constant symbol, then f^A can be viewed as an element of A.
In order to interpret terms in 𝕋(Σ) in a Σ-algebra A = (A, {f^A | f ∈ Σ}) we need the notion of an environment. An environment is a function ρ mapping
variables to elements of A. The mapping ρ can be extended homomorphically to 𝕋(Σ) in a unique way by stipulating that
ρ(f(t1, . . . , tn)) = f^A(ρ(t1), . . . , ρ(tn))
for each operation symbol f of arity n and terms t1, . . . , tn. Note that ρ(t) is independent of ρ whenever t is closed. For each closed term t, we write t^A for the element of A that is the interpretation of t in the algebra A. An element of the carrier set of A is denotable if it is the interpretation of some closed term.
The interpretation of the language 𝕋(Σ) in a Σ-algebra A = (A, {f^A | f ∈ Σ}) naturally induces a congruence relation =A over 𝕋(Σ), which is defined as:
t =A u if, and only if, ρ(t) = ρ(u), for each environment ρ.
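As a toy illustration of interpreting terms in a Σ-algebra (the signature zero/succ/plus over the natural numbers is an invented example, not one used in this dissertation), the homomorphic extension of an environment can be sketched as follows.

```python
# Terms are nested tuples ('f', t1, ..., tn) or variable names (plain strings).
interpretation = {
    'zero': lambda: 0,
    'succ': lambda n: n + 1,
    'plus': lambda m, n: m + n,
}

def evaluate(term, rho):
    """rho(t): interpret term t in the algebra, mapping variables via the environment rho."""
    if isinstance(term, str):                       # a variable
        return rho[term]
    op, *args = term
    return interpretation[op](*(evaluate(a, rho) for a in args))

t = ('plus', 'x', ('succ', ('zero',)))              # the open term x + succ(zero)
u = ('succ', 'x')                                   # the open term succ(x)
# t =_A u holds here: both evaluate to rho(x) + 1 for every environment rho.
print(evaluate(t, {'x': 4}), evaluate(u, {'x': 4}))  # 5 5
```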
The results presented in the first part of the dissertation all aim at using the classical logic of equality to offer a syntactic characterization of the relation =A for
algebras of processes. The study of such axiomatic characterizations of semantic equivalences (or preorders) falls therefore within the realm of (in)equational
logic, whose basics we now proceed to present.
Equational Logic. An equational axiom system is a collection E of equations
t ≈ u and an inequational axiom system is a collection of inequations t 4 u,
both over the language 𝕋(Σ). The (in)equations in E are often referred to as
axioms. An equation t ≈ u is derivable from an equational axiom system E,
notation E ⊢ t ≈ u, if it can be proven from the axioms in E using the rules of
equational logic (viz. reflexivity, symmetry, transitivity, substitution and closure
under Σ-contexts), as in Tab. 2.1.
reflexivity:     t ≈ t
symmetry:        from t ≈ u infer u ≈ t
transitivity:    from t ≈ u and u ≈ v infer t ≈ v
substitution:    from t ≈ u infer σ(t) ≈ σ(u)
Σ-context:       from ti ≈ ui (1 ≤ i ≤ n) infer f(t1, . . . , tn) ≈ f(u1, . . . , un)
Table 2.1: Equational logic
The first three rules above state that ≈ is an equivalence relation, whereas
the latter two state that ≈ is closed under substitutions, and is a congruence.
For the derivation of an inequation t 4 u from an inequational axiom system
E, denoted by E ⊢ t 4 u, the second rule, for symmetry, is omitted. Below, for
simplicity, we only discuss the equational case; the inequational case is defined analogously.
Formally, a proof of an equation t ≈ u from E is a sequence ti ≈ ui (1 ≤ i ≤
n) of equations such that
• tn = t and un = u, and
• for each 1 ≤ i ≤ n, the equation ti ≈ ui is obtained by applying one of
the aforementioned inference rules using equations in E or some of the
equations that precede it in the sequence as premises.
Without loss of generality one may assume that the substitution rule is only
applied to axioms, i.e., that the rule deriving σ(t) ≈ σ(u) from t ≈ u may only be used when (t ≈ u) ∈ E. In this case, the equation σ(t) ≈ σ(u) is
called a substitution instance of an axiom in E.
Moreover, by postulating that for each axiom in E also its symmetric counterpart is present in E, one may assume that there are no applications of the
symmetry rule in equational proofs. It is also well-known (see, e.g., Section 2
in [Gro90]) that if an equation relating two closed terms can be proven from an
axiom system E, then there is a closed proof for it. These facts can be used to
simplify proofs by induction on equational derivations. Let E ′ be the collection
of equations that consists of all substitution instances of the axioms in E and
their symmetric variants, i.e.,
E ′ = {ρ(t) ≈ ρ(u) | (t ≈ u) ∈ E or (u ≈ t) ∈ E, ρ a substitution} .
By a normalized derivation of an equation t ≈ u from E we shall henceforth
mean a derivation of the equation t ≈ u from E ′ by means of the rules of
equational logic but not using the symmetry and substitution rules. Now if
E ⊢ t ≈ u, then there exists a normalized derivation of t ≈ u from E.
Definition 2.1.7 (Soundness) Let A be a Σ-algebra. An equation t ≈ u is
sound with respect to =A iff t =A u. An axiom system is sound with respect to
=A iff so is each of its equations.
The collection of all equations that are sound with respect to =A is called
the equational theory of A.
In other words, an axiom system is sound with respect to =A if it can only
be used to prove equations that are valid in the algebra A. This is, of course,
a most natural requirement on an axiom system. However, ideally an axiom
system should also allow us to prove all of the equations that hold in a given
algebra. This is captured by the technical requirement of completeness.
Definition 2.1.8 (Completeness) Let A be a Σ-algebra. An axiom system
E is ground-complete with respect to =A iff E ⊢ t ≈ u whenever t =A u, for all
closed terms t, u.
E is complete with respect to =A iff E ⊢ t ≈ u whenever t =A u, for all
(open) terms t, u.
Definition 2.1.9 (Equational bases, finitely based algebras) An equational basis for an algebra A is a sound axiom system E that is complete with
respect to =A . We say that an algebra A is finitely based if it has a finite
equational basis.
The notion of completeness of an axiom system relates the proof-theoretic notion
of derivability using the rules of equational logic with the model-theoretic one
of “validity in a model”. From a proof-theoretic perspective, a useful property
of an axiom system E is that, for all terms t, u ∈ 𝕋(Σ),
E ⊢ t ≈ u iff E ⊢ σ(t) ≈ σ(u), for each closed substitution σ.     (2.1)
An axiom system with the above property is called ω-complete. In theorem
proving applications, it is convenient if an axiomatization is ω-complete, because this means that proofs by (structural) induction can be avoided in favor
of purely equational reasoning; see [LLT90]. In [Hee86] it was argued that ω-completeness is desirable for the partial evaluation of programs. In fact, suppose
that σ(t) ≈ σ(u) is provable from an axiom system E, for each closed substitution σ. If E is ω-complete, then we know that an equational proof of the actual
equation t ≈ u from E exists. However, in general the equation t ≈ u might not
be derivable from E if E is just ground-complete. In that case, we might have
to content ourselves with showing that all closed instantiations of that equation
are derivable from E, and this is usually done by induction on the structure of
the closed terms that can be substituted for the variables occurring in t and u.
In the computer science community, notable examples of ω-incomplete axiomatizations in the literature are the λKβη-calculus [Plo74] and the equational
theory of CCS [Mol90a]. Therefore, laws such as commutativity of parallelism,
which are valid in the initial model but which cannot be derived, are often
added to the latter equational theory. For such extended equational theories,
ω-completeness results were presented in the setting of CCS [Mol89, AFIL09]
and ACP [FL00]. Moreover, [Mil84, Mil89b] presented pioneering work regarding ω-complete axiomatization (although not equational) for CCS.
It turns out that completeness and ω-completeness are closely related properties of an axiom system. Indeed, assume that A is a Σ-algebra each of whose
elements is denotable. Suppose that E is sound and complete with respect to
=A . It is not hard to argue that, in this case, E is also ω-complete. Moreover,
consider the Σ-algebra obtained by quotienting the set of closed terms T(Σ)
with respect to the congruence relation that equates two closed terms t, u iff the
equation t ≈ u is provable from an axiom system E. We have that an equational
basis for that algebra is also ω-complete.
2.1.4 Process Algebras BCCSP and BCCS
BCCSP and BCCS2 are two universal algebras, specific to the realm of process
algebras where they play a fundamental role.
2 The name of BCCSP was coined by van Glabbeek [vG90], abbreviating Basic CCS and
CSP. Accordingly, BCCS abbreviates Basic CCS.
BCCSP is a common fragment of CCS and CSP for describing finite synchronization trees [Mil89a]. Its signature consists of the constant 0, the binary
operator + , and unary prefix operators a , where a ranges over a nonempty
set A of actions, called the alphabet, with typical elements a, b, c. In other words,
the language is given by the following grammar:
t ::= 0 | at | t + t | x .
Intuitively, closed BCCSP terms, denoted by p, q, r, represent finite process
behaviors, where 0 does not exhibit any behavior, p + q offers a choice between
the behaviors of p and q, and ap executes action a to transform into p. This
intuition is captured by the transition rules given in Tab. 2.2 (see [AFV01] for an
extensive introduction). They give rise to A-labeled transitions between closed
BCCSP terms, which are of the form (p, a, p′ ), where p, p′ are closed terms and
a ∈ A. Henceforth, as usual, we shall use the suggestive notation p −−a→ p′ in
lieu of (p, a, p′ ). A transition relation is a collection of A-labeled transitions. It
is well-known that the transition relation −−→ is the one defined by structural
induction over closed terms using the above rules.
ax −a→ x          from x1 −a→ y infer x1 + x2 −a→ y          from x2 −a→ y infer x1 + x2 −a→ y
Table 2.2: SOS for BCCSP
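The rules of Tab. 2.2 translate directly into a structural recursion on closed terms. The following sketch is only an illustration, not code from the thesis; the tuple encoding of BCCSP terms is an assumption made for the example.

```python
# Closed BCCSP terms: ('0',), ('prefix', a, t) or ('choice', t, u).
def transitions(p):
    """All pairs (a, p') with p -a-> p', computed by structural recursion over p."""
    tag = p[0]
    if tag == '0':
        return set()
    if tag == 'prefix':                 # ax -a-> x
        _, a, x = p
        return {(a, x)}
    if tag == 'choice':                 # x1 -a-> y implies x1 + x2 -a-> y (and symmetrically)
        _, x1, x2 = p
        return transitions(x1) | transitions(x2)
    raise ValueError('not a closed BCCSP term')

# Example: a0 + b0 has exactly the transitions -a-> 0 and -b-> 0.
zero = ('0',)
p = ('choice', ('prefix', 'a', zero), ('prefix', 'b', zero))
assert transitions(p) == {('a', zero), ('b', zero)}
```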
We also assume a countably infinite set V of variables; x, y, z denote elements
of V , and X, Y, Z denote finite subsets of V . Open BCCSP terms, which may
contain variables from V , are denoted by t, u, v, w. Following the convention of
the previous section, we use T(BCCSP) and 𝕋(BCCSP) to denote the collection
of closed and open BCCSP terms respectively.
In order to study weak semantics, BCCSP can be extended with the unary
prefix operator τ , which constitutes BCCS. In other words, its signature includes, in addition to BCCSP, the unary prefix operator τ and the language is
thus given by the following grammar:
t ::= 0 | αt | t + t | x .
Closed BCCS terms are ranged over by p, q, r as well and share the same intuition
as BCCSP terms. In particular, τ p executes action τ to transform into p. The
operational semantics can be specified smoothly in Tab. 2.3, which gives rise
to Aτ -labeled transitions between closed BCCS terms. Accordingly, we use
T(BCCS) and 𝕋(BCCS) to denote the collection of closed and open BCCS terms
respectively.
It is technically convenient to extend the operational semantics to open
BCCS(P) terms. There are two options: either (1) we do not include additional rules for variables, which effectively means that they do not exhibit any
αx −α→ x          from x1 −α→ y infer x1 + x2 −α→ y          from x2 −α→ y infer x1 + x2 −α→ y
Table 2.3: SOS for BCCS
behavior; or (2) we treat each variable occurrence x in a term as if it were a
subterm x0 with x a concrete action. For open BCCSP (resp. BCCS) terms
t and u, and a preorder - (or equivalence ≃) on closed BCCSP (resp. BCCS)
terms, we define t - u (or t ≃ u) if ρ(t) - ρ(u) (resp. ρ(t) ≃ ρ(u)) for all closed
substitutions ρ. Note that one can alternatively define - (or ≃) directly over
open terms via (2), and it is not very difficult to verify that for BCCS(P), both
of these two ways yield the same preorder (or equivalence) for each semantics,
unless the underlying alphabet A = ∅, which is not interesting.
A context C[·] of BCCSP (resp. BCCS) is a closed BCCSP (resp. BCCS)
term with exactly one occurrence of a hole [·] in it. For every context C[·] and
closed term p, we write C[p] for the closed term that results by placing p in the
hole in C[·].
The preorders ≲ in the linear time – branching time spectrum I are all precongruences with respect to BCCSP, meaning that p1 ≲ q1 and p2 ≲ q2 imply p1 + p2 ≲ q1 + q2 and ap1 ≲ aq1 for a ∈ A. Recall that the kernel of a preorder ≲ is the equivalence ≃ = ≲ ∩ ≲⁻¹. Likewise, the equivalences in the spectrum are all congruences with respect to BCCSP.
Similarly, ⊑□ for □ ∈ {WT, WCT, WF, WIF} are precongruences with respect to BCCS. And the corresponding equivalences ≃□ are congruences for BCCS.
Remark 2.1.10 The requirements (2) in Def. 2.1.6 are called root conditions, which are routinely imposed for weak semantics to make sure that the preorder under consideration is a precongruence (see [Mil89a]). Moreover, the requirement (3) WT(s) = WT(s′) for the weak impossible futures preorder is indispensable, because otherwise that definition would fail to yield a precongruence for BCCS. For instance, in that case we would have τa0 ⊑WIF τa0 + b0. However, clearly c(τa0) ⋢WIF c(τa0 + b0).
The core axioms A1-4 [Mil89a] for BCCSP given in Tab. 2.4 are ω-complete,
and sound and ground-complete modulo bisimulation equivalence. Since every
equivalence in the linear time – branching time spectrum I (see Fig. 2.1) includes bisimulation equivalence, and each weak semantics is coarser than its
concrete counterpart, it follows that the axioms A1-4 are sound modulo every
equivalence in the spectrum. Furthermore, each of the axioms A1-4 induces two
inequations, obtained by replacing ≈ by 4 or its converse <, that are both sound modulo every preorder in the linear time – branching time spectrum.

A1   x + y ≈ y + x
A2   (x + y) + z ≈ x + (y + z)
A3   x + x ≈ x
A4   x + 0 ≈ x

Table 2.4: Core axioms A1-4

We write t = u if terms t and u are equal modulo A1-4, namely, the associativity, commutativity and idempotence of +, and the absorption of 0 summands. For every preorder
≲ and equivalence ≃ in the linear time – branching time spectrum, soundness of the axioms A1-4 ensures that whenever we write t = u, then also t ≲ u and
t ≃ u. Furthermore, we will (tacitly) assume that the axioms A1-4 above are
included in every axiomatization E considered in what follows, so that from
t = u we may always conclude t 4 u and t ≈ u.
Notions on process terms. The depth of a BCCSP term t, denoted by
depth(t), is the length of the longest trace that t can exhibit, i.e.,
depth(t) = max{k | ∃a1 · · · ak, t′. t −a1···ak→ t′}.
Alternatively it is defined inductively as follows: depth(0) = depth(x) = 0;
depth(at) = 1 + depth(t); and depth(t + u) = max{depth(t), depth(u)}.
Let k ≥ 0. If t −a1···ak→ t′ for some sequence of actions a1 · · · ak, and t′ has the variable x as a summand, then we say that x occurs in t at depth k. The set of variables with an occurrence in t at depth k will be denoted by var_k(t); the set of all variables with an occurrence in t will be denoted by var(t). Similarly, if t −a1···ak→ t′ for some sequence of actions a1 · · · ak, and the action a is an element of I(t′), then we say that a occurs in t at depth k. The set of actions with an occurrence in t at depth k will be denoted by act_k(t).
For BCCS terms, t −α→ denotes that there is a term u with t −α→ u, and likewise t ⇒−α→ denotes that there are terms u, v with t ⇒ u −α→ v. The depth of a term t, denoted by depth(t), is the length of the longest trace that t can exhibit, not counting τ-transitions. It is defined inductively as follows: depth(0) = depth(x) = 0; depth(at) = 1 + depth(t); depth(τt) = depth(t); and depth(t + u) = max{depth(t), depth(u)}.
Given any equation t ≈ u or inequation t 4 u, we define its depth as
max{depth(t), depth(u)}.
In general, let {t1, . . . , tn} be a finite set of terms; we use the summation Σ{t1, . . . , tn} to denote t1 + · · · + tn, adopting the convention that the summation of the empty set denotes 0. A term t is called a prefix if t = αt′ for some α ∈ Aτ and for some term t′. We write α^n t to denote the term obtained from t by prefixing it n times with α, i.e., α^0 t = t and α^(n+1) t = α(α^n t). When writing terms, we adopt as binding convention that + and summation bind weaker than the prefixing operators. With abuse of notation, we often let a finite set of variables X denote the term Σ_{x∈X} x. Note that, with the above notational conventions, for every BCCSP term t there always exist a unique finite family of actions {ai | i ∈ I} ⊆ A, terms {ti | i ∈ I}, and a finite set of variables X ⊆ V such that
t = Σ_{i∈I} ai ti + X ,
while every BCCS term t can be further written as
t = Σ_{i∈I} ai ti + Σ_{j∈J} τ tj + X ,
with a finite family of terms {tj | j ∈ J}. A term t is called a summand of u (notation: t ⋐ u) if it is a variable or a prefix and u = u + t.
2.1.5 Two Proof Techniques
We give a short introduction to two proof techniques that will be exploited
extensively in the first part of the dissertation. The first technique, called cover
equations, is especially designed for BCCSP, while the second technique, which
is based on proof theory, is more generally applicable.
Cover equations. This technique, which is introduced in [FN04], aims to
obtain an explicit description of the equational theory of BCCSP modulo some
equivalence.
The central idea is that if an equation t ≈ u is sound for BCCSP modulo
some equivalence in the linear time – branching time spectrum I, then u + t ≈ t
and t + u ≈ u are sound as well; and from the last two equations one can derive
t ≈ u. Therefore, to extend an axiomatization consisting of A1-4 to a complete
axiomatization of some equivalence in the linear time – branching time spectrum
I, it suffices to add sound equations of the form x + u ≈ u and at + u ≈ u; such
equations are called cover equations.
In order to further limit the form of the cover equations that need to be considered, one usually tries to establish the following properties for the equivalence
≃ at hand:
1. If at + u + bv ≃ u + bv with a ≠ b, then at + u ≃ u.
2. If t ≃ u, then t and u contain the same variables, at the same depths.
3. If t + x ≃ u + x, and x is not a summand of t + u, then t ≃ u. (To see that this side condition is needed, note that, in general, x + x ≃ 0 + x but x ≄ 0.)
If the properties above hold, then it suffices to only consider cover equations of the form at + au1 + · · · + aun ≈ au1 + · · · + aun.
As we will see in Chapter 4, the second property holds for all equivalences
finer than or as fine as trace equivalence, in case |A| > 1. The first and third
properties have to be proved for each equivalence separately. Proving the first
property is generally easy, but proving the third property can be a challenge
(see Section 4.3 and [AFI07]).
When the cover equations have been identified, one can proceed in two ways.
Either one can determine a finite basis among the cover equations, or one can
determine an infinite family of cover equations that obstructs a finite basis. We
will follow the latter approach in Section 4.5, considering only equations of depth
at most one, for congruences that are finer than or as fine as ready equivalence
and coarser than or as coarse as possible worlds equivalence. Moreover, the cover
equations technique turned out to be helpful in finding the infinite families of
equations that obstruct a finite basis in Chapter 4 and Chapter 5.
Proof-theoretic technique. To prove that no finite basis exists for an equivalence ≃, it suffices to provide an infinite family of equations tn ≈ un (n =
1, 2, 3, . . . ) that are all sound modulo ≃, and to associate with every finite set of
sound equations E a property PE that holds for all equations derivable from E,
but does not hold for at least one of the equations tn ≈ un . It then follows that
for every finite set of sound equations E there exists a sound equation tn ≈ un
that is not derivable from E. It follows that every finite set of sound equations
is necessarily incomplete, and hence ≃ is not finitely based.
We shall apply this proof-theoretic technique in Sections 4.4 and 4.6–4.8, in Section 5.2.3 and Section 5.3 (with a straightforward adaptation for preorders), and in Section 6.5.1. In each case we proceed in the following three steps (usually in separate lemmas):
1. We provide an infinite family of sound equations tn ≈ un (n = 1, 2, 3, . . . )
and a suitable family of properties Pn (n = 1, 2, 3, . . . ) such that the
property Pn fails for all the equations ti ≈ ui with i ≥ n.
2. We establish that the property Pn holds for every substitution instance of
any sound equation t ≈ u with depth(t), depth(u) ≤ n.
3. We prove that Pn holds for every equation derivable from a collection E
of sound equations t ≈ u with depth(t), depth(u) ≤ n; the latter proof is
by induction on normalized derivations, using (2) for the base case.
2.2 Background for Part II
2.2.1 Preliminaries for Probability Theory
The development of Part II needs some basic concepts from measure theory, in
particular probability spaces and σ-algebras. Below we give a short summary.
For further details, one can consult, e.g. [AD00, Bil95]; [Pan01] offers a gentle
introduction for computer scientists, in particular for concurrency theorists.
Definition 2.2.1 A Borel space is a pair (O, C), where O is a nonempty set,
and C ⊆ 2^O is a σ-algebra associated with O. Namely, it is a set consisting of
subsets of O which contains the empty set ∅ and is closed under complementation
and countable union, i.e.,
• ∅ ∈ C;
• C ∈ C implies that the complement O \ C ∈ C; and
• C1, C2, · · · ∈ C implies that ∪_{i≥1} Ci ∈ C.
Occasionally, the set O is supposed to be fixed and C by itself is called a σ-algebra. The elements of O are often called outcomes, while the elements of C
are called events.
A probability measure on (O, C) is a function P : C → [0, 1] such that P(O) = 1,
and if (Cn )n≥1 is a family of pairwise disjoint events Cn ∈ C, then:
P(∪_{n≥1} Cn) = Σ_{n≥1} P(Cn) .
A probability space is a Borel space equipped with a probability measure (thus it is a special case of a measure space), i.e., it is a triple (O, C, P) where (O, C) is a Borel space and P is a probability measure on (O, C). For an event E ∈ C, the
value P(E) is called the probability measure of E, or simply the probability of
E. In the context of probability measures, the events (i.e., the elements of C)
are often said to be measurable.
Given a countable set S, we denote the set of (finite) discrete probability distributions over S by Distr(S). Namely, each µ ∈ Distr(S) is a function µ : S → [0, 1] such that Σ_{s∈S} µ(s) = 1 and the support of µ, Supp(µ) = {s ∈ S | µ(s) > 0}, is finite. The Dirac distribution µ¹_s is the discrete probability distribution such that µ¹_s(s) = 1 and µ¹_s(t) = 0 for t ≠ s.
2.2.2 Discrete-time Markov Chains
Discrete-time Markov chains (DTMCs) behave as transition systems with the
only difference that nondeterministic choices among successor states are replaced by probabilistic ones. That is to say, the successor state of a state s
is chosen according to a probability distribution. This probability distribution
only depends on the current state s, and not on, e.g., the path fragment that
led to state s from some initial state. Accordingly, the system evolution does
not depend on the history (i.e., the path fragment that has been executed so
far), but only on the current state s. This is known as the memoryless property.
We refer the readers to, among others, [KSK76] as a standard textbook.
Let AP be a fixed, finite set of atomic propositions on states ranged over by
a, b, c, . . . .
Definition 2.2.2 (DTMCs) A (labeled) discrete-time Markov chain (DTMC)
is a tuple D = (S, AP, L, α, P), where:
• S is a (countable) set of states;
• L : S → 2^AP is a labeling function which assigns to each state s ∈ S the set L(s) of atomic propositions that are valid in s;
• α ∈ Distr(S) is the initial distribution; and
• P : S × S → [0, 1] is a stochastic matrix, i.e., for each state s ∈ S, Σ_{t∈S} P(s, t) = 1.
D is called finite if S and AP are finite.
Intuitively, a DTMC is a Kripke structure in which all transitions are
equipped with discrete probabilities such that the sum of outgoing transitions of
each state equals one. A state s in D is called absorbing if P(s, s) = 1. Without
loss of generality, we often assume a DTMC to have a unique initial state, i.e.
the initial distribution α is a Dirac distribution.
Definition 2.2.3 (Paths) Let D = (S, AP, L, α, P) be a DTMC.
• An infinite path ρ in D is an infinite sequence s0 ·s1 ·s2 · · · of states such
that ∀i ≥ 0. P(si , si+1 ) > 0.
• A finite path σ is a prefix of an infinite path.
Let Paths^ω_D(s) denote the set of all infinite paths in D that start in state s and Paths^⋆_D(s) denote the set of all finite paths of s. The subscript D is omitted
when it is clear from the context. For state s and finite path σ = s0 · · · sn with
P(sn , s) > 0, let σ·s denote the path obtained by extending σ by s.
Let ρ denote either a finite or infinite path. Let |ρ| denote the length of
ρ, i.e., |s0 ·s1 · · · sn | = n, |s0 | = 0 and |ρ| = ∞ for infinite ρ. For finite ρ
and 0 ≤ i ≤ |ρ|, ρ[i] denotes the (i+1)-st state in ρ. For i ≤ |ρ|, we use ρ↓i
to denote the prefix of ρ truncated at length i (thus ending in si ), formally,
ρ↓i = ρ[0]·ρ[1] · · · ρ[i]. We use Pref (ρ) to denote the set of prefixes of ρ, i.e.,
Pref (ρ) = {ρ↓i | 0 ≤ i ≤ |ρ|}.
Cylinder set. In order to be able to associate probabilities to events in
DTMCs, the intuitive notion of probabilities in DTMC D is formalized by
associating a probability space with D. The set O of outcomes consists of the
infinite paths of D. To define an appropriate σ-algebra for D, we will use the
fact that for each σ-algebra (O, C) and each subset Π of 2^O, there exists a smallest σ-algebra that contains Π. This is due to the facts that (1) 2^O itself is a σ-algebra; and (2) the intersection of σ-algebras is a σ-algebra. Thus the intersection C_Π = ∩_C C, where C ranges over all σ-algebras on O that contain Π, is a σ-algebra and is contained in any σ-algebra C such that Π ⊆ C. C_Π is called the σ-algebra generated by Π, and Π is the basis for C_Π. The σ-algebra associated with D is generated by the cylinder sets (a.k.a. basic cylinder sets)
spanned by the finite path fragments in D.
Definition 2.2.4 (Cylinder set) For ρ ∈ Paths^⋆, the cylinder set of ρ is defined as
∆(ρ) = {π ∈ Paths^ω | ρ ∈ Pref(π)} .
Namely, this cylinder set (spanned by finite path ρ) consists of all infinite paths
that start with ρ.
We are now in a position to define the probability space induced by a DTMC
D. The underlying σ-algebra is defined as the smallest σ-algebra that contains
all the cylinder sets induced by the finite paths starting in the initial state
s0 . It follows from classical concepts of probability theory (see e.g. [AD00]),
Carathéodory’s extension theorem in particular, that there exists a unique probability measure Pr^D_{s0} (or, briefly, Pr) on the σ-algebra associated with D where
the probabilities for the cylinder sets are given by:
Pr({ρ ∈ Paths^ω_D(s0) | ρ↓n = s0 · · · sn}) = Π_{0≤i<n} P(si, si+1) ,
where the set on the left-hand side is exactly the cylinder set ∆(s0 · · · sn).
The probability of a finite path σ = s0 · · · sn is defined as P(σ) = Π_{0≤i<n} P(si, si+1). Note that although Pr(∆(σ)) = P(σ), they have different meanings: Pr is a probability measure on infinite paths whereas P refers to finite ones. For a set C of finite paths which is prefix containment free, i.e., for any σ, σ′ ∈ C with σ ≠ σ′, σ ∉ Pref(σ′), the probability of C is P(C) = Σ_{σ∈C} P(σ), and paths in C induce disjoint cylinder sets.
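As a small illustration of these definitions (the chain below, with its four states, is invented for the example and is not taken from the thesis), the probability of a finite path and of a prefix-containment-free set of finite paths can be computed as follows.

```python
# Transition probabilities of a hypothetical DTMC, stored sparsely as a dictionary.
P = {('s0', 's1'): 0.5, ('s0', 's2'): 0.5, ('s1', 's1'): 1.0, ('s2', 's2'): 1.0}

def path_prob(sigma):
    """P(sigma): the product of the one-step probabilities along the finite path sigma."""
    prob = 1.0
    for s, t in zip(sigma, sigma[1:]):
        prob *= P.get((s, t), 0.0)
    return prob

def set_prob(paths):
    """For a prefix-containment-free set C, P(C) is the sum of the path probabilities,
    since the corresponding cylinder sets are pairwise disjoint."""
    return sum(path_prob(sigma) for sigma in paths)

print(path_prob(('s0', 's1', 's1')))           # 0.5
print(set_prob({('s0', 's1'), ('s0', 's2')}))  # 1.0
```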
2.2.3 Timed Automata
Throughout the second part of the dissertation, we use N, Q≥0 and R≥0
to denote the sets of naturals, nonnegative rationals and nonnegative reals,
respectively.
Clocks. We consider a finite set 𝒳 of real-valued clocks. A clock valuation over 𝒳 is a function ν : 𝒳 → R≥0 which assigns to each clock a time value in R≥0. We write R≥0^𝒳 for the set of valuations over 𝒳. For every ν ∈ R≥0^𝒳, the valuation ν + t is defined as (ν + t)(x) = ν(x) + t for any x ∈ 𝒳. For X ⊆ 𝒳, we write ν[X := 0] for the valuation which maps each clock in X to the value 0 and agrees with ν over 𝒳 \ X.
Clock constraints. We use B(𝒳) to denote the set of formulae defined by the grammar
g ::= x ⊲⊳ c | x − y ⊲⊳ c | g ∧ g ,
where x, y ∈ 𝒳, ⊲⊳ ∈ {≤, <, =, ≥, >} and c ∈ N. The elements of B(𝒳) are called clock constraints. An atomic constraint does not contain any conjunctions. We
also consider (in Chapter 7) a proper subset of diagonal-free clock constraints,
where constraints of the form x − y ⊲⊳ c are not allowed. This restricted set of
constraints is called diagonal-free because constraints of the form x − y ⊲⊳ c are
called diagonal clock constraints. We note that TA with diagonal constraints
are not more expressive than diagonal-free TA [BPDG98], but they might incur
an exponential blowup [BC05] and require a very careful treatment [Bou04].
Clock constraints are interpreted over valuations for 𝒳. For a clock valuation ν and a clock constraint g, we write ν |= g, meaning ν satisfies g, which is defined by: ν |= x ⊲⊳ c if ν(x) ⊲⊳ c, ν |= x − y ⊲⊳ c if ν(x) − ν(y) ⊲⊳ c, and ν |= g1 ∧ g2 if ν |= g1 and ν |= g2. We also write ⟦g⟧ = {ν ∈ R≥0^𝒳 | ν |= g}, i.e., ⟦g⟧ is the set of valuations satisfying g.
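A minimal sketch of clock valuations and the satisfaction relation ν |= g; the tuple encoding of constraints is an assumption made only for this illustration, not notation from the thesis.

```python
import operator

OPS = {'<': operator.lt, '<=': operator.le, '=': operator.eq,
       '>=': operator.ge, '>': operator.gt}

def delay(nu, t):
    """The valuation nu + t."""
    return {x: v + t for x, v in nu.items()}

def reset(nu, X):
    """The valuation nu[X := 0]."""
    return {x: (0.0 if x in X else v) for x, v in nu.items()}

def satisfies(nu, g):
    """nu |= g for g of the forms ('clk', x, op, c), ('diff', x, y, op, c), ('and', g1, g2)."""
    if g[0] == 'clk':
        _, x, op, c = g
        return OPS[op](nu[x], c)
    if g[0] == 'diff':
        _, x, y, op, c = g
        return OPS[op](nu[x] - nu[y], c)
    _, g1, g2 = g                       # conjunction
    return satisfies(nu, g1) and satisfies(nu, g2)

nu = {'x': 0.0, 'y': 0.0}
g = ('and', ('clk', 'x', '<=', 2), ('diff', 'y', 'x', '<', 1))
print(satisfies(delay(nu, 1.5), g))     # True: x = y = 1.5, so x <= 2 and y - x = 0 < 1
```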
An 𝒳-hyperplane is a set of valuations satisfying an atomic constraint. The class of H𝒳-polyhedra is defined as the smallest subset of 2^(R≥0^𝒳) which contains all 𝒳-hyperplanes and is closed under set union, intersection, and complement. In particular, we remark that polyhedra are a geometrical term for zones [HNSY94]. Intersection (∩), union (∪) and complement are well-defined operations on polyhedra. Given a polyhedron Z and a subset of clocks X ⊆ 𝒳, the operation Z[X := 0] is defined as {ν ∈ R≥0^𝒳 | ν[X := 0] ∈ Z}.
Definition 2.2.5 (Timed automata) A timed automaton (TA) is a tuple A = (Σ, 𝒳, Q, Q0, QF, →, Inv) where:
• Σ is a finite alphabet;
• 𝒳 is a finite set of clocks;
• Q is a finite set of locations;
• Q0 ⊆ Q is a nonempty set of initial locations;
• QF ⊆ Q is a set of accepting locations;
• → ⊆ Q × Σ × B(𝒳) × 2^𝒳 × Q is an edge relation; and
• Inv : Q → B(𝒳) is an invariant-assignment function.
We refer to q −a,g,X→ q′ as a transition, where a ∈ Σ is the input symbol, the guard g is a clock constraint on the clocks of A, X ⊆ 𝒳 is a set of clocks to be reset, and q′ is the successor location. The intuition is that the timed automaton A can move from location q to location q′ when the input symbol is a and the guard g holds, while the clocks in X should be reset when entering q′.
Operational semantics. The semantics of a timed automaton is defined by associating with it an (infinite-state) (Σ ∪ R≥0)-labeled transition system (a.k.a. timed transition system, or TTS) whose state space is {(q, ν) | q ∈ Q and ν ∈ ⟦Inv(q)⟧}, and there are two types of transitions:
• Elapse of time: (q, ν) −d→ (q, ν + d) if ν ∈ ⟦Inv(q)⟧ and ν + d ∈ ⟦Inv(q)⟧ (note that it follows that ν + d′ ∈ ⟦Inv(q)⟧ for any 0 ≤ d′ ≤ d); and
• Location switch: (q, ν) −a→ (q′, ν′) if there is an edge q −a,g,X→ q′ such that ν ∈ ⟦g⟧, ν′ = ν[X := 0] and ν′ ∈ ⟦Inv(q′)⟧.
In general, we assume that an initial state of A is of the form (q0, ~0), where q0 ∈ Q0 and ~0 denotes the clock valuation that assigns 0 to all clocks in 𝒳.
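The two transition types of the TTS can be sketched as follows. This is a rough, self-contained illustration of the semantics on a hypothetical one-edge automaton; guards and invariants are represented directly as Python predicates rather than as B(𝒳) constraints, which is an assumption made only for the sketch.

```python
edges = [
    # (source, input symbol, guard, clocks to reset, target)
    ('q0', 'a', lambda nu: nu['x'] >= 1, {'x'}, 'q1'),
]
invariant = {'q0': lambda nu: nu['x'] <= 2, 'q1': lambda nu: nu['x'] <= 5}

def elapse(state, d):
    """(q, nu) -d-> (q, nu + d), allowed only if the invariant of q still holds afterwards."""
    q, nu = state
    nu2 = {x: v + d for x, v in nu.items()}
    return (q, nu2) if invariant[q](nu2) else None

def switch(state, a):
    """All (q', nu') with (q, nu) -a-> (q', nu') via some enabled edge."""
    q, nu = state
    result = []
    for (q1, sym, guard, resets, q2) in edges:
        if q1 == q and sym == a and guard(nu):
            nu2 = {x: (0.0 if x in resets else v) for x, v in nu.items()}
            if invariant[q2](nu2):
                result.append((q2, nu2))
    return result

s0 = ('q0', {'x': 0.0})
s1 = elapse(s0, 1.2)            # ('q0', {'x': 1.2})
print(switch(s1, 'a'))          # [('q1', {'x': 0.0})]: the guard holds and x is reset
```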
Part I
Axiomatizability of Process
Algebras
Chapter 3
Meta-theories for Axiomatizability
3.1 Introduction
This chapter is mainly devoted to two meta-theorems regarding the axiomatizability of BCCSP and BCCS. The general motivation is that we intend to
obtain new axiomatizability results from existing ones. The first one concerns
a relationship between a preorder and its corresponding equivalence; the second one concerns a relationship between a concrete semantics and its corresponding weak semantics. More detailed introductions are in order.
Preorder versus equivalence. As said, the lack of consensus on what constitutes an appropriate notion of observable behavior for reactive systems has
led to a large number of proposals for behavioral semantics for concurrent processes. These have been surveyed in the linear time-branching time spectrum
I for concrete semantics [vG01], and in spectrum II for weak semantics that
take into account the internal action τ [vG93b]. Typically, a given semantical
notion induces both a preorder and an equivalence, where the equivalence is the
kernel of the corresponding preorder, meaning that two processes are considered
equivalent if, and only if, each is a refinement of the other with respect to the
preorder.
In the literature, positive and negative axiomatizability results were always
proved separately for a preorder and the corresponding equivalence. Recently,
Aceto, Fokkink and Ingólfsdóttir [AFI07] showed that for BCCSP such double
effort can be avoided, by presenting an algorithm to turn a sound and ground-complete axiomatization of any preorder in the linear time – branching time
spectrum I which is at least as coarse as the ready simulation preorder, into a
sound and ground-complete axiomatization of the corresponding equivalence.1
Moreover, if the former axiomatization is ω-complete, so is the latter. The requirement that the preorder is at least as coarse as ready simulation is essential;
as we shall see in Chapter 5, for impossible futures semantics (which does not
1 Another way to avoid the double effort is by deriving axiomatizations of preorders from
those of the corresponding equivalences. This line of research is explored in [dFG09].
satisfy this requirement), there is a finite, ground-complete axiomatization for
the preorder, but not for the equivalence.
A serious drawback of the work reported in [AFI07] is that their algorithm
requires several properties to hold for the preorders to which it is applied, which
have to be checked for each preorder separately. Especially their variable cancellation property is usually rather hard to prove, see [AFI08]. Subsequently,
de Frutos Escrig, Gregorio Rodrı́guez and Palomino [dFGP08b, dFGP08a] improved upon this result, so that the algorithm is applicable not only to those
preorders specifically mentioned in the “linear time – branching time spectrum
I” but to any preorder at least as coarse as the ready simulation preorder, provided it is initials preserving, meaning that a preorder relation (p ⊑ q) implies
inclusion of initial action sets (Iτ (p) ⊆ Iτ (q), cf. Def. 3.2.1.) This condition is
needed to guarantee soundness of the generated axiomatization.
The first meta-theorem presented in this chapter stems from an effort to
apply the algorithm of Aceto et al. to weak semantics, which take into account
the hidden action τ . The results in [dFGP08b, dFGP08a] do not suffice in this
setting, because weak semantics tend to violate the initials preserving condition.
So a new round of generalization is needed. To this end, we show that in the
setting of BCCS, the algorithm originally proposed in [AFI07] applies equally
well to weak semantics; the proviso of initials preserving can be replaced by
other conditions. We give three sufficient conditions on the preorder ⊑ and its
corresponding equivalence ≡:
1. p ≡ τp for all closed terms p;
2. p ≡ τp for all p with Iτ(p) ≠ ∅, and p ⊑ q with Iτ(p) ≠ ∅ implies Iτ(q) ≠ ∅;
3. τp ≡ τp + p for all closed terms p, and ⊑ is weak initials preserving, meaning that a preorder relation (p ⊑ q) implies inclusion of weakly initial action sets (Îτ(p) ⊆ Îτ(q), cf. Def. 3.2.1).
This makes the algorithm applicable to all 87 preorders surveyed in the “linear
time – branching time spectrum II” [vG93b] that are coarser than the (concrete)
ready simulation preorder. That is, each of these preorders satisfies either the
original initials preserving condition from [dFGP08b, dFGP08a], or one of our
three new conditions.
Moreover, we extend the scope of the algorithm to infinite processes, by
adding recursion constants to BCCS. As an application of both extensions, we
provide a ground-complete axiomatization of the CSP failures equivalence, also
known as must-testing equivalence, for BCCS processes with divergence.
Concrete versus weak. In process algebras, the dichotomy of concrete and
weak semantics is ubiquitous. Typically, a given semantical notion gives rise
to both a concrete and a weak version, where the weak version takes a special
treatment of the hidden action τ , which is deemed unobservable. The general
idea is to abstract away from internal details of system descriptions. It is well-recognized that to obtain a ground-complete axiomatization for weak semantics
is usually much more difficult than for the concrete semantics. In the literature,
this is usually done by adding τ -laws to the existing ground-complete axiomatization for the concrete version. A typical example given by [HM85] is the three
τ -laws which lift the ground-complete axiomatization from concrete bisimulation equivalence to weak bisimulation equivalence (observational congruence).
Here we establish a general link between the axiomatizability of concrete
and weak semantics partially. Namely for any semantics which is not finer than
failures or impossible futures semantics (cf. Fig. 2.1), we provide an algorithm
to turn a sound and ground-complete axiomatization of the concrete version
into a sound and ground-complete axiomatization of the corresponding weak
version. Moreover, if the former axiomatization is ω-complete, so is the latter.
This result suggests that for positive axiomatizability results, a stronger result
is obtained by considering the concrete semantics. On the other hand, negative
axiomatizability results become more general if they are proved for the weak
counterpart.
As an application of this algorithm, in particular, we derive a ground- and
ω-complete axiomatization for the weak failures preorder. This preorder plays a
prominent role in the process algebra CSP [BHR84]. Moreover, for convergent
processes, it coincides with testing semantics [NH84, RV07], and thus its equivalence counterpart is the coarsest congruence for the CCS parallel composition
that respects deadlock behavior. Note that a ground-complete axiomatization
for the weak failures equivalence has appeared in [vG97]. As further applications, we derive complete axiomatizations for weak completed trace and trace
semantics. We also mention that in Chapter 5, the link established here will be
applied to impossible futures semantics extensively.
In the end, we consider the third meta-theorem, which employs the inverted
substitution technique to show that an axiomatization is ω-complete. This technique was originally developed by Groote in [Gro90] for equivalences; here we
adapt it in such a way that it is applicable to preorders, which will be exploited
in later chapters.
Structure of the chapter. Section 3.2 contains the first meta-theorem regarding a preorder and its corresponding equivalence. Section 3.3 presents the
second meta-theorem regarding a concrete semantics and its corresponding weak semantics. Section 3.4 deals with the adaptation of the inverted substitution technique. Section 3.5 discusses related and future work.
3.2 From Preorder to Equivalence
We start from a definition on the initial actions of a process term.
Definition 3.2.1 (Initial actions) For any closed term p, the set Iτ(p) of strongly initial actions is Iτ(p) = {α ∈ Aτ | p −α→}, whereas the set Îτ(p) of weakly initial actions is Îτ(p) = {α ∈ Aτ | p ⇒−α→}.
A preorder ⊑ is (strong) initials preserving if p ⊑ q implies Iτ(p) ⊆ Iτ(q) for all p and q; it is weak initials preserving if p ⊑ q implies Îτ(p) ⊆ Îτ(q). With I(p) we denote Îτ(p) ∩ A, the weakly initial concrete actions of p (note that this is in accordance with Def. 2.1.1).
3.2.1 An Algorithm for Producing Equational Axiomatizations
In [AFI07], Aceto, Fokkink and Ingólfsdóttir presented an algorithm that takes
as input a sound and ground-complete inequational axiomatization E for BCCSP
modulo a precongruence in the linear time – branching time spectrum I that
contains the ready simulation preorder, and generates as output an equational
axiomatization A(E) which is sound and ground-complete for BCCSP modulo
the corresponding congruence. Moreover, if the original axiomatization E is
ω-complete, so is the resulting axiomatization.
The results of [AFI07] were obtained for the language BCCSP. Naturally,
these results generalize smoothly to BCCS by treating τ just like any concrete
action from A. Preorders or equivalences that do so are called concrete (or
strong). Below we rephrase the algorithm in the context of BCCS, in order to
smoothen the presentation. The axiomatization A(E) generated by the algorithm from E contains the axioms A1-4 as well as the axioms:
RS≡
α(βx + z) + α(βx + βy + z) ≈ α(βx + βy + z)
for α, β ∈ Aτ , that are valid in ready simulation semantics, together with the
following equations, for each inequational axiom t 4 u in E:
(1) t + u ≈ u; and
(2) α(t + x) + α(u + x) ≈ α(u + x) (for each α ∈ Aτ , and some variable x that
does not occur in t + u).
Instead of explicitly adding the axioms RS≡ one can equivalently add the axioms
RS     βx 4 βx + βy     (for β ∈ Aτ)
to E prior to invoking steps (1) and (2) above. Moreover, as observed in
[dFGP08b], the conversion from E to A(E) can be factored into two steps:
• Given an inequational axiomatization E, its BCCS-context closure Ē is
Ē = E ∪ {α(t + x) 4 α(u + x) | α ∈ Aτ ∧ t 4 u ∈ E} ∪ {RS} ,
where x is a variable not occurring in E.
• Now A(E) = {t + u ≈ u | t 4 u ∈ Ē} ∪ (A1-4).
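The two-step construction can be phrased operationally. The sketch below is only an illustration under simplifying assumptions — (in)equations are kept as pairs of term strings, Aτ is a small invented alphabet, and 'x_fresh' stands for a variable not occurring in E — and is not the thesis' own formalization of the algorithm.

```python
A_TAU = ['a', 'b', 'tau']          # a hypothetical finite choice of prefixes from A_tau

def context_closure(E):
    """Step 1: close E under the contexts alpha(. + x_fresh) and add the RS inequations."""
    closed = set(E)
    closed |= {('%s(%s + x_fresh)' % (al, t), '%s(%s + x_fresh)' % (al, u))
               for (t, u) in E for al in A_TAU}
    closed |= {('%sx' % al, '%sx + %sy' % (al, al)) for al in A_TAU}   # the RS axioms
    return closed

def A(E):
    """Step 2: turn every inequation t <= u of the closure into the equation t + u = u,
    and add the core axioms A1-4."""
    core = {'x + y = y + x', '(x + y) + z = x + (y + z)', 'x + x = x', 'x + 0 = x'}
    return core | {'%s + %s = %s' % (t, u, u) for (t, u) in context_closure(E)}

# For instance, from the single (hypothetical) axiom ax <= ax + ay one obtains, among
# others, the equations  ax + (ax + ay) = ax + ay  and
# a(ax + x_fresh) + a((ax + ay) + x_fresh) = a((ax + ay) + x_fresh).
for eq in sorted(A({('ax', 'ax + ay')})):
    print(eq)
```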
In [AFI07] the correctness of this algorithm was shown for all precongruences
listed in the linear time – branching time spectrum I that are included between
trace inclusion and the ready simulation preorder. The proof contained a few
arguments that had to be checked for each of these preorders separately. Subsequently, in de Frutos Escrig, Gregorio and Palomino [dFGP08b] the following
more general result was obtained:
Theorem 3.2.2 Let ⊑ be an initials preserving precongruence that contains
the ready simulation preorder ≲RS, and let E be a sound and ground-complete
axiomatization of ⊑. Then A(E) is a sound and ground-complete axiomatization
of the kernel of ⊑. Moreover, if E is ω-complete, then so is A(E).
As all preorders in the linear time – branching time spectrum I of [vG01] between
trace inclusion and ready simulation are initials preserving, the above theorem
strengthens the result of [AFI07].
3.2.2 Correctness Proof of the Algorithm
Below we recreate the proof of Thm. 3.2.2. Lem. 3.2.3 and Prop. 3.2.4 constitute
the completeness argument, and are taken directly from [dFGP08b]. However,
the proofs below are significantly simpler – in the case of Lem. 3.2.3 employing
ideas from the completeness proof in [AFI07]. The essence of Lem. 3.2.5 and
its proof come from [dFGP08b] as well; this is the soundness argument. Our
rewording of Lem. 3.2.5 allows it to be reused in Sections 3.2.3, 3.2.4 on weak
semantics.
Lemma 3.2.3 Let E be an inequational axiomatization. Then for any t 4 u ∈ E
and any context C[·] we have A(E) ⊢ C(t) + C(u) ≈ C(u).
Proof: By structural induction on the context C[·].
In case of the trivial context C[·] = · we have to show A(E) ⊢ t + u ≈ u,
which follows immediately from step (1) in the construction of A(E).
For a context α( ·+v) we have to show A(E) ⊢ α(t+v)+α(u+v) ≈ α(u+v),
which follows from step (2) in the construction of A(E), substituting the term
v for the variable x.
Now let the result be obtained for a context D[·] and let C[·] be of the
form D[·] + v, where v is an arbitrary term, possibly 0. We have to show that
A(E) ⊢ D(t) + v + D(u) + v ≈ D(u) + v. This follows immediately from the
induction hypothesis.
Finally, let the result be obtained for a context βD[·] and let C[·] be of the
form α(βD[·] + v). We have to obtain
A(E) ⊢ α(βD(t) + v) + α(βD(u) + v) ≈ α(βD(u) + v) .
By the induction hypothesis we have A(E) ⊢ βD(t) + βD(u) ≈ βD(u), so it
suffices to obtain
A(E) ⊢ α(βD(t) + v) + α(βD(t) + βD(u) + v) ≈ α(βD(t) + βD(u) + v) .
This is an instance of the axiom RS≡ .
Proposition 3.2.4 Let E be an inequational axiomatization. Then whenever
E ⊢ t 4 u we also have A(E) ⊢ t + u ≈ u.
Proof: If E ⊢ t 4 u then there is a chain of terms t0 , . . . , tn for n ≥ 0 with t0 = t
and tn = u such that for 0 ≤ i < n the inequation ti 4 ti+1 is provable from
E by one application of an axiom. We now prove the claim by induction on n.
The case n = 0 is an instance of axiom A3, and the case n = 1 is an immediate
consequence of Lem. 3.2.3, by applying substitution.
Now for the general case, let v be ti for some 0 < i < n. By induction we
have A(E) ⊢ t + v ≈ v and A(E) ⊢ v + u ≈ u. Applying once again A3 this
yields A(E) ⊢ t + u ≈ t + v + u ≈ v + u ≈ u.
Lemma 3.2.5 Let ⊑ be a precongruence containing ≲RS and ≡ be its kernel. Let p, q be closed terms with p ⊑ q and Iτ(p) ⊆ Iτ(q). Then p + q ≡ q.
Proof: As p ⊑ q and ⊑ is a precongruence for choice, we have p + q ⊑ q + q ⊑ q. To show that q ⊑ p + q, let p ↔² Σ_{i∈I} αi pi and q ↔ Σ_{j∈J} βj qj. It is well known [vG01] that p ↔ q implies Iτ(p) = Iτ(q) as well as p ≲RS q and hence p ⊑ q. Writing p|β = Σ_{αi=β} αi pi for the collection of β-summands of p, and likewise q|β = Σ_{βj=β} βj qj, we have p ↔ Σ_{β∈Iτ(p)} p|β and q ↔ Σ_{β∈Iτ(q)} q|β. Using that Iτ(p) ⊆ Iτ(q), and that ⊑ is a precongruence for the choice operator +, it suffices to show that q|β ⊑ p|β + q|β for all β ∈ Aτ. This is an immediate consequence of the axiom RS, which is sound for ≲RS and hence for ⊑.
Proof of Thm. 3.2.2: As ⊑ is a precongruence containing the ready simulation preorder, all inequations in the BCCS-context closure Ē of E are sound w.r.t.
⊑. Considering that the soundness of an (in)equation is tantamount to the
soundness of its closed substitution instances, the soundness of A(E) now follows
from Lem. 3.2.5.
Ground-completeness and ω-completeness follow directly from Prop. 3.2.4:
If t ≡ u, that is t ⊑ u and u ⊑ t, we have E ⊢ t 4 u and E ⊢ u 4 t by the
completeness of E. So Prop. 3.2.4 yields A(E) ⊢ t ≈ t + u ≈ u.
3.2.3 Applying the Algorithm to Weak Semantics
As said, the results of [AFI07, dFGP08b] were obtained for the language BCCSP.
The main purpose of the present section is to apply the ideas from Section 3.2.2
to weak preorders: those that in some way abstract from internal activity, by
treating τ differently from concrete actions.
When reading Thm. 3.2.2 in the context of weak process semantics, it helps
to remember that ≲RS is the concrete ready simulation preorder, and “initials
preserving” refers to preservation of the strongly initial actions, where τ is considered as simply another concrete action. Thm. 3.2.2 directly applies to the
rooted variants of the η-simulation surveyed in [vG93b], for these preorders are
coarser than the concrete ready simulation preorder and strong initials preserving. However, most weak semantics are not strong initials preserving (for
² Recall that this denotes the concrete bisimulation equivalence as in Def. 2.1.3.
instance, typically τ x 4 x is sound), and consequently Thm. 3.2.2 fails to apply
to them.
The precondition of being initials preserving is in fact nowhere used in the
completeness proof in [dFGP08b], or its recreation in Section 3.2.2. Hence, this
condition applies to the soundness claim only. Therefore, in order to apply the
algorithm to weak semantics, all we need is to find another way of guaranteeing
the soundness of the generated axioms.
Given that we deal with preorders containing the ready simulation preorder,
the axiom RS≡ will always be sound. Moreover, the axioms generated by step
(2) in the construction of A(E) are guaranteed to be sound by Lem. 3.2.5, for
we have α(t + x) ⊑ α(u + x) and Iτ(α(t + x)) = Iτ(α(u + x)) = {α}. One way to
guarantee soundness of the remaining axioms, is to check this for each of them
explicitly:
Theorem 3.2.6 Let ⊑ be a precongruence that contains the ready simulation
preorder ≲RS, and let E be a sound and ground-complete axiomatization of
⊑, such that for each axiom t 4 u in E the law t + u ≈ u is sound as well.
Then A(E) is a sound and ground-complete axiomatization of the kernel of ⊑.
Moreover, if E is ω-complete, then so is A(E).
Note that for the axioms stemming from t 4 u with Iτ (σ(t)) ⊆ Iτ (σ(u)) for any
closed substitution σ, no check is needed, by Lem. 3.2.5. Next we present three
other conditions that guarantee soundness of A(E).
Theorem 3.2.7 Let ⊑ be a precongruence that contains the concrete ready
simulation preorder -RS , such that p ≡ τ p, with ≡ the kernel of ⊑, for all
processes p. Let E be a sound and ground-complete axiomatization of ⊑. Then
A(E) is a sound and ground-complete axiomatization of ≡. Moreover, if E is
ω-complete, then so is A(E).
Proof: It suffices to show that p ⊑ q implies p + q ≡ q. So assume p ⊑ q. Let
p′ := τ p and q ′ := τ q. By assumption we have p ≡ p′ and q ≡ q ′ , and therefore
p′ ⊑ q ′ . As Iτ (p′ ) = Iτ (q ′ ) = {τ }, Lem. 3.2.5 yields p′ + q ′ ≡ q ′ , which implies
p + q ≡ q.
Theorem 3.2.8 Let ⊑ be a precongruence that contains the concrete ready
simulation preorder -RS , such that p ≡ τ p, with ≡ the kernel of ⊑, for all
processes p with Iτ (p) 6= ∅, and such that p ⊑ q implies that if Iτ (p) 6= ∅ then
Iτ (q) 6= ∅. Let E be a sound and ground-complete axiomatization of ⊑. Then
A(E) is a sound and ground-complete axiomatization of ≡. Moreover, if E is
ω-complete, then so is A(E).
Proof: Again it suffices to show that p ⊑ q implies p + q ≡ q. So assume p ⊑ q.
If Iτ (p) = ∅ then trivially Iτ (p) ⊆ Iτ (q) and the result follows from Lem. 3.2.5.
Otherwise, we have p ≡ τ p and q ≡ τ q and the result follows as in the previous
42
Chapter 3 Meta-theories for Axiomatizability
proof.
Let Tau2 be the second τ -law of CCS [HM85, Mil89a]:
τx ≈ τx + x .
Tau2
Theorem 3.2.9 Let ⊑ be a weak initials preserving precongruence that contains the concrete ready simulation preorder -RS and satisfies Tau2, and let E
be a sound and ground-complete axiomatization of ⊑. Then A(E) is a sound
and ground-complete axiomatization of the kernel of ⊑. Moreover, if E is ωcomplete, then so is A(E).
Proof: A straightforward induction on the length of a path p ⇒ p′ , using the
α
soundness of Tau2, yields that if p ⇒ p′ −−
→ p′′ then p ≡ p + αp′′ , where ≡
is the kernel of ⊑. Hence for any closed term p there is a closed term p′ such
that p ≡ p′ and Iτ (p) = Iτ (p′ ). Using this, the soundness claim follows from
Lem. 3.2.5, reasoning as in the proof of Thm. 3.2.7.
τ
Note that Iτ (p) = I(p) ∪ {τ | p −−→} (see Def. 3.2.1). Thus, the precondition
τ
of Thm. 3.2.9 is that p ⊑ q implies that I(p) ⊆ I(q) and that if p −−
→ then
τ
q −−→.
So far, Thm. 3.2.6 applies to the widest selection of preorders, but it comes
with the need to check the soundness of some of the generated axioms separately. We can go even further in this direction by observing that also the
precondition of containing the ready simulation preorder is not used anywhere
in the completeness proof:
Theorem 3.2.10 Let ⊑ be any precongruence, and let E be a ground-complete
axiomatization of ⊑. Then A(E) is a ground-complete axiomatization of the
kernel of ⊑. Moreover, if E is ω-complete, then so is A(E).
Note that this theorem makes no statement on the soundness of A(E). Hence
an application of this theorem to achieve a sound and ground-complete axiomatization involves checking the soundness of all axioms generated by both step (1)
and step (2) of the algorithm explicitly, as well as the soundness of the axioms
A1-4 and RS≡ . As A1-4 and RS≡ constitute a sound and ground-complete axiomatization of concrete ready simulation equivalence, checking the soundness
of these axioms is naturally done by checking that the kernel of ⊑ contains
concrete ready simulation equivalence. As we shall illustrate in the next section, this is a meaningful improvement over the precondition of Thm. 3.2.6 that
⊑ contains the concrete ready simulation preorder. The price to be paid for
this improvement is that also the soundness of the axioms generated by step
(2) of the algorithm has to be checked separately. This is because the proof of
Lem. 3.2.5 uses that ⊑ contains the concrete ready simulation preorder.
In [vG93b] 155 weak preorders are reviewed. Most of them fail to be congruences for the choice operator of BCCS. Axiomatizations are typically proposed for the congruence closures of these preorders: the coarsest congruence
43
3.2 From Preorder to Equivalence
contained in them. All preorders ⊑ in [vG93b] and their congruence closures
satisfy the property that if p ⊑ q then I(p) ⊆ I(q).3
Of the 155 preorders surveyed in [vG93b], 87 contain the concrete ready
simulation preorder. We can partition this collection into four classes.
6 preorders are variants of trace inclusion and the simulation preorder. They
are precongruences for BCCS and satisfy the axiom x ≈ τ x. Consequently, they
fall in the scope of Thm. 3.2.7.
16 preorders are variants of completed trace inclusion or the completed simulation preorder. Each of their congruence closures ⊑ has the property that
p ⊑ q implies that if Iτ (p) 6= ∅ then I(q) 6= ∅. Moreover, the kernels ≡ of ⊑
have the property that p ≡ τ p for all processes p with Iτ (p) 6= ∅. Consequently,
these congruence closures fall in the scope of Thm. 3.2.8.
22 are variants of the η-simulation or the η-ready simulation. Their congruence closures are strong initials preserving, and hence fall under the scope of
Thm. 3.2.2.
The congruence closures ⊑ of the remaining 43 preorders satisfy the property
τ
τ
that p ⊑ q implies that if p −−
→ then q −−
→, and hence Iτ (p) ⊆ Iτ (q). These
precongruences therefore fall in the scope of Thm. 3.2.9.
Thus, the algorithm of [AFI07] applies to all congruence closures of preorders
in [vG93b] coarser than the ready simulation preorder.
Applications
In De Nicola and Hennessy [NH84] three testing preorders are defined, and for
each of them a sound and ground-complete axiomatization over BCCS is provided. In fact the axiomatizations apply to all of CCS, enriched with a special
constant Ω, and the semantics of processes involves, besides Aτ -labeled transitions, a convergence predicate. However, the completeness proofs remain valid
when restricting attention to the sublanguage BCCS, and there the convergence
predicate plays no role (for all processes are convergent). The combined mayand must-testing preorder is axiomatized by the laws A1-4 together with the
axioms
N1
N2
N3
N4
3 In
αx + αy
x + τy
αx + τ (αy + z)
τx
≈
4
≈
4
α(τ x + τ y)
τ (x + y)
τ (αx + αy + z)
x
fact, most preorders in [vG93b] are actually pairs of preorders, as for every semantics a
may and a must preorder are proposed. Inspired by [NH84], there are two differences between
the may and the must preorders. One is a different treatment of divergence – this has no effect
when restricting attention to BCCS processes. The other is that the preorders are oriented in
opposite directions. This entire paper, as well as [AFI07, dFGP08b], has been written from
the perspective of the may preorders. When dealing with must preorders ⊑ we have that
if p ⊑ q then I(p) ⊇ I(q). Moreover, none of these preorders contains -RS – at best their
inverses have this property. Consequently, for preorders oriented in the must direction, the
algorithm is to be applied in the reverse direction, where an inequational axiom t 4 u gives
rise to equational axioms like t ≈ t + u.
44
Chapter 3 Meta-theories for Axiomatizability
where α ranges over Aτ . The must preorder has the additional axiom
E1
τx + τy 4 x
and the may preorder has the additional axiom
F1
x 4 τx + τy .
Note that Tau2 follows from N2 and N4. We will now apply the algorithm
to obtain sound and ground-complete axiomatizations of the three associated
testing equivalences.
Beforehand, we mention a trivial simplification in applying the algorithm: if
the inequational axiomatization features an equation t ≈ u, formally speaking
this is an abbreviation for the two axioms t 4 u and u 4 t. Thus, step (1) of
the algorithm generates the equations t + u ≈ u and u + t ≈ t. Together, these
are equivalent to the original equation t ≈ u. Moreover, in the presence of t ≈ u
the two axioms generated by step (2) of the algorithm are redundant. Thus, we
can simplify the algorithm by leaving equations untouched.
The may preorder. The may preorder of [NH84] coincides with weak trace
inclusion, which is coarser than the ready simulation preorder. As remarked in
[NH84], it is not hard to see that the axiomatization above can be simplified to
A1-4 together with
τx ≈ x
αx + αy ≈ α(x + y)
x 4 x+y
Applying Thm. 3.2.7 yields a sound and ground-complete axiomatization of may
-testing equivalence, which coincides with weak trace equivalence. It consists of
A1-4, RS≡ and
τx
αx + αy
x+x+y
α(x + z) + α(x + y + z)
≈
≈
≈
≈
x
α(x + y)
x+y
α(x + y + z)
As RS≡ is an instance of the last axiom above, that last axiom follows from the
second, and the third from A3, this axiomatization can be simplified to A1-4
together with
τx ≈ x
αx + αy ≈ α(x + y)
which is a standard axiomatization for weak trace equivalences.
The must preorder. On BCCS, the must preorder of [NH84] coincides with
the failures preorder of CSP [BHR84]. Its inverse contains the ready simulation
preorder and is weak initials preserving. Hence we can apply Thm. 3.2.9 to
obtain a sound and ground-complete axiomatization of must-testing equivalence.
45
3.2 From Preorder to Equivalence
First we note that N4 is a simple consequence of E1 and thus can be omitted.
Now Thm. 3.2.9 yields the axioms A1-4, RS≡ and
N1
N21
N22
N3
E11
E12
αx + αy
x + τy
α(x+τ y+z)
αx + τ (αy + z)
τx + τy
α(τ x + τ y + z)
≈
≈
≈
≈
≈
≈
α(τ x + τ y)
x + τ y + τ (x + y)
α(x+τ y+z) + α(τ (x+y)+z)
τ (αx + αy + z)
τx + τy + x
α(τ x + τ y + z) + α(x + z)
This axiomatization can be simplified to A1-4 together with
N1
N2∗
N3
αx + αy ≈ α(τ x + τ y)
x + τ y ≈ τ y + τ (x + y)
αx + τ (αy + z) ≈ τ (αx + αy + z)
Namely, E11 implies Tau2 which allows us to reformulate N21 as N2∗ . The
latter axiom implies Tau2 (by taking y = x) and hence also N21 and E11 . It
remains to derive N22 , E22 and RS≡ . In all three cases, by N1 it suffices to
derive the instance where α = τ . Substituting τ y for y in N2∗ and applying
τ τ y ≈ τ y (which follows from N1) and Tau2 gives τ (x + τ y) ≈ x + τ y. Now it
is straightforward to derive N2τ2 , E2τ2 and RSτ≡ .
This axiomatization has been mentioned in [vG97], just like the axiomatization of weak trace equivalence mentioned above. However, we have not found
an actual proof of its ground-completeness (or the ground-completeness of any
other axiomatization of must-testing equivalence over BCCS) in the literature.
The combined may and must preorder. The combined may- and musttesting preorder is the intersection of the may- and the must-testing preorders.
On BCCS, it is contained in weak trace equivalence, and hence contains neither
the concrete ready simulation preorder, nor its inverse. Therefore, Thm. 3.2.6
– 3.2.9 are not applicable to it. However, its kernel does contain concrete ready
simulation equivalence, and with help of Thm. 3.2.10 we can obtain a sound and
ground-complete axiomatization of it. The algorithm yields the axioms A1-4,
RS≡ and
N1
N21
N22
N3
N41
N42
αx + αy
x + τy
α(x+τ y+z)
αx + τ (αy + z)
τx
α(τ x + z)
≈
≈
≈
≈
≈
≈
α(τ x + τ y)
x + τ y + τ (x + y)
α(x+τ y+z) + α(τ (x+y)+z)
τ (αx + αy + z)
τx + x
α(τ x + z) + α(x + z)
The soundness of these axioms follows from the fact that they are derivable
both from the axioms for the may preorder and from the axioms for the must
preorder.
The axiomatization above is easily seen to be equivalent to the axiomatization of must-testing equivalence. This is no surprise, as it is known that on
BCCS the must preorder and the combined preorder have the same kernel.
46
3.2.4
Chapter 3 Meta-theories for Axiomatizability
A Generalization to Infinite Processes
The results in [AFI07, dFGP08b] were obtained for finite processes only: processes that can be expressed in BCCSP. Hereby we extend these results to
infinite processes that can be expressed by adding constants to BCCS. This is
an easy way of dealing with recursion – an alternative to introducing recursion
as a syntactic construct and requiring congruence properties for it. An infinite
process can be defined by introducing one or more constants C together with
axioms like C ≈ abC; in this example, C represents a process that performs an
infinite alternating sequence of a and b actions.
In order to obtain completeness of the axiomatizations A(E), any extension
of BCCS with constants will do. Lem. 3.2.3, Prop. 3.2.4 and Thm. 3.2.10 remain
valid in this setting. The only place where structural induction is used is in the
proof of Lem. 3.2.3, and there constants do not bother us, as they cannot occur
on a path from the root of a context, seen as a parse tree, to the hole.
In order to obtain soundness, we furthermore
assume that for any constant
P
C in the language there is a closed term i∈I αi pi in our extension
P of BCCS
with constants – so the index set I is finite – such that C ↔ i∈I αi pi . It
then follows
P that any closed term is bisimulation equivalent to a closed term of
the form i∈I αi pi . With this assumption, all our results generalize to BCCS
augmented with constants.
The proof of Lem. 3.2.5 goes through unaltered. The only proof that needs
to be adapted is the one of Thm. 3.2.9.
Lemma 3.2.11 Let ≡ be a congruence containing ↔ that satisfies Tau2. If
α
p ⇒ p′ −−
→ p′′ then p ≡ p + αp′′ .
′
Proof: By induction on the length
P of the path p ⇒ p .
′ ↔
↔ there must be
In the base case p = p
i∈I αi pi , and by definition of
an i ∈ I with αi = α and pi ↔ p′′ . It follows that p ↔ p + αp′′ and hence
p ≡ p + αp′′ .
τ
α
Now assume p −−
→ p′ ⇒−−
→ p′′ . By induction, p′ ≡ p′ + αp′′ . Tau2 yields
p ≡ p + αp′′ .
Proof of Thm. 3.2.9: Suppose p ⊑ q. We have to show
P that p + q ≡ q, where
≡ is the kernel of ⊑. By the assumption above, p ↔ i∈I αi pi for a finite index
set I and closed terms αi pi in our extension of BCCS with constants. We have
{αi | i ∈ I} = Iτ (p) ⊆ Iτ (p) ⊆ P
Iτ (q), so for every i ∈ I there is a term qi such
αi
qi . Letq ′ := q + i∈I αi qi . Applying Lem. 3.2.11 once for every
that q ⇒−−→
i ∈ I we obtain q ≡ q ′ . Now Iτ (p) ⊆ Iτ (q), so Lem. 3.2.5 yields p + q ′ ≡ q ′ ,
which implies p + q ≡ q.
Applications (continued)
Adding divergence. In [NH84] a special constant Ω denoting divergence is
considered, and the three ground-complete axiomatizations of the preorders
3.2 From Preorder to Equivalence
47
mentioned in Section 3.2.3 extend to the presence of divergence by means of the
extra axiom
Ω
Ω4x .
Although Ω is defined in terms of a convergence predicate, in all three testing
preorders it is equivalent to a process engaging in an infinite τ -loop only. We
could therefore equivalently think of Ω as the process generated by adding the
τ
transition rule Ω −−→ Ω to BCCS. This way we obtain Ω ↔ τ Ω, thereby fulfilling
the soundness requirement of Section 3.2.4. Note that Iτ (Ω) = Iτ (Ω) = {τ }.
Invoking Thm. 3.2.7 we obtain a ground-complete axiomatization for maytesting equivalence by adding the extra axioms
Ω+x ≈ x
α(Ω + z) + α(x + z) ≈ α(x + z)
to the ones mentioned in Section 3.2.3. The second one is derivable from the
first and αx + αy ≈ α(x + y). Using A4, the first one is equivalent to Ω ≈ 0.
As the must preorder ⊑ satisfies Ω ⊑ a0 for some a 6= τ , it is not weak initials
preserving (in either direction) and we may not apply Thm. 3.2.9, as we did in
Section 3.2.3. In order to obtain a sound and ground-complete axiomatization
of must-testing equivalence, we therefore resort to Thm. 3.2.6. Applying the
algorithm to the ground-complete axiomatization of the must preorder yields
the extra axioms
Ω1
Ω2
Ω ≈ Ω+x
α(Ω + z) ≈ α(Ω + z) + α(x + z)
Thm. 3.2.6 requires us to explicitly check the soundness of N21 , E11 and Ω1 . We
may not use the soundness of N21 and E11 obtained in Section 3.2.3, as it could
have been invalidated by the addition of Ω to the language. The soundness of
N21 follows from Lem. 3.2.5, applying the remark right after Thm. 3.2.6. The
soundness of E11 follows because it is derivable from Tau2, which is derivable
from N2 and N4. The soundness of Ω1 follows because it is derivable from Ω,
Tau2 and E1, as shown in [NH84].
E2 and Tau2 yield Ω ≈ Ω + τ Ω ≈ τ Ω. With N1 the axiom Ω2 follows from
its instance where α = τ , which follows from E2 and τ Ω = Ω. Hence a sound
and ground-complete axiomatization of must-testing equivalence, also known as
the failures equivalence of CSP, consists of N1, N2∗ , N3 and Ω1 .
Applying the algorithm to the combined may and must preorder again yields
the extra axioms Ω1 and Ω2 , and using Thm. 3.2.10 we cannot assume soundness
without establishing this separately. In the presence of Ω the kernels of the must
preorder and the combined preorder do not coincide, and this time Ω1 turns out
not to be sound. This is an example where we cannot apply the algorithm to
obtain a sound and ground-compete axiomatization. We conjecture that such
an axiomatization exists nonetheless, namely consisting of N1, N2∗ , N3 and
D1 Ω + τ x ≈ Ω + x
D3 Ω + αx ≈ Ω + α(Ω + x) .
48
Chapter 3 Meta-theories for Axiomatizability
In [NH84] the axioms D1 and D3 have been derived from N1-4, thereby establishing their soundness.
3.2.5
Concluding Remark
In [dFGP08b], de Frutos Escrig, Gregorio Rodrı́guez and Palomino also present
a simplification of the algorithm of [AFI07] for a large class of applications. The
simplification consists in skipping step (2) in favor of a constrained similarity
axiom
NS≡
N (x, y) =⇒ αx + α(x + y) ≈ α(x + y) for α ∈ Aτ .
Here N (x, y) is a congruence relation on processes such that N (p, q) is implied by Iτ (p) = Iτ (q). The constrained similarity axiom is a conditional equation, but it can in several cases be recast in equational terms. In the special
case where N (p, q) holds iff Iτ (p) = Iτ (q), NS≡ is equivalent to RS≡ . They
show that the simplified algorithm applies to preorders ⊑ satisfying
NS
N (x, y) =⇒ x x + y
and such that p ⊑ q implies N (p, q). In case N (p, q) ⇔ Iτ (p) = Iτ (q) we have
that NS is equivalent to RS.
In applying this algorithm to τ -free preorders in the linear time – branching time spectrum I, they use three different constraints N , whose ranges of
application match those of our Thm. 3.2.7, 3.2.8 and 3.2.9. Yet, we have not
been able to apply the simplified algorithm to weak preorders, due to the fact
that we would need an asymmetric precongruence N , whereas symmetry is used
crucially in the proofs in [dFGP08b]. The same applies to the generalizations
of the constrained similarity approach investigated in [dFGP08a].
3.3
From Concrete to Weak Semantics
In this section, we establish a link between the axiomatizability of concrete and
weak semantics. Namely, we present a general method to derive a ground- (resp.
ω-)complete axiomatization for the weak semantics from its concrete counterpart. The method requires that for the semantics the following two axioms are
sound:
W1
αx + αy ≈ α(τ x + τ y)
W2 τ (x + y) + τ x ≈ τ x + y
In particular, this is the case for weak impossible futures, weak failures, weak
completed trace and weak trace semantics; see Def. 2.1.6.
Before presenting the algorithm, we define what is “the weak counterpart”
of a concrete semantics. We first look at the case of preorders, then adapt it to
the case of equivalences.
Definition 3.3.1 Given any concrete preorder - for which A1-4 are sound, the
corresponding weak preorder ⊑ is a precongruence satisfying:
49
3.3 From Concrete to Weak Semantics
1. The inequational theory of BCCS modulo ⊑ is a conservative extension
of the inequational theory of BCCSP modulo -. Namely, for any τ -free
terms p and q, p - q iff p ⊑ q;
2. W1-2 are sound modulo ⊑;
τ
τ
3. If p ⊑ q for some p and q with p −−
→ and q 6−−
→, then τ x 4 x is sound
modulo ⊑;
τ
τ
4. If p ⊑ q for some p and q with p 6−−
→ and q −−
→, then x 4 τ x is sound
modulo ⊑.
Remark 3.3.2
• Requirements (3) and (4) concern root conditions, which
are usually indispensable to make sure that the preorder under consideration is a precongruence. A typical root condition says: t ⊑ u only if
τ
τ
t −−→ implies that u −−→ (note that this is case for weak preorders, cf.
Def. 2.1.6).
• Given any concrete preorder -, we have defined a family of corresponding
weak preorders. However, it will become clear later that, due to the
constraint imposed by (2), the family of weak preorders gives rise to a
unique one, except for the treatment of τ ’s at the root, which is determined
by (3) and (4).
The case of an equivalence can be defined accordingly, as follows.
Definition 3.3.3 Given any concrete equivalence ≃, the corresponding weak
equivalence ≡ is a congruence satisfying:
1. The equational theory of BCCS modulo ≡ is a conservative extension of
the equational theory of BCCSP modulo ≃. Namely, for any τ -free p and
q, p ≃ q iff p ≡ q;
2. W1-2 are sound modulo ≡;
τ
τ
3. If p ≡ q for some p and q with p −−→ and q 6−−→, then τ x ≈ x is sound
modulo ≡.
We now embark on presenting the algorithm. Again we first deal with the
case of preorders. From any axiomatization EA for a concrete preorder - for
which A1-4 are sound, the following algorithm generates an axiomatization
A(EA ) for the corresponding weak preorder ⊑, which contains the following
(in)equations:
• For each t 4 u in EA , the same inequation t 4 u with the action names in
t and u ranging over A ∪ {τ } (the resulting set of inequations is denoted
by EA∪{τ } );
• W1 and W2; and
50
Chapter 3 Meta-theories for Axiomatizability
• The following option:
τ
τ
– If p ⊑ q for some p and q with p −−→ and q 6−−→, then τ x 4 x is
included;
τ
τ
– If p ⊑ q for some p and q with p 6−−
→ and q −−
→, then x 4 τ x is
included.
Remark 3.3.4 4 One might argue that the third bullet in the algorithm is
difficult to verify since a naı̈ve procedure might have to check infinitely many
process terms, and thus it lacks the algorithmic aspect. However, if the axiomatization E for ⊑ is finite and ground-complete which is the usual case of
applying the algorithm, this shortcoming can be remedied. As a matter of fact,
for the former condition, one can verify whether there is some inequation t 4 u
in E such that
• either τ ∈ I(t) \ I(u);
• or var (t) ) var (u).
It is not difficult to see that a positive answer can be obtained iff for some p
τ
τ
and q, p ⊑ q, p −−
→ and q 6−−
→. And the procedure terminates since E is finite.
The latter condition can be tackled in a similar way.
The algorithm for equivalences can be adapted accordingly. That is, A(EA )
contains the following equations for the weak equivalence ≡, given the axiomatization EA for the concrete equivalence ≃:
• For each t ≈ u in EA , the same equation t ≈ u with the action names in
t and u ranging over A ∪ {τ } (the resulting set of inequations is denoted
by EA∪{τ } );
• W1 and W2; and
τ
τ
• If p ≡ q for some p and q with p −−
→ and q 6−−
→, then τ x ≈ x is included.
Below we establish the correctness of the algorithm for the preorder (the
equivalence case follows the same lines), namely,
Theorem 3.3.5 Let - be a concrete preorder for which A1-4 are sound and
⊑ a corresponding weak preorder. Let E be an axiomatization such that EA is
sound and ground-complete for BCCSP(A) modulo - and EA∪{κ} is groundcomplete for BCCSP(A ∪ {κ}) modulo - for some κ ∈
/ A. Suppose that A(E) is
sound for BCCS(A) modulo ⊑. Then A(E) is ground-complete axiomatization
for BCCS(A) modulo ⊑. Moreover, if EA is ω-complete for BCCSP(A) and
EA∪{κ} is ω-complete for BCCSP(A ∪ {κ}) for some κ ∈
/ A, then A(E) is ωcomplete for BCCS(A).
4 This
is pointed by Luca Aceto when he reviewed a manuscript of the dissertation.
51
3.3 From Concrete to Weak Semantics
We first present some lemmas. For a start, the following inequations can be
derived from A1-4+W1-2:
D1
D2
≈
≈
P τ (τ x + y)
a( i∈I τ xi + y)
τ xP
+y
P
a( i∈I xi + y) + i∈I axi .
Lemma 3.3.6 D1-2 are derivable from A1-4+W1-2.
Proof: For D1,
τ (τ x + y) ≈ τ (τ (x + y) + τ x)
≈ τx + y .
(W2)
(W1, W2)
For D2, we apply induction on |I|. The base case I = ∅ is trivial. For |I| ≥ 1,
pick an i0 ∈ I,
X
X
τ xi + y)
τ xi + y) = a(τ xi0 +
a(
i∈I
i∈I\{i0 }
≈ a(τ (xi0 +
X
τ xi + y) + τ xi0 )
(W2)
i∈I\{i0 }
≈ a(xi0 +
X
τ xi + y) + axi0
i∈I\{i0 }
≈ a(xi0 +
X
xi + y) +
X
(W1)
axi + axi0
(induction)
i∈I\{i0 }
i∈I\{i0 }
X
X
axi .
xi + y) +
= a(
i∈I
i∈I
The proof is now complete.
Lemma 3.3.7 The following properties hold:
τ
1. Given any BCCS term t such that t 6−−
→, A1-4+W1-2 ⊢ t ≈ t′ for some
′
τ -free term t ; and
P
τ
2. Given any BCCS term t such that t −−→, A1-4+W1-2 ⊢ t ≈ i∈I τ ti for
some index set I 6= ∅ where for each i ∈ I, ti is τ -free.
Proof:
P
τ
1. We apply induction on |t|. Since t 6−−→, t = i∈I ai ti + X. For each i ∈ I,
X
X
ti ≈
τ t′j +
bk t′k + Yi .
j∈Ji
k∈Ki
τ
Because of D1, we can guarantee that for each j ∈ Ji , t′j −
6 −→. By D2,
ai ti ≈ ai (
X
j∈Ji
τ t′j +
X
k∈Ki
bk t′k +Yi ) ≈
X
j∈Ji
ai t′j +ai (
X
j∈Ji
t′j +
X
k∈Ki
bk t′k +Yi ) .
52
Chapter 3 Meta-theories for Axiomatizability
τ
′
Since |t′j | < |t| and
6 −→, byPinduction ⊢ tj ≈ t′′j such that t′′j is τ -free.
Ptj −
Moreover, since | j∈Ji t′j + k∈KP
bk t′k + Yi | P
< |t|, by induction there
i
′′′
exists some τ -free term t with ⊢ j∈Ji t′j + k∈Ki bk t′k + Yi ≈ t′′′ . It
follows that
X
ai t′′j + ai t′′′ .
ai ti ≈
j∈Ji
Hence (1) is obtained.
τ
2. Since t −−
→, we have
t≈
X
i∈I
τ ti +
X
aj tj + X ,
j∈J
τ
where I 6= ∅. Because of D1 we can guarantee that for each i ∈ I, ti 6−−
→.
Choosing arbitrary i0 ∈ I, by W2 we have
X
X
X
X
t≈
τ ti +
aj tj + X ≈
τ ti +
τ (ti0 + aj tj + X) .
i∈I
j∈J
i∈I
j∈J
Hence (2) follows from (1).
The proof is now complete.
P
Given any BCCS term
P t such that t = i∈I τ ti and for each i ∈ I, ti is
τ -free. We define I(t) as i∈I κti where κ is a fresh action.
P
P
Lemma 3.3.8 Given two BCCS terms t = i∈I τ ti and u = j∈J τ uj where
ti , uj are all τ -free for i ∈ I and j ∈ J. Let - be a concrete semantics for which
A1-4 are sound and ⊑ the corresponding weak semantics satisfying W1 and W2.
Then t ⊑ u iff I(t) - I(u).
Proof:
⇒) Suppose that t ⊑ u. Since ⊑ is a precongruence, κt ⊑ κu. That is,
X
X
τ uj ) .
τ ti ) ⊑ κ(
κ(
i∈I
j∈J
It follows from the soundness of W1 that
X
X
κuj .
κti ⊑
i∈I
j∈J
In other words, I(t) ⊑ I(u). Since I(t) and I(u) are both τ -free, I(t) - I(u).
⇐) If I(t) - I(u), then I(t) ⊑ I(u). Renaming κ into τ transforms I(t) and
I(u) into t and u respectively, and so I(t) ⊑ I(u) implies t ⊑ u.
53
3.3 From Concrete to Weak Semantics
Proof of Thm. 3.3.5: Let E be a sound and ground-complete axiomatization
of -. Suppose t ⊑ u, where we assume that either t and u are closed terms, or
E is ω-complete. It suffices to show A(E) ⊢ t 4 u. We distinguish the following
cases:
P
τ
τ
1. t −−
→ and u −−
→. P
Since t ⊑ u, by Lem 3.3.7(2), A(E) ⊢ t ≈ i∈I τ ti
and A(E) ⊢ u ≈ j∈J τ uj such
P i ∈ I and j ∈PJ, ti and
P that for each
τ
t
)
I(
u
are
τ
-free.
By
Lem.
3.3.8,
I(
i
j
i∈I κti j∈J τ uj ), i.e.,
i∈I
P
κu
Since
either
t
and
u
are
closed
terms
or
E
is
ω-complete,
and
j
Pj∈J
P
κt
and
κu
are
terms
over
BCCSP(A
∪
{κ}),
it
follows
from
i
j
i∈I
j∈J
the ground- (ω-)completeness of EA∪{κ} that
X
X
EA∪{κ} ⊢ I(
τ ti ) 4 I(
τ uj ) .
i∈I
j∈J
By a simple renaming of κ into τ , one can easily show that EA∪{τ } ⊢ t 4 u.
Hence A(E) ⊢ t 4 u.
τ
τ
2. t 6−−
→ and u 6−−
→. Since t ⊑ u, by Lem 3.3.7(1), A(E) ⊢ t ≈ t′ and
′
A(E) ⊢ u ≈ u such that t′ and u′ are τ -free. Clearly t′ - u′ and since
either t and u are closed terms or E is ω-complete, it follows from the
ground-completeness of E that E ⊢ t′ 4 u′ . Hence A(E) ⊢ t 4 u.
τ
τ
3. t −−
→ and u 6−−
→. Then the axiom τ x 4 x must be included. Since t ⊑ u,
τ t ⊑ τ u. The conclusion follows from A(E) ⊢ t ≈ τ t 4 τ u 4 u. Note that
the first step follows from D1 and the second one from Case (1).
τ
τ
4. t 6−−
→ and u −−
→. Then the axiom x 4 τ x must be included. Since t ⊑ u,
τ t ⊑ τ u. The conclusion follows from A(E) ⊢ t 4 τ t 4 τ u ≈ u. Note that
the second step follows from Case (1) and the last one from D1.
The proof is now complete.
3.3.1
Applications
In this section, we apply the above algorithm to produce complete axiomatizations for weak failures, weak completed trace and weak trace semantics from
their concrete counterparts respectively.
Failures Semantics
The definitions for concrete failures preorder -F and weak failures preorder
WF are given in Def. 2.1.4 and Def. 2.1.6 respectively. One can verify that
WF is the corresponding weak preorder of -F in a straightforward way.
54
Chapter 3 Meta-theories for Axiomatizability
It is known (see e.g. Section 4.3) that A1-4 together with the following axiom
F1
a(x + y) 4 ax + a(y + z)
constitute a ground-complete axiomatization for BCCSP(A) modulo concrete
failures preorder -F . Moreover, if |A| = ∞, it is even ω-complete. In contrast, if
|A| < ∞, it is not ω-complete anymore. To get a finite basis for the inequational
theory of BCCSP(A) modulo -F in case 1 < |A| < ∞, we need to add the
following axiom:
X
X
F2A
axa 4
axa + y .
a∈A
a∈A
Applying the algorithm described above, we can obtain a ground-complete
axiomatization which consists of A1-4 together with
WF
W1
W2
W3
α(x + y)
αx + αy
τ (x + y) + τ x
x
4
≈
≈
4
αx + α(y + z)
α(τ x + τ y)
τx + y
τx .
It is ground-complete in general, and even ω-complete if |A| = ∞ w.r.t. BCCS(A)
modulo weak failures preorder WF . It turns out that we can simplify it a little,
namely, the following axioms together with A1-4 suffice.
W1
W2′
W3′
αx + αy ≈ α(τ x + τ y)
τ (x + y) 4 τ x + y
x 4 τx + y
As a matter of fact, by W3′ , x+y 4 τ (x+y)+y +z 4 τ x+y +z 4 τ x+τ (y +z).
It follows that α(x + y) 4 αx + α(y + z).
In the case of a finite alphabet, we have to add one extra axiom
X
X
axa + τ xτ + y
axa + τ xτ 4
a∈A
a∈A
which is obtained directly from the algorithm. And again we can simplify it.
That is, F2A suffices. In summary, we have
Theorem 3.3.9 For BCCS(A) modulo weak failures preorder WF , the axiomatization A1-4+W1+W2′ +W3′ is ground-complete. Moreover, if |A| = ∞, it is
also ω-complete. If 1 < |A| < ∞, A1-4+W1+W2′ +W3′ +F2A is ω-complete.
Completed Trace Semantics
The definitions for concrete completed trace preorder -CT and weak completed
trace preorder WCT are given in Def. 2.1.2 and Def. 2.1.6 respectively. It is
known that A1-4 together with the following axioms
CT1
CT2
ax 4 ax + y
a(bw + cx + y + z) 4 a(bw + y) + a(cx + z)
3.3 From Concrete to Weak Semantics
55
constitute a ground-complete axiomatization for complete traces preorder. Moreover, axiom F1 is needed to make the axiomatization ω-complete. By applying
the algorithm, we obtain a ground-complete axiomatization which consists of
A1-4, W1-3 together with
WCT1
WCT2
αx 4 αx + y
α(βw + γx + y + z) 4 α(βw + y) + α(γx + z) .
It turns out that W1+W2′ +W3′ +CT1 together A1-4 suffice. To see this, clearly
W3 is an instance of W3′ , and WCT2 can be derived as:
α(βw + γx + y + z) 4 α(τ (βw + y) + τ (γx + z)) ≈ α(βw + y) + α(γx + z) .
(The first step follows from W3 and the second one follows from W1.) Moreover,
the instance τ x 4 τ x + y of WCT1 can be derived as τ x 4 τ (τ x + y) 4
τ x + y. (The first step follows from W3′ and the second one follows from D1.)
We now can conclude that A1-4+W1+W2′ +W3′ +CT1 is ground-complete for
BCCS modulo the weak completed traces preorder. Interesting, it is already
ω-complete since the axiom WF can be derived, as shown before. In summary,
Theorem 3.3.10 For BCCS(A) modulo weak completed trace preorder WCT ,
the axiomatization A1-4+W1+W2′ +W3′ +CT1 is ω-complete.
Trace Semantics
The definitions for concrete trace preorder -T and weak trace preorder WT are
given in Def. 2.1.2 and Def. 2.1.6 respectively. It is known that A1-4 together
with the following axioms
T1
T2
a(x + y) 4 ax + ay
x 4 x+y
constitute a ground-complete axiomatization for traces preorder. Moreover, if
|A| > 1, it is also ω-complete. When |A| = 1, the following extra axiom is
needed to make the axiomatization ω-complete:
T3
x 4 ax .
By applying the algorithm, we can obtain a ground-complete axiomatization
which consists of A1-4, W1-2 together with
WT1
T2
WE
α(x + y) 4 αx + αy
x 4 x+y
x ≈ τx .
Clearly, due to WE, W1-2 become redundant and WT1 can be replaced simply
by T1. In summary, we have
Theorem 3.3.11 For BCCS(A) modulo weak trace preorder WT , the axiomatization A1-4+T1-2+WE is ground-complete. Moreover, if 1 < |A| ≤ ∞, it is
also ω-complete. If |A| = 1, A1-4+T1-2+WE+T3 is ω-complete.
56
3.4
Chapter 3 Meta-theories for Axiomatizability
Inverted Substitutions
Groote [Gro90] introduced the technique of inverted substitutions to prove that
an equational axiomatization is ω-complete, which works as follows. Recall that
T(Σ) and (Σ) denote the sets of closed and open terms, respectively, over some
signature Σ. Consider an axiomatization E. For each equation t ≈ u of which
all closed instances can be derived from E, one must define a closed substitution
ρ and a mapping R : T(Σ) → (Σ) such that:
(1) E ⊢ R(ρ(t)) ≈ t and E ⊢ R(ρ(u)) ≈ u;
(2) For each function symbol f (with arity n) in the signature, E ∪ {pi ≈
qi , R(pi ) ≈ R(qi ) | i = 1, . . . , n} ⊢ R(f (p1 , . . . , pn )) ≈ R(f (q1 , . . . , qn ))
for all closed terms p1 , . . . , pn , q1 , . . . , qn ; and
(3) E ⊢ R(σ(v)) ≈ R(σ(w)) for each v ≈ w ∈ E and closed substitution σ.
Then, as proved in [Gro90], E is ω-complete.
Here we adapt his technique to make it suitable for inequational axiomatizations.
Theorem 3.4.1 Consider an inequational axiomatization E over Σ. Suppose
that for each inequation t 4 u of which all closed instances can be derived from
E, there are a closed substitution ρ and a mapping R : T(Σ) → (Σ) such that:
(1) E ⊢ t 4 R(ρ(t)) and E ⊢ R(ρ(u)) 4 u;
(2) E ⊢ R(σ(v)) 4 R(σ(w)) for each v 4 w ∈ E and closed substitution σ;
and
(3) For each function symbol f (with arity n) in the signature, and all closed
terms p1 , . . . , pn , q1 , . . . , qn :
E ∪ {pi 4 qi , R(pi ) 4 R(qi ) | i = 1, . . . , n} ⊢
R(f (p1 , . . . , pn )) 4 R(f (q1 , . . . , qn )) .
Then E is ω-complete.
Proof: Let t, u be terms such that for each closed substitution σ,
σ(t) 4 σ(u) .
By assumption, there are a closed substitution ρ and a mapping R : T(Σ) →
(Σ) such that properties (1)-(3) above are satisfied. We have to prove that
E ⊢ t 4 u. This is an immediate corollary of the following claim:
Claim. For all closed terms p, q:
E ⊢ p 4 q
=⇒
E ⊢ R(p) 4 R(q) .
3.5 Related and Future Work
57
Namely, by assumption, E ⊢ ρ(t) 4 ρ(u), and then the claim above implies that
E ⊢ R(ρ(t)) 4 R(ρ(u)). So by property (1), E ⊢ t 4 u.
Proof of the claim: By induction on the proof of E ⊢ p 4 q. We have to check
the four kinds of inference rules of inequational logic (cf. 2.1.3).
• p = q. Then R(p) = R(q).
• p 4 q is an instance of some v 4 w ∈ E and a closed substitution σ. By
property (2), E ⊢ R(p) 4 R(q).
• E ⊢ p 4 q has been proved by E ⊢ p 4 r and E ⊢ r 4 q, for some r. By
induction, E ⊢ R(p) 4 R(r) and E ⊢ R(r) 4 R(q). So E ⊢ R(p) 4 R(q).
• p = f (p1 , . . . , pn ) and q = f (q1 , . . . , qn ), and E ⊢ p 4 q has been proved
by E ⊢ pi 4 qi for i = 1, . . . , n. By induction, E ⊢ R(pi ) 4 R(qi ) for
i = 1, . . . , n. So by property (3), E ⊢ R(f (p1 , . . . , pn )) 4 R(f (q1 , . . . , qn )).
The whole proof is now complete.
3.5
Related and Future Work
Meta-theories for axiomatizability are an interesting topic. However, it is largely
unexplored in the literature. Notable exception, which shares the same spirit of
the results presented in this chapter, is a recent result due to Aceto, Fokkink, Ingolfsdottir and Mousavi [AFIM08] where they present a general technique for obtaining new results pertaining to the non-finite axiomatizability of behavioral semantics over process algebras from old ones. The proposed technique is based on
a variation on the classic idea of reduction mappings which are basically translations between languages that preserve sound (in)equations and (in)equational
proofs over the source language, and reflect families of (in)equations responsible
for the non-finite axiomatizability of the target language. This technique is applied to obtain a number of new non-finite axiomatizability theorems in process
algebra via reduction to Moller’s celebrated non-finite axiomatizability result
for CCS, for instance, discrete-time CCS modulo timed bisimilarity, temporal
CCS modulo timed bisimilarity.
For the future work, on the one hand we expect more meta theorems; On
the other hand, we believe that these theorems can be cast into more general
models such as those given by Plotkin-style SOS rules.
58
Chapter 3 Meta-theories for Axiomatizability
Chapter 4
On Finite Alphabets and Infinite Bases
4.1
Introduction
To give further insight into the identifications made by the respective behavioral equivalences in the linear time – branching time spectrum I, van Glabbeek
[vG90, vG01] studied them in the setting of the process algebra BCCSP. In
particular, he associated with every behavioral equivalence a sound equational
axiomatization. Most of the axiomatizations were also shown to be groundcomplete. In this chapter, we shall consider the existence of finite bases for
these semantics. Given a finite ground-complete axiomatization, to prove that
it is a finite basis, it suffices to establish that it is ω-complete (see Section 2.1.3).
Groote [Gro90] proposed a general “inverted substitution” technique to prove
that an axiomatization is ω-complete (see also Section 3.4). He applied his technique to establish ω-completeness of several of van Glabbeek’s ground-complete
axiomatizations. In practice, Groote’s technique only works in case of an infinite alphabet of actions.1 On the other hand, in case of a singleton alphabet,
most of the semantics in the linear time – branching time spectrum I collapse to
either trace or completed trace semantics, in which case the equational theory
of BCCSP is known to have a finite basis. However, in case of a finite alphabet
with at least two actions, for most semantics in the linear time – branching time
spectrum I it remained unknown whether the equational theory of BCCSP has
a finite basis. In this chapter, we settle all remaining open questions.
We first give a brief summary of what was known up to now, and which open
questions remained. Moller [Mol89] proved that the sound and ground-complete
axiomatization for BCCSP modulo bisimulation equivalence is ω-complete, independent of the cardinality of the alphabet A. Groote [Gro90] presented ωcompleteness proofs for completed trace equivalence (again independent of the
cardinality of A), for trace equivalence (if |A| > 1), and for ready and failure equivalence (if |A| = ∞). Van Glabbeek [vG01, page 78] noted (without
proof) that Groote’s technique of inverted substitutions can also be used to
1 In case of an infinite alphabet, occurrences of action names in axioms are interpreted as
variables, as otherwise most of the axiomatizations mentioned in this introduction would be
infinite.
59
60
Chapter 4 On Finite Alphabets and Infinite Bases
prove that the ground-complete axiomatizations for BCCSP modulo simulation, ready simulation and failure trace equivalence are ω-complete if |A| = ∞.
The same observation can be made regarding possible worlds semantics. Blom,
Fokkink and Nain [BFN03] proved that BCCSP modulo ready trace equivalence
does not have a finite sound and ground-complete axiomatization if |A| = ∞.
Aceto, Fokkink, van Glabbeek and Ingolfsdottir [AFvGI04] proved such a negative result for 2-nested simulation and possible futures equivalence, for any A.
If |A| = 1, then all semantics from completed traces up to ready simulation
coincide with completed trace semantics, while simulation coincides with trace
semantics. And there exists a finite basis for the equational theories of BCCSP
modulo completed trace and trace equivalence if |A| = 1.
This chapter settles all the remaining questions: we prove that there is a
finite basis for the equational theory of BCCSP modulo failure semantics, in
case 1 < |A| < ∞; for all the other cases standing open up to now (in Tab. 4.1
on page 88 with grey shadow), we prove that such a finite basis lacks.
Recall that the semantics considered in this chapter have a natural formulation as a preorder relation -, where p - q if p is in some way simulated by
q, or if the decorated traces of p are included in those of q. The corresponding
equivalence relation ≃ is defined as: p ≃ q iff both p - q and q - p. Recently,
Aceto, Fokkink and Ingolfsdottir [AFI07] gave an algorithm that, given a sound
and ground-complete axiomatization for BCCSP modulo a preorder no finer
than ready simulation, produces a sound and ground-complete axiomatization
for BCCSP modulo the corresponding equivalence. Moreover, if the original
axiomatization for the preorder is ω-complete, then so is the resulting axiomatization for the equivalence (see also Section 3.2 for more details). So for the
positive result regarding failure semantics, the stronger result is obtained by
considering failure preorder. On the other hand, the negative results become
more general if they are proved for the equivalence relations.
Structure of the chapter. Section 4.2 presents some basic facts. Section 4.3
contains a positive result for failure preorder. The remainder of the chapter
presents negative results: Section 4.4 for failure trace equivalence, Section 4.5 for
any equivalence from possible worlds up to ready pairs, Section 4.6 for simulation
equivalence, Section 4.7 for completed simulation equivalence, and Section 4.8
for ready simulation equivalence. We conclude in Section 4.9 with an overview
of the positive and negative results achieved in this chapter.
4.2
Basic Facts
Lemma 4.2.1
1. If t -T u, then depth(t) ≤ depth(u).
2. If t -T u, then act k (t) ⊆ act k (u) for all k ≥ 0. Moreover, if t -F u, then
also act 0 (u) ⊆ act 0 (t), so I(t) = I(u).
a ···a
1
k
3. Suppose |A| > 1. If t -T u, then, for all variables x, t −−−
−−→
x + t′ for
a1 ···ak
′
′
′
some term t implies u −−−−−→ x + u for some term u . Hence var k (t) ⊆
61
4.3 Failures
var k (u) for all k ≥ 0.
Proof:
1. If depth(t) = k, then there exists a sequence of actions a1 · · · ak and a
a1 ···ak
term t′ such that t −−−
−−→ t′ . Let ρ be the closed substitution defined by
ρ(x) = 0 for all x ∈ V . Then a1 · · · ak is a trace of ρ(t) and hence, since
t -T u, of ρ(u). From the definition of ρ it is then clear that there exists
a1 ···ak
a term u′ such that u −−−
−−→ u′ . It follows that depth(t) = k ≤ depth(u).
2. First suppose t -T u and let a ∈ act k (t) for some k ≥ 0. Then there
a1 ···ak
exists a sequence of actions a1 · · · ak and a term t′ such that t −−−
−−→ t′
′
and a ∈ I(t ). Now, let ρ be the closed substitution defined by ρ(x) = 0
for all x ∈ V . Then a1 · · · ak a is a trace of ρ(t) and hence, since t -T u,
of ρ(u). From the definition of ρ it is then clear that there exists a term
a1 ···ak
−−→ u′ with a ∈ I(u′ ), so a ∈ act k (u).
u′ such that u −−−
Next, suppose t -F u and let ρ be the closed substitution defined by
ρ(x) = 0 for all x ∈ V . Then (λ, A \ I(t)) (with λ denoting the empty
sequence) is a failure pair of ρ(t), and hence of ρ(u), so I(u)∩(A\I(t)) = ∅;
it follows that act 0 (u) ⊆ act 0 (t). Since t -F u implies t -T u, and hence
act 0 (t) ⊆ act 0 (u), it immediately follows that I(t) = act 0 (t) = act 0 (u) =
I(u).
a ···a
1
k
3. Let x be a variable and suppose t −−−
−−→
x + t′ for some term t′ . Let
m ≥ depth(u), let a and b be two distinct elements of A, and let ρ be
the closed substitution defined by ρ(x) = am b0 and ρ(y) = 0 for any
a1 ···a
b
k+m
variable y 6= x. Then ρ(t) −−−−−
−−→ 0 (with ak+1 · · · ak+m = am ). Since
ρ(t) -T ρ(u), a1 · · · ak+m b is also a trace of ρ(u). Since m ≥ depth(u),
a
a ···a
···a
b
k+m
1
i
−−→
z + u′ for some i < m, where ρ(z) −−i+1
−−−−−
−−→ p. By
clearly u −−−
a1 ···ak
′
the definition of ρ, z = x and i = k, so u −−−−−→ x + u for some term u′ .
Clearly it follows that x ∈ var k (t) implies x ∈ var k (u) for all variables x,
so var k (t) ⊆ var k (u).
Note that Lem. 4.2.1(3) fails in case |A| = 1, for if A = {a}, then x -T ax.
In the remainder of this chapter we will assume that |A| > 1.
4.3
Failures
In this section we consider the failures preorder -F . Van Glabbeek [vG01]
presented a sound and ground-complete axiomatization of the failures preorder
consisting of the axioms A1-4, the axiom
F1
a(x + y) 4 ax + a(y + z) ,
and the axiom ax 4 ax + az. Note that the latter axiom is actually superfluous,
since it can be obtained from F1 by substituting 0 for y and applying A3.
62
Chapter 4 On Finite Alphabets and Infinite Bases
Below, we provide a basis for the equational theory of BCCSP modulo -F .
We shall prove that A1-4+F1 is a basis if |A| = ∞ (see Corollary 4.3.6). To get
a basis for the case that 1 < |A| < ∞, it will be necessary to add the following
axiom:
X
X
F2A
axa + y ,
axa 4
a∈A
a∈A
where {xa | a ∈ A} is a family of distinct variables and y 6∈ {xa | a ∈ A}.
To see that F2A is sound modulo -F , let ρ be an arbitrary
closed substiP
tution and consider a failure pair (a1 · · · ak , B) of ρ( a∈A axa ). If k > 0,
then clearly (aP
2 · · · ak , B) is a failure pair of ρ(xa1 ), so (a1 · · · ak , B) is a failure pair
of
ρ(
a∈A axa + y). On the other hand, if k = 0, then note that
P
I(ρ(
ax
))
= A, so B = ∅, and hence (a1 · · · ak , B) is a failure pair of
a
a∈A
P
ρ( a∈A axa + y). To see that F2A′ is not sound modulo -F if A′ is a proper
′
subset of A,
substitution such
P let ρ be the closed
Pthat ρ(y) = b0 for some b 6∈ A ;
′
′
then I(ρ( a∈A′ axa )) = A 6= A ∪ {b} = I(ρ( a∈A′ axa + y)). Since A1-4+F1
are sound modulo -F independent of the alphabet, it also follows that F2A
cannot be derived from A1-4+F1.
Axiom F2A expresses that additional variable summands may be added to
a term t whenever I(t) = A. The following lemma confirms that the proviso
I(t) = A is necessary.
Lemma 4.3.1 If t -F u, then var 0 (t) ⊆ var 0 (u), and if moreover I(t) 6= A,
then var 0 (t) = var 0 (u).
Proof: Suppose t -F u. That var 0 (t) ⊆ var 0 (u) follows immediately from
Lem. 4.2.1(3). To prove that I(t) 6= A implies var 0 (t) = var 0 (u), suppose,
towards a contradiction, that a 6∈ I(t) for some a ∈ A and that x ∈ var 0 (u) \
var 0 (t) for some x ∈ V . Define a closed substitution ρ by ρ(x) = a0 and
ρ(y) = 0 for y 6= x. Since a 6∈ I(t) and x 6∈ var 0 (t), (λ, {a}) (with λ the empty
trace) is a failure pair of ρ(t). Since x ∈ var 0 (u), (λ, {a}) is not a failure pair
of ρ(u + Y ). This contradicts the assumption that t -F u. We conclude that
I(t) 6= A implies var 0 (t) = var 0 (u).
According to Lem. 4.3.1, all the variable summands of t are also summands
of u. Moreover, if u has a variable summand x that t does not have, then
I(t) = A, so we can derive t 4 t + x with an application of F2A . We proceed
to establish, for all a ∈ A, a relation between a prefix summand at′ of t and
the sum of all similar prefix summands au′ of u. To conveniently express this
relation, we first introduce some further notation.
Let t be a term, and let A′ ⊆ A; we define the restriction t↾A′ of t to A′ by
X
t↾A′ =
{at′ | a ∈ A′ and at′ ⋐ t} .
Recall that t -F u if, for all closed substitutions ρ, the failure pairs of ρ(t) are
included in ρ(u). The preorder -F fails to have certain structural properties
with respect to the operations of BCCSP; in particular, we cannot in general
63
4.3 Failures
conclude from at -F au that t -F u. It will therefore be technically convenient
to also have notation for a preorder that is slightly coarser than -F . We define
the length of a failure pair (a1 · · · ak , B) as the length of the sequence a1 · · · ak ,
and we write t -1F u if, for all closed substitutions ρ, the failure pairs of length
≥ 1 of ρ(t) are included in those of ρ(u). We leave it to the readers to verify
that t -F u iff t -1F u and I(u) ⊆ I(t), and that at -1F au implies t -1F u.
Lemma 4.3.2 If t -1F u, then, for every summand at′ of t, at′ -F u↾{a} .
Proof: Suppose t -1F u. Let at′ ⋐ t and let ρ be a closed substitution.
We first prove that the failure pairs of length ≥ 1 of ρ(at′ ) are included in
those of ρ(u↾{a} ), and then we will conclude that also the failure pairs of length
0 of ρ(at′ ) are included in those of ρ(u↾{a} ).
Consider a failure pair (a1 · · · ak , B) of ρ(at′ ) with k ≥ 1. Then (a1 · · · ak , B)
is a failure pair of ρ(t). By our assumption that t -1F u, it follows that
(a1 · · · ak , B) is a failure pair of ρ(u). From this we cannot directly conclude
that u has a summand au′ such that (a1 · · · ak , B) is a failure pair of ρ(au′ ), as
u may have a variable summand x such that (a1 · · · ak , B) is a failure pair of
ρ(x). To ascertain that u nevertheless also has the desired summand au′ , we
define a modification ρ′ of ρ such that for all ℓ < k and for all terms v, ρ(v)
and ρ′ (v) have the same failure pairs (b1 · · · bℓ , B), while (a1 · · · ak , B) is not a
failure pair of ρ′ (x) for all x ∈ V .
We obtain ρ′ (x) from ρ(x) by replacing subterms ap at depth k − 1 by 0 if
a 6∈ B and by aa0 if a ∈ B. That is,
ρ′ (x) = chop k−1 (ρ(x))
with chop m for all m ≥ 0 inductively defined by
chop m (0)
chop m (p + q)
chop 0 (ap)
chop m+1 (ap)
= 0
= chop
m (p) + chop m (q)
0
if a 6∈ B
=
aa0 if a ∈ B
= a chop m (p) .
We first prove two properties concerning the failure pairs of chop m (p), for
m ≥ 0 and closed terms p.
I. For all ℓ ≤ m, the closed terms p and chop m (p) have the same failure pairs
(b1 · · · bℓ , B). To show this, we apply induction on m.
Base case: Since the summands of chop 0 (p) are aa0 for all a ∈ I(p) ∩ B,
I(p) ∩ B = ∅ iff I(chop 0 (p)) ∩ B = ∅.
Inductive case: Let ℓ ≤ m + 1; we distinguish cases according to whether
ℓ = 0 or ℓ > 0. If ℓ = 0, then, since I(p) = I(chop m+1 (p)), it follows that
I(p) ∩ B = ∅ iff I(chop m+1 (p)) ∩ B = ∅, so (b1 · · · bℓ , B) is a failure pair
b
1
p′
of p iff it is a failure pair of chop m+1 (p). If ℓ > 0, then, since p −−→
64
Chapter 4 On Finite Alphabets and Infinite Bases
b
1
chop m (p′ ) and, by the induction hypothesis, p′ and
iff chop m+1 (p) −−→
′
chop m (p ) have the same failure pairs (b2 · · · bℓ , B), (b1 · · · bℓ , B) is a failure
pair of p iff it is a failure pair of chop m+1 (p).
II. chop m (p) does not have any failure pair (b1 · · · bm+1 , B). To show this, we
apply induction on m.
Base case: Since the summands of chop 0 (p) are aa0 with a ∈ I(p) ∩ B,
chop 0 (p) does not have a failure pair (b1 , B).
Inductive case: By induction, for closed terms q, chop m (q) does not
have failure pairs (b2 · · · bm+2 , B). Since the transitions of chop m+1 (p)
b
b
1
1
p′ , it follows that chop m+1 (p)
chop m (p′ ) for p −−→
are chop m+1 (p) −−→
does not have failure pairs (b1 · · · bm+2 , B).
We proceed to prove that ρ′ has the desired properties mentioned above.
A. For all ℓ < k and for all terms v, ρ(v) and ρ′ (v) have the same failure pairs
(b1 · · · bℓ , B). To show this, we apply induction on ℓ.
Base case: From the definition of chop k−1 it follows that I(ρ′ (x)) ∩ B =
I(ρ(x)) ∩ B for all x ∈ V . Hence, I(ρ(v)) ∩ B = ∅ iff I(ρ′ (v)) ∩ B = ∅.
Inductive case: Let ℓ + 1 < k. We prove for each summand of v that
applying ρ or ρ′ gives rise to the same failure pairs (b1 · · · bℓ+1 , B). By
property (I), ρ(x) and ρ′ (x) = chop k−1 (ρ(x)) have the same failure pairs
(b1 · · · bℓ+1 , B). Furthermore, by induction, for each summand b1 v ′ of v,
ρ(v ′ ) and ρ′ (v ′ ) have the same failure pairs (b2 · · · bℓ+1 , B); so ρ(b1 v ′ ) and
ρ′ (b1 v ′ ) have the same failure pairs (b1 · · · bℓ+1 , B).
B. (a1 · · · ak , B) is not a failure pair of ρ′ (x) for all x ∈ V . This is immediate
from property (II).
Now, since (a1 · · · ak , B) is a failure pair of ρ(at′ ), (a2 · · · ak , B) is a failure pair
of ρ(t′ ), and hence, by property (A), of ρ′ (t′ ). It follows that (a1 · · · ak , B) is
a failure pair of ρ′ (t), and hence, by our assumption that t -1F u, of ρ′ (u).
Since, according to property (B), u does not have a variable summand x such
that (a1 · · · ak , B) is a failure pair of ρ′ (x), and since a1 = a, u must have a
summand au′ such that (a1 · · · ak , B) is a failure pair of ρ′ (au′ ) of u. Then,
again by property (A), (a1 · · · ak , B) is a failure pair of ρ(au′ ) and hence of
ρ(u↾{a} ).
We have now established that the failure pairs of length ≥ 1 of ρ(at′ ) are included in those of ρ(u↾{a} ). In particular, since ρ(at′ ) has the failure pair (a, ∅),
so does ρ(u↾{a} ), and hence I(ρ(at′ )) = {a} = I(ρ(u↾{a} )). As an immediate
consequence we get that also the failure pairs of length 0 of ρ(at′ ) are included
in those of ρ(u↾{a} ). We conclude that at′ -F u↾{a} .
P
We now proceed to establish that if the inequation at′ 4 j∈J auj is sound
modulo the failures preorder, then it can be derived from A1-4+F1+F2A . For
the case that I(t) 6= A, we need the following lemma.
65
4.3 Failures
P
Lemma 4.3.3 If at -F j∈J auj and I(t) 6= A, then there exists j ∈ J such
that I(uj ) ⊆ I(t) and var 0 (uj ) ⊆ var 0 (t).
P
Proof: Suppose at -F j∈J uj and I(t) 6= A. Let b ∈ A \ I(t) and define the
closed substitution ρ by ρ(x) = 0 if x ∈ var 0 (t) and ρ(x) = b0 if x 6∈ var 0 (t).
Then (a, A \ I(t)) is a failure pair of ρ(at), so there exists j ∈ J such that
(a, A \ I(t)) is a failure pair of auj . From (A \ I(t)) ∩ I(ρ(uj )) = ∅ it follows
that I(uj ) ⊆ I(t) and var 0 (uj ) ⊆ var 0 (t).
The following lemma constitutes the crucial step in our completeness proof.
Lemma 4.3.4 If at -F
P
auj , then A1-4+F1+F2A ⊢ at 4
j∈J
P
j∈J auj .
P
Proof: We apply induction on the depth of t. Note that from at -F j∈J auj
P
P
it follows that t -1F
i∈I bi ti . Then, for all i ∈ I,
j∈J uj . Let t↾I(t) =
P
by Lem. 4.3.2 bi ti -F Pj∈J uj ↾{bi } , and hence by the induction hypothesis
A1-4+F1+F2A ⊢ bi ti 4 j∈J uj ↾{bi } . It follows that
A1-4+F1+F2A ⊢ t↾I(t) =
X
bi t i 4
i∈I
XX
uj ↾{bi } =
i∈I j∈J
X
uj ↾I(t) .
(4.1)
j∈J
We distinguish two cases.
Case 1: I(t) 6= A.
According to Lem. 4.3.3 that there exists j0 ∈ J such that I(uj0 ) ⊆ I(t)
and var 0 (uj0 ) ⊆ var 0 (t), and hence
uj0 ↾I(t) + var 0 (t) = uj0 + var 0 (t) .
We get the following derivation:
at = a(t↾I(t) + var 0 (t))
X
4 a(
uj ↾I(t) + var 0 (t))
(by (4.1))
j∈J
= a(uj0 +
X
uj ↾I(t) + var 0 (t))
(by (4.2))
j∈J
X
uj + var 0 (t))
4 auj0 + a(
(by F1)
j∈J
= auj0 + a
X
uj
(by Lem. 4.2.1(3))
j∈J
4 auj0 +
X
j∈J
=
X
j∈J
auj .
auj
(by F1)
(4.2)
66
Chapter 4 On Finite Alphabets and Infinite Bases
Case 2: I(t) = A.
S
Then since var 0 (t) ⊆ j∈J var 0 (uj ) by Lem. 4.2.1(3), with an application
of F2A
[
var 0 (uj ) .
(4.3)
t = t↾I(t) + var 0 (t) 4 t↾I(t) +
j∈J
We now get the following derivation:
at = a(t↾I(t) + var 0 (t))
[
var 0 (uj ))
4 a(t↾I(t) +
(by (4.3))
j∈J
[
X
var 0 (uj ))
uj ↾I(t) +
4 a(
=a
X
(by (4.1))
j∈J
j∈J
uj
(since I(t) = A)
j∈J
4
X
auj
(by F1)
j∈J
Concluding, we have proved that A1-4+F1+F2A ⊢ at 4
P
j∈J
auj .
We are now in a position to establish that A1-4+F1+F2A constitutes a
complete axiomatization of the failures preorder.
Theorem 4.3.5 If 0 < |A| < ∞, then A1-4+F1+F2A is a complete axiomatization of BCCSP modulo failures preorder, i.e., for all terms t and u, if t -F u,
then A1-4+F1+F2A ⊢ t 4 u.
P
Proof: Suppose t -F u, and suppose t = i∈I ai ti + var 0 (t). Then, for all i ∈ I,
by Lem. 4.3.2 ai ti -F u↾{ai } , so by Lem. 4.3.4, A1-4+F1+F2A ⊢ ai ti 4 u↾{ai } .
Clearly, since I(t) = I(u) by Lem. 4.2.1(2), it follows that
A1-4+F1+F2A ⊢ t↾I(t) 4 u↾I(u) .
There are now two cases:
Case 1: I(t) 6= A.
Then var 0 (t) = var 0 (u) by Lem. 4.3.1, so clearly
A1-4+F1+F2A ⊢ t = t↾I(t) + var 0 (t) 4 u↾I(u) + var 0 (u) = u .
Case 2: I(t) = A.
Then var 0 (t) ⊆ var 0 (u) by Lem. 4.3.1, so t = t↾I(t) + var 0 (t) 4 t↾I(t) +
var 0 (u) by F2A , and hence
A1-4+F1+F2A ⊢ t = t↾I(t) + var 0 (t) 4 u↾I(u) + var 0 (u) = u .
67
4.4 Failure Traces
The proof is now complete.
Groote [Gro90] proved that in case |A| = ∞, BCCSP modulo failures equivalence has a finite basis. Here we can obtain the same result for failure preorder,
by copying the proofs of Lem. 4.3.4 and Thm. 4.3.5, but omitting in both proofs
“Case 2”, which is only relevant for finite alphabets.
Corollary 4.3.6 If |A| = ∞, then A1-4+F1 is an ω-complete axiomatization
for BCCSP modulo failures preorder.
4.4
Failure Traces
In this section we consider failure trace equivalence ≃FT . (We mention in passing that for finitely branching processes, this is the same as refusal semantics
[Phi87].) Blom, Fokkink and Nain [BFN03] gave a finite axiomatization that
is sound and ground-complete for BCCSP modulo ≃FT . It consists of axioms
A1-4 together with
FT
RS
ax + ay ≈ ax + ay + a(x + y)
a(bx + by + z) ≈ a(bx + by + z) + a(bx + z) ,
where a, b range over A. Groote [Gro90] applied his technique of inverted substitutions to prove that this axiomatization is ω-complete in case A is infinite.
In this section we consider the case 1 < |A| < ∞. We prove that then there
does not exist a finite sound and ground-complete axiomatization for BCCSP
modulo ≃FT that is ω-complete as well, and therefore failure trace equivalence
is not finitely based over BCCSP. The cornerstone for this negative result is the
following infinite family of equations en (n ≥ 1):
  a^{n+1}x + a(a^n x + x) + a Σ_{b∈A\{a}} a^n(b0 + x)  ≈  a(a^n x + x) + a Σ_{b∈A\{a}} a^n(b0 + x) .
These equations are sound modulo ≃FT . The idea is that, given a closed substitution ρ, either I(ρ(x)) ⊆ {a}, in which case the failure traces of ρ(a^{n+1}x) are included in those of ρ(a(a^n x + x)); or c ∈ I(ρ(x)) for some c ≠ a, in which case the failure traces of ρ(a^{n+1}x) are included in those of ρ(a Σ_{b∈A\{a}} a^n(b0 + x)).
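For concreteness, the instance n = 1 over A = {a, b} (spelled out here purely as an illustration) reads

  a^2 x + a(ax + x) + a(a(b0 + x))  ≈  a(ax + x) + a(a(b0 + x)) .

If ρ(x) performs only a's (or nothing at all), every failure trace of ρ(a^2 x) is already a failure trace of ρ(a(ax + x)); if ρ(x) can perform b, it is matched by ρ(a(a(b0 + x))).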
We shall use the proof-theoretic technique to show that ≃FT is not finitely
based. The intuition behind our proof is that if the axioms in E have depth at
most n, then the summand an+1 x at the left-hand side of en cannot be eliminated by means of a derivation from E. There is, however, one complication: the
summand an+1 x may be “glued together” with other summands. For example,
using the axioms FT and RS we can derive for n ≥ 1:
  a^{n+1}x + a Σ_{b∈A\{a}} a^n(b0 + x)  ≈  a(a^n x + Σ_{b∈A\{a}} a^n(b0 + x)) .
The right-hand side of the equation above does not have a summand a^{n+1}x, so the property of having a summand a^{n+1}x is not preserved. Note that the right-hand side still does have a summand of the form av such that a^n x -FT v (take v = a^n x + Σ_{b∈A\{a}} a^n(b0 + x)). We shall be able to show that if the equation t ≈ u is derivable from a collection of sound equations of terms with depth ≤ n, then it satisfies the following property P_n^FT :

  If t, u -FT a(a^n x + x) + a Σ_{b∈A\{a}} a^n(b0 + x), then t has a summand at′ such that a^n x -FT t′ iff u has a summand au′ such that a^n x -FT u′ .
In Lem. 4.4.1 we shall first establish that a substitution instance of a sound
equation of terms with a depth ≤ n satisfies PnFT . Then, in Prop. 4.4.2, we
prove that PnFT is preserved in derivations from a collection of sound equations
of depth ≤ n. Finally, we shall conclude that the family of equations en (n ≥ 1)
obstructs a finite basis, because the left-hand side has the summand an+1 x,
while the right-hand side does not have a summand au′ with an+1 x -FT au′ .
Lemma 4.4.1 Suppose that t ≃FT u, let n ≥ 1 be a natural number greater than or equal to the depth of t and u, and suppose

  σ(t), σ(u) -FT a(a^n x + x) + a Σ_{b∈A\{a}} a^n(b0 + x) .   (4.4)

Then σ(t) has a summand av such that a^n x -FT v iff σ(u) has a summand aw such that a^n x -FT w.
Proof: Clearly, by symmetry, it suffices to only consider the implication from
left to right. So suppose that σ(t) has a summand av such that an x -FT v;
then there are two cases:
Case 1: t has a variable summand z and σ(z) has av as a summand.
Since t ≃FT u, by Lem. 4.2.1(3), u also has z as summand. Therefore,
since σ(z) has av as a summand, so does σ(u).
Case 2: t has a summand at′ such that an x -FT σ(t′ ).
First, we establish that

  σ(t′) −a^n→ x and var_m(σ(t′)) = ∅ for all 0 ≤ m < n.   (4.5)

From the assumption (4.4) we conclude using Lem. 4.2.1 (2,3) that I(σ(t)) = {a}, var_0(σ(t′)), var_n(σ(t′)) ⊆ {x} and var_m(σ(t′)) = ∅ for all 0 < m < n. It follows that aσ(t′) -FT σ(t), and hence

  aσ(t′) -FT a(a^n x + x) + a Σ_{b∈A\{a}} a^n(b0 + x) .   (4.6)
Now, let ρ1 be a closed substitution with ρ1(x) = 0. Since a^n 0 -FT ρ1(σ(t′)), we have ρ1(σ(t′)) −a^n→ 0. Since var_n(σ(t′)) ⊆ {x}, it follows that either σ(t′) −a^n→ x or σ(t′) −a^n→ 0.
Note that, to establish (4.5), it remains to prove that σ(t′) −a^n→ 0 does not hold and that x ∉ var_0(σ(t′)). For this we consider σ(t′) under another closed substitution ρ2 that satisfies ρ2(x) = c0 with c an action distinct from a. Then, according to (4.6), aρ2(σ(t′)) -FT a(a^n c0 + c0) + a Σ_{b∈A\{a}} a^n(b0 + c0), and since the closed term at the right-hand side does not exhibit the failure trace

  ∅ a ∅ a ⋯ ∅ a A   (where the block ∅ a occurs n + 1 times),

ρ2(σ(t′)) −a^n→ 0 does not hold, and so neither does σ(t′) −a^n→ 0. Furthermore, since a^n x -FT σ(t′), we have a^n c0 -FT ρ2(σ(t′)). So c ∉ I(ρ2(σ(t′))), and hence x ∉ var_0(σ(t′)). This completes the proof of (4.5).
We proceed to prove that u has a summand au′ such that

  σ(u′) −a^n→ x and var_0(σ(u′)) = ∅ .   (4.7)
From (4.5) and the assumption that depth(σ(t)) ≤ n it follows that there exist ℓ < n, a variable y and a term t″ such that t′ −a^ℓ→ y + t″ and σ(y) −a^{n−ℓ}→ x.
Define Z as the set of variables z such that σ(z) has x as a summand, i.e.,
Z = {z ∈ V | x ∈ var 0 (σ(z))} .
Since y has an occurrence in t′ at depth ℓ < n, it follows from (4.5) that
x ∉ var_0(σ(y)), so y ∉ Z. Therefore, we can define a closed substitution ρ3 by

  ρ3(z) = a^{n+1}0   if z = y,
  ρ3(z) = c0         if z ∈ Z,
  ρ3(z) = 0          otherwise,

where c is again an action distinct from a.
Since t −a→ t′ −a^ℓ→ y + t″, ρ3(y) −a^{n+1}→ 0, c ∉ I(t′), and x ∉ var_0(σ(t′)) implies var_0(t′) ∩ Z = ∅, ρ3(t) admits the failure trace

  ∅ a {c} a ∅ a ⋯ ∅ a {a}   (with ℓ + n occurrences of ∅ after the {c}),

which by the assumption t ≃FT u is then also a failure trace of ρ3(u). Since depth(u′) < n, and in view of the definition of ρ3, this clearly means that u has a summand au′ such that c ∉ I(ρ3(u′)) and u′ −a^ℓ→ y + u″ for some term u″. Since σ(y) −a^{n−ℓ}→ x, it follows that σ(u′) −a^n→ x. Moreover, from
c ∉ I(ρ3(u′)) it follows that var_0(u′) ∩ Z = ∅, and hence x ∉ var_0(σ(u′)). So we have now established (4.7).
From the assumption (4.4) we can conclude, by Lem. 4.2.1 (2,3), that act_m(σ(u′)) ⊆ {a} for all 0 ≤ m < n and that var_m(σ(u′)) = ∅ for all 0 < m < n, and (4.7) adds that σ(u′) −a^n→ x and var_0(σ(u′)) = ∅. These facts together easily imply a^n x -FT σ(u′).
The proof is now complete.
We shall now prove that the property PnFT holds for every equation derivable
from a collection of equations between terms of depth less than or equal to n.
By the preceding lemma, it suffices to prove that the transitivity and congruence
rules preserve PnFT .
Proposition 4.4.2 Let E be a finite axiomatization over BCCSP that is sound modulo ≃FT , let n ≥ 1 be a natural number greater than or equal to the depth of any term in E, and suppose E ⊢ t ≈ u and

  t, u -FT a(a^n x + x) + a Σ_{b∈A\{a}} a^n(b0 + x) .

Then t has a summand at′ such that a^n x -FT t′ iff u has a summand au′ such that a^n x -FT u′.
Proof: We prove the proposition by induction on the depth of a normalized
derivation of the equation t ≈ u from E.
To establish the base case, note that if the derivation of t ≈ u consists of
an application of the reflexivity rule, then the proposition is immediate, and if
there exist terms v and w and a substitution σ such that σ(v) = t and σ(w) = u
and (v ≈ w) ∈ E or (w ≈ v) ∈ E, then v ≃FT w by the soundness of E, so the
proposition follows by Lem. 4.4.1.
For the inductive case we distinguish cases according to the last rule applied.
Case 1: the last rule applied is the transitivity rule.
Then there exist a term v and normalized derivations of t ≈ v and v ≈ u. By the soundness of E, v ≃FT u -FT a(a^n x + x) + a Σ_{b∈A\{a}} a^n(b0 + x). Hence, by the induction hypothesis, v has a summand av′ such that a^n x -FT v′, and therefore, again by induction, u has a summand au′ such that a^n x -FT u′.
Case 2: the last rule applied is the congruence rule for a.
Then t = at′ and u = au′ for some terms t′ and u′ , and there exists a
normal derivation of t′ ≈ u′ . Since t consists of a single summand at′ ,
an x -FT t′ . So by the soundness of E, an x -FT u′ .
Case 3: the last rule applied is the congruence rule for +.
Then t = t1 + t2 and u = u1 + u2 for some terms t1, t2, u1 and u2, and there exist normal derivations of t1 ≈ u1 and t2 ≈ u2. Since t has a summand at′ with a^n x -FT t′, so does either t1 or t2. Assume, without loss of generality, that t1 has a summand at′ such that a^n x -FT t′. Since I(u) = {a}, clearly u1 -FT u -FT a(a^n x + x) + a Σ_{b∈A\{a}} a^n(b0 + x). So by the induction hypothesis u1, and hence u, has a summand au′ with a^n x -FT u′.
The proof is now complete.
Now we are in a position to prove the main theorem of this section.
Theorem 4.4.3 Let 1 < |A| < ∞. Then the equational theory of BCCSP
modulo ≃FT is not finitely based.
Proof: Let E be a finite axiomatization over BCCSP that is sound modulo ≃FT . Let n ≥ 1 be greater than or equal to the depth of any term in E.
Note that a(a^n x + x) + a Σ_{b∈A\{a}} a^n(b0 + x) does not contain a summand au′ such that a^n x -FT u′. So according to Prop. 4.4.2, the equation

  a^{n+1}x + a(a^n x + x) + a Σ_{b∈A\{a}} a^n(b0 + x)  ≈  a(a^n x + x) + a Σ_{b∈A\{a}} a^n(b0 + x) ,
which is sound modulo ≃FT , cannot be derived from E. It follows that every finite collection of equations that are sound modulo ≃FT is necessarily incomplete,
and hence the equational theory of BCCSP modulo ≃FT is not finitely based.

4.5 From Ready Pairs to Possible Worlds
In this section we consider all congruences ≃ that are at least as fine as ready equivalence and at least as coarse as possible worlds equivalence
(i.e., ≃PW ⊆ ≃ ⊆ ≃R ). We prove that if 1 < |A| < ∞, then no finite sound and
ground-complete axiomatization for BCCSP modulo ≃ is ω-complete.
In [vG90, vG01], van Glabbeek gave a finite axiomatization that is sound and
ground-complete for BCCSP modulo ≃R . It consists of axioms A1-4 together
with
  R   a(bx + z1) + a(by + z2) ≈ a(bx + by + z1) + a(by + z2) ,
where a, b range over A. In case A is infinite, Groote [Gro90] proved with his
technique of inverted substitutions that this axiomatization is ω-complete. So
in that case, ready equivalence is finitely based over BCCSP.
Note that ≃PW ⊆ ≃RT ⊆ ≃R . Blom, Fokkink and Nain [BFN03] proved
that if |A| = ∞, then no finite axiomatization is sound and ground-complete
for BCCSP modulo ≃RT . They also proved that if |A| < ∞, then a finite sound
and ground-complete axiomatization for BCCSP modulo ≃RT is obtained by
extending axioms A1-4 with
  RT   a(Σ_{i=1}^{|A|} (b_i x_i + b_i y_i) + z) ≈ a(Σ_{i=1}^{|A|} b_i x_i + z) + a(Σ_{i=1}^{|A|} b_i y_i + z) ,
where a, b1 , . . . , b|A| range over A.
In [vG90, vG01], van Glabbeek gave a finite axiomatization that is sound and
ground-complete for BCCSP modulo ≃PW . It consists of axioms A1-4 together
with
  PW   a(bx + by + z) ≈ a(bx + z) + a(by + z) ,
where a, b range over A. If A is infinite, then Groote’s technique of inverted
substitutions can be applied in a straightforward fashion to prove that this
axiomatization is ω-complete. So in that case, possible worlds equivalence is
finitely based over BCCSP.
To prove the result mentioned above, originally we started out with the
following infinite family of equations en for n > |A|:
  a(x1 + ⋯ + xn) + Σ_{i=1}^{n} a(x1 + ⋯ + x_{i−1} + x_{i+1} + ⋯ + xn)  ≈  Σ_{i=1}^{n} a(x1 + ⋯ + x_{i−1} + x_{i+1} + ⋯ + xn) .
These equations are sound modulo ≃PW . Namely, it is not hard to see that for
each closed substitution ρ, the possible worlds of the summand ρ(a(x1 +· · ·+xn ))
at the left-hand side of ρ(en ) are included in the possible worlds of the right-hand
side of ρ(en ).
However, our expectation that the equations en for n > |A| would obstruct
a finite ω-complete axiomatization turned out to be false. Namely, en can be
obtained by (1) applying to en−1 a substitution ρ with ρ(xi ) = xi + xn for
i = 1, . . . , n − 1, and (2) adding the summand a(x1 + ⋯ + x_{n−1}) at the left- and right-hand side of the resulting equation. Hence, from e_{|A|+1} (together with
A1-3) we can derive the en for n > |A|.
Therefore we then moved to a more complicated family of equations (see
Def. 4.5.8), similar in spirit to the equations en . However, while cancellation of
the summand a(x1 + · · · + xn−1 ) from en for n > |A| + 1 leads to an equation
that is again sound modulo ≃PW , such a cancellation is not possible for the new
family of equations (see Lem. 4.5.10). We prove that they do obstruct a finite
ω-complete axiomatization (see Thm. 4.5.14).
4.5.1 Cover Equations
We introduce a class of cover equations (cf. Section 2.1.5), and show that they
are sound modulo ≃PW . We prove that each equation that involves terms of
depth ≤ 1 and that is sound modulo ≃R can be derived from the cover equations.
Moreover, if such an equation contains no more than k summands at its left- and right-hand side, then it can be derived from cover equations containing no
more than k summands at their left- and right-hand sides (see Prop. 4.5.7).
Definition 4.5.1 A term Σ_{i∈I} aY_i is a cover of a term aX if:
1. ∀Z ⊆ X with |Z| ≤ |A| − 1, ∃ i∈I (Z ⊆ Y_i ⊆ X); and
2. ∀Z ⊆ X with |Z| = |A|, ∃ i∈I (Z ⊆ Y_i).
This is denoted by Σ_{i∈I} aY_i D aX. We say that aX + Σ_{i∈I} aY_i ≈ Σ_{i∈I} aY_i is a cover equation.
Example 4.5.2 Σ_{i=1}^{n} a(x1 + ⋯ + x_{i−1} + x_{i+1} + ⋯ + xn) D a(x1 + ⋯ + xn) for n > |A|. Hence the equations that were given at the start of this section are cover equations.
If |X| ≤ |A| − 1, then by Def. 4.5.1(1), t D aX implies that aX is a summand
of t. So the only interesting cover equations are the ones where |X| ≥ |A| (cf.
Def. 4.5.8).
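The two conditions of Def. 4.5.1 are purely combinatorial, so small instances can be checked mechanically. The following Python sketch (not part of the formal development; the representation of variable sets and the helper name is_cover are ad hoc) tests the cover conditions and confirms Example 4.5.2 for n = 3 and |A| = 2.

    from itertools import combinations

    def is_cover(Ys, X, alphabet_size):
        # Def. 4.5.1: does the term  sum_i a Y_i  cover the term  a X ?
        X = set(X)
        # Condition 1: every Z <= X with |Z| <= |A|-1 has some Y_i with Z <= Y_i <= X.
        for k in range(alphabet_size):
            for Z in combinations(X, k):
                if not any(set(Z) <= set(Y) <= X for Y in Ys):
                    return False
        # Condition 2: every Z <= X with |Z| = |A| is contained in some Y_i.
        for Z in combinations(X, alphabet_size):
            if not any(set(Z) <= set(Y) for Y in Ys):
                return False
        return True

    # Example 4.5.2 with n = 3 > |A| = 2:
    X = {"x1", "x2", "x3"}
    Ys = [X - {x} for x in X]        # a(x2+x3), a(x1+x3), a(x1+x2)
    print(is_cover(Ys, X, 2))        # True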
We proceed to prove that the cover equations are sound modulo ≃PW .
Lemma 4.5.3 If Σ_{i∈I} aY_i D aX, then aX + Σ_{i∈I} aY_i ≃PW Σ_{i∈I} aY_i .
Proof: Let ρ be an arbitrary closed substitution. It suffices to show that the possible worlds of ρ(aX) are also possible worlds of ρ(Σ_{i∈I} aY_i). Let ap be a possible world of ρ(aX). Then p is a possible world of ρ(X). By the definition of possible worlds equivalence, p has exactly |I(ρ(X))| summands, one summand b p_b for each b ∈ I(ρ(X)); and for each b ∈ I(ρ(X)) there is an x_b ∈ X such that ρ(x_b) −b→ q_b and p_b is a possible world of q_b. Let Z = {x_b | b ∈ I(ρ(X))}. Then I(ρ(Z)) = I(ρ(X)). Clearly p is a possible world of ρ(Z). Note that |Z| ≤ |I(ρ(X))|. We distinguish two cases.
Case 1: |I(ρ(X))| ≤ |A| − 1.
By Def. 4.5.1(1), Z ⊆ Y_{i0} ⊆ X for some i0 ∈ I. Then clearly p is a possible world of ρ(Y_{i0}). Thus ap is a possible world of ρ(Σ_{i∈I} aY_i).
Case 2: |I(ρ(X))| = |A|.
By Def. 4.5.1(2), Z ⊆ Y_{i0} for some i0 ∈ I. Then I(ρ(Z)) ⊆ I(ρ(Y_{i0})), and hence, since I(ρ(Z)) = A, it follows that I(ρ(Y_{i0})) = I(ρ(Z)). From Z ⊆ Y_{i0} and I(ρ(Y_{i0})) = I(ρ(Z)) we conclude that every possible world of ρ(Z) is a possible world of ρ(Y_{i0}). Since p is a possible world of ρ(Z), it follows that p is a possible world of ρ(Y_{i0}). Thus ap is a possible world of ρ(Σ_{i∈I} aY_i).
The proof is now complete.
We proceed to prove that each equation t ≈ u that is sound modulo ≃R , where t and u have depth 1 and contain no more than k summands, can be derived from the cover equations with |I| ≤ k (see Prop. 4.5.7). First we present some notation.
Definition 4.5.4 C^k = {aX + Σ_{i∈I} aY_i ≈ Σ_{i∈I} aY_i | Σ_{i∈I} aY_i D aX ∧ |I| ≤ k} for k ≥ 0.
Definition 4.5.5 R_1 denotes the set of equations t ≈ u with depth(t) = depth(u) ≤ 1 that are sound modulo ≃R .
Let S(t) denote the number of distinct summands (modulo A1-4) unequal to 0 of term t. For k ≥ 0,

  R_1^k = {t ≈ u ∈ R_1 | S(t) ≤ k ∧ S(u) ≤ k} .
In the remainder of this section we assume that A = {a1 , . . . , a|A| }.
We present part of the proof of Prop. 4.5.7 as a separate lemma, as this
lemma will be reused in the proof of Lem. 4.5.11.
Lemma 4.5.6 If t ≈ u ∈ R1 , then t and u contain exactly the same summands
aX with |X| ≤ |A| − 1.
Proof: Let aX be a summand of t where X = {x1, . . . , xk} with k ≤ |A| − 1. We define ρ(x_i) = a_i 0 for i = 1, . . . , k and ρ(y) = a_{k+1} 0 for y ∉ X. Then (a, {a1, . . . , ak}) is a ready pair of ρ(t), so it must be a ready pair of ρ(u). Since depth(u) ≤ 1, this implies that aX is a summand of u.
By symmetry, each summand aX with |X| ≤ |A| − 1 of u is also a summand of t.
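The substitution arguments used here and in Prop. 4.5.7 below only inspect ready pairs of closed terms of small depth, which can be enumerated directly. The Python sketch below (an illustration only; closed terms are represented ad hoc as lists of (action, subterm) pairs) computes all ready pairs of a closed term and reproduces the check from the proof above for |A| = 3 and X = {x1, x2}.

    def initials(p):
        # I(p): the actions a closed term can perform immediately
        return frozenset(a for a, _ in p)

    def ready_pairs(p):
        # all pairs (trace, I(p')) with p --trace--> p'
        pairs = {((), initials(p))}
        for a, q in p:
            for trace, menu in ready_pairs(q):
                pairs.add(((a,) + trace, menu))
        return pairs

    rho_aX = [("a", [("a1", []), ("a2", [])])]   # rho(aX) with rho(x1) = a1 0, rho(x2) = a2 0
    print((("a",), frozenset({"a1", "a2"})) in ready_pairs(rho_aX))   # True: ready pair (a, {a1, a2})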
Proposition 4.5.7 C^k ⊢ R_1^k for k ≥ 0.

Proof: Let t ≈ u ∈ R_1^k. Consider a summand aX of t with |X| ≥ |A|. We prove that a subset of the summands of u forms a cover of aX.
Case 1: Z = {z1, . . . , zk} ⊆ X with k ≤ |A| − 1.
We define ρ(z_i) = a_i 0 for i = 1, . . . , k, ρ(x) = 0 for x ∈ X \ Z, and ρ(y) = a_{|A|} 0 for y ∉ X. The ready pair (a, {a1, . . . , ak}) of ρ(aX) must also be a ready pair of ρ(u). Since depth(u) ≤ 1, this implies that there is a summand aY of u with Z ⊆ Y ⊆ X.
Case 2: Z = {z1, . . . , z_{|A|}} ⊆ X.
We define ρ(z_i) = a_i 0 for i = 1, . . . , |A| and ρ(y) = 0 for y ∉ Z. The ready pair (a, A) of ρ(aX) must also be a ready pair of ρ(u). Since depth(u) ≤ 1, this implies that there is a summand aY of u with Z ⊆ Y .
Concluding, in view of Def. 4.5.1, u = u1 + u2 with u1 D aX. Since S(u1) ≤ S(u) ≤ k, we have aX + u1 ≈ u1 ∈ C^k. So C^k ⊢ aX + u ≈ u.
By Lem. 4.2.1(3) and 4.5.6, each summand x ∈ V and aX with |X| ≤ |A| − 1 of t is a summand of u. Moreover, C^k ⊢ aX + u ≈ u for each summand aX of t with |X| ≥ |A|. Hence, C^k ⊢ t + u ≈ u.
By symmetry, also C^k ⊢ t + u ≈ t. So C^k ⊢ t ≈ t + u ≈ u.
4.5.2 Cover Equations a1 X_n + Θ_n ≈ Θ_n for n ≥ |A|
We now turn our attention to a special kind of cover equation a1 Xn + Θn ≈ Θn
for n ≥ |A|, where Θn contains n + 1 summands (see Def. 4.5.8 and Lem. 4.5.9).
If a term u is obtained by eliminating one or more summands from Θn , then
a1 Xn + u 6≃R u (see Lem. 4.5.10); moreover, if a summand of a term u is
not a summand of a1 Xn + Θn , then Θn 6≃R u (see Lem. 4.5.11). These two
facts together imply that a1 Xn + Θn ≈ Θn cannot be derived from C n (see
Prop. 4.5.13). Prop. 4.5.7 and 4.5.13 form the cornerstones of the proof of
Thm. 4.5.14, which contains the main result of this section.
Definition 4.5.8 Let n ≥ |A|. Let x1, . . . , xn, x̂_{|A|}, . . . , x̂_n be distinct variables. Let X_{|A|−1} and X_n denote {x1, . . . , x_{|A|−1}} and {x1, . . . , xn}, respectively. We define that Θ_n denotes the term

  a1 X_{|A|−1} + Σ_{i=1}^{|A|−1} a1 (X_n \ {x_i}) + Σ_{i=|A|}^{n} a1 (X_{|A|−1} ∪ {x_i, x̂_i}) .
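For a concrete picture (an illustrative instantiation only), take |A| = 2 and n = 2. Then X_1 = {x1}, X_2 = {x1, x2}, and

  Θ_2 = a1 x1 + a1 x2 + a1 (x1 + x2 + x̂2) ,

which indeed has n + 1 = 3 summands and, by Lemma 4.5.9 below, covers a1 X_2 = a1 (x1 + x2).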
Lemma 4.5.9 Θ_n D a1 X_n for n ≥ |A|.

Proof: Let Z ⊆ X_n with |Z| ≤ |A| − 1. We need to find a summand a1 Y of Θ_n with Z ⊆ Y ⊆ X_n. We distinguish two cases. On the one hand, if Z ⊆ X_{|A|−1}, then Z ⊆ X_{|A|−1} ⊆ X_n. On the other hand, if Z ⊈ X_{|A|−1}, then Z ⊆ X_n \ {x_i} ⊆ X_n for some 1 ≤ i ≤ |A| − 1.
Let Z ⊆ X_n with |Z| = |A|. We need to find a summand a1 Y of Θ_n with Z ⊆ Y . Again there are two cases. On the one hand, if X_{|A|−1} ⊂ Z, then Z ⊆ X_{|A|−1} ∪ {x_i, x̂_i} for some |A| ≤ i ≤ n. On the other hand, if X_{|A|−1} ⊄ Z, then Z ⊆ X_n \ {x_i} for some 1 ≤ i ≤ |A| − 1.
Lemma 4.5.10 Let n ≥ |A|. If the summands of u are a proper subset of the
summands of Θn , then a1 Xn + u 6≃R u.
Proof: Suppose that all summands of u are summands of Θn , but that some
summand a1 Y of Θn is not a summand of u. We consider the three possible
forms of Y , and for each case give a closed substitution ρ such that some ready
pair of ρ(a1 Xn ) is not a ready pair of ρ(u).
Case 1: Y = X|A|−1 .
We define ρ(xi ) = ai 0 for i = 1, . . . , |A| − 1, ρ(xi ) = 0 for i = |A|, . . . , n,
and ρ(y) = a|A| 0 for y 6∈ Xn . Then the ready pair (a1 , {a1 , . . . , a|A|−1 })
of ρ(a1 Xn ) is not a ready pair of ρ(u).
Case 2: Y = Xn \ {xi0 } for some 1 ≤ i0 ≤ |A| − 1.
We define ρ(xi ) = ai 0 for i = 1, . . . , i0 − 1, i0 + 1, . . . , |A|, ρ(xi ) = 0 for
i = i0 and i = |A| + 1, . . . , n, and ρ(y) = ai0 0 for y 6∈ Xn . Then the ready
pair (a1 , {a1 , . . . , ai0 −1 , ai0 +1 , . . . , a|A| }) of ρ(a1 Xn ) is not a ready pair of
ρ(u).
Case 3: Y = X|A|−1 ∪ {xi0 , x̂i0 } for some |A| ≤ i0 ≤ n.
We define ρ(xi ) = ai 0 for i = 1, . . . , |A| − 1, ρ(xi0 ) = a|A| 0, and ρ(y) = 0
for y 6∈ X|A|−1 ∪{xi0 }. Then the ready pair (a1 , {a1 , . . . , a|A| }) of ρ(a1 Xn )
is not a ready pair of ρ(u).
The proof is now complete.
Lemma 4.5.11 Let n ≥ |A|. If Θn ≃R u, then each summand of u is a
summand of a1 Xn + Θn .
Proof: Let Θn ≃R u. By Lem. 4.2.1(1), depth(u) = 1. By Lem. 4.2.1(3), u does
not have summands x ∈ V , so clearly each summand of u is of the form a1 Y .
If |Y | ≤ |A| − 1, then by Lem. 4.5.6, a1 Y is a summand of Θn . Let |Y | ≥ |A|;
we prove that a1 Y is a summand of a1 Xn + Θn .
By Lem. 4.2.1(3), Y ⊆ Xn ∪ {x̂i | i=|A|, . . . , n}. We distinguish two cases.
Case 1: x̂i ∈ Y for some |A| ≤ i ≤ n.
Suppose, towards a contradiction, that there is a y ∈ Y \(X|A|−1 ∪{xi , x̂i }).
We define ρ(y) = a1 0, ρ(x̂i ) = a2 0, and ρ(z) = 0 for z 6∈ {y, x̂i }. The
ready pair (a1 , {a1 , a2 }) of ρ(a1 Y ) is not a ready pair of ρ(Θn ), contradicting Θn ≃R u.
Suppose, towards a contradiction, that there is an x ∈ (X|A|−1 ∪{xi , x̂i }) \
Y . Note that x̂i ∈ Y implies x 6= x̂i . We define ρ(x) = a1 0, ρ(x̂i ) = a2 0,
and ρ(z) = 0 for z 6∈ {x, x̂i }. The ready pair (a1 , {a2 }) of ρ(a1 Y ) is not a
ready pair of ρ(Θn ), contradicting Θn ≃R u.
Hence, Y = X|A|−1 ∪ {xi , x̂i }.
Case 2: Y ⊆ Xn .
Since |Y | ≥ |A|, there is a Z = {z1 , . . . , z|A|−1 } ⊆ Y with Z 6⊆ X|A|−1 .
We define ρ(zi ) = ai 0 for i = 1, . . . , |A| − 1, ρ(y) = 0 for y ∈ Y \ Z, and
ρ(z) = a|A| 0 for z 6∈ Y . The ready pair (a1 , {a1 , . . . , a|A|−1 }) of ρ(a1 Y )
must be a ready pair of ρ(Θn ), which implies that there is a summand
a1 Y ′ of Θn with Z ⊆ Y ′ ⊆ Y . Since Z 6⊆ X|A|−1 and Y ⊆ Xn , it follows
that Y ′ = Xn \ {xi0 } for some 1 ≤ i0 ≤ |A| − 1. Hence, either Y = Xn or
Y = Xn \ {xi0 }.
Concluding, each summand of u is a summand of a1 Xn + Θn .
The following example shows that Lem. 4.5.11 would fail if |A| = 1.
Example 4.5.12 Let |A| = 1 and n = 1. Note that Θ1 = a1 0 + a1 (x1 + x̂1 )
and a1 X1 = a1 x1 . Since |A| = 1, a1 0 + a1 (x1 + x̂1 ) ≃R a1 x̂1 + a1 0 + a1 (x1 + x̂1 ).
However, a1 x̂1 is not a summand of a1 x1 + a1 0 + a1 (x1 + x̂1 ).
Proposition 4.5.13 C^n ⊬ a1 X_n + Θ_n ≈ Θ_n for n ≥ |A|.
Proof: Suppose, towards a contradiction, that there is a derivation of a1 Xn +
Θn ≈ Θn using only equations in C n : a1 Xn +Θn = u0 ≈ u1 ≈ · · · ≈ uj = Θn for
some j ≥ 1. By Lem. 4.2.1(1), u1 , . . . , uj have depth 1. Since u0 = a1 Xn + Θn ,
uj = Θn , and the equations in C n are of the form aY + v ≈ v, there must be
a 1 ≤ i ≤ j such that ui−1 = a1 Xn + ui and a1 Xn is not a summand of ui .
Since Θn ≃R ui , Lem. 4.5.11 implies that all summands of ui are summands
of Θn . Since a1 Xn + ui ≃R ui , Lem. 4.5.10 implies that ui = Θn . Hence,
a1 Xn + Θn ≈ Θn can be derived using a single application of an equation
a1 Y + v ≈ v ∈ C n . Then σ(Y ) = Xn and σ(v) + w = Θn for some substitution
σ and term w. Since a1 Xn + σ(v) ≃R σ(v) and σ(v) + w = Θn , Lem. 4.5.10
implies that σ(v) = Θn . However, a1 Y + v ≈ v ∈ C n implies S(v) ≤ n, and v
does not contain summands from V , so clearly S(σ(v)) ≤ n. This contradicts
the fact that S(σ(v)) = S(Θ_n) = n + 1. Concluding, C^n ⊬ a1 X_n + Θ_n ≈ Θ_n.

Theorem 4.5.14 Let 1 < |A| < ∞. Let ≃ be a congruence that is included in
ready equivalence and includes possible worlds equivalence. Then the equational
theory of BCCSP modulo ≃ is not finitely based.
Proof: Let E be a finite axiomatization that is sound and ground-complete
for BCCSP modulo a congruence ≃ that is included in ready equivalence and
includes possible worlds equivalence. Suppose, towards a contradiction, that
E is ω-complete. By Lem. 4.5.9 and 4.5.3, a1 Xn + Θn ≈ Θn for n ≥ |A| is
sound modulo ≃PW , so also modulo ≃. Then these equations can be derived
from E. Let E1 denote the equations in E of depth ≤ 1. By Lem. 4.2.1(1),
E1 ⊢ a1 Xn + Θn ≈ Θn for n ≥ |A|.
Choose an n ≥ |A| such that S(t) ≤ n and S(u) ≤ n for each t ≈ u ∈ E1 .
Since E1 is sound modulo ≃, so also modulo ≃R , it follows that E1 ⊆ R1n .
By Prop. 4.5.7, C n ⊢ E1 . This implies that C n ⊢ a1 Xn + Θn ≈ Θn , which
contradicts Prop. 4.5.13.
Concluding, E is not ω-complete.
4.6 Simulation
In this section we consider simulation equivalence ≃S . In [vG90, vG01], van
Glabbeek gave a finite axiomatization that is sound and ground-complete for
BCCSP modulo ≃S . It consists of axioms A1-4 together with
  S   a(x + y) ≈ a(x + y) + ax ,
where a ranges over A. In case A is infinite, Groote’s technique of inverted
substitutions from [Gro90] can be applied in a straightforward fashion to prove
that van Glabbeek’s axiomatization is ω-complete; see [CF06].
An infinite supply of actions is crucial in this particular application of the
inverted substitutions technique, for we shall prove below that the equational
theory of BCCSP modulo ≃S does not have a finite basis if 1 < |A| < ∞. The
cornerstone for this negative result is the following infinite family of equations:
  a(x + Ψ_n) + Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n  ≈  Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n   (n ≥ 0) .
Here, the Φ_n are defined inductively as follows:

  Φ_0 = 0
  Φ_{n+1} = Σ_{b∈A} bΦ_n .

Moreover, the Ψ_n and Ψ_n^θ are defined by:

  Ψ_n = Σ_{b1⋯bn ∈ A^n} b1 ⋯ bn 0
  Ψ_n^θ = Σ_{b1⋯bn ∈ A^n \ {θ}} b1 ⋯ bn 0    for θ ∈ A^n .
For any closed term p with depth(p) ≤ n, clearly p -S Φn . So in particular,
Ψn -S Φn .
It is not hard to see that the equations above are sound modulo ≃S . The idea is that, given a closed substitution ρ, either depth(ρ(x)) < n, in which case a(ρ(x) + Ψ_n) is simulated by aΦ_n; or some b1 ⋯ bn ∈ A^n is a trace of ρ(x), in which case a(ρ(x) + Ψ_n) is simulated by a(ρ(x) + Ψ_n^{b1⋯bn}).
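As an illustration (an instantiation spelled out for concreteness), take A = {a, b} and n = 1. Then Φ_1 = Ψ_1 = a0 + b0, Ψ_1^a = b0 and Ψ_1^b = a0, so the equation reads

  a(x + a0 + b0) + a(x + b0) + a(x + a0) + a(a0 + b0)  ≈  a(x + b0) + a(x + a0) + a(a0 + b0) .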
We shall prove below that ≃S is not finitely based, using the proof-theoretic
technique, by showing that whenever an equation t ≈ u is derivable from a set
of sound axioms of depth ≤ n, then it satisfies the following property P_n^S :

  If t, u -S Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n , then t has a summand similar to a(x + Ψ_n) iff u has a summand similar to a(x + Ψ_n).
We shall first establish in Lem. 4.6.2 that an equation satisfies PnS if it is a
substitution instance of a sound equation of terms with a depth ≤ n. Then,
in Prop. 4.6.3, we prove, using Lem. 4.6.2, that PnS holds for every equation
derivable from a collection of sound equations E, provided that the depth of
the terms in E does not exceed n. From the proposition we can directly infer
that the infinite family of equations above obstructs a finite basis, because the
left-hand side contains a summand similar to a(x + Ψn ), while the right-hand
side does not.
The following lemma constitutes an important step in the proof that PnS is
preserved by substitution instances of sound equations of terms with a depth
≤ n.
Lemma 4.6.1 If a(x + Ψ_n) -S at -S Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n , then at ≃S a(x + Ψ_n).
Proof: Since x + Ψ_n -S t, by Lem. 4.2.1(3), x is a summand of t. Clearly, there exists a term t′ that does not have x as a summand such that t = x + t′ (modulo A3). Since a(x + t′) -S Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n , by Lem. 4.2.1(3), t′ is a closed term.
We prove that t′ -S Ψ_n. Consider a closed substitution ρ with ρ(x) = a^{n+1}0. Since a(ρ(x) + t′) -S Σ_{θ∈A^n} a(ρ(x) + Ψ_n^θ) + aΦ_n and clearly ρ(x) + t′ -S Φ_n does not hold, it follows that ρ(x) + t′ -S ρ(x) + Ψ_n^θ for some θ ∈ A^n. Hence t′ -S a^{n+1}0 + Ψ_n^θ. Since at -S Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n , by Lem. 4.2.1(1), depth(t′) ≤ depth(t) ≤ n. So t′ -S a^n 0 + Ψ_n^θ -S Ψ_n.
Then at = a(x + t′) -S a(x + Ψ_n), and, by assumption, a(x + Ψ_n) -S at, so at ≃S a(x + Ψ_n).
We shall now establish that substitution instances of sound equations of
depth ≤ n satisfy PnS .
Lemma 4.6.2 Suppose t ≃S u, let n > 1 be a natural number greater than or equal to the depth of t and u, and suppose σ(t), σ(u) -S Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n . Then σ(t) has a summand similar to a(x + Ψ_n) if and only if σ(u) has a summand similar to a(x + Ψ_n).
Proof: Clearly, by symmetry, it suffices to only consider the implication from
left to right. So suppose that σ(t) has a summand similar to a(x + Ψn ); then
there are two cases:
Case 1: t has a variable summand z and σ(z) has a summand similar to a(x +
Ψn ).
Since t ≃S u, by Lem. 4.2.1(3), u also has z as summand. Since σ(z) has
a summand similar to a(x + Ψn ), the same holds for σ(u).
Case 2: t has a summand at′ and σ(at′ ) ≃S a(x + Ψn ).
Note that from σ(t′ ) ≃S x + Ψn it follows by Lem. 4.2.1(3) that x is a
summand of σ(t′ ), and this means that t′ has a variable summand y with
x a summand of σ(y).
The following claim constitutes a crucial step in the remainder of the proof
for this case.
Claim. The term u has a summand au′ such that, for every m ≥ 0 and for every variable z, if t′ −a1⋯am→ z + v for some term v, then u′ −a1⋯am→ z + w for some term w.
Proof of the claim: We consider the terms t and u under a special closed
substitution ρ, that we now proceed to define. Let a and b be distinct
actions, and let ⌜·⌝ : V → Z_{>0} be an injection (which exists since V is countable); then ρ is defined by

  ρ(z) = a^{⌜z⌝·n} b0 for all z ∈ V .
From the assumption that t ≃S u, it follows that ρ(t) ≃S ρ(u). Since ρ(t) −a→ ρ(t′), there exists a closed term p such that ρ(u) −a→ p and ρ(t′) -S p.
To establish that u has a summand au′ such that ρ(t′) -S ρ(u′), we argue that u cannot have a variable summand z such that ρ(z) −a→ p. Recall that t′ has a variable summand y; since ρ(y) = a^{⌜y⌝·n} b0 and ρ(t′) -S p, it follows that b has an occurrence at depth ⌜y⌝·n in p. Now assume towards a contradiction that z is a variable summand of u such that ρ(z) −a→ p. Then p = a^{⌜z⌝·n−1} b0, which, since clearly ⌜y⌝·n ≠ ⌜z⌝·n − 1, contradicts that b occurs at depth ⌜y⌝·n in p. So u has a summand au′ such that ρ(t′) -S ρ(u′).
Now suppose that t′ −a1⋯am→ z + v for some term v. Then, since ρ(t′) -S ρ(u′), there exists a closed term q such that ρ(u′) −a1⋯am→ q and ρ(z + v) -S q.
We shall now first prove that there exists u″ such that u′ −a1⋯am→ u″ and ρ(u″) = q. Assume towards a contradiction that there is no such u″. Then clearly there exist ℓ < m, a variable z′, and a term u‴ such that u′ −a1⋯aℓ→ z′ + u‴ and ρ(z′) −aℓ+1⋯am→ q. Since ρ(z′) = a^{⌜z′⌝·n} b0, it follows that q = a^{⌜z′⌝·n−(m−ℓ)} b0, and hence the single occurrence of b in q is at depth ⌜z′⌝·n − (m − ℓ). Since 0 < m − ℓ < n, it follows that b does not occur at depth ⌜z⌝·n in q; this contradicts ρ(z + v) -S q.
So there exists u″ such that u′ −a1⋯am→ u″ and ρ(u″) = q. Since ρ(z + v) -S q = ρ(u″) and ρ(z) = a^{⌜z⌝·n} b0, ρ(u″) −a^{⌜z⌝·n}→ b0. Hence, since depth(u″) < n and ⌜z⌝ > 0, there exist a variable z′, a term w, and ℓ < n such that u″ −a^ℓ→ z′ + w and ρ(z′) −a^{⌜z⌝·n−ℓ}→ b0. From the definition of ρ it is clear that ⌜z⌝·n − ℓ = ⌜z′⌝·n. Since ℓ ≤ depth(u″) < n, it follows that ℓ = 0, so ⌜z′⌝ = ⌜z⌝, and hence, since ⌜·⌝ is an injection, z′ = z. We have established that u″ = z + w, and thereby the proof of the claim is complete.
Now consider any a1 ⋯ an ∈ A^n. Since Ψ_n -S σ(t′) and depth(t′) < n, there exist 0 ≤ m < n, a variable z and a term v such that t′ −a1⋯am→ z + v and am+1 ⋯ an is a trace of σ(z). By our claim above, u′ −a1⋯am→ z + w for some term w. Since am+1 ⋯ an is a trace of σ(z), it follows that a1 ⋯ an is a trace of σ(u′). This holds for all a1 ⋯ an ∈ A^n, so Ψ_n -S σ(u′).
Furthermore, recall that y is a summand of t′, and that x is a summand of σ(y). Since t′ −λ→ t′ (with λ the empty sequence of actions), by our claim it follows that u′ −λ→ y + w for some term w. So y is a summand of u′, and hence x is a summand of σ(u′).
We conclude that x + Ψ_n -S σ(u′), and hence a(x + Ψ_n) -S aσ(u′). From the assumption of the lemma that σ(u) -S Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n it follows that aσ(u′) -S Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n . So, by Lem. 4.6.1, aσ(u′) ≃S a(x + Ψ_n).
The proof is now complete.
We shall now prove that PnS holds for every equation derivable from a collection of equations between terms of depth less than or equal to n. By the
preceding lemma, it only remains to prove that the transitivity and congruence
rules preserve PnS .
Proposition 4.6.3 Let E be a finite axiomatization over BCCSP that is sound modulo ≃S , let n be a natural number greater than the depth of any term in E, and suppose E ⊢ t ≈ u and t, u -S Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n . Then t has a summand similar to a(x + Ψ_n) iff u has a summand similar to a(x + Ψ_n).
Proof: We prove the proposition by induction on the depth of a normalized
derivation of the equation t ≈ u from E.
To establish the base case, note that if the derivation of t ≈ u consists of
an application of the reflexivity rule, then the proposition is immediate, and if
there exist terms v and w and a substitution σ such that σ(v) = t, σ(w) = u,
and (v ≈ w) ∈ E or (w ≈ v) ∈ E, then v ≃S w by the soundness of E, so the
proposition follows from Lem. 4.6.2.
For the inductive case we distinguish cases according to the last rule applied.
Case 1: the last rule applied is the transitivity rule.
Then there exist a term v and normalized derivations of t ≈ v and v ≈ u. By the soundness of E, v ≃S u -S Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n . So, by the induction hypothesis, v has a summand similar to a(x + Ψ_n), and hence, again by the induction hypothesis, u has a summand similar to a(x + Ψ_n).
Case 2: the last rule applied is the congruence rule for a.
Then t = at′ and u = au′ for some terms t′ and u′ , and there exists
a normal derivation of t′ ≈ u′ . Since t consists of a single summand,
at′ ≃S a(x + Ψn ). So, by the soundness of E, u = au′ ≃S a(x + Ψn ).
Case 3: the last rule applied is the congruence rule for +.
Then t = t1 + t2 and u = u1 + u2 for some terms t1, t2, u1 and u2, and there exist normal derivations of t1 ≈ u1 and t2 ≈ u2. Since t has a summand similar to a(x + Ψ_n), so does either t1 or t2. Assume, without loss of generality, that t1 has a summand similar to a(x + Ψ_n). Then clearly I(t1) = I(u1) = {a}, so t1, u1 -S t, u -S Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n . By the induction hypothesis, it follows that u1, and hence u, has a summand similar to a(x + Ψ_n).
The proof is now complete.
Now we are in a position to prove the main theorem of this section.
Theorem 4.6.4 Let 1 < |A| < ∞. Then the equational theory of BCCSP
modulo ≃S is not finitely based.
Proof: Let E be a finite axiomatization over BCCSP that is sound modulo ≃S . Let n > 1 be greater than or equal to the depth of any term in E.
Note that Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n does not contain a summand similar to a(x + Ψ_n). So according to Prop. 4.6.3, the equation

  a(x + Ψ_n) + Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n  ≈  Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n ,
which is sound modulo ≃S , cannot be derived from E. It follows that every finite collection of equations that are sound modulo ≃S is necessarily incomplete,
and hence the equational theory of BCCSP modulo ≃S is not finitely based.

4.7 Completed Simulation
In this section we consider completed simulation equivalence ≃CS . In [vG90, vG01], van Glabbeek gave a finite axiomatization that is sound and ground-complete for BCCSP modulo ≃CS . It consists of axioms A1-4 together with

  CS   a(bx + y + z) ≈ a(bx + y + z) + a(bx + z) ,
where a, b range over A. We prove that the equational theory of BCCSP modulo
≃CS does not have a finite basis if |A| > 1. (Note that our proof in this section
also works in case |A| = ∞, whereas all the other proofs of negative results
assume |A| < ∞). The cornerstone for this negative result is the following
infinite family of equations:
  a^n x + a^n 0 + a^n(x + y) ≈ a^n 0 + a^n(x + y)   (n ≥ 1) .
It is not hard to see that these equations are sound modulo ≃CS . The idea is
that, given a closed substitution ρ, either ρ(x) cannot perform any action, in
which case ρ(an x) is completed simulated by an 0, or x can perform some action,
in which case ρ(an x) is completed simulated by ρ(an (x + y)).
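For finite closed terms, completed simulation can be decided by a direct recursion, so closed instances of this family are easy to test. The Python sketch below (illustrative only; closed terms are represented ad hoc as lists of (action, subterm) pairs, and the function names are not from the dissertation) confirms the instance n = 1 under ρ(x) = b0 and ρ(y) = 0.

    def csim(p, q):
        # p is completed simulated by q: every transition of p is matched by q,
        # and if p is a deadlock then q must be a deadlock as well
        if not p:
            return not q
        return all(any(a == b and csim(p1, q1) for (b, q1) in q) for (a, p1) in p)

    def cs_equivalent(p, q):
        return csim(p, q) and csim(q, p)

    zero = []
    bx = [("b", zero)]                            # rho(x) = b0
    lhs = [("a", bx), ("a", zero), ("a", bx)]     # a rho(x) + a0 + a(rho(x) + rho(y))
    rhs = [("a", zero), ("a", bx)]                #            a0 + a(rho(x) + rho(y))
    print(cs_equivalent(lhs, rhs))                # True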
We shall prove that there cannot be a finite sound axiomatization E for
BCCSP modulo ≃CS from which the equations above can all be derived. We
apply the proof-theoretic technique, showing that if the axioms in E have depth
smaller than n and the equation t ≈ u is derivable from E, then it satisfies the
following property PnCS :
If t, u -CS an 0 + an (x + y), then t has a summand completed similar
to an x iff u has a summand completed similar to an x.
The crucial step is to prove that PnCS holds for all substitution instances of
sound equations of depth ≤ n (see Lem. 4.7.1). The proof that the transitivity
and congruence rules preserve PnCS , in Prop. 4.7.3, will then be analogous to our
proof in the previous section that they preserve PnS . We infer that the infinite
family of equations above obstructs a finite basis, by noting that the left-hand
sides of the equations have a summand an x, while the right-hand sides do not.
The following lemma constitutes a crucial step in the proof that substitution instances of sound equations of depth ≤ n satisfy P_n^CS .
Lemma 4.7.1 If at -CS a^n 0 + a^n(x + y) and at −a^n→ t′ with t′ = x, then at = a^n x.
Proof: We first prove by induction on n that if at -CS a^n 0 + a^n(x + y), then at = a^n 0 or at = a^n x or at = a^n y or at = a^n(x + y).
Suppose n = 1. Then I(t) = ∅ by Lem. 4.2.1(2) and var_0(t) ⊆ {x, y} by Lem. 4.2.1(3), so t = 0 or t = x or t = y or t = x + y.
Suppose n > 1. Then by Lem. 4.2.1(2) I(t) = {a} and by Lem. 4.2.1(3) var_0(t) = ∅, so t = Σ_{i∈I} at_i with I ≠ ∅. Clearly, at_i -CS a^{n−1}0 + a^{n−1}(x + y), so by the induction hypothesis at_i = a^{n−1}0 or at_i = a^{n−1}x or at_i = a^{n−1}y or at_i = a^{n−1}(x + y), for all i ∈ I.
It remains to establish that at_i = at_j for all i, j ∈ I. Suppose, towards a contradiction, that at_i ≠ at_j for some i, j ∈ I. Then clearly there exist t′_i and t′_j such that at_i −a^{n−1}→ t′_i , at_j −a^{n−1}→ t′_j and t′_i ≠ t′_j . Modulo symmetry we can distinguish six cases, and in each of them it suffices to provide a closed substitution ρ such that ρ(at) is not completed simulated by ρ(a^n 0 + a^n(x + y)).
Cases 1,2,3: t′_i = 0 and t′_j = x or t′_j = y or t′_j = x + y.
Define ρ such that ρ(x) ≄CS 0 and ρ(y) ≄CS 0. Then ρ(t) is not completed simulated by a^{n−1}0 (because ρ(t) −a^{n−1}→ ρ(t′_j) ≄CS 0), and ρ(t) is not completed simulated by a^{n−1}ρ(x + y) (because ρ(t) −a^{n−1}→ ρ(t′_i) ≃CS 0 whereas ρ(x + y) ≄CS 0). So ρ(at) is not completed simulated by ρ(a^n 0 + a^n(x + y)).
Cases 4,5: t′_i = x and t′_j = y or t′_j = x + y.
Define ρ such that ρ(x) = 0 and ρ(y) ≄CS 0. Then ρ(t) is not completed simulated by a^{n−1}0 (because ρ(t) −a^{n−1}→ ρ(t′_j) ≄CS 0) and ρ(t) is not completed simulated by a^{n−1}ρ(x + y) (because ρ(t) −a^{n−1}→ ρ(t′_i) ≃CS 0 and ρ(x + y) ≄CS 0). So ρ(at) is not completed simulated by ρ(a^n 0 + a^n(x + y)).
Case 6: t′_i = y and t′_j = x + y.
Define ρ such that ρ(x) ≄CS 0 and ρ(y) = 0. Then ρ(t) is not completed simulated by a^{n−1}0 (because ρ(t) −a^{n−1}→ ρ(t′_j) ≄CS 0) and ρ(t) is not completed simulated by a^{n−1}ρ(x + y) (because ρ(t) −a^{n−1}→ ρ(t′_i) ≃CS 0 and ρ(x + y) ≄CS 0). So ρ(at) is not completed simulated by ρ(a^n 0 + a^n(x + y)).
We have established that at_i = at_j for all i, j ∈ I, so we may conclude that if at -CS a^n 0 + a^n(x + y), then at = a^n 0 or at = a^n x or at = a^n y or at = a^n(x + y). If, moreover, at −a^n→ t′ with t′ = x, then it is easy to define closed substitutions showing that at ≠ a^n 0, at ≠ a^n y and at ≠ a^n(x + y), so the proof of the lemma is complete.
In the following lemma we establish that substitution instances of sound
equations of depth < n satisfy PnCS .
Lemma 4.7.2 Suppose t ≃CS u, let n ≥ 1 be a natural number greater than the depth of t and u, and suppose σ(t), σ(u) -CS a^n 0 + a^n(x + y). Then σ(t) has a summand a^n x iff σ(u) has a summand a^n x.
Proof: Clearly, by symmetry, it suffices to establish the direction from left to
right. So suppose σ(t) has a summand an x; then there are two cases:
Case 1: t has a variable summand z and σ(z) has a summand an x.
Then, since t ≃CS u, by Lem. 4.2.1(3) u also has z as a summand, so
clearly σ(u) also has a summand an x.
Case 2: t has a summand at′ and σ(at′) = a^n x.
Then, since depth(at′) < n, from σ(at′) = a^n x it follows that there exist a variable z and a term t″ such that at′ −a^m→ z + t″ and σ(z) = a^{n−m}x for some 1 ≤ m < n. Since t ≃CS u, by Lem. 4.2.1(3), u has a summand au′ such that au′ −a^m→ z + u″ for some term u″, and consequently aσ(u′) −a^n→ u‴ with u‴ = x. Since also aσ(u′) -CS σ(u) -CS a^n 0 + a^n(x + y), it follows by Lem. 4.7.1 that aσ(u′) = a^n x. So σ(u) has a summand a^n x.
The proof is now complete.
We shall now prove that if an equation is derivable from a collection of equations of depth < n, then it satisfies P_n^CS .
Proposition 4.7.3 Let E be a finite axiomatization over BCCSP that is sound
modulo ≃CS , let n be a natural number greater than the depth of any term in
E, and suppose E ⊢ t ≈ u and t, u -CS an 0 + an (x + y). Then t has a summand
completed similar to an x iff u has a summand completed similar to an x.
Proof: A straightforward adaptation of the proof of Prop. 4.6.3, using Lem. 4.7.2 instead of Lem. 4.6.2, replacing ≃S by ≃CS , -S by -CS , “similar” by “completed similar”, and Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n by a^n 0 + a^n(x + y).
Now we are in a position to prove the main theorem of this section.
Theorem 4.7.4 Let |A| > 1. Then the equational theory of BCCSP modulo
≃CS is not finitely based.
Proof: Let E be any finite axiomatization over BCCSP that is sound modulo ≃CS and let n ≥ 1 be greater than the depth of any term in E. Since a^n 0 + a^n(x + y) does not have a summand completed similar to a^n x, by Prop. 4.7.3 the equation

  a^n x + a^n 0 + a^n(x + y) ≈ a^n 0 + a^n(x + y) ,

which is sound modulo ≃CS , cannot be derived from E. It follows that every finite collection of equations that are sound modulo ≃CS is necessarily incomplete, and hence the equational theory of BCCSP modulo ≃CS is not finitely based.
4.8 Ready Simulation
In this section we consider ready simulation equivalence ≃RS . Blom, Fokkink
and Nain [BFN03] gave a finite axiomatization that is sound and ground-complete for BCCSP modulo ≃RS . It consists of axioms A1-4 together with
the axiom RS presented at the start of Section 4.4.
Note that the equations in the infinite family presented in the previous section to show that ≃CS is not finitely based if |A| > 1, are not sound modulo
≃RS . To see this, let a and b be distinct actions, and let ρ be a closed substitution such that ρ(x) = a0 and ρ(y) = b0. Then ρ(an x) is not ready simulated by
ρ(a^n 0) because I(ρ(x)) = {a} ≠ ∅ = I(0), and ρ(a^n x) is not ready simulated by ρ(a^n(x + y)), because I(ρ(x)) = {a} ≠ {a, b} = I(ρ(x + y)).
To obtain a negative result for ≃RS , we proceed to consider below the following adaptation of the infinite family of equations of the previous section:
  a^n x + a^n 0 + Σ_{b∈A} a^n(x + b0)  ≈  a^n 0 + Σ_{b∈A} a^n(x + b0)   (n ≥ 1) .
These equations are sound modulo ≃RS . The idea is that, given a closed substitution ρ, either ρ(x) cannot perform any action, in which case ρ(an x) is ready
simulated by ρ(an 0), or ρ(x) can perform some action b, in which case ρ(an x)
is ready simulated by ρ(an (x + b0)). Note, however, that the summations in
the above equations only abbreviate BCCSP terms if |A| < ∞. So we assume
1 < |A| < ∞ in the remainder of this section.
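For instance (an illustrative instantiation), with A = {a, b} and n = 1 the equation reads

  ax + a0 + a(x + a0) + a(x + b0)  ≈  a0 + a(x + a0) + a(x + b0) .

Under ρ(x) = 0 the summand ρ(ax) = a0 is ready simulated by a0; under ρ(x) = b0 it equals ab0 and is ready simulated by ρ(a(x + b0)) = a(b0 + b0).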
The condition |A| < ∞ is, in fact, necessary for the negative result that we
are about to prove, for if A = ∞, then Groote’s technique of inverted substitutions from [Gro90] can be applied in a straightforward fashion to prove that the
axiomatization of Blom, Fokkink and Nain [BFN03] is ω-complete; see [CFN06].
The proof that there cannot be a finite sound axiomatization E for BCCSP
modulo ≃RS from which the equations above can all be derived, is again with
an application of the proof-theoretic technique. Let PnRS be the property:
  If t, u -RS a^n 0 + Σ_{b∈A} a^n(x + b0), then t has a summand ready similar to a^n x iff u has a summand ready similar to a^n x.
Note that this is essentially the same property as PnCS of the previous section.
Also the proof that PnRS is satisfied by every equation t ≈ u derivable from
a collection of sound equations of depth < n is analogous to the proof in the
previous section. We only need to reconsider Lem. 4.7.1 in the light of the new
family of equations.
Lemma 4.8.1 If at -RS a^n 0 + Σ_{b∈A} a^n(x + b0) and at −a^n→ t′ with t′ = x, then at = a^n x.
Proof: We first prove by induction on n that if at -RS a^n 0 + Σ_{b∈A} a^n(x + b0), then at = a^n 0 or at = a^n x or at = a^n(x + b0) for some b ∈ A.
Suppose n = 1. Note that var_0(t) ⊆ {x} by Lem. 4.2.1(3). Next, we establish that I(t) ⊆ {b} for some b ∈ A. To this end, let ρ be a closed substitution such that ρ(x) = 0. Then I(ρ(t)) = I(ρ(0)) = ∅ or I(ρ(t)) = I(ρ(x + b0)) = {b} for some b ∈ A, and hence I(t) ⊆ {b} for some b ∈ A. Now it has been shown that t = 0 or t = x or t = b0 or t = x + b0. To exclude the case that t = b0, suppose that I(t) = {b}, and consider a substitution ρ such that ρ(x) = c0 for some c ≠ b. Since I(ρ(t)) ≠ I(ρ(0)) and I(ρ(t)) ≠ I(ρ(x + b′0)) for b′ ≠ b, it follows that I(ρ(t)) = I(ρ(x + b0)) = {b, c}. So x ∈ var_0(t), and hence t = x + b0.
Suppose n > 1. Then I(t) = {a} by Lem. 4.2.1(2) and var_0(t) = ∅ by Lem. 4.2.1(3), so t = Σ_{i∈I} at_i with I ≠ ∅. Clearly, at_i -RS a^{n−1}0 + Σ_{b∈A} a^{n−1}(x + b0), so by the induction hypothesis, for all i ∈ I, at_i = a^{n−1}0 or at_i = a^{n−1}x or at_i = a^{n−1}(x + b_i 0) for some b_i ∈ A.
It remains to establish that at_i = at_j for all i, j ∈ I. Suppose, towards a contradiction, that at_i ≠ at_j for some i, j ∈ I. Then clearly there exist t′_i and t′_j such that at_i −a^{n−1}→ t′_i , at_j −a^{n−1}→ t′_j and t′_i ≠ t′_j . Modulo symmetry we can distinguish four cases, and in each of them it suffices to provide a closed substitution ρ such that ρ(at) is not ready simulated by ρ(a^n 0 + Σ_{b∈A} a^n(x + b0)).
Cases 1,2: t′_i = 0 and t′_j = x or t′_j = x + b_j 0.
Define ρ such that ρ(x) ≄RS 0. Then ρ(t) is not ready simulated by a^{n−1}0 (because ρ(t) −a^{n−1}→ ρ(t′_j) ≄RS 0) and ρ(t) is not ready simulated by a^{n−1}ρ(x + b_j 0) (because ρ(t) −a^{n−1}→ ρ(t′_i) ≃RS 0).
Case 3: t′_i = x and t′_j = x + b_j 0.
Define ρ such that ρ(x) = 0. Then ρ(t) is not ready simulated by a^{n−1}0 (because ρ(t) −a^{n−1}→ ρ(t′_j) ≄RS 0) and ρ(t) is not ready simulated by a^{n−1}ρ(x + b_j 0) (because ρ(t) −a^{n−1}→ ρ(t′_i) = 0).
Case 4: t′_i = x + b_i 0 and t′_j = x + b_j 0 for some b_i, b_j ∈ A with b_i ≠ b_j.
Define ρ such that ρ(x) = 0. Then ρ(t) is not ready simulated by a^{n−1}0 (because ρ(t) −a^{n−1}→ ρ(t′_i) ≄RS 0) and ρ(t) is not ready simulated by a^{n−1}ρ(x + b0) for all b ∈ A (because b ≠ b_k for k = i or k = j, so that ρ(t) −a^{n−1}→ ρ(t′_k) ≃RS b_k 0 and ρ(x + b0) ≄RS b_k 0).
We have established that at_i = at_j for all i, j ∈ I, so we may conclude that if at -RS a^n 0 + Σ_{b∈A} a^n(x + b0), then at = a^n 0 or at = a^n x or at = a^n(x + b0) for some b ∈ A. If, moreover, at −a^n→ t′ with t′ = x, then it is easy to define closed substitutions showing that at ≠ a^n 0 and at ≠ a^n(x + b0), so the proof of the lemma is complete.
The following lemma corresponds to Lem. 4.7.2 of the previous section.
Lemma 4.8.2 Suppose t ≃RS u, let n ≥ 1 be a natural number greater than the depth of t and u, and suppose σ(t), σ(u) -RS a^n 0 + Σ_{b∈A} a^n(x + b0). Then σ(t) has a summand a^n x iff σ(u) has a summand a^n x.

Proof: A straightforward adaptation of the proof of Lem. 4.7.2, using Lem. 4.8.1 instead of Lem. 4.7.1, replacing ≃CS by ≃RS , -CS by -RS , and a^n 0 + a^n(x + y) by a^n 0 + Σ_{b∈A} a^n(x + b0).
The following proposition corresponds to Proposition 4.7.3 from the previous
section.
Proposition 4.8.3 Let E be a finite axiomatization over BCCSP that is sound modulo ≃RS , let n be a natural number greater than the depth of any term in E, and suppose E ⊢ t ≈ u and t, u -RS a^n 0 + Σ_{b∈A} a^n(x + b0). Then t has a summand ready similar to a^n x iff u has a summand ready similar to a^n x.

Proof: A straightforward adaptation of the proof of Prop. 4.6.3, using Lem. 4.8.2 instead of Lem. 4.6.2, replacing ≃S by ≃RS , -S by -RS , “similar” by “ready similar”, and Σ_{θ∈A^n} a(x + Ψ_n^θ) + aΦ_n by a^n 0 + Σ_{b∈A} a^n(x + b0).
Now we are in a position to prove the main theorem of this section.
Theorem 4.8.4 Let 1 < |A| < ∞. Then the equational theory of BCCSP
modulo ≃RS is not finitely based.
Proof: Let E be a finite axiomatization over BCCSP that is sound modulo ≃RS .
Let n be greater than the depth of any term in E.
Note that a^n 0 + Σ_{b∈A} a^n(x + b0) does not contain a summand ready similar to a^n x. So according to Prop. 4.8.3, the equation

  a^n x + a^n 0 + Σ_{b∈A} a^n(x + b0)  ≈  a^n 0 + Σ_{b∈A} a^n(x + b0) ,
which is sound modulo ≃RS , cannot be derived from E. It follows that every
finite collection of equations that are sound modulo ≃RS is necessarily incomplete, and hence the equational theory of BCCSP modulo ≃RS is not finitely
based.
4.9 Conclusion
For every equivalence in van Glabbeek’s linear time – branching time spectrum
I, it has now been determined whether it is finitely based or not. Tab. 4.1
presents an overview, with a + indicating that a finite basis exists and a –
indicating that a finite basis does not exist. (The grey shading indicates that an entry was open prior to the results described in this chapter.) We distinguish three
categories, according to the cardinality of the alphabet A: singleton, finite with
at least two actions, and infinite.
                           |A| = 1    1 < |A| < ∞    |A| = ∞
  bisimulation                +            +             +
  2-nested simulation         –            –             –
  possible futures            –            –             –
  ready simulation            +            –             +
  completed simulation        +            –             –
  simulation                  +            –             +
  possible worlds             +            –             +
  ready traces                +            –             –
  failure traces              +            –             +
  readies                     +            –             +
  failures                    +            +             +
  completed traces            +            +             +
  traces                      +            +             +

Table 4.1: The existence of finite bases for BCCSP in the linear time – branching time spectrum I
Chapter 5
Impossible Futures
5.1 Introduction
In this chapter, we study impossible futures semantics [Vog92, VM01]. This
semantics is missing in van Glabbeek’s original spectrum I, because it was only
studied seriously from 2001 on, the year that [vG01] appeared. Impossible futures semantics is a natural variant of possible futures semantics [RB81] (see
also Def. 2.1.4). It is also closely related to fair testing semantics [RV07]. In
[vGV06] it was shown that weak impossible futures equivalence is the coarsest
congruence with respect to choice and parallel composition operators containing weak bisimilarity with explicit divergence that respects deadlock/livelock
traces and assigns unique solutions to recursive equations. For the definitions
of impossible futures semantics, see Def. 2.1.4 and Def. 2.1.6.
In Chapter 4, a complete categorization of the (in)equational theories for the
process algebra BCCSP modulo the semantics in the linear time – branching
time spectrum I [vG01] has been given. For each preorder and equivalence it is
studied whether a finite, sound, ground-complete axiomatization exists. And if
so, whether there exists a finite basis for the equational theory.
So all questions on these matters regarding concrete semantics have been
resolved? No, as for impossible futures semantics, the (in)equational theory
remained unexplored, with the only exception that the inequational theory of
BCCS modulo weak impossible futures preorder was studied in [VM01]. In
that paper, Voorhoeve and Mauw offer a finite, sound, ground-complete axiomatization; their ground-completeness proof relies heavily on the presence of τ .
They also prove that their axiomatization is ω-complete (they do not refer to
ω-completeness explicitly, but they work on open terms, see [VM01, Thm. 5]).
They implicitly assume an infinite alphabet (at [VM01, page 7] they require a
different action for each variable).
As for weak semantics in general, much less is known, compared to concrete
semantics. For several of the semantics in the linear time – branching time
spectrum II [vG93b], a sound and ground -complete axiomatization has been
given, in the setting of BCCS, see, e.g., [vG97]. Moreover a finite basis has been
given for weak, delay, η- and branching bisimulation semantics [Mil89b, vG93a].
In this chapter, we focus on the axiomatizability of concrete and weak impossible futures preorders and equivalences in the setting of BCCSP and BCCS.
In summary, we obtain the following results:
1. We prove that there exists a finite, sound, ground-complete axiomatization for BCCSP modulo concrete impossible futures preorder -IF . By
contrast, in [AFvGI04] it was shown that such an axiomatization does not
exist modulo possible futures preorder. Thanks to the result established
in Section 3.3, a finite, sound, ground-complete axiomatization for weak
impossible futures preorder -WIF can be obtained for free.
2. We show that BCCS modulo weak impossible futures equivalence ≃WIF
does not have a finite, sound, ground-complete axiomatization. This negative result is based on the following infinite family of equations, for m ≥ 0:
  τ a^{2m}0 + τ(a^m 0 + a^{2m}0) ≈ τ(a^m 0 + a^{2m}0) .
Again thanks to the result established in Section 3.3, this negative result
carries over to concrete impossible futures equivalence ≃IF . Moreover,
in light of this, one can easily establish the nonderivability of the equations a^{2m+1}0 + a(a^m 0 + a^{2m}0) ≈ a(a^m 0 + a^{2m}0) from any given finite
equational axiomatization sound for ≃WIF . As these equations are valid
modulo (concrete) 2-nested simulation equivalence, this negative result
applies to all BCCS-congruences that are at least as fine as weak impossible futures equivalence and at least as coarse as concrete 2-nested
simulation equivalence. Note that the corresponding result of [AFvGI04]
can be inferred.
3. We investigate ω-completeness for impossible futures semantics.
First, we prove that if the alphabet of actions is infinite, then the aforementioned ground-complete axiomatization for BCCSP modulo -IF is ω-complete. To prove this result, we apply the technique of inverted substitutions (see Section 3.4). The result established in Section 3.3 allows this result to carry over to -WIF .
Second, we prove that in case of a finite alphabet of actions, the inequational theory of BCCS modulo -WIF does not have a finite basis. In case of a singleton alphabet, this negative result is based on the following infinite family of equations, for m ≥ 0:

  a^m x ≼ a^m x + x .
And for finite alphabets with at least two actions, we use the family
  τ(a^m x) + τ(a^m x + x) + Σ_{b∈A} τ(a^m x + a^m b0)  ≼  τ(a^m x + x) + Σ_{b∈A} τ(a^m x + a^m b0) .
The result established in Section 3.3 allows these negative results to carry
over to -IF .
4. n-Nested concrete impossible futures semantics, for n ≥ 0, form a natural
hierarchy (cf. [AFvGI04]), which coincides with the universal relation for
n = 0, trace semantics for n = 1, and impossible futures semantics for n =
2. Using a proof strategy from [AFvGI04], we show that the negative result
regarding concrete impossible futures equivalence extends to all n-nested
impossible futures equivalences for n ≥ 2, and to all n-nested impossible
futures preorders for n ≥ 3. Apparently, (2-nested) impossible futures
preorder is the only positive exception.
To achieve these negative results, we mainly exploit the proof-theoretic technique (cf. Section 2.1.5). On top of this, a saturation principle is introduced,
to transform a single summand into a large collection of (semi-)saturated summands, which plays a pivotal role in obtaining positive results.
To the best of our knowledge, impossible futures semantics is the first example that affords a finite, ground-complete axiomatization for BCCSP (or BCCS)
modulo the preorder, while missing a finite, ground-complete axiomatization for
BCCSP (or BCCS) modulo the equivalence. This surprising fact suggests that,
for instance, if one wants to show p ≃IF q in general, one has to resort to deriving
p -IF q and q -IF p separately, instead of proving it directly.
In [AFI07, dFG07] an algorithm is presented which produces, from an axiomatization for BCCSP modulo a preorder, an axiomatization for BCCSP modulo
the corresponding equivalence. If the original axiomatization for the preorder is
ground-complete or ω-complete, then so is the resulting axiomatization for the
equivalence. In Section 3.2, we show that the same algorithm applies equally
well to weak semantics. However, that algorithm only applies to semantics that
are at least as coarse as ready simulation semantics. Since impossible futures
semantics is incomparable to ready simulation semantics, it falls outside the
scope of [AFI07, dFG07] and Section 3.2. Interestingly, our results yield that
no such algorithm exists for certain semantics incomparable with (or finer than)
ready simulation.
Structure of the chapter. Section 5.2 offers sound, finite, ground-complete
axiomatizations for -IF and -WIF ; it also contains the proof of the negative
result for ≃IF and ≃WIF . Section 5.3 is devoted to the proofs of the negative
and positive results regarding ω-completeness. Section 5.4 contains the negative
results regarding n-nested (concrete) impossible futures semantics. Section 5.5
concludes the chapter with an overview of the positive and negative results
achieved in this chapter.
5.2 Ground-Completeness

5.2.1 Concrete Impossible Futures Preorder
In this section, we provide a ground-complete axiomatization for impossible
futures preorder. It consists of the core axioms A1-4 together with two extra
axioms:
IF1   a(x + y) 4 ax + ay
IF2   a(x + y) + ax + a(y + z) ≈ ax + a(y + z) .
Recall that here, t ≈ u denotes that both t 4 u and u 4 t are present in the
inequational axiomatization. It is not hard to see that IF1-2 are sound modulo
-IF . The rest of this section is devoted to proving the following theorem.
Theorem 5.2.1 A1-4+IF1-2 is ground-complete for BCCSP(A) modulo -IF .
To give some intuition on the ground-completeness proof, we first present
an example.
Example 5.2.2 Let p = a(a0 + a^2 0) + a^4 0 and q = a(a0 + a^3 0) + a^3 0. It is not hard to see that p -IF q. However, neither a(a0 + a^2 0) -IF a(a0 + a^3 0) nor a(a0 + a^2 0) -IF a^3 0 holds. In order to derive p 4 q, we therefore first derive with IF2 that q ≈ p + q. And p 4 p + q can be derived with IF1.
In general, to derive a sound closed inequation p 4 q, first we derive q ≈ S(q) (see
Lem. 5.2.5), where S(q) contains for every a ∈ I(q) a “saturated” a-summand
(see Def. 5.2.3). (In Ex. 5.2.2, this saturated a-summand would have the form a(a0 + a^2 0 + a^3 0 + a(a0 + a^2 0)).) Then, in the proof of Thm. 5.2.1, we derive
Ψ+S(q) ≈ S(q) (equation (5.1)), p 4 Ψ (equation (5.2)) and p 4 p+q (equation
(5.3)), where the closed term Ψ is built from many “semi-saturated” summands
(like, in Ex. 5.2.2, p). These results together provide the desired proof (see the
last line of the proof of Thm. 5.2.1).
Definition 5.2.3 For each closed term q, the closed term S(q) is defined inductively on the depth of q as follows:
S(q) = q + Σ_{a∈I(q)} a( S( Σ_{aq′ ⋐ q} q′ ) ) .
Example 5.2.4 If q = a(b(c0 + d0) + be0) + af 0, then S(q) = a(b(c0 + d0) +
be0) + af 0 + a(b(c0 + d0) + be0 + f 0 + b(c0 + d0 + e0)).
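The saturation S(q) of Def. 5.2.3 is a purely syntactic operation on closed terms, and it can be helpful to see it spelled out executably. The following Haskell sketch is ours and not part of the thesis (the summand-list representation and the names saturate, initials and derivs are our own choices); it follows the definition literally.

    import Data.List (nub)

    -- Closed BCCSP terms modulo A1-4, represented by their list of summands a.q'
    -- (the term 0 is the empty list).
    newtype Term = Term [(Char, Term)] deriving (Eq, Show)

    zero :: Term
    zero = Term []

    plus :: Term -> Term -> Term
    plus (Term xs) (Term ys) = Term (xs ++ ys)

    prefix :: Char -> Term -> Term
    prefix a q = Term [(a, q)]

    -- I(q): the initial actions of q.
    initials :: Term -> [Char]
    initials (Term xs) = nub (map fst xs)

    -- All q' such that a.q' is a summand of q.
    derivs :: Char -> Term -> [Term]
    derivs a (Term xs) = [q' | (b, q') <- xs, b == a]

    -- S(q) = q + sum over a in I(q) of a( S( sum of the a-derivatives of q ) ).
    saturate :: Term -> Term
    saturate q =
      foldl plus q
        [ prefix a (saturate (foldl plus zero (derivs a q))) | a <- initials q ]

For the term q of Ex. 5.2.4, saturate returns, modulo A1-3, exactly the term displayed there.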
Lemma 5.2.5 For each closed term q, A1-4+IF1-2 ⊢ q ≈ S(q).
Proof: By induction on depth(q). For any a ∈ I(q),
q ≈ q + a( Σ_{aq′ ⋐ q} q′ ) ≈ q + a( S( Σ_{aq′ ⋐ q} q′ ) ) .
The first derivation step uses IF2, and the second induction. Hence, summing up over all a ∈ I(q),
q ≈ q + Σ_{a∈I(q)} a( S( Σ_{aq′ ⋐ q} q′ ) ) = S(q) .
For closed terms q and γ ∈ T(q), the closed term q_γ is obtained by summing over all closed terms q′ such that q −γ→ q′, and then applying the saturation from Def. 5.2.3. The auxiliary terms q_γ will only be used in the derivation of equation (5.1) within the proof of Thm. 5.2.1.
Definition 5.2.6 Given a closed term q, and a completed trace a1 · · · ad of q. For 0 ≤ ℓ ≤ d we define
Q_{a1···aℓ} = { qℓ | q −a1→ q1 · · · −aℓ→ qℓ } ,
and
q_{a1···aℓ} = S( Σ_{qℓ∈Q_{a1···aℓ}} qℓ ) .
Note that q_ε = S(q). We prove some basic properties for the terms q_γ.
Lemma 5.2.7 Given a closed term q, and a completed trace a1 · · · ad of q. Then, for 0 ≤ ℓ < d,
• q_{a1···aℓ} −aℓ+1→ q_{a1···aℓ+1} ; and
• q_{a1···aℓ} −aℓ+1→ qℓ+1 for all qℓ+1 ∈ Q_{a1···aℓ+1} .
Proof: Clearly, qℓ+1 ∈ Q_{a1···aℓ+1} iff there exists some qℓ ∈ Q_{a1···aℓ} such that qℓ −aℓ+1→ qℓ+1. And since a1 · · · aℓ+1 is a trace of q, aℓ+1 ∈ I(qℓ) for some qℓ ∈ Q_{a1···aℓ}. So by Def. 5.2.3,
q_{a1···aℓ} = S( Σ_{qℓ∈Q_{a1···aℓ}} qℓ ) −aℓ+1→ S( Σ_{qℓ+1∈Q_{a1···aℓ+1}} qℓ+1 ) = q_{a1···aℓ+1} .
Moreover, for all qℓ+1 ∈ Q_{a1···aℓ+1}, we have Σ_{qℓ∈Q_{a1···aℓ}} qℓ −aℓ+1→ qℓ+1. Hence, by Def. 5.2.3,
q_{a1···aℓ} = S( Σ_{qℓ∈Q_{a1···aℓ}} qℓ ) −aℓ+1→ qℓ+1 .
We now embark on proving the promised ground-completeness result.
Proof of Thm. 5.2.1: Suppose p -IF q. We derive p 4 q using induction on depth(p). If p = 0, then clearly q = 0, and we are done. So assume p ≠ 0, and consider any completed path¹ π = a1 p1 · · · ad pd of p (with d ≥ 1); that is, p −a1→ p1 · · · −ad→ pd = 0. We inductively construct closed terms ψ_ℓ^π, for ℓ from d down to 1. For the base case, ψ_d^π = 0. Now let 1 ≤ ℓ < d. Since p −a1···aℓ→ pℓ and p -IF q, there exists a sequence of transitions q −a1···aℓ→ qℓ such that T(qℓ) ⊆ T(pℓ). We define
ψ_ℓ^π = qℓ + aℓ+1 ψ_{ℓ+1}^π .
¹ A sequence a1 p1 · · · ak pk is a completed path of a term p0 if p0 −a1→ p1 · · · −ak→ pk with I(pk) = ∅. We write CP(p) for the set of completed paths of term p, which is ranged over by π.
We prove, by induction on d − ℓ, that for 1 ≤ ℓ ≤ d,
T(ψ_ℓ^π) ⊆ T(pℓ) .
The base case is trivial, since T(ψ_d^π) = ∅. Now let 1 ≤ ℓ < d. By induction, T(ψ_{ℓ+1}^π) ⊆ T(pℓ+1). Moreover, pℓ −aℓ+1→ pℓ+1, so T(aℓ+1 ψ_{ℓ+1}^π) ⊆ T(pℓ). Hence, T(ψ_ℓ^π) = T(qℓ + aℓ+1 ψ_{ℓ+1}^π) = T(qℓ) ∪ T(aℓ+1 ψ_{ℓ+1}^π) ⊆ T(pℓ).
Next, we prove, by induction on d − ℓ, that for 1 ≤ ℓ ≤ d,
E ⊢ aℓ ψ_ℓ^π + q_{a1···aℓ−1} ≈ q_{a1···aℓ−1} .
In the base case, since ψ_d^π = 0 ∈ Q_{a1···ad} (see Def. 5.2.6), this is a direct consequence of the second item in Lem. 5.2.7. Now let 1 ≤ ℓ < d.
  aℓ ψ_ℓ^π + q_{a1···aℓ−1}
  = aℓ(qℓ + aℓ+1 ψ_{ℓ+1}^π) + q_{a1···aℓ−1} + aℓ qℓ + aℓ q_{a1···aℓ}                         (Lem. 5.2.7)
  ≈ aℓ(qℓ + aℓ+1 ψ_{ℓ+1}^π) + q_{a1···aℓ−1} + aℓ qℓ + aℓ(aℓ+1 ψ_{ℓ+1}^π + q_{a1···aℓ})        (induction)
  ≈ q_{a1···aℓ−1} + aℓ qℓ + aℓ(q_{a1···aℓ} + aℓ+1 ψ_{ℓ+1}^π)                                 (IF2)
  ≈ q_{a1···aℓ−1} + aℓ qℓ + aℓ q_{a1···aℓ}                                                   (induction)
  = q_{a1···aℓ−1} .                                                                          (Lem. 5.2.7)
In the end, for ℓ = 1, we derive a1 ψ_1^π + q_ε ≈ q_ε. In other words,
E ⊢ a1 ψ_1^π + S(q) ≈ S(q) .
Since this holds for all completed paths π of p, it follows that
E ⊢ Σ_{a∈I(p)} Σ_{ap′⋐p} Σ_{π∈CP(ap′)} a ψ_1^π + S(q) ≈ S(q) ,    (5.1)
where CP(ap′ ) denotes the set of completed paths of the summand ap′ .
On the other hand, for every summand ap′ of p,
p′ -IF Σ_{π∈CP(ap′)} ψ_1^π .
Namely, consider any path π0 = a1 p1 · · · ah ph of ap′. Extend π0 to some completed path π of ap′. By the definition of the ψ_ℓ^π, clearly, ψ_ℓ^π −aℓ+1→ ψ_{ℓ+1}^π for 1 ≤ ℓ < h. So ψ_1^π −a2···ah→ ψ_h^π. Moreover, we proved that T(ψ_h^π) ⊆ T(ph).
So by induction on depth, for every summand ap′ of p, we derive
p′ 4 Σ_{π∈CP(ap′)} ψ_1^π .
And thus, by IF1,
ap′ 4 a( Σ_{π∈CP(ap′)} ψ_1^π ) 4 Σ_{π∈CP(ap′)} a ψ_1^π .
Hence, summing over all summands ap′ of p,
E ⊢ p 4 Σ_{a∈I(p)} Σ_{ap′⋐p} Σ_{π∈CP(ap′)} a ψ_1^π .    (5.2)
Finally, since p -IF q, clearly, for each a ∈ I(p),
Σ_{ap′⋐p} p′ -IF Σ_{aq′⋐q} q′ .
So by induction on depth, for each a ∈ I(p),
E ⊢ Σ_{ap′⋐p} p′ 4 Σ_{aq′⋐q} q′ .
So by IF2 and IF1, and since I(p) = I(q),
p ≈ p + Σ_{a∈I(p)} a( Σ_{ap′⋐p} p′ ) 4 p + Σ_{a∈I(q)} a( Σ_{aq′⋐q} q′ ) 4 p + Σ_{a∈I(q)} Σ_{aq′⋐q} aq′ .
That is,
E ⊢ p 4 p + q .    (5.3)
In the end, (5.3), (5.2) and (5.1), together with Lem. 5.2.5, yield
E ⊢ p 4 p + q ≈ p + S(q) 4 Σ_{a∈I(p)} Σ_{ap′⋐p} Σ_{π∈CP(ap′)} a ψ_1^π + S(q) ≈ S(q) ≈ q .
5.2.2 Weak Impossible Futures Preorder
We now apply the link established in Section 3.3 to obtain a ground-complete axiomatization for the weak impossible futures preorder WIF for free. It consists of A1-4, IF1-2, W1-2 (see page 48), together with x 4 τx (W3). Again, this axiomatization can be simplified. It turns out that IF1 and IF2 are redundant. (To see this, we only need to observe that A1-4+W1-3 ⊢ a(x + y) 4 a(τx + τy) ≈ ax + ay and A1-4+W1-3 ⊢ τ(x + y) + τx + τ(y + z) ≈ τx + y + τ(y + z) ≈ τ(y + z) + τx. It follows that A1-4+W1-3 ⊢ IF2.) Furthermore, W2 can be replaced by W2′. (To see this, by W2′, τ(x + y) + τx 4 τx + y; by W3, τx + y ≈ τx + x + y 4 τx + τ(x + y).) In summary, the crucial axioms are presented in Tab. 5.1.

  W1    αx + αy ≈ α(τx + τy)
  W2′   τ(x + y) 4 τx + y
  W3    x 4 τx

  Table 5.1: Axioms for weak impossible futures preorder

Clearly, W1-3 are sound for BCCS modulo WIF, hence,
Corollary 5.2.8 A1-4+W1+W2′+W3 is ground-complete for BCCS(A) modulo WIF.
Remark 5.2.9 A slightly more complicated axiomatization has been obtained
in [VM01], where, instead of W2′ , the axiom τ (τ x+y) ≈ τ x+y is present. Here,
it becomes a direct corollary from the axiomatization for concrete semantics. It
is also very interesting to see the difference between failures and impossible
futures semantics. The nuance seems to lie in only one axiom: for the weak impossible futures preorder, x 4 τx suffices, while for the failures preorder a stronger axiom x 4 τx + y has to be included. However, as revealed by the algorithm, we argue that the essential difference actually arises in the concrete case, namely between IF1-2 and F1.
5.2.3 Weak Impossible Futures Equivalence
We now prove that for any (nonempty) A there does not exist any finite, sound,
ground-complete axiomatization for BCCS(A) modulo ≃WIF . The cornerstone
for this negative result is the following infinite family of closed equations, for
m ≥ 0:
τ a^{2m} 0 + τ(a^m 0 + a^{2m} 0) ≈ τ(a^m 0 + a^{2m} 0) .
It is not hard to see that they are sound modulo ≃WIF . We start with a few
lemmas.
Lemma 5.2.10 If p WIF q then WCT (p) ⊆ WCT (q).
Proof: A process p has a weak completed trace a1 · · · ak iff it has a weak impossible future (a1 · · · ak , A).
Lemma 5.2.11 Suppose t WIF u. Then for any t′ with t ⇒−τ→ t′ there is some u′ with u ⇒−τ→ u′ such that var(u′) ⊆ var(t′).
Proof: Let t ⇒−τ→ t′. Fix some m > depth(t), and consider the closed substitution ρ defined by ρ(x) = 0 if x ∈ var(t′) and ρ(x) = a^m 0 if x ∉ var(t′). Since ρ(t) ⇒ ρ(t′) with depth(ρ(t′)) = depth(t′) < m, and ρ(t) WIF ρ(u), clearly ρ(u) ⇒ q for some q with depth(q) < m. From the definition of ρ it then follows that there must exist u ⇒ u′ with var(u′) ⊆ var(t′). In case u ⇒−τ→ u′ we are done, so assume u′ = u. Let σ be the substitution with σ(x) = 0 for all x ∈ V. Since σ(t) −τ→ and t WIF u we have σ(u) −τ→, so u −τ→ u′′ for some u′′. Now var(u′′) ⊆ var(u) = var(u′) ⊆ var(t′).
Lemma 5.2.12 Assume that, for terms t, u, closed substitution σ, a ∈ A and integer m:
1. t ≃WIF u;
2. m > depth(u);
3. WCT(σ(u)) ⊆ {a^m, a^{2m}}; and
4. there is a closed term p′ such that σ(t) ⇒−τ→ p′ and WCT(p′) = {a^{2m}}.
Then there is a closed term q′ such that σ(u) ⇒−τ→ q′ and WCT(q′) = {a^{2m}}.
Proof: According to proviso (4) of the lemma, we can distinguish two cases.
• There exists some x ∈ V such that t ⇒ t′ with t′ = t′′ + x and σ(x) ⇒−τ→ p′ where WCT(p′) = {a^{2m}}. Consider the closed substitution ρ defined by ρ(x) = a^m 0 and ρ(y) = 0 for any y ≠ x. Then a^m ∈ WCT(ρ(t)) = WCT(ρ(u)), using Lem. 5.2.10, and this is only possible if u ⇒ u′ for some u′ = u′′ + x. Hence σ(u) ⇒−τ→ p′.
• t ⇒−τ→ t′ with WCT(σ(t′)) = {a^{2m}}. Since depth(t′) ≤ depth(t) = depth(u) < m, clearly, for any x ∈ var(t′), either depth(σ(x)) = 0 or norm(σ(x)) > m, where norm(p) denotes the length of the shortest weak completed trace of p. Since t ≃WIF u, by Lem. 5.2.11, u ⇒−τ→ u′ with var(u′) ⊆ var(t′). Hence, for any x ∈ var(u′), either depth(σ(x)) = 0 or norm(σ(x)) > m. Since depth(u′) < m, a^m ∉ WCT(σ(u′)). It follows from WCT(σ(u)) ⊆ {a^m, a^{2m}} that WCT(σ(u′)) = {a^{2m}}. And u ⇒−τ→ u′ implies σ(u) ⇒−τ→ σ(u′).
The proof is now complete.
Lemma 5.2.13 Assume that, for E an axiomatization sound for WIF, closed terms p, q, closed substitution σ, a ∈ A and integer m:
1. E ⊢ p ≈ q;
2. m > max{depth(u) | t ≈ u ∈ E};
3. WCT(q) ⊆ {a^m, a^{2m}}; and
4. there is a closed term p′ such that p ⇒−τ→ p′ and WCT(p′) = {a^{2m}}.
Then there is a closed term q′ such that q ⇒−τ→ q′ and WCT(q′) = {a^{2m}}.
Proof: By induction on the derivation of E ⊢ p ≈ q.
• Suppose E ⊢ p ≈ q because σ(t) = p and σ(u) = q for some t ≈ u ∈ E or u ≈ t ∈ E and closed substitution σ. The claim then follows by Lem. 5.2.12.
• Suppose E ⊢ p ≈ q because E ⊢ p ≈ r and E ⊢ r ≈ q for some r. Since r ≃WIF q, by proviso (3) of the lemma and Lem. 5.2.10, WCT(r) ⊆ {a^m, a^{2m}}. Since there is a p′ such that p ⇒−τ→ p′ with WCT(p′) = {a^{2m}}, by induction, there is an r′ such that r ⇒−τ→ r′ and WCT(r′) = {a^{2m}}. Hence, again by induction, there is a q′ such that q ⇒−τ→ q′ and WCT(q′) = {a^{2m}}.
• Suppose E ⊢ p ≈ q because p = p1 + p2 and q = q1 + q2 with E ⊢ p1 ≈ q1 and E ⊢ p2 ≈ q2. Since there is a p′ such that p ⇒−τ→ p′ and WCT(p′) = {a^{2m}}, either p1 ⇒−τ→ p′ or p2 ⇒−τ→ p′. Assume, without loss of generality, that p1 ⇒−τ→ p′. By induction, there is a q′ such that q1 ⇒−τ→ q′ and WCT(q′) = {a^{2m}}. Now q ⇒−τ→ q′.
• Suppose E ⊢ p ≈ q because p = cp1 and q = cq1 with c ∈ A and E ⊢ p1 ≈ q1. In this case, proviso (4) of the lemma can not be met.
• Suppose E ⊢ p ≈ q because p = τ p1 and q = τ q1 with E ⊢ p1 ≈ q1. By proviso (4) of the lemma, either WCT(p1) = {a^{2m}} or there is a p′ such that p1 ⇒−τ→ p′ and WCT(p′) = {a^{2m}}. In the first case, q ⇒−τ→ q1 and WCT(q1) = {a^{2m}} by Lem. 5.2.10. In the second, by induction, there is a q′ such that q1 ⇒−τ→ q′ and WCT(q′) = {a^{2m}}. Again q ⇒−τ→ q′.
The proof is now complete.
Theorem 5.2.14 There is no finite, sound, ground-complete axiomatization
for BCCS(A) modulo ≃WIF .
Proof: Let E be a finite axiomatization over BCCS(A) that is sound modulo ≃WIF. Let m be greater than the depth of any term in E. Clearly, there is no term r such that τ(a^m 0 + a^{2m} 0) ⇒−τ→ r and WCT(r) = {a^{2m}}. So according to Lem. 5.2.13, the closed equation τ a^{2m} 0 + τ(a^m 0 + a^{2m} 0) ≈ τ(a^m 0 + a^{2m} 0) cannot be derived from E. Nevertheless, it is valid modulo ≃WIF.
Remark 5.2.15 Lem. 5.2.13 does not hold if its first requirement is changed into E ⊢ p 4 q. Note that the proof regarding the congruence rule for τ in Lem. 5.2.13 fails for WIF. For example, consider the following closed inequations, for m ≥ 0:
τ a^{2m} 0 4 τ(a^{2m} 0 + a^m 0) .
They are sound modulo WIF, and satisfy the third and fourth requirement of Lem. 5.2.13. However, they can all be derived by means of IF1:
τ a^{2m} 0 = τ(a^m (a^m 0 + 0)) 4 τ(a^{m−1}(a^{m+1} 0 + a0)) 4 τ(a^{m−2}(a^{m+2} 0 + a^2 0)) 4 · · · 4 τ(a^{2m} 0 + a^m 0) .
5.2.4 Concrete Impossible Futures Equivalence
The link established in Section 3.3 together with Thm. 5.2.14 reveals the non-existence of a finite, ground-complete axiomatization of concrete impossible futures equivalence for BCCSP. That is,
Corollary 5.2.16 There is no finite, sound, ground-complete axiomatization
for BCCSP(A) modulo ≃IF .
5.3 ω-Completeness
5.3.1 Infinite Alphabet
In this section, we show that the axiomatization consisting of A1-4+IF1-2 is ω-complete, provided the alphabet is infinite. The proof is based on the inverted
substitution technique, which is adapted from [Gro90] (see Section 3.4).
Theorem 5.3.1 If |A| = ∞, then A1-4+IF1-2 is ω-complete.
Proof: Consider any pair of BCCSP(A) terms t, u. Define the closed substitution ρ by ρ(y) = a_y 0, where a_y is a unique action for y ∈ V that occurs in neither t nor u. Such actions exist because A is infinite. We define the mapping R from closed to open BCCSP(A) terms as follows:
  R(0)     = 0
  R(at)    = y         if a = a_y for some y ∈ V
  R(at)    = a R(t)    if a ≠ a_y for all y ∈ V
  R(t + u) = R(t) + R(u)
We check the three aforementioned properties.
(1) Since t and u do not contain actions of the form ay , clearly R(ρ(t)) = t
and R(ρ(u)) = u.
(2) For A1-4, the proof is trivial. We check the remaining cases IF1 and IF2.
Let σ be a closed substitution. For IF1, we distinguish two cases.
– a = ay for some y ∈ V . Then R(ay (σ(x1 ) + σ(x2 ))) = y 4 y + y =
R(ay (σ(x1 )) + ay (σ(x2 ))).
– a 6= ay for all y ∈ V . Then R(a(σ(x1 ) + σ(x2 ))) = a(R(σ(x1 )) +
R(σ(x2 ))) 4 aR(σ(x1 )) + aR(σ(x2 )) = R(aσ(x1 ) + aσ(x2 )).
We now turn to IF2. We distinguish two cases as well.
– a = ay for some y ∈ V . Then R(ay (σ(x1 ) + σ(x2 )) + ay σ(x1 ) +
ay (σ(x2 ) + σ(x3 ))) = y + y + y ≈ y + y = R(ay σ(x1 ) + ay (σ(x2 ) +
σ(x3 ))).
– a 6= ay for all y ∈ V . Then R(a(σ(x1 ) + σ(x2 )) + aσ(x1 ) + a(σ(x2 ) +
σ(x3 ))) = a(R(σ(x1))+R(σ(x2 )))+aR(σ(x1))+a(R(σ(x2 ))+R(σ(x3 )))
≈ aR(σ(x1 )) + a(R(σ(x2 )) + R(σ(x3 ))) = R(aσ(x1 ) + a(σ(x2 ) +
σ(x3 ))).
(3) Consider the operator + . From R(p1 ) 4 R(q1 ) and R(p2 ) 4 R(q2 ) we
derive R(p1 + p2 ) = R(p1 ) + R(p2 ) 4 R(q1 ) + R(q2 ) = R(q1 + q2 ).
Consider the prefix operator a . We distinguish two cases.
– a = ay for some y ∈ V . Then R(ay p1 ) = y = R(ay q1 ).
– a 6= ay for all y ∈ V . Then from R(p1 ) 4 R(q1 ) we derive R(ap1 ) =
aR(p1 ) 4 aR(q1 ) = R(aq1 ).
The proof is now complete.
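The inverted substitution technique used in this proof is also easy to animate. The following Haskell sketch is ours and not from the thesis; the reserved actions a_y are modelled by a separate constructor VarAct, so that the side condition "a = a_y for some y ∈ V" becomes a pattern match.

    -- Terms with explicit 0, prefix and +; process variables are strings.
    data Act  = Act Char | VarAct String                            deriving (Eq, Show)
    data Term = Zero | Var String | Pre Act Term | Plus Term Term   deriving (Eq, Show)

    -- The closed substitution rho of the proof, mapping every variable y to a_y 0.
    rho :: Term -> Term
    rho Zero       = Zero
    rho (Var y)    = Pre (VarAct y) Zero
    rho (Pre a t)  = Pre a (rho t)
    rho (Plus t u) = Plus (rho t) (rho u)

    -- The mapping R from closed to open terms defined in the proof of Thm. 5.3.1.
    invert :: Term -> Term
    invert Zero               = Zero
    invert (Pre (VarAct y) _) = Var y            -- R(a_y t) = y
    invert (Pre a t)          = Pre a (invert t)
    invert (Plus t u)         = Plus (invert t) (invert u)
    invert (Var y)            = Var y            -- only relevant on open terms

Property (1) of the proof, R(ρ(t)) = t for terms t without reserved actions, can then be checked by a structural induction that mirrors the two recursive definitions.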
This result, together with the link established in Section 3.3, yields
Corollary 5.3.2 If |A| = ∞, then A1-4+W1+W2′ +W3 is ω-complete .
5.3.2 Finite Alphabet
1 < |A| < ∞. We prove that the inequational theory of BCCS(A) modulo
WIF does not have a finite basis in case of a finite alphabet with at least two
elements. The cornerstone for this negative result is the following infinite family
of inequations, for m ≥ 0:
τ(a^m x) + Φ_m 4 Φ_m
with
Φ_m = τ(a^m x + x) + Σ_{b∈A} τ(a^m x + a^m b0) .
It is not hard to see that these inequations are sound modulo WIF. Namely, given a closed substitution ρ, we have WT(ρ(τ(a^m x))) ⊆ WT(ρ(Φ_m)) and ρ(Φ_m) −τ→. To argue that ρ(τ(a^m x) + Φ_m) and ρ(Φ_m) have the same weak impossible futures, we only need to consider the transition ρ(τ(a^m x) + Φ_m) −τ→ a^m ρ(x) (all other cases being trivial). If ρ(x) = 0, then ρ(Φ_m) −τ→ a^m 0 + 0 generates the same weak impossible futures (ε, X). If, on the other hand, b ∈ I(ρ(x)) for some b ∈ A, then WT(a^m ρ(x) + a^m b0) = WT(a^m ρ(x)), so ρ(Φ_m) −τ→ a^m ρ(x) + a^m b0 generates the same weak impossible futures (ε, X).
We have already defined the traces and completed traces of closed terms.
Now we extend these definitions to open terms by allowing weak (completed)
traces of the form a1 · · · ak x ∈ A∗ V . We do this by treating each variable
occurrence x in a term as if it were a subterm x0 with x a concrete action,
and then apply Def. 2.1.6. Under this convention, WCT (Φm ) = {am x, x, am b |
b ∈ A}. We write WT V (t) for the set of weak traces of t that end in a variable,
and WT A (t) for ones that end in an action.
Observation 5.3.3 Let m > depth(t) or am ∈ V . Then a1 · · · am ∈ WT (σ(t))
iff there is a k < m and y ∈ V such that a1 · · · ak y ∈ WT V (t) and ak+1 · · · am ∈
WT (σ(y)).
Lemma 5.3.4 If |A| > 1 and t WIF u, then WT A (t) = WT A (u) and WT V (t) =
WT V (u).
Proof: Let σ be the closed substitution defined by σ(x) = 0 for all x ∈ V .
Then t WIF u implies σ(t) WIF σ(u) and hence WT A (t) = WT (σ(t)) =
WT (σ(u)) = WT A (u) by Def. 2.1.6.
For the second statement fix distinct actions a, b ∈ A and an injection ⌜·⌝ : V → Z_{>0} (which exists because V is countable). Let m = depth(u) + 1 = depth(t) + 1. Define the closed substitution ρ by ρ(z) = a^{⌜z⌝·m} b0 for all z ∈ V. Again, by Def. 2.1.6, t WIF u implies WT(ρ(t)) = WT(ρ(u)). By Obs. 5.3.3, for all terms v we have a1 · · · ak y ∈ WT_V(v) iff a1 · · · ak a^{⌜y⌝·m} b ∈ WT(ρ(v)) with k < m. Hence WT_V(v) is completely determined by WT(ρ(v)) and thus WT_V(t) = WT_V(u).
Lemma 5.3.5 Let |A| > 1. Suppose t WIF u and t ⇒−τ→ t′. Then there is a term u′ such that u ⇒−τ→ u′ and WT_V(u′) ⊆ WT_V(t′).
Proof: Define ρ exactly as in the previous proof. Since ρ(t) ⇒ ρ(t′) and t WIF u, there must be a q with ρ(u) ⇒ q and WT(q) ⊆ WT(ρ(t′)). Since ρ(x) is τ-free for x ∈ V it must be that q = ρ(u′) for some term u′ with u ⇒ u′. Given the relationship between WT_V(v) and WT(ρ(v)) for terms v observed in the previous proof, it follows that WT_V(u′) ⊆ WT_V(t′). In case u ⇒−τ→ u′ we are done, so assume u′ = u. Let σ be the substitution with σ(x) = 0 for all x ∈ V. Since σ(t) −τ→ and t WIF u we have σ(u) −τ→, so u −τ→ u′′ for some u′′. Now WT_V(u′′) ⊆ WT_V(u) = WT_V(u′) ⊆ WT_V(t′).
Lemma 5.3.6 Let |A| > 1. Assume that, for some terms t, u, substitution σ, a ∈ A and integer m:
1. t WIF u;
2. m ≥ depth(u); and
3. σ(t) ⇒−τ→ t̂ for a term t̂ without traces ax for x ∈ V or a^m b for b ∈ A.
Then σ(u) ⇒−τ→ û for a term û without traces ax for x ∈ V or a^m b for b ∈ A.
Proof: Based on proviso (3) there are two cases to consider.
• y ∈ WT_V(t) for some y ∈ V and σ(y) ⇒−τ→ t̂. In that case y ∈ WT_V(u) by Lem. 5.3.4, so σ(u) ⇒−τ→ t̂.
• t ⇒−τ→ t′ for some term t′ such that t̂ = σ(t′). By Lem. 5.3.5 there is a term u′ with u ⇒−τ→ u′ and WT_V(u′) ⊆ WT_V(t′). Take û = σ(u′). Clearly σ(u) ⇒−τ→ σ(u′). Suppose σ(u′) would have a weak trace a^m b. Then, by Obs. 5.3.3, there is a k ≤ m and y ∈ V such that a^k y ∈ WT_V(u′) and a^{m−k} b ∈ WT(σ(y)). Since WT_V(u′) ⊆ WT_V(t′) we have a^m b ∈ WT(σ(t′)), which is a contradiction. The case ax ∈ WT(σ(u′)) is dealt with in the same way.
The proof is now complete.
Lemma 5.3.7 Let |A| > 1 and let E be an axiomatization sound for WIF. Assume that, for some terms v, w, action a and integer m:
1. E ⊢ v 4 w;
2. m ≥ max{depth(u) | t 4 u ∈ E}; and
3. v ⇒−τ→ v̂ for a term v̂ without traces ax for x ∈ V or a^m b for b ∈ A.
Then w ⇒−τ→ ŵ for a term ŵ without traces ax for x ∈ V or a^m b for b ∈ A.
Proof: By induction on the derivation of E ⊢ v 4 w.
• Suppose E ⊢ v 4 w because σ(t) = v and σ(u) = w for some t 4 u ∈ E and substitution σ. The claim then follows by Lem. 5.3.6.
• Suppose E ⊢ v 4 w because E ⊢ v 4 u and E ⊢ u 4 w for some u. By induction, u ⇒−τ→ û for a term û without traces ax or a^m b. Hence, again by induction, w ⇒−τ→ ŵ for a term ŵ without traces ax or a^m b.
• Suppose E ⊢ v 4 w because v = v1 + v2 and w = w1 + w2 with E ⊢ v1 4 w1 and E ⊢ v2 4 w2. Since v ⇒−τ→ v̂, either v1 ⇒−τ→ v̂ or v2 ⇒−τ→ v̂. Assume, without loss of generality, that v1 ⇒−τ→ v̂. By induction, w1 ⇒−τ→ ŵ for a term ŵ without traces ax or a^m b. Now w ⇒−τ→ ŵ.
• Suppose E ⊢ v 4 w because v = cv1 and w = cw1 with c ∈ A and E ⊢ v1 ≈ w1. In this case, proviso (3) of the lemma can not be met.
• Suppose E ⊢ v 4 w because v = τ v1 and w = τ w1 with E ⊢ v1 ≈ w1. Then either v1 = v̂ or v1 ⇒−τ→ v̂. In the first case, w1 has no traces ax or a^m b by Lem. 5.3.4 and proviso (3) of the lemma; hence w has no such traces either. In the second case, by induction, w1 ⇒−τ→ ŵ for a term ŵ without traces ax or a^m b. Again w ⇒−τ→ ŵ.
The proof is now complete.
Theorem 5.3.8 If 1 < |A| < ∞, then the inequational theory of BCCS(A)
modulo WIF does not have a finite basis.
Proof: Let E be a finite axiomatization over BCCS(A) that is sound modulo WIF . Let m be greater than the depth of any term in E. According to
Lem. 5.3.7, the inequation τ (am x) + Φm 4 Φm cannot be derived from E. Yet
it is sound modulo WIF .
|A| = 1. We prove that the inequational theory of BCCS(A) modulo WIF
does not have a finite basis in case of a singleton alphabet. The cornerstone for
this negative result is the following infinite family of inequations, for m ≥ 0:
a^m x 4 a^m x + x
If |A| = 1, then these inequations are clearly sound modulo WIF. Note that, given a closed substitution ρ, WT(ρ(x)) ⊆ WT(ρ(a^m x)).
Lemma 5.3.9 If t WIF u then WT V (t) ⊆ WT V (u).
Proof: Fix a ∈ A and an injection ⌜·⌝ : V → Z_{>0}. Let m = depth(u) + 1. Define the closed substitution ρ by ρ(z) = a^{⌜z⌝·m} 0 for all z ∈ V. By Lem. 5.2.10, WCT(ρ(t)) ⊆ WCT(ρ(u)). Now suppose a1 · · · ak y ∈ WT_V(t). Then a1 · · · ak a^{⌜y⌝·m} ∈ WCT(ρ(t)) ⊆ WCT(ρ(u)) and k < m. This is only possible if a1 · · · ak y ∈ WT_V(u).
Lemma 5.3.10 Assume that, for terms t, u, substitution σ, a ∈ A, x ∈ V ,
integer m:
1. t WIF u;
2. m > depth(u); and
3. x ∈ WT V (σ(u)) and ak x 6∈ WT V (σ(u)) for 1 ≤ k < m.
Then x ∈ WT V (σ(t)) and ak x 6∈ WT V (σ(t)) for 1 ≤ k < m.
Proof: Since x ∈ WT V (σ(u)), by Obs. 5.3.3 there is a variable y with y ∈
WT V (u) and x ∈ WT V (σ(y)). Consider the closed substitution ρ given by
ρ(y) = am 0 and ρ(z) = 0 for z 6= y. Then m > depth(u) = depth(t), and
y ∈ WT V (u) implies am ∈ WT (ρ(u)) = WT (ρ(t)), so by Obs. 5.3.3 there is
some k < m and z ∈ V such that ak z ∈ WT V (t) and am−k ∈ WT (ρ(z)). As
k < m it must be that z = y. Since ak y ∈ WT V (t) and x ∈ WT V (σ(y)),
Obs. 5.3.3 implies that ak x ∈ WT V (σ(t)). By Lem. 5.3.9, ak x 6∈ WT V (σ(t))
for 1 ≤ k < m. Hence we obtain k = 0.
Lemma 5.3.11 Assume that, for E an axiomatization sound for WIF and for
terms v, w, a ∈ A, x ∈ V and integer m:
1. E ⊢ v 4 w;
2. m > max{depth(u) | t 4 u ∈ E}; and
3. x ∈ WT V (w) and ak x 6∈ WT V (w) for 1 ≤ k < m.
Then x ∈ WT V (v) and ak x 6∈ WT V (v) for 1 ≤ k < m.
Proof: By induction on the derivation of E ⊢ v 4 w.
• Suppose E ⊢ v 4 w because σ(t) = v and σ(u) = w for some t 4 u ∈ E
and substitution σ. The claim then follows by Lem. 5.3.10.
• Suppose E ⊢ v 4 w because E ⊢ v 4 u and E ⊢ u 4 w for some u. By
induction, x ∈ WT V (u) and ak x 6∈ WT V (u) for 1 ≤ k < m. Hence, again
by induction, x ∈ WT V (v) and ak x 6∈ WT V (v) for 1 ≤ k < m.
• Suppose E ⊢ v 4 w because v = v1 +v2 and w = w1 +w2 with E ⊢ v1 4 w1
and E ⊢ v2 4 w2 . Since x ∈ WT V (w), either x ∈ WT V (w1 ) or x ∈
WT V (w2 ). Assume, without loss of generality, that x ∈ WT V (w1 ). Since
ak x 6∈ WT V (w) for 1 ≤ k < m, surely ak x 6∈ WT V (w1 ) for 1 ≤ k < m.
By induction, x ∈ WT V (v1 ), and hence x ∈ WT V (v). For 1 ≤ k < m we
have ak x 6∈ WT V (w) and hence ak x 6∈ WT V (v), by Lem. 5.3.9.
• Suppose E ⊢ v 4 w because v = cv1 and w = cw1 with c ∈ A and
E ⊢ v1 ≈ w1 . In this case, proviso (3) of the lemma can not be met.
• Suppose E ⊢ v 4 w because v = τ v1 and w = τ w1 with E ⊢ v1 ≈ w1 .
Then, by proviso (3) of the lemma, x ∈ WT V (w1 ) and ak x 6∈ WT V (w1 )
for 1 ≤ k < m. By induction, x ∈ WT V (v1 ) and ak x 6∈ WT V (v1 ) for
1 ≤ k < m. Hence x ∈ WT V (v) and ak x 6∈ WT V (v) for 1 ≤ k < m.
The proof is now complete.
Theorem 5.3.12 If |A| = 1, then the inequational theory of BCCS(A) modulo
WIF does not have a finite basis.
Proof: Let E be a finite axiomatization over BCCS(A) that is sound modulo WIF . Let m be greater than the depth of any term in E. According to
Lem. 5.3.11, the inequation am x 4 am x + x cannot be derived from E. Yet,
since |A| = 1, it is sound modulo WIF .
To conclude this subsection, Thm. 5.3.8 and Thm. 5.3.12 yield
Theorem 5.3.13 If |A| < ∞, then the inequational theory of BCCS(A) modulo WIF does not have a finite basis.
5.4 n-Nested Impossible Futures
105
Again, this result, together with the link established in Section 3.3, yields
Corollary 5.3.14 If |A| < ∞, then the inequational theory of BCCSP(A) modulo -IF does not have a finite basis.
5.4 n-Nested Impossible Futures
Similar to the n-nested semantics and n-nested possible futures semantics (see,
e.g., [AFvGI04]), one can define n-nested impossible futures semantics.2
Definition 5.4.1 Assume a labeled transition system. For each n ≥ 0, the n-nested impossible futures preorder -n on states is defined by:
• s1 -0 s2 for any states s1, s2;
• s1 -n+1 s2 if s1 −a1···ak→ s′1 implies s2 −a1···ak→ s′2 with s′2 -n s′1.
We write ≃n for -n ∩ %n.
Note that -n+1 ⊂ ≃n ⊂ -n for n ≥ 1. Moreover, -1 coincides with trace preorder, while -2 = -IF. It is also easy to see that the limit of the n-nested impossible futures preorders or equivalences coincides with concrete bisimilarity. We will argue that apart from -IF, no nested impossible futures semantics allows a finite, ground-complete axiomatization.
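On finite synchronization trees, Def. 5.4.1 can be decided by brute force, which is convenient for experimenting with this hierarchy on small closed BCCSP terms. The sketch below is ours and not from the thesis (the names leq, traces and after are our own); it simply unfolds the definition.

    import Data.List (nub)

    -- Closed terms represented by their summand lists.
    newtype Term = Term [(Char, Term)] deriving (Eq, Show)

    -- All traces of p, including the empty one.
    traces :: Term -> [[Char]]
    traces (Term xs) = nub ([] : [a : w | (a, p') <- xs, w <- traces p'])

    -- The states reachable from p by performing the trace w.
    after :: [Char] -> Term -> [Term]
    after []      p         = [p]
    after (a : w) (Term xs) = concat [after w p' | (b, p') <- xs, b == a]

    -- leq n s1 s2 decides the n-nested impossible futures preorder s1 -n s2.
    leq :: Int -> Term -> Term -> Bool
    leq 0 _  _  = True
    leq n s1 s2 =
      and [ any (\s2' -> leq (n - 1) s2' s1') (after w s2)
          | w <- traces s1, s1' <- after w s1 ]

In particular, leq 1 coincides with trace inclusion and leq 2 with -IF on such terms, matching the remarks above.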
In the proof of this result, which basically consists of a generalization of the proofs of Lem. 5.2.12, Lem. 5.2.13 and Corollary 5.2.16, we shall make use of formulae in the modal characterization of the n-nested impossible futures preorders. A state s satisfies the modal formula ⟨a⟩ϕ if there exists a transition s −a→ s′ where s′ satisfies the modal formula ϕ.
Definition 5.4.2 For n ≥ 0, we define a set Ln of modal formulae:
• L0 contains only ⊤ and ⊥;
• Ln+1 is given by the BNF ϕ ::= ⟨a1⟩ · · · ⟨ak⟩ ¬ϕ′ (a1 · · · ak ∈ A*, ϕ′ ∈ Ln).
Lemma 5.4.3 Let n ≥ 0. If s1 -n s2, then ∀ϕ ∈ Ln : s1 |= ϕ ⇒ s2 |= ϕ.
Proof: By induction on n. The base case is trivial. Suppose s1 -n+1 s2, and let s1 |= ϕ ∈ Ln+1, where ϕ = ⟨a1⟩ · · · ⟨ak⟩ ¬ϕ′ with ϕ′ ∈ Ln. Then s1 −a1···ak→ s′1 with s′1 ⊭ ϕ′. Since s1 -n+1 s2, s2 −a1···ak→ s′2 with s′2 -n s′1. By the induction hypothesis, s′2 ⊭ ϕ′. Then s′2 |= ¬ϕ′, and thus s2 |= ϕ.
The operator ;_m a^ℓ adds a sequence of ℓ a-transitions to every state at depth m from which no transition is available.
2 Here we only deal with the concrete version; and we omit the subscript "IF" since it is clear from the context.
Definition 5.4.4 [AFvGI04, Def. 31] For k, ℓ ≥ 0, define the operator ;_k a^ℓ on closed terms inductively by
( Σ_{i=1}^m a_i p_i ) ;_{k+1} a^ℓ = Σ_{i=1}^m a_i (p_i ;_k a^ℓ)
(bp + q) ;_0 a^ℓ = bp + q
0 ;_0 a^ℓ = a^ℓ 0 .
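A direct functional reading of this operator, again on closed terms represented by their summand lists, could look as follows (our sketch; the names aPow and glue are not from the thesis).

    newtype Term = Term [(Char, Term)] deriving (Eq, Show)

    aPow :: Int -> Term                 -- the term a^l 0
    aPow 0 = Term []
    aPow l = Term [('a', aPow (l - 1))]

    -- glue k l p computes p ;_k a^l: it appends a^l 0 at every deadlocked state
    -- of p that sits at depth exactly k.
    glue :: Int -> Int -> Term -> Term
    glue 0 l (Term []) = aPow l
    glue 0 _ p         = p
    glue k l (Term xs) = Term [ (a, glue (k - 1) l p') | (a, p') <- xs ]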
In the remainder of this section, we assume without loss of generality that
A = {a}. This is justified because in the coming proofs we will only consider
inequations t 4 u and equations t ≈ u where no actions b 6= a occur in t and
u; and it is easy to see that any sound derivation of such an (in)equation cannot contain an occurrence of an action b ≠ a.
For n ≥ 1 and m ≥ 0, we define formulae ϕ_n^m:
ϕ_1^m = ⟨a⟩^m ¬⟨a⟩⊤
ϕ_{n+1}^m = ⟨a⟩ ¬ϕ_n^m .
In other words, ϕ_n^m = (⟨a⟩¬)^{n−1} ⟨a⟩^m ¬⟨a⟩⊤. By induction on n, it is easy to see that ϕ_n^m ∈ L_{n+1}.
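Both the formulae ϕ_n^m and the satisfaction relation |= are easy to mechanize on finite synchronization trees, which can be used to check small instances of Lem. 5.4.3 or of the claim p_k^m |= ϕ_k^m in the proof of Thm. 5.4.8 below. The sketch is ours and not from the thesis (the constructor names and the functions sat and phi are our own).

    newtype Term = Term [(Char, Term)] deriving (Eq, Show)

    data Form = Top | Bot | Dia Char Form | Neg Form  deriving Show
    -- Formulae of L_{n+1} have the shape Dia a1 (... (Dia ak (Neg f))) with f in L_n.

    sat :: Term -> Form -> Bool
    sat _         Top       = True
    sat _         Bot       = False
    sat p         (Neg f)   = not (sat p f)
    sat (Term xs) (Dia a f) = or [sat p' f | (b, p') <- xs, b == a]

    -- phi n m builds phi_n^m = (<a> not)^(n-1) <a>^m not <a> Top, over A = {'a'}.
    phi :: Int -> Int -> Form
    phi 1 m = foldr (\_ f -> Dia 'a' f) (Neg (Dia 'a' Top)) [1 .. m]
    phi n m = Dia 'a' (Neg (phi (n - 1) m))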
We now formulate a slight variation of [AFvGI04, Lem. 36].
Lemma 5.4.5 Let t be a term with depth(t) < m and depth(ρ(t)) < 2m + n, for some m, n ≥ 1. Let ρ be a closed substitution with ρ(y) = 0 for each variable y that occurs at multiple depths in t. Let ρ′ be a closed substitution with ρ′(x) = ρ(x) ;_{m+n−1−d_x} a^{m+1} 0 if ρ(x) ≠ 0 and x ∈ var_{d_x}(t), and ρ′(x) = 0 if ρ(x) = 0. Then
ρ(t) |= ϕ_n^m ⇔ ρ′(t) |= (⟨a⟩¬)^{n−1} ⟨a⟩^{2m+1} ⊤ .
Proof: (Sketch) By induction on m, we can show
ρ′(t) = ρ(t) ;_{m+n−1} a^{m+1} .
And by induction on m + n, we can show
ρ(t) |= ϕ_n^m ⇔ ρ(t) ;_{m+n−1} a^{m+1} |= (⟨a⟩¬)^{n−1} ⟨a⟩^{2m+1} ⊤ .
(The latter proof uses that A = {a}.)
Lemma 5.4.6 Let n ≥ 1. Assume that, for some terms t, u and closed substitution ρ:
1. t -n u;
2. m > depth(u);
3. WCT(ρ(u)) ⊆ {a^{m+n−1}, a^{2m+n−1}}; and
4. ρ(t) |= ϕ_n^m.
Then ρ(u) |= ϕ_n^m.
Proof: From provisos (2) and (3), it is not hard to see that ρ(y) = 0 for each variable y that occurs at multiple depths in u. So by Lem. 4.2.1, the same holds for t. Let ρ′ be defined as in Lem. 5.4.5. By proviso (4), ρ(t) |= ϕ_n^m, so by Lem. 5.4.5, ρ′(t) |= (⟨a⟩¬)^{n−1} ⟨a⟩^{2m+1} ⊤. Note that (⟨a⟩¬)^{n−1} ⟨a⟩^{2m+1} ⊤ ∈ L_n. By proviso (1), ρ′(t) -n ρ′(u), so by Lem. 5.4.3, ρ′(u) |= (⟨a⟩¬)^{n−1} ⟨a⟩^{2m+1} ⊤. Hence, again by Lem. 5.4.5, ρ(u) |= ϕ_n^m.
Lemma 5.4.7 Let n ≥ 2. Let the finite axiomatization E be sound modulo ≃n. Assume that, for some closed terms p, q:
1. E ⊢ p ≈ q;
2. m > depth(q);
3. WCT(q) ⊆ {a^{m+n−1}, a^{2m+n−1}}; and
4. p |= ϕ_n^m.
Then q |= ϕ_n^m.
Proof: By induction on the derivation of E ⊢ p ≈ q.
The case ρ(t) = p and ρ(u) = q for some t ≈ u ∈ E and closed substitution ρ follows from Lem. 5.4.6.
The other three cases ((1) E ⊢ p ≈ r and E ⊢ r ≈ q; (2) p = p1 + p2 and q = q1 + q2 with E ⊢ p1 ≈ q1 and E ⊢ p2 ≈ q2; (3) p = ap′ and q = aq′ with E ⊢ p′ ≈ q′) can be dealt with in the same way as in the proof of Lem. 5.2.13.

Theorem 5.4.8 Let n ≥ 2. There is no finite, sound, ground-complete axiomatization for BCCSP(A) modulo ≃n.
Proof: Let E be a finite axiomatization that is sound modulo ≃n. Let m be greater than the depth of any term in E. For k ≥ 0, we define closed terms p_k^m and q_k^m:
p_0^m = a^{2m−1} 0        q_0^m = a^{m−1} 0
p_{k+1}^m = a p_k^m + a q_k^m        q_{k+1}^m = a p_k^m .
Clearly, q_k^m -_{k+1} p_k^m. This induces that p_k^m ≃_k q_k^m.
It is not hard to see that p_k^m |= ϕ_k^m while q_k^m ⊭ ϕ_k^m (for k ≥ 1). So by Lem. 5.4.7, p_n^m ≈ q_n^m cannot be derived from E. Hence, E is not ground-complete.
Likewise we can prove this non-axiomatizability result for -n in case n ≥ 3. The reason that the proof can be shifted from equivalences to preorders without problem is that the key result Lem. 5.4.6 is formulated for -n. The reason that the proof does not extend to -2 is that -2 ⊈ ≃CT, while this inclusion is essential in the proof of Lem. 5.4.7 (see also the proof of Lem. 5.2.13). On the other hand, -3 ⊆ ≃CT does hold (see Lem. 5.2.10).
Theorem 5.4.9 Let n ≥ 3. There is no finite, sound, ground-complete axiomatization for BCCSP(A) modulo -n .
5.5 Conclusion
We have performed a comprehensive and systematic study of the axiomatizability of concrete and weak impossible futures semantics over the process algebras BCCSP and BCCS. Tab. 5.2 presents an overview (as in Tab. 4.1), with a + indicating that a finite basis exists and a – indicating that a finite basis does not exist. The table expands in two dimensions: ground-completeness vs. ω-completeness, and preorder vs. equivalence. Where necessary, we distinguish two categories according to the cardinality of the alphabet A: finite or infinite.
                                ground-comp.         ω-comp.
                                1 ≤ |A| ≤ ∞     |A| = ∞    1 ≤ |A| < ∞
  (concrete/weak) preorder           +              +            –
  (concrete/weak) equivalence        –              –            –
  n(≥ 3)-nested preorder             –              –            –
  n(≥ 2)-nested equivalence          –              –            –

Table 5.2: Summary of the axiomatizability of impossible futures
Chapter 6
Priority
6.1 Introduction
Programming and specification languages often include constructs to describe
mode switches (see, e.g., [Mau91, MTHM97]). Indeed, some form of mode transfer in computation appears in operating systems in the guise of interrupts, in
programming languages as exceptions, and in the behavior of control programs
and embedded systems as discrete “mode switches” triggered by changes in the
state of their environment. Such mode changes are often used to encode different levels of urgency amongst the actions that can be performed by a system as
it computes, and implement variations on the notion of pre-emption.
In light of the ubiquitous nature of mode changes in computation, it is not
surprising that classic process description languages include primitive operators
to describe mode changes – for example, LOTOS [Bri85, ISO87] offers the so-called disruption operator – or have been extended with variations on mode
transfer operators. Examples of such operators that may be added to the process
algebra CCS are discussed by Milner in [Mil89a, page 192–193], and Dsouza and
Bloom offer in [DB95] some discussion on the benefits of adding one of those,
viz. the checkpointing operator, to CCS.
One of the most widely studied, and natural, notions used to implement
different levels of urgency between system actions is priority. (A thorough and
clear discussion of the different approaches to the study of priority in process description languages may be found in [CLN01].) In this chapter, we consider the
well-known priority operator Θ studied by Baeten, Bergstra and Klop [BBK86]
in the context of process algebra. (See [CH90, CW95, CLNS96, CLN07, Phi08]
for later accounts of the priority operator in the setting of process description
languages.) The priority operator Θ gives certain actions priority over others
based on an irreflexive partial ordering relation < over the set of actions. Intuitively, a < b is interpreted as “b has priority over a”. This means that, in
the context of the priority operator Θ, action a is pre-empted by action b. For
example, if p is some process that can initially perform both a and b, then Θ(p)
will initially only be able to execute the action b.
In their classic paper [BBK86], Baeten, Bergstra and Klop provided a sound
and ground-complete axiomatization for this operator modulo bisimulation equivalence. Their axiomatization uses predicates on actions (to express priorities
between actions) and one extra auxiliary operator. Bergstra showed in the
earlier paper [Ber85] that, in case of a finite alphabet of actions, there exists
a finite, ground-complete axiomatization for Θ, without action predicates and
help operators. So, if the set of actions is finite, neither equations with action
predicates as conditions nor auxiliary operators, as used in [BBK86], are actually necessary to obtain a finite axiomatization of bisimulation equivalence
over basic process description languages enriched with the priority operator.
However, can Bergstra’s positive result be extended to a setting with a countably infinite collection of actions? Or are equations with action predicates as
conditions and auxiliary operators necessary to obtain a finite axiomatization
of bisimulation equivalence in the presence of an infinite collection of actions?
(Note that infinite sets of actions are common in process calculi, and arise, for
instance, in the setting of value- or name-passing calculi, e.g. value-passing CCS
[Mil89a], µCRL [GR01] and π-calculus [MPW92a, MPW92b].) The aim of this
chapter is to provide a thorough answer to these questions in the setting of the
process algebra BCCSP enriched with the priority operator Θ. In case of an
infinite alphabet, we permit the occurrence of action variables in axioms.
This chapter considers the equational theory of BCCSP with the priority
operator Θ from [BBK86] modulo bisimulation equivalence. Our first main
result is a theorem indicating that the use of equations with action predicates
as conditions is indeed inevitable in order to offer a finite, ground-complete
axiomatization of bisimulation equivalence over the basic process language we
consider in this study. To this end, we prove that, in case of an infinite alphabet
and in the presence of at least one priority relation a < b between a pair of
actions, there is no finite, ground-complete axiomatization for BCCSP enriched
with the priority operator (Thm. 6.4.3). This result even applies if one is allowed
to add an arbitrary collection of help operators to the syntax. Thm. 6.4.3
offers a very strong indication that the use of equations with action predicates
as conditions is essential for axiomatizing Θ, and cannot be circumvented by
introducing auxiliary operators. (This is in contrast to the classic positive and
negative results on the existence of finite, ground-complete axiomatizations for
parallel composition offered in [BK84, Mol90a, Mol90b].)
The idea underlying the proof of Thm. 6.4.3 is that for each finite sound
axiomatization E there is a pair of actions c, d that does not occur in E. If c
and d are incomparable, then
Θ(c0 + d0) ≈ c0 + d0
is sound modulo bisimulation equivalence. However, using a simple renaming
argument, we show that a derivation of this equation from E would give rise to
a derivation of the unsound equation Θ(a0 + b0) ≈ a0 + b0. Likewise, if c < d,
then
Θ(c0 + d0) ≈ d0
is sound modulo bisimulation equivalence. But we prove that a derivation of
this equation from E would give rise to a derivation of the unsound equation
Θ(d0 + c0) ≈ c0.
Having established that equations with action predicates as conditions are
necessary in order to obtain a finite, ground-complete axiomatization of bisimulation equivalence, we then proceed to investigate whether, in the presence
of an infinite set of actions, this equivalence can be finitely axiomatized using
equations with action predicates as conditions, but without auxiliary operators like the unless operator used in [BBK86]. We show that, in general, the
answer to this question is negative. This we do by exhibiting a priority structure with respect to which bisimulation equivalence affords no finite, sound and
ground-complete axiomatization in terms of equations with action predicates
as conditions (Thm. 6.5.6). This shows that, in general, the use of auxiliary
operators is indeed necessary to axiomatize bisimulation equivalence finitely,
even using equations with action predicates as conditions and over the simple
language considered in this study. The priority structure used in the proof of
Thm. 6.5.6 consists of actions ai and bi for i ≥ 1 together with an action c,
where ai < bi < c for each i ≥ 1. We prove that given a finite sound axiomatization E consisting of equations with action predicates as conditions, the sound
equation
Θ(b1 0 + · · · + bn 0) ≈ b1 0 + · · · + bn 0
cannot be derived from E, for a sufficiently large n.
In contrast to the aforementioned negative results, we exhibit a countably infinite, ground-complete axiomatization for bisimulation equivalence over
BCCSP with the priority operator in terms of equations with action predicates
as conditions (Thm. 6.5.9). This axiomatization suggests that, in general, infinite collections of pairwise incomparable actions with respect to the priority
relation < are the source of our negative result presented in Thm. 6.5.6. It is
therefore natural to ask ourselves whether there are conditions that can be imposed on the poset of actions that are sufficient to guarantee that bisimulation
equivalence be finitely axiomatizable using equations with action predicates as
conditions, but without auxiliary operators. We conclude the technical developments in this chapter by proposing some such sufficient conditions. The most
general of these applies to all priority structures such that
1. the collection of the sizes of the finite, maximal anti-chains is finite,
2. there are only finitely many infinite, maximal anti-chains, and
3. for each infinite, maximal anti-chain A, each element of A is above the
same set of actions, that is, for each a, b ∈ A and action c, we have that
c < a iff c < b.
Our results add the priority operator to the list of operators whose addition to a
process algebra spoils finite axiomatizability modulo bisimulation equivalence;
see, e.g., [AFIL05, AFIN06, Mol90a, Mol90b, Sew97] for other examples of nonfinite axiomatizability results over process algebras. Notably, in [AFIN06] two
mode transfer operators from [BB00] are studied in the setting of the basic
process algebra BPA. It is shown that, even in the presence of just one action,
the interrupt operator does not have a finite, ground-complete axiomatization,
while the disrupt operator does. In the interrupt operator, a process p can be
interrupted by another process q; upon termination of q, process p resumes its
computation. In the disrupt operator, a process p can be preempted by another
process q, after which the execution of p is aborted.
Structure of the chapter. Section 6.2 contains the preliminaries. In Section 6.3, the finite axiomatization for the priority operator Θ from [Ber85] is
presented. Section 6.4 contains a proof of a result to the effect that, in case
of an infinite alphabet, there is no finite, ground-complete axiomatization for
the priority operator modulo bisimulation equivalence, even in the presence of
auxiliary operators. Finally, we show that, in the presence of an infinite set of
actions, in general bisimulation equivalence does not afford a finite axiomatization in terms of equations with action predicates as conditions without the use
of auxiliary operators (Section 6.5.1), and we identify sufficient conditions on
the priority structure over actions that lead to the existence of a finite axiomatization using equations with action predicates as conditions (Section 6.5.2).
Section 6.6 concludes the chapter.
6.2 Preliminaries
We begin by introducing the basic definitions and results on which the technical
developments to follow are based.
6.2.1 The Language BCCSPΘ
In this chapter, we use Act to denote a non-empty alphabet of atomic actions,
with typical elements a, b, c, d, e. Over Act we assume an irreflexive, transitive
partial ordering < to express priorities between actions.1 Intuitively, a < b
expresses that the action b has priority over the action a. We say that actions
a1 , . . . , an are incomparable if they are distinct and ai < aj does not hold for any 1 ≤ i, j ≤ n.
The language of processes we shall consider in this chapter, henceforth referred to as BCCSPΘ , is obtained by adding the unary priority operator Θ from
[BBK86] to the basic process algebra BCCSP [vG90, vG01]. The language is
given by the following grammar:
t ::= 0 | at | t + t | Θ(t) | x | αt ,
where a ranges over Act, x is a process variable and α is an action variable.
Process and action variables range over given, disjoint countably infinite sets.
We use x, y, z to range over the collection of process variables, and α, β as typical
action variables2 .
1 We use Act instead of A in previous chapters to emphasize the partial ordering structure.
2 N.B. this is in contrast to previous chapters where α, β range over Aτ.
We use t, u, v to range over the collection of open process terms of BCCSPΘ. As usual, a process term is closed if it does not contain any variables, and p, q, r range over the set of closed terms T(BCCSPΘ). The size of a term is defined as follows: size(0) = size(x) = 0; size(at) = size(αt) = 1 + size(t); size(t + u) = size(t) + size(u); and size(Θ(t)) = size(t).
Remark 6.2.1 The readers might have already noticed that we consider a
slightly extended syntax for BCCSP, in that we allow for the use of prefixing operators of the form α , where α is an action variable. The use of action
variables is natural in the presence of infinite sets of actions, and will allow us
to formulate stronger versions of the negative results to follow.
A substitution maps each process variable to a process term, and each action
variable to an action or action variable. A substitution is closed if it maps
process variables to closed process terms and action variables to actions. For
every term t and substitution σ, the term obtained by replacing occurrences of
process variables x and action variables α in t with σ(x) and σ(α), respectively,
is written σ(t). Note that σ(t) is closed if so is σ. For example, σ(αx) = a0 if
σ(α) = a and σ(x) = 0.
As usual, the operational semantics of the language BCCSPΘ is specified by the transition rules given in Tab. 6.1, where a ranges over Act. This gives rise to an Act-labeled transition relation, which is the unique supported model in the sense of [BIM95]. Intuitively, closed terms in the language BCCSPΘ represent finite process behaviors, where 0, p + q and ap are exactly the same as in BCCSP, while the process graph of Θ(p) is obtained by eliminating all transitions q −a→ q′ from the process graph of p for which there is a transition q −b→ q′′ with a < b.

              x1 −a→ y           x2 −a→ y           x −a→ y    ¬(x −b→) for a < b
  ax −a→ x    ──────────────     ──────────────     ──────────────────────────────
              x1 + x2 −a→ y      x1 + x2 −a→ y              Θ(x) −a→ Θ(y)

  Table 6.1: SOS for BCCSPΘ
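The effect of Θ on a finite process graph, as just described, can be phrased as a recursive filter on finite synchronization trees. The following Haskell sketch is ours and not from the thesis (the representation and the name prio are our own); the parameter lt encodes the priority order, so lt a b stands for a < b.

    newtype Term = Term [(Char, Term)] deriving (Eq, Show)

    -- prio lt p keeps, at every node, only those transitions whose action is not
    -- pre-empted by a higher-priority action available at the same node.
    prio :: (Char -> Char -> Bool) -> Term -> Term
    prio lt (Term xs) =
      Term [ (a, prio lt p') | (a, p') <- xs
           , not (any (\(b, _) -> lt a b) xs) ]

For instance, if lt 'a' 'b' holds (and no other pair is related), then prio lt applied to the tree of a0 + b0 yields the tree of b0, in line with the example given in the introduction.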
We consider the language BCCSPΘ modulo bisimulation equivalence.
Definition 6.2.2 A binary symmetric relation R over T(BCCSPΘ ) is a bisimulation if p R q together with p −−a→ p′ imply q −−a→ q ′ for some q ′ with p′ R q ′ .
We write p ↔ q if there is a bisimulation relating p and q. The relation ↔ will
be referred to as bisimulation equivalence or bisimilarity.
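On finite synchronization trees this coinductive definition collapses to a plain recursion, as in the following sketch (ours, not from the thesis), reusing the summand-list representation from the previous sketch.

    newtype Term = Term [(Char, Term)] deriving (Eq, Show)

    bisim :: Term -> Term -> Bool
    bisim (Term xs) (Term ys) =
         and [ any (\(b, q') -> a == b && bisim p' q') ys | (a, p') <- xs ]
      && and [ any (\(a, p') -> a == b && bisim p' q') xs | (b, q') <- ys ]

A term containing Θ would first be evaluated to its synchronization tree, e.g. with a function such as prio above.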
It is well-known that ↔ is an equivalence relation. Moreover, the transition
rules are in the GSOS format of [BIM95]. (We mention in passing that recently Aceto and Ingolfsdottir [AI08] show that the priority operator cannot
114
Chapter 6 Priority
be expressed using positive rule formats for operational semantics.) Hence,
bisimulation equivalence is a congruence with respect to all the operators in
the signature of BCCSPΘ , meaning that p ↔ q implies C[p] ↔ C[q] for each
BCCSPΘ -context C[·].
We can therefore consider the algebra of the closed terms in T(BCCSPΘ )
modulo ↔. In Section 6.4, we shall offer results that apply to any signature
Σ which extends that for BCCSPΘ . To this end, we shall tacitly assume that
all of the new operators in Σ also preserve bisimulation equivalence, and are
semantically interpreted as operations over finite synchronization trees.
Our order of business in the remainder of this chapter will be to offer a thorough study of the equational theory of the language BCCSPΘ modulo bisimulation equivalence. We begin our investigation by considering the case in which
the set of actions Act is finite in the following section. We then move on to
investigate the equational properties of bisimulation equivalence over BCCSPΘ
when the set of actions is infinite (Sections 6.4 and 6.5).
6.3 |Act| < ∞
In this section, we assume that the action set is finite. The axioms in Tab. 6.2
were put forward by Bergstra in [Ber85]. Note that, in the case of a finite action
set, this axiom system is finite, since then the axiom schemas PR2-4 give rise
to finitely many equations.
  PR1   Θ(0) ≈ 0
  PR2   Θ(ax + ay + z) ≈ Θ(ax + z) + Θ(ay + z)
  PR3   Θ(ax + by + z) ≈ Θ(by + z)                                  (a < b)
  PR4   Θ(a1 x1 + · · · + an xn) ≈ a1 Θ(x1) + · · · + an Θ(xn)      (a1, . . . , an incomparable)

  Table 6.2: Axiomatization in case of |Act| < ∞
Theorem 6.3.1 ([Ber85]) The axiom system consisting of A1-4 in Tab. 2.4
and PR1-4 in Tab. 6.2 is sound and ground-complete for BCCSPΘ modulo ↔.
Proof: (Sketch) Since ↔ is a congruence with respect to BCCSPΘ , soundness
can be checked for each axiom separately. This is an easy exercise.
Next observe that, using PR1-4, one can remove all occurrences of Θ from
closed terms. Then ground-completeness follows from the well-known ground
completeness of A1-4 for BCCSP modulo ↔ (see, e.g., [HM85]).
In the remainder of this chapter, as usual, process terms are considered modulo associativity and commutativity of +. In other words, we do not distinguish t + u and u + t, nor (t + u) + v and t + (u + v). We use a summation Σ_{i=1}^n t_i to denote t1 + · · · + tn, where the empty sum represents 0. Such a summation is said to be in head normal form if each term ti is of the form ai t′i or αi t′i for some action ai or action variable αi, and term t′i.
It is easy to see that modulo the axioms A1 and A2, every term t in the language BCCSPΘ has the form Σ_{i∈I} ti, for some finite index set I, and terms ti (i ∈ I) that do not have the form t′ + t′′. The terms ti (i ∈ I) will be referred to as the summands of t. For example, the term Θ(a0 + b0) has only itself as summand.
Remark 6.3.2 Note that the axiom system in Tab. 6.2 is not strong enough to
prove all of the sound equations over the language BCCSPΘ modulo bisimulation
equivalence. For instance, as the readers can check, the equation
Θ(Θ(x) + y) ≈ Θ(x + y)
is sound modulo bisimulation equivalence irrespective of the cardinality of the
set of actions Act and of the ordering relation <. That equation, however,
cannot be proven from those in Tab. 6.2.
6.4 |Act| = ∞
In this section, we deal with the case that the action set is infinite. Our main
result is that bisimulation equivalence does not afford a finite, ground-complete
axiomatization over the language BCCSPΘ , provided that Act contains at least
two actions a, b with a < b. (Otherwise, the equation Θ(x) ≈ x would be sound,
and the priority operator could be eliminated from all terms.) This negative
result even applies if BCCSPΘ is extended with an arbitrary collection of operators (over finite synchronization trees) for which bisimulation equivalence is a
congruence.
The idea behind the proof of our main result of this section is that a finite
axiom system E can mention only finitely many action names. So, since Act
is infinite, we can find a pair c, d of distinct actions that do not occur in E. If
c and d are incomparable, then the equation Θ(c0 + d0) ≈ c0 + d0 is sound;
if c < d, then Θ(c0 + d0) ≈ d0 is sound. In the first case, we show that an
equational proof of Θ(c0 + d0) ≈ c0 + d0 from E would give rise to a proof of
the unsound equation Θ(a0 + b0) ≈ a0 + b0 from E. This follows by a simple
renaming argument, using that c and d do not occur in E. Likewise, in the
second case, a proof of Θ(c0 + d0) ≈ d0 from E would give rise to a proof of
the unsound equation Θ(d0 + c0) ≈ c0 from E.
To present the formal proof of the aforementioned negative result, first we
introduce the action renaming mentioned in the proof idea sketched above.
Definition 6.4.1 Let A ⊆ Act, and let Σ be a signature that includes the set of operators in BCCSPΘ. We extend each renaming function ρ : A → Act to a function ρ from terms over Σ to terms over Σ as follows, where f is any operator that is not of the form a_ :
  ρ(0)                 = 0
  ρ(at)                = ρ(a)ρ(t)                if a ∈ A
  ρ(at)                = aρ(t)                   if a ∉ A
  ρ(x)                 = x
  ρ(αt)                = αρ(t)
  ρ(f(t1, . . . , tn)) = f(ρ(t1), . . . , ρ(tn))
For each substitution σ, the substitution ρ(σ) is defined by ρ(σ)(x) = ρ(σ(x)) and
  ρ(σ)(α) = ρ(σ(α))    if σ(α) ∈ A
  ρ(σ)(α) = σ(α)       otherwise .
The following lemma states that renaming of actions that are not mentioned in
an axiom system E preserves provability.
Lemma 6.4.2 Let A ⊆ Act and ρ : A → Act. Let Σ be a signature that
includes the set of operators in BCCSPΘ . Let E be a collection of equations
over Σ, and assume that all of the actions a ∈ A do not occur in E. Then
E ⊢ p ≈ q implies E ⊢ ρ(p) ≈ ρ(q).
Proof: The proof is by induction on the depth of a closed proof of p ≈ q from E.
We proceed by a case analysis on the last rule used in the proof of p ≈ q from E.
The case of reflexivity is trivial, and that of transitivity follows immediately by
using the induction hypothesis. Below we only consider the other cases, namely
the instantiation of an axiom and closure under contexts. (Since we are dealing
with closed proofs, closure with respect to prefixing by action variables need not
be considered.)
• Case E ⊢ p ≈ q because σ(t) = p and σ(u) = q for some equation
t ≈ u ∈ E and closed substitution σ. Then ρ(p) = ρ(σ(t)) = ρ(σ)(ρ(t)).
According to the proviso of the lemma, no action a ∈ A occurs in t, so
clearly ρ(t) = t. Similarly, ρ(q) = ρ(σ(u)) = ρ(σ)(ρ(u)) = ρ(σ)(u). Since
t ≈ u ∈ E, by substitution instance, E ⊢ ρ(σ)(t) ≈ ρ(σ)(u). In other
words, E ⊢ ρ(p) ≈ ρ(q), which was to be shown.
• Case E ⊢ p ≈ q because p = ap′ and q = aq ′ where E ⊢ p′ ≈ q ′ . If a ∈ A,
then ρ(p) = ρ(a)ρ(p′ ) and ρ(q) = ρ(a)ρ(q ′ ); otherwise, ρ(p) = aρ(p′ ) and
ρ(q) = aρ(q ′ ). In either case, by induction, E ⊢ ρ(p′ ) ≈ ρ(q ′ ). By context
closure, E ⊢ ρ(p) ≈ ρ(q).
• Case E ⊢ p ≈ q because p = f (p1 , . . . , pn ) and q = f (q1 , . . . , qn ), for some
operator f in the signature that is not of the form a , where E ⊢ pi ≈ qi
for i = 1, . . . , n. By definition, ρ(p) = f (ρ(p1 ), . . . , ρ(pn )) and ρ(q) =
f (ρ(q1 ), . . . , ρ(qn )). By induction, E ⊢ ρ(pi ) ≈ ρ(qi ) for i = 1, . . . , n. By
context closure, E ⊢ ρ(p) ≈ ρ(q).
The proof is now complete.
With Lem. 6.4.2 at our disposal, we are now in a position to show the first
main result of this chapter:
Theorem 6.4.3 Let |Act| = ∞, and a < b for two actions a, b ∈ Act. Let Σ
be a signature consisting of the operators in BCCSPΘ , together with auxiliary
operators for which bisimulation equivalence is a congruence. Then bisimulation
equivalence has no finite, sound and ground-complete axiomatization over T(Σ).
Proof: We need to show that no finite axiom system is both sound and ground-complete for T(Σ) modulo ↔. Let E be a finite axiom system over T(Σ) that is
sound modulo ↔. Fix a pair of distinct actions c, d ∈ Act that do not occur in
E. We can select c, d such that either they are incomparable, or c < d. In the
first case, the following equation is sound modulo ↔:
Θ(c0 + d0) ≈ c0 + d0 .
Assume, towards a contradiction, that this equation can be derived from E.
Consider the renaming function ρ defined as: ρ(c) = a and ρ(d) = b. Since c, d
do not occur in E, Lem. 6.4.2 yields that E ⊢ ρ(Θ(c0 + d0)) ≈ ρ(c0 + d0). That
is, E ⊢ Θ(a0 + b0) ≈ a0 + b0, which is not sound modulo ↔, since a < b. This
contradicts the soundness of E.
In the second case, the following equation is sound modulo ↔:
Θ(c0 + d0) ≈ d0 .
Again, assume, towards a contradiction, that this equation can be derived from
E. Consider the renaming function ρ defined as: ρ(c) = d and ρ(d) = c. Since
c, d do not occur in E, Lem. 6.4.2 yields that E ⊢ ρ(Θ(c0 + d0)) ≈ ρ(d0).
That is, E ⊢ Θ(d0 + c0) ≈ c0, which is not sound modulo ↔. Once more, this
contradicts the soundness of E.
In either case, we can conclude that the axiom system E is not ground-complete.
6.5 Axiomatizing Priority over an Infinite Action Set, Conditionally
Thm. 6.4.3 offers very strong evidence that, in the presence of an infinite set
of actions, equational logic is inherently not sufficiently powerful to achieve a
finite axiomatization of bisimilarity over closed terms in the language BCCSPΘ .
Indeed, that result holds true even in the presence of an arbitrary number of
auxiliary operators.
In the presence of action variables, it is natural to view our language as
consisting of two sorts: one for actions and the other for processes. This is all
the more true because the set of actions has the structure of a partial order, and
we should like to express axioms over processes that reflect the influence that
this poset structure on actions has on the behavior of processes. In case our set
of actions is finite, this can be done by means of a finite number of equations
that are instances of PR3 and PR4 in Tab. 6.2.
In the presence of an infinite action set, however, the axiom schemas PR3
and PR4, as well as PR2, have infinitely many instances. One way to try and
capture their effects finitely is to take seriously the idea that, in the presence of
action variables, the equation schemas PR3 and PR4 can be phrased as equations
with action predicates as conditions thus:
    CPR3      (α < β)  ⇒  Θ(αx + βy + z) ≈ Θ(βy + z)

    CPR4n     (⋀_{1≤i,j≤n} ¬(αi < αj))  ⇒  Θ(α1 x1 + · · · + αn xn) ≈ α1 Θ(x1) + · · · + αn Θ(xn)    (n ≥ 0) .
In both of the above equations, we use predicates over actions to restrict the applicability of the equation on the right-hand side of the implication. In general,
henceforth in this study we shall consider equations of the form
    P ⇒ t ≈ u ,
where P is a predicate over actions, and t ≈ u is an equation over the language
BCCSPΘ .
In what follows, we shall take a semantic view of predicates over actions.
An action predicate P will be simply identified with the collection of closed
substitutions that satisfy it – with the proviso that two closed substitutions
that agree over the collection of action variables are either both in P or neither
of them is. As we did above for CPR3 and CPR4n , we shall often express
predicates over actions using formulae in first-order logic with equality and the
binary relation symbol <. The definition of the collection of closed substitutions
that satisfy a predicate P expressed using such formulae is entirely standard,
and we omit the details. For example, a closed substitution σ satisfies the
predicate α < β if, and only if, σ(α) < σ(β) holds in the poset (Act, <). We
sometimes write σ(P ) = true if the closed substitution σ satisfies the predicate
P . We say that a predicate is satisfiable if some closed substitution satisfies it.
If P is a tautology, then we simply write t ≈ u. For instance, a version of PR2
with action variables will be written thus:
    CPR2      Θ(αx + αy + z) ≈ Θ(αx + z) + Θ(αy + z) .
Note that PR1 in Tab. 6.2 is just CPR4₀. Moreover, since < is irreflexive, CPR4₁ reduces to

    Θ(αx) ≈ αΘ(x) .    (6.1)
(Note that the above equation can be derived from each of the CPR4n with
n ≥ 1 and axiom A3 in Tab. 6.2.)
An equation of the form P ⇒ t ≈ u is sound with respect to bisimilarity, if
σ(t) ↔ σ(u) holds for each closed substitution σ that satisfies the predicate P .
It is not hard to see that:
Lemma 6.5.1 For each partial order of actions (Act, <), CPR2-3 and CPR4n
(n ≥ 0) are sound modulo bisimilarity over the language BCCSPΘ .
A proof in conditional equational logic of an equation from a set E of axioms with action predicates as conditions uses the same rules presented in Section 2.1.3. However, the rule for substitution instance now reads
    P ⇒ t ≈ u
    ─────────────    (σ(P) = true) ,
    σ(t) ≈ σ(u)
where P ⇒ t ≈ u is one of the equations with action predicates as conditions in
the set E. Again, by postulating that for each equation of the form P ⇒ (t ≈ u)
in E also its symmetric counterpart P ⇒ (u ≈ t) is present in E, we can
disregard applications of symmetry in conditional equational proofs.
A natural question to ask at this point, and one that we shall address in the
remainder of this study, is whether, unlike standard equational logic, equations
with action predicates as conditions suffice to obtain a finite, ground-complete
axiomatization of bisimulation equivalence over the language BCCSPΘ .
In their classic paper [BBK86], Baeten, Bergstra and Klop offered a finite,
ground-complete axiomatization of bisimilarity over the language BPAδ with
the priority operator that employs equations with action predicates as conditions. Their axiomatization, however, relied upon the introduction of a binary
auxiliary operator, the so-called unless operator ⊳. Operationally, the behavior
of the unless operator is specified by the rules
    x −a→ x′      y −b↛  for a < b
    ───────────────────────────────
          x ⊳ y −a→ x′
where a ∈ Act.
In the setting of BCCSPΘ , and using action variables in lieu of concrete action names, the relation between the priority operator and the unless operator
is expressed by the axioms in Tab. 6.3. It is not too hard to see that those
axioms, together with A1-4, yield a ground-complete, finite axiomatization of
bisimulation equivalence. Therefore, even in the presence of an infinite set of actions, bisimulation equivalence affords a finite, ground-complete axiomatization
using equations with action predicates as conditions at the price of introducing
a single auxiliary operator. But, if the set of actions is infinite, is the use of an
auxiliary operator like the unless operator really necessary to obtain a finite axiomatizability result for bisimulation equivalence over BCCSPΘ using equations
with action predicates as conditions? We address this question in what follows.
In particular, we first show that, in general, the use of auxiliary operators is
                    Θ(αx)          ≈   αΘ(x)
                    Θ(0)           ≈   0
                    Θ(x + y)       ≈   (Θ(x) ⊳ y) + (Θ(y) ⊳ x)
    ¬(α < β)  ⇒     (αx) ⊳ (βy)    ≈   αx
     (α < β)  ⇒     (αx) ⊳ (βy)    ≈   0
                    (αx) ⊳ 0       ≈   αx
                    0 ⊳ (αx)       ≈   0
                    (x + y) ⊳ z    ≈   (x ⊳ z) + (y ⊳ z)
                    x ⊳ (y + z)    ≈   (x ⊳ y) ⊳ z

Table 6.3: Axioms for Θ in the presence of ⊳
indeed necessary to obtain a finite, ground-complete axiomatization of bisimulation equivalence using equations with action predicates as conditions. This
we do in Section 6.5.1 by exhibiting a poset of actions for which no finite set of
sound equations with action predicates as conditions is ground-complete with
respect to bisimulation equivalence over BCCSPΘ . This negative result, however, does not entail that, in the presence of an infinite set of actions, auxiliary
operators are always needed to give a finite, ground-complete axiomatization of
bisimulation equivalence over the language BCCSPΘ . In fact, we then isolate
sufficient conditions on the priority structure over actions that guarantee the
finite axiomatizability of bisimulation equivalence over the language BCCSPΘ
using equations with action predicates as conditions (Section 6.5.2).
6.5.1 A Negative Result
Our order of business will now be to prove that, in the presence of an infinite
set of actions, in general auxiliary operators are indeed necessary in order to
obtain a finite ground-complete axiomatization of bisimulation equivalence over
the language BCCSPΘ , even if we permit the use of equations of the form
P ⇒ (t ≈ u). In this section, Act = {ai , bi | i ≥ 1} ∪ {c}, where ai < bi < c
for each i ≥ 1, and these are the only inequalities. Moreover, for convenience,
we consider terms not only modulo associativity and commutativity of +, but
also modulo the sound equations x + 0 ≈ x and Θ(Θ(x) + y) ≈ Θ(x + y) – see
Rem. 6.3.2. So we can assume, without loss of generality, that terms contain
neither redundant 0 summands nor nested occurrences of Θ.
We will prove the following claim, which will be used to argue that bisimulation equivalence has no finite, ground-complete axiomatization consisting
of equations with action predicates as conditions over the language BCCSPΘ
(Thm. 6.5.6 to follow).
Claim 6.5.2 Let E be a finite collection of equations with action predicates as
conditions that is sound modulo ↔. Let n ≥ 2 be larger than the size of any
term in the equations of E. Then from E we cannot derive the equation
Θ(Φn ) ≈ Φn ,
where Φn denotes ∑_{i=1}^{n} bi 0.
Note that the equation above is sound modulo ↔, because the actions bi
(i ≥ 1) are pairwise incomparable.
First we establish a technical lemma.
Lemma 6.5.3 Let P ⇒ t ≈ u be an equation that is sound modulo ↔, where
P is satisfiable. If some process variable x occurs as a summand in t, then x
also occurs as a summand in u.
Proof: Since P is satisfiable, there exists a closed substitution σ such that
σ(P ) = true. Take some action d ∈ Act that does not occur in σ(u); such
an action exists because Act is infinite. Consider the closed substitution σ′ that
maps x to d(b1 0 + c0), each other process variable to 0, and agrees with σ on
action variables. As P ⇒ t ≈ u is sound modulo ↔ and σ ′ (P ) = σ(P ) = true,
we have that σ′(t) ↔ σ′(u). Since x is a summand of t and σ′(t) −d→ b1 0 + c0, it follows that σ′(u) −d→ q ↔ b1 0 + c0 for some q. Since d does not occur in σ(u) and b1 < c, it is not hard to see that x must be a summand of u.
The following lemma is the crux in the proof of our claim. It states a property of closed terms that holds for all of the closed instantiations of axioms in
any finite, sound collection of equations with action predicates as conditions.
As we shall see later on, this property is also preserved by arbitrary conditional equational proofs from a finite, sound collection of equations with action
predicates as conditions (Prop. 6.5.5).
Lemma 6.5.4 Let P ⇒ t ≈ u be sound modulo ↔. Let σ be a closed substitution with σ(P ) = true. Assume that:
• n is larger than the size of t, where n ≥ 2; and
• the summands of σ(t) are all bisimilar to either Φn or 0.
Then the summands of σ(u) are all bisimilar to either Φn or 0.
Proof: First, suppose that all summands of σ(t) are bisimilar to 0. Then σ(t) ↔
0, so the soundness of P ⇒ t ≈ u together with σ(P ) = true yields σ(u) ↔ 0.
This means that all summands of σ(u) are bisimilar to 0, and we are done.
So we can assume that some summand of σ(t) is bisimilar to Φn . Then
σ(t) ↔ σ(u) ↔ Φn, by the proviso of the lemma and the soundness of P ⇒ t ≈ u.
We know that we can write t = ∑_{i∈I} ti and u = ∑_{j∈J} uj for some nonempty, finite index sets I and J, where the terms ti and uj are of the form x,
av, αv or Θ(v). By the proviso of the lemma, for each i ∈ I, the summands of
σ(ti ) are all bisimilar to Φn or 0. Since n ≥ 2, for each i ∈ I, the term ti is not
of the form av or αv. Hence either it is a process variable x, or it is of the form
    Θ( ∑_{ℓ∈Li} diℓ t′iℓ + ∑_{m∈Mi} αm t′′im + ∑_{k∈Ki} zik )
(modulo the equations x + 0 ≈ x and Θ(Θ(x) + y) ≈ Θ(x + y)). Let I ′ ⊆ I
be the set of indices of summands of t that have the above form. Observe that
Ki ≠ ∅ for each i ∈ I′ such that σ(ti) is bisimilar to Φn (because n is larger
than the size of t). Note moreover that summands ti of t having the above form
such that σ(ti ) ↔ 0 must have Li = Mi = ∅, and for such summands σ(zik ) ↔ 0
for each k ∈ Ki .
Let us assume, towards a contradiction, that there is an index j ∈ J such
that σ(uj ) has a summand that is bisimilar neither to Φn nor to 0. We proceed
by a case analysis on the form of uj .
1. Case uj = x. By assumption, σ(x) has a summand that is bisimilar neither to Φn nor to 0. Since P ⇒ t ≈ u is sound modulo ↔ and P is satisfiable (because σ(P ) = true by the proviso of the lemma), by Lem. 6.5.3,
t also has x as a summand. Consequently σ(t) has a summand that is
bisimilar neither to Φn nor to 0. This contradicts one of the assumptions of
the lemma.
2. Case uj = au′j or uj = αu′j . Since σ(u) ↔ Φn , we have that a = bh or
σ(α) = bh for some 1 ≤ h ≤ n. Define the substitution σ ′ as
        σ′(y) = c0      if y = zik for some i ∈ I′ and k ∈ Ki
                0       otherwise

for process variables y, and let σ′ agree with σ on action variables. Then σ′(t) −bh↛, because
   • c > bh,
   • Ki ≠ ∅ for every i ∈ I′ with σ(ti) ↔ Φn,
   • Li = Mi = ∅ for every i ∈ I′ with σ(ti) ↔ 0, and
   • t does not contain summands of the form bh v or αv.
On the other hand, as σ and σ′ agree on action variables, σ′(uj) −bh→ σ′(u′j). It follows that σ′(u) −bh→ σ′(u′j), and thus σ′(t) ↮ σ′(u). Since σ′(P) = σ(P) = true, this contradicts the soundness of P ⇒ t ≈ u modulo ↔.
3. Case uj = Θ(u′). Then uj consists of a single summand, so by assumption, we have that σ(uj) ↮ Φn and σ(uj) ↮ 0.
Since σ(u) ↔ Φn, and terms are considered modulo the equations x + 0 ≈ x and Θ(Θ(x) + y) ≈ Θ(x + y), we can take u′ to be of the form

        ∑_{ℓ∈L} eℓ u′ℓ + ∑_{m∈M} βm u′′m + ∑_{k∈K} yk ,
for some finite index sets L, M, K. We distinguish two cases.
(a) For each i ∈ I′ with σ(ti) ↮ 0 there is a ki ∈ Ki such that ziki is not a summand of u′.
Define the substitution σ′ as

        σ′(y) = c0      if y = ziki for some i ∈ I′ with σ(ti) ↮ 0, or
                        if y is a summand of t with σ(y) ↮ 0
                σ(y)    otherwise

for process variables y, and let σ′ agree with σ on action variables.
It is not hard to see that σ′(t) −bi↛ for i = 1, . . . , n (because c > bi and t has no summand of the form av or αv). On the other hand, since σ(uj) ↮ 0 and σ(u) ↔ Φn, there is an h with 1 ≤ h ≤ n such that σ(u′) −bh→. Furthermore, σ(u′) −c↛. By assumption, ziki is not a summand of u′ for each i ∈ I′ with σ(ti) ↮ 0. Moreover, for any variable summand y of t with σ(y) ↮ 0, y is not a summand of u′, because by assumption σ(y) ↔ Φn while σ(u′) ↮ Φn. So σ(u′) −bh→ and σ(u′) −c↛ imply σ′(u′) −bh→ and σ′(u′) −c↛. It follows that σ′(uj) −bh→, and so σ′(u) −bh→. Hence σ′(t) ↮ σ′(u). Since σ′(P) = σ(P) = true, this contradicts the fact that P ⇒ t ≈ u is sound modulo ↔.
(b) {zi0 k | k ∈ Ki0} ⊆ {yk | k ∈ K}, for some i0 ∈ I′ with σ(ti0) ↮ 0.
In this case, K is non-empty since, as previously observed, Ki0 is non-empty. By the proviso of the lemma, σ(ti0) ↔ Φn, so (since n is larger than the size of ti0) there is a k0 ∈ Ki0 with σ(zi0 k0) ↮ 0. Furthermore, by the assumption for case 3 of the proof, σ(uj) ↮ 0 and σ(uj) ↮ Φn. Therefore, there is an h with 1 ≤ h ≤ n such that σ(Θ(u′)) −bh↛.
Define the substitution σ′ as

        σ′(y) = ah 0    if y = zi0 k0
                σ(y)    otherwise

for process variables y, and let σ′ agree with σ on action variables. We argue that σ′(t) −ah↛. To this end, observe, first of all, that, since σ(Θ(u′)) −bh↛, we have σ(∑_{k∈K} yk) −bh↛, and so σ(zi0 k0) −bh↛. We are now ready to show that no summand of σ′(t) affords an ah-labeled transition. We consider three exhaustive possibilities:
i. Let i ∈ I′ with zi0 k0 ∉ {zik | k ∈ Ki}. Then clearly σ′(ti) −ah↛.
ii. Let i ∈ I′ with zi0 k0 ∈ {zik | k ∈ Ki}. Then σ(ti) ↮ 0 because σ(zi0 k0) ↮ 0, so by assumption σ(ti) ↔ Φn. This implies σ(ti) −bh→, so since σ(zi0 k0) −bh↛, it follows that σ′(ti) −bh→. Since the outermost function symbol of ti is Θ, we can conclude that σ′(ti) −ah↛.
iii. Finally, since σ(zi0 k0) ↮ 0 and σ(zi0 k0) −bh↛, the proviso of the lemma yields that zi0 k0 cannot be a summand of t.
Since t has no other types of summands, from the three cases above we can conclude that σ′(t) −ah↛. On the other hand, σ′(Θ(u′)) −ah→, because σ(Θ(u′)) −bh↛ and zi0 k0 ∈ {yk | k ∈ K}. Hence σ′(u) −ah→, and so σ′(t) ↮ σ′(u). Since σ′(P) = σ(P) = true, this contradicts the fact that P ⇒ t ≈ u is sound modulo ↔.
In summary, the assumption that, for some j ∈ J, the term σ(uj ) has a summand that is bisimilar neither to Φn nor to 0, leads to a contradiction. This
completes the proof.
The following proposition states that the property of closed instantiations
of sound equations with action predicates as conditions mentioned in the above
lemma is preserved under equational derivations from a finite collection of sound
equations. This is the key to the promised proof of our claim.
Proposition 6.5.5 Let E be a finite collection of equations with action predicates as conditions that is sound modulo ↔. Let n ≥ 2 be larger than the size
of any term in the equations of E. Assume, furthermore, that
• E ⊢ p ≈ q; and
• the summands of p are all bisimilar to Φn or 0.
Then the summands of q are all bisimilar to Φn or 0.
Proof: By induction on the depth of the closed proof of p ≈ q from E. We
proceed by a case analysis on the last rule used in the proof of p ≈ q from E.
• E ⊢ p ≈ q because σ(t) = p and σ(u) = q for some equation P ⇒ t ≈
u ∈ E and closed substitution σ with σ(P ) = true. The claim follows
immediately from Lem. 6.5.4.
• E ⊢ p ≈ q because p = p′ + p′′ and q = q ′ + q ′′ for some p′ , q ′ , p′′ , q ′′
such that E ⊢ p′ ≈ q ′ and E ⊢ p′′ ≈ q ′′ . Since the summands of p are
all bisimilar to Φn or 0, the same holds for p′ and p′′ . By induction, the
summands of q ′ and q ′′ are all bisimilar to Φn or 0. The claim now follows
because the summands of q are those of q ′ and q ′′ .
• E ⊢ p ≈ q because p = ap′ and q = aq ′ for some p′ , q ′ such that E ⊢ p′ ≈ q ′ .
This case is vacuous: the single summand p = ap′ is not bisimilar to 0, so by assumption it would be bisimilar to Φn, which is impossible since n ≥ 2.
• E ⊢ p ≈ q because p = αp′ and q = αq ′ for some p′ , q ′ such that E ⊢ p′ ≈
q ′ . This case is vacuous, because p and q are closed.
• E ⊢ p ≈ q because p = Θ(p′ ) and q = Θ(q ′ ) for some p′ , q ′ such that
E ⊢ p′ ≈ q ′ . The claim is immediate, because both p and q consist of a
single summand, and p ↔ q by the soundness of E.
The proof is now complete.
Theorem 6.5.6 Let Act = {ai , bi | i ≥ 1} ∪ {c}, where ai < bi < c for each
i ≥ 1, and these are the only inequalities. Then bisimulation equivalence has
no ground-complete axiomatization over BCCSPΘ consisting of a finite set of
sound equations with action predicates as conditions.
Proof: Let E be a finite collection of equations with action predicates as conditions that is sound modulo ↔. Let n ≥ 2 be larger than the size of any
term in the equations of E. According to Prop. 6.5.5, from E we cannot derive
Θ(Φn) ≈ Φn. This equation is sound modulo ↔, and therefore E is not ground-complete.
6.5.2 Positive Results
We have offered an example of a priority structure (Act, <) with respect to which
it is impossible to give a finite, ground-complete axiomatization of bisimulation
equivalence over BCCSPΘ in terms of equations with action predicates as conditions without recourse to auxiliary operators. That result, however, does not
imply that auxiliary operators are always necessary to achieve a finite basis of
equations with action predicates as conditions for bisimulation equivalence. Our
aim in this section is to substantiate this claim by providing some general conditions over the priority structure (Act, <) that are sufficient to guarantee the
existence of a finite, ground-complete axiomatization of bisimulation equivalence
over BCCSPΘ that uses equations with action predicates as conditions.
Definition 6.5.7 An anti-chain in a poset (Act, <) is a subset of Act consisting
of pairwise incomparable actions. The width of a poset (Act, <) is the least upper
bound of the cardinalities of its anti-chains. A poset (Act, <) has finite width if
its width is finite.
Example 6.5.8 The poset of actions we considered in Section 6.5.1 has uncountably many infinite, maximal anti-chains. (Each such anti-chain can, in
fact, be obtained by picking exactly one of ai and bi for each i ≥ 1.) The width
of that poset is therefore infinite.
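For finite posets, the width of Def. 6.5.7 can be computed directly. The brute-force sketch below (ours, purely illustrative; the function names and the example poset are assumptions) enumerates anti-chains explicitly.

    from itertools import combinations

    def is_antichain(subset, lt):
        # lt is the strict order as a set of pairs (a, b) meaning a < b (assumed transitive)
        return all((a, b) not in lt and (b, a) not in lt for a, b in combinations(subset, 2))

    def width(actions, lt):
        return max(k for k in range(1, len(actions) + 1)
                   if any(is_antichain(c, lt) for c in combinations(actions, k)))

    # The flat poset with bot < a0, a1, a2: the anti-chain {a0, a1, a2} gives width 3.
    print(width(['bot', 'a0', 'a1', 'a2'],
                {('bot', 'a0'), ('bot', 'a1'), ('bot', 'a2')}))   # 3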
We now offer a countably infinite, ground-complete axiomatization of bisimulation equivalence over BCCSPΘ using equations with action predicates as conditions. Such an axiomatization reduces to a finite one if the poset of actions has
finite width.
Theorem 6.5.9 Let (Act, <) be an infinite poset of actions. Then the following
statements hold:
1. The axiom system consisting of the CPR2-3 and CPR4n (n ≥ 0), together
with A1-4 is ground-complete for bisimilarity over the language BCCSPΘ .
126
Chapter 6 Priority
2. Assume that the width of (Act, <) is k. Then the axiom system consisting of CPR2-3, and CPR4k , together with A1-4 and PR1 in Tab. 6.2, is
ground-complete for bisimilarity over the language BCCSPΘ . Therefore
bisimilarity has a finite, ground-complete axiomatization using equations
with action predicates as conditions if (Act, <) has finite width.
Proof: We only present a sketch of the proof for statement 2. (That for statement 1 follows similar lines.)
First of all, observe that it suffices only to show that, if the cardinality of
each anti-chain in (Act, <) is at most k, CPR2-3, CPR4k and PR1 can be used
to remove all occurrences of Θ from closed terms. Indeed, if we can do so, then
ground-completeness follows from the well-known ground-completeness of A1-4
for BCCSP modulo ↔ (see, e.g., [HM85]).
To prove that all occurrences of Θ can be removed from closed terms, assume
that we have a closed term p that does not contain occurrences of Θ. We show
that Θ(p) can be proven equal to a term q that does not contain occurrences
of Θ by induction on the size of p. To this end, note that, modulo associativity and commutativity of +, the term p can be written ∑_{i=1}^{n} ai pi for some n ≥ 0, actions ai and closed terms pi that do not contain occurrences of Θ.
If n = 0, then PR1 yields that Θ(0) ≈ 0, and we are done. If n = 1, then
the claim follows using (6.1) and the induction hypothesis. (Recall that, since
k ≥ 1, Eq. (6.1) is derivable from CPR4k .) Consider now the case when n ≥ 2.
We proceed by examining the following three sub-cases:
• there are i, j such that 1 ≤ i < j ≤ n and ai = aj ,
• there are i, j such that 1 ≤ i, j ≤ n and ai < aj , and
• the collection of actions {a1 , . . . , an } is an anti-chain in the poset (Act, <).
The first two sub-cases are handled using the induction hypothesis, and CPR2
and CPR3, respectively.
If the proviso for the third sub-case applies, then we know that n ≤ k. Using
A3 if n < k, we can therefore reason as follows:
    Θ(∑_{i=1}^{n} ai pi)  ≈  Θ(∑_{i=1}^{n} ai pi + an pn + · · · + an pn)      ((k − n) copies of an pn)
                          ≈  ∑_{i=1}^{n} ai Θ(pi)      (by CPR4k and possibly A3)
                          ≈  ∑_{i=1}^{n} ai qi         (by the induction hypothesis)
for some closed terms q1 , . . . , qn that do not contain occurrences of Θ.
Using this result, a simple argument by structural induction over closed
terms shows that each closed term in the language BCCSPΘ is provably equal
to one that does not contain occurrences of the Θ operator. The proof is complete.
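The Θ-elimination procedure behind this proof can also be read operationally. The sketch below (ours, not from the dissertation) prunes dominated summands and pushes Θ into the continuations of a closed, Θ-free argument, mirroring the roles of CPR3 and CPR4k; it computes the semantic effect of the rewriting rather than an equational derivation, and all names are illustrative assumptions.

    # Closed, Theta-free terms: ('0',), ('pref', a, t), ('sum', [t1, ..., tn]).
    def summands(t):
        return t[1] if t[0] == 'sum' else [t]

    def plus(ts):
        ts = [s for s in ts if s != ('0',)]
        return ('0',) if not ts else (ts[0] if len(ts) == 1 else ('sum', ts))

    def eliminate_theta(term, lt):
        """Return a Theta-free term bisimilar to Theta(term); lt is the set of pairs (a, b) with a < b."""
        prefixes = [s for s in summands(term) if s[0] == 'pref']
        initials = {s[1] for s in prefixes}
        # drop summands whose initial action is dominated by another initial action (CPR3),
        # then distribute Theta over the surviving anti-chain of initial actions (CPR4_k)
        kept = [s for s in prefixes if not any((s[1], b) in lt for b in initials)]
        return plus([('pref', a, eliminate_theta(t, lt)) for (_, a, t) in kept])

    # Theta(a.(b.0 + c.0) + b.0) with b < c becomes a.c.0 + b.0
    t = ('sum', [('pref', 'a', ('sum', [('pref', 'b', ('0',)), ('pref', 'c', ('0',))])),
                 ('pref', 'b', ('0',))])
    print(eliminate_theta(t, {('b', 'c')}))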
So bisimilarity affords a finite, ground-complete axiomatization that uses
equations with action predicates as conditions if the poset (Act, <) has finite
width. (Moreover, the equations with action predicates as conditions making up
the axiom systems used in Thm. 6.5.9 only involve predicates over actions that
can be expressed as conjunctions of, possibly negated, atomic formulae of the
form α < β.) A natural question to ask at this point is whether this result holds
for more general priority structures. We now proceed to address this question
in some detail.
Let us begin by observing that there are priority structures with infinite anti-chains that do allow for a finite, ground-complete axiomatization of bisimilarity
over the language BCCSPΘ . Consider, by way of example, the flat priority
structure ({⊥, a0 , a1 , . . .}, <), where the only ordering relations are given by ⊥ <
ai for each i ≥ 0. Membership of the countably infinite anti-chain {a0 , a1 , . . .}
can be characterized by the predicate
P (α) = ∀β ¬(α < β) .
We can therefore write the following equation that allows us to reduce the
number of summands within the scope of a Θ operator:
    P(α) ∧ P(β) ⇒ Θ(αx + βy + z) ≈ Θ(αx + z) + Θ(βy + z) .    (6.2)
It is not hard to see that the above equation is sound. (In fact, the soundness of
this equation will follow from the more general result in Lem. 6.5.12.) Moreover,
following the lines of the proof sketch for Thm. 6.5.9(2), one can argue that,
together with PR1, CPR2-3 and (6.1), this equation can be used to remove all
occurrences of Θ from closed terms. It follows that:
Proposition 6.5.10 Consider the priority poset ({⊥, a0 , a1 , . . .}, <), where the
only ordering relations are given by ⊥ < ai for each i ≥ 0. Then the axiom
system consisting of the equations (6.2), CPR2-3 and (6.1), together with A1-4 and PR1 in Tab. 6.2, is ground-complete for bisimilarity over the language
BCCSPΘ .
As another example, consider the priority structure
A = ({a0 , a1 , . . .} ∪ {b0 , b1 , c}, <) ,
where the relation < is the least transitive relation satisfying
    bi < aj    for all i ∈ {0, 1} and j ≥ 0, and
    aj < c     for each j ≥ 0 .

This poset has one non-trivial maximal finite anti-chain, namely {b0, b1}, and one maximal countably infinite anti-chain, namely A = {a0, a1, . . .}.
Membership of A is characterized by the predicate PA defined thus:
    PA(α) = ∃β1, β2 . β1 < α < β2 .
As the reader can check, the instance of Eq. (6.2) associated with this predicate
is sound. (Again, the soundness of this equation will follow from the more
general result in Lem. 6.5.12.) Moreover, following the lines of the proof sketch
for Thm. 6.5.9(2), one can argue that, together with PR1, CPR2-3 and CPR42
(to handle the finite anti-chain {b0 , b1 }), this equation can be used to remove
all occurrences of Θ from closed terms. It follows that:
Proposition 6.5.11 Consider the priority poset A. Then the axiom system
consisting of Eq. (6.2) for predicate PA , CPR2-3 and CPR42 , together with
A1-4 and PR1 in Tab. 6.2, is ground-complete for bisimilarity over BCCSPΘ .
In both of the examples we have just presented, Eq. (6.2) plays a key role in that
it allows us to reduce the size of terms in “head normal form” having summands
of the form ap and bq with a, b contained in an infinite anti-chain within the
scope of a Θ operator. The following lemma states a necessary and sufficient
condition on the infinite anti-chain that guarantees that axiom (6.2) be sound
modulo bisimilarity.
Lemma 6.5.12 Let A be an anti-chain in the poset (Act, <) whose membership
is described by predicate PA . Then Eq. (6.2) for predicate PA is sound modulo
bisimilarity iff each element of A is above the same set of actions, that is, for
each a, b ∈ A and c ∈ Act, we have that c < a iff c < b.
Proof: We first prove the “if implication”. To this end, assume that a, b ∈ A
and p, q, r are closed terms in the language BCCSPΘ . We claim that
Θ(ap + bq + r) ↔ Θ(ap + r) + Θ(bq + r) .
To see that this claim does hold, it suffices only to observe that the following
statements hold for each closed term p′ :
1. Θ(ap + bq + r) −a→ p′ iff Θ(ap + r) + Θ(bq + r) −a→ p′,
2. Θ(ap + bq + r) −b→ p′ iff Θ(ap + r) + Θ(bq + r) −b→ p′, and
3. Θ(ap + bq + r) −c→ p′ iff Θ(ap + r) + Θ(bq + r) −c→ p′, for each action c different from a, b.
We only offer a proof for the last of these statements. To this end, assume, first of all, that Θ(ap + bq + r) −c→ p′ for some action c different from a, b and closed term p′. Since c is different from a, b, there is a closed term r′ such that
   • p′ = Θ(r′),
   • r −c→ r′,
   • r −d↛ for each action d such that c < d, and
   • neither c < a nor c < b holds.
It is now a simple matter to see that, for instance, Θ(ap + r) −c→ p′. This yields that Θ(ap + r) + Θ(bq + r) −c→ p′, which was to be shown.
Conversely, suppose that Θ(ap + r) + Θ(bq + r) −c→ p′ for some action c different from a, b and closed term p′. Without loss of generality, we may assume that this is because Θ(ap + r) −c→ p′. Since c is different from a, b, there is a closed term r′ such that
   • p′ = Θ(r′),
   • r −c→ r′,
   • r −d↛ for each action d such that c < d, and
   • c < a does not hold.
Observe now that c < b does not hold either, because a and b are above the same actions by the proviso of the lemma. It follows that Θ(ap + bq + r) −c→ p′, which was to be shown.
To establish the “only if implication”, assume that A contains two distinct
incomparable actions a and b that are not above the same set of actions. Suppose, without loss of generality, that c < a, but c < b does not hold, for some
action c. Then
    Θ(a0 + b0 + c0) ↔ a0 + b0 ↮ a0 + b0 + c0 ↔ Θ(a0 + c0) + Θ(b0 + c0) .
(The last equivalence holds true because b and c must be incomparable, as c < a
and a and b are incomparable.) Therefore Eq. (6.2) for predicate PA is not sound
modulo bisimilarity.
Remark 6.5.13 Let A, B be two different, maximal anti-chains in the poset
(Act, <). Assume that each element of A is above the same set of actions, that
is, for each a, b ∈ A and c ∈ Act, we have that c < a iff c < b, and so is each
element of B. Then A and B are disjoint.
To see this, assume, towards a contradiction, that a ∈ A ∩ B. Since A and B
are maximal anti-chains, neither one is a subset of the other. Therefore, since
A ≠ B, there are actions b, c such that b ∈ A \ B and c ∈ B \ A. It follows that a, b, c are above the same set of actions in Act. However, b ∉ B. Therefore,
since B is maximal, there must be some action d ∈ B with b < d or d < b. If
b < d, we have that b < a because a, d ∈ B and each element of B is above the
same actions. This contradicts the assumption that A is an anti-chain. If d < b
then reasoning as above we can reach a contradiction to the assumption that B
is an anti-chain. Therefore, A and B must be disjoint.
Suppose that p is a closed term in head normal form whose set of initial actions
is included in an infinite anti-chain satisfying the constraint in the statement
of Lem. 6.5.12. Then the sound equation (6.2) offers a way of “simplifying”
the term Θ(p). The use of this axiom is the key to the proof of the following
generalization of Thm. 6.5.9(2), and of Prop. 6.5.10 and 6.5.11.
Theorem 6.5.14 Let (Act, <) be an infinite poset of actions. Assume that
1. the collection of the sizes of the finite, maximal anti-chains in (Act, <) is
finite,
2. (Act, <) has finitely many infinite, maximal anti-chains, and
3. for each infinite, maximal anti-chain A in (Act, <), each element of A is
above the same set of actions, that is, for each a, b ∈ A and c ∈ Act, we
have that c < a iff c < b.
Let k be the size of the largest finite, maximal anti-chain in (Act, <), or 1 if
all maximal anti-chains are infinite. Then the axiom system consisting of one
instance of Eq. (6.2) for predicate PA for each infinite anti-chain A in (Act, <), CPR2-3 and CPR4k, together with A1-4 and PR1 in Tab. 6.2, is ground-complete for bisimilarity over the language BCCSPΘ.
Proof: The soundness of the axiom system is easily established, using Lem. 6.5.12
for the instances of axiom (6.2). The completeness of the axiom system can be
shown along the lines of the proof of Thm. 6.5.9. The key to the argument is again to prove that each term Θ(∑_{i=1}^{n} ai pi), where the pi do not contain occurrences of Θ, can be proven equal to a term q that does not contain occurrences of Θ by induction on the size of ∑_{i=1}^{n} ai pi. This we do by considering several sub-cases depending on the number n of summands in ∑_{i=1}^{n} ai pi.
If n = 0, then the claim follows using PR1. If n = 1, then it suffices only
to use (6.1) and the induction hypothesis. (Recall that (6.1) is derivable from
CPR4k .) If n ≥ 2, then we distinguish the following sub-cases:
• there are i, j such that 1 ≤ i < j ≤ n and ai = aj ,
• there are i, j such that 1 ≤ i, j ≤ n and ai < aj ,
• the collection of actions {a1 , . . . , an } is an anti-chain in the poset (Act, <).
The first two sub-cases are handled using the induction hypothesis, and the
equations with action predicates as conditions CPR2 and CPR3, respectively.
The last sub-case is handled using CPR4k as in the proof of Thm. 6.5.9 if the
set of actions {a1 , . . . , an } is included in a finite maximal anti-chain. Assume
now that {a1 , . . . , an } is only included in an infinite maximal anti-chain, say A.
(In fact, Rem. 6.5.13 ensures that such an anti-chain A is unique.) Using the
instance of Eq. (6.2) for predicate PA and induction, the claim follows.
The rest of the proof follows the lines of that of Thm. 6.5.9, and is therefore
omitted.
Remark 6.5.15 The priority structure we employed in our proof of Thm. 6.5.6
satisfies neither condition 2 nor condition 3 in the proviso of the above theorem.
In light of the above result, bisimilarity has a finite, ground-complete axiomatization using equations with action predicates as conditions over the language
BCCSPΘ if the poset of actions satisfies the proviso of the above theorem. The
above theorem therefore generalizes Prop. 6.5.10 and 6.5.11. A further example
of a priority structure that satisfies the conditions stated in Thm. 6.5.14 is one
having a finite collection of “priority levels” each consisting of an infinite set of
actions – consider, for instance, the poset
({aij | 1 ≤ i ≤ N, j ≥ 1}, <) ,
where N is a positive integer and aij < ahk holds iff i < h.
We have not yet attempted a complete classification of the priority structures
for which bisimulation equivalence affords a finite axiomatization in terms of
equations with action predicates as conditions over the language BCCSPΘ . This
is most likely a hard problem which we leave for future research.
6.6 Conclusion
We have investigated in depth the equational theory of bisimulation equivalence over the process algebra BCCSP extended with the priority operator of Baeten, Bergstra and Klop. We showed that, in the presence of an infinite set of actions, bisimulation equivalence has no finite, sound, ground-complete axiomatization over that language. This negative result applies even if the syntax is extended with an arbitrary collection of auxiliary operators. We then considered axiomatizations using equations with action predicates as conditions, and showed that, in the presence of an infinite set of actions, bisimulation equivalence in general still has no finite, sound, ground-complete axiomatization of this kind. Finally, we identified sufficient conditions on the priority structure over actions that lead to a finite, ground-complete axiomatization of bisimulation equivalence using equations with action predicates as conditions.
Open question. In this chapter we generally considered specific priority structures. However, a natural question³ is left open, namely whether the set of equations that hold in all priority structures is finitely based. We note that Θ(Θ(x) + y) ≈ Θ(x + y) serves as an example of such a sound equation. Another example might be Θ(x) + Θ(y) ≈ Θ(x) + Θ(y) + Θ(x + y). Also note that the equation we gave in Claim 6.5.2, i.e., Θ(∑_{i=1}^{n} bi 0) ≈ ∑_{i=1}^{n} bi 0, does not hold in all priority structures.
³ This question was raised by Jaco van de Pol when he reviewed a manuscript of the dissertation.
Part II
Verification of Probabilistic Real-time Systems
Chapter 7
Model Checking of CTMCs Against DTA Specifications
7.1 Introduction
Model checking continuous-time Markov chains (CTMCs) has been focused on
the continuous stochastic logic (CSL, [ASSB00, BHHK03]), which is branching-time in nature. This chapter concerns the problem of verifying CTMCs against
linear real-time specifications, which are based on deterministic timed automata
(DTA, [AD94]). Concretely speaking, we explore the following problem: given
a CTMC C, and a linear real-time property provided as a DTA, A, what is the
probability of the set of paths of C accepted by A? (Below we refer to this as the “probability of C |= A” for simplicity.) We show that this set of paths is measurable and that
computing its probability can be reduced to computing the reachability probability in a piecewise deterministic Markov process (PDP, [Dav84]), a model that
is used in an enormous variety of applied problems of engineering, operations
research, (stochastic) control theory, management science and economics; examples include queueing systems, stochastic scheduling, fault detection in process
systems, etc. This result relies on a product construction of CTMC C and
DTA A, denoted by C ⊗ A, yielding a deterministic Markovian timed automaton
(DMTA), a variant of DTA in which, besides the usual ingredients of timed
automata, like guards and clock resets, the location residence time is exponentially distributed. We show that the probability of C |= A coincides with the
reachability probability of accepting paths in C ⊗ A. The underlying PDP of
a DMTA is obtained by a slight adaption of the standard region construction
[AD94]. The desired reachability probability is characterized as the least solution of a system of integral equations that is obtained from the PDP. Finally,
we show that this probability can be approximated by solving a system of partial differential equations (PDEs). For single-clock DTA, we show that the system of
integral equations can be transformed into a system of linear equations, where
the coefficients are solutions of some ordinary differential equations (ODEs), for
which either an analytical solution can be obtained or a numerical solution can
be approximated arbitrarily close in an efficient way.
Related work. [BCH+ 07] and [DHS09] present model checking algorithms of
asCSL and CSLTA respectively. asCSL allows one to impose a time constraint on
action sequences described by regular expressions; its model-checking algorithm
is based on a deterministic Rabin automaton construction. In CSLTA , time
constraints are specified by single-clock DTA. (Note that a similar extension of
real-time temporal logic is TECTL∃ [BLY96].) In [DHS09], C ⊗ A is interpreted
as a Markov renewal process and model checking CSLTA is reduced to computing
reachability probabilities in a DTMC whose transition probabilities are given by
so-called subordinate CTMCs. This technique cannot be generalized to multiple
clocks. Our approach does not restrict the number of clocks and supports more
specifications than CSLTA (when combined with CSL as in [DHS09]). For the
single-clock case, our approach produces the same result as [DHS09], but yields
a conceptually simpler formulation whose correctness can be derived from the
simplification of the system of integral equations obtained in the general case.
Moreover, measurability has not been addressed in [DHS09]. As for other work,
[BBB+ 07a, BBB+ 08, BBBM08] provide a quantitative interpretation to timed
automata where delays and discrete choices are interpreted probabilistically. In
this approach, delays of unbounded clocks are governed by exponential distributions like in CTMCs. Decidability results have been obtained for almost-sure
properties [BBB+ 08] and quantitative verification [BBBM08] for (a subclass of)
single-clock timed automata.
Structure of the chapter. Section 7.2 contains definitions of CTMCs, DTA
and PDPs. Section 7.3 presents the approach of model checking CTMCs with
respect to general DTA in detail. Section 7.4 deals with the special case of a
single-clock DTA. Section 7.5 concludes the chapter.
7.2 Preliminaries
For a set H (typically associated with a topology), let Pr : F(H) → [0, 1] be
a probability measure on the measurable space (H, F(H)), where F(H) is a
σ-algebra over H. Let Distr (H) denote the set of probability measures on this
measurable space.
7.2.1 Continuous-time Markov Chains
As stated in Section 1.2, CTMCs can be seen as generalizations of discrete-time
Markov chains (DTMCs, see Section 2.2.2) by enhancing them with negative
exponential state residence time distributions (given by the exit rate function
E in the definition below).
Definition 7.2.1 A (labeled) continuous-time Markov chain (CTMC) is a tuple C = (S, AP, L, α, P, E) where:
• S is a finite set of states;
• AP is a finite set of atomic propositions;
• L : S → 2AP is the labeling function;
• α ∈ Distr (S) is the initial distribution;
• P : S × S → [0, 1] is a stochastic transition probability matrix; and
• E : S → R≥0 is the exit rate function.
The probabilities to exit the state s within t time units and to take the transition s → s′ within t time units are, respectively,

    ∫_0^t E(s)·e^{−E(s)τ} dτ      and      P(s, s′)·∫_0^t E(s)·e^{−E(s)τ} dτ ,
respectively. A state s is absorbing if P(s, s) = 1. The embedded DTMC of
CTMC C is obtained by deleting the exit rate function E, i.e., emb(C) =
(S, AP, L, α, P).
Definition 7.2.2 (Timed path) Let C = (S, AP, L, α, P, E) be a CTMC. An
infinite path ρ is a sequence s0 −t0→ s1 −t1→ s2 · · · with, for i ∈ N, si ∈ S and ti ∈ R>0 such that P(si, si+1) > 0 for all i. A finite path is a prefix of an infinite path. We write Paths^C_n := S × (R>0 × S)^n for the set of (finite) paths of length n in C; the set of finite paths in C is thus defined by Paths^C_⋆ = ⋃_{n∈N} Paths^C_n, and Paths^C_ω := (S × R>0)^ω is the set of infinite paths in C. Paths^C = Paths^C_⋆ ∪ Paths^C_ω denotes the set of all paths in C and Paths^C(s) denotes the set of paths that start from state s. The superscript C is omitted whenever convenient.

Given any path ρ, we define |ρ| as the number of transitions in ρ if ρ is finite; |ρ| = ∞ if ρ is infinite. For n ≤ |ρ|, ρ[n] := sn is the (n + 1)-st state of ρ and ρ⟨n⟩ := tn is the time spent in state sn. Let ρ@t be the state occupied in ρ at time t ∈ R≥0, i.e., ρ@t := ρ[n] where n is the smallest index such that ∑_{i=0}^{n} ρ⟨i⟩ > t.
The definition of a Borel space on paths through CTMCs follows [BHHK03]
(see similar notions in Section 2.2.2 for DTMCs). A CTMC C with the initial distribution α yields a probability measure PrC on paths as follows: For
s0 , · · ·, sk ∈ S with P(si , si+1 ) > 0 for 0 ≤ i < k, and I0 , · · ·, Ik−1 nonempty
intervals in R≥0 , C(s0 , I0 , · · ·, Ik−1 , sk ) denotes the cylinder set consisting of
all paths ρ ∈ Paths(s0) such that ρ[i] = si (i ≤ k) and ρ⟨i⟩ ∈ Ii (i < k).
F(Paths(s0 )) is the smallest σ-algebra on Paths(s0 ) which contains all sets
C(s0 , I0 , · · ·, Ik−1 , sk ) for all state sequences (s0 , · · ·, sk ) ∈ S k+1 with P(si , si+1 ) >
0 (0 ≤ i < k) and all sequences I0 , · · ·, Ik−1 of nonempty intervals in R≥0 .
The probability measure PrC on F(Paths(s0 )) is the unique measure determined by the one on cylinder sets, which is defined by induction on k as
PrC (C(s0 )) = α(s0 ) and for k > 0:
    Pr^C(C(s0, I0, · · ·, Ik−1, sk)) = Pr^C(C(s0, I0, · · ·, Ik−2, sk−1)) · ∫_{Ik−1} P(sk−1, sk)·E(sk−1)·e^{−E(sk−1)τ} dτ .    (7.1)
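As a small illustration of (7.1), the sketch below (ours; the toy CTMC, its rates and probabilities are assumptions and are not taken from Fig. 7.1) evaluates the probability of a cylinder set by multiplying out the inductive definition.

    import math

    E = {'s0': 1.0, 's1': 2.0}                                  # exit rates
    P = {('s0', 's1'): 1.0, ('s1', 's2'): 0.5, ('s1', 's0'): 0.5}
    alpha = {'s0': 1.0}                                         # initial distribution

    def cylinder_prob(states, intervals):
        """Pr of the cylinder set C(s0, I0, s1, ..., Ik-1, sk), following (7.1)."""
        prob = alpha[states[0]]
        for i, (lo, hi) in enumerate(intervals):
            s, s_next = states[i], states[i + 1]
            rate = E[s]
            # integral of rate * exp(-rate * tau) over [lo, hi]
            prob *= P[(s, s_next)] * (math.exp(-rate * lo) - math.exp(-rate * hi))
        return prob

    # probability of moving s0 -> s1 within 1 time unit and then s1 -> s2 within 2 time units
    print(cylinder_prob(['s0', 's1', 's2'], [(0.0, 1.0), (0.0, 2.0)]))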
Example 7.2.3 An example CTMC is illustrated in Fig. 7.1(a) (page 139),
where AP = {a, b, c} and s0 is the initial state, i.e., α(s0 ) = 1 and α(s) = 0 for
any s 6= s0 . The exit rates ri and the transition probabilities are as shown.
7.2.2 Deterministic Timed Automata
Definition 7.2.4 A deterministic timed automaton (DTA) is a tuple A =
(Σ, X , Q, q0 , QF , →) where:
• Σ is a finite alphabet;
• X is a finite set of clocks;
• Q is a nonempty finite set of locations, with q0 ∈ Q the initial location;
• QF ⊆ Q is a set of accepting locations; and
• → ⊆ (Q \ QF) × Σ × B(X) × 2^X × Q is a finite edge relation satisfying that: (1) g is diagonal-free; and (2) q −a,g,X→ q′ and q −a,g′,X′→ q′′ implies ⟦g⟧ ∩ ⟦g′⟧ = ∅ (see Section 2.2.3 for general TA).
As usual, we refer to q −a,g,X→ q′ as a transition, where a ∈ Σ is the input
symbol, the guard g is a clock constraint on the clocks of A, X ⊆ X is a set
of clocks to be reset and q ′ is the successor location. The intuition is that the
DTA A can move from location q to location q ′ when the input symbol is a and
the guard g holds, while the clocks in X should be reset when entering q ′ . Note
that in this chapter, as a convention, we assume that each location q ∈ QF is a
sink. An example DTA is shown in Fig. 7.1(b) (page 139).
A (finite) timed path in A is of the form θ = q0 −a0,t0→ q1 · · · qn −an,tn→ qn+1, for ti > 0 (0 ≤ i ≤ n). All the definitions on timed paths in CTMCs can be adopted here. A timed path θ of length n (i.e., |θ| = n) is accepted by A if there exists a sequence of clock valuations {ηj}0≤j≤n such that θ[n] ∈ QF and, for all 0 ≤ j < n, η0 = ~0, ηj + tj |= gj and ηj+1 = (ηj + tj)[Xj := 0], where intuitively ηj is the clock valuation on entering qj. Given a CTMC C s.t. Σ = 2^AP, we say that an infinite timed path ρ = s0 −t0→ s1 · · · in C is accepted by A if there exists some n ∈ N such that the finite fragment of ρ, i.e., s0 −t0→ s1 · · · sn−1 −tn−1→ sn, gives rise to an augmented timed path ρ̂ = q0 −L(s0),t0→ q1 · · · qn−1 −L(sn−1),tn−1→ qn, which is accepted by A. Note that here we do not introduce invariants for
locations of DTA as in safety TA [HNSY94]. However, the obtained result in
this chapter can be adapted there without any difficulty.
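The acceptance condition above can be checked mechanically for a finite timed path. The following sketch (ours) simulates the unique run of a single-clock DTA; the concrete automaton is only loosely modeled on Fig. 7.1(b), so its edges, as well as all function names, should be read as assumptions.

    def accepts(edges, q0, QF, labeled_path):
        """labeled_path: list of (label, delay) pairs; edges: location -> list of (label, guard, reset, target)."""
        q, x = q0, 0.0
        for label, delay in labeled_path:
            x += delay
            # determinism: at most one edge with this label is enabled for the current clock value
            enabled = [(r, q2) for (a, g, r, q2) in edges.get(q, []) if a == label and g(x)]
            if not enabled:
                return False
            reset, q = enabled[0]
            if reset:
                x = 0.0
            if q in QF:            # accepting locations are sinks, so we may stop here
                return True
        return q in QF

    edges = {'q0': [('a', lambda x: x < 1, False, 'q0'),
                    ('a', lambda x: 1 <= x < 2, True, 'q0'),
                    ('b', lambda x: x > 1, False, 'q1')]}
    print(accepts(edges, 'q0', {'q1'}, [('a', 0.6), ('a', 0.8), ('b', 1.5)]))   # True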
7.2.3 Piecewise-deterministic Markov Processes
PDPs constitute a general framework that can model virtually any stochastic
system without diffusions [Dav84], and for which powerful analysis and control
techniques exist [LL85, LY91, CD88]. A PDP consists of a finite set of locations, each with a location invariant over a set of variables.

Figure 7.1: Example CTMC C and DTA A ((a) CTMC C; (b) DTA A)

A PDP can jump
between locations either randomly, in which case the residence time of a location
is governed by an exponential distribution, or when the location invariant is violated. While staying in a location, a PDP evolves deterministically according
to a flow function (which is the solution of a system of ODEs). A state of the
PDP consists of a location and a valuation of the variables. The target state of
the jump is determined by a probability measure depending on the source state.
The process is Markovian as the current state contains all the information to
predict the future progress of the process. For a comprehensive exposition of
PDPs, we refer the readers to [Dav93].
Let X = {x1 , . . . , xn } be a set of variables in R. An X -valuation is a function
η : X → R assigning to each variable x a value η(x). Let V(X ) denote the set of
all valuations over X. A constraint on X, denoted by g, is a subset of R^n. Let B(X) denote the set of constraints over X. An X-valuation η satisfies constraint g, denoted η |= g, if (η(x1), · · ·, η(xn)) ∈ g. Note that here we do not restrict
the constraint to a particular form (as in clock constraints), but rather follow a
semantical formulation.
Definition 7.2.5 (PDP, [Dav84]) A piecewise-deterministic (Markov) process (PDP) is a tuple Z = (Z, X , Inv , φ, Λ, µ) where:
• Z is a finite set of locations;
• X is a finite set of variables;
• Inv : Z → B(X ) is an invariant function;
• φ : Z × V(X ) × R → V(X ) is a flow function, which is assumed to be the
solution of a system of ODEs with a Lipschitz continuous vector field;
• Λ : S → R≥0 is an exit rate function; and
• µ : S ∪ ∂S → Distr(S) is a transition probability function,

where S := {ξ := (z, η) | z ∈ Z, η ∈ Inv(z)} is the state space of the PDP Z, S̊ is the interior of S, and ∂S = ⋃_{z∈Z} {z} × ∂Inv(z) is the boundary of S, with ∂Inv(z) = cl(Inv(z)) \ int(Inv(z)) the boundary of Inv(z) and cl(Inv(z)) the closure of Inv(z).

Figure 7.2: An example PDP

Functions Λ and µ satisfy the following conditions:
• ∀ξ ∈ S. ∃ε(ξ) > 0 such that the function t ↦ Λ(ξ ⊕ t) is integrable on [0, ε(ξ)), where ξ ⊕ t = (z, φ(z, η, t)) for ξ = (z, η); and
• the function ξ ↦ µ(ξ, A) (where µ(ξ, A) is a shorthand for (µ(ξ))(A)) is measurable for any A ∈ F(S), where F(S) is the σ-algebra generated by the countable union ⋃_{z∈Z} {z} × Az with Az a subset of F(Inv(z)), and µ(ξ, {ξ}) = 0.
A PDP is only allowed to stay in location z when the constraint Inv (z) is
satisfied. If, e.g., Inv(z) is x₁² − 2x₂ < 1.5 ∧ x₃ > 2, then its closure cl(Inv(z)) is x₁² − 2x₂ ≤ 1.5 ∧ x₃ ≥ 2, and the boundary ∂Inv(z) is x₁² − 2x₂ = 1.5 ∧ x₃ = 2.
When the variable valuation satisfies the boundary (η |= ∂Inv (z)), the PDP is
forced to jump and leave the current location z. The flow function φ defines
the time-dependent behavior in a single location, in particular, how the variable
valuations change when time elapses. State ξ ⊕ t is the timed successor of
state ξ (on the same location) given that t time units have passed. The PDP
is piecewise-deterministic because in each location (one piece) the behavior is
deterministically determined by φ. In summary, when a new state ξ = (z, η)
is entered and Inv (z) is valid, i.e., ξ ∈ S, the PDP can either delay to state
ξ ′ = (z, η ′ ) ∈ S ∪ ∂S according to both the flow function φ and the time delay t
(in this case ξ ′ = ξ⊕t); or take a Markovian jump to state ξ ′′ = (z ′′ , η ′′ ) ∈ S with
probability µ(ξ, {ξ ′′ }). Note that the residence time of a location is exponentially
distributed. When Inv (z) is invalid, i.e., ξ ∈ ∂S, ξ will be forced to take a
boundary jump to ξ ′′ with probability µ(ξ, {ξ ′′ }).
The embedded discrete-time Markov process (DTMP) emb(Z) of the PDP Z has the same state space S as Z. The (one-jump) transition probability from a state ξ to a set A ⊆ S of states (on locations different from that of ξ), denoted by µ̂(ξ, A), is given by [Dav93, CD88]:

    µ̂(ξ, A) = ∫_0^{♭(ξ)} (Q1_A)(ξ ⊕ t)·Λ(ξ ⊕ t)·e^{−∫_0^t Λ(ξ⊕τ)dτ} dt        (7.2)
             + (Q1_A)(ξ ⊕ ♭(ξ))·e^{−∫_0^{♭(ξ)} Λ(ξ⊕τ)dτ} ,                    (7.3)
where ♭(ξ) = inf{t > 0 | ξ ⊕ t ∈ ∂S} is the minimal time to hit the boundary if such a time exists, and ♭(ξ) = ∞ otherwise. (Q1_A)(ξ) = ∫_S 1_A(ξ′) µ(ξ, dξ′) is the accumulative (one-jump) transition probability from ξ to A, and 1_A(ξ) is the
characteristic function such that 1A (ξ) = 1 when ξ ∈ A and 1A (ξ) = 0 otherwise. Term (7.2) specifies the probability to delay to state ξ ⊕ t (on the same
location) and take a Markovian jump from ξ ⊕ t to A. Note that the delay t
can take a value from [0, ♭(ξ)). Term (7.3) is the probability to stay in the same
location for ♭(ξ) time units and then it is forced to take a boundary jump from
ξ ⊕ ♭(ξ) to A since Inv (z) is invalid.
Example 7.2.6 Fig. 7.2 depicts a 3-location PDP Z with one variable x, where
Inv (z0 ) is x < 2 and Inv (z1 ), Inv (z2 ) are both x ∈ [0, ∞). Solving ẋ = 1 gives
the flow function φ(zi , η(x), t) = η(x) + t for i = 0, 1, 2. The state space of Z is
{(z0 , η) | 0 < η(x) < 2} ∪ {(z1 , R)} ∪ {(z2 , R)}. Let exit rate Λ(ξ) = 5 for
any
ξ ∈ S. For η |= Inv(z0), let µ((z0, η), {(z1, η)}) := 1/3 and µ((z0, η), {(z2, η)}) := 2/3, and let the boundary measure be µ((z0, 2), {(z1, 2)}) := 1. Given state ξ0 = (z0, 0) and the set of states A = {z1} × R, the time for ξ0 to hit the boundary is ♭(ξ0) = 2. Then (Q1_A)(ξ0 ⊕ t) = 1/3 if t < 2, and (Q1_A)(ξ0 ⊕ t) = 1 if t = 2. In the embedded DTMP emb(Z), the transition probability from state ξ0 to A is:

    µ̂(ξ0, A) = ∫_0^2 (1/3)·5·e^{−∫_0^t 5 dτ} dt + 1·e^{−∫_0^2 5 dτ} = 1/3 + (2/3)·e^{−10} .
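The closed form in Example 7.2.6 is easy to double-check numerically. The sketch below (ours) evaluates term (7.2) with a midpoint rule and adds the boundary term (7.3); the discretization parameters are arbitrary assumptions.

    import math

    lam, flat = 5.0, 2.0                   # exit rate Lambda and boundary hitting time

    def density(t):                        # (Q 1_A)(xi0 + t) * Lambda * exp(-Lambda * t) for t < 2
        return (1.0 / 3.0) * lam * math.exp(-lam * t)

    n = 100000
    dt = flat / n
    term_72 = sum(density((i + 0.5) * dt) for i in range(n)) * dt   # Markovian jumps before the boundary
    term_73 = 1.0 * math.exp(-lam * flat)                           # forced boundary jump at time 2

    print(term_72 + term_73)                                  # ~ 0.3333636
    print(1.0 / 3.0 + (2.0 / 3.0) * math.exp(-10.0))          # closed form from the example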
7.3 Model Checking DTA Specifications
In this section, we deal with model checking C against linear real-time properties
specified by DTA A, namely, to compute the probability of C |= A. We shall
prove that this can be reduced to computing the reachability probability in the
product of C and A (Thm. 7.3.8), which can be further reduced to computing
the reachability probability in a corresponding PDP (Thm. 7.3.12). To simplify
the notations, we assume, without loss of generality, that a CTMC has only
one initial state s0, i.e., α(s0) = 1 and α(s) = 0 for s ≠ s0.
7.3.1 Deterministic Markovian Timed Automata
To model check a DTA specification, we will exploit the product of a CTMC
and a DTA, which is a deterministic Markovian timed automaton (DMTA):
Definition 7.3.1 (DMTA) A deterministic Markovian timed automaton is a
tuple M = (Loc, X , ℓ0 , LocF , E, ), where:
• Loc is a finite set of locations with ℓ0 ∈ Loc the initial location;
• X is a finite set of clocks;
• LocF ⊆ Loc is the set of accepting locations;
• E : Loc → R≥0 is the exit rate function; and
• ⇝ ⊆ Loc × B(X) × 2^X × Distr(Loc) is an edge relation satisfying that: (ℓ, g, X, ζ), (ℓ, g′, X′, ζ′) ∈ ⇝ implies ⟦g⟧ ∩ ⟦g′⟧ = ∅.
The set of clocks X and the related concepts, e.g., clock valuation, clock
constraints, are defined as for TA (see Section 2.2.3). We refer to ℓ −g,X⇝ ζ, for a distribution ζ ∈ Distr(Loc), as an edge, and refer to ℓ −g,X→_{ζ(ℓ′)} ℓ′ as a transition
of this edge. The intuition is that when entering location ℓ, the DMTA chooses
a residence time which is governed by the exponential distribution, i.e., the
probability to leave ℓ in t time units is 1 − e^{−E(ℓ)t}. When it decides to jump, at most one edge, say ℓ −g,X⇝ ζ, is enabled (due to the determinism), and the
probability to jump to ℓ′ is given by ζ(ℓ′ ). The DMTA is deterministic as it
has a unique initial location and disjoint guards for all edges emanating from
any location.
Example 7.3.2 The DMTA in Fig. 7.3(a) has initial location ℓ0 with two
edges, with guards x < 1 and 1 < x < 2. Assume t time units elapsed. If
t < 1, then the upper edge is enabled and the probability to go to ℓ1 in time t
is (1 − e^{−r0·t})·1, where E(ℓ0) = r0; no clock is reset. The process is similar for
1 < t < 2, except that x will be reset. Location ℓ3 is accepting.
Paths in DMTAs.
Given a DMTA M and a finite symbolic path

    ℓ0 −g0,X0→_{p0} ℓ1 · · · ℓn−1 −gn−1,Xn−1→_{pn−1} ℓn ,

where pi = ζi(ℓi+1) is the transition probability of ℓi −gi,Xi→_{ζi(ℓi+1)} ℓi+1, the induced finite paths in M are of the form σ = ℓ0 −t0→ ℓ1 · · · ℓn−1 −tn−1→ ℓn and have the property that η0 = ~0, (ηi + ti) |= gi, and ηi+1 = (ηi + ti)[Xi := 0], where 0 ≤ i < n and ηi is the clock valuation of X in M on entering location ℓi. Finite path σ is accepting if ℓn ∈ LocF. All the definitions on paths in CTMCs can be carried over to paths in DMTAs.
Given a DMTA M, the cylinder set C(ℓ0, I0, · · ·, In−1, ℓn), where (ℓ0, · · ·, ℓn) ∈ Loc^{n+1} and Ii ⊆ R≥0, denotes the set of paths σ in M such that σ[i] = ℓi and σ⟨i⟩ ∈ Ii. We now define the measure Pr^M_{η0}, the probability of the cylinder set C(ℓ0, I0, · · ·, In−1, ℓn) given that the initial clock valuation in location ℓ0 is η0, as

    Pr^M_{η0}(C(ℓ0, I0, · · ·, In−1, ℓn)) := P^M_0(η0) .
Here P^M_i(·) for 0 ≤ i ≤ n is defined as follows: P^M_n(η) = 1, and for 0 ≤ i < n, noting that there exists a transition ℓi −gi,Xi→_{pi} ℓi+1, we define

    P^M_i(η) = ∫_{Ii} 1_{gi}(η + τ)·pi·E(ℓi)·e^{−E(ℓi)τ} · P^M_{i+1}(η′) dτ ,    (7.4)

where the factor 1_{gi}(η + τ)·pi·E(ℓi)·e^{−E(ℓi)τ} is referred to as (⋆) and the factor P^M_{i+1}(η′) as (⋆⋆), η′ := (η + τ)[Xi := 0], and the characteristic function 1_{gi}(η + τ) = 1 if η + τ |= gi, and 0 otherwise.
Intuitively, P^M_i(ηi) is the probability of the suffix cylinder set starting from ℓi and ηi to ℓn. It is recursively computed as the product of the probability of taking a transition from ℓi to ℓi+1 in time interval Ii (cf. (⋆)) and the probability of the suffix cylinder set from ℓi+1 and ηi+1 onwards (cf. (⋆⋆)), where (⋆) is computed by first ruling out the paths that do not belong to the cylinder set, via 1_{gi}(η + τ), and then computing the transition probability using the density function pi·E(ℓi)·e^{−E(ℓi)τ} as in CTMCs. The characteristic function is Riemann integrable, as it is bounded and its support is an interval, and thus P^M_i(η) is well-defined.
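The recursion (7.4) lends itself to straightforward numerical evaluation. The sketch below (ours) computes P^M_0(η0) for an illustrative single-clock symbolic path by a midpoint rule; the path, guards, rates and interval bounds are assumptions chosen only for the example.

    import math

    # symbolic path: l0 --(x < 1, no reset, p = 1, E = 1)--> l1 --(x > 1, no reset, p = 0.5, E = 1)--> l2
    steps = [dict(guard=lambda x: x < 1, reset=False, p=1.0, rate=1.0, I=(0.0, 1.0)),
             dict(guard=lambda x: x > 1, reset=False, p=0.5, rate=1.0, I=(0.0, 2.0))]

    def P(i, eta, n=1000):
        if i == len(steps):
            return 1.0                                     # P_n(eta) = 1
        st = steps[i]
        lo, hi = st['I']
        dt = (hi - lo) / n
        total = 0.0
        for k in range(n):
            tau = lo + (k + 0.5) * dt
            if not st['guard'](eta + tau):                 # characteristic function 1_{g_i}
                continue
            eta_next = 0.0 if st['reset'] else eta + tau   # clock reset [X_i := 0]
            total += st['p'] * st['rate'] * math.exp(-st['rate'] * tau) * P(i + 1, eta_next, n) * dt
        return total

    print(P(0, 0.0))    # Pr^M_{eta0} of the cylinder set along this symbolic path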
7.3.2 Product DMTAs
Given a CTMC C and a DTA A, the product C ⊗ A is a DMTA defined by:
Definition 7.3.3 (Product of CTMC & DTA) Let C = (S, AP, L, s0 , P, E)
be a CTMC and A = (2AP , X , Q, q0 , QF , →) be a DTA. We define C ⊗ A =
(Loc, X, ℓ0, LocF, E, ⇝) as the product DMTA, where Loc := S × Q; ℓ0 := ⟨s0, q0⟩; LocF := S × QF; E(⟨s, q⟩) := E(s); and ⇝ is defined as the relation satisfying the following rule:

    P(s, s′) > 0  ∧  q −L(s),g,X→ q′
    ─────────────────────────────────    s.t. ζ(⟨s′, q′⟩) = P(s, s′) .
          ⟨s, q⟩ −g,X⇝ ζ
Example 7.3.4 Let CTMC C be in Fig. 7.1(a) and DTA A be in Fig. 7.1(b)
(page 139). The product DMTA C ⊗ A is depicted as in Fig. 7.3(a) (page 144).
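A direct implementation of Def. 7.3.3 is a nested loop over CTMC transitions and DTA edges. The sketch below (ours) builds the edge relation of C ⊗ A for a small encoding; the dictionaries, the guard strings, and the concrete numbers (which only loosely follow Fig. 7.1) are assumptions.

    def product(ctmc, dta):
        """ctmc: {'P': (s, s') -> prob, 'L': labeling, 'E': exit rates};
        dta: q -> list of (label, guard, reset, q').
        Returns the DMTA edges as <s, q> -> {(guard, reset): distribution over <s', q'>}."""
        edges = {}
        for (s, s2), prob in ctmc['P'].items():
            if prob <= 0:
                continue
            for q, qedges in dta.items():
                for (label, guard, reset, q2) in qedges:
                    if label != ctmc['L'][s]:
                        continue
                    dist = edges.setdefault((s, q), {}).setdefault((guard, reset), {})
                    dist[(s2, q2)] = dist.get((s2, q2), 0.0) + prob   # zeta(<s', q'>) = P(s, s')
        return edges

    ctmc = {'P': {('s0', 's1'): 1.0, ('s1', 's2'): 0.5, ('s1', 's3'): 0.3, ('s1', 's0'): 0.2,
                  ('s2', 's2'): 1.0, ('s3', 's3'): 1.0},
            'L': {'s0': 'a', 's1': 'a', 's2': 'b', 's3': 'c'},
            'E': {'s0': 1.0, 's1': 1.0, 's2': 1.0, 's3': 1.0}}
    dta = {'q0': [('a', 'x < 1', frozenset(), 'q0'), ('a', '1 <= x < 2', frozenset({'x'}), 'q0'),
                  ('b', 'x > 1', frozenset(), 'q1')]}
    for loc, eds in product(ctmc, dta).items():
        print(loc, eds)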
Remark 7.3.5 It is easy to see from the construction that C ⊗ A is indeed a
DMTA. The determinism of the DTA A guarantees that the induced product
is also deterministic. In C ⊗ A, at most one “action” is possible from each location ℓ = ⟨s, q⟩, viz. L(s), possibly via different edges, but with disjoint guards. We can thus omit the action from the product DMTA.
We denote

    Paths^{C⊗A}(♦ LocF) := {σ ∈ Paths^{C⊗A}_⋆ | σ is accepted by C ⊗ A}

as the set of accepted paths in C ⊗ A, and

    Paths^C(A) := {ρ ∈ Paths^C_⋆ | ρ is accepted by DTA A}

as the set of paths in CTMC C that are accepted by DTA A. For any n-ary tuple J, let J⇂i denote the i-th entry of J, for 1 ≤ i ≤ n. For a C ⊗ A path σ = ⟨s0, q0⟩ −t0→ · · · −tn−1→ ⟨sn, qn⟩, let σ⇂1 := s0 −t0→ · · · −tn−1→ sn, and for any set Π of C ⊗ A paths, let Π⇂1 = ⋃_{σ∈Π} σ⇂1.
Figure 7.3: Example product construction of CTMC and DTA ((a) DMTA M = C ⊗ A; (b) reachable region graph)
Lemma 7.3.6 Given any CTMC C and DTA A, we have that Paths^C(A) = Paths^{C⊗A}(♦ LocF)⇂1.
Proof: (=⇒) We show that for any path ρ ∈ Paths C (A), there exists a path
σ ∈ Paths C⊗A (♦ LocF ) such that σ⇂1 = ρ.
We assume, without loss of generality, that ρ = s0 −t0→ s1 · · · sn−1 −tn−1→ sn ∈ Paths^C is accepted by A, i.e., the run of A induced by ρ ends in an accepting location and, for 0 ≤ i < n, η0 = ~0, ηi + ti |= gi and ηi+1 = (ηi + ti)[Xi := 0], where ηi is the clock valuation on entering state si. We can then construct a path θ ∈ Paths^A from ρ such that θ = q0 −L(s0),t0→ q1 · · · qn−1 −L(sn−1),tn−1→ qn with qn ∈ QF, where si and qi have the same entering clock valuation. From ρ and θ, we can construct the path

    σ = ⟨s0, q0⟩ −t0→ ⟨s1, q1⟩ · · · ⟨sn−1, qn−1⟩ −tn−1→ ⟨sn, qn⟩ ,

where ⟨sn, qn⟩ ∈ LocF. It follows from the definition of an accepting path in a DMTA that σ ∈ Paths^{C⊗A}(♦ LocF) and σ⇂1 = ρ.
(⇐=) We show that for any path σ ∈ Paths C⊗A (♦ LocF ), σ⇂1 ∈ Paths C (A).
We assume, without loss of generality, that for path
t
t
0
σ = hs0 , q0 i −−→
−−→ hsn , qn i ∈ Paths C⊗A (♦ LocF ) ,
· · · −−n−1
it holds that hsn , qn i ∈ LocF , and for 0 ≤ i < n, η0 |= ~0 and ηi + ti |= gi
and ηi+1 = (ηi + ti )[Xi := 0], where ηi is the time valuation on entering state
hsi , qi i. It then directly follows that qn ∈ QF and σ⇂1 ∈ Paths C (A), given ηi
the entering clock valuation of state si .
Theorem 7.3.7 For any CTMC C and DTA A, Paths C (A) is measurable.
Proof: We first deal with the case that A contains only strict inequalities. Since
Paths C (A) is a set of finite paths,
[
Paths C (A) =
Paths Cn (A) ,
n∈N
where Paths Cn (A) is the set of paths of C accepted by A of length n. For any
t
t0
path ρ:=s0 −−→
s1 · · · sn−1 −−n−1
−−→ sn ∈ Paths Cn (A), we can associate ρ with a
L(s ),t
L(s
),t
n−1
n−1
0
path θ:=q0 −−−−0−−→
q1 · · ·qn−1 −−−−
−−−−
−→ qn of A induced by the location
sequence:
L(s ),g ,X
L(s
),g
,X
n−1
n−1
n−1
0
0
q0 −−−−
−−0−−→
q1 · · · qn−1 −−−−
−−−−
−−−−
−→ qn ,
such that qn ∈ QF and there exist {ηi }0≤i<n with 1) η0 = ~0; 2) (ηi + ti ) |= gi ;
and 3) ηi+1 = (ηi + ti )[Xi := 0], where ηi is the clock valuation on entering qi .
To prove the measurability of Paths Cn (A), it suffices to show that for each
t
t0
path ρ := s0 −−→
−−→ sn ∈ Paths Cn (A), there exists a cylinder set
· · · −−n−1
C(s0 , I0 , · · ·, In−1 , sn ) (Cρ for short) that contains ρ, and that each path in Cρ
+
is accepted by A. The interval Ii is constructed according to ti as Ii = [t−
i , ti ]
such that
+
• If ti ∈ Q, then t−
i = ti := ti ;
+
• else if ti ∈ R \ Q, then let t−
i , ti ∈ Q such that
+
−
+
– t−
i ≤ ti ≤ ti and ⌊ti ⌋ = ⌊ti ⌋ and ⌈ti ⌉ = ⌈ti ⌉;
∆
−
, where
– t+
i − ti <
2·n
o
n
6 0 .2
∆=
min
{ηj (x) + tj }, 1 − {ηj (x) + tj } {ηj (x) + tj } =
0≤j<n,x∈X
2 {·}
denotes the fractional part.
146
Chapter 7 Model Checking of CTMCs Against DTA Specifications
Note that here we are considering DTA with strict inequalities. Hence
for any 0 ≤ i < n with ηi + ti |= gi , it must be the case that
{ηi (x) + ti } =
6 0.
t′
t′
0
To show that ρ′ := s0 −−→
−−→ sn ∈ Cρ is accepted by A, let η0′ := ~0
· · · −−n−1
′
′
′
and ηi+1 := (ηi + ti )[Xi := 0]. We will show that ηi′ + t′i |= gi . To this end, it
suffices to observe that η0′ = η0 , and for any i > 0 and any clock variable x,
i−1
i−1
′
X
′
X
∆
−
+
−
ηi (x) − ηi (x) ≤
tj − tj ≤
t+
.
j − tj ≤ n · (tj − tj ) ≤
2
j=0
j=0
We claim that since DTA A contains only strict inequalities, it must be the
case that ηi′ + t′i |= gi . To see this, suppose gi is of the form x > K for
∆
′
some K ∈ N. We have that |ηi′ (x) − ηi (x)| ≤ ∆
2 and |ti − ti | < 2 , therefore
′
′
|(ηi (x) + ti ) − (ηi (x) + ti )| < ∆. Note that ηi (x) + ti > K, and thus ηi (x) + ti −
{ηi (x)+ti } = ⌈ηi (x)+ti ⌉ ≥ K. Hence ηi (x)+ti −∆ ≥ K since ∆ ≤ {ηi (x)+ti }.
It follows that ηi′ (x) + t′i > K. A similar argument applies to the case x < K
and can be extended to any constraint gi . Thus, ηi′ + t′i |= gi .
It follows that Cρ is a cylinder set of C and each path in this cylinder set is
accepted by A, i.e., ρ ∈ Cρ and Cρ ⊆ Paths Cn (A) with |ρ| = n. Together with
S
the fact that Paths Cn (A) ⊆ ρ∈Paths C (A) Cρ , we have:
n
Paths Cn (A) =
[
Cρ
ρ∈Paths C
n (A)
and thus
Paths C (A) =
[
[
Cρ .
n∈N ρ∈Paths C
n (A)
We note that each interval in the cylinder set Cρ has rational bounds. Hence
Cρ is measurable. It follows that Paths C (A) is a union of countably many
cylinder sets, and thus is measurable.
We then deal with A with equalities of the form x = n for n ∈ N. We prove
the measurability by induction on the number of such equalities appearing in
A. We have shown the base case (DTAs with only strict inequalities). Now
a,g,X
we focus on the inductive case. Suppose there exists a transition ι = q −→ q ′
where g contains x = n. We first consider a DTA Aι obtained from A by
deleting the transitions from q other than ι. We then consider three DTA Āι ,
<
>
A>
ι and Aι where Āι is obtained from Aι by replacing x = n by true; Aι is
<
obtained from Aι by replacing x = n by x > n and Aι is obtained from Aι by
replacing x = n by x < n. It is not difficult to see that
C
<
Paths C (Aι ) = Paths C (Āι ) \ (Paths C (A>
ι ) ∪ Paths (Aι )) .
Note that this holds since A is deterministic. By the induction hypothesis,
C
C
<
Paths C (Āι ), Paths C (A>
ι ) and Paths (Aι ) are measurable. Hence Paths (Aι )
147
7.3 Model Checking DTA Specifications
is measurable. Furthermore, we note that
[
Paths C (A) =
Paths C (Aι ) ,
a,g,X
ι=q −→ q ′
therefore Paths C (A) is measurable as well.
For arbitrary A with time constraints of the form x ⊲⊳ n where ⊲⊳∈ {≥, ≤},
we consider two DTAs A= and A⊲⊳ . Clearly
Paths C (A) = Paths C (A= ) ∪ Paths C (A⊲⊳
¯) ,
¯ => if ⊲⊳=≥; < otherwise. It follows that Paths C (A) is measurable. where ⊲⊳
We remark that the set of time-convergent paths in a CTMC has probability
measure 0 (see [BHHK03]). The following theorem establishes the link between
CTMC C and DMTA C ⊗ A.
Theorem 7.3.8 For any CTMC C and DTA A,
C⊗A
PrC Paths C (A) = Pr~C⊗A
Paths
(♦Loc
)
.
F
0
Proof: According to Thm. 7.3.7, Paths C (A) can be rewritten as the combination of cylinder sets of the form C(s0 , I0 , · · · , In−1 , sn ) which are all accepted by DTA A.3 By Lem. 7.3.6, namely by path lifting, we can establish exactly the same combination of cylinder sets C(ℓ0 , I0 , · · · , In−1 , ℓ0 ) for
Paths C⊗A (♦LocF ), where si = ℓi ⇂1 . It then suffices to show that for each cylinder set C(s0 , I0 , · · · , In−1 , sn ) which is accepted by A, PrC and PrC⊗A yield the
same probabilities. Note that a cylinder set C is accepted by a DTA A, if each
path that C generates can be accepted by A.
For the measure PrC , according to Eq. (7.1),
Y Z
P(si , si+1 ) · E(si )e−E(si )τ dτ .
PrC C(s0 , I0 , · · · , In−1 , sn ) =
0≤i<n
Ii
For the measure Pr~C⊗A
, according to Section 7.3.1, it is given by PC⊗A
(~0)
0
0
C⊗A
where Pn (η) = 1 for any clock valuation η and
Z
1gi (ηi + τi )pi E(ℓi )e−E(ℓi )τi · PC⊗A
PC⊗A
(η
)
=
i
i+1 (ηi+1 ) dτi ,
i
Ii
where ηi+1 = (ηi + τi )[Xi := 0] and 1gi (ηi + τi ) = 1, if ηi + τi |= gi ; 0, otherwise.
(ηi ) is a constant, i.e., is independent
We will show, by induction, that PC⊗A
i
of ηi , if the cylinder set C(ℓ0 , I0 , · · · , In−1 , ℓn ) is accepted by C ⊗ A. Firstly
3 Note
that this means each path in the cylinder set is accepted by A.
148
Chapter 7 Model Checking of CTMCs Against DTA Specifications
let us note that for C(ℓ0 , I0 , · · · , In−1 , ℓn ), there must exist some sequence of
transitions
gn−1
,Xn−1
g0 ,X0 /
ℓ1 · · · ℓn−1 pn−1 / ℓn
ℓ0
p0
with η0 = ~0 and ∀ti ∈ Ii with 0 ≤ i < n, ηi + ti |= gi and ηi+1 := (ηi + ti )[Xi :=
0]. Moreover, according to Def. 7.3.3, we have:
pi = P(si , si+1 )
and
E(ℓi ) = E(si ) .
(7.5)
We apply a backward induction on n down to 0. The base case is trivial since
PC⊗A
(η) = 1. By the induction hypothesis, PC⊗A
n
i+1 (η) is a constant. For the
inductive case, consider i < n. For any τi ∈ Ii , since ηi +τi |= gi , 1gi (ηi +τi ) = 1,
it follows that
Z
C⊗A
Pi
(ηi ) =
1gi (ηi + τi )pi E(ℓi )e−E(ℓi )τi · PC⊗A
i+1 (ηi+1 ) dτi
Ii
Z
I.H.
pi E(ℓi )e−E(ℓi )τi dτi · PC⊗A
=
i+1 (ηi+1 )
ZIi
(7.5)
=
P(si , si+1 )E(si )e−E(si )τi dτi · PC⊗A
i+1 (ηi+1 ) .
Ii
Clearly, this is a constant. It is thus easy to see that
C(ℓ0 , I0 , · · · , In−1 , ℓn ) :=
Pr~C⊗A
0
Y Z
C⊗A ~
P(si , si+1 )E(si )e−E(si )τ dτ ,
P0 (0) =
0≤i<n
Ii
which completes the proof.
7.3.3
Region Construction for DMTA
In the remainder of this
on how to compute the proba section, we shall focus
C⊗A
C⊗A
bility measure Pr~0
Paths
(♦ LocF ) in an effective way. We start with
adopting the standard region construction [AD94] to DMTA. As we will see,
this allows us to obtain a PDP from a DMTA in a natural way.
As usual, a region is represented as a (clock) constraint. For regions Θ, Θ′ ∈
B(X ), Θ′ is the successor region of Θ if for all η |= Θ there exists δ ∈ R>0 such
that η + δ |= Θ′ and for all δ ′ < δ, η + δ ′ |= Θ ∨ Θ′ . A region Θ satisfies a guard
g (denoted by Θ |= g) iff ∀η |= Θ. η |= g. A reset operation on region Θ is
defined as Θ[X := 0] := {η[X := 0] | η |= Θ}.
Definition 7.3.9 (Region graph of DMTA) Given DMTA M = (Loc, X ,
ℓ0 , LocF , E, ), the region graph G(M) is defined as a tuple (V, v0 , VF , Λ, ֒→),
where:
149
7.3 Model Checking DTA Specifications
• V := Loc × B(X ) is a finite set of vertices, consisting of a location ℓ in M
and a region Θ;
• v0 ∈ V is the initial vertex if (ℓ0 , ~0) ∈ v0 ;
• VF := {v | v⇂1 ∈ LocF } is the set of accepting vertices;
• ֒→ ⊆ V × [0, 1] × 2X ∪ {δ} × V is the transition (edge) relation, such
that:
δ
– v ֒→ v ′ is a delay transition if v⇂1 = v ′ ⇂1 and v ′ ⇂2 is a successor region
of v⇂2 ; and
p,X
g,X
– v ֒→ v ′ is a Markovian transition if there is a transition v⇂1 p / v ′ ⇂1
in M such that v⇂2 |= g and v⇂2 [X := 0] |= v ′ ⇂2 .
• Λ : V → R≥0 is the exit rate function where Λ(v) := E(v⇂1 ) if there exists
a Markovian transition from v, 0 otherwise;
Note that in the obtained region graph, Markovian transitions emanating
from any boundary region do not contribute to the reachability probability as
the time to hit the boundary is always zero (cf. Eq.(7.7)). Therefore, we can
remove all the Markovian transitions emanating from boundary regions and
then collapse each of them with its unique non-boundary (direct) successor. In
the sequel we still denote this collapsed region graph G(M) by slightly abusing
the notation.
Example 7.3.10 For the DMTA C⊗A in Fig. 7.4(a), the reachable part (forward reachable from the initial vertex and backward reachable from the accepting vertices) of the collapsed region graph G(C⊗A) is in Fig. 7.4(b). The
accepting vertices are sinks.
We now define the underlying PDP of a DMTA based on the region graph
G(M) given above:
Definition 7.3.11 For DMTA M = (Loc, X , ℓ0 , LocF , E, ) and region graph
G(M) = (V, v0 , VF , Λ, ֒→), we define PDP Z(M) = (V, X , Inv , φ, Λ, µ) where
for any v ∈ V ,
• Inv (v) := v⇂2 and the state space S := {(v, η) | v ∈ V, η ∈ Inv (v)};
• φ(v, η, t) := η + t for η ∈ Inv (v);
• Λ(v, η) := Λ(v) is the exit rate of state (v, η);
δ
• [boundary jump] for each delay transition v ֒→ v ′ in G(M) we have
µ(ξ, {ξ ′ }) := 1, where ξ = (v, η), ξ ′ = (v ′ , η) and η ∈ ∂Inv (v); and
p,X
• [Markovian jump] for each Markovian transition v ֒→ v ′ in G(M) we have
µ(ξ, {ξ ′ }) := p, where ξ = (v, η), η ∈ Inv (v) and ξ ′ = (v ′ , η[X := 0]).
From now on we write Λ(v) instead of Λ(v, η) as they coincide.
150
Chapter 7 Model Checking of CTMCs Against DTA Specifications
x2 > 1, {x1 }, 1
r0
r1
ℓ1 =hs1 , q1 i
ℓ0 =hs0 , q0 i
x1 < 2, {x2 }, 1
(a) DMTA M = C ⊗ A
v1 , r0
v0 , 0
ℓ0
06x1 =x2 <1
δ
ℓ0
16x1 =x2 <2
v3 , 0
ℓ1
06x1 <1
16x2 <2
x2 >x1 +1
v2 , r0
δ
ℓ0
x1 >2, x2 >2
v4 , 0
ℓ1
06x1 <1
x2 >2
x2 >x1 +2
1, {x1 }
1, {x1 }
(b) Reachable region graph G(C ⊗ A)
Figure 7.4: Example of a region graph
7.3.4
Characterizing Reachability Probabilities
C⊗A
Recall that our task is to compute Pr~C⊗A
Paths
(♦
Loc
)
, owning to
F
0
Thm. 7.3.8. To this end, we now reduce it to the problem of computing the
(time-unbounded) reachability probability in the PDP Z(C ⊗ A), given the initial state(v0 , ~0) and the set of goal states {(v, η) | v ∈ VF , η ∈ Inv (v)} (VF , ·)
for short . It is not difficult to see that reachability probabilities of untimed
events in a PDP Z can be computed in the embedded DTMP emb(Z). Note
that the set of locations of Z and emb(Z) are equal. In the sequel, let D denote
emb(Z).
D
For each
vertex
v
∈
V
,
we
define
recursively
Prob
(v,
η),
(V
,
·)
Prob D
F
v (η)
for short as the probability to reach the goal states (VF , ·) in D from state (v, η).
Firstly, we define
δ
• For the delay transition v ֒→ v ′ ,
−Λ(v)♭(v,η)
Prob D
· Prob D
v,δ (η) = e
v ′ η + ♭(v, η) .
(7.6)
Recall that ♭(v, η) is the minimal time for (v, η) to hit the boundary
∂Inv (v).
p,X
• For the Markovian transition v ֒→ v ′ ,
Prob D
v,v ′ (η)
=
Z
0
♭(v,η)
p·Λ(v)·e−Λ(v)τ ·Prob D
v ′ (η + τ )[X := 0] dτ . (7.7)
151
7.3 Model Checking DTA Specifications
Overall, for each vertex v ∈ V , we obtain:
(
P
/ VF
Prob D
Prob D
p,X
v,δ (η) +
v,v ′ (η), if v ∈
D
v ֒→ v ′
Prob v (η) =
1,
otherwise
.
(7.8)
Note that here the notation η is slightly abused. It represents a vector of clock
variables (see Ex. 7.3.14). Eq. (7.6) and Eq. (7.7) are derived based on Eq. (7.3)
and Eq. (7.2), respectively. In particular the multi-step reachability probability
is computed using a sequence of one-step transition probabilities.
Hence we obtain a system of Rintegral equations Eq. (7.8). One can read
Eq. (7.8) either in the form f (ξ) = Dom(ξ) K(ξ, ξ ′ )f (dξ ′ ), where K is the kernel
and Dom(ξ) is the domain of integration depending on the continuous state
space S; or in the operator form f (ξ) = (Jf )(ξ), where J is the integration
operator. Generally, Eq. (7.8) does not need to have a unique solution. It turns
~
out that the reachability probability Prob D
v0 (0) coincides with the least fixpoint
~
~
of the operator J (denoted by lfpJ ), i.e., Prob D
v0 (0) = (lfpJ )(v0 , 0). Formally,
we have:
Theorem 7.3.12 For any CTMC C and DTA A, Pr~C⊗A
Paths C⊗A (♦ LocF )
0
is the least solution of Prob D
v0 (·), where D is the embedded DTMP of C ⊗ A.
Proof: We can express the set of all finite paths in C⊗A ending in some accepting
location ℓn ∈ LocF for n ∈ N as the union over all location sequences, i.e.,
[
[
C(ℓ0 , I0 , · · · , In−1 , ℓn )
ΠC⊗A =
n∈N (ℓ0 ,··· ,ℓn )∈Locn+1
= Paths C⊗A (♦ LocF ) ∪ Paths C⊗A (♦ LocF ) .
where C(ℓ0 , I0 , · · · , In−1 , ℓn ) is a cylinder set, Ii = [0, ∞) and Paths C⊗A (♦LocF )⇂1
are the set of paths which are not accepted by the DTA A. Note that we can
easily extend the measure Pr~C⊗A
to ΠC⊗A such that
0
Pr~C⊗A
ΠC⊗A = Pr~C⊗A
Paths C⊗A (♦ LocF ) .
0
0
This means that in order to prove the theorem, we need to show that
~ˆ
ΠC⊗A = Prob D
Pr~C⊗A
v0 (0) ,
0
(7.9)
D
~ˆ
(v0 , ~ˆ0), (VF , ·) , i.e., the reachability
where Prob D
v0 (0) is the short form of Prob
probability from state (v , ~ˆ0) to (V , ·). Note that for better readability, we
0
F
indicate clock valuations in D by adding a “ˆ”.
Eq. (7.9) is to be shown on cylinder sets. Note that each cylinder set C(ℓ0 , I0 ,
· · · , In−1 , ℓn ) ⊆ ΠC⊗A (Cn for short) induces a region graph G(Cn ) = (V, v0 , VF ,
Λ, ֒→), where its underlying PDP and embedded DTMP is Z(Cn ) and D(Cn ),
respectively. To prove Eq. (7.9), it suffices to show that for each Cn ,
n) ~
Pr~C⊗A
(Cn ) = Prob D(C
(ˆ0) ,
v0
0
152
Chapter 7 Model Checking of CTMCs Against DTA Specifications
v0i =(ℓi ,Θ0 )
δ
i
i
=(ℓi ,Θm−1 ) δ vm
=(ℓi ,Θm ) δ
δ vm−1
···
♭(v0i ,η̂0i )61
i
i
♭(vm−1
,η̂m−1
)=1
i
i
♭(vm
,η̂m
)=1
···
pi
i
δ vm
′ =(ℓi ,Θm′ )
i
i
♭(vm
′ ,η̂m′ )=1
pi
i+1
vm
′ =(ℓi+1 ,Θm′ )
i+1
vm
=(ℓi+1 ,Θm )
i+1 i+1
♭(vm
,η̂m )61
i+1 i+1
♭(vm
′ ,η̂m′ )61
Figure 7.5: The sub-region graph for the transition from ℓi to ℓi+1
since ΠC⊗A =
S
n∈N
S
(ℓ0 ,··· ,ℓn )∈Locn+1Cn
and D =
S
n∈N
S
(ℓ0 ,··· ,ℓn )∈Locn+1D(Cn ).
We will prove it by induction on the length n of the cylinder set Cn ⊆ ΠC⊗A .
• For the base case of n = 0, i.e., C0 = C(ℓi ) and ℓi ∈ LocF , it holds
(C0 ) = 1; while in the embedded DTMP D(C0 ), since the
that PrC⊗A
ηi
initial vertex of G(C0 ) is v0 = (ℓi , Θ0 ), where ηi ∈ Θ0 and v0 is consequently the initial location of Z(C0 ) as well as D(C0 ) which is accepting,
0)
(ηˆi ) = 1. Note ℓi ∈ Loc is not necessarily the initial location ℓ0 .
Prob D(C
v0
• For the inductive case, we first assume, as the induction hypothesis, that
for n = k − 1,
D(Ck−1 )
(η̂i+1 ) ,
PrC⊗A
ηi+1 (Ck−1 ) = Prob vi+1
where Ck−1 = C(ℓi+1 , Ii+1 , · · · , Ii+k−1 , ℓi+k ) and ℓi+k ∈ LocF . Note that
ℓi+1 ∈ Loc is not necessarily the initial location ℓ0 . We now proceed to
consider the case of n = k. Let Ck = C(ℓi , Ii , ℓi+1 , Ii+1 , · · · , Ii+k−1 , ℓi+k ).
gi ,Xi / ℓ
As a result, there exists a transition ℓi
i+1 where ηi + τi |= gi for
pi
every τi ∈ (t1 , t2 ). t1 , t2 ∈ Q≥0 ∪ {∞} can be obtained from gi , such that
τj ∈ (t1 , t2 ) iff ηi + τj |= gi . According to the semantics of DMTA, we
have
Z t2
C⊗A
Prηi (Ck ) =
pi E(ℓi )e−E(ℓi )τi · PrC⊗A
(7.10)
ηi+1 (Ck−1 ) dτi ,
t1
where ηi+1 = (ηi + τi )[Xi := 0].
Now we deal with the inductive case for D(Ck ). Let us assume that Ck
induces the region graph G(Ck ) whose subgraph corresponding to transi gi ,Xi / ℓ
tion ℓi
i+1 is depicted in Fig. 7.5. For simplicity we consider that
pi
location ℓi induces the vertices {vji = (ℓi , Θj ) | 0 ≤ j ≤ m′ } and location
ℓi+1 induces the vertices {vji+1 = (ℓi+1 , Θj ) | m ≤ j ≤ m′ }, respectively.
Note that for Markovian transitions, the regions stay the same. We denote
η̂ji (resp. η̂ji+1 ) as the entering clock valuation on vertex vji (resp. η̂ji+1 ),
153
7.3 Model Checking DTA Specifications
for j the indices of the regions. For any η̂ ∈
or more specifically,
Sm−1
m−1
X
m
X
t1 =
j=0
Θj ∪
S
j>m′
Θj , η̂ 6|= gi ;
′
♭(vji , η̂ji )
and t2 =
♭(vji , η̂ji ) .
j=0
j=0
Recall that η̂i (in the induction hypothesis) is the clock valuation to first
hit a region with ℓi and η̂i . Given the fact that from v0i the process can
only execute a delay transition before time t1 , it holds that
D(Ck )
Prob vi
D(Ck )
(η̂i ) = e−t1 Λ(vi ) · Prob vi
m
0
i
(η̂m
) ,
and
D(Ck )
Prob vi
m
D(Ck ) i
(η̂m )
m ,δ
i
(η̂m
) = Prob vi
D(Ck )
i+1 (η̂i+1 )
m ,vm
+ Prob vi
.
Therefore, we obtain by substitution of variables:
Prob D
v i (η̂i )
0
−t1 Λ(vi )
= e
D(Ck ) i
(η̂m )
m ,δ
·Prob vi
D(Ck )
i+1 (η̂i+1 )
m ,vm
+ e−t1 Λ(vi ) ·Prob vi
D(C )
i
= e−t1 Λ(vi ) ·Prob vi ,δk (η̂m
)+
m
i
i
Z ♭(vm
,η̂m )
D(C
)
i
+ τ )[Xi := 0] dτ
pi Λ(vi )e−Λ(vi )τ ·Prob vi+1k−1 (η̂m
e−t1 Λ(vi ) ·
m
0
=
D(C ) i
e−t1 Λ(vi ) ·Prob vi ,δk (η̂m
)
m
Z
+
i
i
t1 +♭(vm
,η̂m
)
D(C
)
i
pi Λ(vi )e−Λ(vi )τ ·Prob vi+1k−1 (η̂m
+ τ − t1 )[Xi := 0] dτ .
m
t1
D(C )
i
By evaluating each term Prob vi ,δk (η̂m
) we obtain the following sum of
m
integrals:
D(Ck )
Prob vi
0
=
(η̂i )
P
P
t +
′
mX
−m Z t1 +
j=0
1
D(C
j
h=0
j−1
h=0
i
i
♭(vm+h
,η̂m+h
)
i
i
♭(vm+h
,η̂m+h
)
)
i
Prob vi+1k−1 (η̂m+j
+τ −t1 −
m+j
pi Λ(vi )e−Λ(vi )τ ·
j−1
X
h=0
i
i
♭(vm+h
, η̂m+h
))[Xi :=0] dτ .
D(Ck−1 )
Now we define the function F
(t) : [t1 , t2 ] → [0, 1], such that when
Pj−1
Pj
i
i
i
i
t ∈ [t1 + h=0 ♭(vm+h , η̂m+h ), t1 + h=0 ♭(vm+h
, η̂m+h
)] for j ≤ m′ − m
then
D(C
)
i
F D(Ck−1 ) (t) = Prob vi+1k−1 (η̂m+j
+t−t1 −
m+j
j−1
X
h=0
i
i
♭(vm+h
, η̂m+h
))[Xi := 0] .
154
Chapter 7 Model Checking of CTMCs Against DTA Specifications
D(Ck )
Using F D(Ck−1 ) (t) we can rewrite Prob vi
0
follows:
D(Ck )
Prob vi
0
=
j=0
=
(η̂i )
P
P
t +
′
mX
−m Z t1 +
Z
t2
(η̂i ) to an equivalent form as
1
j
h=0
j−1
h=0
i
i
♭(vm+h
,η̂m+h
)
i
i
♭(vm+h
,η̂m+h
)
pi Λ(vi )e−Λ(vi )τ F D(Ck−1 ) (τ )dτ
pi Λ(vi )e−Λ(vi )τ F D(Ck−1 ) (τ )dτ .
t1
Pj−1
i
i
By the induction hypothesis, for every t ∈ [t1 + h=0 ♭(vm+h
, η̂m+h
), t1 +
Pj
i
i
′
h=0 ♭(vm+h , η̂m+h )] with j ≤ m − m, we have that:
D(C
)
k−1
i
(η̂m+j
+t−t1 −
PrC⊗A
ηi+1 (Ck−1 ) = Prob v i+1
m+j
= F
D(Ck−1 )
j−1
X
h=0
(t) ,
i
i
, η̂m+h
))[Xi :=0]
♭(vm+h
Pj−1
i
i
i
where ηi+1 = (ηi + t)[Xi := 0] and η̂m+j
= η̂i + t1 + h=0 ♭(vm+h
, η̂m+h
).
D(Ck )
C⊗A
(η̂i ). The proof is now complete.
This shows that Prηi (Ck ) = Prob vi
Remark 7.3.13 The region construction in Section 7.3.3 only helps us to obtain a PDP; it does not play an essential role as it does for, e.g. decidability of
emptiness checking of TA [AD94]. This is mainly due to the continuously distributed random delays. (See [KNSS00] for a similar observation.) As a matter
of fact, clock valuations η and η ′ in region Θ may induce different reachability
probabilities. The reason is that η and η ′ may have different periods of time
to hit the boundary, thus the probability for η and η ′ to either delay or take
a Markovian transition may differ. This is in contrast with the traditional TA
theory as well as probabilistic timed automata ([KNSS02], see also Chapter 8),
where η and η ′ are not distinguished.
Example 7.3.14 For the region graph in Fig. 7.4(b), the system of integral
equations for v1 in location ℓ0 is as follows: for 1 ≤ x1 = x2 < 2,
D
D
Prob D
v1 (x1 , x2 ) = Prob v1 ,δ (x1 , x2 ) + Prob v1 ,v3 (x1 , x2 ),
where
−(2−x1 )r0
Prob D
·Prob D
v1 ,δ (x1 , x2 ) = e
v2 (2, 2)
and
Prob D
v1 ,v3 (x1 , x2 )
=
Z
0
2−x1
r0 ·e−r0 τ ·Prob D
v3 (0, x2 + τ ) dτ
with Prob D
v3 (0, x2 + τ ) = 1.
The integral equations for v2 can be derived in the similar way.
7.3 Model Checking DTA Specifications
7.3.5
155
Approximating Reachability Probabilities
Finally, we discuss how to obtain a solution of Eq. (7.8) (page 151). The integral
equations (7.8) are Volterra equations of the second type [AW95]. For a general
reference on solving Volterra equations, we refer the readers to, e.g. [Cor91]. Unfortunately, its exact complexity is not clear for us. (We conjecture that it lies in
Pspace, even in the counting hierarchy CH; see [ABKM09].) As an alternative
option to solve Eq. (7.8), we proceed to give a formulation of PrC Paths C (A)
using a system of partial differential equations (PDEs), which is generally considered easier to tackle than integral equations. Let the augmented DTA A[tf ]
be obtained from A by adding a new clock variable y which is never reset and
a clock constraint y ≤ tf on all edges entering the accepting locations in LocF ,
where tf is a finite (and usually very large) integer. The purpose of this augmentation is to ensure that the values of all clocks reaching LocF are bounded. It is
clear that Paths C (A[tf ]) ⊆ Paths C (A). More precisely, Paths C (A[tf ]) coincides
with those paths which can reach the accepting states of A within the time
bound tf . Note that limtf →∞ PrC (Paths C (A[tf ])) = PrC (Paths C (A)). We can
approximate PrC (Paths C (A)) by solving the PDEs with a large tf as follows:
Proposition 7.3.15 Given a CTMC C, an augmented DTA
A[tf ] andthe
C
underlying PDP Z(C ⊗ A[tf ]) = (V, X , Inv , φ, Λ, µ), Pr Paths C (A[tf ]) =
~v0 (0, ~0) (which is the probability to reach the final states in Z starting from initial state (v0 , ~0X ∪{y} )4 ) is the unique solution of the following system of PDEs:
|X |
∂~v (y, η) X ∂~v (y, η)
+
+
∂y
∂η (i)
i=1
X
Λ(v)·
p· (~v′ (y, η[X := 0]) − ~v (y, η)) = 0 ,
p,X
v ֒→ v ′
where v ∈ V \ VF , η ∈ Inv (v), η (i) is the i-th clock variable, and y ∈ [0, tf ).
δ
For every η ∈ ∂Inv (v) and transition v ֒→ v ′ , the boundary conditions take the
form: ~v (y, η) = ~v′ (y, η). For every vertex v ∈ VF , η ∈ Inv (v) and y ∈ [0, tf ),
we have the following PDE:
|X |
∂~v (y, η) X ∂~v (y, η)
+
+1=0 .
∂y
∂η (i)
i=1
The final boundary conditions are that for every vertex v ∈ V and η ∈ Inv (v) ∪
∂Inv (v), ~v (tf , η) = 0.
Proof: For any set of clocks X (n clocks) of the PDP Z = (Z, X , Inv , φ, Λ, µ)
we define a system of ODEs:
dη(y) ~
= 1,
dy
4~
0X ∪{y}
η(y0 ) = η0 ∈ Rn≥0 ,
denotes the valuation η with η(x) = 0 for x ∈ X ∪ {y}.
(7.11)
156
Chapter 7 Model Checking of CTMCs Against DTA Specifications
which describe the evolution of clock values η(y) at time y given the initial
value η0 of all clocks at time y0 . Note that contrary to our DTA notation,
Eq. (7.11) describes a system of ODEs where η(y) is a vector of clock valuations
(i)
at time y and dη dy(y) gives the timed evolution of clock η (i) . Given a continuous
differentiable functional f : Z × Rn≥0 → R≥0 , for every z ∈ Z, we define:
df (z, η(y))
dy
=
(7.11)
=
n
X
∂f (z, η(y)) dη (i) (y)
·
dy
∂η (i)
i=1
n
X
∂f (z, η(y))
∂η (i)
i=1
.
For notation simplicity we define the vector field from Eq. (7.12) as the operator
Pn
Ξ which acts on functional f (z, η(y)), i.e., Ξf (z, η(y)) = i=1 ∂f (z,η(y))
. We
∂η (i)
also define the equivalent notation Ξf (ξ) for the state ξ = (z, η(y)) and any
y ∈ R≥0 .
We can define the value of PrC Paths C (A) as the expectation ~(0, ξ0 ) on
PDP Z as follows:
Z tf
1Z (Xτ )dτ | X0 = ξ0
~(0, ξ0 ) = E
0
Z tf
= E(0,ξ0 )
1Z (Xτ )dτ ,
0
where the initial starting time is 0, the starting state is ξ0 = (z0 , ~0), Xτ is the
underlying stochastic process of Z defined on the state space S, and 1Z (Xτ ) = 1
when Xτ ∈ {(z, η(τ )) | z ∈ VF , η(τ ) ∈ Inv(z)}, 1Z (Xτ ) = 0, otherwise. Note
that we can also define
i in Eq. (7.12) for any starting time y < tf
hR the expectation
tf
and state ξ as E(y,ξ) y 1Z (Xτ )dτ .
We then obtain the expectation ~(0, ξ0 ) by following the construction in
[Dav93]. For this we form the new state space Ŝ = ([0, tf ] × S) ∪ {∆} where
∆ is the sink state and the boundary is ∂ Ŝ := ([0, tf ] × ∂S) ∪ ({tf } × S). We
define the following functions: Λ̂(y, ξ) = Λ(ξ), µ̂((y, ξ), {y} × A) = µ(ξ, A) and
µ̂((tf , ξ), {∆}) = 1 for y ∈ [0, tf ), A ⊆ S and ξ ∈ S.
Given the construction we obtain an equivalent form for the expectation
Eq. (7.12), i.e.:
Z ∞
~
~(0, ξ ) = E
1̂ (τ, X )dτ ,
(7.12)
0
(0,ξ0 )
Z
τ
0
where ~1̂Z : Ŝ → {0, 1}, ~1̂Z (τ, Xτ ) = 1 when Xτ ∈ {(z, η(τ )) | z ∈ VF , η(τ ) ∈
Inv(z)} and τ ∈ [0, tf ), ~1̂Z (τ, Xτ ) = 0 otherwise. We also define ~1̂Z (∆) to
be zero. Note that we introduce the sink state ∆ in order to ensure that
limy→∞ E(0,ξ) ~(y, Xy ) = 0, which is a crucial condition in order to obtain a
unique value for the expectation ~(0, ξ0 ).
157
7.4 Single-clock DTA Specifications
For the expectation Eq. (7.12), [Dav93] defines the following integro-differential
equations (for any y ∈ [0, tf )):
U~(y, ξ) = Ξ~(y, ξ) + Λ̂(y, ξ) ·
Z
(~(y, ξ ′ ) − ~(y, ξ)) µ̂((y, ξ), (y, dξ ′ )), ξ ∈ S
S
Z
~(y, ξ) =
~(y, ξ ′ )µ̂((y, ξ), (y, dξ ′ )), ξ ∈ ∂S
(7.13)
(7.14)
S
U~(y, ξ) + ~1̂Z (y, ξ) = 0, ξ ∈ S .
(7.15)
Eq. (7.13) denotes the generator of the stochastic process Xy , and Eq. (7.14)
states the boundary conditions for Eq. (7.15). We can rewrite the integrodifferential equations (7.13), (7.14) and (7.15) into a system of PDEs with
boundary conditions, given the fact that the measure µ̂ is not uniform. For
each vertex v ∈
/ VF , η ∈ Inv(v) and y ∈ [0, tf ) of the region graph G, we write
the PDE as follows (here we define ~v (y, η) := ~(y, ξ) for ξ = (v, η)):
X
∂~v (y, η) X ∂~v (y, η)
p · (~v′ (y, η[X := 0]) − ~v (y, η)) = 0 .
+ Λ(v)
+
(i)
∂y
∂η
p,X
i
v ֒→ v ′
p,X
Note that for any edge v ֒→ v ′ in the region graph G, µ̂((y, (v, η)), (y, (v ′ , η ′ ))) =
δ
p. For every η ∈ ∂Inv(v) and transition v ֒→ v ′ , the boundary conditions take
the form: ~v (y, η) = ~v′ (y, η). For every vertex v ∈ VF , η ∈ Inv(v) and
y ∈ [0, tf ), we get:
∂~v (y, η) X ∂~v (y, η)
+
+1=0 .
∂y
∂η (i)
i
Note that all final states are made absorbing. The final boundary conditions
are that for every vertex v ∈ Z and η ∈ Inv(v) ∪ ∂Inv(v), ~v (tf , η)=0.
7.4
Single-clock DTA Specifications
For single-clock DTA specifications, we can simplify the system of integral equations obtained in the previous section to a system of linear equations, where the
coefficients are a solution of a system of ODEs that can be calculated efficiently.
Given a DMTA M, we denote the set of constants appearing in the clock
constraints of M as {c0 , . . . , cm } with c0 = 0. We assume the following order:
0 = c0 < c1 < · · · < cm . Let ∆ci = ci+1 − ci for 0 ≤ i < m. Note that for
single-clock DMTA, the regions in the region graph G(M) can be represented
by the following intervals: [c0 , c1 ), . . . , [cm , ∞) [LMS04]. We partition the region
graph G(M) = (V, v0 , VF , Λ, ֒→), or G for short, into a set of subgraphs Gi =
(Vi , VFi , Λi , {Mi , Fi , Bi }), where 0 ≤ i ≤ m, and Λi (v) = Λ(v) if v ∈ Vi , 0
otherwise. These subgraphs are obtained by partitioning V , VF and ֒→ as
follows:
158
Chapter 7 Model Checking of CTMCs Against DTA Specifications
• V =
S
0≤i≤m {Vi },
where Vi = {(ℓ, Θ) ∈ V | Θ ⊆ [ci , ci+1 )};
S
• VF = 0≤i≤m {VFi }, where v ∈ VFi iff v ∈ Vi ∩ VF ; and
S
• ֒→= 0≤i≤m {Mi ∪ Fi ∪ Bi }, where Mi is the set of Markovian transitions
(without reset) between vertices inside Gi ; Fi is the set of delay transitions
from the vertices in Gi to that in Gi+1 (Forward) and Bi is the set of
Markovian transitions (with reset) from Gi to G0 (Backward). It is easy
to see that Mi , Fi , and Bi are pairwise disjoint.
Since the initial vertex of G0 is v0 and the initial vertices of Gi for 0 < i ≤ m are
implicitly given by Fi−1 , we omit them in the definition. As an example, the
vertices in Fig. 7.6 are partitioned by the ovals and the Mi edges are unlabeled,
while the Fi and Bi edges are labeled with δ and “reset”, respectively. The VF
vertices (double circles) may appear in any Gi . Actually, if v = (ℓ, [ci , ci+1 )) ∈
VF , then v ′ = (ℓ, [cj , cj+1 )) ∈ VF for i < j ≤ m. This is true because VF =
{(ℓ, true) | ℓ ∈ LocF }. It implies that for each final vertex not in the last region,
there is a delay transition from it to the next region, see the final vertex in Gi+1
in Fig. 7.6. The exit rate functions and the probabilities on Markovian edges
are omitted in the graph.
Given a subgraph Gi (0≤i≤m) of G with ki states, let the probability vector
⊤
~
Ui (x) = [u1 (x), . . . , uki (x)] ∈ Rki ×1 , where uj (x) is the probability to go from
i
i
i
vertex vij ∈ Vi to vertices in VF (in G) at time x. Starting from Eq. (7.6)~ i (x), which we later on
Eq. (7.8), we provide a set of integral equations for U
reduce to a system of linear equations. Distinguish two cases:
~ i (x) is given by:
Case 1: 0 ≤ i < m. U
Z ∆ci −x
~ i (x) =
~ i (x + τ )dτ
U
Mi (τ )U
(7.16)
0
+
Z
∆ci −x
~ 0 (0)
Bi (τ )dτ · U
(7.17)
0
~ i+1 (0) ,
+ Di (∆ci − x) · Fi U
(7.18)
where x ∈ [0, ∆ci ], and
• Di (x) ∈ Rki ×ki is the delay probability matrix, where for any 0 ≤ j ≤
j
ki , Di (x)[j, j] = e−E(vi )x (the off-diagonal elements are zero);
• Mi (x) = Di (x)·Pi ·Ei ∈ Rki ×ki is the probability density matrix for
the Markovian transitions inside Gi , where Pi and Ei are the transition probability matrix and exit rate matrix for vertices inside Gi ,
respectively;
• Bi (x) ∈ Rki ×k0 is the probability density matrix for the reset edges
Bi , where Bi (x)[j, j ′ ] indicates the probability density function to
take the Markovian jump with reset from the j-th vertex in Gi to the
j ′ -th vertex in G0 ; and
159
7.4 Single-clock DTA Specifications
reset
δ ... ...
δ
δ
δ
...
δ
δ
δ
δ
G0
...
δ
Gi
Gi+1
Gm
Figure 7.6: Partitioning the region graph
• Fi ∈ Rki ×ki+1 is the incidence matrix for delay edges Fi . More
specifically, Fi [j, j ′ ] = 1 indicates that there is a delay transition
from the j-th vertex in Gi to the j ′ -th vertex in Gi+1 ; 0 otherwise.
Let us explain these equations. Eq. (7.18) is obtained from Eq. (7.6), where
Di (∆ci − x) indicates the probability to delay until the “end” of region
~ i+1 (0) denotes the probability to continue in Gi+1 (at relative
i, and Fi U
time 0). In a similar way, Eq. (7.16) and Eq. (7.17) are obtained from
Eq. (7.7); the former reflects the case where clock x is not reset, while the
latter considers the reset of x (and returning to G0 ).
~ m (x) is simplified as follows:
Case 2: i = m. U
Z ∞
Z
~ m (x) =
~ m (x + τ )dτ + ~1F +
U
M̂m (τ )U
0
∞
~ 0 (0) , (7.19)
Bm (τ )dτ · U
0
where M̂m (τ )[v, ·] = Mm (τ )[v, ·] for v ∈
/ VF , 0 otherwise. ~1F is a vector
such that ~1F [v] = 1 if v ∈ VF , 0 otherwise. We note that ~1F stems
from the second clause of Eq. (7.8), and M̂m is obtained by setting the
corresponding elements of Mm to 0. Also note that as the last subgraph
Gm involves infinite regions, it has no delay transitions.
Before solving the system of integral equations (7.16)-(7.19), we first make
the following observations:
(i) Due to the fact that inside Gi there are only Markovian jumps with neither
resets nor delay transitions, Gi with (Vi , Λi , Mi ) forms a CTMC Ci , say.
For each Gi we define an augmented CTMC Cia with state space Vi ∪ V0 ,
such that all V0 -vertices are made absorbing in Cia . The edges connecting
Vi to V0 are kept and all the edges inside C0 are removed. The augmented
CTMC is used to calculate the probability to start from a vertex in Gi
and take a reset edge in a certain time.
160
Chapter 7 Model Checking of CTMCs Against DTA Specifications
(ii) Given any CTMC C with k states and rate matrix P · E, the matrix Π(x)
is given by:
Z x
M(τ )Π(x − τ )dτ + D(x) .
(7.20)
Π(x) =
0
′
Intuitively, Π(t)[j, j ] indicates the probability to start from vertex j and
reach j ′ at time t.
The following proposition states the close relationship between Π(x) and the
transient probability vector:
Proposition 7.4.1 Given a CTMC C with initial distribution α, rate matrix
P·E and Π(t), ~π (t) satisfies the following two equations:
~π (t) = α · Π(t) ,
d~π (t)
= ~π (t) · Q ,
dt
(7.21)
(7.22)
where Q = P·E − E is the infinitesimal generator.
Proof: The transition probability matrix Π(t) for a CTMC C with state space
S is denoted by the following system of integral equations:
Z t
M(τ )Π(t − τ )dτ + D(t) ,
(7.23)
Π(t) =
0
where M(τ ) = D(τ )·P·E. Now we define, for the CTMC C, a stochastic process
X(t). The probability Pr(X(t + ∆t) = sj ) to be in state sj at time t + ∆t can
be defined as:
X
Pr(X(t) = si ) · Pr(X(t + ∆t) = sj |X(t) = si ) .
Pr(X(t + ∆t) = sj ) =
si ∈S
We can define Pr(X(t + ∆t) = sj ) in the vector form as follows:
~π (t + ∆t) = ~π (t)Φ(t, t + ∆t) ,
where
~π (t) = [Pr(X(t) = s1 ), · · · , Pr(X(t) = sn )] ,
and
Φ(t, t + ∆t)[i, j] = Pr(X(t + ∆t) = sj |X(t) = si ) .
As the stochastic process X(t) is time-homogeneous, we have that
Pr(X(t + ∆t) = sj |X(t) = si ) = Pr(X(∆t) = sj |X(0) = si ) ,
which means that Φ(t, t + ∆t) = Φ(0, ∆t). As Pr(X(∆t) = sj |X(0) = si )
denotes the transition probability to go from state si to state sj at time ∆t, we
have that Φ(0, ∆t) = Π(∆t), which results in the equation:
~π (t + ∆t) = ~π (t)Π(∆t) .
(7.24)
161
7.4 Single-clock DTA Specifications
Now we transform Eq. (7.24) as follows:
~π (t + ∆t) = ~π (t)Π(∆t)
~π (t + ∆t) − ~π (t) = ~π (t)Π(∆t) − ~π (t) = ~π (t)(Π(∆t) − I)
d~π (t)
~π (t + ∆t) − ~π (t)
Π(∆t) − I
= lim
= ~π (t) lim
.
∆t→0
∆t→0
dt
∆t
∆t
=⇒
=⇒
Π(∆t) − I
. For this we rewrite the rightNow the task is to compute lim∆t→0
∆t
hand limit as:
Z ∆t
1
1
M(τ )Π(∆t − τ )dτ + lim
(D(∆t) − I) .
lim
∆t→0 ∆t
∆t→0 ∆t 0
R ∆t
1
The lim∆t→0 ∆t
M(τ )Π(∆t − τ )dτ is of the type 00 . Note that
0
!
Z ∆t
Z ∆t
d
∂
M(τ )Π(∆t − τ )dτ = M(∆t)Π(0) +
M(τ )
Π(∆t − τ )dτ
d∆t
∂∆t
0
0
and D(0) = Π(0) = I, by Eq. (7.23). By l’Hôspital rule, we obtain:
Z ∆t
1
M(τ )Π(∆t − τ )dτ
lim
∆t→0 ∆t 0
!
Z ∆t
∂
Π(∆t − τ )dτ
= lim M(∆t)Π(0) +
M(τ )
∆t→0
∂∆t
0
= M(0)Π(0)
= P·E .
The lim∆t→0
1
∆t
(D(∆t) − I) is of the type 00 . Note that
d
(D(∆t) − I) = −ED(∆t) .
d∆t
Therefore by l’Hôspital rule, we obtain lim∆t→0
1
∆t
(D(∆t) − I) = −E and
Π(∆t) − I
= P·E − E = Q ,
∆t→0
∆t
lim
where Q is the infinitesimal generator of the CTMC C. As a result we obtain:
Π(∆t) − I
d~π (t)
= ~π (t) lim
= ~π (t)Q .
∆t→0
dt
∆t
Combining with Eq. (7.24) we obtain:
~π (t) = α · Π(t)
and
d~π (t)
= ~π (t) · Q .
dt
162
Chapter 7 Model Checking of CTMCs Against DTA Specifications
Remark 7.4.2 Prop. 7.4.1 has been observed in [BHHK03] and plays an essential role in developing the efficient algorithm. However, to the best of our
knowledge, a rigorous proof is missing in the literature.
~π (t) is the transient probability vector with ~π (t)[s] indicating the probability to be in state s at time t given the initial probability distribution α.
Eq. (7.22) is the celebrated forward Chapman-Kolmogorov equation. According
to this proposition, solving the integral equation Π(t) boils down to selecting
the appropriate initial distribution vector α and solving the system of ODEs in
Eq. (7.22), which can be done very efficiently using the uniformization technique.
Prior to exposing in the next theorem how to solve the system of integral
equations by solving a system of linear equations, we define Π̄ai ∈ Rki ×k0 for an
augmented CTMC Cia to be part of Πai , where Π̄ai only keeps the probabilities
starting from Vi and ending in V0 . Actually,
Πi (x) Π̄ai (x)
,
Πai (x) =
0
I
where 0 ∈ Rk0 ×ki is the matrix with all elements zero, and I ∈ Rk0 ×k0 is the
identity matrix.
Theorem 7.4.3 For subgraph Gi of G with ki states, it holds for 0 ≤ i < m
that:
~ i (0) = Πi (∆ci ) · Fi U
~ i+1 (0) + Π̄ai (∆ci ) · U
~ 0 (0) ,
U
(7.25)
where Πi (∆ci ) and Π̄ai (∆ci ) are for CTMC Ci and the augmented CTMC Cia ,
respectively. For the case i = m, it holds that:
~ m (0) = P̂i · U
~ m (0) + ~1F + B̂m · U
~ 0 (0) ,
U
where P̂i (v, v ′ ) = Pi (v, v ′ ) if v ∈
/ VF ; 0 otherwise, and B̂m =
(7.26)
R∞
0
Bm (τ )dτ .
Proof: We first deal with the case i < m. If in Gi there exists some backward
edge, namely, for some j, j ′ , Bi (x)[j, j ′ ] 6= 0, then we shall consider the augmented CTMC Cia with kia = ki + k0 states. In view of this, the augmented
~ a (x) is defined as:
integral equation U
i
~ ia (x) =
U
Z
0
∆ci −x
~
~ ia (x + τ )dτ + Dai (∆ci − x) · Fai Û
Mai (τ )U
i (0) ,
!
~ i (x)
a
U
~ ′ (x) ∈ Rk0 ×1 is the vector representing
where
=
∈ Rki ×1 , U
i
~ ′ (x)
U
i
the reachability probability for the augmented states in Gi , Fai = F′i B′i ∈
a
a
Fi
Rki ×(ki+1 +k0 ) such that F′i =
∈ Rki ×ki+1 is the incidence matrix for
0
!
~ i+1 (0)
a
U
0
~
∈ Rki ×k0 , Ûi (0) =
∈ R(ki+1 +k0 )×1 .
delay edges, and B′i =
~ 0 (0)
I
U
~ a (x)
U
i
163
7.4 Single-clock DTA Specifications
First, we prove the following equation:
~
~ a (x) = Πa (∆ci − x) · Fa Û
U
i
i
i i (0) ,
where
Πai (x) =
Z
x
Mai (τ )Πai (x − τ )dτ + Dai (x) .
0
(7.27)
Set ci,x = ∆ci − x. We consider the iterations of the solution of the following
system of integral equations:
~ a,(0) (x)
U
i
= ~0
Z
~ a,(j+1) (x) =
U
i
ci,x
0
a,(j)
~
Mai (τ )U
i
~
(x+τ )dτ + Dai (ci,x ) · Fai Ûi (0) ,
and
a,(0)
Πi
(ci,x )
a,(j+1)
Πi
=0
Z
=
(ci,x )
ci,x
0
a,(j)
Mai (τ )Πi
(ci,x −τ )dτ + Dai (ci,x ) .
By induction on j, we prove the following equation:
~
~ a,(j) (x) = Πa,(j) (ci,x ) · Fai Û
U
i (0) .
i
i
(7.28)
~ a,(0) (x) = ~0 and Πa,(0) (ci,x ) = 0. The equation clearly holds.
• Base case: U
i
i
• Inductive case: By the induction hypothesis,
~
~ a,(j) (x) = Πa,(j) (ci,x ) · Fai Û
U
i (0) .
i
i
It follows that
~ a,(j+1) (x)
U
i
Z ci,x
~
~ a,(j) (x + τ )dτ + Da (ci,x ) · Fa Û
=
Mai (τ )U
i
i i (0)
i
0
Z ci,x
a,(j)
~
~
=
Mai (τ )Πi (ci,x −τ ) · Fai Ûi (0)dτ + Dai (ci,x ) · Fai Ûi (0)
0
!
Z ci,x
a,(j)
~
a
a
=
Mi (τ )Πi (ci,x − τ )dτ + Di (ci,x ) · Fai Ûi (0)
0
=
a,(j+1)
Πi
(ci,x )
~
· Fi Ûi (0) .
This completes the induction.
Clearly,
a,(j)
Πai (ci,x ) = lim Πi
j→∞
(ci,x )
~ a,(j) (x) .
~ ia (x) = lim U
and U
i
j→∞
164
Chapter 7 Model Checking of CTMCs Against DTA Specifications
By taking the limit as j → ∞ on both sides of Eq. (7.28), we obtain
~
~ ia (x) = Πai (ci,x ) · Fai Û
U
i (0) .
Let x = 0. We obtain (recall that ci,x = ∆ci − x)
~
~ a (0) = Πa (∆ci ) · Fa Û
U
i
i
i i (0) .
We can also write the above relation as:
!
!
~ i+1 (0)
~ i (0)
U
U
a
′
′
= Πi (∆ci ) Fi Bi
~ ′ (0)
~ 0 (0)
U
U
i
!
~ i+1 (0)
Πi (∆ci ) Π̄ai (∆ci )
Fi 0
U
=
~ 0 (0)
0
I
0 I
U
!
~ i+1 (0)
U
Πi (∆ci )Fi Π̄ai (∆ci )
=
~ 0 (0)
0
I
U
!
~ i+1 (0) + Π̄a (∆ci )U
~ 0 (0)
Πi (∆ci )Fi U
i
.
=
~ 0 (0)
U
~ i (0) in the following matrix form
As a result we can represent U
~ i (0) = Πi (∆ci )Fi U
~ i+1 (0) + Π̄ai (∆ci )U
~ 0 (0)
U
by noting that Πi is formed by the first ki rows and columns of matrix Πai , and
Π̄ai is formed by the first ki rows and the last kia − ki columns of Πai .
For the case i = m, i.e., the last graph Gm , the region Rsize is infinite, therefore
~ m (x + τ ) in ∞ M̂m (τ )U
~ m (x + τ )dτ
delay transitions do not exist. The vector U
0
does not depend on entering
time
x,
therefore
we
can
take
it
out
of
the integral.
R
R∞
~ m (0). Moreover, ∞ M̂m (τ )dτ boils down
As a result we obtain 0 M̂m (τ )dτ · U
0
R∞
to P̂m and 0 Bm (τ )dτ to B̂m . Also we add the vector ~1F to ensure that the
probability to start from a state in VF is one (see Eq. (7.8)).
Since the coefficients of the linear equations are all known, solving the system
~ 0 (0), which contains the probability Prob v (0) of
of linear equations yields U
0
reaching VF from initial vertex v0 .
Now we explain how Eq. (7.25) is derived from Eq. (7.16)-(7.18). The term
~ i+1 (0) is for the delay transitions, where Fi specifies how the
Πi (∆ci ) · Fi U
~ 0 (0)
delay transitions are connected between Gi and Gi+1 . The term Π̄ai (∆ci ) · U
a
is for Markovian transitions with reset. Π̄i (∆ci ) in the augmented CTMC Cia
specifies the probabilities to first take transitions inside Gi and then a one-step
Markovian transition back to G0 . Eq. (7.26) is derived from Eq. (7.19). Since it is
the last region and timeRgoes to infinity, the time to enter the region is irrelevant
∞
(thus set to 0). Thus 0 M̂i (τ )dτ boils down to P̂i . In fact, the Markovian
jump probability inside Gm can be taken from the embedded DTMC of Cm ,
which is P̂i .
165
7.4 Single-clock DTA Specifications
Example 7.4.4 For the single-clock DMTA in Fig. 7.3(a), we show how to
compute the reachability probability Prob((v0 , 0), (v5 , ·)) on the region graph
G (cf. Fig. 7.3(b)), which has been partitioned into subgraphs G0 , G1 and G2 .
We mainly show how to obtain the system of linear equations according to
Thm. 7.4.3. We first collect the matrices for subgraphs G0 , G1 and G2 , respectively.
The matrices for G0 are given as
0
M0 (x) = 1·r0 ·e−r0 x
0
0
0
0.5·r1 ·e−r1 x
0
The matrices for G1 are given as
0
B
M1 (x) = B
r0 ·e−r0 x
0
0
0
0
0
0
0
0
B
B
B
B
a
M1 (x) = B
B
B
B
0
0
0
0
r0 ·e−r0 x
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
r2 ·e−r2 x
0
0
0
0
0
0
0
0
0
0.2·r1 ·e−r1 x
0
1
C
C
A
0
B
F1 = B
0
0
r2 ·e−r2 x
0
0
0
0
r2 ·e−r2 x
0
0
0
0
0
1
0
0
F0 = 0
0
0
1
0
0.5·r1 ·e−r1 x
0
0
r0 ·e−r0 x
0
0
The matrices for G2 are given as
M̂2 (x) =
1
A
1
C
C
A
0
0
p00
p20
p40
p02
p22
p42
p04
p24
p44
0
1
B
A Π1 (1) = B
p11
p31
p51
p71
p13
p33
p53
p73
p15
p35
p55
p75
0
0
1
0
1
0
0
0
0
0
0
0
0
0
1
A
0
1
0
0
0
0.2·r1 ·e−r1 x
0
0
0
0
r2 ·e−r2 x
1
C
C
A
1
C
C
C
C
C
C
C
C
A
1
0
To obtain the system of linear equations, we need:
0
Π0 (1) = 0
0
0
0
B
B1 = B
0
0
0
0
0
r1 ·e−r1 x
0
P̂2 =
1
0
0
1
0
C
B
a
C
A Π̄1 (1) = B
p17
p37
p57
p77
p̄10
p̄30
p̄50
p̄70
p̄12
p̄32
p̄52
p̄72
p̄14
p̄32
p̄54
p̄74
All elements in these Π-matrices can be computed by the transient probability in the corresponding CTMCs C0 , C1 and C1a (cf. Fig. 7.7). The obtained
system of linear equations is thus
2
4
2
66
4
u1
u3
u5
u7
3 0
77 B
5=B
u0
u2
u4
3
5
p11
p31
p51
p71
=
p13
p33
p53
p73
0
p00
p20
p40
p15
p35
p55
p75
p02
p22
p42
p17
p37
p57
p77
p04
p24
p44
1 0
C
B
C
A·B
0
0
1
0
1 0
A·
0
0
0
1
1
0
0
1
C
C
·
A
0
0
0
1 2
A · 664
0
0
0
0
0
1
u6
u8
0
B
+B
p̄10
p̄30
p̄50
p̄70
u1
u3
u5
u7
p̄12
p̄32
p̄52
p̄72
3
77
5
p̄14
p̄32
p̄54
p̄74
1 2
C
C
A·4
u0
u1
u3
3
5
1
C
C
A
166
Chapter 7 Model Checking of CTMCs Against DTA Specifications
v0
1
r0
0.5
v2
r1
0.2
v4
v1
r0
v5
1
v3
r0
v0
1 r1
v4
v1
r0
v5
1
0.5
1
v6
r2
1
1
r1
r1
v7
r2
r2
v2
0.2
v3
v7
r2
(a) C0
r2
r2
r2
v8
r2
1
(b) C1
(c)
C1a
(d) C2
Figure 7.7: CTMCs
u6
u8
=
0
0
1
0
·
u6
u8
+
0
1
This system of linear equations can be solved easily.
Remark 7.4.5 We note that for two-clock DTA which yield two-clock DMTA,
the approach given in this section fails in general. In the single-clock case, the
reset guarantees to jump to G0 (0), and the delay to Gi+1 (0) when it is in Gi .
However, in the two-clock case, after delay or reset generally only one clock has
a fixed value, while the value of the other one is not determined.
The time-complexity of computing the reachability probability in the single2
2
3
3
clock DTA case is O(m · |S| · |Loc| · λ · ∆c + m3 · |S| · |Loc| ), where m is
the number of constants appearing in DTA, |S| is the number of states in the
CTMC, |Loc| is the number of locations in the DTA, λ is the maximal exit rate
2
2
in the CTMC and ∆c = max0≤i<m {ci+1 −ci }. The first term m·|S| ·|Loc| ·λ·
∆c is due to the uniformization technique for computing transient distribution;
3
3
and the second term m3 · |S| · |Loc| is the time complexity for solving a system
of linear equations with O(m · |S| · |Loc|) variables. Note that from this analysis
we can conclude that a ptime procedure is obtained. However, this does not
imply that the problem of model checking single-clock DTA is in ptime, as our
procedure involves computation over real numbers in general and thus can only
be done in an approximated way .
7.5
Conclusion
We considered the quantitative verification of a CTMC C against a DTA A.
As a key result, we obtained that computing the probability of C |= A can
be reduced to computing reachability probabilities in PDPs. For single-clock
DTA, this reduces to solving a system of linear equations yielding an equivalent,
though simpler, characterization than in [DHS09].
7.5 Conclusion
167
Many problems remain open. How to solve the obtained system of integral equations and ODEs and their (theoretical) complexity (in terms of both
Turing computation model and Blum-Shub-Smale computation model [BSS89,
BCSS98]) are clearly interesting topics of further investigation; providing took
support is also in the plan. Moreover, note that for DTA, the fairness issue
(e.g. Büchi accepting condition) has not been addressed in this chapter (we essentially considered the reachability), which deserves future exploration. Other
great challenges include non-deterministic TA and M(I)TL model checking.
168
Chapter 7 Model Checking of CTMCs Against DTA Specifications
Chapter 8
Probabilistic Time-abstracting
Bisimulations for PTA
8.1
Introduction
In this chapter, we study probabilistic timed automata (PTA) [KNSS02], which
are a probabilistic extension of timed automata (TA). As in TA, in the research
on PTA, the notion of region graphs plays an essential role, see e.g. [KNSS02].
However, it is well-recognized that albeit being a very useful tool for theoretical
purposes, the region graph is too large to be of any practical interest: its size
is exponential in the number of clocks of the system as well as in the size of
the constants in the time constraints. To overcome this explosion, inspired
by [TY01], we propose probabilistic time-abstracting bisimulations (PTaBs) for
PTA, where the passage of arbitrary time is abstracted by a τ -transition. To
deal with the probabilistic aspect, we follow a standard approach due to [LS91].
This equivalence is usually much coarser than the region equivalence, therefore,
in practice, it induces a (potentially) much smaller state space partition. In
particular, the region equivalence constitutes a (very fine) probabilistic timeabstracting bisimulation. The bisimulation quotient is a finite-state Markov
decision process (MDP), where the states are equivalence classes over symbolic
states (sets of states) with either τ or discrete probabilistic transitions.
PTaB is particularly useful when the desired properties do not involve time
constraints. Such properties are very common in practice, i.e., safety, reachability, etc. Those properties can be captured well by the probabilistic computation tree logic (PCTL, [HJ94]), which is proven to be preserved under
PTaBs. In this case, the existing algorithms and tools for MDPs with respect
to PCTL [BdA95, KNP04] can thus be applied for PTA.
To obtain a minimal PTaB quotient, our algorithm works in the partitionrefinement fashion [PT87, KS90]. We start from an initial partition that respects
state labeling, and proceed by refining each block till it contains only bisimilar
states. Due to the fact that PTA involve an interweaving of time, nondeterminism and probability distributions, the minimization has thus to deal with the
following difficulties: When taking a τ -transition, it must guarantee that time
169
170
Chapter 8 Probabilistic Time-abstracting Bisimulations for PTA
traverses continuously, e.g., time cannot jump from 0 to 2 without traversing
1. Thus, we introduce the timed predecessor set as a splitter, as in [TY01].
Moreover, since a discrete transition results in one or more probability distributions, the splitter of only one block in [TY01] is, however, not applicable. Our
algorithm instead adopts the idea of the mutual-refine technique in [BEM00],
which maintains a state partition and a distribution partition. In each refinement iteration, the distribution partition is used to refine a state partition and
vice versa. The algorithm in [BEM00], unfortunately, cannot be applied in our
setting in a straightforward way, as in that case untimed probabilistic models
(e.g. MDPs, probabilistic automata) are addressed and thus the set of target
distributions is always fixed, while in our case, since the number of symbolic
states in a block might grow with the iteration when time comes into play,
both the symbolic state space and the distribution set vary in each round, let
alone their partitions. To solve this problem, an Expand operator is introduced,
recalculating the symbolic states in a block as well as the distribution set before the mutual-refine technique is applied. This algorithm is symbolic, namely,
equivalence classes are symbolic states and set-theoretic operators are used to
compute the set of predecessor states of a symbolic state.
Related work. Besides [TY01], for (non-probabilistic) timed systems, timeabstracting bisimulation is also studied in [LY97] in the setting of real-time
process calculi. [BEM00, DHS03] present algorithms for probabilistic bisimulation and simulation of discrete probabilistic systems. [LMST03] investigates
weak probabilistic bisimulation for PTA with a decision procedure. That algorithm is region based, which is tried to be avoided in the current chapter.
[KNSS02] presents a comprehensive exposition for PTA and model checking
algorithms for PTA. [KNSW07] gives a symbolic algorithm for model checking,
however, the problem of computing time-abstracting bisimulations (or generally
computing bisimulations) is not considered there.
Structure of the chapter. Section 8.2 presents basic definitions regarding
PTA. Section 8.3 defines the PTaB. Section 8.4 presents the bisimulation minimization algorithm and constitutes the core of this chapter. Section 8.5 shows
that the bisimulation preserves PCTL formulae. This chapter is concluded in
Section 8.6.
8.2
Preliminaries
The basic notions regarding probability distributions, clock variables and clock
constraints can be found in Section 2.2.
8.2.1
Probabilistic Timed Automata
Definition 8.2.1 (Probabilistic timed automata [KNSS02]) A probabilistic timed automaton (PTA) is a tuple G = (Loc, X , ℓ0 , L, Inv, ) where:
8.2 Preliminaries
171
• Loc is a finite set of locations with ℓ0 ∈ Loc the initial location;
• X is a set of clocks;
• L : Loc → 2AP is a labeling function for the locations, where AP denotes
a finite set of atomic propositions;
•
⊆ Loc × B(X ) × Distr (2X × Loc) is an edge relation; and
• Inv : Loc→B(X ) is an invariant-assignment function.
Recall that B(X ) is the set of clock constrains, as defined in Section 2.2.3.
For simplicity, we require that the invariants and enabling conditions in the
transitions are subject to the following conventions. We note that (1) and (2)
are used to ensure the soundness of the semantic model (cf. Def. 8.2.6), while
(3) is a common assumption in the literature.
1. If in some state of G, allowing any amount of time to elapse would violate
the invariant of the current location, then the guard of at least one discrete
transition is satisfied. Namely, in each state (ℓ, ν), if ν + t 6|= Inv (ℓ) for
any t ∈ R>0 , then there must exist some edge from ℓ with the guard g
such that ν |= g;
2. It is never possible to perform a discrete transition to a node for which
the invariant is not satisfied by the current values of the clocks; and
3. All invariants are downward-closed in the sense that for any d ∈ R≥0 , ν +
d |= Inv (ℓ) implies that ν |= Inv (ℓ).
The labeling function L associates to each location ℓ a set of atomic propositions that are valid in ℓ. The system starts in location ℓ0 with all its clocks
initialized to 0. The values of all the clocks increase uniformly with time. We
g
refer to ℓ
η as a transition, where the guard g is a clock constraint on the
clocks of G and η is a distribution over the (X, ℓ) pairs with X ⊆ X a set of
clocks to be reset and ℓ the successor location. The intuition is that the PTA
G can move from location ℓ to location ℓ′ via two phases. In the first phase, a
distribution η is nondeterministically chosen when g holds. In the second phase,
a successor location ℓ′ is probabilistically chosen according to η(X, ℓ′ ), where
the clocks in X should be reset when entering ℓ′ . The function Inv assigns to
each ℓ a location invariant that constrains the amount of time that may be spent
in ℓ. In other words, location ℓ must be left before the invariant Inv(ℓ) becomes
invalid. If there is no outgoing discrete transition enabled, the corresponding
state has a (discrete) deadlock.
Example 8.2.2 Fig. 8.1 is an example PTA, where from ℓ1 there are two distributions (or transitions), and thus the PTA is nondeterministic. The transitions
to ℓ3 and ℓ4 share the same guard x > 1, since they belong to the same distribution. The transition to ℓ3 resets the clock {x}. The labeling on ℓ1 and ℓ3 is
{a}, ∅ otherwise.
172
Chapter 8 Probabilistic Time-abstracting Bisimulations for PTA
ℓ1 , x ≤ 2
x ≤ 1, 1
0.3, {x}
ℓ2
0.7, ∅
ℓ4
x>2
{a}
∅
0.7, ∅
x>1
ℓ3 , x ≤ 3
x ≤ 2, 1, ∅
{a}
0.3, {x}
∅
Figure 8.1: An example PTA
0 ≤ x < 1, 1
x > 2, 1
ℓ1
x ≥ 1, 3/4
ℓ2
x ≤ 2, 1/4
ℓ1
3/4
1≤x≤2
ℓ2
1/4
Figure 8.2: The encoding to a one-clock-constraint model
Remark 8.2.3 According to the syntax , one transition of a PTA is associated
with a single clock constraint. This requirement is intuitive and reasonable
since the more-than-one clock constraint case, see e.g., [Bea03, LMST03], can
be encoded by adding more distributions. To give an example, the left-hand
automaton in Fig. 8.2 is a PTA with two clock constraints in one distribution.
This can be encoded by the PTA on the right in Fig. 8.2. It goes as follows:
When 0 ≤ x < 1, the transition from ℓ1 to ℓ2 is not enabled. Thus the only
possible transition is the self-loop on ℓ1 , which is normalized to probability 1.
When 1 ≤ x ≤ 2, both transitions are enabled, and their probabilities remain
the same. The x > 2 case is similar as 0 ≤ x < 1.
8.2.2
Probabilistic Timed Structures
The semantics of a timed automaton is an (infinite) timed transition system.
The semantics of a PTA is provided by a probabilistic timed structure, which is
essentially an infinite MDP.
Definition 8.2.4 A probabilistic timed structure (PTS) M is a labeled Markov
decision process (S, Steps, L, s0 ) where:
• S is a set of states with s0 ∈ S being the initial state;
• Steps : S → 2R≥0 ×Distr (S) is a function that assigns to each state s ∈ S a
set of pairs (t, µ) where t ∈ R≥0 and µ ∈ Distr (S); and
8.2 Preliminaries
173
• L : S → 2AP is a state labeling function.
Steps(s) is the set of transitions that can be nondeterministically chosen in
state s. The transition labels are of the form (t, µ) where t is the duration of
the transition and µ is the probability distribution over the successor states.
t,µ
s −−→ s′ means that after t time units have elapsed, a transition is fired from s
to s′ with probability µ(s′ ).
Paths. Paths in a PTS arise by resolving both the nondeterministic and probabilistic choices. A path of the PTS M = (S, Steps, L, s0 ) is a finite or infinite
sequence:
t ,µ0
t ,µ1
t ,µ2
ω = s0 −−0−−→
s1 −−1−−→
s2 −−2−−→
...
where si ∈ S, (ti , µi ) ∈ Steps(si ) and µi (si+1 ) > 0 for all 0 ≤ i ≤ |ω|, where
|ω| is the number of transitions in ω. For 0 ≤ i ≤ |ω|, ω[i] denotes the (i+1)-th
state of ω (i.e., ω[i] = si ) and step(ω, i) (or step(ω[i])) denotes the (i+1)-th
transition (ti , µi ). A finite path ω ends in a state, denoted by last(ω). We write
Paths ⋆ to denote the set of finite paths and Paths ⋆ (s) the set of finite paths
that start from state s. Paths ω and Paths ω (s) are the counterparts for infinite
paths.
Definition 8.2.5 A scheduler of a PTS M = (S, Steps, L, s0 ) is a function G
mapping every finite path ω of M to a pair (t, µ) such that G(ω) ∈ Steps(last(ω)).
Let W be the set of all schedulers of M.
A scheduler resolves the nondeterminism by choosing a probability distribution based on the process executed so far. Formally, if a PTS is guided by
scheduler G and has the finite path ω as its history, then it will be in state s in
the next step with probability µ(s) after t time units, where G(ω) = (t, µ). We
write Paths ω
G to denote the set of infinite paths induced by a given scheduler G,
ω
i.e., Paths ω
G = {ω ∈ Paths | G(ω↓i ) = step(last(ω↓i )) for i ≥ 0}, where ω↓i reω
ω
turns the prefix of ω up to length i. Paths ω
G (s) is defined as Paths G ∩ Paths (s).
Scheduler G on PTS M induces a discrete-time Markov chain (DTMC)
MG , where the nondeterminism has been resolved. Each state in MG is a
finite path fragment ω in M. Formally, MG = (Paths^⋆_G, PG) is a DTMC where
PG(ω, ω′) = µ(s) if G(ω) = (t, µ) and ω′ = ω −t,µ→ s, and PG(ω, ω′) = 0 otherwise.
The definition of probability space of MG can be found in Section 2.2.2.
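To illustrate how a scheduler resolves the nondeterminism, the sketch below (again illustrative; sample_path and the toy steps table are our own, and the PTS fragment is kept deliberately small) samples a finite path of the induced DTMC: at every step the scheduler picks one enabled (t, µ) pair, after which the successor is drawn according to µ, exactly as in the definition of PG above.

import random
from typing import Callable, Dict, List, Tuple

Distribution = Dict[str, float]
Step = Tuple[float, Distribution]          # (delay t, distribution mu)
Steps = Dict[str, List[Step]]              # Steps(s)
Scheduler = Callable[[List[str]], Step]    # history of states -> chosen step

def draw(mu: Distribution) -> str:
    """Draw a successor state according to the distribution mu."""
    r, acc, last = random.random(), 0.0, None
    for succ, prob in mu.items():
        acc, last = acc + prob, succ
        if r <= acc:
            return succ
    return last                            # guard against round-off

def sample_path(steps: Steps, s0: str, scheduler: Scheduler,
                max_steps: int = 10) -> List[str]:
    """Sample a finite path of the DTMC induced by `scheduler`."""
    path = [s0]
    for _ in range(max_steps):
        if not steps.get(path[-1]):        # discrete deadlock: nothing enabled
            break
        t, mu = scheduler(path)            # the scheduler resolves nondeterminism
        path.append(draw(mu))              # the probabilistic choice
    return path

# A memoryless scheduler that always picks the first enabled step.
steps: Steps = {"s0": [(1.5, {"s1": 0.3, "s0": 0.7})], "s1": []}
print(sample_path(steps, "s0", lambda path: steps[path[-1]][0]))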
8.2.3 Obtaining a PTS from a PTA
Any PTA can be interpreted as a PTS. Due to the continuous nature of clocks,
these underlying PTSs in general have infinitely many states (even uncountably
many), and are infinitely branching. PTA can thus be considered as a finite
description of infinite PTSs.
Given a PTA G = (Loc, X, ℓ0, L, Inv, →), a state of G is a pair (ℓ, ν), where ℓ ∈ Loc is a location and ν is a valuation satisfying the invariant of ℓ.
Definition 8.2.6 (PTS semantics of a PTA) Let G = (Loc, X, ℓ0, L, Inv, →) be a PTA. The PTS of G is MG = (S, Steps, L′, s0) with:
• S = {(ℓ, ν) | ν |= Inv(ℓ), ℓ ∈ Loc} with s0 = (ℓ0 , ~0);
• L′ ((ℓ, ν)) = L(ℓ);
• Steps assigns to each state in S a set of transitions, which are defined in two ways. Namely, for each state (ℓ, ν) ∈ S,
– discrete transition: (ℓ, ν) −0,µ→ (ℓ′, ν′), if the following conditions hold:
1. there exists a transition ℓ −g→ η in G with η(ℓ′, X) > 0;
2. ν |= g;
3. ν′ = ν[X := 0]; and
4. µ(ℓ′, ν′) = Σ_{X⊆X, ν′=ν[X:=0]} η(X, ℓ′).
Usually, we simply write (ℓ, ν) → µ.
– delay transition: (ℓ, ν) −d,1→ (ℓ, ν + d) for every d ∈ R≥0 with ν + d |= Inv(ℓ). Note that 1 indicates that the probability distribution is the point distribution µ^1_{(ℓ,ν+d)}. Usually, we simply write (ℓ, ν) −d→ (ℓ, ν + d).
Formally, for any state (ℓ, ν) in MG , we say that (ℓ, ν) has a (discrete)
deadlock if Steps((ℓ, ν)) does not contain any discrete transition.
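As a small illustration of the discrete-transition clause, the sketch below (our own helper names; η is given as a finite map from pairs of a reset set and a target location to probabilities) computes µ by summing η over all reset sets that lead to the same successor valuation, as required by condition 4.

from collections import defaultdict
from typing import Dict, FrozenSet, Tuple

Valuation = Dict[str, float]
# eta assigns a probability to each pair (reset set X, target location l').
Eta = Dict[Tuple[FrozenSet[str], str], float]

def reset(nu: Valuation, X: FrozenSet[str]) -> Valuation:
    """nu[X := 0]: reset the clocks in X, keep all other clocks."""
    return {x: (0.0 if x in X else v) for x, v in nu.items()}

def discrete_mu(nu: Valuation, eta: Eta) -> Dict[Tuple[str, Tuple], float]:
    """mu(l', nu') = sum of eta(X, l') over all X with nu' = nu[X := 0]."""
    mu = defaultdict(float)
    for (X, target), p in eta.items():
        nu2 = tuple(sorted(reset(nu, X).items()))   # hashable successor valuation
        mu[(target, nu2)] += p
    return dict(mu)

# With x already 0, resetting {x} or resetting nothing yields the same
# successor valuation, so the two probabilities are added up.
eta: Eta = {(frozenset({"x"}), "l1"): 0.4, (frozenset(), "l1"): 0.6}
print(discrete_mu({"x": 0.0}, eta))   # {('l1', (('x', 0.0),)): 1.0}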
Symbolic states. We define symbolic states, which are used for the effective representation and manipulation of the infinite state space of the PTS. Generally, a symbolic state denotes a set of states of MG.
In a nutshell, a zone [HNSY94] Z ⊆ R^X over X is a set of valuations that satisfy a conjunction of clock constraints. Formally, the zone for the constraint g is Z = {ν ∈ R^X | ν |= g}. Geometrically, a zone is a polyhedron (note that in this chapter we do not require a zone to be convex). A symbolic state S is a set of states whose clock valuations form a zone; strictly speaking, S is a pair of a location and a zone, namely, it is of the form (ℓ, Z). The union of all symbolic states is the state space S.
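For concreteness, a (not necessarily convex) zone can be kept in its syntactic form; the sketch below (our own minimal representation, not the difference-bound matrices used in actual tools) stores a zone as a conjunction of atomic clock constraints and tests ν |= g.

import operator
from typing import Dict, List, Tuple

Valuation = Dict[str, float]
Atom = Tuple[str, str, float]            # an atomic constraint x ~ c
OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt,
       ">=": operator.ge, "==": operator.eq}

def satisfies(nu: Valuation, g: List[Atom]) -> bool:
    """nu |= g for a conjunction g of atomic clock constraints."""
    return all(OPS[op](nu[x], c) for (x, op, c) in g)

g = [("x", "<=", 2.0)]                   # the guard x <= 2 as a one-atom zone
print(satisfies({"x": 1.3}, g), satisfies({"x": 2.5}, g))   # True False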
8.3 Probabilistic Time-abstracting Bisimulations
In order to refine the dense state space as much as possible, we adopt the time-abstracting bisimulation [TY01] for state space minimization, which abstracts
from the quantitative aspect of time: we know that some time passes, but not
how much. We first introduce a technical definition:
Definition 8.3.1 µ, µ′ ∈ Distr (S) are equivalent with respect to equivalence
R on S, written µ ≡R µ′ , if ∀U ∈ S/R. µ(U )=µ′ (U ).
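A minimal sketch of this lifting (with our own helper names), where the quotient S/R is given explicitly as a list of blocks and two distributions are compared blockwise:

from typing import Dict, List, Set

Distribution = Dict[str, float]

def mass(mu: Distribution, U: Set[str]) -> float:
    """mu(U): the probability that mu assigns to the block U."""
    return sum(p for s, p in mu.items() if s in U)

def equivalent(mu1: Distribution, mu2: Distribution,
               quotient: List[Set[str]], eps: float = 1e-9) -> bool:
    """mu1 ≡_R mu2: equal probability on every equivalence class of S/R."""
    return all(abs(mass(mu1, U) - mass(mu2, U)) <= eps for U in quotient)

quotient = [{"s0", "s1"}, {"s2"}]
print(equivalent({"s0": 0.3, "s2": 0.7}, {"s1": 0.3, "s2": 0.7}, quotient))  # True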
Definition 8.3.2 (Probabilistic time-abstracting bisimulation) Let G be a PTA and MG = (S, Steps, L′, s0) the PTS of G. A probabilistic time-abstracting bisimulation (PTaB) for G is an equivalence relation R over the state space S of MG such that for all states (s1, s2) ∈ R, the following conditions hold:
• L′(s1) = L′(s2);
• If s1 −t1→ s′1 for some t1 ∈ R>0, then there exist t2 ∈ R>0 and s′2 ∈ S such that s2 −t2→ s′2 and (s′1, s′2) ∈ R; and
• If s1 → µ1 for some µ1 ∈ Distr(S), then there exists some µ2 ∈ Distr(S) such that s2 → µ2 and µ1 ≡R µ2.
s1 and s2 are probabilistic time-abstracting bisimilar, denoted by s1 ∼ s2, if (s1, s2) ∈ R for some PTaB R.
We use τ-transitions to abstract the exact time passage away; formally, s −τ→ s′ iff ∃t ∈ R>0. s −t→ s′.
It is straightforward to verify that ∼ is an equivalence. The rest of Section 8.3 is devoted to showing the properties of PTaB.
8.3.1 Region Equivalences
In the following, we first recall the definition and properties of the region equivalence [AD94, ACD93], which is essential in turning the infinite state space of a PTS into a finite quotient. Later we will show, similarly to [TY01], that the region equivalence for PTS is in fact a PTaB.
Given a PTA G, let c = cmax(G) be the largest integer constant among all the clock constraints and invariants in G. Two clock valuations ν and ν′ are region equivalent (with respect to c), denoted by ν ≅ ν′, iff they satisfy:
• ∀x ∈ X, either ⌊ν(x)⌋ = ⌊ν′(x)⌋ or both ν(x) > c and ν′(x) > c; and
• ∀x, y ∈ X, either ⌊ν(x) − ν(y)⌋ = ⌊ν′(x) − ν′(y)⌋ or both ⌊ν(x) − ν(y)⌋ > c and ⌊ν′(x) − ν′(y)⌋ > c.
Note that ⌊r⌋ is the floor function that returns, for r ∈ R, the maximal integer that is at most r. The equivalence classes induced by ≅ are called regions. The region equivalence can be lifted to states of a PTA such that (ℓ, ν) ≅ (ℓ′, ν′) if ℓ = ℓ′ and ν ≅ ν′. The region equivalence enjoys the following properties:
Lemma 8.3.3 For valuations ν, ν′ ∈ R^X with ν ≅ ν′:
1. for any zone Z, ν ∈ Z iff ν′ ∈ Z;
2. for any set of clocks X ⊆ X, ν[X := 0] ≅ ν′[X := 0]; and
3. ∀d ≥ 0 ∃d′ ≥ 0. ν + d ≅ ν′ + d′.
Theorem 8.3.4 The region equivalence is a PTaB, i.e., ≅ ⊆ ∼.
Proof: Let (ℓ, ν), (ℓ, ν′) be two states in the PTS M = (S, Steps, L′, s0) such that (ℓ, ν) ≅ (ℓ, ν′).
• (Labels) Since L′ ((ℓ, ν)) = L(ℓ), where L is the labeling function in the
corresponding PTA G, we have L′ ((ℓ, ν)) = L′ ((ℓ, ν ′ )).
• (Timed transition) Let (ℓ, ν) −d→ (ℓ, ν + d) be a timed transition. Due to Lem. 8.3.3(3), there exists a d′ ≥ 0 such that ν + d ≅ ν′ + d′. We have that ν′, ν′ + d′ |= Inv(ℓ) since ν, ν + d |= Inv(ℓ). For any d′′ < d′, ν′ + d′′ |= Inv(ℓ), by the downward-closedness of Inv(ℓ). Thus (ℓ, ν′) −d′→ (ℓ, ν′ + d′) is also a timed transition.
• (Probabilistic transition) Let (ℓ, ν) → µ be a probabilistic transition, where µ is chosen by some scheduler G via an edge with guard g. As G can only select enabled transitions, ν |= g. Let Z = {ν | ν |= g}; clearly ν ∈ Z. Since (ℓ, ν) ≅ (ℓ, ν′), due to Lem. 8.3.3(1) we have ν′ ∈ Z, which means that ν′ |= g, thus the same edge is also enabled in (ℓ, ν′). Therefore, we can construct a scheduler G′ which chooses the same edge as G; let µ′ be the distribution it induces at (ℓ, ν′). Since µ ≡≅ µ′ due to Lem. 8.3.3(2), (ℓ, ν′) → µ′ is also a probabilistic transition.
The region equivalence satisfies all the conditions of being a PTaB, therefore ≅ ⊆ ∼.
The above theorem asserts that the region equivalence is a (probably very refined) PTaB. Note that the converse does not hold in general. It can be the case that (ℓ, ν) ∼ (ℓ′, ν′) where ℓ ≠ ℓ′ (see Ex. 8.4.5), however, (ℓ, ν) ≇ (ℓ′, ν′).
The following result shows that deadlocks are preserved by ∼.
Proposition 8.3.5 If (ℓ, ν) ∼ (ℓ′ , ν ′ ), then (ℓ, ν) has a deadlock iff (ℓ′ , ν ′ ) has
a deadlock.
Evidently, the converse does not hold: two states that agree on having a deadlock need not be bisimilar.
Remark 8.3.6 Another desired property of timed systems is timelock-freedom. For TA, timelocks are generally states violating the time-progress requirement. Formally, a state s is a timelock if all infinite runs starting from s are zeno. A TA is timelock-free if none of its reachable states is a timelock. Similar notions can be adapted to PTA in the expected way: a state s of a PTA is a timelock if, with probability 1, all infinite runs starting from s are zeno; the notion of timelock-freedom for PTA is defined accordingly. Unfortunately, time-abstracting bisimulations preserve neither non-zenoness (or time-divergence) nor timelocks. As a matter of fact, this problem already occurs for (non-probabilistic) TA: an explicit counterexample is given in [TY01, page 41, Figure 11], which we reproduce in Fig. 8.3.

Figure 8.3: Counterexample regarding timelock from [TY01]

It is easy to see that (ℓ1, 0) ∼ (ℓ2, 0). However, (ℓ1, 0) is a timelock while (ℓ2, 0) is not. (We note that this is not surprising, since time-abstracting bisimulations are insensitive to exact delays.) We also mention that, to handle the time-divergence problem, [TY01] basically makes the well-known strongly non-zeno assumption, namely, that a minimal amount of time passes in every cyclic execution of the TA. We will not discuss this in depth, since it is not directly relevant to the main issue addressed in the current chapter.
8.4 Minimization of PTA
Having defined the PTaB, an immediate question is: how to compute it, since
one of the crucial steps of exploiting PTaB for verification is to generate the
quotient of the given PTA. A simple answer might be to take the region graph, since the region equivalence is a PTaB. However, as pointed out in [ACD93], since
the number of regions grows exponentially with the number of clocks in the
PTA, the finite region equivalence quotient is too large to be of any practical
interest. In fact [KNSS02] shows that the number of regions for PTA is even
higher than for TA. In other words, for the sake of efficiency, we are interested
in the minimal quotient, namely, the one corresponding to the coarsest bisimulation. In what follows, we will propose an algorithm to compute the quotient
of a PTS with respect to the coarsest PTaB, which combines the algorithm in
[TY01] for TA and the algorithm in [BEM00] for MDPs.
8.4.1 Partition Refinement
Prior to presenting our algorithm, let us first recall how the minimization algorithm from [KS90, PT87] works for finite (non-probabilistic, untimed) labeled
transition systems (LTSs). The algorithm relies on the partition-refinement
technique [PT87]. Roughly speaking, the state space S is partitioned into blocks,
i.e., pairwise disjoint sets of states. Starting from an initial partition Π0 where,
e.g., all equally-labeled states form a block, the algorithm successively refines
these blocks such that ultimately each block contains only bisimilar states. The
refinement is based on the fact that a bisimulation induces a pre-stable partition.
Formally, given a partition of states Π and blocks C1 , C2 ∈ Π, C1 is pre-stable
with respect to C2 if C1 ⊆ pred(C2 ) or C1 ∩ pred(C2 ) = ∅, where pred(C) is the
set of direct predecessors of all the states in C. If C1 is not stable with respect
to C2 , then C1 can further be partitioned into two sub-blocks C1 ∩ pred(C2 ) and
C1 \ pred(C2 ). In this case, C2 is a splitter of C1 . Π is pre-stable if all its blocks
are pairwise pre-stable. The skeleton of the algorithm, presented in Algo. 1, albeit simple, captures the essence of partition refinement.
Algorithm 1 The general partition-refinement algorithm
Require: The LTS, the initial partition Π0
Ensure: The partition Π under the coarsest bisimulation
1: Π := Π0;
2: while (∃C1, C2 ∈ Π. C1 is not stable with respect to C2) do
3:   ΠC1 := {C1 ∩ pred(C2), C1 \ pred(C2)};
4:   Π := (Π \ {C1}) ∪ ΠC1;
5: end while
6: return Π;
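A direct Python rendering of Algo. 1 (a sketch for finite LTSs with an explicit edge list; for the infinite PTSs below, pred is of course replaced by symbolic predecessor operations):

from typing import FrozenSet, List, Set, Tuple

Edge = Tuple[str, str]                        # (source, target)

def pred(edges: List[Edge], block: FrozenSet[str]) -> FrozenSet[str]:
    """Direct predecessors of the states in `block`."""
    return frozenset(s for (s, t) in edges if t in block)

def refine(partition: Set[FrozenSet[str]], edges: List[Edge]) -> Set[FrozenSet[str]]:
    """Partition refinement as in Algo. 1, for a finite LTS."""
    partition = set(partition)
    changed = True
    while changed:
        changed = False
        for C1 in list(partition):
            for C2 in list(partition):
                p = pred(edges, C2)
                inter, diff = C1 & p, C1 - p
                if inter and diff:            # C1 is not stable w.r.t. C2: split it
                    partition.remove(C1)
                    partition |= {inter, diff}
                    changed = True
                    break
            if changed:
                break
    return partition

edges = [("s0", "s1"), ("s1", "s2")]          # s2 has no outgoing transition
initial = {frozenset({"s0", "s1", "s2"})}     # all states equally labeled
print(refine(initial, edges))                 # three singleton blocks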
The scheme can be adapted to infinite state spaces, assuming that they admit effective representations of blocks and decision procedures for computing intersection, set-difference and predecessors of blocks, and for testing whether a block is empty. For termination, it must be ensured that a finite pre-stable partition exists. In [TY01], such an adaptation is given for TA to compute the time-abstracting bisimulation, since the state space of TA falls into this category. In particular, two symbolic operations, time-pred and disc-pred, are introduced, which return the set of all time-predecessors and discrete-predecessors of states, respectively. For details, see [TY01], Figure 15.
8.4.2 Bisimulation Quotienting Algorithm
In this section, we shall move further, taking probabilistic transitions into account. This is not trivial, since the infinite state space (caused by time) and the probabilistic transitions are closely interwoven; thus the set pred should be replaced by the discrete predecessors discpred and the timed predecessors timepred in a proper way.
The set of timed predecessors splits a block where a discontinuity on time
occurs when taking a timed transition. This is captured by the time-refinement
operator (see Def. 8.4.1). Besides, due to Prop. 8.3.5, a state having a deadlock
must be in a different block than a state that does not suffer from a deadlock.
This suggests a first discrete-refinement operator (see Def. 8.4.2).
For discrete predecessors, since a probability distribution rather than a state
is associated with a transition, successively dividing a block by a single-block
splitter does not suffice. Instead, we adapt the mutual-refine algorithm proposed
in [BEM00]. The algorithm maintains a distribution partition in addition to a
state partition, and in each iteration refines one partition by the other and
vice versa, till both partitions stabilize. However, this algorithm cannot be
directly applied in our case, since a block might expand in a new partition as
the number of symbolic states in it may grow. Consequently, in a new partition,
it is possible that the distribution set differs from the one in the last iteration,
and obviously the old distribution partition is obsolete. The Expand operator
(see Def. 8.4.3) thus recalculates the symbolic states, the distribution set as
well as the distribution partition; as a final step of an iteration, the state partition is refined by a second discrete-refinement operator (see Def. 8.4.4)
using the newest distribution partition.
The algorithm is presented in Algo. 2. A detailed explanation follows.
Algorithm 2 The partition-refinement algorithm for PTA
Require: The PTA G and PTS MG = (S, Steps, L′, s0)
Ensure: The partition Π under the coarsest PTaB
1: Initialization: Get the initial partition, Π := ΠAP;
2: Partition Π according to Refine^1_d(Π, ∇).
3: repeat
4:   phase i – Refine Π by discrete transitions:
5:   Choose some block C ∈ Π;
6:   Expand(C, Π);
7:   Update the distribution set Distr′;
8:   Compute the equivalence classes Distr′/Π;
9:   Choose some M ∈ Distr′/Π;
10:  Π := Refine^2_d(Π, M);
11:  phase ii – Refine Π by time delays:
12:  Choose some block C ∈ Π;
13:  Π := Refine_t(Π, C);
14: until Π does not change.
15: return Π;
Determining the initial partition
The initial partition of states ΠAP = S/RAP is the AP-partition of S, where RAP = {(s1, s2) ∈ S × S | L(s1) = L(s2)}. Initially, the zone of the symbolic state for a location ℓ is Inv(ℓ); thus, on the symbolic-state level,
ΠAP = {([ℓ]_RAP, Inv(ℓ)) | ℓ ∈ Loc}.
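A small sketch of this initialization step (using the labels of the example PTA, cf. Example 8.4.5, where ℓ1 and ℓ3 are labeled {a} and ℓ2, ℓ4 with ∅): locations are grouped by their label sets, and each group, paired with its invariant, forms one initial block on the symbolic-state level.

from collections import defaultdict
from typing import Dict, FrozenSet, List, Set

def initial_partition(labels: Dict[str, FrozenSet[str]]) -> List[Set[str]]:
    """Group locations by their label set (the AP-partition on locations)."""
    groups = defaultdict(set)
    for loc, aps in labels.items():
        groups[aps].add(loc)
    return list(groups.values())

labels = {"l1": frozenset({"a"}), "l3": frozenset({"a"}),
          "l2": frozenset(), "l4": frozenset()}
print(initial_partition(labels))   # [{'l1', 'l3'}, {'l2', 'l4'}] (up to ordering)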
Refining partitions
In the rest of this section, we will concentrate on how to refine an existing partition. For ease of reference, given a PTA, we assign to each transition (leading to a distribution) a unique action name, and for each location ℓ, we write ∇(ℓ) for the set of all outgoing transitions from ℓ, which is ranged over by α, β, .... Let ∇ = ∪_{ℓ∈Loc} ∇(ℓ). Moreover, for each transition α, gα and µα are the corresponding guard and the resulting distribution, respectively. As an example, there are four uniquely labeled transitions in the PTA in Fig. 8.1.
As we have two types of transitions, there are two types of splitters as
well. For the timed transitions, the time-splitter is a block C2, and the time-refinement operator is as follows:
Definition 8.4.1 (The time-refinement operator) Let Π be a partition of S and C1, C2 ∈ Π. Then:
Refine_t(C1, C2) = {C1 ∩ timepred(C2), C1 \ timepred(C2)} \ {∅},
where timepred(C) = {s | ∃s′ ∈ C, t ∈ R>0. s −t→ s′}.
We define Refine_t(Π, C2) = ∪_{C1∈Π} Refine_t(C1, C2).
This corresponds to phase ii (line 11-13) in Algo. 2.
For the discrete transitions, the split consists of two steps. The first step is
to differentiate the symbolic states that can fire a discrete transition from those
that cannot. In this step, a splitter is the action set ∇, which refines a block as
follows:
Definition 8.4.2 (The 1st discrete-refinement operator) Let Π be a partition of S, ∇ be the action set, and C ∈ Π. Then:
Refine^1_d(C, ∇) = {C+, C−} \ {∅},
where C+ = {(ℓ, Z) | ∃α ∈ ∇(ℓ). Z ⊆ gα and (ℓ, Z′) ∈ C for some Z′} and C− = {(ℓ, Z) | ∀α ∈ ∇(ℓ). Z ∩ gα = ∅ and (ℓ, Z′) ∈ C for some Z′}.
We define Refine^1_d(Π, ∇) = ∪_{C∈Π} Refine^1_d(C, ∇).
All symbolic states in C+ have an enabled discrete transition, whereas none of the symbolic states in C− has one. In fact, C− is the set of states that have a deadlock. This is used in line 2 of the algorithm. This operator only has to be performed once, because further refinement does not change whether a state has an enabled discrete transition. Thus, we place it in the initialization.
As the second step, we can further partition C+ according to the distributions. Suppose the current partition is Π = {C1, ..., Cn}, n ∈ N. For any block Ci, we can write Ci = {(ℓ_i^1, Y_i^1), ..., (ℓ_i^q, Y_i^q)} with Ci = ∪_{1≤j≤q} (ℓ_i^j, Y_i^j), and for any 1 ≤ h ≠ k ≤ q, ℓ_i^h ≠ ℓ_i^k. For an index 1 ≤ h ≤ q and α ∈ ∇(ℓ_i^h) such that Y_i^h ⊆ gα, we hope to derive the distributions induced by α.
However, it is possible that a resulting symbolic state of a transition straddles different blocks, which leads to the problem that the probability µ(C) of moving to a block C is not well defined. For instance, Fig. 8.4(a) illustrates a distribution from S0 = (ℓ, Z) to S1 = (ℓ1, Z) and S2 = (ℓ2, Z), where S0 ∈ C0 and S2 ∈ C2, but S1 is scattered over the blocks C_1^1, C_1^2 and C_1^3, as in Fig. 8.4(b). Note that {Z1, Z2, Z3} is a partition of Z. The problem is that µ(C_1^i) cannot be defined for 1 ≤ i ≤ 3. To solve this problem, we have to split a symbolic state in such a way that each sub-symbolic state has well-defined probabilistic transitions over the partition Π, as in Fig. 8.4(c).

Figure 8.4: The motivation of Expand

As a result of this split, the number of blocks stays
the same, but the symbolic state space expands in terms of transitions. In the
following, we define the Expand operator in a formal way.
For a symbolic state S = (ℓ, Z) and action α, let
Supp(µα) = {(ℓ1, X1), ..., (ℓm, Xm)}
with probabilities p1, ..., pm, respectively, where Xi ⊆ X is the reset clock set and pi is the associated probability, with Σ_{1≤i≤m} pi = 1. For successor (ℓj, Xj, pj), the resulting symbolic state is Sj = (ℓj, Z[Xj := 0]). In the following, we will split Z into a partition Z = {Z1, ..., Zf} such that for any (sub)symbolic state (ℓ, Z′) of S, i.e., Z′ ∈ Z, each of its successor states is located in only one block. For Ck ∈ {C1, ..., Cn}, define
Z_j^k = {(ℓ, ν) | ν ∈ Z, (ℓj, ν[Xj := 0]) ∈ Ck}.
It is possible that Z_j^k = ∅. For each successor 1 ≤ j ≤ m, {Z_j^1, Z_j^2, ..., Z_j^n} is a partition of Z. We have the following partitions:
For the 1st successor: {Z_1^1, ..., Z_1^k, ..., Z_1^n}
...
For the j-th successor: {Z_j^1, ..., Z_j^k, ..., Z_j^n}
...
For the m-th successor: {Z_m^1, ..., Z_m^k, ..., Z_m^n}.
We define
Z_~k = ∩_{1≤j≤m} Z_j^{~k[j]},
where for each j, 1 ≤ ~k[j] ≤ n. Here Z_j^{~k[j]} denotes choosing the ~k[j]-th element in the j-th row, where ~k is a vector of indices. Stated in words, Z_~k is obtained
by taking the intersection of one arbitrary element from each row in the above “matrix”. It is not difficult to see that
{Z_~k | 1 ≤ ~k[j] ≤ n, 1 ≤ j ≤ m} \ {∅}
is a partition of Z, and in the worst case, this partition may contain n^m blocks. For each Z_~k, since Z_~k ⊆ Z_j^{~k[j]} for 1 ≤ j ≤ m, it must be the case that (ℓj, Z_~k[Xj := 0]) ⊆ C_{~k[j]} for each 1 ≤ j ≤ m. Hence, the probability from the symbolic state (ℓ, Z_~k) to Ci for 1 ≤ i ≤ n is obtained by adding the nonzero probabilities in the i-th column:
Pr((ℓ, Z_~k), α, Ci) = Σ_{1≤j≤m, ~k[j]=i} pj.
In what follows, we merge those Z_~k and Z_~k′ such that for each Ci ∈ Π, Pr((ℓ, Z_~k), α, Ci) = Pr((ℓ, Z_~k′), α, Ci). As a result, the partition Z = {Z1, ..., Zf} is obtained. The expansion operator then expands a block with (possibly) more refined symbolic states as follows:
Definition 8.4.3 (The Expand operator) Let Π be a partition of S, α ∈ ∇, C ∈ Π and S = (ℓ, Z) ∈ C. Then:
Expand(S, α, Π) = {(ℓ, Zi) | 1 ≤ i ≤ f}.
Expand(C, Π) = ∪_{S∈C, α∈∇} Expand(S, α, Π).
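The construction of the cells Z_~k and of Pr((ℓ, Z_~k), α, Ci) can be sketched as follows (illustrative only: zones are represented here by finite sets of sample valuations so that the code runs without a symbolic zone library, and block_of is a hypothetical oracle mapping a concrete successor state to the index of its block). Cells whose probability rows coincide would then be merged, as described above.

from collections import defaultdict
from typing import Callable, Dict, FrozenSet, List, Tuple

Valuation = Dict[str, float]
Successor = Tuple[str, FrozenSet[str], float]   # (target location, reset set X_j, p_j)

def reset(nu: Valuation, X: FrozenSet[str]) -> Valuation:
    return {x: (0.0 if x in X else v) for x, v in nu.items()}

def expand(zone: List[Valuation], succs: List[Successor],
           block_of: Callable[[str, Valuation], int]):
    """Group valuations by the vector k of blocks hit by the successors (the
    cells Z_k) and compute Pr((l, Z_k), alpha, C_i) for each cell."""
    cells = defaultdict(list)
    for nu in zone:
        k = tuple(block_of(loc, reset(nu, X)) for (loc, X, _p) in succs)
        cells[k].append(nu)
    probs = {}
    for k in cells:
        row = defaultdict(float)
        for block, (_loc, _X, p) in zip(k, succs):
            row[block] += p                      # add p_j to column k[j]
        probs[k] = dict(row)
    return dict(cells), probs

# One clock x, two successors with probability 1/2 each; the second successor's
# block depends on whether x <= 1 still holds after the (empty) reset.
succs = [("l1", frozenset({"x"}), 0.5), ("l2", frozenset(), 0.5)]
block_of = lambda loc, nu: 0 if loc == "l1" else (1 if nu["x"] <= 1 else 2)
cells, probs = expand([{"x": 0.5}, {"x": 1.5}], succs, block_of)
print(probs)   # {(0, 1): {0: 0.5, 1: 0.5}, (0, 2): {0: 0.5, 2: 0.5}}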
Note that for each sub-symbolic state T of S in Expand(S, α, Π), Pr(T, α, Ck) is well-defined. Let µ_{T,α} denote the distribution over Π from T via action α. Now the distribution set is updated as Distr′ = {µ_{T,α} | T ∈ Expand(C, Π) with T = (ℓ, Y) for some ℓ, Y and α ∈ ∇(ℓ)}. The distribution partition of Distr′ over Π, denoted by Distr′/Π, can thus be updated accordingly, based on the following fact: let M ∈ Distr′/Π, then ∀µ, µ′ ∈ M, µ(C) = µ′(C) for any C ∈ Π. As in the mutual-refine technique, the state partition can in turn be refined by the distribution partition as follows:
Definition 8.4.4 (The 2nd discrete-refinement operator) Let Π be a partition of S with C ∈ Π, C′ = Expand(C, Π) and M ∈ Distr′/Π. Then:
Refine^2_d(C, M) = {C_M, C′ \ C_M} \ {∅},
where C_M = {T ∈ C′ | T = (ℓ, Y) and µ_{T,α} ∈ M for some α ∈ ∇(ℓ)}.
We define Refine^2_d(Π, M) = (Π \ {C}) ∪ Refine^2_d(C, M).
The above steps correspond to phase i, line 4-10 in Algo. 2.
Figure 8.5: The bisimulation quotient

Example 8.4.5 The bisimulation quotient of the PTA in Fig. 8.1 is shown in Fig. 8.5. There are four equivalence classes: the upper three are labeled with {a}, and the one below, the sink state {ℓ2, ℓ4}, with ∅. The label τ denotes that
some time passes during the transition. The intuition is that from ℓ1 or ℓ3 it
is possible to go to a state within a given period of time (the first τ ), where
either it takes a discrete transition to the sinking state, or it stays (taking the
second τ ) for some time till it can take a transition back resetting its clock with
probability 0.3, or it goes to the sinking state with probability 0.7. t1 and t2
are arbitrary time points, which have been abstracted from the original model.
Correctness and termination. It is not difficult to see that each application of a Refine operator may yield new blocks; any two states placed in different blocks are not bisimilar, and the blocks are pairwise disjoint. The correctness of the algorithm follows from the standard correctness arguments for the partition-refinement scheme [PT87, KS90] and its probabilistic adaptation [BEM00]. The definition of Refine_t(Π, C) clearly yields a finer partition, which deals with timed transitions. For the two operators for discrete transitions, the first one, Refine^1_d(Π, ∇) (where ∇ is an action set), distinguishes the states with enabled discrete transitions and also yields a finer partition. The second one, Refine^2_d(Π, M) (where M is an equivalence class with respect to the probabilistic distributions), is standard in computing probabilistic bisimulations (see [BEM00, Lem. 4.4]). The correctness of the novel Expand operator follows directly from its definition.
Termination is ensured by Thm. 8.3.4. Namely, in the worst case, the algorithm will generate the partition induced by the region equivalence.
Complexity. We briefly analyze the complexity of the algorithm. Since in the worst case the region equivalence will be obtained, our algorithm may need exponentially many refinement steps to reach the fixpoint, and thus it is an Exptime algorithm. On the other hand, it is not very hard to show, by adapting the constructions of [CY92, LS00, LS07], that for PTA with at least three clocks, the
Exptime lower-bound can be obtained. The key observation is that, as shown
in [CY92, LS00], given a TA, the reachability problem is Pspace-complete when
the constants occurring in the guards of the TA belong to {0, 1} or when the
number of clocks is at least three (with relatively large constants to which these
clocks are compared). Both of them turn out to be the source of the complexity
and conceptually they can be traded for each other. When probability is involved, as shown in [LS07], the complexity (of almost-sure probabilistic reachability, a natural variant of reachability in the probabilistic setting) climbs to Exptime-completeness (note that one can trade the number of clocks for the number of components, according to [LS00]). Moreover, one can easily use PTaB to “encode” the probabilistic reachability property (see also
[JLS08] for relevant work). Hence the claimed Exptime lower-bound. Nevertheless, we note that (1) for PTA with only one clock, we can show that the
algorithm only needs polynomial time to reach the fixpoint. (This is due to
the fact that for one-clock TA, one can take larger granularity on the occurring
constants, and thus obtain a region graph which is only of polynomial size, see
[LMS04, JLS08].) Thus in this case, we can get a polynomial-time algorithm; (2)
In practice, usually, PTA have a much coarser partition than the one induced
by the region equivalence, and thus our algorithm is expected to perform pretty
well in this case. Undoubtedly, (2) has to be confirmed by experimental data, which is left as future work. (The current chapter is devoted only to
the theoretical development.) We only note here that the results of [TY01] for
(non-probabilistic) TA are encouraging.
8.5 Verification of Branching-time Properties
The logic PCTL. In this section we prove that PTaB preserves branching-time properties specified in probabilistic CTL (PCTL, [HJ94]). We first briefly
introduce the syntax and semantics of PCTL.
Φ ::= tt | a | ¬Φ | Φ ∧ Φ | ∃P⊴p(φ),
where p ∈ [0, 1] is a probability, a ∈ AP, ⊴ ∈ {<, ≤, >, ≥} and φ is a path formula defined as:
φ ::= Φ U Φ | Φ W Φ.
Semantics. The semantics of PCTL is defined by a satisfaction relation, denoted |=, which is characterized as the least relation over the states of a PTS M = (S, Steps, L, s0) (respectively, the infinite paths in M) and the state formulae (respectively, the path formulae) satisfying:
s |= tt (for all states s);
s |= a iff a ∈ L(s);
s |= ¬Φ iff not (s |= Φ);
s |= Φ ∧ Ψ iff s |= Φ and s |= Ψ;
s |= ∃P⊴p(φ) iff for some scheduler G ∈ W, Prob(s, φ) ⊴ p in the DTMC MG, where Prob(s, φ) = Pr{σ ∈ Paths(s) | σ |= φ};
ρ |= Φ U Ψ iff ∃i. ρ[i] |= Ψ ∧ ∀0 ≤ j < i. ρ[j] |= Φ;
ρ |= Φ W Ψ iff either ρ |= Φ U Ψ or ∀i. ρ[i] |= Φ.
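To relate the path formulae to something computable, the following sketch (a standard value-iteration approximation over a finite DTMC, with our own names; this is not the algorithm of this chapter, which works on the quotient) estimates Prob(s, Φ1 U Φ2) once a scheduler has resolved the nondeterminism and the satisfaction sets of Φ1 and Φ2 are known.

from typing import Dict, Set

Dtmc = Dict[str, Dict[str, float]]       # P(s, s'); every successor is a key of P

def until_prob(P: Dtmc, sat1: Set[str], sat2: Set[str],
               iters: int = 1000) -> Dict[str, float]:
    """Approximate Prob(s, Phi1 U Phi2) in a finite DTMC by value iteration."""
    x = {s: (1.0 if s in sat2 else 0.0) for s in P}
    for _ in range(iters):
        x = {s: (1.0 if s in sat2
                 else sum(p * x[t] for t, p in P[s].items()) if s in sat1
                 else 0.0)
             for s in P}
    return x

# s0 satisfies Phi1 and loops with probability 1/2; s1 satisfies Phi2.
P: Dtmc = {"s0": {"s0": 0.5, "s1": 0.5}, "s1": {"s1": 1.0}}
print(until_prob(P, sat1={"s0"}, sat2={"s1"}))   # s0 -> ~1.0, s1 -> 1.0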
The bisimulation relation can be lifted to paths in the following way:
Lemma 8.5.1 (Bisimulation on paths) Let s ∼ s′. Then for each (finite or infinite) path ω = s0 −t0,µ0→ s1 −t1,µ1→ s2 ··· ∈ Paths(s), there exists a path ω′ = s′0 −t′0,µ′0→ s′1 −t′1,µ′1→ s′2 ··· ∈ Paths(s′) of the same length such that for each i ≥ 0, si ∼ s′i and µi ≡∼ µ′i.
Theorem 8.5.2 Let G be a PTA and ∼ be a PTaB on G. For any PCTL
formula Φ, s ∼ s′ implies that s |= Φ iff s′ |= Φ.
Proof: The proof is by induction on the structure of Φ. The base case is trivial, since if s ∼ s′, then L(s) = L(s′). The interesting inductive cases are for Φ = ∃P⊴p(φ), where φ = Ψ1 U Ψ2. Assume that s |= Φ. Then there exists a scheduler G : Paths^⋆ → R≥0 × Distr(S) such that Paths_G(s, Ψ1 U Ψ2) = {ω ∈ Paths_G(s) | ∃i ≥ 0. ω(i, ti) |= Ψ2 ∧ ∀0 ≤ j < i, t < tj. ω(j, t) |= Ψ1} and Pr(Paths_G(s, Ψ1 U Ψ2)) ⊴ p.
Assume ω ∈ Paths_G(s, Ψ1 U Ψ2). According to Lem. 8.5.1, there must exist a probabilistic time-abstracting bisimilar path ω′ ∈ Paths_G′(s′, Ψ1 U Ψ2), and vice versa. We can thus construct a scheduler G′ : Paths^⋆ → R≥0 × Distr(S) as follows: for ω ∈ Paths^⋆(s) and its bisimilar path ω′ ∈ Paths^⋆(s′), if G(ω) = (t, µ), then G′(ω′) = (t′, µ′) with µ ≡∼ µ′.
It remains to show that Pr(Paths_G(s, Ψ1 U Ψ2)) = Pr(Paths_G′(s′, Ψ1 U Ψ2)). Due to the fact that for each Ψ1 U Ψ2 path, the probability distributions determined by G and G′ are equivalent, the probability measures of the two sets of paths coincide.
The above theorem states that a PCTL formula is preserved by PTaBs,
which indicates that all the PCTL properties can be checked on the quotient
PTSs. Thus all the existing techniques, algorithms, and tools for finite MDPs
can be applied. We remark that the reverse direction of the theorem, however,
does not hold in general.
8.6 Conclusion
We have investigated PTaBs for PTA. This equivalence usually provides a much coarser partition than the traditional region equivalence and preserves PCTL. We provided a non-trivial adaptation of the traditional partition-refinement algorithm to compute the quotient under PTaB.
As for future work, an experimental evaluation of the efficiency of the proposed algorithm is to be carried out. Moreover, combined probabilistic time-abstracting bisimulations (where a transition can be simulated by a convex combination of transitions), weak probabilistic time-abstracting bisimulations, and logical characterizations of PTaB are interesting topics which deserve further investigation. Furthermore, abstraction-refinement and counterexample generation problems for PTA are also planned.
Bibliography
[AB06]
Rajeev Alur and Mikhail Bernadsky. Bounded model checking for
GSMP models of stochastic real-time systems. In João P. Hespanha and Ashish Tiwari, editors, HSCC, volume 3927 of Lecture
Notes in Computer Science, pages 19–33. Springer, 2006.
[ABKM09]
Eric Allender, Peter Bürgisser, Johan Kjeldgaard-Pedersen, and
Peter Bro Miltersen. On the complexity of numerical analysis.
SIAM J. Comput., 38(5):1987–2006, 2009.
[ACD91a]
Rajeev Alur, Costas Courcoubetis, and David L. Dill. Model-checking for probabilistic real-time systems (extended abstract). In Javier Leach Albert, Burkhard Monien, and Mario Rodríguez-Artalejo, editors, ICALP, volume 510 of Lecture Notes in Computer Science, pages 115–126. Springer, 1991.
[ACD91b]
Rajeev Alur, Costas Courcoubetis, and David L. Dill. Verifying automata specifications of probabilistic real-time systems. In
de Bakker et al. [dBHdRR92], pages 28–44.
[ACD93]
Rajeev Alur, Costas Courcoubetis, and David L. Dill. Model-checking in dense real-time. Inf. Comput., 104(1):2–34, 1993.
[Ace03]
Luca Aceto. Some of my favourite results in classic process algebra. Bulletin of the EATCS, 81:90–108, 2003.
[ACFI06]
Luca Aceto, Taolue Chen, Wan Fokkink, and Anna Ingólfsdóttir.
On the axiomatizability of priority. In Michele Bugliesi, Bart
Preneel, Vladimiro Sassone, and Ingo Wegener, editors, ICALP
(2), volume 4052 of Lecture Notes in Computer Science, pages
480–491. Springer, 2006.
[ACFI08]
Luca Aceto, Taolue Chen, Wan Fokkink, and Anna Ingólfsdóttir.
On the axiomatisability of priority. Mathematical Structures in
Computer Science, 18(1):5–28, 2008.
[ACHH92]
Rajeev Alur, Costas Courcoubetis, Thomas A. Henzinger, and
Pei-Hsin Ho. Hybrid automata: An algorithmic approach to the
specification and verification of hybrid systems. In Robert L.
Grossman, Anil Nerode, Anders P. Ravn, and Hans Rischel, editors, Hybrid Systems, volume 736 of Lecture Notes in Computer
Science, pages 209–229. Springer, 1992.
[AD94]
Rajeev Alur and David L. Dill. A theory of timed automata.
Theor. Comput. Sci., 126(2):183–235, 1994.
[AD00]
Robert B. Ash and Catherine A. Doléans-Dade. Probability and
Measure Theory. Academic Press, 2000.
[AFH96]
Rajeev Alur, Tomás Feder, and Thomas A. Henzinger. The benefits of relaxing punctuality. J. ACM, 43(1):116–146, 1996.
[AFI07]
Luca Aceto, Wan Fokkink, and Anna Ingólfsdóttir. Ready to
preorder: Get your BCCSP axiomatization for free! In Till
Mossakowski, Ugo Montanari, and Magne Haveraaen, editors,
CALCO, volume 4624 of Lecture Notes in Computer Science,
pages 65–79. Springer, 2007.
[AFI08]
Luca Aceto, Wan Fokkink, and Anna Ingólfsdóttir. A cancellation
theorem for BCCSP. Fundam. Inform., 88(1-2):1–21, 2008.
[AFIL05]
Luca Aceto, Wan Fokkink, Anna Ingólfsdóttir, and Bas Luttik.
CCS with Hennessy’s merge has no finite-equational axiomatization. Theor. Comput. Sci., 330(3):377–405, 2005.
[AFIL09]
Luca Aceto, Wan Fokkink, Anna Ingólfsdóttir, and Bas Luttik. A
finite equational base for CCS with left merge and communication
merge. ACM Trans. Comput. Log., 10(1), 2009.
[AFIM08]
Luca Aceto, Wan Fokkink, Anna Ingólfsdóttir, and Mohammad Reza Mousavi. Lifting non-finite axiomatizability results
to extensions of process algebras. In Giorgio Ausiello, Juhani
Karhumäki, Giancarlo Mauri, and C.-H. Luke Ong, editors, IFIP
TCS, volume 273 of IFIP, pages 301–316. Springer, 2008.
[AFIN06]
Luca Aceto, Wan Fokkink, Anna Ingólfsdóttir, and Sumit Nain.
Bisimilarity is not finitely based over BPA with interrupt. Theor.
Comput. Sci., 366(1-2):60–81, 2006.
[AFV01]
Luca Aceto, Wan Fokkink, and Chris Verhoef. Structural operational semantics. In Jan Bergstra, Alban Ponse, and Scott
Smolka, editors, Handbook of Process Algebra, pages 197–292. Elsevier Science, 2001.
[AFvGI04]
Luca Aceto, Wan Fokkink, Rob J. van Glabbeek, and Anna
Ingólfsdóttir. Nested semantics over finite trees are equationally
hard. Inf. Comput., 191(2):203–232, 2004.
[AH91]
Rajeev Alur and Thomas A. Henzinger. Logics and models of real
time: A survey. In de Bakker et al. [dBHdRR92], pages 74–106.
[AH93]
Rajeev Alur and Thomas A. Henzinger. Real-time logics: Complexity and expressiveness. Inf. Comput., 104(1):35–77, 1993.
[AH94]
Rajeev Alur and Thomas A. Henzinger. A really temporal logic.
J. ACM, 41(1):181–204, 1994.
[AH97]
Rajeev Alur and Thomas A. Henzinger. Real-time system = discrete system + clock variables. STTT, 1(1-2):86–109, 1997.
[AI07]
Luca Aceto and Anna Ingólfsdóttir. The saga of the axiomatization of parallel composition. In Caires and Vasconcelos [CV07],
pages 2–16.
[AI08]
Luca Aceto and Anna Ingólfsdóttir. On the expressibility of priority. Inf. Process. Lett., 109(1):83–85, 2008.
[AILS07]
Luca Aceto, Anna Ingólfsdóttir, Kim Larsen, and Jiri Srba. Reactive Systems: Modelling, Specification and Verification. Cambridge University Press, 2007.
[Alu99]
Rajeev Alur. Timed automata. In Nicolas Halbwachs and Doron
Peled, editors, CAV, volume 1633 of Lecture Notes in Computer
Science, pages 8–22. Springer, 1999.
[AM04]
Rajeev Alur and P. Madhusudan. Decision problems for timed
automata: A survey. In Marco Bernardo and Flavio Corradini,
editors, SFM, volume 3185 of Lecture Notes in Computer Science,
pages 1–24. Springer, 2004.
[ASSB00]
Adnan Aziz, Kumud Sanwal, Vigyan Singhal, and Robert K.
Brayton. Model-checking continous-time Markov chains. ACM
Trans. Comput. Log., 1(1):162–170, 2000.
[AW95]
George B. Arfken and Hans J. Weber. Mathematical Methods for
Physicists (4th ed.). Academic Press, 1995.
[BA07]
Mikhail Bernadsky and Rajeev Alur. Symbolic analysis for GSMP
models with one stateful clock. In Bemporad et al. [BBB07b],
pages 90–103.
[Bae90]
Jos C. M. Baeten, editor. Applications of Process Algebra. Cambridge Tracts in Theoretical Computer Science (No. 17). Cambridge University Press, 1990.
[Bae05]
Jos C. M. Baeten. A brief history of process algebra. Theor.
Comput. Sci., 335(2-3):131–146, 2005.
[BB00]
Jos C. M. Baeten and Jan A. Bergstra. Mode transfer in process
algebra. Technical report, CSR00–01, Eindhoven University of
Technology, 2000.
[BBB+ 07a]
Christel Baier, Nathalie Bertrand, Patricia Bouyer, Thomas Brihaye, and Marcus Größer. Probabilistic and topological semantics
for timed automata. In Vikraman Arvind and Sanjiva Prasad, editors, FSTTCS, volume 4855 of Lecture Notes in Computer Science, pages 179–191. Springer, 2007.
[BBB07b]
Alberto Bemporad, Antonio Bicchi, and Giorgio C. Buttazzo, editors. Hybrid Systems: Computation and Control, 10th International Workshop, HSCC 2007, Pisa, Italy, April 3-5, 2007,
Proceedings, volume 4416 of Lecture Notes in Computer Science.
Springer, 2007.
[BBB+ 08]
Christel Baier, Nathalie Bertrand, Patricia Bouyer, Thomas Brihaye, and Marcus Größer. Almost-sure model checking of infinite
paths in one-clock timed automata. In LICS, pages 217–226.
IEEE Computer Society, 2008.
[BBBM08]
Nathalie Bertrand, Patricia Bouyer, Thomas Brihaye, and Nicolas Markey. Quantitative model-checking of one-clock timed automata under probabilistic semantics. In QEST, pages 55–64.
IEEE Computer Society, 2008.
[BBK86]
Jos C. M. Baeten, Jan A. Bergstra, and Jan Willem Klop. Syntax and defining equations for an interrupt mechanism in process
algebra. Fundam. Inform., IX(2):127–168, 1986.
[BC05]
Patricia Bouyer and Fabrice Chevalier. On conciseness of extensions of timed automata. Journal of Automata, Languages and
Combinatorics, 10(4):393–405, 2005.
[BCH+ 07]
Christel Baier, Lucia Cloth, Boudewijn R. Haverkort, Matthias
Kuntz, and Markus Siegle. Model checking Markov chains with
actions and state labels. IEEE Trans. Software Eng., 33(4):209–
224, 2007.
[BCJ09]
Jasper Berendsen, Taolue Chen, and David N. Jansen. Undecidability of cost-bounded reachability in priced probabilistic timed
automata. In Jianer Chen and S. Barry Cooper, editors, TAMC,
volume 5532 of Lecture Notes in Computer Science, pages 128–
137. Springer, 2009.
[BCSS98]
Lenore Blum, Felipe Cucker, Michael Shub, and Steve Smale.
Complexity and Real Computation. Springer-Verlag, 1998.
[BdA95]
Andrea Bianco and Luca de Alfaro. Model checking of probabilistic and nondeterministic systems. In Thiagarajan [Thi95],
pages 499–513.
[Bea03]
Danièle Beauquier. On probabilistic timed automata. Theor.
Comput. Sci., 292(1):65–84, 2003.
[BEM00]
Christel Baier, Bettina Engelen, and Mila E. Majster-Cederbaum.
Deciding bisimilarity and similarity for probabilistic processes. J.
Comput. Syst. Sci., 60(1):187–231, 2000.
[Ber85]
Jan A. Bergstra. Put and get, primitives for synchronous unreliable message passing. Technical report, Logic Group Preprint
Series 3, Utrecht University, Department of Philosophy, 1985.
[BFN03]
Stefan Blom, Wan Fokkink, and Sumit Nain. On the axiomatizability of ready traces, ready simulation, and failure traces. In
Jos C. M. Baeten, Jan Karel Lenstra, Joachim Parrow, and Gerhard J. Woeginger, editors, ICALP, volume 2719 of Lecture Notes
in Computer Science, pages 109–118. Springer, 2003.
[BHH+ 04]
Christel Baier, Boudewijn R. Haverkort, Holger Hermanns, JoostPieter Katoen, and Markus Siegle, editors. Validation of Stochastic Systems - A Guide to Current Research, volume 2925 of Lecture Notes in Computer Science. Springer, 2004.
[BHHK03]
Christel Baier, Boudewijn R. Haverkort, Holger Hermanns, and
Joost-Pieter Katoen. Model-checking algorithms for continuous-time Markov chains. IEEE Trans. Software Eng., 29(6):524–541,
2003.
[BHR84]
Stephen D. Brookes, C. A. R. Hoare, and A. W. Roscoe. A theory
of communicating sequential processes. J. ACM, 31(3):560–599,
1984.
[Bil95]
Patrick Billingsley. Probability and Measure. Wiley-Interscience,
New York, 1995.
[BIM95]
Bard Bloom, Sorin Istrail, and Albert R. Meyer. Bisimulation
can’t be traced. J. ACM, 42(1):232–268, 1995.
[BJK06]
Jasper Berendsen, David N. Jansen, and Joost-Pieter Katoen.
Probably on time and within budget: On reachability in priced
probabilistic timed automata. In QEST, pages 311–322. IEEE
Computer Society, 2006.
[BK82]
Jan A. Bergstra and Jan Willem Klop. Fixed point semantics in
process algebras. Report IW 206, Mathematical Centre, Amsterdam, 1982.
[BK84]
Jan A. Bergstra and Jan Willem Klop. Process algebra for synchronous communication. Information and Control, 60(1-3):109–
137, 1984.
[BK08]
Christel Baier and Joost-Pieter Katoen. Principles of Model Checking. MIT Press, 2008.
[BL08]
Patricia Bouyer and François Laroussinie. Model checking timed
automata. In Stephan Merz and Nicolas Navet, editors, Modeling
and Verification of Real-Time Systems, pages 111–140. ISTE Ltd.
– John Wiley & Sons, Ltd., 2008.
[BLY96]
Ahmed Bouajjani, Yassine Lakhnech, and Sergio Yovine. Model-checking for extended timed temporal logics. In Bengt Jonsson
and Joachim Parrow, editors, FTRTFT, volume 1135 of Lecture
Notes in Computer Science, pages 306–326. Springer, 1996.
[Bou04]
Patricia Bouyer. Forward analysis of updatable timed automata.
Formal Methods in System Design, 24(3):281–320, 2004.
[Bou09]
Patricia Bouyer. Model-checking timed temporal logics. In Carlos Areces and Stéphane Demri, editors, Proceedings of the 4th
Workshop on Methods for Modalities (M4M-5), Electronic Notes
in Theoretical Computer Science, Cachan, France, 2009. Elsevier
Science Publishers. To appear.
[BPDG98]
Béatrice Bérard, Antoine Petit, Volker Diekert, and Paul Gastin.
Characterization of the expressive power of silent transitions in
timed automata. Fundam. Inform., 36(2-3):145–182, 1998.
[BPS01]
Jan A. Bergstra, Alban Ponse, and Scott Smolka, editors. Handbook of Process Algebra. North-Holland, 2001.
[Bri85]
Ed Brinksma. A tutorial on LOTOS. In Michel Diaz, editor,
PSTV, pages 171–194. North-Holland, 1985.
[BS81]
Stanley Burris and H. P. Sankappanavar. A Course in Universal
Algebra. Springer-Verlag, New York, 1981.
[BSS89]
Lenore Blum, Michael Shub, and Steve Smale. On a theory
of computation and complexity over the real numbers: NPcompleteness, recursive functions and universal machines. Bull.
Am. Math. Soc., 21(1):1–46, 1989.
[BV95]
Jos C.M. Baeten and Chris Verhoef. Concrete process algebra.
In S. Abramsky, Dov M. Gabbay, and T.S.E. Maibaum, editors,
Handbook of logic in computer science (vol. 4): semantic modelling, pages 149–268. Oxford University Press, 1995.
[BW90]
Jos C. M. Baeten and W.P. Weijland. Process Algebra, Cambridge
Tracts in Theoretical Computer Science 18. Cambridge University
Press, 1990.
[BY03]
Johan Bengtsson and Wang Yi. Timed automata: Semantics, algorithms and tools. In Jörg Desel, Wolfgang Reisig, and Grzegorz
Rozenberg, editors, Lectures on Concurrency and Petri Nets, volume 3098 of Lecture Notes in Computer Science, pages 87–124.
Springer, 2003.
[CD88]
Oswaldo L.V. Costa and Mark H. A. Davis. Approximations
for optimal stopping of a piecewise-deterministic process. Math.
Control Signals Systems, 1(2):123–146, 1988.
[CES86]
Edmund M. Clarke, E. Allen Emerson, and A. Prasad Sistla.
Automatic verification of finite-state concurrent systems using
temporal logic specifications. ACM Trans. Program. Lang. Syst.,
8(2):244–263, 1986.
[CF06]
Taolue Chen and Wan Fokkink. On finite alphabets and infinite
bases III: Simulation. In Christel Baier and Holger Hermanns,
editors, CONCUR, volume 4137 of Lecture Notes in Computer
Science, pages 421–434. Springer, 2006.
[CF08]
Taolue Chen and Wan Fokkink. On the axiomatizability of impossible futures: Preorder versus equivalence. In LICS, pages
156–165. IEEE Computer Society, 2008.
[CFLN08]
Taolue Chen, Wan Fokkink, Bas Luttik, and Sumit Nain. On
finite alphabets and infinite bases. Inf. Comput., 206(5):492–519,
2008.
[CFN06]
Taolue Chen, Wan Fokkink, and Sumit Nain. On finite alphabets
and infinite bases II: Completed and ready simulation. In Luca
Aceto and Anna Ingólfsdóttir, editors, FoSSaCS, volume 3921 of
Lecture Notes in Computer Science, pages 1–15. Springer, 2006.
[CFvG08]
Taolue Chen, Wan Fokkink, and Rob J. van Glabbeek. Ready to
preorder: The case of weak process semantics. Inf. Process. Lett.,
109(2):104–111, 2008.
[CFvG09]
Taolue Chen, Wan Fokkink, and Rob J. van Glabbeek. On finite
bases for weak semantics: Failures versus impossible futures. In
Mogens Nielsen, Antonı́n Kucera, Peter Bro Miltersen, Catuscia
Palamidessi, Petr Tuma, and Frank D. Valencia, editors, SOFSEM, volume 5404 of Lecture Notes in Computer Science, pages
167–180. Springer, 2009.
[CGP99]
Edmund M. Clarke, Orna Grumberg, and Doron Peled. Model
Checking. MIT Press, 1999.
[CH90]
Rance Cleaveland and Matthew Hennessy. Priorities in process
algebras. Inf. Comput., 87(1/2):58–77, 1990.
[CHK08]
Taolue Chen, Tingting Han, and Joost-Pieter Katoen. Timeabstracting bisimulation for probabilistic timed automata. In
TASE, pages 177–184. IEEE Computer Society, 2008.
[CHKM09a]
Taolue Chen, Tingting Han, Joost-Pieter Katoen, and Alexandru
Mereacre. LTL model checking of time-inhomogeneous Markov
chains. In ATVA. Springer, 2009. To appear in Lecture Notes in
Computer Science.
[CHKM09b] Taolue Chen, Tingting Han, Joost-Pieter Katoen, and Alexandru Mereacre. Quantitative model checking of continuous-time
Markov chains against timed automata specifications. In LICS.
IEEE Computer Society, 2009. To appear. Also available as technical report AIB-2009-02, RWTH Aachen University, Germany.
[CLN01]
Rance Cleaveland, Gerald Luttgen, and V. Natarajan. Priorities in process algebra. In Jan Bergstra, Alban Ponse, and Scott
Smolka, editors, Handbook of Process Algebra, pages 711–765. Elsevier Science, 2001.
[CLN07]
Rance Cleaveland, Gerald Lüttgen, and V. Natarajan. Priority
and abstraction in process algebra. Inf. Comput., 205(9):1426–
1458, 2007.
[CLNS96]
Rance Cleaveland, Gerald Lüttgen, V. Natarajan, and Steve
Sims. Priorities for modeling and verifying distributed systems.
In Tiziana Margaria and Bernhard Steffen, editors, TACAS, volume 1055 of Lecture Notes in Computer Science, pages 278–297.
Springer, 1996.
[Cor91]
C. Corduneanu. Integral Equations and Applications. Cambridge
University Press, 1991.
[CPvdPW07] Taolue Chen, Bas Ploeger, Jaco van de Pol, and Tim A. C.
Willemse. Equivalence checking for infinite systems using parameterized boolean equation systems. In Caires and Vasconcelos
[CV07], pages 120–135.
[CSKN05]
Stefano Cattani, Roberto Segala, Marta Z. Kwiatkowska, and
Gethin Norman. Stochastic transition systems for continuous
state spaces and non-determinism. In Vladimiro Sassone, editor,
FoSSaCS, volume 3441 of Lecture Notes in Computer Science,
pages 125–139. Springer, 2005.
[CV07]
Luı́s Caires and Vasco Thudichum Vasconcelos, editors. CONCUR 2007 - Concurrency Theory, 18th International Conference, CONCUR 2007, Lisbon, Portugal, September 3-8, 2007,
Proceedings, volume 4703 of Lecture Notes in Computer Science.
Springer, 2007.
[CvdPW08]
Taolue Chen, Jaco van de Pol, and Yanjing Wang. PDL over
accelerated labeled transition systems. In TASE, pages 193–200.
IEEE Computer Society, 2008.
[CW95]
Juanito Camilleri and Glynn Winskel. CCS with priority choice.
Inf. Comput., 116(1):26–37, 1995.
[CY92]
Costas Courcoubetis and Mihalis Yannakakis. Minimum and
maximum delay problems in real-time systems. Formal Methods
in System Design, 1(4):385–415, 1992.
[CY95]
Costas Courcoubetis and Mihalis Yannakakis. The complexity of
probabilistic verification. J. ACM, 42(4):857–907, 1995.
[Dav84]
Mark H. A. Davis. Piecewise-deterministic Markov processes: A
general class of non-diffusion stochastic models. Journal of the
Royal Statistical Society (B), 46(3):353–388, 1984.
[Dav93]
Mark H. A. Davis. Markov Models and Optimization. Chapman
and Hall, 1993.
[DB95]
Ashvin Dsouza and Bard Bloom. On the expressive power of
CCS. In Thiagarajan [Thi95], pages 309–323.
[dBHdRR92] J. W. de Bakker, Cornelis Huizing, Willem P. de Roever, and
Grzegorz Rozenberg, editors. Real-Time: Theory in Practice,
REX Workshop, Mook, The Netherlands, June 3-7, 1991, Proceedings, volume 600 of Lecture Notes in Computer Science.
Springer, 1992.
[DEP02]
Josee Desharnais, Abbas Edalat, and Prakash Panangaden.
Bisimulation for labelled Markov processes.
Inf. Comput.,
179(2):163–193, 2002.
[dFG07]
David de Frutos-Escrig and Carlos Gregorio-Rodrı́guez. Simulations up-to and canonical preorders: (extended abstract). Electr.
Notes Theor. Comput. Sci., 192(1):13–28, 2007.
[dFG09]
David de Frutos-Escrig and Carlos Gregorio-Rodrı́guez.
(Bi)simulations up-to characterise process semantics.
Inf.
Comput., 207(2):146–170, 2009.
[dFGP08a]
David de Frutos-Escrig, Carlos Gregorio-Rodrı́guez, and Miguel
Palomino. Coinductive characterisations reveal nice relations between preorders and equivalences. Electr. Notes Theor. Comput.
Sci., 212:149–162, 2008.
[dFGP08b]
David de Frutos-Escrig, Carlos Gregorio-Rodrı́guez, and Miguel
Palomino. Ready to preorder: an algebraic and general proof. J.
Log. Algebr. Program., 2008.
[DHS03]
Salem Derisavi, Holger Hermanns, and William H. Sanders. Optimal state-space lumping in Markov chains. Inf. Process. Lett.,
87(6):309–315, 2003.
[DHS09]
Susanna Donatelli, Serge Haddad, and Jeremy Sproston. Model
checking timed and stochastic properties with CSLTA . IEEE
Trans. Software Eng., 35(2):224–240, 2009.
[DW02]
Klaus Denecke and Shelly L. Wismath. Universal Algebra and
Applications in Theoretical Computer Science. CRC/C&H, 2002.
[FL00]
Wan Fokkink and Bas Luttik. An ω-complete equational specification of interleaving. In Ugo Montanari, José D. P. Rolim,
and Emo Welzl, editors, ICALP, volume 1853 of Lecture Notes in
Computer Science, pages 729–743. Springer, 2000.
[FN04]
Wan Fokkink and Sumit Nain. On finite alphabets and infinite
bases: From ready pairs to possible worlds. In Igor Walukiewicz,
editor, FoSSaCS, volume 2987 of Lecture Notes in Computer Science, pages 182–194. Springer, 2004.
[FN05]
Wan Fokkink and Sumit Nain. A finite basis for failure semantics. In Luı́s Caires, Giuseppe F. Italiano, Luı́s Monteiro, Catuscia Palamidessi, and Moti Yung, editors, ICALP, volume 3580
of Lecture Notes in Computer Science, pages 755–765. Springer,
2005.
[Fok00]
Wan Fokkink. Introduction to Process Algebra. EATCS Texts in
Theoretical Computer Science. Springer, 2000.
[Fok07]
Wan Fokkink. Modelling Distributed Systems. EATCS Texts in
Theoretical Computer Science. Springer, 2007.
[GR01]
Jan Friso Groote and Michel A. Reniers. Algebraic process verification. In Jan Bergstra, Alban Ponse, and Scott Smolka, editors,
Handbook of Process Algebra, pages 1151–1208. Elsevier Science,
2001.
[Gro90]
Jan Friso Groote. A new strategy for proving ω-completeness applied to process algebra. In Jos C. M. Baeten and Jan Willem
Klop, editors, CONCUR, volume 458 of Lecture Notes in Computer Science, pages 314–331. Springer, 1990.
[Gur90]
R. Gurevic. Equational theory of positive numbers with exponentiation is not finitely axiomatizable. Ann. Pure Appl. Logic,
49(1):1–30, 1990.
[GV08]
Orna Grumberg and Helmut Veith, editors. 25 Years of Model
Checking - History, Achievements, Perspectives, volume 5000 of
Lecture Notes in Computer Science. Springer, 2008.
[Hav00]
Boudewijn R. Haverkort. Markovian models for performance
and dependability evaluation. In Ed Brinksma, Holger Hermanns, and Joost-Pieter Katoen, editors, European Educational
Forum: School on Formal Methods and Performance Analysis,
volume 2090 of Lecture Notes in Computer Science, pages 38–83.
Springer, 2000.
[Hee86]
Jan Heering. Partial evaluation and ω-completeness of algebraic
specifications. Theor. Comput. Sci., 43:149–167, 1986.
[Hen77]
L. Henkin. The logic of equality. American Mathematical Monthly, 84(8):597–612, 1977.
[Hen88]
Matthew Hennessy. Algebraic Theory of Processes. MIT Press,
Cambridge, MA, 1988.
[Hen96]
Thomas A. Henzinger. The theory of hybrid automata. In LICS,
pages 278–292, 1996.
[Hen98]
Thomas A. Henzinger. It’s about time: Real-time logics reviewed.
In Davide Sangiorgi and Robert de Simone, editors, CONCUR,
volume 1466 of Lecture Notes in Computer Science, pages 439–
454. Springer, 1998.
[HJ94]
Hans Hansson and Bengt Jonsson. A logic for reasoning about
time and reliability. Formal Asp. Comput., 6(5):512–535, 1994.
[HKNP06]
Andrew Hinton, Marta Z. Kwiatkowska, Gethin Norman, and
David Parker. PRISM: A tool for automatic verification of probabilistic systems. In Holger Hermanns and Jens Palsberg, editors, TACAS, volume 3920 of Lecture Notes in Computer Science,
pages 441–444. Springer, 2006.
[HKPV98]
Thomas A. Henzinger, Peter W. Kopke, Anuj Puri, and Pravin
Varaiya. What’s decidable about hybrid automata? J. Comput.
Syst. Sci., 57(1):94–124, 1998.
[HM85]
Matthew Hennessy and Robin Milner. Algebraic laws for nondeterminism and concurrency. J. ACM, 32(1):137–161, 1985.
[HNSY94]
Thomas A. Henzinger, Xavier Nicollin, Joseph Sifakis, and Sergio Yovine. Symbolic model checking for real-time systems. Inf.
Comput., 111(2):193–244, 1994.
[Hoa85]
C.A.R. Hoare. Communicating Sequential Processes. Prentice Hall,
1985.
[ISO87]
ISO. Information processing systems – open systems interconnection – LOTOS – a formal description technique based on the
temporal ordering of observational behaviour ISO/TC97/SC21/N
DIS8807, 1987.
[JLS08]
Marcin Jurdzinski, François Laroussinie, and Jeremy Sproston.
Model checking probabilistic timed automata with one or two
clocks. CoRR, abs/0809.0060, 2008.
[Kat08a]
Joost-Pieter Katoen. Perspectives in probabilistic verification. In
TASE, pages 3–10. IEEE Computer Society, 2008.
[Kat08b]
Joost-Pieter Katoen. Quantitative evaluation in embedded system design: Trends in modeling and analysis techniques. In
DATE, pages 86–87. IEEE, 2008.
[Kel76]
Robert M. Keller. Formal verification of parallel programs. Commun. ACM, 19(7):371–384, 1976.
[KKZ05]
Joost-Pieter Katoen, Maneesh Khattri, and Ivan S. Zapreev. A
Markov reward model checker. In QEST, pages 243–244. IEEE
Computer Society, 2005.
[KNP04]
Marta Z. Kwiatkowska, Gethin Norman, and David Parker. Probabilistic symbolic model checking with PRISM: a hybrid approach. STTT, 6(2):128–142, 2004.
[KNPS08]
Marta Z. Kwiatkowska, Gethin Norman, Dave Parker, and
Jeremy Sproston. Modeling and Verification of Real-Time Systems: Formalisms and Software Tools, chapter Verification of
Real-Time Probabilistic Systems, pages 249–288. John Wiley &
Sons, 2008.
[KNS03]
Marta Z. Kwiatkowska, Gethin Norman, and Jeremy Sproston.
Probabilistic model checking of deadline properties in the IEEE
1394 Firewire root contention protocol. Formal Asp. Comput.,
14(3):295–318, 2003.
[KNSS00]
Marta Z. Kwiatkowska, Gethin Norman, Roberto Segala, and
Jeremy Sproston. Verifying quantitative properties of continuous
probabilistic timed automata. In Catuscia Palamidessi, editor,
CONCUR, volume 1877 of Lecture Notes in Computer Science,
pages 123–137. Springer, 2000.
[KNSS02]
Marta Z. Kwiatkowska, Gethin Norman, Roberto Segala, and
Jeremy Sproston. Automatic verification of real-time systems
with discrete probability distributions. Theor. Comput. Sci.,
282(1):101–150, 2002.
[KNSW07]
Marta Z. Kwiatkowska, Gethin Norman, Jeremy Sproston, and
Fuzhi Wang. Symbolic model checking for probabilistic timed
automata. Inf. Comput., 205(7):1027–1077, 2007.
[Koy90]
Ron Koymans. Specifying real-time properties with metric temporal logic. Real-Time Systems, 2(4):255–299, 1990.
[KS90]
Paris C. Kanellakis and Scott A. Smolka. CCS expressions, finite
state processes, and three problems of equivalence. Journal of
Information and Computation, 86(1):43–68, 1990.
[KSK76]
John G. Kemeny, J. Laurie Snell, and Anthony W. Knapp.
Denumerable Markov Chains (Graduate Texts in Mathematics).
Springer, 1976.
[Lin95]
Huimin Lin. PAM: A process algebra manipulator. Formal Methods in System Design, 7(3):243–259, 1995.
[LL85]
Suzanne M. Lenhart and Yu-Chung Liao. Integro-differential
equations associated with optimal stopping time of a piecewisedeterministic process. Stochastics, 15(3):183–207, 1985.
[LLT90]
Azeddine Lazrek, Pierre Lescanne, and Jean-Jacques Thiel. Tools
for proving inductive equalities, relative completeness, and ωcompleteness. Inf. Comput., 84(1):47–70, 1990.
[LLW95]
François Laroussinie, Kim Guldstrand Larsen, and Carsten
Weise. From timed automata to logic - and back. In Jirı́ Wiedermann and Petr Hájek, editors, MFCS, volume 969 of Lecture
Notes in Computer Science, pages 529–539. Springer, 1995.
[LMS04]
François Laroussinie, Nicolas Markey, and Ph. Schnoebelen.
Model checking timed automata with one or two clocks. In CONCUR, pages 387–401, 2004.
[LMST03]
Ruggero Lanotte, Andrea Maggiolo-Schettini, and Angelo Troina.
Weak bisimulation for probabilistic timed automata and applications to security. In SEFM, pages 34–43. IEEE Computer Society,
2003.
[LS91]
Kim Guldstrand Larsen and Arne Skou. Bisimulation through
probabilistic testing. Inf. Comput., 94(1):1–28, 1991.
[LS00]
Francesca Levi and Davide Sangiorgi. Controlling interference in
ambients. In POPL, pages 352–364, 2000.
[LS07]
François Laroussinie and Jeremy Sproston. State explosion
in almost-sure probabilistic reachability. Inf. Process. Lett.,
102(6):236–241, 2007.
[Lut06]
Bas Luttik. What is algebraic in process theory. Bulletin of the
EATCS, 88:66–83, 2006.
[LY91]
Suzanne M. Lenhart and Naoki Yamada. Perron’s method for viscosity solutions associated with piecewise-deterministic processes.
Funkcialaj Ekvacioj, 34:173–186, 1991.
[LY97]
Kim Guldstrand Larsen and Wang Yi. Time-abstracted bisimulation: Implicit specifications and decidability. Inf. Comput.,
134(2):75–101, 1997.
[Lyn51]
R. Lyndon. Identities in two-valued calculi. Transactions of the
American Mathematical Society, 71:457–465, 1951.
[Mau91]
Sjouke Mauw. PSF – A Process Specification Formalism. PhD
thesis, University of Amsterdam, December 1991.
[McK96]
R. McKenzie. Tarski’s finite basis problem is undecidable. International Journal of Algebra and Computation, 6(1):49–104, 1996.
[Mil80]
Robin Milner. A Calculus of Communicating Systems, volume 92
of Lecture Notes in Computer Science. Springer, 1980.
[Mil84]
Robin Milner. A complete inference system for a class of regular
behaviours. J. Comput. Syst. Sci., 28(3):439–466, 1984.
[Mil89a]
Robin Milner. Communication and Concurrency. Prentice Hall,
1989.
[Mil89b]
Robin Milner. A complete axiomatisation for observational congruence of finite-state behaviors. Inf. Comput., 81(2):227–247,
1989.
[Mil90]
Robin Milner. Operational and algebraic semantics of concurrent
processes. In Handbook of Theoretical Computer Science, Volume
B: Formal Models and Semantics (B), pages 1201–1242. 1990.
[ML07]
Sayan Mitra and Nancy A. Lynch. Trace-based semantics for
probabilistic timed I/O automata. In Bemporad et al. [BBB07b],
pages 718–722.
[MMT87]
R. McKenzie, G. McNulty, and W. Taylor. Algebras, Varieties,
Lattices. Wadsworth & Brooks/Cole, 1987.
[Mol89]
F. Moller. Axioms for Concurrency. PhD thesis, Department of
Computer Science, University of Edinburgh, July 1989. Report
CST-59-89. Also published as ECS-LFCS-89-84.
[Mol90a]
Faron Moller. The importance of the left merge operator in process algebras. In Mike Paterson, editor, ICALP, volume 443
of Lecture Notes in Computer Science, pages 752–764. Springer,
1990.
[Mol90b]
Faron Moller. The nonexistence of finite axiomatisations for CCS
congruences. In LICS, pages 142–153. IEEE Computer Society,
1990.
[MPW92a]
Robin Milner, Joachim Parrow, and David Walker. A calculus of
mobile processes, I. Inf. Comput., 100(1):1–40, 1992.
[MPW92b]
Robin Milner, Joachim Parrow, and David Walker. A calculus of
mobile processes, II. Inf. Comput., 100(1):41–77, 1992.
[MTHM97]
Robin Milner, Mads Tofte, Robert Harper, and David MacQueen.
The Definition of Standard ML (Revised). MIT Press, 1997.
[Mur65]
V.L. Murskiı̆. The existence in the three-valued logic of a closed
class with a finite basis having no finite complete system of identities. Doklady Akademii Nauk SSSR, 163:815–818, 1965. In Russian.
[Mur75]
V.L. Murskiı̆. The existence of a finite basis of identities, and
other properties of “almost all” finite algebras. Problemy Kibernetiki, 30:43–56, 1975. In Russian.
[NH84]
Rocco De Nicola and Matthew Hennessy. Testing equivalences
for processes. Theor. Comput. Sci., 34:83–133, 1984.
[OW08]
Joël Ouaknine and James Worrell. Some recent results in metric temporal logic. In Franck Cassez and Claude Jard, editors,
FORMATS, volume 5215 of Lecture Notes in Computer Science,
pages 1–13. Springer, 2008.
[Pan01]
Prakash Panangaden. Measure and probability for concurrency
theorists. Theor. Comput. Sci., 253(2):287–309, 2001.
[Par81]
David Park. Concurrency and automata on infinite sequences. In
Peter Deussen, editor, Theoretical Computer Science, volume 104
of Lecture Notes in Computer Science, pages 167–183. Springer,
1981.
[Phi87]
Iain Phillips. Refusal testing. Theor. Comput. Sci., 50:241–284,
1987.
[Phi08]
Iain Phillips. CCS with priority guards. J. Log. Algebr. Program.,
75(1):139–165, 2008.
[Plo74]
Gordon D. Plotkin. The lambda-calculus is ω-incomplete. J.
Symb. Log., 39(2):313–317, 1974.
[Plo04a]
Gordon D. Plotkin. The origins of structural operational semantics. J. Log. Algebr. Program., 60-61:3–15, 2004.
[Plo04b]
Gordon D. Plotkin. A structural approach to operational semantics. J. Log. Algebr. Program., 60-61:17–139, 2004.
[Pnu77]
A. Pnueli. The temporal logic of programs. In FOCS, pages
46–57, Los Alamitos, CA, 1977. Computer Society Press.
[PT87]
Robert Paige and Robert Endre Tarjan. Three partition refinement algorithms. SIAM J. Comput., 16(6):973–989, 1987.
[Put94]
Martin L. Puterman. Markov Decision Processes: Discrete
Stochastic Dynamic Programming. Wiley, New York, 1994.
[RB81]
William C. Rounds and Stephen D. Brookes. Possible futures,
acceptances, refusals, and communicating processes. In FOCS,
pages 140–149. IEEE, 1981.
[RV07]
Arend Rensink and Walter Vogler. Fair testing. Inf. Comput.,
205(2):125–198, 2007.
[Sew97]
Peter Sewell. Nonaxiomatisability of equivalences over finite state
processes. Annals of Pure and Applied Logic, 90(1–3):163–191, 1997.
[Spr04]
Jeremy Sproston. Model checking for probabilistic timed systems.
In Baier et al. [BHH+ 04], pages 189–229.
[Sto02]
Mariëlle Stoelinga. An introduction to probabilistic automata.
Bulletin of the EATCS, 78:176–198, 2002.
[Thi95]
P. S. Thiagarajan, editor. Foundations of Software Technology
and Theoretical Computer Science, 15th Conference, Bangalore,
India, December 18-20, 1995, Proceedings, volume 1026 of Lecture
Notes in Computer Science. Springer, 1995.
[Tho97]
Wolfgang Thomas. Languages, automata, and logic. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages,
volume III, pages 389–455. Springer, New York, 1997.
[TY01]
Stavros Tripakis and Sergio Yovine. Analysis of timed systems
using time-abstracting bisimulations. Formal Methods in System
Design, 18(1):25–68, 2001.
[Var85]
Moshe Y. Vardi. Automatic verification of probabilistic concurrent finite-state programs. In FOCS, pages 327–338, 1985.
[vG90]
Rob J. van Glabbeek. The linear time-branching time spectrum
(extended abstract). In Jos C. M. Baeten and Jan Willem Klop,
editors, CONCUR, volume 458 of Lecture Notes in Computer Science, pages 278–297. Springer, 1990.
[vG93a]
Rob J. van Glabbeek. A complete axiomatization for branching
bisimulation congruence of finite-state behaviours. In Andrzej M.
Borzyszkowski and Stefan Sokolowski, editors, MFCS, volume 711
of Lecture Notes in Computer Science, pages 473–484. Springer,
1993.
[vG93b]
Rob J. van Glabbeek. The linear time - branching time spectrum
II. In Eike Best, editor, CONCUR, volume 715 of Lecture Notes
in Computer Science, pages 66–81. Springer, 1993.
[vG97]
Rob J. van Glabbeek. Notes on the methodology of CCS and
CSP. Theor. Comput. Sci., 177(2):329–349, 1997.
[vG01]
Rob J. van Glabbeek. The linear time-branching time spectrum I.
In Jan Bergstra, Alban Ponse, and Scott Smolka, editors, Handbook of Process Algebra, pages 3–99. Elsevier Science, 2001.
[vGV06]
Rob J. van Glabbeek and Marc Voorhoeve. Liveness, fairness
and impossible futures. In Christel Baier and Holger Hermanns,
editors, CONCUR, volume 4137 of Lecture Notes in Computer
Science, pages 126–141. Springer, 2006.
[vGW96]
Rob J. van Glabbeek and W. P. Weijland. Branching time and
abstraction in bisimulation semantics. J. ACM, 43(3):555–600,
1996.
[VM01]
Marc Voorhoeve and Sjouke Mauw. Impossible futures and determinism. Inf. Process. Lett., 80(1):51–58, 2001.
[Vog92]
Walter Vogler. Modular Construction and Partial Order Semantics of Petri Nets, volume 625 of Lecture Notes in Computer Science. Springer, 1992.
Summary
This dissertation consists of two parts.
Part I concentrates on the axiomatizability of process algebras, mostly on two
basic process algebras BCCSP and BCCS.
Chapter 3 presents two meta-theorems regarding axiomatizability. The
first one concerns the relationship between preorders and equivalences. We show
that the algorithm proposed by Aceto et al. and de Frutos Escrig et al.
for concrete semantics, which transforms an axiomatization for a preorder into
one for the corresponding equivalence, applies equally well to weak semantics. This makes it applicable to all 87 preorders surveyed in the “linear time –
branching time spectrum II” that are at least as coarse as the ready simulation
preorder. We also extend the scope of the algorithm to infinite processes, by
adding recursion constants. The second meta-theorem concerns the relationship
between concrete and weak semantics. For any semantics which is not finer than
failures or impossible futures semantics, we provide an algorithm to transform
an axiomatization for the concrete version into one for the weak counterpart. As an application of this algorithm, we derive ground- and ω-complete
axiomatizations for the weak failure, weak completed trace, and weak trace preorders.
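To give a flavour of the connection exploited here (a simplified illustration only, not the algorithm itself): if an inequational axiom t ≼ u is sound for, say, the failure preorder, then the equation t + u ≈ u is sound for failure equivalence, because soundness of t ≼ u forces both the nonempty-trace failures and the initial actions of t to be subsumed by those of u, so that t + u and u have exactly the same failures. Correspondences of this kind allow an inequational axiomatization to be traded for an equational one.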
Chapter 4 settles the remaining open questions regarding the existence of
ω-complete axiomatizations in the setting of the process algebra BCCSP for all
the semantics in the linear time – branching time spectrum I, either positively
by giving a finite, sound and ground-complete axiomatization which turns out to
be ω-complete, or negatively by proving that such a finite basis of the equational
theory does not exist. We prove that in case of a finite alphabet with at least
two actions, failure semantics affords a finite basis, while for ready simulation,
completed simulation, simulation, possible worlds, ready trace, failure trace
and ready semantics, such a finite basis does not exist. Completed simulation
semantics also lacks a finite basis in case of an infinite alphabet of actions.
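For orientation, recall the standard terminology used here: an axiomatization E is ω-complete if every equation between open terms all of whose closed instances are derivable from E is itself derivable from E, that is, if E ⊢ σ(t) ≈ σ(u) for every closed substitution σ implies E ⊢ t ≈ u. A finite basis is then a finite axiomatization that is sound, ground-complete and ω-complete for the semantics under consideration.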
Chapter 5 investigates the (in)equational theories of concrete and weak impossible futures semantics over the process algebras BCCSP and BCCS. We
present a finite, sound, ground-complete axiomatization for BCCSP modulo
the concrete impossible futures preorder, which implies a finite, sound, ground-complete axiomatization for BCCS modulo the weak impossible futures preorder. By contrast, we prove that no finite, sound axiomatization for BCCS
modulo the weak impossible futures equivalence is ground-complete, and this
negative result carries over to the concrete case. If the alphabet of actions is
infinite, then the aforementioned ground-complete axiomatizations are shown
to be ω-complete. However, if the alphabet is finite and nonempty, we prove
that the inequational (resp. equational) theories of BCCSP and BCCS modulo
the impossible futures preorder (resp. equivalence) lack such a finite basis. Finally, we show that the negative result regarding impossible futures equivalence
extends to all n-nested impossible futures equivalences for n ≥ 2, and to all
n-nested impossible futures preorders for n ≥ 3.
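For intuition, a standard example of the distinguishing power involved (assuming the usual definition of an impossible future as a pair (σ, X) such that the process can perform the trace σ and thereby reach a state from which no trace in X is possible): the processes ab0 + ac0 and a(b0 + c0) have the same traces, yet (a, {c}) is an impossible future of the former, via the summand ab0, and not of the latter, whose only a-derivative still admits the trace c.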
Chapter 6 studies the equational theory of bisimulation equivalence over the
process algebra BCCSPΘ, i.e., BCCSP extended with the priority operator Θ
of Baeten et al. It is proven that, in the presence of an infinite set of actions,
bisimulation equivalence has no finite, sound, ground-complete axiomatization
over that language. This negative result applies even if the syntax is extended
with an arbitrary collection of auxiliary operators, and motivates the study of
axiomatizations using equations with action predicates as conditions. In the
presence of an infinite set of actions, it is shown that, in general, bisimulation
equivalence has no finite, sound, ground-complete axiomatization consisting of
equations with action predicates as conditions over BCCSPΘ. Finally, sufficient
conditions on the priority structure over actions are identified that lead to a
finite, sound, ground-complete axiomatization of bisimulation equivalence using
equations with action predicates as conditions.
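To indicate the format of such conditional equations, consider the following illustrative instance (assuming the standard operational rules for Θ, under which an action is blocked whenever an action of higher priority is enabled; it is not claimed to be one of the axioms of Chapter 6): if a < b, then Θ(a.x + b.y) ≈ Θ(b.y). This equation is sound modulo bisimulation, since in Θ(a.x + b.y) the summand a.x is pre-empted by the enabled higher-priority action b, so both sides can only perform b and then behave as Θ(y).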
Part II concentrates on the verification of probabilistic real-time systems.
Chapter 7 studies the following problem: given a continuous-time Markov
chain C, and a linear real-time property provided as a deterministic timed automaton A, what is the probability of the set of paths of C that are accepted
by A? It is shown that this set of paths is measurable and that computing its probability can be reduced to computing the reachability probability in a piecewise
deterministic Markov process. The reachability probability is characterized as
the least solution of a system of integral equations, and can be approximated by solving a system of partial differential equations. For the special case
of single-clock deterministic timed automata, the system of integral equations
can be transformed into a system of linear equations where the coefficients are
solutions of ordinary differential equations.
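Schematically, and only to convey the shape of this characterization (a simplified single-clock situation in hypothetical notation, not the precise system derived in Chapter 7): writing R(s, s′) for the transition rates and E(s) for the exit rates of C, and assuming the automaton merely lets its clock x advance towards a bound c, the probability of interest satisfies equations of the form

Prob(s, x) = Σ_{s′} ∫_0^{c−x} R(s, s′) e^{−E(s)τ} Prob(s′, x + τ) dτ + (terms for the edges of A whose guards become enabled),

a Volterra-type system whose least solution is the sought probability.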
Chapter 8 focuses on probabilistic timed automata, an extension of timed
automata with discrete probabilistic branchings. As the region construction
of these automata often leads to an exponential blow-up in the size of the original automaton, reduction techniques are of the utmost importance. In this
chapter, we investigate probabilistic time-abstracting bisimulation (PTaB), an
equivalence notion that abstracts from exact time delays. PTaB is proven to
preserve properties expressed in probabilistic computation tree logic. The region equivalence is a (very
refined) PTaB. Furthermore, we provide a non-trivial adaptation of the traditional partition-refinement algorithm to compute the quotient under the PTaB.
This algorithm is symbolic in the sense that equivalence classes are represented
as polyhedra.
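To convey the flavour of partition refinement in its simplest, enumerative form, the following sketch computes strong bisimilarity on a finite labelled transition system by repeatedly splitting blocks according to signatures. (The algorithm of Chapter 8 is considerably more involved: it compares probability distributions lifted to blocks and, since the state space of a probabilistic timed automaton is uncountable, it cannot enumerate states at all, which is why equivalence classes are represented and split symbolically as polyhedra. The Python names below are purely illustrative.)

from collections import defaultdict

def bisimulation_classes(states, transitions):
    """Partition refinement on a finite LTS; transitions is a set of (source, action, target) triples."""
    succ = defaultdict(set)
    for s, a, t in transitions:
        succ[s].add((a, t))
    block_of = {s: 0 for s in states}              # start from the trivial partition
    while True:
        # signature of a state: which actions lead into which current blocks
        sig = {s: frozenset((a, block_of[t]) for (a, t) in succ[s]) for s in states}
        ids = {}
        new_block_of = {s: ids.setdefault(sig[s], len(ids)) for s in states}
        if len(ids) == len(set(block_of.values())):
            return new_block_of                    # partition stable: this is bisimilarity
        block_of = new_block_of

# Example: s0 and s2 end up in the same block, s1 in another.
print(bisimulation_classes({"s0", "s1", "s2"},
                           {("s0", "a", "s1"), ("s2", "a", "s1")}))

In Chapter 8 the role of this per-state signature computation is taken over by symbolic splitting, precisely because the blocks cannot be enumerated state by state.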
Nederlandse Samenvatting
(Dutch summary, translated from the English summary by Wan Fokkink.)
Klokken, Dobbelen en Processen
Het proefschrift bestaat uit twee delen.
Deel I concentreert zich op de axiomatizeerbaarheid van procesalgebra’s, met
name op twee basale procesalgebra’s BCCSP en BCCS.
Hoofdstuk 3 bevat twee meta-stellingen betreffende axiomatizeerbaarheid.
De eerste richt zich op de relatie tussen preorders en equivalenties. We tonen
aan dat het algoritme van Aceto et al. en de Frutos Escrig et al. voor concrete
semantieken, dat een axiomatizering voor een preorder omzet in een axiomatizering voor de corresponderende equivalentie, ook van toepassing is op zwakke
semantieken. Dit maakt het van toepassing op alle 87 preorders in het “lineaire
tijd – splitsende tijd spectrum II” die minstens zo grof zijn als de ready simulation preorder. We breiden de scope van het algoritme ook uit tot oneindige
processen, door toevoeging van recursie-constanten. De tweede meta-stelling betreft de relatie tussen concrete en zwakke semantieken. Voor iedere semantiek
die niet fijner is dan failures of impossible futures semantiek, geven wij een algoritme om een axiomatizering voor de concrete versie om te zetten naar een
axiomatizering voor de zwakke tegenhanger. Als een toepassing van dit algoritme leiden we grond- en ω-complete axiomatizeringen af voor weak failure,
weak completed trace, en weak trace preorder.
Hoofdstuk 4 beantwoordt alle openstaande vragen wat betreft het bestaan
van ω-complete axiomatizeringen in de context van de procesalgebra BCCSP,
voor de semantieken in het “lineaire tijd – splitsende tijd spectrum I”. De
antwoorden zijn ofwel positief door een eindige, grond-complete axiomatizering
te geven die ook ω-compleet is, ofwel negatief door te bewijzen dat een dergelijke eindige basis voor de equationele theorie niet bestaat. We bewijzen dat er
bij een eindig alfabet van tenminste twee acties, een eindige basis is voor failure semantiek, terwijl voor ready simulation, completed simulation, simulation,
possible worlds, ready trace, failure trace en ready semantiek, zo’n eindige basis niet bestaat. Completed simulation semantiek heeft ook geen eindige basis
bij een oneindig alfabet van acties.
Hoofdstuk 5 onderzoekt de equationele theorieën voor concrete en zwakke
impossible futures semantiek over de procesalgebra’s BCCSP en BCCS. We
presenteren een eindige, grond-complete axiomatizering voor BCCSP modulo de
concrete impossible futures preorder, wat een eindige, grond-complete axiomatizering impliceert voor BCCS modulo de zwakke impossible futures preorder.
In contrast hiermee bewijzen we dat er geen eindige, grond-complete axiomatizering bestaat voor BCCS modulo de zwakke impossible futures equivalentie; dit
negatieve resultaat is overdraagbaar naar het concrete geval. Indien het alfabet
van acties oneindig is, tonen we aan dat de voornoemde grond-complete axiomatizeringen ω-compleet zijn. Echter, indien het alfabet eindig is en niet leeg,
dan bewijzen we dat de inequationele (resp. equationele) theorieën van BCCSP
en BCCS modulo impossible futures preorder (resp. equivalentie) zo’n eindige
basis ontberen. Tenslotte tonen we aan dat het negatieve resultaat betreffende
impossible futures equivalentie ook van toepassing is op alle n-nested impossible futures equivalenties voor n ≥ 2, en op alle n-nested impossible futures
preorders voor n ≥ 3.
Hoofdstuk 6 bestudeert de equationele theorie van bisimulatie equivalentie
voor de procesalgebra BCCSPΘ, oftewel, BCCSP uitgebreid met de prioriteitsoperator Θ van Baeten et al. Er wordt bewezen dat in aanwezigheid van een
oneindig alfabet, bisimulatie equivalentie geen eindige, grond-complete axiomatizering heeft over deze procesalgebra. Dit negatieve resultaat is zelfs van
toepassing als de syntax wordt uitgebreid met een willekeurige collectie hulpoperatoren, en motiveert de studie van axiomatizeringen die gebruik maken
van vergelijkingen met actie-predicaten als condities. In aanwezigheid van een
oneindig alfabet, wordt aangetoond dat bisimulatie equivalentie in het algemeen
geen eindige, grond-complete axiomatizering heeft over BCCSPΘ van vergelijkingen met actie-predicaten als condities. Tenslotte worden condities gegeven
op de prioriteits-structuur over acties, die aanleiding geven tot een eindige,
grond-complete axiomatizering van bisimulatie equivalentie, waarbij de vergelijkingen zijn voorzien van actie-predicaten als condities.
Deel II concentreert zich op de verificatie van probabilistische real-time systemen.
Hoofdstuk 7 bestudeert het volgende probleem: gegeven een continuous-time Markov-keten C, en een lineaire real-time eigenschap weergegeven als een
deterministische getimede automaat A, wat is de kansgrootte van de verzameling paden van C die worden geaccepteerd door A? Aangetoond wordt dat
deze verzameling paden meetbaar is, en dat het berekenen van zijn kans kan
worden gereduceerd tot het berekenen van de kans op bereikbaarheid in een
stuksgewijs deterministisch Markov-proces. De kans op bereikbaarheid wordt
gekarakterizeerd als de kleinste oplossing van een systeem van integrale vergelijkingen, en wordt benaderd door het oplossen van een systeem van partiële
differentiaalvergelijkingen. Voor het speciale geval van enkele-klok deterministische getimede automaten, blijkt dat het systeem van integrale vergelijkingen
kan worden getransformeerd naar een systeem van lineaire vergelijkingen waarin
de coëfficiënten oplossingen zijn van gewone differentiaalvergelijkingen.
Hoofdstuk 8 richt zich op probabilistische getimede automaten, een uitbreiding van getimede automaten met discrete probabilistische splitsingen. Aangezien
de constructie van gebieden voor deze automaten vaak leidt tot een exponentiële
toename van de omvang van de originele automaat, zijn reductietechnieken van
het grootste belang. In dit hoofdstuk onderzoeken we probabilistische tijd-abstraherende bisimulatie (PTaB), een equivalentie-notie die abstraheert van
precieze tijdsmomenten. Er wordt bewezen dat PTaB de probabilistische computation tree logica behoudt. De gebiedsequivalentie is een (zeer verfijnde)
PTaB. We geven ook een niet-triviale aanpassing van het traditionele partitie-verfijningsalgoritme om het quotiënt onder de PTaB te berekenen. Dit algoritme is symbolisch in de zin dat equivalentie-klassen worden gerepresenteerd
als polyhedra.
Curriculum Vitae
Taolue Chen was born on January 26th, 1980 in Taizhou, Jiangsu province,
China. In 2002, he graduated with a bachelor's degree in computer science from
Nanjing University in Nanjing, Jiangsu province, China. In 2005, he received
his master's degree, also in computer science, from the same university, with
a thesis on mobile process algebras, under the supervision of prof. dr. Jian Lu.
In July 2005, Chen started his PhD study in the Software Engineering Group
2 (Specification and Analysis of Embedded Systems) at the Centrum voor Wiskunde
en Informatica (CWI) in Amsterdam, The Netherlands. Under the supervision of prof. dr. Wan Fokkink (Vrije Universiteit Amsterdam) and prof. dr.
Jaco van de Pol (University of Twente), he worked on concurrency theory
and formal methods, in particular, theory of process algebras and verification
of probabilistic real-time systems. His work was carried out under the auspices
of the BSIK/BRICKS project (Basic Research in Informatics for Creating the
Knowledge Society). The present dissertation contains the results of this work.
Titles in the IPA Dissertation Series since 2005
E. Ábrahám.
An Assertional
Proof System for Multithreaded Java
- Theory and Tool Support -. Faculty
of Mathematics and Natural Sciences,
UL. 2005-01
O. Tveretina. Decision Procedures
for Equality Logic with Uninterpreted
Functions. Faculty of Mathematics
and Computer Science, TU/e. 2005-10
R. Ruimerman. Modeling and Remodeling in Bone Tissue. Faculty of
Biomedical Engineering, TU/e. 2005-02
A.M.L. Liekens. Evolution of Finite Populations in Dynamic Environments. Faculty of Biomedical Engineering, TU/e. 2005-11
C.N. Chong. Experiments in Rights
Control - Expression and Enforcement. Faculty of Electrical Engineering, Mathematics & Computer Science, UT. 2005-03
H. Gao. Design and Verification of
Lock-free Parallel Algorithms. Faculty of Mathematics and Computing
Sciences, RUG. 2005-04
H.M.A. van Beek. Specification
and Analysis of Internet Applications.
Faculty of Mathematics and Computer Science, TU/e. 2005-05
M.T. Ionita. Scenario-Based System Architecting - A Systematic Approach to Developing Future-Proof
System Architectures.
Faculty of
Mathematics and Computing Sciences, TU/e. 2005-06
G. Lenzini. Integration of Analysis Techniques in Security and FaultTolerance. Faculty of Electrical Engineering, Mathematics & Computer
Science, UT. 2005-07
J. Eggermont. Data Mining using
Genetic Programming: Classification
and Symbolic Regression. Faculty of
Mathematics and Natural Sciences,
UL. 2005-12
B.J. Heeren. Top Quality Type Error Messages. Faculty of Science, UU.
2005-13
G.F. Frehse. Compositional Verification of Hybrid Systems using Simulation Relations. Faculty of Science,
Mathematics and Computer Science,
RU. 2005-14
M.R. Mousavi. Structuring Structural Operational Semantics. Faculty
of Mathematics and Computer Science, TU/e. 2005-15
A. Sokolova. Coalgebraic Analysis
of Probabilistic Systems. Faculty of
Mathematics and Computer Science,
TU/e. 2005-16
I. Kurtev. Adaptability of Model
Transformations. Faculty of Electrical Engineering, Mathematics &
Computer Science, UT. 2005-08
T. Gelsema. Effective Models for
the Structure of pi-Calculus Processes
with Replication. Faculty of Mathematics and Natural Sciences, UL.
2005-17
T. Wolle. Computational Aspects of
Treewidth - Lower Bounds and Network Reliability. Faculty of Science,
UU. 2005-09
P. Zoeteweij.
Composing Constraint Solvers. Faculty of Natural Sciences, Mathematics, and Computer Science, UvA. 2005-18
J.J. Vinju. Analysis and Transformation of Source Code by Parsing and Rewriting. Faculty of Natural Sciences, Mathematics, and Computer Science, UvA. 2005-19
M.Valero Espada. Modal Abstraction and Replication of Processes with
Data. Faculty of Sciences, Division of
Mathematics and Computer Science,
VUA. 2005-20
A. Dijkstra.
Stepping through
Haskell. Faculty of Science, UU.
2005-21
Y.W. Law. Key management and
link-layer security of wireless sensor
networks: energy-efficient attack and
defense. Faculty of Electrical Engineering, Mathematics & Computer
Science, UT. 2005-22
E. Dolstra. The Purely Functional
Software Deployment Model. Faculty
of Science, UU. 2006-01
R.J. Corin. Analysis Models for Security Protocols. Faculty of Electrical
Engineering, Mathematics & Computer Science, UT. 2006-02
P.R.A. Verbaan. The Computational Complexity of Evolving Systems. Faculty of Science, UU. 2006-03
K.L. Man and R.R.H. Schiffelers. Formal Specification and Analysis of Hybrid Systems. Faculty of
Mathematics and Computer Science
and Faculty of Mechanical Engineering, TU/e. 2006-04
M. Kyas. Verifying OCL Specifications of UML Models: Tool Support and Compositionality. Faculty
of Mathematics and Natural Sciences,
UL. 2006-05
M. Hendriks.
Model Checking
Timed Automata - Techniques and
Applications.
Faculty of Science,
Mathematics and Computer Science,
RU. 2006-06
J. Ketema. Böhm-Like Trees for
Rewriting. Faculty of Sciences, VUA.
2006-07
C.-B. Breunesse. On JML: topics in tool-assisted verification of JML
programs. Faculty of Science, Mathematics and Computer Science, RU.
2006-08
B. Markvoort. Towards Hybrid
Molecular Simulations. Faculty of
Biomedical Engineering, TU/e. 2006-09
S.G.R. Nijssen. Mining Structured
Data. Faculty of Mathematics and
Natural Sciences, UL. 2006-10
G. Russello. Separation and Adaptation of Concerns in a Shared Data
Space. Faculty of Mathematics and
Computer Science, TU/e. 2006-11
L. Cheung.
Reconciling Nondeterministic and Probabilistic Choices.
Faculty of Science, Mathematics and
Computer Science, RU. 2006-12
B. Badban. Verification techniques
for Extensions of Equality Logic. Faculty of Sciences, Division of Mathematics and Computer Science, VUA.
2006-13
A.J. Mooij. Constructive formal
methods and protocol standardization.
Faculty of Mathematics and Computer Science, TU/e. 2006-14
T. Krilavicius. Hybrid Techniques
for Hybrid Systems. Faculty of Electrical Engineering, Mathematics &
Computer Science, UT. 2006-15
M.E. Warnier. Language Based Security for Java and JML. Faculty of
Science, Mathematics and Computer
Science, RU. 2006-16
V. Sundramoorthy. At Home In
Service Discovery. Faculty of Electrical Engineering, Mathematics &
Computer Science, UT. 2006-17
B. Gebremichael. Expressivity of
Timed Automata Models. Faculty of
Science, Mathematics and Computer
Science, RU. 2006-18
L.C.M. van Gool.
Formalising
Interface Specifications. Faculty of
Mathematics and Computer Science,
TU/e. 2006-19
C.J.F. Cremers. Scyther - Semantics and Verification of Security Protocols. Faculty of Mathematics and
Computer Science, TU/e. 2006-20
J.V. Guillen Scholten. Mobile
Channels for Exogenous Coordination of Distributed Systems: Semantics, Implementation and Composition. Faculty of Mathematics and
Natural Sciences, UL. 2006-21
H.A. de Jong. Flexible Heterogeneous Software Systems. Faculty of
Natural Sciences, Mathematics, and
Computer Science, UvA. 2007-01
N.K. Kavaldjiev.
A run-time
reconfigurable Network-on-Chip for
streaming DSP applications. Faculty
of Electrical Engineering, Mathematics & Computer Science, UT. 2007-02
M. van Veelen. Considerations
on Modeling for Early Detection of
Abnormalities in Locally Autonomous
Distributed Systems.
Faculty of
Mathematics and Computing Sciences, RUG. 2007-03
T.D. Vu. Semantics and Applications of Process and Program Algebra.
Faculty of Natural Sciences, Mathematics, and Computer Science, UvA.
2007-04
L. Brandán Briones. Theories for
Model-based Testing: Real-time and
Coverage. Faculty of Electrical Engineering, Mathematics & Computer
Science, UT. 2007-05
I. Loeb. Natural Deduction: Sharing
by Presentation. Faculty of Science,
Mathematics and Computer Science,
RU. 2007-06
M.W.A. Streppel. Multifunctional
Geometric Data Structures. Faculty
of Mathematics and Computer Science, TU/e. 2007-07
N. Trčka. Silent Steps in Transition
Systems and Markov Chains. Faculty
of Mathematics and Computer Science, TU/e. 2007-08
R. Brinkman. Searching in encrypted data. Faculty of Electrical
Engineering, Mathematics & Computer Science, UT. 2007-09
A. van Weelden. Putting types to
good use. Faculty of Science, Mathematics and Computer Science, RU.
2007-10
J.A.R. Noppen. Imperfect Information in Software Development Processes. Faculty of Electrical Engineering, Mathematics & Computer
Science, UT. 2007-11
R. Boumen. Integration and Test
plans for Complex Manufacturing
Systems. Faculty of Mechanical Engineering, TU/e. 2007-12
A.J. Wijs.
What to do Next?:
Analysing and Optimising System Behaviour in Time. Faculty of Sciences,
Division of Mathematics and Computer Science, VUA. 2007-13
C.F.J. Lange. Assessing and Improving the Quality of Modeling: A
Series of Empirical Studies about the
UML. Faculty of Mathematics and
Computer Science, TU/e. 2007-14
T. van der Storm. Componentbased Configuration, Integration and
Delivery. Faculty of Natural Sciences, Mathematics, and Computer
Science,UvA. 2007-15
B.S. Graaf. Model-Driven Evolution
of Software Architectures. Faculty of
Electrical Engineering, Mathematics,
and Computer Science, TUD. 2007-16
A.H.J. Mathijssen. Logical Calculi
for Reasoning with Binding. Faculty
of Mathematics and Computer Science, TU/e. 2007-17
D. Jarnikov. QoS framework for
Video Streaming in Home Networks.
Faculty of Mathematics and Computer Science, TU/e. 2007-18
M. A. Abam. New Data Structures
and Algorithms for Mobile Data. Faculty of Mathematics and Computer
Science, TU/e. 2007-19
W. Pieters. La Volonté Machinale:
Understanding the Electronic Voting
Controversy.
Faculty of Science,
Mathematics and Computer Science,
RU. 2008-01
A.L. de Groot. Practical Automaton Proofs in PVS. Faculty of Science,
Mathematics and Computer Science,
RU. 2008-02
M. Bruntink. Renovation of Idiomatic Crosscutting Concerns in
Embedded Systems. Faculty of Electrical Engineering, Mathematics, and
Computer Science, TUD. 2008-03
A.M. Marin. An Integrated System
to Manage Crosscutting Concerns in
Source Code. Faculty of Electrical
Engineering, Mathematics, and Computer Science, TUD. 2008-04
N.C.W.M. Braspenning. Modelbased Integration and Testing of
High-tech Multi-disciplinary Systems.
Faculty of Mechanical Engineering,
TU/e. 2008-05
M. Bravenboer. Exercises in Free
Syntax: Syntax Definition, Parsing,
and Assimilation of Language Conglomerates. Faculty of Science, UU.
2008-06
M. Torabi Dashti. Keeping Fairness Alive: Design and Formal Verification of Optimistic Fair Exchange
Protocols. Faculty of Sciences, Division of Mathematics and Computer
Science, VUA. 2008-07
I.S.M. de Jong. Integration and
Test Strategies for Complex Manufacturing Machines. Faculty of Mechanical Engineering, TU/e. 2008-08
I. Hasuo. Tracing Anonymity with
Coalgebras. Faculty of Science, Mathematics and Computer Science, RU.
2008-09
L.G.W.A. Cleophas.
Tree Algorithms: Two Taxonomies and a
Toolkit. Faculty of Mathematics and
Computer Science, TU/e. 2008-10
I.S. Zapreev.
Model Checking
Markov Chains:
Techniques and
Tools. Faculty of Electrical Engineering, Mathematics & Computer Science, UT. 2008-11
M. Farshi. A Theoretical and Experimental Study of Geometric Networks. Faculty of Mathematics and
Computer Science, TU/e. 2008-12
G. Gulesir.
Evolvable Behavior Specifications Using Context-Sensitive Wildcards. Faculty of Electrical Engineering, Mathematics &
Computer Science, UT. 2008-13
E.H. de Graaf.
Mining Semistructured Data, Theoretical and Experimental Aspects of Pattern Evaluation. Faculty of Mathematics and
Natural Sciences, UL. 2008-22
F.D. Garcia. Formal and Computational Cryptography: Protocols,
Hashes and Commitments. Faculty of
Science, Mathematics and Computer
Science, RU. 2008-14
R. Brijder. Models of Natural Computation: Gene Assembly and Membrane Systems. Faculty of Mathematics and Natural Sciences, UL. 2008-23
P. E. A. Dürr. Resource-based Verification for Robust Composition of
Aspects. Faculty of Electrical Engineering, Mathematics & Computer
Science, UT. 2008-15
E.M. Bortnik. Formal Methods in
Support of SMC Design. Faculty
of Mechanical Engineering, TU/e.
2008-16
R.H. Mak. Design and Performance Analysis of Data-Independent
Stream Processing Systems. Faculty
of Mathematics and Computer Science, TU/e. 2008-17
M. van der Horst. Scalable Block
Processing Algorithms. Faculty of
Mathematics and Computer Science,
TU/e. 2008-18
C.M. Gray. Algorithms for Fat Objects: Decompositions and Applications. Faculty of Mathematics and
Computer Science, TU/e. 2008-19
J.R. Calamé. Testing Reactive Systems with Data - Enumerative Methods and Constraint Solving. Faculty
of Electrical Engineering, Mathematics & Computer Science, UT. 2008-20
E. Mumford. Drawing Graphs for
Cartographic Applications. Faculty of
Mathematics and Computer Science,
TU/e. 2008-21
A. Koprowski.
Termination of
Rewriting and Its Certification. Faculty of Mathematics and Computer
Science, TU/e. 2008-24
U. Khadim. Process Algebras for
Hybrid Systems: Comparison and
Development. Faculty of Mathematics and Computer Science, TU/e.
2008-25
J. Markovski. Real and Stochastic
Time in Process Algebras for Performance Evaluation. Faculty of Mathematics and Computer Science, TU/e.
2008-26
H. Kastenberg. Graph-Based Software Specification and Verification.
Faculty of Electrical Engineering,
Mathematics & Computer Science,
UT. 2008-27
I.R. Buhan. Cryptographic Keys
from Noisy Data Theory and Applications. Faculty of Electrical Engineering, Mathematics & Computer Science, UT. 2008-28
R.S. Marin-Perianu.
Wireless
Sensor Networks in Motion: Clustering Algorithms for Service Discovery
and Provisioning. Faculty of Electrical Engineering, Mathematics &
Computer Science, UT. 2008-29
M.H.G. Verhoef. Modeling and
Validating Distributed Embedded
Real-Time Control Systems. Faculty
of Science, Mathematics and Computer Science, RU. 2009-01
M. de Mol. Reasoning about Functional Programs: Sparkle, a proof assistant for Clean. Faculty of Science,
Mathematics and Computer Science,
RU. 2009-02
M. Lormans. Managing Requirements Evolution. Faculty of Electrical Engineering, Mathematics, and
Computer Science, TUD. 2009-03
M.P.W.J. van Osch. Automated
Model-based Testing of Hybrid Systems. Faculty of Mathematics and
Computer Science, TU/e. 2009-04
H. Sozer.
Architecting FaultTolerant Software Systems. Faculty
of Electrical Engineering, Mathematics & Computer Science, UT. 2009-05
M.J. van Weerdenburg. Efficient
Rewriting Techniques.
Faculty of
Mathematics and Computer Science,
TU/e. 2009-06
H.H. Hansen. Coalgebraic Modelling: Applications in Automata
Theory and Modal Logic. Faculty
of Sciences, Division of Mathematics
and Computer Science, VUA. 2009-07
A. Mesbah. Analysis and Testing
of Ajax-based Single-page Web Applications. Faculty of Electrical Engineering, Mathematics, and Computer
Science, TUD. 2009-08
A.L. Rodriguez Yakushev. Towards Getting Generic Programming
Ready for Prime Time. Faculty of
Science, UU. 2009-09
K.R. Olmos Joffré. Strategies for
Context Sensitive Program Transformation. Faculty of Science, UU. 2009-10
J.A.G.M. van den Berg. Reasoning about Java programs in PVS using
JML. Faculty of Science, Mathematics and Computer Science, RU. 2009-11
M.G. Khatib. MEMS-Based Storage Devices. Integration in Energy-Constrained Mobile Systems. Faculty
of Electrical Engineering, Mathematics & Computer Science, UT. 2009-12
S.G.M. Cornelissen.
Evaluating Dynamic Analysis Techniques for
Program Comprehension. Faculty of
Electrical Engineering, Mathematics,
and Computer Science, TUD. 2009-13
D. Bolzoni. Revisiting Anomaly-based Network Intrusion Detection
Systems. Faculty of Electrical Engineering, Mathematics & Computer
Science, UT. 2009-14
H.L. Jonker. Security Matters: Privacy in Voting and Fairness in Digital
Exchange. Faculty of Mathematics
and Computer Science, TU/e. 2009-15
M.R. Czenko. TuLiP - Reshaping Trust Management. Faculty of
Electrical Engineering, Mathematics
& Computer Science, UT. 2009-16
T. Chen. Clocks, Dice and Processes. Faculty of Sciences, Division
of Mathematics and Computer Science, VUA. 2009-17