Decidability of Linear Affine Logic
A. P. Kopylov
Department of Mathematics and Mechanics
Moscow State University
119899, Moscow, Russia
Abstract
Propositional linear logic is known to be undecidable. In the current paper we prove that full propositional linear affine logic, containing all the multiplicatives, additives, exponentials, and constants, is decidable. The proof is based on a reduction of linear affine logic to sequents of specific "normal forms", and on a generalization of Kanovich's computational interpretation of linear logic adapted to these "normal forms".
1 Introduction and Summary
Linear logic was introduced by Girard [1]. The undecidability of linear logic is established in [5]. Linear affine logic is linear logic with the weakening rule [6]. We abbreviate linear logic and linear affine logic as LL and LLW, respectively. The decidability problem for linear affine logic remained open.
In the current paper the decidability proof for linear affine logic is based on three results: firstly, the entire LLW is reduced (in the sense of Turing) to a certain fragment of it (the normal fragment). Secondly, the derivability of a normal-form sequent is characterized in terms of vector games. Thirdly, by means of this computational interpretation, we prove that the normal fragment of LLW is decidable.
At the same time we prove that the entire linear logic is also reduced to its normal fragment and that derivability in this fragment is characterized in terms of analogous games.
The inference rules of linear logic are shown in Table 1a, and the weakening rule in Table 1b.
Theorem 1 (Girard) The cut rule can be eliminated both in LL and in LLW.
2 Normal fragment and its computational interpretation
Now we give the definition of the normal fragment. The normal fragment is an expansion of the ⊕-Horn fragment [2, 3]. Let us recall some definitions from [2]. (The research described in this publication was made possible in part by Grant No. NFQ000 from the International Science Foundation.)
Definitions A simple product is a tensor product of literals and the constant 1 (for example: 1, p, p ⊗ q). A Horn implication is an implication of the form X ⊸ Y, where X and Y are simple products. A ⊕-Horn implication is an implication of the form X ⊸ (Y ⊕ Z), where X, Y and Z are simple products.
Definitions A simple disjunction is a disjunction of the form X ⅋ Y, where X and Y are simple products. Horn implications, ⊕-Horn implications, and simple disjunctions are called normal formulas. A normal sequent is a sequent of the form
  W, !Γ ⇒ ?Δ,
where W is a simple product, Γ is a multiset of normal formulas, and Δ is a multiset of simple products. The normal fragment is the fragment containing only normal sequents.
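As a concrete illustration of these definitions (not taken from the paper; all names are hypothetical), here is a minimal Python sketch of one way to represent simple products and normal formulas over a fixed alphabet of n literals. A simple product is stored directly as its vector of exponents, anticipating the correspondence introduced below.

```python
from dataclasses import dataclass
from typing import Tuple

# A simple product p1^x1 (x) ... (x) pn^xn over a fixed alphabet of n literals
# is stored as its exponent vector (x1, ..., xn); the constant 1 is (0, ..., 0).
Product = Tuple[int, ...]

@dataclass(frozen=True)
class Horn:                 # Horn implication  X -o Y
    X: Product
    Y: Product

@dataclass(frozen=True)
class OplusHorn:            # (+)-Horn implication  X -o (Y1 (+) Y2)
    X: Product
    Y1: Product
    Y2: Product

@dataclass(frozen=True)
class SimpleDisj:           # simple disjunction  Y1 par Y2
    Y1: Product
    Y2: Product

@dataclass(frozen=True)
class NormalSequent:        # W, !Gamma => ?Delta
    W: Product
    Gamma: Tuple[object, ...]    # normal formulas
    Delta: Tuple[Product, ...]   # simple products

# Example over the alphabet (p, q): the normal sequent  p (x) q, !(p -o q) => ?(q (x) q)
example = NormalSequent(W=(1, 1), Gamma=(Horn(X=(1, 0), Y=(0, 1)),), Delta=((0, 2),))
```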
Let us give the definition of the interpretation of normal sequents. We begin with a remark: there is a simple correspondence between simple products and vectors with natural coordinates (cf. [3]). Namely, we can associate the n-dimensional vector (x₁, x₂, ..., xₙ) with the simple product p₁^{x₁} ⊗ p₂^{x₂} ⊗ ... ⊗ pₙ^{xₙ}. Here p^x = p ⊗ p ⊗ ... ⊗ p (x times) and p⁰ = 1. If X is a simple product, then X̃ denotes the corresponding vector.
Let Σ = (W, !Γ ⇒ ?Δ) be a normal sequent. Let us consider the following two single-person games associated with this sequent.
Game A_Σ
1. Initially, all vectors from Δ̃ are written on the blackboard.
2. We may write new vectors with natural coordinates by the following rules:

Table 1a: The inference rules of LL. (These are the standard two-sided sequent rules: the identity axiom (I) and the rule (Cut); the rules for linear negation; the multiplicative rules (L⊗), (R⊗), (L⅋), (R⅋), (L⊸), (R⊸), (L1), (R1), (L⊥), (R⊥); the additive rules (L&), (R&), (L⊕), (R⊕), (L0), (R⊤); and the exponential rules (L!), (R!), (W!), (C!), (L?), (R?), (W?), (C?).)
Table 1b: The weakening rule for LLW:
  (W)   from Γ' ⇒ Δ' infer Γ ⇒ Δ, where Γ' ⊆ Γ and Δ' ⊆ Δ.
Table 2a: The inference rules of NLL. (These comprise the axiom W ⇒ W; a rule replacing the simple product on the left by any simple product with the same associated vector; the rules (H), (H⊕), (H⅋) for Horn implications, ⊕-Horn implications, and simple disjunctions; the rules (M) and (Cut); and the exponential rules (L!), (W!), (C!), (R?), (W?), (C?).)
Table 2b: The weakening rule for NLLW:
  (W)   from W, Γ' ⇒ Δ' infer W ⊗ V, Γ ⇒ Δ, where Γ' ⊆ Γ and Δ' ⊆ Δ.

(a) If X ⊸ Y ∈ Γ and a vector a + Ỹ has already been written, for some vector a with natural coordinates, then we may write a + X̃.
(b) If X ⊸ (Y₁ ⊕ Y₂) ∈ Γ and vectors a + Ỹ₁ and a + Ỹ₂ have already been written, for some a ∈ ωⁿ, then we may write a + X̃.
(c) If Y₁ ⅋ Y₂ ∈ Γ and vectors a₁ + Ỹ₁ and a₂ + Ỹ₂ have already been written, for some a₁, a₂ ∈ ωⁿ, then we may write a₁ + a₂.
3. The aim of the game is to obtain the vector W̃.
Game B_Σ This game is the game A_Σ with the additional rule:
4. If a vector a has been written and a ≤ c, then we may write c. (We say that (x₁, ..., xₙ) ≤ (y₁, ..., yₙ) iff xᵢ ≤ yᵢ for all i.)
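The moves of the two games can be transcribed directly. The sketch below (illustrative Python with hypothetical names, building on the representation sketched earlier) computes the vectors that rules (a), (b) and (c) allow one to add to a set of already written vectors, together with the order used by rule 4 of the game B_Σ; it merely transcribes the moves and is not by itself a procedure for deciding whether the aim W̃ is reachable.

```python
from itertools import product as pairs

def vadd(a, b):
    return tuple(x + y for x, y in zip(a, b))

def vsub(a, b):
    """a - b if the difference is componentwise non-negative, else None."""
    d = tuple(x - y for x, y in zip(a, b))
    return d if all(x >= 0 for x in d) else None

def leq(a, b):
    """The order used by rule 4 of the game B: a <= c componentwise."""
    return all(x <= y for x, y in zip(a, b))

def one_step(written, horns, oplus_horns, disjs):
    """New vectors writable from `written` by one move of rule (a), (b) or (c).

    horns:       pairs (X, Y)         for Horn implications X -o Y
    oplus_horns: triples (X, Y1, Y2)  for (+)-Horn implications X -o (Y1 (+) Y2)
    disjs:       pairs (Y1, Y2)       for simple disjunctions Y1 par Y2
    All simple products are given by their exponent vectors.
    """
    wr, new = set(written), set()
    for X, Y in horns:                    # rule (a): a + Y written  =>  write a + X
        for v in wr:
            a = vsub(v, Y)
            if a is not None:
                new.add(vadd(a, X))
    for X, Y1, Y2 in oplus_horns:         # rule (b): a + Y1 and a + Y2 written  =>  write a + X
        for v in wr:
            a = vsub(v, Y1)
            if a is not None and vadd(a, Y2) in wr:
                new.add(vadd(a, X))
    for Y1, Y2 in disjs:                  # rule (c): a1 + Y1 and a2 + Y2 written  =>  write a1 + a2
        for v1, v2 in pairs(wr, repeat=2):
            a1, a2 = vsub(v1, Y1), vsub(v2, Y2)
            if a1 is not None and a2 is not None:
                new.add(vadd(a1, a2))
    return new - wr
```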
In the current section we prove the following
Theorem 2 (The computational interpretation) It is possible to reach the aim in the game A_Σ (B_Σ) if and only if the sequent Σ is derivable in LL (LLW).
Proof To prove the theorem, we introduce auxiliary logics NLL and NLLW (normal linear logic and normal linear logic with the weakening rule). These logics are a development of the calculus HLL, which was used in [2] for the justification of the computational interpretation of the ⊕-Horn fragment of linear logic. The inference rules of NLL are represented in Table 2a, and NLLW is NLL with the weakening rule (Table 2b).
In Tables 2a and 2b, U, V, W, X, Y, Z are simple products; Γ, Γ₁, Γ₂ are multisets of normal formulas and normal formulas prefixed with "!"; Δ, Δ₁, Δ₂ are multisets of simple products and simple products prefixed with "?"; and A is a normal formula. We define NLL and NLLW so that the set of normal sequents derivable in NLL (NLLW) is equal to the set of normal sequents derivable in LL (LLW). In order to verify this we use the following four lemmas.
Lemma 2.1 If Θ = P₁, ..., Pₙ is a multiset of simple products and the sequent
  Θ, Γ ⇒ Δ
is derivable in LL (LLW), then the sequent
  Θ^⊗, Γ ⇒ Δ    (1)
is derivable in NLL (NLLW), where Θ^⊗ = P₁ ⊗ ... ⊗ Pₙ.
Lemma 2.2 We can eliminate the rule (M) both in NLL and in NLLW.
Lemma 2.3 Let the sequent
  W, Γ, !Γ̃ ⇒ Δ, ?Δ̃    (2)
be derivable in NLL (NLLW) without the rule (M). Here Γ, Γ̃ are multisets of normal formulas and Δ, Δ̃ are multisets of simple products. If Σ = (W, !Γ, !Γ̃ ⇒ ?Δ, ?Δ̃), then it is possible to reach the aim in the game A_Σ (B_Σ).
Lemma 2.4 Let Σ be a normal sequent. Assume that one can write the vector W̃ by the rules of the game A_Σ (B_Σ). Then the sequent Σ is derivable in LL (LLW).
It follows from Lemmas 2.1-2.4 that the following three assertions are equivalent:
  LL ⊢ Σ,
  NLL ⊢ Σ,
  it is possible to reach the aim in the game A_Σ.
And, by analogy, for linear affine logic the following three assertions are equivalent:
  LLW ⊢ Σ,
  NLLW ⊢ Σ,
  it is possible to reach the aim in the game B_Σ.
Hence, Theorem 2 follows from Lemmas 2.1-2.4.
Now let us prove these lemmas by induction. For brevity, we prove these lemmas only for linear affine logic. In order to obtain the proofs for linear logic we need only omit the cases involving weakening.
Proof of Lemma 2.1 We prove the lemma by induction on the cut-free derivation of the sequent Θ, Γ ⇒ Δ in LLW.
Case 1. The last rule in the derivation is (L⊗). In this case Θ = P₁ ⊗ P₂, Θ' and the derivation has the form:
  P₁, P₂, Θ', Γ ⇒ Δ
  ────────────────────── (L⊗)
  P₁ ⊗ P₂, Θ', Γ ⇒ Δ
Hence, since (P₁, P₂, Θ')^⊗ = (P₁ ⊗ P₂, Θ')^⊗, by the induction hypothesis the sequent (1) is derivable in NLLW.
Case 2. The last rule in the derivation is (R⊗).

Table 3a: Case 2. Table 3b: Case 3.1. Table 3c: Case 3.2.
Table 3: The NLLW derivations used in Cases 2, 3.1 and 3.2 of the proof of Lemma 2.1.

In this case the derivation has the form:
  Θ₁, Γ₁ ⇒ Z₁, Δ₁    Θ₂, Γ₂ ⇒ Z₂, Δ₂
  ───────────────────────────────────── (R⊗)
  Θ₁, Θ₂, Γ₁, Γ₂ ⇒ Z₁ ⊗ Z₂, Δ₁, Δ₂
Hence, the sequent (1) is derivable in NLLW (see Table 3a).
Case 3. The last rule in the derivation is (L⊸). There are two subcases.
Case 3.1. The derivation has the form:
  Θ₁, Γ₁ ⇒ X, Δ₁    Y, Θ₂, Γ₂ ⇒ Δ₂
  ───────────────────────────────────── (L⊸)
  Θ₁, Θ₂, X ⊸ Y, Γ₁, Γ₂ ⇒ Δ₁, Δ₂
where Y is a simple product. In this case the sequent (1) is derivable in NLLW (see Table 3b).
Case 3.2. The derivation has the form:
  Θ₁, Γ₁ ⇒ X, Δ₁    Y₁ ⊕ Y₂, Θ₂, Γ₂ ⇒ Δ₂
  ───────────────────────────────────────── (L⊸)
  Θ₁, Θ₂, X ⊸ (Y₁ ⊕ Y₂), Γ₁, Γ₂ ⇒ Δ₁, Δ₂
One can prove that if the sequent Y₁ ⊕ Y₂, Θ₂, Γ₂ ⇒ Δ₂ is derivable in LLW, then the following sequents are also derivable in LLW:
  Y₁, Θ₂, Γ₂ ⇒ Δ₂,
  Y₂, Θ₂, Γ₂ ⇒ Δ₂.
Moreover, the derivation length of each of these sequents is shorter than the derivation length of the sequent Y₁ ⊕ Y₂, Θ₂, Γ₂ ⇒ Δ₂. Hence, we can apply the induction hypothesis to these sequents. Therefore, the following sequents are derivable in NLLW:
  Y₁ ⊗ Θ₂^⊗, Γ₂ ⇒ Δ₂,
  Y₂ ⊗ Θ₂^⊗, Γ₂ ⇒ Δ₂,
  Θ₁^⊗, Γ₁ ⇒ X, Δ₁.
The sequent (1) is derivable from them in NLLW (see Table 3c).
Case 4. The last rule in the derivation is (L⅋):
  Y₁, Θ₁, Γ₁ ⇒ Δ₁    Y₂, Θ₂, Γ₂ ⇒ Δ₂
  ───────────────────────────────────── (L⅋)
  Y₁ ⅋ Y₂, Θ₁, Θ₂, Γ₁, Γ₂ ⇒ Δ₁, Δ₂
Hence, the sequent (1) is derivable in NLLW from the sequents Y₁ ⊗ Θ₁^⊗, Γ₁ ⇒ Δ₁ and Y₂ ⊗ Θ₂^⊗, Γ₂ ⇒ Δ₂, which are given by the induction hypothesis, by the rule (H⅋).
Case 5. The last rule in the derivation is (L1). In this case the derivation has the form:
  Θ', Γ ⇒ Δ
  ──────────── (L1)
  1, Θ', Γ ⇒ Δ
Hence, since the vectors of Θ'^⊗ and (1, Θ')^⊗ coincide, the sequent (1) is derivable in NLLW from the sequent Θ'^⊗, Γ ⇒ Δ given by the induction hypothesis.
Case 6. Our sequent is an axiom (R1): ⇒ 1. Then Θ = ∅, Δ = 1, and the sequent (1) is an axiom in NLLW: 1 ⇒ 1.
Case 7. The last rule in the derivation is (W):
  Θ₀, Γ₀ ⇒ Δ₀
  ──────────── (W)
  Θ, Γ ⇒ Δ
where Θ₀ ⊆ Θ, Γ₀ ⊆ Γ and Δ₀ ⊆ Δ. By the induction hypothesis the sequent Θ₀^⊗, Γ₀ ⇒ Δ₀ is derivable in NLLW, and the sequent (1) is derivable from it by the weakening rule of NLLW.
Case 8. The last rule in the derivation is one of the rules (L!), (W!), (C!), (R?), (W?), or (C?); then the sequent (1) is derivable by the same rule in NLLW.
Proof of Lemma 2.2 Let the sequent
  W, Γ ⇒ Z, Δ    (3)
be derivable in NLLW without the rule (M). We shall prove, by induction on the derivation length, that the sequent W ⊗ V, Γ ⇒ Z ⊗ V, Δ is also derivable without the rule (M).
Case 1. The sequent (3) is an axiom of NLLW: W ⇒ W. Then the sequent W ⊗ V ⇒ W ⊗ V is also an axiom.
Case 2. The sequent (3) is derivable in NLLW by the rule that replaces the simple product on the left by a simple product with the same vector, i.e. the derivation ends with
  U, Γ ⇒ Z, Δ
  ──────────────
  W, Γ ⇒ Z, Δ
where Ũ = W̃. By the induction hypothesis the following sequent is derivable in NLLW:
  U ⊗ V, Γ ⇒ Z ⊗ V, Δ.
The sequent
  W ⊗ V, Γ ⇒ Z ⊗ V, Δ
is derivable from it by the same rule, since the vectors of U ⊗ V and W ⊗ V coincide.
Case 3. The sequent (3) is derivable by the rule (H):
  Y ⊗ U, Γ₀ ⇒ Z, Δ
  ────────────────────────── (H)
  X ⊗ U, X ⊸ Y, Γ₀ ⇒ Z, Δ
By the induction hypothesis the following sequent is derivable:
  Y ⊗ U ⊗ V, Γ₀ ⇒ Z ⊗ V, Δ.
The sequent
  X ⊗ U ⊗ V, X ⊸ Y, Γ₀ ⇒ Z ⊗ V, Δ
is derivable from it by the rule (H).
Case 4. The sequent (3) is derivable by the rule (H⊕):
  Y₁ ⊗ U, Γ₀ ⇒ Z, Δ    Y₂ ⊗ U, Γ₀ ⇒ Z, Δ
  ─────────────────────────────────────────── (H⊕)
  X ⊗ U, X ⊸ (Y₁ ⊕ Y₂), Γ₀ ⇒ Z, Δ
By the induction hypothesis the following sequents are derivable:
  Y₁ ⊗ U ⊗ V, Γ₀ ⇒ Z ⊗ V, Δ,
  Y₂ ⊗ U ⊗ V, Γ₀ ⇒ Z ⊗ V, Δ.
The sequent
  X ⊗ U ⊗ V, X ⊸ (Y₁ ⊕ Y₂), Γ₀ ⇒ Z ⊗ V, Δ
is derivable from them by the rule (H⊕).
Case 5. The sequent (3) is derivable by the rule (H⅋):
  Y₁ ⊗ U₁, Γ₁ ⇒ Z, Δ₁    Y₂ ⊗ U₂, Γ₂ ⇒ Δ₂
  ─────────────────────────────────────────── (H⅋)
  U₁ ⊗ U₂, Y₁ ⅋ Y₂, Γ₁, Γ₂ ⇒ Z, Δ₁, Δ₂
By the induction hypothesis the following sequent is derivable:
  Y₁ ⊗ U₁ ⊗ V, Γ₁ ⇒ Z ⊗ V, Δ₁.
The sequent
  U₁ ⊗ U₂ ⊗ V, Y₁ ⅋ Y₂, Γ₁, Γ₂ ⇒ Z ⊗ V, Δ₁, Δ₂
is derivable from it and from the sequent Y₂ ⊗ U₂, Γ₂ ⇒ Δ₂ by the rule (H⅋).
Case 6. The sequent (3) is derivable by the rule (Cut).
Case 6.1. The derivation has the form:
  W, Γ₁ ⇒ Z, Δ₁, U    U, Γ₂ ⇒ Δ₂
  ────────────────────────────────── (Cut)
  W, Γ₁, Γ₂ ⇒ Z, Δ₁, Δ₂
By the induction hypothesis the following sequent is derivable:
  W ⊗ V, Γ₁ ⇒ Z ⊗ V, Δ₁, U.
The sequent
  W ⊗ V, Γ₁, Γ₂ ⇒ Z ⊗ V, Δ₁, Δ₂
is derivable from it and from the sequent U, Γ₂ ⇒ Δ₂ by the rule (Cut).
Case 6.2. The derivation has the form:
  W, Γ₁ ⇒ Δ₁, U    U, Γ₂ ⇒ Z, Δ₂
  ────────────────────────────────── (Cut)
  W, Γ₁, Γ₂ ⇒ Z, Δ₁, Δ₂
By the induction hypothesis the following sequents are derivable:
  W ⊗ V, Γ₁ ⇒ Δ₁, U ⊗ V,
  U ⊗ V, Γ₂ ⇒ Z ⊗ V, Δ₂.
The sequent
  W ⊗ V, Γ₁, Γ₂ ⇒ Z ⊗ V, Δ₁, Δ₂
is derivable from them by the rule (Cut).
Case 7. The sequent (3) is derivable by the rule (W).
Case 7.1. The derivation has the form:
  W, Γ ⇒ Δ
  ──────────── (W)
  W, Γ ⇒ Z, Δ
In this case the sequent W ⊗ V, Γ ⇒ Z ⊗ V, Δ is derivable:
  from W, Γ ⇒ Δ, by (W): W ⊗ V, Γ ⇒ Δ;
  by (W) again: W ⊗ V, Γ ⇒ Z ⊗ V, Δ.
Case 7.2. If the derivation has some other form, then the rule (M) commutes with the rule (W). Hence, by the induction hypothesis, the sequent W ⊗ V, Γ ⇒ Z ⊗ V, Δ is derivable without the rule (M).
Case 8. The sequent (3) is derivable by the rule (L!), (W!), (C!), (R?), (W?), or (C?). Note that the rule (M) commutes with all these rules. Hence, by the induction hypothesis, the sequent W ⊗ V, Γ ⇒ Z ⊗ V, Δ is derivable without the rule (M).
Table 4a: Case 4. The instance of the rule (H⊕):
  Y₁ ⊗ U, Γ₀, !Γ̃ ⇒ Δ, ?Δ̃    Y₂ ⊗ U, Γ₀, !Γ̃ ⇒ Δ, ?Δ̃
  ───────────────────────────────────────────────────── (H⊕)
  X ⊗ U, X ⊸ (Y₁ ⊕ Y₂), Γ₀, !Γ̃ ⇒ Δ, ?Δ̃
Table 4b: Case 5. The instance of the rule (H⅋):
  Y₁ ⊗ U₁, Γ₁, !Γ̃₁ ⇒ Δ₁, ?Δ̃₁    Y₂ ⊗ U₂, Γ₂, !Γ̃₂ ⇒ Δ₂, ?Δ̃₂
  ──────────────────────────────────────────────────────────── (H⅋)
  U₁ ⊗ U₂, Y₁ ⅋ Y₂, Γ₁, Γ₂, !Γ̃₁, !Γ̃₂ ⇒ Δ₁, Δ₂, ?Δ̃₁, ?Δ̃₂
Table 4c: Case 7. The instance of the rule (Cut):
  W, Γ₁, !Γ̃₁ ⇒ Δ₁, ?Δ̃₁, U    U, Γ₂, !Γ̃₂ ⇒ Δ₂, ?Δ̃₂
  ───────────────────────────────────────────────────── (Cut)
  W, Γ₁, Γ₂, !Γ̃₁, !Γ̃₂ ⇒ Δ₁, Δ₂, ?Δ̃₁, ?Δ̃₂
Table 4: The cases of Lemma 2.3.
Proof of Lemma 2.3 We prove this lemma by induction on the length of a derivation of the sequent (2).
Case 1. The sequent (2) is an axiom W ⇒ W. This case is trivial.
Case 2. The sequent (2) is derivable by the rule replacing the simple product on the left by one with the same vector. This case is also trivial.
Case 3. The sequent (2) is derivable by the rule (H):
  Y ⊗ U, Γ₀, !Γ̃ ⇒ Δ, ?Δ̃
  ─────────────────────────────── (H)
  X ⊗ U, X ⊸ Y, Γ₀, !Γ̃ ⇒ Δ, ?Δ̃
Consider the sequent Σ' = (Y ⊗ U, !Γ₀, !Γ̃ ⇒ ?Δ, ?Δ̃). By the induction hypothesis one can write the vector Ỹ + Ũ by the rules of the game B_Σ'. Hence, one can write it by the rules of the game B_Σ too. Therefore, one can also write the vector X̃ + Ũ by the rules of this game (the rule (a)).
Case 4. The sequent (2) is derivable by the rule (H⊕) (see Table 4a). Similarly to the previous case, one can write Ỹ₁ + Ũ and Ỹ₂ + Ũ by the rules of the game B_Σ. Therefore, one can also write X̃ + Ũ by the rules of this game (the rule (b)).
Case 5. The sequent (2) is derivable by the rule (H⅋) (see Table 4b). Similarly, one can write the vectors Ỹ₁ + Ũ₁ and Ỹ₂ + Ũ₂ by the rules of the game B_Σ. Therefore, one can also write the vector Ũ₁ + Ũ₂ by the rules of this game (the rule (c)).
Case 6. The sequent (2) is derivable by the rule (W):
  W, Γ₀, !Γ̃₀ ⇒ Δ₀, ?Δ̃₀
  ───────────────────────── (W)
  W ⊗ U, Γ, !Γ̃ ⇒ Δ, ?Δ̃
Similarly, one can write the vector W̃ by the rules of the game B_Σ. Therefore, one can also write the vector W̃ + Ũ by the rules of this game (the rule (d)).
Case 7. The sequent (2) is derivable by the rule (Cut) (see Table 4c). Consider the sequents
  Σ₁ = (W, !Γ₁, !Γ̃₁ ⇒ ?Δ₁, ?Δ̃₁, ?U)
and
  Σ₂ = (U, !Γ₂, !Γ̃₂ ⇒ ?Δ₂, ?Δ̃₂).
By the induction hypothesis one can write the vector W̃ by the rules of the game B_Σ₁ and the vector Ũ by the rules of the game B_Σ₂. Hence, if one can write a vector by the rules of the game B_Σ₁, then one can write the same vector by the rules of the game B_Σ. In particular, one can write the vector W̃ by the rules of the game B_Σ.
The other cases ((L!), (W!), (C!), (R?), (W?), (C?)) are trivial.
Proof of Lemma 2.4 We prove this lemma by induction on the number of moves which are necessary to obtain the vector W̃. If W̃ can be obtained in zero moves (i.e. W̃ ∈ Δ̃), then, evidently, the sequent W, !Γ ⇒ ?Δ is derivable. Otherwise, four cases may occur.
Table 5a: Case 1.
  From U ⊗ Y, !Γ ⇒ ?Δ and U ⊗ X, X ⊸ Y ⇒ U ⊗ Y, by (Cut): U ⊗ X, X ⊸ Y, !Γ ⇒ ?Δ;
  by (L!), (C!): U ⊗ X, !Γ ⇒ ?Δ.
Table 5b: Case 2.
  From U, Y₁, !Γ ⇒ ?Δ and U, Y₂, !Γ ⇒ ?Δ, by (L⊕): U, Y₁ ⊕ Y₂, !Γ ⇒ ?Δ;
  from it and X, X ⊸ (Y₁ ⊕ Y₂) ⇒ Y₁ ⊕ Y₂, by (Cut): U, X, X ⊸ (Y₁ ⊕ Y₂), !Γ ⇒ ?Δ;
  by (L⊗): U ⊗ X, X ⊸ (Y₁ ⊕ Y₂), !Γ ⇒ ?Δ;
  by (L!), (C!): U ⊗ X, !Γ ⇒ ?Δ.
Table 5: The proof of Lemma 2.4.
Case 1. W̃ is obtained by the rule (a) from Ṽ. In this case
  W = U ⊗ X,  V = U ⊗ Y,  X ⊸ Y ∈ Γ.
By the induction hypothesis, the sequent V, !Γ ⇒ ?Δ is derivable in LLW, and the sequent W, !Γ ⇒ ?Δ is derivable from it in LLW (see Table 5a).
Case 2. W̃ is obtained by the rule (b) from Ṽ₁ and Ṽ₂. In this case
  W = U ⊗ X,  V₁ = U ⊗ Y₁,  V₂ = U ⊗ Y₂,  X ⊸ (Y₁ ⊕ Y₂) ∈ Γ.
By the induction hypothesis, the sequents V₁, !Γ ⇒ ?Δ and V₂, !Γ ⇒ ?Δ are derivable in LLW. Then the following sequents are derivable:
  U, Y₁, !Γ ⇒ ?Δ,
  U, Y₂, !Γ ⇒ ?Δ,
and the sequent W, !Γ ⇒ ?Δ is derivable from them in LLW (see Table 5b).
Case 3. W̃ is obtained by the rule (c) from Ṽ₁ and Ṽ₂. In this case
  W = U₁ ⊗ U₂,  V₁ = U₁ ⊗ Y₁,  V₂ = U₂ ⊗ Y₂,  Y₁ ⅋ Y₂ ∈ Γ.
By the induction hypothesis, the sequents V₁, !Γ ⇒ ?Δ and V₂, !Γ ⇒ ?Δ are derivable in LLW. Then the following sequents are derivable:
  U₁, Y₁, !Γ ⇒ ?Δ,
  U₂, Y₂, !Γ ⇒ ?Δ,
and the sequent W, !Γ ⇒ ?Δ is derivable from them in LLW:
  from U₁, Y₁, !Γ ⇒ ?Δ and U₂, Y₂, !Γ ⇒ ?Δ, by (L⅋): U₁, U₂, Y₁ ⅋ Y₂, !Γ, !Γ ⇒ ?Δ, ?Δ;
  by (L⊗), (C!), (C?): U₁ ⊗ U₂, Y₁ ⅋ Y₂, !Γ ⇒ ?Δ;
  by (L!), (C!): U₁ ⊗ U₂, !Γ ⇒ ?Δ.
Case 4. W̃ is obtained by the rule (d) from Ṽ. In this case W = V ⊗ U. By the induction hypothesis, the sequent V, !Γ ⇒ ?Δ is derivable in LLW, and the sequent W, !Γ ⇒ ?Δ is derivable from it in LLW:
  from V, !Γ ⇒ ?Δ, by (W): V, U, !Γ ⇒ ?Δ;
  by (L⊗): V ⊗ U, !Γ ⇒ ?Δ.
3 The decidability of the normal fragment of LLW
Theorem 3 The normal fragment of linear affine logic is decidable.
We begin with
Definitions
1. The set of vectors A is closed under the Horn implication X ⊸ Y if
  ∀a ∈ ωⁿ (a + Ỹ ∈ A ⟹ a + X̃ ∈ A).
(Here ωⁿ is the set of vectors with natural coordinates.)
2. The set of vectors A is closed under the ⊕-Horn implication X ⊸ (Y₁ ⊕ Y₂) if
  ∀a ∈ ωⁿ (a + Ỹ₁ ∈ A and a + Ỹ₂ ∈ A ⟹ a + X̃ ∈ A).
3. The set of vectors A is closed under the simple disjunction Y₁ ⅋ Y₂ if
  ∀a₁, a₂ ∈ ωⁿ (a₁ + Ỹ₁ ∈ A and a₂ + Ỹ₂ ∈ A ⟹ a₁ + a₂ ∈ A).
4. Let Γ be a multiset of normal formulas. The set of vectors A is Γ-closed if it is closed under all formulas from Γ.
5. The set of vectors A is closed under the weakening rule if
  ∀a ∈ A ∀c ≥ a (c ∈ A).
Lemma 3.1 Let Σ = (W, !Γ ⇒ ?Δ) be a normal sequent. Then the following two conditions are equivalent:
(i) It is possible to reach the aim in the game B_Σ.
(ii) For any A ⊆ ωⁿ, if A is Γ-closed and closed under the weakening rule and Δ̃ ⊆ A, then W̃ ∈ A.
Proof
(ii)⟹(i) Let us consider the set
  A = {x : x can be obtained by the rules of B_Σ}.
By the definition of the game B_Σ and of closed sets, the set A is Γ-closed and closed under the weakening rule. Moreover, it includes the set Δ̃. Hence, according to our hypothesis, W̃ ∈ A. So, W̃ can be obtained by the rules of the game B_Σ.
(i)⟹(ii) Let A be any Γ-closed set which is closed under the weakening rule, with Δ̃ ⊆ A. Then, by the definition of the game B_Σ and of closed sets, A includes all vectors which can be written by the rules of the game B_Σ. In particular, W̃ ∈ A.
Remark It is possible to formulate the analogous lemma for the game A_Σ.
Definition A cone with vertex z is the set of vectors K_z = {y : y ≥ z}.
Note that any set of vectors which is closed under the weakening rule is a union of cones. We shall prove that such a set is a union of a finite set of cones. This fact follows from the central lemma:
Lemma 3.2 Any set of vectors from ωⁿ which are pairwise incomparable is finite.
Proof Let A ⊆ ωⁿ and ∀x, y ∈ A (x ≮ y). We shall prove that A is finite by induction on n. For n = 1 it is trivial. To verify the induction step, we consider any vector a ∈ A. Let a = (a₁, ..., aₙ). We define the following sets:
  Aᵢ = {x ∈ A : xᵢ ≤ aᵢ}.
One can easily verify that A = A₁ ∪ ... ∪ Aₙ. Now we define the sets
  Aᵢⱼ = {x ∈ Aᵢ : xᵢ = j}.
It is evident that Aᵢ = A_{i,0} ∪ ... ∪ A_{i,aᵢ}. By the induction hypothesis, the sets Aᵢⱼ are finite (fixing the i-th coordinate, Aᵢⱼ may be regarded as a pairwise incomparable set of vectors from ω^{n-1}). Hence, A is finite too.
Corollary 3.3 Let A be a set of vectors with natural coordinates. If A is closed under the weakening rule, then A is a union of a finite set of cones.
Proof Let B be the set of the minimal elements of the set A, i.e.
  B = {z ∈ A : ¬∃y ∈ A (y < z)}.
Then
  A = ∪_{z∈B} K_z.
According to Lemma 3.2, B is finite.
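Corollary 3.3 suggests representing a set that is closed under the weakening rule by the finite set B of its minimal elements. A small illustrative sketch (hypothetical Python names, assuming the generators are given as a finite set of vectors):

```python
def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def minimal_elements(vectors):
    """The set B of minimal elements of the upward closure of `vectors`."""
    vs = set(vectors)
    return {v for v in vs if not any(u != v and leq(u, v) for u in vs)}

def in_union_of_cones(x, B):
    """Membership in A = the union of the cones K_z for z in B."""
    return any(leq(z, x) for z in B)

# The upward closure of {(2,1), (1,3), (2,2)} is generated by its two minimal vectors.
assert minimal_elements({(2, 1), (1, 3), (2, 2)}) == {(2, 1), (1, 3)}
```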
Definition Let x, y ∈ ωⁿ; then
  max(x, y) := (max(x₁, y₁), ..., max(xₙ, yₙ)).
Definition Let A, B ⊆ Zⁿ and x ∈ Zⁿ (here Z is the set of integers); then
  A + x := {a + x : a ∈ A},
  A + B := {a + b : a ∈ A, b ∈ B}.
Lemma 3.4 Let A = ∪_{z∈B} K_z. Then
1. A is closed under X ⊸ Y if and only if
  ∀z ∈ B: max(z − Ỹ, 0) + X̃ ∈ A.
2. A is closed under X ⊸ (Y₁ ⊕ Y₂) if and only if
  ∀z₁, z₂ ∈ B: max(z₁ − Ỹ₁, z₂ − Ỹ₂, 0) + X̃ ∈ A.
3. A is closed under Y₁ ⅋ Y₂ if and only if
  ∀z₁, z₂ ∈ B: max(z₁ − Ỹ₁, 0) + max(z₂ − Ỹ₂, 0) ∈ A.
Proof 1. By the definition, A is closed under X ⊸ Y if and only if
  {a + X̃ : a ∈ ωⁿ, a + Ỹ ∈ A} ⊆ A.
But
  {a + X̃ : a ∈ ωⁿ, a + Ỹ ∈ A} = ((A − Ỹ) ∩ ωⁿ) + X̃
  = ((∪_{z∈B} K_z − Ỹ) ∩ ωⁿ) + X̃
  = ∪_{z∈B} ((K_z − Ỹ) ∩ ωⁿ + X̃)
  = ∪_{z∈B} (K_{max(z−Ỹ, 0)} + X̃)
  = ∪_{z∈B} K_{max(z−Ỹ, 0)+X̃}.
So, A is closed under X ⊸ Y if and only if
  ∪_{z∈B} K_{max(z−Ỹ, 0)+X̃} ⊆ A,
i.e.
  ∀z ∈ B: max(z − Ỹ, 0) + X̃ ∈ A.
2. The situation is similar to the previous item. A is closed under X ⊸ (Y₁ ⊕ Y₂) if and only if
  {a + X̃ : a ∈ ωⁿ, a + Ỹ₁ ∈ A, a + Ỹ₂ ∈ A} ⊆ A.
But
  {a + X̃ : a ∈ ωⁿ, a + Ỹ₁ ∈ A, a + Ỹ₂ ∈ A} = ((A − Ỹ₁) ∩ (A − Ỹ₂) ∩ ωⁿ) + X̃
  = ((∪_{z₁∈B} K_{z₁} − Ỹ₁) ∩ (∪_{z₂∈B} K_{z₂} − Ỹ₂) ∩ ωⁿ) + X̃
  = ∪_{z₁,z₂∈B} (K_{z₁−Ỹ₁} ∩ K_{z₂−Ỹ₂} ∩ ωⁿ + X̃)
  = ∪_{z₁,z₂∈B} K_{max(z₁−Ỹ₁, z₂−Ỹ₂, 0)+X̃}.
So, A is closed under X ⊸ (Y₁ ⊕ Y₂) if and only if
  ∀z₁, z₂ ∈ B: max(z₁ − Ỹ₁, z₂ − Ỹ₂, 0) + X̃ ∈ A.
3. A is closed under Y₁ ⅋ Y₂ if and only if
  {a₁ + a₂ : a₁, a₂ ∈ ωⁿ, a₁ + Ỹ₁ ∈ A, a₂ + Ỹ₂ ∈ A} ⊆ A.
But
  {a₁ + a₂ : a₁, a₂ ∈ ωⁿ, a₁ + Ỹ₁ ∈ A, a₂ + Ỹ₂ ∈ A} = ((A − Ỹ₁) ∩ ωⁿ) + ((A − Ỹ₂) ∩ ωⁿ)
  = ((∪_{z₁∈B} K_{z₁} − Ỹ₁) ∩ ωⁿ) + ((∪_{z₂∈B} K_{z₂} − Ỹ₂) ∩ ωⁿ)
  = ∪_{z₁,z₂∈B} (K_{max(z₁−Ỹ₁, 0)} + K_{max(z₂−Ỹ₂, 0)})
  = ∪_{z₁,z₂∈B} K_{max(z₁−Ỹ₁, 0)+max(z₂−Ỹ₂, 0)}.
So, A is closed under Y₁ ⅋ Y₂ if and only if
  ∀z₁, z₂ ∈ B: max(z₁ − Ỹ₁, 0) + max(z₂ − Ỹ₂, 0) ∈ A.
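Lemma 3.4 turns Γ-closedness of A = ∪_{z∈B} K_z into finitely many membership tests. The following sketch (illustrative Python with hypothetical names; simple products are exponent vectors as before) spells these tests out:

```python
def leq(a, b): return all(x <= y for x, y in zip(a, b))
def vadd(a, b): return tuple(x + y for x, y in zip(a, b))
def vsub(a, b): return tuple(x - y for x, y in zip(a, b))

def cutmax(*vs):
    """Componentwise max(v1, ..., vk, 0)."""
    return tuple(max(0, *xs) for xs in zip(*vs))

def in_A(x, B):
    """Membership in A = the union of the cones K_z for z in B."""
    return any(leq(z, x) for z in B)

def closed_under_horn(B, X, Y):
    """Item 1 of Lemma 3.4: A is closed under X -o Y."""
    return all(in_A(vadd(cutmax(vsub(z, Y)), X), B) for z in B)

def closed_under_oplus_horn(B, X, Y1, Y2):
    """Item 2: A is closed under X -o (Y1 (+) Y2)."""
    return all(in_A(vadd(cutmax(vsub(z1, Y1), vsub(z2, Y2)), X), B)
               for z1 in B for z2 in B)

def closed_under_disj(B, Y1, Y2):
    """Item 3: A is closed under the simple disjunction Y1 par Y2."""
    return all(in_A(vadd(cutmax(vsub(z1, Y1)), cutmax(vsub(z2, Y2))), B)
               for z1 in B for z2 in B)
```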
Corollary 3.5 For any finite set of vectors B, the property of the set A = ∪_{z∈B} K_z to be Γ-closed is decidable.
Proof Note that all the conditions in the last lemma are decidable. Hence, the property of being Γ-closed is also decidable.
Now we can prove Theorem 3. Let us look at the condition (ii) from Lemma 3.1. By Corollary 3.3 this condition is equivalent to the following condition:
(iii) For any finite set B ⊆ ωⁿ, if ∪_{z∈B} K_z is Γ-closed and Δ̃ ⊆ ∪_{z∈B} K_z, then W̃ ∈ ∪_{z∈B} K_z.
By Corollary 3.5, the condition standing after the words "For any finite set B ⊆ ωⁿ ..." is decidable. Hence, the condition (iii) is recursively co-enumerable. By Lemma 3.1 the condition (iii) is equivalent to the condition (i). And by Theorem 2 the condition (i), in its turn, is equivalent to the condition that LLW ⊢ Σ. Hence, the set of derivable normal sequents of LLW is co-enumerable. On the other hand, it is clear that this set is recursively enumerable. So, by Post's theorem, it is decidable.
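The resulting decision scheme can be viewed as two semi-procedures run side by side: one enumerates LLW derivations of Σ, the other enumerates finite sets B ⊆ ωⁿ and rejects Σ as soon as some B witnesses a failure of condition (iii). A sketch of the witness test for one candidate B (illustrative Python; the normal formulas of Γ are assumed to be given, split by kind, as exponent vectors):

```python
def leq(a, b): return all(x <= y for x, y in zip(a, b))
def vadd(a, b): return tuple(x + y for x, y in zip(a, b))
def vsub(a, b): return tuple(x - y for x, y in zip(a, b))
def cutmax(*vs): return tuple(max(0, *xs) for xs in zip(*vs))
def in_A(x, B): return any(leq(z, x) for z in B)

def refutes(B, W, delta, horns, oplus_horns, disjs):
    """True iff A = the union of the cones K_z (z in B) is Gamma-closed, contains
    every vector of Delta-tilde, and does not contain W-tilde; by Lemma 3.1 and
    Corollary 3.3 such a B witnesses that the normal sequent is not derivable in LLW."""
    if not all(in_A(d, B) for d in delta):
        return False
    if in_A(W, B):
        return False
    closed = all(in_A(vadd(cutmax(vsub(z, Y)), X), B)
                 for (X, Y) in horns for z in B)
    closed = closed and all(in_A(vadd(cutmax(vsub(z1, Y1), vsub(z2, Y2)), X), B)
                            for (X, Y1, Y2) in oplus_horns for z1 in B for z2 in B)
    closed = closed and all(in_A(vadd(cutmax(vsub(z1, Y1)), cutmax(vsub(z2, Y2))), B)
                            for (Y1, Y2) in disjs for z1 in B for z2 in B)
    return closed
```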
4 Reduction to the normal fragment
Theorem 4 Linear logic and linear affine logic are reduced to their normal fragments.

Table 6: The proof of Lemma 4.1.1, i.e. the derivations of the sequents !Γ, D(A) ⇒ D(B) for D(p) = !p, ?p, p ⊗ C, p & C, p ⊕ C, p ⅋ C, C ⊸ p, p ⊸ C, and p^⊥.

Definition A "good" sequent is a sequent of the form Γ, !Θ ⇒ Δ, where Γ, Θ and Δ contain neither "!" nor "?".
The proof of Theorem 4 consists of two steps. Firstly, we prove that the "good" fragment is reduced to the normal fragment (Lemma 4.1). And secondly, we prove that the whole of LL and LLW are reduced to their "good" fragments (Lemma 4.2).
Lemma 4.1 For any "good" sequent Σ one can effectively construct a normal sequent Σ' such that
  LLW ⊢ Σ iff LLW ⊢ Σ';
  LL ⊢ Σ iff LL ⊢ Σ'.
In order to prove Lemma 4.1 we introduce the following
Definition Let A and B be formulas, and let Γ be a multiset of formulas. We say that A ≅_Γ B in a given logic (A is equal to B according to Γ) if the following sequents are derivable in the logic:
  !Γ, A ⇒ B,
  !Γ, B ⇒ A.
If Γ is the empty multiset, then we write A = B.
By analogy with ordinary equality, the following lemma holds for LL as well as for LLW:
Lemma 4.1.1 If A ≅_Γ B then
  !A ≅_Γ !B,  A ⊗ C ≅_Γ B ⊗ C,  C ⊸ A ≅_Γ C ⊸ B,
  ?A ≅_Γ ?B,  A & C ≅_Γ B & C,  A ⊸ C ≅_Γ B ⊸ C,
  A^⊥ ≅_Γ B^⊥,  A ⊕ C ≅_Γ B ⊕ C,  A ⅋ C ≅_Γ B ⅋ C.
Proof Let the sequents
  !Γ, A ⇒ B,  !Γ, B ⇒ A
be derivable. We must prove the derivability of the sequents of the form !Γ, D(A) ⇒ D(B) and of the form !Γ, D(B) ⇒ D(A), where D(p) = !p, p ⊗ C, C ⊸ p, and so on. The derivations of the first sequents are represented in Table 6. The derivations of the second sequents are obtained by replacing A by B and B by A in the derivations of Table 6.
Lemma 4.1.2 If A ≅_Γ B then the sequent !Γ, Θ ⇒ Δ is derivable if and only if the sequent !Γ, Θ' ⇒ Δ' is derivable, where (·)' stands for the substitution of the formula B for some occurrences of the formula A.
Proof We may assume without loss of generality that (·)' stands for the substitution of the formula B for only one occurrence of the formula A. Let us suppose that the formula A occurs in the external formula C = C(A) (i.e., C ∈ Θ or C ∈ Δ). Then C' = C(B) and, according to the last lemma, C ≅_Γ C'.
Case 1. C ∈ Θ, i.e. Θ = C, Θ₀; Θ' = C', Θ₀; Δ' = Δ. Then the sequent !Γ, Θ' ⇒ Δ' is derivable from the sequent !Γ, Θ ⇒ Δ:
  from !Γ, C, Θ₀ ⇒ Δ and !Γ, C' ⇒ C, by (Cut): !Γ, !Γ, C', Θ₀ ⇒ Δ;
  by (C!): !Γ, C', Θ₀ ⇒ Δ.    (4)
Case 2. C ∈ Δ, i.e. Δ = C, Δ₀; Δ' = C', Δ₀; Θ' = Θ. Then the sequent !Γ, Θ' ⇒ Δ' is derivable from the sequent !Γ, Θ ⇒ Δ:
  from !Γ, Θ ⇒ C, Δ₀ and !Γ, C ⇒ C', by (Cut): !Γ, !Γ, Θ ⇒ C', Δ₀;
  by (C!): !Γ, Θ ⇒ C', Δ₀.    (5)
Replacing the formula C by the formula C' and the formula C' by the formula C in the derivations (4) and (5), we obtain the derivation of the sequent !Γ, Θ ⇒ Δ from the sequent !Γ, Θ' ⇒ Δ'.
Lemma 4.1.3 The sequent
  !(A ⊸ p), !(p ⊸ A), Θ ⇒ Δ
is derivable if and only if the sequent Θ' ⇒ Δ' is derivable, where (·)' stands for the substitution of the formula A for all occurrences of the literal p.
Proof Let
  Γ = A ⊸ p, p ⊸ A.
Then A ≅_Γ p. Hence, if the sequent Θ' ⇒ Δ' is derivable, then the sequent !Γ, Θ ⇒ Δ is derivable too (by the rule (W!) and the last lemma).
Conversely, suppose that the sequent !(A ⊸ p), !(p ⊸ A), Θ ⇒ Δ is derivable. If we substitute the formula A for the literal p in this sequent, then we obtain the derivable sequent
  !(A ⊸ A), !(A ⊸ A), Θ' ⇒ Δ'.
The sequent Θ' ⇒ Δ' is derivable from this sequent:
  by (C!): !(A ⊸ A), Θ' ⇒ Δ';
  from it and ⇒ !(A ⊸ A), by (Cut): Θ' ⇒ Δ'.
  !(r ⊸ (p ⊗ q)),  !((p ⊗ q) ⊸ r);
  !(r ⊸ (p ⊕ q)),  !((p ⊕ q) ⊸ r);
  !(r ⊸ p^⊥),  !(p^⊥ ⊸ r);
  !(r ⊸ 1),  !(1 ⊸ r).
Table 7.
Using this lemma, let us prove Lemma 4.1 for LLW. Let Σ be a "good" sequent. We shall transform this sequent, maintaining the property that the new sequent is derivable in LLW if and only if the old one is.
Firstly, by the De Morgan equalities, we eliminate from our sequent the following connectives: &, ⅋, ⊸, ⊥, 0. Namely, we replace the formulas A & B by (A^⊥ ⊕ B^⊥)^⊥, A ⅋ B by (A^⊥ ⊗ B^⊥)^⊥, A ⊸ B by (A ⊗ B^⊥)^⊥, ⊥ by 1^⊥, and 0 by ⊤^⊥.
Secondly, using that 1 = ⊤ in LLW, we replace ⊤ by 1. After that we obtain a "good" sequent Σ₁ which contains only the following connectives: !, ⊗, ⊕, ^⊥, and 1.
Then, using Lemma 4.1.3, we may replace all formulas of the forms p ⊗ q, p ⊕ q, p^⊥ and 1 by new literals and add formulas of the forms from Table 7 to the antecedent.
After all these replacements we obtain a "good" sequent of the form
  Γ₁, Γ₂, Γ₃ ⇒ Δ,
where Γ₁ is a multiset of formulas of the kinds from Table 7, Γ₂ is a multiset of new literals, and Γ₃ is a multiset of formulas of the kind !p, where p is a new literal.
Then we transform the formulas from Γ₁ and Γ₃ into equivalent normal formulas. Namely, since p = (1 ⊸ p), we may replace the formulas !p from Γ₃ by the Horn implication !(1 ⊸ p). Since p^⊥ ⊸ q = p ⅋ q, we may replace !(p^⊥ ⊸ q) by the simple disjunction !(p ⅋ q). And using that !((p ⊕ q) ⊸ r) = !(p ⊸ r) ⊗ !(q ⊸ r), we may replace the formulas of the form !((p ⊕ q) ⊸ r) by the pair of Horn implications !(p ⊸ r) and !(q ⊸ r). Finally, since !(r ⊸ p^⊥) = (?(r ⊗ p))^⊥, we may remove the formulas !(r ⊸ p^⊥) from the antecedent and add the formula ?(r ⊗ p) to the consequent. After all this, we obtain a sequent of the form
  Γ₂, !Γ ⇒ ?Δ,
where Γ is a list of normal formulas and Δ is a list of simple products.
Then we replace the multiset of literals Γ₂ by the simple product W = ⊗Γ₂ and obtain a normal sequent Σ' = (W, !Γ ⇒ ?Δ). This sequent is derivable in LLW if and only if the sequent Σ is.
For LL this lemma is proved similarly, except for the point where we use the fact that ⊤ = 1 in LLW. For LL it is proved that LL is reduced to the fragment without the constants 0 and ⊤.
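The first step of this construction is a purely syntactic rewriting. A sketch of it (illustrative Python over a hypothetical tuple encoding of formulas, not the paper's notation) performing the De Morgan replacements described above:

```python
# Formulas as nested tuples, e.g. ('lit', 'p'), ('neg', A), ('bang', A), ('quest', A),
# ('tensor', A, B), ('par', A, B), ('with', A, B), ('oplus', A, B), ('lolli', A, B),
# ('one',), ('bot',), ('top',), ('zero',).

def neg(a):
    return ('neg', a)

def eliminate(f):
    """De Morgan step: rewrite &, par, -o, bot and 0 in terms of (x), (+), neg, 1, top."""
    tag = f[0]
    args = [eliminate(a) if isinstance(a, tuple) else a for a in f[1:]]
    if tag == 'with':      # A & B    ~>  (A^ (+) B^)^
        return neg(('oplus', neg(args[0]), neg(args[1])))
    if tag == 'par':       # A par B  ~>  (A^ (x) B^)^
        return neg(('tensor', neg(args[0]), neg(args[1])))
    if tag == 'lolli':     # A -o B   ~>  (A (x) B^)^
        return neg(('tensor', args[0], neg(args[1])))
    if tag == 'bot':       # bot      ~>  1^
        return neg(('one',))
    if tag == 'zero':      # 0        ~>  top^
        return neg(('top',))
    return (tag, *args)

# Example: (p & q) -o bot  becomes  ((p^ (+) q^)^ (x) (1^)^)^
print(eliminate(('lolli', ('with', ('lit', 'p'), ('lit', 'q')), ('bot',))))
```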
Lemma 4.2 Linear logic and linear affine logic are reduced (according to Turing) to the problems of the derivability of "good" sequents in the corresponding logic.
Proof Note that since ?A = (!A^⊥)^⊥, we can confine ourselves without loss of generality to sequents without "?". Let Σ be any sequent without "?". Consider the set X = {A : !A ∈ Sub Σ}. With any formula A from X we associate a new literal p_A. For any formula B ∈ Sub Σ we define the formula B' as the formula obtained from B by the replacement of subformulas of the form !A by the literal p_A. (We replace only formulas !A which are not subformulas of any other formula of the kind !C.) For example, if B = (!q ⊸ !(r ⊗ !t)) then B' = (p_q ⊸ p_{(r ⊗ !t)}). We also define the inverse operation ℓ as follows:
  ℓB = B[!A₁/p_{A₁}, ..., !Aₙ/p_{Aₙ}],
where X = {A₁, ..., Aₙ}. It is clear that ℓ(B') = B.
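The operations B ↦ B' and ℓ can be sketched over the same kind of tuple encoding (illustrative only; the literal p_A is modelled by tagging the boxed formula A itself):

```python
def prime(f):
    """B |-> B': replace every outermost subformula !A by the fresh literal p_A."""
    if not isinstance(f, tuple):
        return f
    if f[0] == 'bang':
        return ('plit', f[1])           # the literal p_A, remembering A itself
    return (f[0], *[prime(a) for a in f[1:]])

def ell(f):
    """The inverse operation l: substitute !A back for every literal p_A."""
    if not isinstance(f, tuple):
        return f
    if f[0] == 'plit':
        return ('bang', f[1])
    return (f[0], *[ell(a) for a in f[1:]])

# Example from the text: B = !q -o !(r (x) !t); the inner !t is left untouched by prime.
B = ('lolli',
     ('bang', ('lit', 'q')),
     ('bang', ('tensor', ('lit', 'r'), ('bang', ('lit', 't')))))
assert ell(prime(B)) == B               # l(B') = B
```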
Consider the multiset Ξ_X consisting of the following formulas:
  (p_A ⊸ p_A ⊗ p_A),  (p_A ⊸ 1),  (p_A ⊸ A')
for any A ∈ X. These formulas encode the rules (C!), (W!) and (L!), respectively.
We shall also consider the multiset Ω_X with the following properties:
(a) For any set of formulas A₁, ..., Aₙ, C ∈ X (all Aᵢ's are different): if the sequent
  !Ξ_X, !Ω_X, p_{A₁}, p_{A₂}, ..., p_{Aₙ} ⇒ C'    (6)
is derivable, then
  (p_{A₁} ⊗ ... ⊗ p_{Aₙ} ⊸ p_C) ∈ Ω_X.    (7)
(b) Ω_X is the least set that meets the property (a), i.e. if a set Ω̃ meets the property (a) then Ω_X ⊆ Ω̃.
This set encodes the rule (R!). It is easy to verify that the desired least set exists and is finite. Moreover, if we have a decision algorithm for the "good" sequents of LL (LLW), then we can effectively construct this set. Indeed, one can construct it in the following way. Initially, let Ω_X = ∅. Then, while there is a derivable sequent of the form (6) such that the formula (7) is not in Ω_X, we add this formula to Ω_X. (We can verify whether sequents of the kind (6) are derivable, because they are "good" sequents.) Since the set X is finite, the number of formulas of the form (7) is also finite. Hence, this algorithm always terminates.
Lemma 4.2.1 If B ∈ Ξ_X, Ω_X, then the sequent ⇒ !ℓB is derivable.
Proof 1. If B ∈ Ξ_X, then ℓB = (!A ⊸ !A ⊗ !A), or (!A ⊸ 1), or (!A ⊸ A). In all these cases the sequent ⇒ ℓB is easily derivable, and so the sequent ⇒ !ℓB is too.
2. In order to prove this lemma for B ∈ Ω_X, we consider the set Ω̃_X, which consists of the formulas of the form
  B = (p_{A₁} ⊗ ... ⊗ p_{Aₙ} ⊸ p_C)
(where Aᵢ, C ∈ X, and all Aᵢ's are different) such that the sequent ⇒ !ℓB is derivable. Let us verify that the set Ω̃_X meets the property (a). Indeed, suppose that the sequent
  !Ξ_X, !Ω̃_X, p_{A₁}, ..., p_{Aₙ} ⇒ C'
is derivable; we verify that (p_{A₁} ⊗ ... ⊗ p_{Aₙ} ⊸ p_C) ∈ Ω̃_X. If we substitute the formulas !A for all literals p_A in the last sequent, then we get the derivable sequent
  !ℓΞ_X, !ℓΩ̃_X, !A₁, ..., !Aₙ ⇒ C.    (8)
For any formula B ∈ Ξ_X, Ω̃_X the sequent ⇒ !ℓB is derivable (item 1 of this lemma and the definition of Ω̃_X). The sequent
  !A₁, ..., !Aₙ ⇒ C
is derivable from these sequents and the sequent (8) by the rule (Cut). Thus, the sequent
  ⇒ !(!A₁ ⊗ ... ⊗ !Aₙ ⊸ !C)
is derivable. Hence, by the definition of the set Ω̃_X,
  (p_{A₁} ⊗ ... ⊗ p_{Aₙ} ⊸ p_C) ∈ Ω̃_X.
So, the set Ω̃_X meets the property (a). Therefore, by the property (b), Ω_X ⊆ Ω̃_X. Hence, if B ∈ Ω_X then the sequent ⇒ !ℓB is derivable.
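The iterative construction of Ω_X described before Lemma 4.2.1 is a least-fixpoint computation relative to an oracle for derivability of "good" sequents. A sketch (illustrative Python; `derivable_good`, `prime` and the tuple encoding of formulas are assumed, hypothetical ingredients, not an existing API):

```python
from itertools import combinations

def tensor_of(fs):
    """f1 (x) ... (x) fk as a nested binary tensor."""
    out = fs[0]
    for g in fs[1:]:
        out = ('tensor', out, g)
    return out

def build_omega(X, xi, prime, derivable_good):
    """Least multiset Omega_X satisfying property (a).

    X:              the finite set {A : !A a subformula of Sigma}
    xi:             the multiset Xi_X (encoding (C!), (W!), (L!))
    prime:          the operation C |-> C'
    derivable_good: assumed oracle deciding derivability of "good" sequents
    """
    omega = {}                                   # (A_1..A_k, C)  ->  the formula (7)
    changed = True
    while changed:
        changed = False
        for C in X:
            for k in range(1, len(X) + 1):
                for As in combinations(X, k):    # the A_i are pairwise different
                    if (As, C) in omega:
                        continue
                    plits = [('plit', A) for A in As]
                    antecedent = tuple([('bang', f) for f in xi]
                                       + [('bang', f) for f in omega.values()]
                                       + plits)
                    if derivable_good((antecedent, (prime(C),))):                  # sequent (6)
                        omega[(As, C)] = ('lolli', tensor_of(plits), ('plit', C))  # formula (7)
                        changed = True
    return set(omega.values())
```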
Lemma 4.2.2 If we have a sequent Σ = (Γ ⇒ Δ) (which does not contain any "?"), then the sequent Σ is derivable if and only if the sequent
  !Ξ_X, !Ω_X, Γ' ⇒ Δ'
is derivable.
Proof Firstly, we prove that if the sequent !Ξ_X, !Ω_X, Γ' ⇒ Δ' is derivable, then the sequent Σ is derivable too. If we substitute the formulas !A for the literals p_A in this sequent, then we obtain the derivable sequent
  !ℓΞ_X, !ℓΩ_X, Γ ⇒ Δ.    (9)
For any formula B ∈ Ξ_X, Ω_X the sequent ⇒ !ℓB is derivable (Lemma 4.2.1). The sequent Σ = (Γ ⇒ Δ) is derivable from these sequents and the sequent (9) by the rule (Cut).
Now we prove, by induction on the derivation of the sequent Σ, that if this sequent is derivable, then the sequent !Ξ_X, !Ω_X, Γ' ⇒ Δ' is derivable too.
Case 1. Σ is derived by the rule (L!):
  A, Γ₀ ⇒ Δ
  ──────────── (L!)
  !A, Γ₀ ⇒ Δ
Then A ∈ X. Hence, (p_A ⊸ A') ∈ Ξ_X. According to the induction hypothesis, the sequent !Ξ_X, !Ω_X, A', Γ₀' ⇒ Δ' is derivable. Hence, the sequent !Ξ_X, !Ω_X, p_A, Γ₀' ⇒ Δ' is also derivable:
  from !Ξ_X, !Ω_X, A', Γ₀' ⇒ Δ' and p_A, p_A ⊸ A' ⇒ A', by (Cut): !Ξ_X, !Ω_X, p_A, p_A ⊸ A', Γ₀' ⇒ Δ';
  by (L!), (C!): !Ξ_X, !Ω_X, p_A, Γ₀' ⇒ Δ'.
Case 2. Σ is derived by the rule (W!):
  Γ₀ ⇒ Δ
  ──────────── (W!)
  !A, Γ₀ ⇒ Δ
Then A ∈ X. Hence, (p_A ⊸ 1) ∈ Ξ_X. By the induction hypothesis, the sequent !Ξ_X, !Ω_X, Γ₀' ⇒ Δ' is derivable. Hence, the sequent !Ξ_X, !Ω_X, p_A, Γ₀' ⇒ Δ' is also derivable:
  from !Ξ_X, !Ω_X, Γ₀' ⇒ Δ', by (L1): !Ξ_X, !Ω_X, 1, Γ₀' ⇒ Δ';
  from it and p_A, p_A ⊸ 1 ⇒ 1, by (Cut): !Ξ_X, !Ω_X, p_A, p_A ⊸ 1, Γ₀' ⇒ Δ';
  by (L!), (C!): !Ξ_X, !Ω_X, p_A, Γ₀' ⇒ Δ'.
Case 3. Σ is derived by the rule (C!):
  !A, !A, Γ₀ ⇒ Δ
  ───────────────── (C!)
  !A, Γ₀ ⇒ Δ
Then A ∈ X. Hence, (p_A ⊸ p_A ⊗ p_A) ∈ Ξ_X. By the induction hypothesis, the sequent !Ξ_X, !Ω_X, p_A, p_A, Γ₀' ⇒ Δ' is derivable. The derivation of the sequent !Ξ_X, !Ω_X, p_A, Γ₀' ⇒ Δ' from this sequent is represented in Table 8:
  from !Ξ_X, !Ω_X, p_A, p_A, Γ₀' ⇒ Δ', by (L⊗): !Ξ_X, !Ω_X, p_A ⊗ p_A, Γ₀' ⇒ Δ';
  from it and p_A, p_A ⊸ p_A ⊗ p_A ⇒ p_A ⊗ p_A, by (Cut): !Ξ_X, !Ω_X, p_A, p_A ⊸ p_A ⊗ p_A, Γ₀' ⇒ Δ';
  by (L!), (C!): !Ξ_X, !Ω_X, p_A, Γ₀' ⇒ Δ'.    (Table 8)
Case 4. Σ is derived by the rule (R!):
  !A₁, ..., !Aₙ ⇒ C
  ─────────────────── (R!)
  !A₁, ..., !Aₙ ⇒ !C
By the induction hypothesis, the sequent !Ξ_X, !Ω_X, p_{A₁}, ..., p_{Aₙ} ⇒ C' is derivable. Consider formulas B₁, ..., Bₘ such that all the Bᵢ's are different and
  {B₁, ..., Bₘ} = {A₁, ..., Aₙ}.
The sequent !Ξ_X, !Ω_X, p_{B₁}, ..., p_{Bₘ} ⇒ C' is derivable (similarly to the case 3). Hence, by the definition of the set Ω_X,
  (p_{B₁} ⊗ ... ⊗ p_{Bₘ} ⊸ p_C) ∈ Ω_X.
Therefore, the sequent !Ξ_X, !Ω_X, p_{B₁}, ..., p_{Bₘ} ⇒ p_C is derivable. Similarly to the case 2, the sequent !Ξ_X, !Ω_X, p_{A₁}, ..., p_{Aₙ} ⇒ p_C is derivable from the last sequent.
The other cases, not associated with "!", are trivial.
It is easy to see that Lemma 4.2 follows from the last lemma. And Theorem 4 follows from Lemmas 4.1 and 4.2.
The theorems 1-4 provide the main
Theorem Linear affine logic is decidable.
5 Concluding remarks
Here we consider one more corollary of the computational interpretation of the normal fragments. In [4] a certain fragment H₂LL(!, ⊗, ⊸, ⊕) was considered. The sequents from this fragment were interpreted in terms of so-called non-determinate Petri nets with conditional transitions. In fact, it is possible to express the rules of the game A_Σ in terms of these nets and vice versa. As a corollary of this fact, the whole of linear logic is reduced to the fragment H₂LL(!, ⊗, ⊸, ⊕). Analogously, the multiplicative-exponential fragment of linear logic (MELL = LL(!, ?, ^⊥, ⊸, ⊗, ⅋, 1, ⊥)) is reduced to H₂LL(!, ⊗, ⊸). Hence, the following fragments are decidable or undecidable simultaneously:
(i) MELL = LL(!, ?, ⊗, ⅋, ⊸, ^⊥, 1, ⊥)
(ii) LL(!, ⊗, ⊸)
(iii) The normal fragment of LL(!, ?, ⊗, ⅋, ⊸)
The decidability problem for these fragments remains open.
Acknowledgments
I am very thankful to S.N. Artemov for guiding my research and for supporting my work. I owe M.I.
Kanovich a debt of gratitude for the fruitful discussion
of the results and his helpful comments. This work
would be impossible without their participation.
References
[1] J.-Y. Girard. Linear Logic. Theoretical Computer Science, 50:1-102, 1987.
[2] M.I. Kanovich. Horn Programming in Linear Logic is NP-complete. In Proc. 7th Annual IEEE Symposium on Logic in Computer Science, Santa Cruz, June 1992, pp. 200-210.
[3] M.I. Kanovich. Petri Nets, Horn Programs, Linear Logic, and Vector Games. In Proceedings of the International Symposium on Theoretical Aspects of Computer Software, TACS'94, Sendai, Japan, April 1994. Lecture Notes in Computer Science (ed. M. Hagiya and J. Mitchell), vol. 789, 1994, pp. 642-666.
[4] M.I. Kanovich. Simulating Linear Logic in 1-Only Linear Logic. CNRS, Laboratoire de Mathématiques Discrètes, Prétirage n° 94-02, January 1994, 81 p. Available by anonymous ftp from host lmd.univ-mrs.fr, file pub/kanovich/unit-only.dvi.
[5] P. Lincoln, J. Mitchell, A. Scedrov, and N. Shankar. Decision Problems for Propositional Linear Logic. In Proc. 31st IEEE Symposium on Foundations of Computer Science, pp. 662-671, 1990.
[6] A.S. Troelstra. Lectures on Linear Logic. CSLI Lecture Notes, no. 29, 1992.