A STUDY OF PARTIAL ORDERS ON NONNEGATIVE MATRICES AND
VON NEUMANN REGULAR RINGS
A dissertation presented to
the faculty of
the College of Arts and Sciences of Ohio University
In partial fulfillment
of the requirements for the degree
Doctor of Philosophy
Brian Scott Blackwood
August 2008
This dissertation entitled
A STUDY OF PARTIAL ORDERS ON NONNEGATIVE MATRICES AND
VON NEUMANN REGULAR RINGS
by
BRIAN SCOTT BLACKWOOD
has been approved for
the Department of Mathematics
and the College of Arts and Sciences by
Dinh V. Huynh
Professor of Mathematics
Benjamin M. Ogles
Dean, College of Arts and Sciences
BLACKWOOD, BRIAN SCOTT, Ph.D., August 2008, Mathematics
A STUDY OF PARTIAL ORDERS ON NONNEGATIVE MATRICES AND
VON NEUMANN REGULAR RINGS (84 pp.)
Directors of Dissertation: Surender K. Jain and Dinh V. Huynh
In this dissertation, we begin by studying nonnegative matrices under the minus
partial order. The study of the minus partial order was initiated by Hartwig and
Nambooripad independently. We first give the precise structure of a nonnegative
matrix dominated by a group-monotone matrix under this partial order. The special
cases of stochastic and doubly stochastic matrices are also considered.
We also
derive the class of nonnegative matrices dominated by a nonnegative idempotent
matrix.
This improves upon and generalizes previously known results of Bapat,
Jain and Snyder.
Next, we introduce the direct sum partial order and investigate the relationship
among different partial orders, namely, the minus, direct sum, and Loewner partial
orders on a von Neumann regular ring.
It is proven that the minus and direct
sum partial orders are equivalent on a von Neumann regular ring. On the set of
positive semidefinite matrices, we show that the direct sum partial order implies
the Loewner partial order. Various properties of the direct sum and minus partial
orders are presented. We provide answers to two of Hartwig’s questions regarding
the minus partial order. One of the main results gives an explicit form of maximal
elements in a given subring. Our result generalizes the concept of a shorted operator
of electrical circuits, as given by Anderson-Trapp. As an application of the main
theorem, the unique shorted operator has been derived.
Finally, we consider the parallel sum of two matrices over a regular ring. Previously known results of Mitra-Odell and Hartwig are generalized. We also obtain
a result on the harmonic mean of two idempotents.
Approved:
Dinh V. Huynh
Professor of Mathematics
Preface
The study of nonnegative matrices was initiated by Perron [43] in 1907 and
Frobenius [14] in 1912. In 1920, E.H. Moore [40] defined a unique inverse for every
square or rectangular matrix but it gained little notice. In 1955, R. Penrose [42]
showed that Moore’s generalized inverse is the unique matrix satisfying four equations, now known as the Moore-Penrose generalized inverse. Subsequently,
countless papers have been written on various topics related to generalized inverses.
In particular, there has been a great deal of interest in λ-monotone matrices, that
is, nonnegative matrices with a nonnegative generalized inverse. The investigation
began in 1972, when R. J. Plemmons and R. E. Cline [44] determined when a
nonnegative matrix has a nonnegative Moore-Penrose inverse. Soon thereafter, in
1974, A. Berman and R.J. Plemmons [6] provided the solution for when nonnegative
matrices possess a nonnegative group-inverse. Investigations into decompositions
of nonnegative matrices with nonnegative generalized inverses were made by many
authors, notably P. Flor, S. K. Jain and L. Snyder. In addition to the study of nonnegative matrices and generalized inverses for their own sake, there are applications
to a variety of other fields including statistics, economics and engineering.
Various partial orders on rings and matrices have been studied extensively. In
1934, K. Loewner [32] defined a partial order on the set of positive semidefinite
matrices which has been used extensively with shorted operators. The star partial
order of M. P. Drazin [12] is defined on a ring with involution. Motivated by Drazin, R. E.
Hartwig [19] and, independently, K.S.S. Nambooripad [41] defined a partial order
on a regular ring, namely the minus partial order. Both partial orders have been
used on rings and matrices. S. K. Mitra [34] was instrumental in the development
of the minus partial order as well as its applications to shorted operators.
Chapter 1 provides the basic definitions, notation, lemmas and theorems used in
the later chapters. A new lemma is stated and proven as well.
In Chapter 2, we begin our study of nonnegative matrices and the minus partial
order. Bapat, Jain and Snyder [4] described the structure of nonnegative matrices
which are dominated by a nonnegative idempotent matrix under the minus partial
order. We improve upon and generalize these results in our investigation into the
structure of nonnegative matrices A that are dominated by a given nonnegative
λ-monotone matrix B under the minus partial order. We provide necessary and
sufficient conditions for a nonnegative matrix A dominated by a nonnegative matrix
B that has a nonnegative group inverse. As a special case when B is an idempotent
matrix, we provide an explicit description of the class of nonnegative matrices A
dominated by a nonnegative idempotent matrix B.
Next, we turn our attention to partial orders on a von Neumann regular ring.
In Chapter 3, we introduce the direct sum partial order, defined as a ≤⊕ b if
bR = aR ⊕ (b − a)R. The relationship between the minus partial order and the direct sum
partial order is investigated.
We provide answers to two questions that Hartwig
[19] posed. Additionally, we give an explicit description of maximal elements in a
subring under the minus partial order. As a special case, we obtain a result similar
to the one obtained by Mitra-Puri [38] for the unique shorted operator.
In Chapter 4, we consider the parallel sum of two matrices over a regular ring.
This generalizes earlier results of Mitra-Odell and Hartwig. We also obtain a result
on the harmonic mean of two idempotents.
Acknowledgments
I would like to acknowledge the invaluable help, patience and encouragement of
my advisors, Professor S. K. Jain and Professor Dinh V. Huynh, during the preparation of this dissertation. Next, I would like to thank Dr. Ashish K. Srivastava
for the helpful conversations that we had about von Neumann regular rings. Also,
special thanks goes to Dr. Pramod Kanwar for his remarks and corrections to my
dissertation. I would like to thank my family for their support and understanding.
In particular, special thanks goes to my wife, Sara, and my children: Brian, William
and Tyler.
Contents

Abstract
Preface
Acknowledgments
1 Preliminaries
  1.1 Definitions, Notation, and Conventions
  1.2 Nonnegative Group-Monotone Matrices and the Minus Partial Order
  1.3 Partial Order on a von Neumann Regular Ring
  1.4 The Parallel Sum of Two Matrices Over a Regular Ring
2 Nonnegative Group-Monotone Matrices and the Minus Partial Order
  2.1 Group-Monotone Matrices
  2.2 Idempotent Matrices
3 Partial Order on a von Neumann Regular Ring
  3.1 Equivalence of Partial Orderings and Their Properties
  3.2 A Characterization of the Maximal Elements
  3.3 Applications
4 The Parallel Sum of Two Matrices Over a Regular Ring
  4.1 Commutative Regular Ring
  4.2 Regular Ring
Bibliography
Chapter 1
Preliminaries
This chapter provides the necessary background and results used throughout the
dissertation. The first section lists definitions, notation, and conventions that are
used in the dissertation. In the second section, results that are needed for Chapter 2
are given along with a new lemma. In the third section, we state important results
that will be needed for Chapter 3. In the final section, known results are presented
for use in Chapter 4.
1.1 Definitions, Notation, and Conventions

Definition 1.1.1. A matrix A = [a_ij] is nonnegative if a_ij ≥ 0 for all i, j, which
is denoted A ≥ 0. Likewise, A = [a_ij] is positive if a_ij > 0 for all i, j, which is
written A > 0.
Definition 1.1.2. An n × n matrix A = [a_ij] is said to be row stochastic if a_ij ≥ 0
and $\sum_{j=1}^{n} a_{ij} = 1$ for 1 ≤ i ≤ n. An n × n matrix A = [a_ij] is said to be column
stochastic if a_ij ≥ 0 and $\sum_{i=1}^{n} a_{ij} = 1$ for 1 ≤ j ≤ n. An n × n nonnegative matrix
is said to be doubly stochastic if the matrix is both row and column stochastic.
Definition 1.1.3. An n × n real matrix A is symmetric if A = A^T, where T
denotes the transpose. Likewise, an n × n complex matrix A is hermitian if
A = A*, where A* denotes the conjugate transpose of A.
Definition 1.1.4. Let A be an m × n matrix and let X be an n × m matrix. Consider the following equations:

$$AXA = A \tag{1}$$
$$XAX = X \tag{2}$$
$$(AX)^T = AX \tag{3}$$
$$(XA)^T = XA \tag{4}$$
$$A^k X A = A^k \tag{1^k}$$
$$AX = XA \tag{5}$$

For a matrix A and a non-empty subset λ of {1, 2, 3, 4, 5}, X is called a λ-inverse
of A if X satisfies equation (i) for each i ∈ λ. Note that equations (1^k) and (5) are
only valid for square matrices. A {1}-inverse of A will be written as A− or A(1).
Also, the {1, 2, 3, 4}-inverse of A is the unique Moore-Penrose inverse and is denoted A†.
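A minimal numerical sketch of the four Penrose equations, assuming NumPy (numpy.linalg.pinv returns the Moore-Penrose inverse); this is an illustration only, not part of the development:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 5))            # an arbitrary rectangular matrix
X = np.linalg.pinv(A)             # the Moore-Penrose inverse A†

assert np.allclose(A @ X @ A, A)          # (1)  AXA = A
assert np.allclose(X @ A @ X, X)          # (2)  XAX = X
assert np.allclose((A @ X).T, A @ X)      # (3)  (AX)^T = AX
assert np.allclose((X @ A).T, X @ A)      # (4)  (XA)^T = XA
```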
Definition 1.1.5. Let A be an n × n matrix. The smallest positive integer k such
that rank A^k = rank A^{k+1} is called the index of A. Every square matrix A has a
unique {1^k, 2, 5}-inverse, where k is the index of A, called the Drazin inverse and
denoted A^D. The system of equations (1), (2) and (5) has a unique solution X,
called the group inverse of A, if and only if rank(A) = rank(A²). The group inverse
of A is denoted by A#; it is the Drazin inverse in the case where the index is k = 1.
Definition 1.1.6. A matrix A is called λ-monotone if it has a nonnegative
λ-inverse. A matrix A is group-monotone if A# exists and is nonnegative.
Definition 1.1.7. A matrix J is a direct sum of matrices J1, ..., Jr, written
J = J1 ⊕ ··· ⊕ Jr, if

$$J = \begin{bmatrix} J_1 & 0 & \cdots & 0 \\ 0 & J_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & J_r \end{bmatrix}.$$
Definition 1.1.8. If A, B are m × n matrices then we say that A is dominated by B
under the minus partial order, written A ≤− B, if rank B = rank A + rank(B − A).
Theorem 1.2.1 will provide us with several equivalent statements.
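The rank condition of Definition 1.1.8 is immediate to test numerically; a minimal NumPy sketch (the helper name minus_leq is ours):

```python
import numpy as np

def minus_leq(A, B, tol=1e-10):
    """Test A <=^- B via Definition 1.1.8: rank B = rank A + rank(B - A)."""
    rk = lambda M: np.linalg.matrix_rank(M, tol)
    return rk(B) == rk(A) + rk(B - A)

# A sub-projection is dominated by a projection: 2 = 1 + 1.
B = np.diag([1., 1., 0.])
A = np.diag([1., 0., 0.])
print(minus_leq(A, B))   # True
```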
In Chapters 3 and 4, R will be a von Neumann regular ring with identity, unless
stated otherwise.
Definition 1.1.9. An element a ∈ R is called von Neumann regular if axa = a
for some x ∈ R, and x is called a von Neumann inverse of a. A ring R is called
von Neumann regular if every element in R is von Neumann regular.
For convenience, we will often use the terminology regular ring in place of von
Neumann regular ring.
Definition 1.1.10. A ring R is unit-regular if for each x ∈ R there is a unit (an
invertible element) u ∈ R such that xux = x.
Definition 1.1.11. Let S be the set of all regular elements in any ring R. For
a, b ∈ S we say that a ≤− b if there exists a {1}-inverse x of a such that ax = bx
and xa = xb. This is known as the minus partial order for regular rings. For the
ring of matrices over a field, a ≤− b if and only if rank(b − a) = rank(b) − rank(a).
Definition 1.1.12. For a, b ∈ R, we write a ≤⊕ b if bR = aR ⊕ (b − a)R; this
relation is called the direct sum partial order.
Definition 1.1.13. The Loewner partial order on the set of positive semidefinite
matrices S is defined as follows: for a, b ∈ S, a ≤L b if b − a ∈ S.
Definition 1.1.14. Let R be a regular ring and S be a subset of R. We define a
maximal element in C = {x ∈ S : x ≤⊕ a} as an element b ≠ a such that b ≤⊕ a
and if b ≤⊕ c ≤⊕ a then c = b or c = a.
Definition 1.1.15. We say that a, b ∈ R are parallel summable if a(a + b)(1)b
is invariant under the choice of {1}-inverse (a + b)(1). If a and b are parallel
summable then p(a, b) = a(a + b)(1)b is called the parallel sum of a and b.
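A minimal NumPy sketch of the parallel sum, using the Moore-Penrose inverse as one particular {1}-inverse; for invertible positive definite matrices it reduces to the classical formula for resistances in parallel:

```python
import numpy as np

def parallel_sum(A, B):
    """p(A, B) = A (A+B)^(1) B, computed with the Moore-Penrose inverse
    of A + B as one particular {1}-inverse."""
    return A @ np.linalg.pinv(A + B) @ B

A = np.diag([2., 3.])
B = np.diag([1., 6.])
print(parallel_sum(A, B))   # diag(2*1/3, 3*6/9) = diag(2/3, 2)
```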
Definition 1.1.16. In general, a ring R is called prime if aRb = (0) implies that
a = 0 or b = 0.
Definition 1.1.17. We say that every principal right ideal of a ring is uniquely
generated if any two elements of the ring that generate the same principal right
ideal must be right associates; in other words, if for all a, b in the ring R, aR = bR
implies a = bu for some unit u ∈ R.
Definition 1.1.18. The harmonic mean of two numbers is 2ab/(a + b). For a, b ∈ R, we
define the harmonic mean of a and b as 2a(a + b)(1)b.
1.2 Nonnegative Group-Monotone Matrices and the Minus Partial Order
We begin by stating some of the known facts and preliminary results that are referred
to throughout Chapter 2.
Theorem 1.2.1. ([34], Lemma 1.2) Let A and B be m × n matrices. Then the
following conditions are equivalent:
1. A ≤− B;
2. There exists a {1}-inverse A− of A such that (B − A)A− = 0 and A−(B − A) = 0;
3. Every {1}-inverse of B is a {1}-inverse of A;
4. Every {1}-inverse B− of B satisfies AB−(B − A) = 0 and (B − A)B−A = 0.
In other words, the parallel sum of A and B − A is the zero matrix.
Theorem 1.2.2. ([13], Theorem 2). If E is a nonnegative idempotent matrix of
rank r, then there exists a permutation matrix P such that

$$P E P^T = \begin{bmatrix} J & JD & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CJ & CJD & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

where all the diagonal blocks are square; J is a direct sum of matrices x_iy_i^T, x_i > 0,
y_i > 0 and y_i^T x_i = 1, i = 1, 2, ..., r; and C, D are nonnegative matrices of suitable
sizes. In particular, if E is symmetric then

$$P E P^T = \begin{bmatrix} J & 0 \\ 0 & 0 \end{bmatrix},$$

where J is a direct sum of matrices x_ix_i^T, x_i > 0 and x_i^T x_i = 1, i = 1, 2, ..., r.
Lemma 1.2.1. ([4], Lemma 3) Let A, E be n × n matrices such that E² = E, and
suppose that A ≤− E. Then A is idempotent and AE = A = EA.
The next lemma gives the converse when A is also an idempotent.
Lemma 1.2.2. Let A and B be n × n idempotent matrices. Then A ≤− B if and only if
AB = A = BA.

Proof. Let A, B be idempotent with AB = A = BA. As A is idempotent,
AAA = A²A = A² = A, so A is its own {1}-inverse. By Theorem 1.2.1, to show
that A ≤− B it suffices to exhibit a {1}-inverse A− of A such that (B − A)A− = 0
and A−(B − A) = 0. So choose A− = A. Then (B − A)A− = (B − A)A = BA − A² =
BA − A = 0 as BA = A. Also A−(B − A) = A(B − A) = AB − A² = AB − A = 0
as AB = A. Thus by Theorem 1.2.1, A ≤− B as required. The converse follows
from the previous lemma.
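Lemma 1.2.2 is easy to probe numerically with commuting 0–1 diagonal idempotents; a minimal sketch:

```python
import numpy as np

rk = np.linalg.matrix_rank
B = np.diag([1., 1., 1., 0.])      # an idempotent
A = np.diag([1., 1., 0., 0.])      # an idempotent with AB = A = BA
assert np.allclose(A @ B, A) and np.allclose(B @ A, A)
assert rk(B) == rk(A) + rk(B - A)  # the rank form of A <=^- B
```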
Theorem 1.2.3. ([24], Theorem 5.2) Let A be a nonnegative matrix and let
A(1,2) = p(A) ≥ 0, where p(A) = $\sum_{i=1}^{k} \alpha_i A^{m_i}$, α_i ≠ 0, m_i ≥ 0. Then there exists
a permutation matrix P such that

$$P A P^T = \begin{bmatrix} J & JD & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CJ & CJD & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

where C, D are nonnegative matrices of appropriate sizes and J is a direct sum of
matrices of the following types (not necessarily both):

(I) βxy^T, where x and y are positive unit vectors with y^T x = 1 and β is a positive
root of $\sum_{i=1}^{k} \alpha_i t^{m_i+1} = 1$;

(II) $$\begin{bmatrix} 0 & \beta_{12}\,x_1y_2^T & 0 & \cdots & 0 \\ 0 & 0 & \beta_{23}\,x_2y_3^T & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & & & 0 & \beta_{d-1,d}\,x_{d-1}y_d^T \\ \beta_{d1}\,x_dy_1^T & 0 & \cdots & \cdots & 0 \end{bmatrix},$$

where x_i, y_i are positive unit vectors of the same order with y_i^T x_i = 1; x_i and x_j,
i ≠ j, are not necessarily of the same order. The numbers β_12, ..., β_d1 are arbitrary
positive with d > 1 and d | m_i + 1 for some m_i such that the product β_12β_23···β_d1
is a common root of the following system of at most d equations in t:

$$\sum_{d \in \Lambda_0} \alpha_i t^{\frac{m_i+1}{d}} = 1, \qquad \sum_{d \in \Lambda_k} \alpha_i t^{\frac{m_i+1-k}{d}} = 0, \quad k \in \{1, 2, \ldots, d-1\},$$

where Λ_k = {d : d | m_i + 1 − k, d ≠ 1}, k ∈ {0, 1, ..., d − 1}, with the understanding
that if some Λ_k = ∅ then the corresponding equation is absent.

Conversely, suppose we have, for some permutation matrix P,

$$P A P^T = \begin{bmatrix} J & JD & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CJ & CJD & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

where C, D are arbitrary nonnegative matrices of appropriate sizes and J is a direct
sum of matrices of the following types (not necessarily both):

(I′) βxy^T, β > 0, where x, y are positive vectors with y^T x = 1;

(II′) the matrix of type (II) above with β_ij ≥ 0, where x_i and y_i are positive
vectors with y_i^T x_i = 1.

Then A(1,2) ≥ 0 and is equal to some polynomial in A with scalar coefficients.
1.3 Partial Order on a von Neumann Regular Ring
The following result of Jain and Prasad [27] will prove to be useful throughout
Chapter 3 and, specifically, for providing an equivalent definition of the minus partial
order on a regular ring.
Theorem 1.3.1. Let R be a ring and let a, b ∈ R such that a+b is a regular element.
Then the following are equivalent:
1. aR ⊕ bR = (a + b)R;
2. Ra ⊕ Rb = R(a + b);
3. aR ∩ bR = (0) = Ra ∩ Rb.
From Rao-Mitra [45], we have the following characterization of {a(1)} and {a(1,2)}.

Lemma 1.3.1. Let R be a ring and let a ∈ R. If x ∈ {a(1)} then {a(1)} =
x + (1 − xa)R + R(1 − ax). In addition, {a(1,2)} = {a(1)aa(1)}.
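Lemma 1.3.1 makes {a(1)} computationally enumerable over the reals; the sketch below samples random {1}-inverses from the parametrization, seeding it with the Moore-Penrose inverse (one convenient choice of x):

```python
import numpy as np

def random_one_inverse(a, rng):
    """Sample from {a^(1)} = x + (1 - xa)R + R(1 - ax) (Lemma 1.3.1),
    seeding the parametrization with x = a† (any {1}-inverse would do)."""
    n = a.shape[0]
    x = np.linalg.pinv(a)
    r1, r2 = rng.standard_normal((2, n, n))
    I = np.eye(n)
    return x + (I - x @ a) @ r1 + r2 @ (I - a @ x)

rng = np.random.default_rng(1)
a = np.diag([1., 1., 0.])                 # a singular matrix, so {a^(1)} is large
for _ in range(5):
    g = random_one_inverse(a, rng)
    assert np.allclose(a @ g @ a, a)      # every sample satisfies aga = a
```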
1.4 The Parallel Sum of Two Matrices Over a Regular Ring
From Hartwig-Shoaf [22], we have the following theorem.
Theorem 1.4.1. Let R be a regular ring and a, b ∈ R. Then the following are
equivalent:
1. (a + b)R = aR + bR and R(a + b) = Ra + Rb;
2. The triplets a(a + b)(1)a, b(a + b)(1)b, a(a + b)(1)b, and b(a + b)(1)a are invariant
under the choice of (a + b)(1), and a(a + b)(1)b = b(a + b)(1)a.
The following theorem of Gillman-Henriksen [15] shows that every commutative
regular ring with identity is a unit regular ring.
Theorem 1.4.2. For any element a of a commutative regular ring with identity,
there exists a unit u such that a²u = a.
Marks [33] proved the following result about unit regular rings.
Theorem 1.4.3. Let R be a von Neumann regular ring. Then R is unit-regular if
and only if every principal right ideal is uniquely generated.
From [36], Mitra-Odell proved the following result for matrices over any field.

Theorem 1.4.4. Let A and B be matrices of order m × n each, and let there exist
a matrix C such that {C−} = {A−} + {B−}. Then A and B are parallel summable
and C = P(A, B).
In [17], Hartwig extended the result of Mitra and Odell to matrices over a prime
regular ring.
Theorem 1.4.5. Suppose R is a prime regular ring such that A, B, C ∈ R^{m×n}, where
{C−} = {A−} + {B−}. Then A, B are parallel summable and C = P(A, B).
Chapter 2
Nonnegative Group-Monotone Matrices and the Minus Partial Order
In this chapter, our focus is on the minus partial order studied by several authors
([19], [34], [36], [41]). We continue our investigation into the structure of nonnegative
matrices A that are dominated by a given nonnegative λ-monotone matrix B
under the minus partial order. The present chapter studies the case when B is a
nonnegative matrix that possesses a nonnegative group inverse. In [4], Bapat, Jain
and Snyder considered the case when the matrix B is an idempotent matrix.
We provide necessary and sufficient conditions for a nonnegative matrix A
dominated by a nonnegative matrix B that has a nonnegative group inverse (Theorem
2.1.1). In the special case when B is an idempotent matrix, Theorem 2.2.1 provides
an explicit description of the class of nonnegative matrices A dominated by a
nonnegative idempotent matrix B.
For some applications of decompositions of nonnegative matrices, one may refer
to the following recent papers of Herrero-Ramirez-Thome [23] and Jain-Tynan [28].
2.1 Group-Monotone Matrices
Theorem 2.1.1. Let A, B be n × n nonnegative matrices such that B# ≥ 0,
rank B = r and rank A = s, s ≤ r. Then A ≤− B if and only if there exists a
permutation matrix P such that

$$P B P^T = \begin{bmatrix} J & JD & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CJ & CJD & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \quad\text{and}\quad P A P^T = \begin{bmatrix} A_{11} & A_{11}D & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CA_{11} & CA_{11}D & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

where C, D are nonnegative matrices of appropriate sizes and J is a direct sum of
matrices of the following types (not necessarily both):

(I) βxy^T, where x and y are positive unit vectors with y^T x = 1, β > 0;

(II) $$\begin{bmatrix} 0 & \beta_{12}\,x_1y_2^T & 0 & \cdots & 0 \\ 0 & 0 & \beta_{23}\,x_2y_3^T & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & & & 0 & \beta_{d-1,d}\,x_{d-1}y_d^T \\ \beta_{d1}\,x_dy_1^T & 0 & \cdots & \cdots & 0 \end{bmatrix}$$

with β_ij > 0, where x_i, y_i are positive unit vectors of the same order with
y_i^T x_i = 1; x_i and x_j, i ≠ j, are not necessarily of the same order; and

$$A_{11} = \begin{bmatrix} x_1 & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & x_r \end{bmatrix} \begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1r} \\ \vdots & & & \vdots \\ \alpha_{r1} & \alpha_{r2} & \cdots & \alpha_{rr} \end{bmatrix} \begin{bmatrix} y_1^T & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & y_r^T \end{bmatrix},$$

where α_ij ≥ 0 and A_11 ≤− J.
Proof. Let A, B be n × n nonnegative matrices such that rank B = r and rank
A = s, s ≤ r. As B# ≥ 0, by Theorem 1.2.3 [24] there exists a permutation matrix
P such that

$$P B P^T = \begin{bmatrix} J & JD & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CJ & CJD & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

where J is of type (I) or (II). For any {1}-inverse B− of B, P BP^T =
P BB−BP^T = P BP^T P B−P^T P BP^T. It is straightforward that

$$\begin{bmatrix} J^- & J^-D & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CJ^- & CJ^-D & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

is a {1}-inverse of P BP^T, because J− is a {1}-inverse of J. Furthermore, we choose
the {1}-inverse P B−P^T of P BP^T to be of this form.

As A ≤− B, by Theorem 1.2.1 [34], AB−B = AB−A = BB−A = A. Partitioning
P AP^T in conformity with P BP^T yields

$$P A P^T = \begin{bmatrix} A_{11} & A_{12} & A_{13} & A_{14} \\ A_{21} & A_{22} & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \\ A_{41} & A_{42} & A_{43} & A_{44} \end{bmatrix}.$$

Now P AB−BP^T = P AP^T P B−P^T P BP^T; carrying out the block multiplication, its
i-th block row equals

$$\begin{bmatrix} A_{i1}J^-J + A_{i3}CJ^-J & A_{i1}J^-JD + A_{i3}CJ^-JD & 0 & 0 \end{bmatrix},$$

and since P AB−BP^T = P AP^T, we conclude that A_{i3} = 0 and A_{i4} = 0 for
i = 1, 2, 3, 4. In addition, P BB−AP^T = P BP^T P B−P^T P AP^T equals

$$\begin{bmatrix} JJ^-A_{11} + JJ^-DA_{21} & JJ^-A_{12} + JJ^-DA_{22} & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CJJ^-A_{11} + CJJ^-DA_{21} & CJJ^-A_{12} + CJJ^-DA_{22} & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} = P A P^T,$$

and it follows that A_{2j} = 0 and A_{4j} = 0 for j = 1, 2, 3, 4. Thus

$$P A B^- B P^T = \begin{bmatrix} A_{11}J^-J & A_{11}J^-JD & 0 & 0 \\ 0 & 0 & 0 & 0 \\ A_{31}J^-J & A_{31}J^-JD & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad P B B^- A P^T = \begin{bmatrix} JJ^-A_{11} & JJ^-A_{12} & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CJJ^-A_{11} & CJJ^-A_{12} & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

where P AB−BP^T = P AP^T = P BB−AP^T.

We will now use the following list (1) of equations to determine P AP^T:

$$A_{11} = A_{11}J^-J = JJ^-A_{11}, \quad A_{12} = A_{11}J^-JD = JJ^-A_{12}, \quad A_{31} = A_{31}J^-J = CJJ^-A_{11}, \quad A_{32} = A_{31}J^-JD = CJJ^-A_{12}. \tag{1}$$

From the relations above, A_11 = A_11J−J = JJ−A_11, A_12 = A_11D, A_31 = CA_11,
and A_32 = CJJ−A_12 = CA_11J−JD = CA_11D. As a result,

$$P A P^T = \begin{bmatrix} A_{11} & A_{11}D & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CA_{11} & CA_{11}D & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$

We claim: A_11 ≤− J, or equivalently rank(J) = rank(A_11) + rank(J − A_11).
First note that rank(J) = rank(B) = rank(P BP^T) and rank(A_11) = rank(A) =
rank(P AP^T). Because rank(P BP^T) = rank(P AP^T) + rank(P BP^T − P AP^T), it
follows that rank(J) = rank(A_11) + rank(J − A_11). Thus we obtain A_11 ≤− J.

Recall that A_11 = A_11J−J = JJ−A_11, where J is a direct sum of matrices of type
(I) or (II). Now if J′ is any type (I) summand of J, then J′ = βxy^T where β is a
positive scalar and x, y are positive unit vectors. Choose (J′)− = (1/β)xy^T. Thus

$$(J')^-J' = \tfrac{1}{\beta}xy^T(\beta xy^T) = xy^T = (\beta xy^T)\left(\tfrac{1}{\beta}xy^T\right) = J'(J')^-.$$

Now if J″ is any type (II) summand of J, then J″ is the cyclic matrix displayed in
the statement of the theorem, for some positive integer d > 1, and we may take

$$(J'')^- = \begin{bmatrix} 0 & \cdots & \cdots & 0 & \frac{1}{\beta_{d1}}\,x_1y_d^T \\ \frac{1}{\beta_{12}}\,x_2y_1^T & 0 & & & 0 \\ 0 & \frac{1}{\beta_{23}}\,x_3y_2^T & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & \frac{1}{\beta_{d-1,d}}\,x_dy_{d-1}^T & 0 \end{bmatrix}.$$

Then

$$J''(J'')^- = \begin{bmatrix} x_1y_1^T & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & x_dy_d^T \end{bmatrix} = (J'')^-J''.$$

Thus

$$J^-J = JJ^- = \begin{bmatrix} x_1y_1^T & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & x_ry_r^T \end{bmatrix}$$

is a block diagonal matrix such that each diagonal block is of rank one, where the
summands of J are of type (I), (II), or both. Note that the list of equations (1) is
valid for any given choice of {1}-inverse of J; we have chosen J# for J−. Now
partition A_11 in conformity with J−J = JJ−, say A_11 = [A′_ij], i, j = 1, ..., r.
From A_11J−J = A_11 = JJ−A_11, it follows that x_iy_i^T A′_ij = A′_ij = A′_ij x_jy_j^T.
Clearly, each A′_ij must be of rank 0 or 1. Now A′_ij = x_iy_i^T A′_ij x_jy_j^T; thus we may
write A′_ij = α_ij x_iy_j^T. Hence

$$A_{11} = \begin{bmatrix} \alpha_{11}\,x_1y_1^T & \cdots & \alpha_{1r}\,x_1y_r^T \\ \vdots & & \vdots \\ \alpha_{r1}\,x_ry_1^T & \cdots & \alpha_{rr}\,x_ry_r^T \end{bmatrix} = \begin{bmatrix} x_1 & & \\ & \ddots & \\ & & x_r \end{bmatrix} \begin{bmatrix} \alpha_{11} & \cdots & \alpha_{1r} \\ \vdots & & \vdots \\ \alpha_{r1} & \cdots & \alpha_{rr} \end{bmatrix} \begin{bmatrix} y_1^T & & \\ & \ddots & \\ & & y_r^T \end{bmatrix}.$$

The converse is clear.
The proofs of the following corollaries follow immediately from Theorem 2.1.1.
Matrices of type (I) and (II) are the same as in Theorem 2.1.1.
Corollary 2.1.1. Let A, B be n × n nonnegative matrices such that B# ≥ 0, B is
row (column) stochastic, rank B = r and rank A = s, s ≤ r. Then A ≤− B if and
only if there exists a permutation matrix P such that

$$P B P^T = \begin{bmatrix} J & 0 \\ CJ & 0 \end{bmatrix} \quad\text{and}\quad P A P^T = \begin{bmatrix} A_{11} & 0 \\ CA_{11} & 0 \end{bmatrix},$$

where C is a nonnegative matrix of the appropriate size and J is a direct sum of
type (I) and (II) matrices with β_ij > 0, where x_i, y_i are positive unit vectors of the
same order with y_i^T x_i = 1; x_i and x_j, i ≠ j, are not necessarily of the same order;
and

$$A_{11} = \begin{bmatrix} x_1 & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & x_r \end{bmatrix} \begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1r} \\ \vdots & & & \vdots \\ \alpha_{r1} & \alpha_{r2} & \cdots & \alpha_{rr} \end{bmatrix} \begin{bmatrix} y_1^T & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & y_r^T \end{bmatrix},$$

where α_ij ≥ 0 and A_11 ≤− J.

Note that if B were column stochastic, then

$$P B P^T = \begin{bmatrix} J & JD \\ 0 & 0 \end{bmatrix} \quad\text{and}\quad P A P^T = \begin{bmatrix} A_{11} & A_{11}D \\ 0 & 0 \end{bmatrix}$$

above.
Corollary 2.1.2. Let A, B be n × n nonnegative matrices such that B# ≥ 0, B is
doubly stochastic, rank B = r and rank A = s, s ≤ r. Then A ≤− B if and only
if there exists a permutation matrix P such that

P BP^T = [J] and P AP^T = [A_11],

where J is a direct sum of type (I) and (II) matrices with β_ij > 0, where x_i, y_i are
positive unit vectors of the same order with y_i^T x_i = 1; x_i and x_j, i ≠ j, are not
necessarily of the same order, and

$$A_{11} = \begin{bmatrix} x_1 & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & x_r \end{bmatrix} \begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1r} \\ \vdots & & & \vdots \\ \alpha_{r1} & \alpha_{r2} & \cdots & \alpha_{rr} \end{bmatrix} \begin{bmatrix} y_1^T & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & y_r^T \end{bmatrix},$$

where α_ij ≥ 0 and A_11 ≤− J.
The following example demonstrates that when A ≤− B, with B# nonnegative,
A# need not be nonnegative.

0


1

Example 2.1.1. Let B = 


0


0


0 1 0
0




1

0 0 0

 and A = 




1 0 0
0




0
0 0 0

1 1 0


1 0 0

 where rank(B) = 3


0 0 0


0 0 0


and rank(A) = 2. Obviously both A, B ≥ 0. Now B #
0


0

=


1


0
1 0 0


0 1 0

 ≥ 0. We


0 0 0


0 0 0
proceed to show that A is of the form stated in the theorem and A ≤− B. There
35

0


1

exists a permutation matrix P = 

0



0

0


0

T
P BP = 


1


0

1 0 0


0 0 0

 such that

0 1 0



0 0 1


1 0 0
1




1
0 1 0


 and P AP T = 




0 0 0
0




0
0 0 0


1 0 0


0 1 0

 where C, D = 0,


0 0 0


0 0 0

0 1 0





J = 
0 0 1 is a type (II) matrix




1 0 0




and A11


1 0 0 1 1 0 1 0 0 1 1 0

 




 










= 0 1 0 1 0 1 0 1 0 = 1 0 1
.


 




 


0 0 0
0 0 1 0 0 0 0 0 1




0 1 0 1 1 0

 


 

 

In addition, J − A11 = 
0 0 1 − 1 0 1 =

 


 

1 0 0
0 0 0


−1 0 0




−1 0 0.






1 0 0
Therefore, rank A11 + rank (J − A11) = 2 + 1 = 3 = rank J. Thus A ≤− B.
Furthermore, rank (A2) = 2 = rank (A) and so A# exists.
36

−1


1

#
But A = 

0



0
2.2 Idempotent Matrices
As a special case of the previous theorem, we provide necessary and sufficient conditions that give the structure of a nonnegative matrix A satisfying A ≤− B where
B is a nonnegative idempotent. The condition obtained in this situation is simpler
to verify.
Theorem 2.2.1. Let A, B be n × n nonnegative matrices such that B² = B,
rank B = r and rank A = s. Then A ≤− B if and only if there exists a permutation
matrix P such that

$$P B P^T = \begin{bmatrix} J & JD & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CJ & CJD & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \quad\text{and}\quad P A P^T = \begin{bmatrix} A_{11} & A_{11}D & 0 & 0 \\ 0 & 0 & 0 & 0 \\ CA_{11} & CA_{11}D & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

where C, D are nonnegative matrices of appropriate sizes and J is a direct sum of
matrices x_iy_i^T, where x_i, y_i are positive unit vectors of the same order with
y_i^T x_i = 1; x_i and x_j, i ≠ j, are not necessarily of the same order; and

$$A_{11} = \begin{bmatrix} x_1 & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & x_r \end{bmatrix} E \begin{bmatrix} y_1^T & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & y_r^T \end{bmatrix},$$

where E is a nonnegative idempotent r × r matrix.
Proof. As B ≥ 0 and B is idempotent, B is its own group inverse. By assumption
A ≤− B, and so by Theorem 2.1.1 P BP^T and P AP^T have the forms displayed in
the statement, where C, D are nonnegative matrices of appropriate sizes, J is a
direct sum of matrices x_iy_i^T with x_i, y_i positive unit vectors of the same order and
y_i^T x_i = 1 (x_i and x_j, i ≠ j, not necessarily of the same order), and A_11 = X[α_ij]Y^T,
where we write X = diag(x_1, ..., x_r) and Y^T = diag(y_1^T, ..., y_r^T) for the outer block
factors.

As B is idempotent, it follows from Lemma 1.2.1 that A is idempotent, and hence
P AP^T is idempotent. Thus A_11² = A_11 because P AP^T is idempotent. Furthermore,
since X has a left inverse and Y^T has a right inverse,

$$E = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1r} \\ \vdots & & & \vdots \\ \alpha_{r1} & \alpha_{r2} & \cdots & \alpha_{rr} \end{bmatrix}$$

is also an idempotent matrix because y_i^T x_i = 1.

To prove the converse, we first show that A_11 ≤− J, i.e. rank(J) = rank(A_11) +
rank(J − A_11). We have by assumption that rank B = r and rank A = s. It
follows that rank P BP^T = r and rank P AP^T = s. Now as rank P BP^T is completely
determined by J and rank P AP^T is completely determined by A_11, rank J = r and
rank A_11 = s. Now, since J = diag(x_1y_1^T, ..., x_ry_r^T) = XY^T,

$$J - A_{11} = XY^T - XEY^T = X\,[I - E]\,Y^T,$$

so rank(X[I − E]Y^T) ≤ rank([I − E]). But as Y^TX[I − E]Y^TX = [I − E] (note
that Y^TX = I because y_i^T x_i = 1),

rank(X[I − E]Y^T) = rank([I − E]).

Since an idempotent matrix is diagonalizable, there exists an invertible matrix U
such that U⁻¹EU = diag(1, ..., 1, 0, ..., 0), where there are exactly s ones along the
diagonal, as rank E = s. Now U⁻¹[I − E]U = I − U⁻¹EU. Hence rank(J − A_11) =
r − s. Thus rank(J) = r = s + (r − s) = rank(A_11) + rank(J − A_11). This yields
A_11 ≤− J and hence A ≤− B as required.
Although we do not specifically state corollaries for the row, column and doubly
stochastic idempotent cases, they are analogous to the corollaries given previously
for the group-monotone case. However, we do present the symmetric idempotent
case, which follows from the previous theorem.
Corollary 2.2.1. Let A, B be n × n nonnegative matrices such that B² = B, B is
symmetric, rank B = r and rank A = s. Then A ≤− B if and only if there exists
a permutation matrix P such that

$$P B P^T = \begin{bmatrix} J & 0 \\ 0 & 0 \end{bmatrix} \quad\text{and}\quad P A P^T = \begin{bmatrix} A_{11} & 0 \\ 0 & 0 \end{bmatrix},$$

where J is a direct sum of matrices x_ix_i^T, where x_i is a positive unit vector with
x_i^T x_i = 1; x_i and x_j, i ≠ j, are not necessarily of the same order; and

$$A_{11} = \begin{bmatrix} x_1 & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & x_r \end{bmatrix} E \begin{bmatrix} x_1^T & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & x_r^T \end{bmatrix},$$

where E is a nonnegative idempotent r × r matrix.
We present the following example to demonstrate the structure described in
Theorem 2.2.1. In this case, B is a doubly stochastic idempotent matrix.

1 0 0


0 1 1
 2 2
Example 2.2.1. Let B = 

0 1 1
 2 2


0 0 0


0
1




0
0


 and A = 



0
0





1
0

0 0 0


0 0 0

. Clearly A ≤− B.

0 0 0



0 0 1
Following the notation in the theorem P = I. We may express

 


T
0
0  x1 0 0  y1T 0 0 
x1 y1

 



 


 =  0 x 0   0 yT 0 
T
B=
0
x
y
0

 


2 2
2
2

 



 


T
T
0
0
x3 y3
0 0 x3
0 0 y3

1


0

=

0



0



1 0 0

 1 0


0 1 0 
 2 
 0 0
and clearly A = 


0 1 0 
 2 



 0 0
0 0 1


0 0 
 1 0 0 0




1
0


2
 0 1 1 0





1
0 

2

0
0
0
1

0 1




0 1 0 0 0
1 0 0












0 0 1 1 0 where E = 0 0 0
 is an








1 0 0 0 1
0 0 1
idempotent.
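Example 2.2.1 can likewise be verified numerically; a minimal NumPy sketch (the factors X, E, Y^T are transcribed from the example):

```python
import numpy as np

rk = np.linalg.matrix_rank
B = np.array([[1, 0, 0, 0], [0, .5, .5, 0], [0, .5, .5, 0], [0, 0, 0, 1.]])
A = np.array([[1., 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1]])
assert np.allclose(B @ B, B)                 # B is a doubly stochastic idempotent
assert rk(B) == rk(A) + rk(B - A) == 3       # A <=^- B

X = np.array([[1., 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1]])    # diag(x1, x2, x3)
Yt = np.array([[1., 0, 0, 0], [0, .5, .5, 0], [0, 0, 0, 1]])   # diag(y1^T, y2^T, y3^T)
E = np.diag([1., 0, 1])
assert np.allclose(X @ Yt, B) and np.allclose(X @ E @ Yt, A)
assert np.allclose(E @ E, E)
```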
Recall that according to Lemma 1.2.1, A ≤− B where B is an idempotent
matrix implies that A is an idempotent matrix. In Corollary 2.2.1, we characterized
matrices dominated by a symmetric idempotent matrix B. A natural question is
whether or not A is also a symmetric idempotent matrix. In the following example
we show that this need not be the case.




0 1 0
1 0 0











Example 2.2.2. Let B = 0 1 0 and A = 0 1 0
 .








0 1 0
0 0 1
symmetric idempotent matrix.
Clearly, B is a
Now A ≤− B as rank(B
−
 − A) =
 2 = rank(B)
 0 1 0





rank(A) and, furthermore, A is idempotent. But A = 0 1 0
 6=




0 1 0
AT and thus A is not symmetric.
0 0 0




1 1 1 =






0 0 0
Chapter 3
Partial Order on a von Neumann Regular Ring
Various partial orders on an abstract ring or on the ring of matrices over the real
and complex numbers have been introduced by several authors, either as an abstract
study of questions in algebra or for the study of problems in engineering and
statistics (see, e.g., [1], [3], [12], [26], [32] and [38]). Also of interest are partial
orders on semigroups, which have been studied by several authors (see, e.g., [19],
[39], and [41]).

In this chapter we study the well-known minus partial order on a von Neumann
regular ring, which is simply a generalization of a partial order on the set of
idempotents in a ring introduced by Kaplansky. Recall that for any two elements a, b in a
von Neumann regular ring R, we say a ≤− b (and read it as a is less than or equal
to b under the minus partial order) if there exists an x ∈ R such that ax = bx and
to b under the minus partial order) if there exists an x ∈ R such that ax = bx and
xa = xb where axa = a. Furthermore, we define the partial order ≤⊕ by saying
that a ≤⊕ b if bR = aR ⊕ (b − a)R, and call it the direct sum partial order. The
Loewner partial order on the set of positive semidefinite matrices S is defined by
saying that for a, b ∈ S, a ≤L b if b − a ∈ S.
The direct sum partial order is shown to be equivalent to the minus partial order
on a von Neumann regular ring. We also demonstrate that the minus partial order
on the subset of positive semidefinite matrices in the matrix ring over the field of
complex numbers implies the Loewner partial order. In addition, two questions
posed by Hartwig about the minus partial order are answered.
One of the main results of this chapter gives an explicit description of maximal
elements in a subring under the minus partial order (Theorem 3.2.1). As a special case,
we obtain a result identical to the one obtained by Mitra-Puri [38] for the unique
shorted operator, which, in turn, is equivalent to the formula of Anderson-Trapp
([3], Theorem 1) for computing the shorted operator of a shorted electrical circuit
(Theorem 3.3.1).
3.1 Equivalence of Partial Orderings and Their Properties
We now investigate properties of the direct sum partial order and its relation to the
minus partial order.
Let R be a regular ring. Recall a ≤⊕ b if and only if bR = aR ⊕ (b − a)R. By
Theorem 1.3.1 this is equivalent to Rb = Ra ⊕ R(b − a). It is straightforward to see
that ≤⊕ is a partial order.
Next, we show that the minus partial order is equivalent to the direct sum partial
order on a regular ring.
Theorem 3.1.1. Let R be a regular ring and a, b ∈ R. Then the following are
equivalent:
1. a ≤⊕ b;
2. a ≤− b;
3. {b(1)} ⊆ {a(1)}.
Proof. 1 ⟹ 2: As a ≤⊕ b, bR = aR ⊕ (b − a)R. It follows that aR ⊆ bR. Hence,
a ∈ bR and thus a = bx for some x ∈ R. As R is a regular ring, for any g ∈ {b(1)},
bgb = b. Thus bga = bg(bx) = (bgb)x = bx = a. Now aga = bga − (b − a)ga =
a − (b − a)ga. Thus a − aga = (b − a)ga. But aR ∩ (b − a)R = (0) and a − aga =
(b − a)ga ∈ aR ∩ (b − a)R. Hence a − aga = 0 and (b − a)ga = 0. Therefore
aga = a = bga and hence {b(1)} ⊆ {a(1)}. Indeed, this demonstrates that 1 ⟹ 3.

Now choose x = gag. Then axa = a(gag)a = aga = a and x ∈ {a(1)}. Now
bx = (bga)g = ag as bga = a. Furthermore, ax = agag = ag as aga = a. Thus
ax = bx. Now bg(b − a) = bgb − bga = b − a and (b − a)g(b − a) = bg(b − a) −
ag(b − a) = (b − a) − ag(b − a). Hence ag(b − a) = (b − a) − (b − a)g(b − a) ∈
aR ∩ (b − a)R = (0). Thus (b − a) = (b − a)g(b − a) and ag(b − a) = 0. It follows
that agb = aga = a. Now xb = (gag)b = g(agb) = ga and xa = gaga = ga.
Therefore xb = xa. Thus ax = bx and xa = xb for some x ∈ {a(1)} and it follows
that a ≤− b.

2 ⟹ 3: This is well known; it is proven here for completeness. As a ≤− b,
there exists some x ∈ {a(1)} such that ax = bx and xa = xb. It follows that
a = axa = bxa = axb and, for any y ∈ {b(1)}, aya = (axb)y(bxa) = ax(byb)xa =
axbxa = (axb)xa = axa = a. Thus {b(1)} ⊆ {a(1)}.

3 ⟹ 1: Given that {b(1)} ⊆ {a(1)}, ab(1)a = a for any b(1) ∈ {b(1)}. By
Lemma 1.3.1, {b(1)} = g + (1 − gb)R + R(1 − bg) for g ∈ {b(1)}. For each x ∈ {b(1)}
there exist some r1, r2 ∈ R such that x = g + (1 − gb)r1 + r2(1 − bg). Multiplying
on the left and right by a yields axa = a[g + (1 − gb)r1 + r2(1 − bg)]a. Hence
a = axa = aga + a(1 − gb)r1a + ar2(1 − bg)a = a + a(1 − gb)r1a + ar2(1 − bg)a.
Thus a(1 − gb)r1a + ar2(1 − bg)a = 0. As this holds for all r1 and r2, we can take,
in particular, r2 = 0, which gives a(1 − gb)r1a = 0 for all r1 and hence
a(1 − gb)Ra = (0). Similarly, by taking r1 = 0, we conclude aR(1 − bg)a = (0).

Now (a(1 − gb)R)² = (a(1 − gb)R)(a(1 − gb)R) = (a(1 − gb)Ra)((1 − gb)R) =
(0)((1 − gb)R) = (0). Similarly (R(1 − bg)a)² = (0). Since R is a regular ring, it
has no nonzero nilpotent left or right ideal. Thus a(1 − gb)R = (0) and
R(1 − bg)a = (0).

As 1 ∈ R, a(1 − gb) = 0 and (1 − bg)a = 0. Therefore, bga = a = agb. Now for
any t1, t2 ∈ R, at1 = (bga)t1 = b(gat1) ∈ bR and (b − a)t2 = bt2 − at2 = bt2 −
(bga)t2 = b(t2 − gat2) ∈ bR. Hence aR + (b − a)R ⊆ bR, and since b = a + (b − a),
aR + (b − a)R = bR.

Now we want to show that aR ∩ (b − a)R = (0). For some u, v ∈ R, suppose
au = (b − a)v ∈ aR ∩ (b − a)R. Then au = agau = ag(b − a)v = agbv − agav =
av − av = 0 as a = agb. Thus aR ∩ (b − a)R = (0) and so bR = aR ⊕ (b − a)R.
Hence, a ≤⊕ b as required.
Remark 3.1.1. It is stated in [35] that Hartwig-Luh have demonstrated that, when
R is a regular ring, 2 is equivalent to 3 under the additional hypothesis that a ∈ bRb.
We also note that proving 2 ⟹ 1 directly requires a brief argument.
The Corollary that follows shows, in particular, that the minus partial order
defined on the set of idempotents is the same as the partial order defined by Kaplansky
on idempotents (see, e.g., Lam [29], page 323).
Corollary 3.1.1. Let R be a regular ring and a, b ∈ R such that b = b². Then the
following are equivalent:
1. a ≤− b;
2. a = a² = ab = ba.
Proof. 1 ⟹ 2: If a ≤⊕ b then bR = aR ⊕ (b − a)R and Rb = Ra ⊕ R(b − a). It
follows that aR ⊆ bR and Ra ⊆ Rb. Hence a = bx and a = yb for some x, y ∈ R. So
ab = (yb)b = yb² = yb = a as b is idempotent. Similarly, ba = b(bx) = b²x = bx = a
as b is idempotent. Now a − a² = a(1 − a) ∈ aR and a − a² = ba − a² = (b − a)a ∈
(b − a)R. But aR ∩ (b − a)R = (0). Therefore a − a² = 0 and a = a². Hence,
a = a² = ab = ba.

2 ⟹ 1: Conversely, for any x ∈ R, as b = a + (b − a), it follows that bx =
[a + (b − a)]x = ax + (b − a)x. So bR ⊆ aR + (b − a)R. Now for any r1 ∈ R,
ar1 = (ba)r1 = b(ar1) ∈ bR. Also, for any r2 ∈ R, (b − a)r2 = br2 − ar2 = br2 − bar2 =
b(r2 − ar2) ∈ bR. As right ideals are closed under addition, aR + (b − a)R ⊆ bR.
Hence bR = aR + (b − a)R.

Now we want to show that aR ∩ (b − a)R = (0). Suppose au = (b − a)v ∈
aR ∩ (b − a)R for u, v ∈ R. Multiplying on the left by a yields a²u = abv − a²v.
By assumption, a = a² = ab, so au = a²u = abv − a²v = av − av = 0. Thus
aR ∩ (b − a)R = (0) and a ≤⊕ b.
The proposition that follows shows that, under a certain condition, in a subring
S of a regular ring R, the minus partial order on R implies the direct sum partial
order on S.
Proposition 3.1.1. Let S be a subring of a regular ring R such that bR ∩ S = bS
for some a, b ∈ S. Then a ≤− b on R implies a ≤⊕ b on S. In the case where b is
an idempotent, bR ∩ S = bS if and only if a ≤⊕ b on S.
Proof. Suppose that a, b ∈ S and bR∩S = bS. By Theorem 3.1.1, a ≤− b on R if and
only if bR = aR ⊕ (b − a)R. We have, bS ⊂ aS ⊕ (b − a)S ⊆ aR ∩ S ⊕ (b − a)R ∩ S ⊆
(aR ⊕ (b − a)R) ∩ S = bR ∩ S = bS. Thus aS ⊕ (b − a)S = bS and so a ≤⊕ b.
For the second part of the proposition, we just need to show the “if” part. Let
a, b ∈ S and suppose a ≤⊕ b on S. Clearly, bS ⊆ bR ∩ S. For any x ∈ R, if bx ∈ S,
then bx = bbx = b(bx) ∈ bS . Thus bR ∩ S = bS.
Next, we show that the condition bR ∩ S = bS is not necessary in the first part
of the proposition.
Example 3.1.1. Let R be the ring of 3 × 3 matrices and S be the subring consisting
of upper triangular matrices. Let

$$b = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \quad\text{and}\quad a = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

Clearly bR ∩ S ≠ bS, a ≤− b on R and a ≤⊕ b on S as well.
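Combined with condition 3 of Theorem 3.1.1, the example can also be probed numerically: every {1}-inverse of b, sampled via the parametrization of Lemma 1.3.1, is automatically a {1}-inverse of a. A minimal NumPy sketch:

```python
import numpy as np

b = np.array([[0., 1, 0], [0, 0, 1], [0, 0, 0]])
a = np.array([[0., 1, 0], [0, 0, 0], [0, 0, 0]])

rng = np.random.default_rng(2)
x, I = np.linalg.pinv(b), np.eye(3)
for _ in range(100):
    r1, r2 = rng.standard_normal((2, 3, 3))
    g = x + (I - x @ b) @ r1 + r2 @ (I - b @ x)   # g ranges over {b^(1)}
    assert np.allclose(b @ g @ b, b)              # g is a {1}-inverse of b ...
    assert np.allclose(a @ g @ a, a)              # ... hence of a, since a <=^- b
```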
Corollary 3.1.2. Let S be a subring of a regular ring R such that a, b ∈ S and
a ≤− b on R where b is idempotent. Then a ≤⊕ b on S.
Proof. We always have bS ⊆ aS ⊕ (b − a)S. We claim that for any s1, s2 ∈ S there
exists s3 ∈ R such that as1 + (b − a)s2 = bs3; indeed, a ≤− b on R gives
bR = aR ⊕ (b − a)R by Theorem 3.1.1, so as1 + (b − a)s2 ∈ bR. Now as1 + (b − a)s2 =
bs3 = bbs3 = b[as1 + (b − a)s2] ∈ bS. Thus aS ⊕ (b − a)S ⊆ bS and hence a ≤⊕ b on S.
Remark 3.1.2. One can choose the subring S in the above proposition to be any
right (left) non-singular ring as such a ring is always embeddable in a right (left)
maximal quotient ring which is a regular ring (Lam [30], page 376, 13.36).
In [19], Hartwig posed the following two questions, among others:

(1) If R is a regular ring and aR ∩ cR = (0) = Ra ∩ Rc, does there exist a(1)
such that a(1)c = 0 = ca(1)?

(2) Does a ≤− c, b ≤− c, aR ∩ bR = (0) = Ra ∩ Rb imply a + b ≤− c?

Below, we answer Question 1 in the affirmative and Question 2 in the negative
by providing a counterexample. We do not know whether or not someone has
answered these questions, as we could not find this in the literature. In any case,
we believe that the answers we have given would be of interest to the reader.
It is first necessary to prove a lemma. Note that, in a regular ring, if we let
b = a + c, then a ≤− a + c if and only if a ≤⊕ a + c, if and only if (a + c)R = aR ⊕ cR.
Lemma 3.1.1. Let R be a regular ring and let a, b, c ∈ R with b = a + c. Then the
following statements are equivalent:
1. a ≤− b;
2. aR ∩ cR = (0) = Ra ∩ Rc.
Proof. It follows immediately from Theorem 3.1.1 that a ≤− b implies aR ∩ cR =
(0) = Ra ∩ Rc.
Conversely, for any g ∈ {b(1)}, bgb = b. Now a + c = b = bgb = bg(a + c) =
bga + bgc and consequently a − bga = bgc − c ∈ Ra ∩ Rc.
But by assumption
Ra∩Rc = (0). Thus a−bga = 0 and bgc−c = 0. It follows that a = bga and c = bgc.
So a = bga = (a + c) ga = aga + cga which gives us a − aga = cga ∈ aR ∩ cR = (0).
Thus a − aga = 0 and a = aga for any g ∈ {b(1)}. Hence, {b(1)} ⊆ {a(1)} and by
the previous theorem, a ≤− b.
Proposition 3.1.2. (Hartwig Question 1) If R is a regular ring and aR ∩ cR =
(0) = Ra ∩ Rc, for some nonzero elements a, c ∈ R, then there exists a nonzero a(1)
such that a(1)c = 0 = ca(1).
Proof. Let b = a+c. By the previous lemma, a ≤− b. Then, by the definition of the
minus partial order, for some a(1), aa(1) = ba(1) and a(1)a = a(1)b. Now substituting
b = a + c yields aa(1) = (a + c)a(1) and a(1)a = a(1)(a + c). Thus aa(1) = aa(1) + ca(1)
and a(1)a = a(1)a + a(1)c. It follows that ca(1) = 0 = a(1)c as required.
Example 3.1.2. (Hartwig Question 2) In the ring R of 4 × 4 matrices over a field F,
let

$$a = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad b = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \quad\text{and}\quad c = \begin{bmatrix} 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$

Then a ≤− c and b ≤− c. We show that a + b ≰− c. Now

$$aR = \begin{bmatrix} F & F & F & F \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad bR = \begin{bmatrix} 0 & 0 & 0 & 0 \\ F & F & F & F \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

and

$$Ra = \begin{bmatrix} 0 & 0 & F & 0 \\ 0 & 0 & F & 0 \\ 0 & 0 & F & 0 \\ 0 & 0 & F & 0 \end{bmatrix}, \quad Rb = \begin{bmatrix} 0 & 0 & 0 & F \\ 0 & 0 & 0 & F \\ 0 & 0 & 0 & F \\ 0 & 0 & 0 & F \end{bmatrix}.$$

So aR ∩ bR = (0) = Ra ∩ Rb. Next,

$$a + b = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

and a + b ≰− c because rank(c) − rank(a + b) = 2 − 2 = 0, while

$$\operatorname{rank}(c - (a + b)) = \operatorname{rank}\begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} = 1.$$
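The counterexample is easy to replay numerically; a minimal NumPy sketch of the three rank checks:

```python
import numpy as np

rk = np.linalg.matrix_rank
a = np.zeros((4, 4)); a[0, 2] = 1
b = np.zeros((4, 4)); b[1, 3] = 1
c = np.zeros((4, 4)); c[0, 2] = c[0, 3] = c[1, 3] = 1

assert rk(c) == rk(a) + rk(c - a)            # a <=^- c
assert rk(c) == rk(b) + rk(c - b)            # b <=^- c
assert rk(c) != rk(a + b) + rk(c - (a + b))  # but a + b is not <=^- c
```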
The following result is contained in (Lemma 1, [39]), where the author proves
the equivalence of 11 statements. However, for the sake of completeness, a direct
argument is provided.
Lemma 3.1.2. Suppose R is a regular ring and a, b ∈ R such that {a(1)} ∩ {b(1)} ≠ ∅.
Then the following are equivalent:
1. aR ⊂ bR and Ra ⊂ Rb;
2. a ≤⊕ b.
Proof. Suppose aR ⊂ bR and Ra ⊂ Rb. It follows that a = rb = bs for some
r, s ∈ R. We claim that ab(1)a is invariant under any choice of b(1). Let x, y ∈ {b(1)}
be arbitrary. Now axa = (rb)x(bs) = r(bxb)s = rbs as bxb = b. Similarly,
aya = (rb)y(bs) = r(byb)s = rbs as byb = b. Thus axa = aya for every x, y ∈ {b(1)}.
Hence ab(1)a is invariant under any choice of b(1). By assumption, {a(1)} ∩ {b(1)} ≠ ∅,
so there exists some g ∈ {a(1)} ∩ {b(1)}. Therefore ab(1)a = aga = a for all b(1).
Hence {b(1)} ⊆ {a(1)} and by Theorem 3.1.1, a ≤⊕ b.

Conversely, if a ≤⊕ b, then aR ⊂ bR and Ra ⊂ Rb follow by definition.
We now demonstrate an important relationship between {2}-inverses and
{1,2}-inverses under the direct sum partial order.
Lemma 3.1.3. Let a ∈ R where R is a regular ring. Then the following are
equivalent:
1. b is a {2}-inverse of a;
2. There exists a {1,2}-inverse c of a such that b ≤⊕ c.
Proof. Suppose b is a {2}-inverse of a for a ∈ R. For any fixed a(1), define
u = a(1)(a − aba)a(1) and c = b + u. Then aca = aba + aua = aba + aa(1)aa(1)a −
aa(1)abaa(1)a = aba + a − aba = a, and cac = (b + u)a(b + u) = bab + bau + uab + uau =
b + ba(a(1)aa(1) − a(1)abaa(1)) + (a(1)aa(1) − a(1)abaa(1))ab +
(a(1)aa(1) − a(1)abaa(1))a(a(1)aa(1) − a(1)abaa(1)) = b + baa(1) − baa(1) + a(1)ab −
a(1)ab + a(1)aa(1) − a(1)abaa(1) − a(1)abaa(1) + a(1)abaa(1) = b + a(1)(a − aba)a(1) =
b + u = c. This shows that c is a {1,2}-inverse of a.

Now we want to show that b ≤⊕ c; in other words, we will prove that bR ⊕ uR =
cR. Observe that cab = [b + a(1)(a − aba)a(1)]ab = bab + a(1)(ab − abab) = bab = b.
Therefore b ∈ cR. As c = b + u, it is clear that cR ⊆ bR + uR. As u = c − b
and b ∈ cR, uR ⊆ cR. It follows that cR = bR + uR. Now we want to show
that bR ∩ uR = (0). Let bp = uq ∈ bR ∩ uR for some p, q ∈ R. Multiplying both
sides on the left by ba yields bp = babp = bauq = ba[a(1)(a − aba)a(1)]q =
(ba − baba)a(1)q = (ba − ba)a(1)q = 0. Therefore bR ∩ uR = (0). Thus bR ⊕ uR = cR
and we have demonstrated that b ≤⊕ c.

Conversely, suppose that there exists a {1,2}-inverse c of a such that b ≤⊕ c. As
c is a {2}-inverse of a, cac = c and thus a ∈ {c(1)}. By assumption b ≤⊕ c and it
follows from Theorem 3.1.1 that {c(1)} ⊆ {b(1)}. Thus a ∈ {c(1)} ⊆ {b(1)} and it
follows that bab = b. Hence b is a {2}-inverse of a.
Lemma 3.1.4. Suppose R is a regular ring. Let y be a {2}-inverse and z be
a {1,2}-inverse of an element α in the subring fRe such that y ≤⊕ z. Then
eyf ≤⊕ ezf.

Proof. Let α = fxe ∈ fRe. Since y ≤⊕ z, yR ⊆ zR and Ry ⊆ Rz. Thus
y = rz = zs for some r, s ∈ R. It is straightforward to verify that zαy = y = yαz.
This gives (ezf)x(eyf) = (ezf)x(e(zs)f) = ez(fxe)zsf = ezsf = eyf. Similarly
(eyf)x(ezf) = eyf. Thus (eyf)R ⊆ (ezf)R and R(eyf) ⊆ R(ezf). As α =
fxe is a common {1}-inverse of y and z, it follows that (eyf)x(eyf) = eyf and
(ezf)x(ezf) = ezf, and so x is a common {1}-inverse of eyf and ezf. By Lemma
3.1.2, eyf ≤⊕ ezf.
3.2 A Characterization of the Maximal Elements

Let R be a regular ring and S be a subset of R. We define a maximal element in
C = {x ∈ S : x ≤⊕ a} as an element b ≠ a such that b ≤⊕ a and if b ≤⊕ c ≤⊕ a
then c = b or c = a.

For fixed elements a, b, c ∈ R, we give a complete description of the maximal
elements in the subring S = eRf, where e and f are idempotents given by eR =
aR ∩ cR and Rf = Ra ∩ Rb. Here, C = {s ∈ eRf : s ≤⊕ a}.

Before we prove our main result, we give two key lemmas. We assume throughout
that a ∉ S.
Lemma 3.2.1. Let R be a regular ring. Then d ∈ C is a maximal element in C
if and only if for any d′ ≤⊕ a such that dR ⊆ d′R ⊆ eR and Rd ⊆ Rd′ ⊆ Rf, we
have d = d′.

Proof. Let d be a maximal element in C. If d′ is any element in R such that d′ ≤⊕ a
and dR ⊆ d′R ⊆ eR, Rd ⊆ Rd′ ⊆ Rf, then clearly d′ ∈ eRf. As d′ ≤⊕ a, d′ ∈ C.
Then {a(1)} ⊆ {d(1)} ∩ {(d′)(1)}. Hence, d ≤⊕ d′ by Lemma 3.1.2. Then by the
maximality of d in C, d = d′.

The converse is obvious.
Lemma 3.2.2. C = {euf : u is a {2}-inverse of fa(1)e}.

Proof. Let s = etf ∈ C for some t ∈ R. Then s ≤⊕ a. By Theorem 3.1.1,
{a(1)} ⊆ {s(1)}. Therefore, we have (etf)a(1)(etf) = etf; in other words,
(etf)(fa(1)e)(etf) = etf, proving that s = etf is a {2}-inverse of fa(1)e. This
shows that s = euf for some {2}-inverse u of fa(1)e.

Conversely, consider any u ∈ {(fa(1)e)(2)} and let x = euf. We want to show
that x ≤⊕ a. Now xa(1)x = (euf)a(1)(euf) = eu(fa(1)e)uf = euf = x as
u ∈ {(fa(1)e)(2)}. Hence {a(1)} ⊆ {x(1)}. By Theorem 3.1.1, x ≤⊕ a and so
x = euf ∈ C.
Theorem 3.2.1. max C = {evf : v is a {1,2}-inverse of fa(1)e}.

Proof. Suppose x = euf ∈ C, where u ∈ {(fa(1)e)(2)}. By Lemma 3.1.3, there is a
{1,2}-inverse v ∈ eRf of fa(1)e such that euf ≤⊕ evf. Note that evf ≤⊕ a. Thus
max C ⊆ {evf : v is a {1,2}-inverse of fa(1)e} unless evf = a for every choice of v,
but this cannot happen because by hypothesis a ∉ S.

Now suppose evf, ev′f ∈ C are such that v, v′ are {1,2}-inverses of fa(1)e and
evf ≤⊕ ev′f. Therefore ev′fR = evfR ⊕ (ev′f − evf)R. Now we want to show that
ev′fR = evfR. As evf, ev′f ∈ C, evf ≤⊕ a and ev′f ≤⊕ a. Thus {a(1)} ⊆ {(evf)(1)}
and {a(1)} ⊆ {(ev′f)(1)}. So let a(1) be a common {1}-inverse of evf and ev′f.
By assumption evfR ⊆ ev′fR. As shown in Lemma 3.1.4, (ev′f)a(1)(evf) =
evf and (ev′f)a(1)(ev′f) = ev′f. Now (ev′f)R = ev′fa(1)R = ev′fa(1)eR =
ev′(fa(1)evfa(1)e)R ⊆ ev′fa(1)evfR = evfR ⊆ ev′fR. Thus ev′fR = evfR.
Similarly we can show that Rev′f = Revf.

As Rev′f = Revf, we claim that ev′f = evf. Let ev′f = revf for some r ∈ R.
Now evf = ev′fa(1)evf = (revf)a(1)evf = r(evf) = ev′f. Thus evf = ev′f.
Hence max C = {evf : v is a {1,2}-inverse of fa(1)e}.
We now provide an example to illustrate the previous theorem.
Example 3.2.1. Note that we are choosing f to be of rank two, so any maximal element will have rank at most two. Suppose

a = [ 1 0 0 0 ]
    [ 0 1 0 0 ]
    [ 0 0 1 0 ]
    [ 0 0 0 1 ].

Choose

e = [ 1  0   0  0 ]
    [ 0 1/2 1/2 0 ]
    [ 0 1/2 1/2 0 ]
    [ 0  0   0  1 ]

and

f = [ 1/2 1/4 0 0 ]
    [  1  1/2 0 0 ]
    [  0   0  0 0 ]
    [  0   0  0 1 ].

Then one choice for a(1) is a(1) = a, and

fa(1)e = [ 1/2 1/8 1/8 0 ]
         [  1  1/4 1/4 0 ]
         [  0   0   0  0 ]
         [  0   0   0  1 ].

For our choice of a {1,2}-inverse of fa(1)e, we first choose its Moore-Penrose inverse and later its {1,2,5}-inverse, known as the group inverse, as both are also {1,2}-inverses. Let v1 be the Moore-Penrose inverse of fa(1)e. Then

v1 = [ 16/45 32/45 0 0 ]
     [  4/45  8/45 0 0 ]
     [  4/45  8/45 0 0 ]
     [   0     0   0 1 ]

and

ev1f = [ 8/9 4/9 0 0 ]
       [ 2/9 1/9 0 0 ]
       [ 2/9 1/9 0 0 ]
       [  0   0  0 1 ].

Now ev1f ≤⊕ a because rank(a − ev1f) = 2 = 4 − 2 = rank(a) − rank(ev1f). Thus ev1f ∈ max C.
We now find another element of max C. The group inverse v2 of fa(1)e is

v2 = [  8/9 2/9 2/9 0 ]
     [ 16/9 4/9 4/9 0 ]
     [   0   0   0  0 ]
     [   0   0   0  1 ],

and then

ev2f = [ 2/3 1/3 0 0 ]
       [ 2/3 1/3 0 0 ]
       [ 2/3 1/3 0 0 ]
       [  0   0  0 1 ].

Now ev2f ≤⊕ a because rank(a − ev2f) = 2 = 4 − 2 = rank(a) − rank(ev2f). Thus ev2f ∈ max C.
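The example can be replayed numerically with the helpers above; the matrices are those displayed in the example, and numpy's pinv supplies the Moore-Penrose inverse v1:

a = np.eye(4)
e = np.array([[1, 0,  0,  0],
              [0, .5, .5, 0],
              [0, .5, .5, 0],
              [0, 0,  0,  1]])
f = np.array([[.5, .25, 0, 0],
              [1,  .5,  0, 0],
              [0,  0,   0, 0],
              [0,  0,   0, 1]])
g = f @ a @ e                      # fa(1)e, with a(1) = a here
v1 = np.linalg.pinv(g)             # the Moore-Penrose inverse of fa(1)e
s1 = e @ v1 @ f                    # ev1f
assert is_12_inverse(v1, g) and below_minus(s1, a)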
3.3 Applications
In this section, as an application of our main theorem on maximal elements, we derive the unique shorted operator aS of Anderson-Trapp [3] that was also studied by Mitra-Puri (see Theorem 2.1, [38]).
Throughout this section, R will denote the ring of n × n matrices over the field of complex numbers, C. For any matrix or vector u, u∗ will denote the conjugate transpose of u. In this section, S will denote the set of positive semidefinite matrices. Recall that any element a ∈ R has a unique Moore-Penrose inverse, denoted a†.
For w ∈ S and b ∈ R, x is the unique w-weighted Moore-Penrose inverse of b if x is a {1,2}-inverse of b and satisfies (3)w (wbx)∗ = wbx and (4)w (wxb)∗ = wxb.
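These four conditions transcribe directly into a numerical test; a small sketch (the function name is ours), reusing numpy as np:

def is_w_weighted_mp(x, b, w):
    one = np.allclose(b @ x @ b, b)                      # {1}-inverse of b
    two = np.allclose(x @ b @ x, x)                      # {2}-inverse of b
    c3w = np.allclose((w @ b @ x).conj().T, w @ b @ x)   # (3)_w
    c4w = np.allclose((w @ x @ b).conj().T, w @ x @ b)   # (4)_w
    return one and two and c3w and c4w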
Recall that the Loewner order ≤L on the set S of positive semidefinite matrices in R is defined as follows: for a, b ∈ S, a ≤L b if b − a ∈ S.
Suppose a ∈ S and c ∈ R. As in the previous section, eR = aR ∩ cR, e = e², and choose f = e∗. Clearly, f ∈ Ra because a is hermitian. Let CL = {s ∈ eRf ∩ S : s ≤L a} = {s ∈ eSf : s ≤L a}.
Under this terminology, the set C of the previous section becomes C = {s ∈ eSf : s ≤⊕ a}.
We assume that rank(e) ≠ rank(a); equivalently, a ∉ eSf, as shown in the remark below.
Remark 3.3.1. rank(e) = rank(a) if and only if a ∈ eSf.
Proof. Suppose rank(e) = rank(a). Then eR = aR, as eR ⊆ aR. So a = ex for some x ∈ R and, by taking conjugates, a = x∗e∗, i.e., a ∈ Re∗. Hence, a ∈ eRe∗. As a ∈ S, a ∈ S ∩ eRe∗ = eSe∗. For if exe∗ ∈ S then exe∗ = e(exe∗)e∗ ∈ eSe∗, and so S ∩ eRe∗ ⊆ eSe∗. The reverse inclusion is obvious.
Conversely, suppose a ∈ eSf. As eR = aR ∩ cR, we have e = ax, and so rank(e) ≤ rank(a). As a ∈ eSf, a = ese∗ for some s ∈ S. Therefore rank(a) ≤ rank(e). Hence, rank(e) = rank(a).
Lemma 3.3.1. Suppose a, b ∈ S. If a ≤⊕ b then a ≤L b.
Proof. Suppose a ≤⊕ b. Equivalently, (b − a) ≤⊕ b, and by Theorem 3.1.1 we know that {b(1)} ⊆ {(b − a)(1)}. Thus, b† is a {1}-inverse of (b − a). From [31], as b is positive semidefinite, b† is positive semidefinite. Thus b − a = (b − a)b†(b − a) ≥L 0. Hence (b − a) ∈ S and a ≤L b.
Theorem 3.3.1. Let a ∈ S and let f_a† be the a-weighted Moore-Penrose inverse of f. Then max C = max CL = {af_a†f}.
Proof. By Theorem 3.2.1, max C = {evf : v is a {1,2}-inverse of fa(1)e}. By assumption, e ∈ aR and so e = ax for some x ∈ R. By taking conjugates, e∗ = x∗a, as a ∈ S. In addition, as f ∈ Ra, f = ya for some y ∈ R. This yields that fa(1)e = yaa(1)ax = yax, and thus fa(1)e is independent of the choice of a(1). We may then choose the Moore-Penrose inverse a† for a(1).
Next, we want to show that evf is independent of the choice of the {1,2}-inverse of fa†e. Note that fa†e = e∗a†e is positive semidefinite, as the Moore-Penrose inverse of a positive semidefinite element is positive semidefinite [31]. As a ∈ S, we can write a = zz∗ for some z ∈ R. Now fR = yaR = yaa†aR = fa†aR = fa†R = fzz∗R = fzR = (fz)(fz)∗R = fzz∗f∗R = fa†eR. Similarly, Re = Rfa†e. It follows that f = fa†ep and e = qfa†e for some p, q ∈ R. Consider an element evf ∈ max C. Then evf = q(fa†e)v(fa†e)p = qfa†ep, showing that evf is independent of the choice of the {1,2}-inverse v of fa†e. Thus max C is a singleton set consisting of the element e(fa†e)†f. Since a ∈ S, a† ∈ S, and hence e(fa†e)†f = e(e∗a†e)†f ∈ S.
Next, we proceed to show that max C = {af_a†f} also. Recall that af_a†f is hermitian, and so af_a†f = (af_a†f)∗ = f∗(f_a†)∗a∗ = (f_a†f)∗a. Since f_a†f is an idempotent, we get af_a†f = a(f_a†f)(f_a†f) = (f_a†f)∗a(f_a†f), and thus af_a†f ∈ S.
We now prove that af_a†f ≤⊕ a. Let a(1) be an arbitrary {1}-inverse of a. Then (af_a†f)a(1)(af_a†f) = (af_a†)(ya)a(1)a(f_a†f) = (af_a†y)aa(1)a(f_a†f) = af_a†(ya)f_a†f = af_a†ff_a†f = af_a†f. Hence {a(1)} ⊆ {(af_a†f)(1)}. Consequently, by Theorem 3.1.1, af_a†f ≤⊕ a, which gives af_a†f ∈ C. Furthermore, by Lemma 3.3.1, af_a†f ≤⊕ a gives af_a†f ≤L a, and hence af_a†f ∈ CL.
Finally, we show that for every d ∈ CL, d ≤L af_a†f. As d ∈ eSf ⊆ Rf, write d = uf for some u ∈ R. Then df_a†f = uff_a†f = uf = d = (f_a†f)∗d(f_a†f), as d is hermitian. Now consider af_a†f − d = (f_a†f)∗a(f_a†f) − (f_a†f)∗d(f_a†f) = (f_a†f)∗(a − d)(f_a†f), which is positive semidefinite; thus af_a†f − d ∈ S. Hence d ≤L af_a†f.
Thus af_a†f is the unique maximal element in CL provided af_a†f ≠ a. We have shown above that af_a†f ∈ CL and thus af_a†f ∈ eSf. But by assumption a ∉ eSf. So af_a†f ≠ a. Therefore, af_a†f is the unique maximal element in CL, and it also belongs to C, as we have already proven that af_a†f ≤⊕ a.
Now, because e(fa†e)†f is the unique maximal element in C and af_a†f ∈ C, we have af_a†f ≤⊕ e(fa†e)†f. By Lemma 3.3.1, af_a†f ≤L e(fa†e)†f, as e(fa†e)†f ∈ CL. It has been shown above that for every element d ∈ CL, d ≤L af_a†f, and thus af_a†f = e(fa†e)†f. Hence, max C = max CL = {af_a†f}, as desired.
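In matrix terms the proof is constructive: with f = e∗ and a† in place of a(1), the unique maximal element is e(fa†e)†f. A hedged numpy sketch of this recipe (the function name max_element is ours):

import numpy as np

def max_element(a, e):
    # e (f a† e)† f with f = e*; by Theorem 3.3.1 this equals a f_a† f,
    # the unique maximal element of C (and of C_L).
    f = e.conj().T
    g = f @ np.linalg.pinv(a) @ e
    return e @ np.linalg.pinv(g) @ f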
The following examples demonstrate the result proven in the previous theorem, i.e., af_a†f = e(fa†e)†f, and so max C = max CL = {af_a†f}. Furthermore, max C agrees with the formula given by Anderson-Trapp for computing the shorted operator aS when we are given the impedance matrix a.
The Anderson-Trapp formula states that if a is the n × n impedance matrix, then the shorted operator of a with respect to the k-dimensional subspace S (shorting n − k ports) is given by

aS = [ a11 − a12 a22† a21   0 ]
     [          0           0 ],

where a is partitioned as

a = [ a11 a12 ]
    [ a21 a22 ]

such that a11 is a k × k matrix. We show that the maximal element af_a†f that we obtain is permutation equivalent to aS, i.e., P(af_a†f)P^T = aS for some permutation matrix P.
Example 3.3.1. Let

e = [ 1/2  0  1/2  0 ]
    [  0   0   0   0 ]
    [ 1/2  0  1/2  0 ]
    [  0   0   0   1 ]

and then f = e∗ = e, as e is hermitian. Suppose

a = [ 1 0 1 0 ]
    [ 0 1 0 0 ]
    [ 1 0 1 0 ]
    [ 0 0 0 1 ].

Then one may check that f_a† = f, so that

af_a†f = [ 1 0 1 0 ]
         [ 0 0 0 0 ]
         [ 1 0 1 0 ]
         [ 0 0 0 1 ].

We now show that af_a†f = e(fa†e)†f. Now,

a† = [ 1/4  0  1/4  0 ]
     [  0   1   0   0 ]
     [ 1/4  0  1/4  0 ]
     [  0   0   0   1 ]

and

fa†e = [ 1/4  0  1/4  0 ]
       [  0   0   0   0 ]
       [ 1/4  0  1/4  0 ]
       [  0   0   0   1 ],

so that

(fa†e)† = [ 1 0 1 0 ]
          [ 0 0 0 0 ]
          [ 1 0 1 0 ]
          [ 0 0 0 1 ].

Thus

e(fa†e)†f = [ 1 0 1 0 ]
            [ 0 0 0 0 ]
            [ 1 0 1 0 ]
            [ 0 0 0 1 ]  = af_a†f,

as proven in the theorem. We may verify that af_a†f ≤⊕ a. This follows from rank(a) − rank(af_a†f) = 3 − 2 = 1 = rank(a − af_a†f). We know then that af_a†f ≤L a. Thus max C = max CL = {af_a†f}.
We now compute the shorted operator as given by Anderson-Trapp. We partition a as follows, with a11 the upper-left 3 × 3 block:

a = [ 1 0 1 | 0 ]
    [ 0 1 0 | 0 ]
    [ 1 0 1 | 0 ]
    [-------+---]
    [ 0 0 0 | 1 ].

Then

aS = [ a11 − a12 a22† a21   0 ]  =  [ 1 0 1 0 ]
     [          0           0 ]     [ 0 1 0 0 ]
                                    [ 1 0 1 0 ]
                                    [ 0 0 0 0 ].

Now for

P = [ 1 0 0 0 ]
    [ 0 0 0 1 ]
    [ 0 0 1 0 ]
    [ 0 1 0 0 ],

we have P(af_a†f)P^T = aS.

Example 3.3.2. Let

e = [ 1 1 0 0 ]
    [ 0 0 0 0 ]
    [ 0 0 0 0 ]
    [ 0 0 0 0 ]

and then

f = e∗ = [ 1 0 0 0 ]
         [ 1 0 0 0 ]
         [ 0 0 0 0 ]
         [ 0 0 0 0 ].

Suppose

a = [ 1 0 0 0 ]
    [ 0 1 0 0 ]
    [ 0 0 2 2 ]
    [ 0 0 2 2 ].

Then

f_a† = [ 1/2 1/2 0 0 ]
       [  0   0  0 0 ]
       [  0   0  0 0 ]
       [  0   0  0 0 ],

so that

af_a†f = [ 1 0 0 0 ]
         [ 0 0 0 0 ]
         [ 0 0 0 0 ]
         [ 0 0 0 0 ].

Now

a† = [ 1 0  0   0  ]
     [ 0 1  0   0  ]
     [ 0 0 1/8 1/8 ]
     [ 0 0 1/8 1/8 ]

and

fa†e = [ 1 1 0 0 ]
       [ 1 1 0 0 ]
       [ 0 0 0 0 ]
       [ 0 0 0 0 ],

so that

(fa†e)† = [ 1/4 1/4 0 0 ]
          [ 1/4 1/4 0 0 ]
          [  0   0  0 0 ]
          [  0   0  0 0 ].

Thus

e(fa†e)†f = [ 1 0 0 0 ]
            [ 0 0 0 0 ]
            [ 0 0 0 0 ]
            [ 0 0 0 0 ]  = af_a†f.

Now af_a†f ≤⊕ a, as rank(a) − rank(af_a†f) = 3 − 1 = 2 = rank(a − af_a†f). We know then that af_a†f ≤L a. Thus max C = max CL = {af_a†f}.
We now compute the shorted operator as given by Anderson-Trapp. We partition a as follows, with a11 the upper-left 1 × 1 block:

a = [ 1 | 0 0 0 ]
    [---+-------]
    [ 0 | 1 0 0 ]
    [ 0 | 0 2 2 ]
    [ 0 | 0 2 2 ].

Then

aS = [ a11 − a12 a22† a21   0 ]  =  [ 1 0 0 0 ]
     [          0           0 ]     [ 0 0 0 0 ]
                                    [ 0 0 0 0 ]
                                    [ 0 0 0 0 ].

In this case, the permutation matrix is just the identity matrix and af_a†f = aS.
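Both the theorem and the Anderson-Trapp formula can be checked mechanically. The sketch below (continuing the numpy helpers above) transcribes the shorted-operator formula as a padded Schur complement and then verifies Example 3.3.1, including the permutation equivalence:

def shorted(a, k):
    # Anderson-Trapp: short the last n-k ports of the impedance matrix a,
    # i.e., pad the Schur complement a11 - a12 a22† a21 with zeros.
    a11, a12, a21, a22 = a[:k, :k], a[:k, k:], a[k:, :k], a[k:, k:]
    s = np.zeros_like(a)
    s[:k, :k] = a11 - a12 @ np.linalg.pinv(a22) @ a21
    return s

a = np.array([[1., 0, 1, 0],
              [0,  1, 0, 0],
              [1,  0, 1, 0],
              [0,  0, 0, 1]])
e = np.array([[.5, 0, .5, 0],
              [0., 0, 0,  0],
              [.5, 0, .5, 0],
              [0., 0, 0,  1]])
P = np.array([[1., 0, 0, 0],
              [0,  0, 0, 1],
              [0,  0, 1, 0],
              [0,  1, 0, 0]])
m = max_element(a, e)            # af_a†f = e(fa†e)†f in Example 3.3.1
assert below_minus(m, a)         # af_a†f lies below a in the minus order
assert np.allclose(P @ m @ P.T, shorted(a, 3))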
Chapter 4
The Parallel Sum of Two Matrices Over a Regular Ring
In this chapter, our focus turns to the parallel sum. The concept of the parallel sum comes from electrical engineering. Two resistors may be wired in series or in parallel. If two resistors with resistances r1 and r2 are wired in series, then the total resistance is r1 + r2. If they are wired in parallel, then the total resistance is r1r2/(r1 + r2), unless r1 = r2 = 0, in which case the parallel sum is 0. In [2], Anderson and Duffin generalized the scalar case to that of positive semidefinite matrices. The motivation to solve this problem arises from the computation of the parallel sum of two impedance matrices. They expressed the parallel sum as p(A, B) = A(A + B)†B, where A, B are positive semidefinite matrices and (A + B)† is the Moore-Penrose inverse of (A + B).
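For 1 × 1 impedance matrices the Anderson-Duffin formula reduces to the familiar resistor rule; a quick numpy check with arbitrarily chosen values:

import numpy as np

r1, r2 = np.array([[2.0]]), np.array([[3.0]])
p = r1 @ np.linalg.pinv(r1 + r2) @ r2                   # p(A, B) = A (A+B)† B
assert np.isclose(p[0, 0], (2.0 * 3.0) / (2.0 + 3.0))   # 1.2, two resistors in parallel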
In [37], Mitra-Puri generalized the concept of the parallel sum to arbitrary matrices over the complex numbers and introduced the more general notion of parallel summability: A and B are parallel summable if A(A + B)(1)B is invariant under the choice of the {1}-inverse (A + B)(1), and then p(A, B) = A(A + B)(1)B is the parallel sum of A and B.
In [36], Mitra-Odell introduced the problem that is the consideration of this chapter: let A, B be matrices of order m × n and let there exist a matrix C such that {C(1)} = {A(1)} + {B(1)}. Then A and B are parallel summable and C = p(A, B). Hartwig [17] later generalized the result of Mitra-Odell to matrices over a prime regular ring with identity. His proof closely followed that of Mitra-Odell.
The goal of this chapter is to come closer to solving the problem in matrices over a commutative regular ring and, more generally, in matrices over a regular ring.
4.1 Commutative Regular Ring
We first consider the case of a commutative regular ring. The following lemma is straightforward and well known. It is proven here for completeness.
Lemma 4.1.1. Suppose R is a commutative ring and e, f ∈ R are idempotent. Then Re + Rf = R(e + f(1 − e)).
Proof. Suppose x ∈ R(e + f − fe). Then x = u(e + f − fe) for some u ∈ R. So x = ue + uf − ufe = (ue − ufe) + uf = (u − uf)e + uf ∈ Re + Rf.
Conversely, suppose x ∈ Re. Then x = ve = ve + vef − veef = vee + vef − vefe = ve(e + f − fe) ∈ R(e + f − fe), using the commutativity fe = ef. Thus Re ⊆ R(e + f(1 − e)). Similarly, Rf ⊆ R(e + f(1 − e)). Thus Re + Rf = R(e + f(1 − e)).
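Since the lemma involves only finitely many conditions in a finite ring, it can be checked exhaustively; a brute-force sketch over Z/6Z, which is commutative (and von Neumann regular, 6 being squarefree):

n = 6
idempotents = [x for x in range(n) if x * x % n == x]      # 0, 1, 3, 4 in Z/6Z

def ideal(g):
    return {g * r % n for r in range(n)}                   # the ideal Rg

for e in idempotents:
    for f in idempotents:
        lhs = {(x + y) % n for x in ideal(e) for y in ideal(f)}   # Re + Rf
        rhs = ideal((e + f * (1 - e)) % n)                        # R(e + f(1-e))
        assert lhs == rhs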
Recall that a ring has uniquely generated principal right ideals if any two elements that generate the same principal right ideal are right associates; that is, for all a, b in the ring R, aR = bR implies a = bu for some unit u ∈ R.
Theorem 4.1.1. Suppose R is a commutative regular ring and a, b, c ∈ R are such that {c−} = {a−} + {b−}. Then c = abu, where u is a unit.
Proof. By Lemma 1.3.1 and the commutativity of R, we have c− + R(1 − cc−) = a− + R(1 − aa−) + b− + R(1 − bb−). As {c−} = {a−} + {b−}, it follows that R(1 − cc−) = R(1 − aa−) + R(1 − bb−) = R[1 − aa− + (1 − bb−)aa−] = R(1 − bb−aa−) = R(1 − aba−b−), by the previous lemma and commutativity. As R(1 − cc−) = R(1 − aba−b−), it follows that Rc = Rab. Now, given that R is a commutative regular ring, we know by Theorem 1.4.2 that it is unit-regular. As R is unit-regular and Rc = Rab, c = abu, where u is a unit, by Theorem 1.4.3.
Theorem 4.1.2. Suppose R is a commutative regular ring and a, b, c ∈ R are such that {c−} = {a−} + {b−}. Then c = ab(a + b)(1,2).
Proof. Suppose that {c−} = {a−} + {b−}. Multiplying both sides by c² yields c²(a− + b−) = c²c− = c. Again, multiplying both sides by a²b² yields c²a²b²(a− + b−) = ca²b². Thus c²ab(a + b) = ca²b². Multiplying both sides by (a−b−) gives us c²ab(a−b−)(a + b) = ca²b²(a−b−).
We want to show that caa− = c and cbb− = c. From Lemma 1.3.1 and {c−} = {a−} + {b−}, we have c− + (1 − c−c)R + R(1 − cc−) = a− + (1 − a−a)R + R(1 − aa−) + b− + (1 − b−b)R + R(1 − bb−). Multiplying on the left and right by c yields c = ca−c + c(1 − a−a)Rc + cR(1 − aa−)c + cb−c + c(1 − b−b)Rc + cR(1 − bb−)c. As c = c(a− + b−)c = ca−c + cb−c, we have 0 = c(1 − a−a)Rc + cR(1 − aa−)c + c(1 − b−b)Rc + cR(1 − bb−)c. Now, cR(1 − aa−)c = 0, so (1 − aa−)cR(1 − aa−)c = 0, and as R has no nonzero nilpotent left or right ideals, (1 − aa−)c = 0. Thus c = caa−, and similarly c = cbb−. Therefore c²(a + b) = cab.
Hence c²(a + b)(a−b−) = cab(a−b−) = c. So a {1}-inverse of c is (a + b)a−b−, and so c− = a−b−(a + b). As R is commutative, we may choose c = ab(a + b)(1,2), as c = cc−c = ab(a + b)(1,2)[a−b−(a + b)]ab(a + b)(1,2) = ab(a + b)(1,2).
We are unable to show that c = p(a, b), that is, invariance under {1}-inverses of (a + b); but we have shown that it is invariant under {1,2}-inverses. This problem remains open.
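The theorem can likewise be probed exhaustively in Z/30Z, a commutative regular ring (30 is squarefree). The following brute-force sketch, which is ours and purely illustrative, searches for triples with {c−} = {a−} + {b−} and confirms c = ab(a + b)(1,2) in every case found:

n = 30
inv1 = [{x for x in range(n) if a * x * a % n == a} for a in range(n)]

def inv12(a):
    # {1,2}-inverses of a in Z/30Z
    return {x for x in inv1[a] if x * a * x % n == x}

for a in range(n):
    for b in range(n):
        sumset = {(x + y) % n for x in inv1[a] for y in inv1[b]}
        for c in range(n):
            if inv1[c] == sumset:                         # {c-} = {a-} + {b-}
                assert all(c == a * b * g % n for g in inv12((a + b) % n))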
4.2 Regular Ring
We now present our results for regular rings and matrices over a regular ring. The proof of the following theorem can be given by suitably modifying the proofs of Mitra-Odell [36] and Hartwig [17], but it is given here for completeness.
Theorem 4.2.1. Suppose R is a regular ring and a, b, c ∈ R are such that {c−} = {a−} + {b−}, bR ⊆ cR, and Ra ⊆ Rc. Then a and b are parallel summable.
Proof. By assumption, {c−} = {a−} + {b−}. As c− + (1 − c−c)R + R(1 − cc−) = a− + (1 − a−a)R + R(1 − aa−) + b− + (1 − b−b)R + R(1 − bb−), it follows that (1 − c−c)R + R(1 − cc−) = (1 − a−a)R + R(1 − aa−) + (1 − b−b)R + R(1 − bb−). Now aR ∩ bR = eR, where e is an idempotent with R(1 − aa−) + R(1 − bb−) = R(1 − e). Similarly, Ra ∩ Rb = Rf, where f is an idempotent with (1 − a−a)R + (1 − b−b)R = (1 − f)R.
Indeed, choose e = b(1 − m−m)b− and f = b−(1 − nn−)b, where m = (1 − aa−)b and n = b(1 − a−a). That eR ⊆ bR and eR ⊆ aR is clear. Conversely, let y = ax1 = bx2 ∈ aR ∩ bR. Then mx2 = (1 − aa−)ax1 = 0, so ey = y − bm−mx2 = y and y ∈ eR. Similarly, Ra ∩ Rb = Rf.
Therefore (1 − c−c)R + R(1 − cc−) = (1 − f)R + R(1 − e). Multiplying on both sides by c gives c(1 − f)Rc + cR(1 − e)c = 0, so c(1 − f)Rc = 0 and cR(1 − e)c = 0. From c(1 − f)Rc = 0 we get c(1 − f)Rc(1 − f) = 0, and hence c(1 − f) = 0, as R has no nonzero nilpotent left or right ideals. Hence c = cf. Similarly, cR(1 − e)c = 0 gives (1 − e)cR(1 − e)c = 0, so (1 − e)c = 0 and c = ec. So c = cf = ec. By assumption, bR ⊆ cR and Ra ⊆ Rc. So cR ⊆ eR = aR ∩ bR ⊆ bR ⊆ cR, and it follows that eR = cR. Also, Rc ⊆ Rf = Ra ∩ Rb ⊆ Ra ⊆ Rc, so Rf = Rc. Thus aR ∩ bR = eR = cR and Ra ∩ Rb = Rf = Rc.
As R is regular, cR = eR implies that e = cc= for some {1}-inverse c=, perhaps different from those used previously. Likewise, f = c≡c for some other {1}-inverse c≡. Write c= = a= + b= for some {1}-inverses a=, b=, and note that c ∈ Rb and c ∈ Ra (as Rc = Rf = Ra ∩ Rb). Then b − bm−m = b(1 − m−m)b−b = eb = cc=b = c(a= + b=)b = ca=b + cb=b = ca=b + c = ca=b + ca=a = ca=(a + b). Also, we have bm−(1 − aa−)(a + b) = bm−m. Combining these yields [ca= + bm−(1 − aa−)](a + b) = b(1 − m−m) + bm−m = b. Hence Rb ⊆ R(a + b). Similarly, writing c≡ = a≡ + b≡, one obtains (a + b)[a≡c + (1 − a−a)n−b] = b, and thus bR ⊆ (a + b)R. In the same way, Ra ⊆ R(a + b) and aR ⊆ (a + b)R. Hence aR + bR = (a + b)R and Ra + Rb = R(a + b).
By Theorem 1.4.1, a(a + b)−b and b(a + b)−a are invariant under the choice of (a + b)(1), and a(a + b)−b = b(a + b)−a. Thus p(a, b) = p(b, a), and the parallel sum is well-defined.
It is still an open problem to show that c = p(a, b) under the conditions bR ⊆ cR and Ra ⊆ Rc.
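Over the complex matrices, the invariance that defines parallel summability can be sampled directly: the standard parametrization G = M† + V − M†MVMM† (cf. [45]) produces {1}-inverses of M as V varies, so for singular positive semidefinite summands one can vary V and watch A(A + B)(1)B stay fixed. A small sketch:

import numpy as np

rng = np.random.default_rng(0)
z1, z2 = rng.standard_normal((4, 1)), rng.standard_normal((4, 2))
A, B = z1 @ z1.T, z2 @ z2.T            # rank-deficient positive semidefinite
M = A + B                              # singular, so {1}-inverses abound
Mp = np.linalg.pinv(M)
for _ in range(5):
    V = rng.standard_normal(M.shape)
    G = Mp + V - Mp @ M @ V @ M @ Mp   # a {1}-inverse of M: M G M = M
    assert np.allclose(A @ G @ B, A @ Mp @ B)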
It is known that if R is regular then so is the set Rm×n of m × n matrices over R, in the sense that for C ∈ Rm×n, CXC = C always has a solution X = C− ∈ Rn×m [17]. For convenience, let Γ = Rn×m. In the previous theorem, R is a regular ring. However, in the following analogous theorem, the result is stated in terms of m × n matrices over a regular ring R.
Theorem 4.2.2. Let Rm×n be the set of m × n matrices over R, where R is a regular ring. Suppose that A, B, C ∈ Rm×n, where {C−} = {A−} + {B−}, BΓ ⊆ CΓ and ΓA ⊆ ΓC. Then A and B are parallel summable.
Theorem 4.2.3. Suppose R is a regular ring, e1, e2 ∈ R are idempotents that are parallel summable with p(e1, e2) = p(e2, e1), and 2 is invertible in R. Then the harmonic mean 2p(e1, e2) of the two idempotents is idempotent.
Proof. Let e1² = e1 and e2² = e2. As p(e1, e2) = p(e2, e1), we may set p = e1(e1 + e2)(1)e2 = e2(e1 + e2)(1)e1, where we have invariance by the definition of the parallel sum. We want to show that pR = e1R ∩ e2R (similarly, Re1 ∩ Re2 = Rp). Now pR ⊆ e1R ∩ e2R by the two expressions for p.
We need to show that e1R ∩ e2R ⊆ pR. Let x ∈ e1R ∩ e2R and write x = e1z1 = e2z2 for some z1, z2 ∈ R. Then e1x = x = e2x, so (e1 + e2)x = 2x. Note that (e1 + e2)(e1 + e2)(1)(e1 + e2) = e1 + e2, so (e1 + e2)(e1 + e2)(1)(e1 + e2)x = (e1 + e2)x, that is, 2(e1 + e2)(e1 + e2)(1)x = 2x. As 2 is invertible, x = (e1 + e2)(e1 + e2)(1)x = e1(e1 + e2)(1)x + e2(e1 + e2)(1)x. Then x = e1(e1 + e2)(1)e2z2 + e2(e1 + e2)(1)e1z1 = pz2 + pz1 = p(z1 + z2) ∈ pR. Hence pR = e1R ∩ e2R.
Now we will show that 2p² = p. Since p ∈ pR = e1R ∩ e2R, we have e1p = e2p = p, and therefore 2p = (e1 + e2)p. So
2p² = [e1(e1 + e2)(1)e2]2p = [e1(e1 + e2)(1)e2](e1 + e2)p = e1(e1 + e2)(1)e2e1p + e1(e1 + e2)(1)e2e2p = e2(e1 + e2)(1)e1e1p + e1(e1 + e2)(1)e2p = e2(e1 + e2)(1)p + e1(e1 + e2)(1)p = (e1 + e2)(e1 + e2)(1)p = p,
where the first summand has been rewritten using e1(e1 + e2)(1)e2 = e2(e1 + e2)(1)e1, and the last equality holds because p ∈ e1R ∩ e2R.
Now, for the harmonic mean, set p′ = 2p = 2e1(e1 + e2)(1)e2. Then (p′)² = 4p² = 2(2p²) = 2p = p′, as required.
OPEN QUESTIONS
Question 1. Characterize nonnegative matrices A dominated by a Drazin-monotone matrix B under the minus partial order.
Question 2. Given a regular ring R such that a, b, c ∈ R where {c−} = {a−} + {b−}, is it the case that a and b are parallel summable and c = p(a, b)?
Bibliography
[1] W. N. Anderson Jr., Shorted operators, SIAM J. Appl. Math. 20 (1971), 522-525.
[2] W. N. Anderson, R. J. Duffin, Series and parallel addition of matrices, J. Math.
Anal. Appl. 26 (1969), 576-594.
[3] W. N. Anderson Jr., G. E. Trapp, Shorted operators II, SIAM J. Appl. Math.
28 (1975), 60-71.
[4] R. B. Bapat, S. K. Jain, L. E. Snyder, Nonnegative idempotent matrices and
minus partial order, Linear Algebra Appl. 261 (1997) 143–154.
[5] A. Ben-Israel, T. N. E. Greville, Generalized Inverses: Theory and Applications,
second ed., Springer-Verlag, Berlin, 2002.
[6] A. Berman and R. J. Plemmons, Matrix Group Monotonicity, Proc. of the
Amer. Math. Soc. 46 (3) (1974) 355-359.
[7] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical
Sciences, first ed., SIAM, Philadelphia, 1994.
[8] P. B. Bhattacharya, S. K. Jain, and S. R. Nagpaul, Basic Abstract Algebra.
New York: Cambridge Univ. Press, 1986.
[9] B. Blackwood, S. K. Jain, Nonnegative group-monotone matrices and the minus
partial order, Linear Algebra and its Applications, accepted for publication.
[10] B. Blackwood, S. K. Jain, The parallel sum of two matrices over a regular ring,
to be submitted.
[11] B. Blackwood, S. K. Jain, K. M. Prasad and A. Srivastava, Shorted operators
relative to a partial order on a regular ring, submitted to Communications in
Algebra.
[12] M. P. Drazin, Natural structures on semigroups with involution, Bull. Amer.
Math. Soc. 84 (1978), 139-141.
[13] P. Flor, On groups of nonnegative matrices, Compositio Math. 21 (1969) 376–382.
[14] G. Frobenius, Über Matrizen aus nicht negativen Elementen, S.-B. Preuss.
Akad. Wiss. 23 (1912), 456-477.
[15] L. Gillman, M. Henriksen, Some remarks about elementary divisor rings, Trans.
Amer. Math. Soc. 82 (1956) 362–365.
[16] K. R. Goodearl, von Neumann Regular Rings, second ed., Krieger Publishing
Company, Malabar, Florida, 1991.
[17] R. E. Hartwig, A remark on the characterization of the parallel sum of two
matrices, Linear and Multilinear Algebra 22 (1987), no. 2, 193–197.
[18] R. E. Hartwig, Block generalized inverses. Arch. Rational Mech. Anal. 61
(1976), no. 3, 197–251.
[19] R. E. Hartwig, How to partially order regular elements, Math. Japon. 25 (1980)
1-13.
[20] R. E. Hartwig, 1-2 inverses and the invariance of BA+C, Linear Algebra Appl.
11 (1975), 271-275.
[21] R. E. Hartwig, J. Luh, On finite regular rings, Pacific J. Math. 69 (1977), no.
1, 73–95.
[22] R. E. Hartwig, J. M. Shoaf, Invariance, group inverses and parallel sums, Rev.
Roumaine Math. Pures Appl. 25 (1980), no. 1, 33–42.
[23] A. Herrero, A. Ramirez, N. Thome, An algorithm to check the nonnegativity
of singular systems, Appl. Math. Comput. 189 (2007), no. 1, 355–365.
[24] S.K. Jain, Linear systems having nonnegative best approximate solutions - a
survey, in: Algebra and its Applications, Lecture Notes in Pure and Applied
Mathematics 91, Dekker, 1984, pp. 99–132.
[25] S.K. Jain, E.K. Kwak, V.K. Goel, Decomposition of nonnegative group-monotone matrices, Trans. Amer. Math. Soc. 257 (2) (1980) 371–385.
[26] S. K. Jain, S. K. Mitra and H. J. Werner, Extensions of G-based matrix partial
orders, SIAM J. Matrix Anal. Appl. 17 (1996), 834–850.
[27] S. K. Jain and K. M. Prasad, Right-Left Symmetry of aR ⊕ bR = (a + b)R in
Regular Rings, J. Pure and Applied Algebra, 133 (1998) 141-142.
[28] S.K. Jain, J. Tynan, Nonnegative matrices A with AA# ≥ 0, Linear Algebra
Appl. 379 (2004) 381-394.
[29] T. Y. Lam, A First Course in Noncommutative Rings, second ed., Springer-Verlag, 2001.
[30] T. Y. Lam, Lectures on Modules and Rings, first ed., Springer-Verlag, 1999.
[31] T. O. Lewis and T. G. Newman, Pseudoinverses of positive semidefinite matrices, SIAM J. Appl. Math., 16 (1968) 701-703.
[32] K. Loewner, Über monotone matrixfunktionen, Math. Zeitschrift 38 (1934),
177–216.
[33] G. Marks, A criterion for unit-regularity, Acta Math. Hungar. 111 (2006), no.
4, 311–312.
[34] S.K. Mitra, Matrix partial orders through generalized inverses: Unified theory,
Linear Algebra Appl. 148 (1991) 237–263.
[35] S. K. Mitra, The minus partial order and shorted matrix, Linear Algebra Appl.
81 (1986), 207-236.
[36] S.K. Mitra, P.L. Odell, On parallel summability of matrices, Linear Algebra
Appl. 74 (1986) 239–255.
[37] S. K. Mitra and M. L. Puri, On parallel sum and difference of matrices, J.
Math. Anal. Appl. 44 (1973), 92-97.
[38] S. K. Mitra and M. L. Puri, Shorted operators and generalized inverses of
matrices, Linear Algebra Appl. 25 (1979), 45–56.
[39] H. A. Mitsch, A Natural Partial Order for Semigroups, Proc. Amer. Math. Soc. 97 (3)
(1986), 384–388.
[40] E. H. Moore, On the Reciprocal of the General Algebraic Matrix, Bulletin of
the American Mathematical Society 26 (1920) 394-395.
[41] K. S. S. Nambooripad, The Natural Partial Order on a Regular Semigroup,
Proc. Edinburgh Math. Soc. 23 (1980) 249-260.
[42] R. Penrose, A Generalized Inverse for Matrices, Proceedings of the Cambridge
Philosophical Society 51 (1955) 406-413.
[43] O. Perron, Zur Theorie der Matrizen, Math. Ann. 64 (1907) 248-263.
[44] R. J. Plemmons and R. E. Cline, The Generalized Inverse of a Nonnegative
Matrix, Proc. of the Amer. Math. Soc. 31 (1) (1972) 46-50.
[45] C.R. Rao, S.K. Mitra, Generalized Inverse of Matrices and Its Applications,
first ed., Wiley, New York, 1971.