Bose, R. C. and J. N. Srivastava (1963). "Multidimensional partially balanced designs and their analysis, with applications to partially balanced factorial fractions."

UNIVERSITY OF NORTH CAROLINA
Department of Statistics
Chapel Hill, N. C.
MULTIDIMENSIONAL PARTIALLY BALANCED DESIGNS AND THEIR ANALYSIS,
WITH APPLICATIONS TO PARTIALLY BALANCED FACTORIAL FRACTIONS
by
R. C. Bose and J. N. Srivastava
October 1963
The purpose of this paper is threefold. The first is to introduce a class of multidimensional designs (under the model of additivity of factorial effects) which involve partial balance. The next purpose is to examine closely the pattern in the matrices which one has to invert in order to carry out the analysis. The third aim is to apply the theory so developed to the problem of analysing irregular fractions of certain kinds.
This research was supported by the Mathematics Division of the Air Force
Office of Scientific Research.
Institute of Statistics
Mimeo Series No. 376
MULTIDIMENSIONAL PARTIALLY BALANCED DESIGNS AND THEIR ANALYSIS,
WITH APPLICATIONS TO PARTIALLY BALANCED FACTORIAL FRACTIONS
by
R. C. Bose and J. N. Srivastava

1. Introduction and Summary.
The use of multidimensional designs like the Latin square, Graeco-Latin square, and Youden square is now well established. A good discussion is available in [5, 6, 12, 14, 15]. Consider a Latin square: we have three factors, (i) row blocks, (ii) column blocks, and (iii) treatments. This is a special kind of 3-dimensional (i.e. 3-factor) design. Notice that if the number of rows, columns or treatments is t, we take only t^2 observations and not t^3, as one would get by trying all combinations of rows, columns and treatments, if that were possible. A structurally similar design would be obtained if the factors were (i) blocks, (ii) varieties of corn, (iii) insecticides, each having t levels (i.e. being t in number), provided one takes the same t^2 combinations of factors as in the above design. The ordinary designs involving blocks and treatments may be called 2-dimensional. Designs of three and higher dimensions have been discussed in detail in [7, 8, 9, 10], both when the effects of the various factors are assumed to be additive, and when interaction is present between various pairs of factors.
The purpose of this paper is threefold. The first is to introduce a class of multidimensional designs (under the model of additivity of factorial effects) which involve partial balance. Such designs would be useful for economising on the number of observations to be taken, and also for many kinds of special experimental situations. However, we shall not be able to consider here the important problem of constructing such designs, and shall confine ourselves to their analysis.

The next purpose is to examine closely the pattern in the matrices which one has to invert in order to carry out the analysis. This leads us to some powerful methods for inverting such patterned matrices.

The third aim is to apply the theory so developed to the problem of analysing irregular fractions of certain kinds. These include (i) balanced fractions, which are essentially partially balanced arrays (of strength 4, if one is interested in all effects involving two or a lesser number of factors), as defined by Chakravarti [4]; and (ii) certain more general fractions, termed here partially balanced. In fact, now that a quick method of analysis becomes available, the problem of examining the properties of newly constructed (partially balanced) fractions becomes much easier. This in turn would greatly help in the construction of good economic fractions.

Much of the work in this paper was first discussed in [13].
2. Partially balanced association schemes.

Corresponding to an ordinary PBIB design, one has an association scheme, the definition of which can be found in Bose and Mesner [3]. Given a set of v objects, a relation satisfying the following conditions is said to be an association scheme with m classes:

C(a) Any two distinct objects are either first, second, ..., or m-th associates, the relation of association being symmetrical; i.e. if the object α is the i-th associate of the object β, then β is the i-th associate of α.

C(b) Each object α has n_i i-th associates, the number n_i being independent of α.

C(c) If any two objects α and β are i-th associates, then the number of objects which are the j-th associates of α and k-th associates of β is p^i_jk, and is independent of the pair α, β with which we start, so long as α and β are i-th associates.
We consider a generalisation of the above scheme.

Definition 2.1. Suppose there are m sets of objects S_1, S_2, ..., S_m, the objects in the i-th set S_i being denoted by x_i1, x_i2, ..., x_in_i. Then the sets (S_1, S_2, ..., S_m) will be said to have a multidimensional partially balanced association if the following conditions are satisfied:

C_m(i) With respect to any x_ia ∈ S_i, the objects of S_j can be divided into n_ij disjoint classes (n_ij > 0), where each element of the α-th class is the α-th associate of x_ia in S_j. The number of objects in the α-th associate class is n^α_ij (i and j may take any value between 1 and m, and may in particular be identical). The number n_ij is independent of the particular object chosen in S_i, and depends only on i and j.

C_m(ii) The relation of association is symmetrical, i.e. if x_jb is the k-th associate of x_ia in S_j, then x_ia is the k-th associate of x_jb in S_i. Notice that this implies n_ij = n_ji for all i and j.

C_m(iii) Let S_i, S_j and S_k be any three sets, where i, j, k are not necessarily distinct. Let x_jb be the a-th associate of x_ia in S_j, so that x_ia is the a-th associate of x_jb in S_i. Consider the set Q(x_ia, β, k) of the β-th associates of x_ia in S_k, and the set Q(x_jb, γ, k) of the γ-th associates of x_jb in S_k. Then the number of objects common to Q(x_ia, β, k) and Q(x_jb, γ, k) is p(i, j, a; k, β, γ), a number which depends only on i, j, k, a, β and γ, and not on the particular pair x_ia, x_jb.
The scheme defined above will be referred to as the multidimensional
partially balanced (MDPB) association scheme.
3. Multidimensional partially balanced designs.

Suppose we have m factors F_1, F_2, ..., F_m, and suppose the i-th factor has s_i levels F_i1, F_i2, ..., F_is_i. Here the word level does not imply that the levels of any factor are necessarily ordered according to some quantitative criterion. Thus if F_1 denotes varieties of wheat, then F_11, F_12, ..., F_1s_1 may stand for s_1 different varieties of wheat. There are s_1 × s_2 × ... × s_m = N_0, say, combinations of levels.

Let (j_1, j_2, ..., j_m) denote a typical treatment combination, in which the r-th factor occurs at level F_rj_r (1 ≤ j_r ≤ s_r; r = 1, 2, ..., m). The observed response to this treatment combination will be denoted by y(j_1, j_2, ..., j_m). Also we shall write E[y(j_1, j_2, ..., j_m)] = y*(j_1, j_2, ..., j_m), where E denotes expected value.

The main aspect in which multidimensional designs differ from factorial designs is that in the former we do not define main effects, interactions, etc. as in the latter. In fact, in the latter the factorial effects are generally assumed to be additive, which we in particular shall assume. Thus we shall take as model:

(1)  y*(j_1, j_2, ..., j_m) = T(1, j_1) + T(2, j_2) + ... + T(m, j_m),
     Var[y(j_1, j_2, ..., j_m)] = σ²,

where T(r, j_r) (1 ≤ j_r ≤ s_r; r = 1, 2, ..., m) denotes the 'true effect' of the level F_rj_r of the factor F_r. Also we shall suppose all the observations to be independent.
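In matrix terms, model (1) says that each expected response is the sum of one effect per factor, so the design matrix linking observations to effects is a 0-1 matrix with exactly m unit entries per row. A minimal sketch (not from the paper; the factor sizes and the full-factorial choice of combinations are hypothetical):

```python
import numpy as np

def design_matrix(levels, combos):
    """Build the 0-1 matrix of model (1): row q has a unit entry for the
    effect T(r, j_r) of each factor r occurring in the q-th combination."""
    offsets = np.cumsum([0] + levels[:-1])
    A_t = np.zeros((len(combos), sum(levels)))
    for q, combo in enumerate(combos):
        for r, j in enumerate(combo):        # j = 0-based level of factor r
            A_t[q, offsets[r] + j] = 1.0
    return A_t

# Hypothetical 2-factor example: s_1 = 2 and s_2 = 3 levels, full factorial.
levels = [2, 3]
combos = [(i, j) for i in range(levels[0]) for j in range(levels[1])]
At = design_matrix(levels, combos)
assert (At.sum(axis=1) == len(levels)).all()   # m unit entries per row
```

Each row of this matrix, multiplied into the parameter vector of effects, reproduces the sum in (1).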
e
An example of a 4-dimensional situation will be given here.
Suppose one has
20 different varieties of Wheat, 4 methods of cultivation, and 10 different insectisides.
In addition to these :3 factors, one usually has another nuisance factor
.
5
viz. a set of blocks.
A system of blocks may be called for because of soil
heterogeneity, for example.
As appears likely, the interaction between these
factors may be negligible, and the model (1) may be valid.
The problem may then
be to estimate the true effect of the various factor levels.
Since the total number of level-combinations is large and the number
of effects
to be estimated is relatively small, a design is required which may satisfactorily
cope with our needs in relatively fewer observations.
This is exactly the purpose
served by MOPB designs to be defined now.
Let h^{1,2,...,m}_{j_1,j_2,...,j_m} be the total number of times the combination (j_1, j_2, ..., j_m) is tried in our experiment. Let

(2)  h^r_{j_r} = Σ h^{1,2,...,m}_{j_1,j_2,...,j_m}, the sum being over all j_t with t ≠ r,
     h^{r,t}_{j_r,j_t} = Σ h^{1,2,...,m}_{j_1,j_2,...,j_m}, the sum being over all j_u with u ≠ r, t;

i.e. h^r_{j_r} is the number of times the factor F_r appears at level F_rj_r in the set O of level-combinations selected for experimentation, and h^{r,t}_{j_r,j_t} is the number of times the levels F_rj_r and F_tj_t occur together in O.

Def. 3.1. Let the set of the s_k levels of F_k (k = 1, 2, ..., m) be denoted by S_k. Then the set O will be said to be a multidimensional partially balanced design if

M(i) the sets S_1, S_2, ..., S_m have a multidimensional partially balanced association scheme defined over them,

M(ii) h^r_{j_r} = μ_r (r = 1, 2, ..., m), i.e. h^r_{j_r} is independent of the level F_rj_r, where μ_r depends only on the r-th factor, and

M(iii) h^{r,t}_{j_r,j_t} = d^a_{r,t}, a constant depending on the pair of factors F_r, F_t and also upon a, where F_rj_r ∈ S_r and F_tj_t ∈ S_t are a-th associates of each other under the association scheme in M(i). Obviously we must have d^a_{r,r} = 0 for all permissible r.
Consider now the analysis of the above designs. Only the linear estimation part of the analysis will however be discussed here; for once the best linear unbiased estimates of the parameters have been obtained, the sums of squares etc. and the analysis of variance table can be easily computed using standard methods discussed, for example, in [1, 6].

Consider the model (1). The number of parameters T to be estimated is s = Σ_{r=1}^m s_r. Let y denote a (fixed) column vector of the observations y(j_1, j_2, ..., j_m), where (j_1, j_2, ..., j_m) ∈ O. Let

(3)  T' = (T(1,1), ..., T(1,s_1); T(2,1), T(2,2), ..., T(2,s_2); ...; T(m,1), T(m,2), ..., T(m,s_m))

be the s × 1 vector of parameters. Then we can write

(4)  E(y) = A' T,

where A is a certain matrix with elements 0 and 1, and is obtained by using equations (1). Then it is well known [1] that the normal equations for obtaining T̂ can be written

(5)  (A A') T̂ = A y.

In general M' = A A' is singular. However, we can overcome this difficulty by proceeding as in the case of 2-dimensional designs. Call a design connected with respect to factor F_i if, after eliminating from (5) the parameters corresponding to all the factors except F_i, we get a set of (s_i − 1) independent equations involving the T(i, r) alone. A design connected with respect to each factor may be called well-connected.

For a well-connected design, suppose we make the (usual) assumptions

     Σ_{r=1}^{s_i} T(i, r) = 0,  i = 1, 2, ..., m.

Then equations (5) are changed to

(6)  (M' + Γ) T̂ = A y,

where Γ is block diagonal, the real number ρ_i being repeated over the s_i × s_i block corresponding to the i-th factor. By properly choosing Γ, the matrix (M' + Γ) becomes nonsingular; then solving (6) we have a solution for T̂. Due to the special block-diagonal nature of Γ, the matrices M' and (M' + Γ) have the same pattern. Thus the problem now is to invert a nonsingular matrix with the same pattern as M'. In the remaining part of this section we shall examine this pattern more closely.
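The singularity of M' = AA' and its repair by Γ can be seen in a small numerical sketch (a hypothetical two-factor additive design with ρ_1 = ρ_2 = 1; the particular ρ values are an arbitrary choice):

```python
import numpy as np

# Additive two-factor model with s_1 = 2 and s_2 = 3 levels (hypothetical).
s1, s2 = 2, 3
combos = [(i, j) for i in range(s1) for j in range(s2)]    # full design O
Ap = np.zeros((len(combos), s1 + s2))                      # A': one row per observation
for q, (i, j) in enumerate(combos):
    Ap[q, i] = Ap[q, s1 + j] = 1.0

M = Ap.T @ Ap                        # M' = A A': singular (rank s1 + s2 - 1)
Gamma = np.zeros_like(M)             # Γ: constant rho_i over the i-th diagonal block
Gamma[:s1, :s1] = 1.0                # rho_1 = 1
Gamma[s1:, s1:] = 1.0                # rho_2 = 1

assert np.linalg.matrix_rank(M) == s1 + s2 - 1
assert np.linalg.matrix_rank(M + Gamma) == s1 + s2         # (M' + Γ) is nonsingular
```

Note that Γ here only fills cells of M' that the pattern already allows, so M' and M' + Γ have the same patterned structure.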
The element in the cell (k, f) of M' is obtained by taking the sum of the products of the corresponding elements in the k-th column and the f-th column of the matrix A'. Let the elements in the k-th row and the f-th row of T be respectively T(x, j_x) and T(y, j_y). Then the product of the elements in the cell (q, k) and the cell (q, f) of A' will be unity if, and only if, the q-th element of y contains both j_x and j_y. Hence the element (k, f) of A A' is equal to the number of level-combinations in O in which the symbols j_x and j_y both occur together (respectively at the positions of the x-th and the y-th factors). This equals h^{x,y}_{j_x,j_y} by definition. Again, this in turn equals d^a_{x,y} if j_x ∈ S_x and j_y ∈ S_y are the a-th associates of each other.
Now let the matrices D^a_{x,y} be defined as follows. First we define association matrices B^a_{x,y} between the s_x levels of F_x and the s_y levels of F_y, i.e. between S_x and S_y (where x and y may or may not be equal). The matrix B^a_{x,y} is of size s_x × s_y, and has unity in the cell (i, j) if the i-th level of F_x and the j-th level of F_y are a-th associates under the multidimensional association scheme, and zero otherwise. Next we define D^a_{x,y}, of size s × s each, such that each such matrix contains m² submatrices. Let M_ij denote the s_i × s_j submatrix in the i-th row block and the j-th column block; this corresponds to the factors F_i and F_j. Then, in D^a_{x,y}, the submatrix M_{x'y'} is a zero matrix if x' ≠ x or y' ≠ y or both, and M_{xy} = B^a_{x,y}. Thus D^a_{x,y} is an s × s matrix in which the rows and columns correspond in order to the elements in T, and in which the cell (j_x, j_y), corresponding to the elements T(x, j_x) and T(y, j_y), contains unity if j_x and j_y are a-th associates, and contains zero otherwise; all the other elements of D^a_{x,y}, which do not correspond to the factors F_x and F_y, are zero.

From the developments in the last two paragraphs it therefore follows that

(7)  M' = A A' = Σ_{a,x,y} d^a_{x,y} D^a_{x,y}.

The next two sections will be devoted to the development of certain algebraic properties of our association scheme, which will be used later to obtain an algorithm for inverting a matrix of the type (7).
4. Linear associative algebra of the MDPB association scheme.

We first establish certain properties of the matrices B^a_{ij}, whose (t, u) element will be denoted by b^{a,tu}_{ij}.

Lemma 4.1.
(i) Σ_u b^{α,tu}_{ij} = n^α_{ij}, for all permissible t, i, j, α.
(ii) n_i n^α_{ij} = n_j n^α_{ji}, for all permissible i, j, α.
(iii) Σ_α B^α_{ij} = J_{n_i n_j} (the n_i × n_j matrix of all unities).
(iv) Σ_α c^α B^α_{ij} = 0_{n_i n_j} (the n_i × n_j zero matrix) implies c^α = 0, for all permissible α.
(v) The linear functions of B^1_{ij}, B^2_{ij}, ..., B^{n_ij}_{ij} form a vector space with these n_ij matrices as a basis.

Proof: (i) For fixed t, Σ_u b^{α,tu}_{ij} is the number of elements in S_j which are the α-th associates of the object t (∈ S_i), and this is n^α_{ij} by definition.

(ii) For each object t ∈ S_i, we have n^α_{ij} elements in S_j which are the α-th associates of t; thus the number of α-associated pairs (t, u) with t ∈ S_i, u ∈ S_j is n_i n^α_{ij}. Counting the same pairs from the side of S_j gives n_j n^α_{ji}, which gives the required result.

(iii) This relation holds since, for all t ∈ S_i and u ∈ S_j, the pair (t, u) are α-th associates for one and only one value of α.

(iv) This is obvious, since the nonzero cells of the B^α_{ij} (α = 1, 2, ..., n_ij) do not overlap.

(v) This holds by virtue of (iv).
Lemma 4.2.

(9)  B^α_{ij} B^β_{jk} = Σ_γ p(i, k, γ; j, α, β) B^γ_{ik}.

Proof: The matrices on the l.h.s. are of dimensions n_i × n_j and n_j × n_k respectively, so that the product exists. The element in the cell (t, u) of the product matrix is

     Σ_{q=1}^{n_j} b^{α,tq}_{ij} b^{β,qu}_{jk}.

Suppose now that t ∈ S_i and u ∈ S_k, and that t, u are γ-th associates. Then the last expression above equals the number of elements in S_j which are common to the set of α-th associates of t and the set of β-th associates of u, and is, by definition, equal to p(i, k, γ; j, α, β). On the other hand, the element in the (t, u) cell of the matrix on the r.h.s. of (9) is Σ_q p(i, k, q; j, α, β) b^{q,tu}_{ik}. Since (t, u) are γ-th associates, only one member in this last sum is nonzero, and the sum reduces to p(i, k, γ; j, α, β). This completes the proof.
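Lemma 4.2 can be checked numerically on any ordinary association scheme (the case i = j = k). A minimal sketch, using the 3-class group-divisible scheme on 4 objects in two groups (classes: same object, same group, other group); the scheme itself is a standard example, not one from the paper:

```python
import numpy as np

# A 3-class association scheme on 4 objects in two groups {0,1} and {2,3}:
# class 1 = same object, class 2 = same group, class 3 = other group.
B1 = np.eye(4)
B2 = np.kron(np.eye(2), np.array([[0., 1.], [1., 0.]]))
B3 = np.ones((4, 4)) - B1 - B2
B = [B1, B2, B3]

# Lemma 4.2 (with i = j = k): every product B^a B^b is a linear
# combination sum_c p(c; a, b) B^c of the scheme's association matrices.
for a in range(3):
    for b in range(3):
        prod = B[a] @ B[b]
        # entries of the product must be constant on each associate class,
        # so p(c; a, b) can be read off any one cell of class c
        p = [prod[np.nonzero(B[c])][0] for c in range(3)]
        assert np.allclose(prod, sum(p[c] * B[c] for c in range(3)))
```

The extracted coefficients p are exactly the parameters p of condition C(c), here recovered from the matrices rather than from the scheme's definition.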
Lemma 4.3.

(10)  D^α_{jk} D^β_{k'f} = Σ_γ p(j, f, γ; k, α, β) D^γ_{jf}  if k = k',
      D^α_{jk} D^β_{k'f} = 0_{ss}  if k ≠ k',

for all permissible values of j, f, k, k', α and β, where 0_{ss'} denotes the zero matrix of order s × s'.

Proof: Consider the product D^α_{jk} D^β_{k'f}, obtained by multiplying the two matrices blockwise (blocks formed by a row or column of submatrices). The element in the q_1-th row block and the q_2-th column block of the product will be a zero matrix if either the q_1-th row block of D^α_{jk} consists entirely of zero submatrices, or the q_2-th column block of D^β_{k'f} has zero submatrices only, or both. Since all the row blocks of D^α_{jk} consist of zero submatrices except the j-th row block, and all the column blocks (except the f-th) of D^β_{k'f} contain zero submatrices only, the only possible nonzero submatrix of the product is M_jf, which stands in the j-th row block and the f-th column block. However, if k ≠ k', the two nonzero submatrices get multiplied with zero submatrices, and M_jf will also be zero. If k = k', then obviously

     M_jf = B^α_{jk} B^β_{kf} = Σ_γ p(j, f, γ; k, α, β) B^γ_{jf},

by the last lemma. This completes the proof.
The above lemma shows that the product of any two matrices D^a_{ij} can be expressed as a linear function of these same matrices. Also, since the elements of any D^a_{ij} are 1 and 0 only, and since

(11)  Σ_{a,i,j} D^a_{ij} = J_{ss},

it is clear that the D^a_{ij} (for all permissible a, i, j) are all linearly independent. Hence the vector space L formed by the linear functions of the D^a_{ij} is a linear algebra. Further, since matrix multiplication follows the associative law, the algebra L is associative too. Hence we have proved

Theorem 4.1. The set of all linear functions of the D^a_{ij} forms a linear associative algebra L, with the n·· (= Σ_{i,j} n_ij) matrices D^a_{ij} as a basis.

By using the properties of the algebra L established above, one can prove a number of useful necessary relations which the parameters p(i, j, a; k, β, γ), n^a_{ij}, etc. must satisfy if the multidimensional association scheme exists. Many of these are summarized below.
Theorem 4.2.

(12) (i)  Σ_{α=1}^{n_ik} p(k, i, α; f, β, γ) p(j, i, δ; k, σ, α) = Σ_{α=1}^{n_jf} p(j, f, α; k, σ, β) p(j, i, δ; f, α, γ),

for all permissible values of i, j, k, f, β, γ, δ and σ.

(ii)  Σ_{β=1}^{n_ij} p(i, k, α; j, β, γ) = n^γ_{kj},

for all permissible i, j, k, α and γ.

(iii)  p(j, j, a; k, γ, γ) = n^γ_{jk}, if each object in the set S_j is the a-th associate of itself.

Proof: The first set of relations can be established by considering the equality (B^σ_{jk} B^β_{kf}) B^γ_{fi} = B^σ_{jk} (B^β_{kf} B^γ_{fi}), expanding both sides by Lemma 4.2, and equating coefficients of B^δ_{ji}. The other results can be obtained by using the same techniques on the B^α_{jk}, together with (11).
Consider the matrices D^a_{jj}, a = 1, 2, ..., n_jj. Note that, because of the relation of association being symmetrical (see condition C_m(ii) in Sec. 2), these matrices are symmetric. Hence

(13)  D^α_{jj} D^β_{jj} = (D^β_{jj} D^α_{jj})'.

Again, each product D^α_{jj} D^β_{jj} could obviously be written down as a linear function of the (symmetric) D^γ_{jj}, and is therefore itself symmetric; combined with (13), this gives D^α_{jj} D^β_{jj} = D^β_{jj} D^α_{jj}. Thus the set of all linear functions of the n_jj matrices D^a_{jj} is a sub-algebra (L_j, say) of L, and further, L_j is commutative. The algebras L_j correspond to the algebras of ordinary PB association schemes, and have been studied by Bose and Mesner [3].
The algebra L, however, is not commutative: for example, the product D^α_{jk} D^β_{kf} is in general nonzero, while the product D^β_{kf} D^α_{jk} will always be zero if f ≠ j. Since the commutativity of L_j is partly a consequence of the symmetry of the D^a_{jj}, one might wonder whether one could generate a commutative algebra by considering instead the linear functions of the matrices A^a_{ij} defined by

(14)  A^a_{ij} = D^a_{ij} + D^a_{ji},

which are symmetric. The answer however is negative, since it can be shown that the A^a_{ij} do not in general generate an algebra.
5. Inversion of matrices belonging to linear algebras.

Consider the algebra L. Clearly the identity matrix I_s ∈ L. Let D ∈ L, and suppose that D is nonsingular. Since every matrix satisfies its own characteristic polynomial, we must have

(15)  a_0 I_s + a_1 D + a_2 D² + ... + a_r D^r = 0_{ss},

for some real numbers a_0, a_1, ..., a_r. Since D is nonsingular, we then get

(16)  D^{-1} = −(1/a_0)(a_1 I_s + a_2 D + ... + a_r D^{r−1}),

provided a_0 ≠ 0. (A similar equation could be written down if a_i ≠ 0 and a_j = 0 for j = 0, 1, ..., i−1.) Since D ∈ L, each power of D also belongs to L, and hence so does the expression on the r.h.s. of (16). Hence D^{-1} ∈ L. (This result is well known, but we give a proof for the reader's convenience.)
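The passage (15)-(16) can be turned into a small numerical routine. A sketch, not the paper's algorithm proper: here the characteristic polynomial is obtained numerically rather than from the algebra's parameters:

```python
import numpy as np

# Invert a matrix via its characteristic polynomial, as in (15)-(16):
# if a_0 I + a_1 D + ... + a_r D^r = 0 with a_0 != 0, then
# D^{-1} = -(a_1 I + a_2 D + ... + a_r D^{r-1}) / a_0.
def inverse_by_char_poly(D):
    c = np.poly(D)                # char. poly coefficients, highest degree first
    n = D.shape[0]                # c[n] is the constant term, i.e. a_0
    acc = np.zeros_like(D)
    P = np.eye(n)
    for k in range(n):            # acc = c[n-1] I + c[n-2] D + ... + c[0] D^{n-1}
        acc += c[n - 1 - k] * P
        P = P @ D
    return -acc / c[n]

D = np.array([[2., 1.], [1., 3.]])
assert np.allclose(inverse_by_char_poly(D), np.linalg.inv(D))
```

For D belonging to a low-dimensional algebra, the powers of D (and hence D^{-1}) stay within the algebra, which is the point of the argument above.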
Let D^{-1} = W, and write

(17)  W = Σ_{α,i,j} w^α_{ij} D^α_{ij},

where the w's are real numbers, to be obtained if D^{-1} is needed. To solve this latter problem, let

(18)  D = Σ_{α,i,j} d^α_{ij} D^α_{ij}.

Since W D = I_s, we get, using Lemma 4.3,

     I_s = Σ_{β,k,i} Σ_{α,j} w^β_{ki} d^α_{ij} D^β_{ki} D^α_{ij} = Σ_{γ,k,j} { Σ_{α,β,i} w^β_{ki} d^α_{ij} p(k, j, γ; i, β, α) } D^γ_{kj}.

Hence, equating coefficients of D^γ_{kj}, we get

(19)  Σ_{α,β,i} w^β_{ki} d^α_{ij} p(k, j, γ; i, β, α) = e^γ_{kj},  for all permissible γ, k, j,

where e^γ_{kj} denotes the coefficient of D^γ_{kj} in the expansion of I_s.

The number of unknowns and the number of equations in the set (19) are both obviously equal to n·· (= Σ_{i,j} n_ij). Further, since W ∈ L, these equations must all be consistent and must offer a unique solution for the w's.

Now, in (19), keep k fixed, and let γ and j take all permissible values. This gives a set of Σ_j n_kj equations involving only the Σ_i n_ki unknowns w^β_{ki}. Thus the equations can be broken up into sets, the k-th set containing n_k· (= Σ_j n_kj) equations in as many unknowns. Consider the k-th set of equations: for fixed γ and j, the coefficient of the unknown w^β_{ki} is Σ_α d^α_{ij} p(k, j, γ; i, β, α). Denoting by Ω_k the matrix of these coefficients, the problem of the inversion of D is thrown on that of inverting Ω_1, Ω_2, ..., Ω_m.
The above method of inversion is valid (up to equations (19)) for any linear algebra. For matrices in L, it is further useful if the n_ij are small. However, in case the n_ij are large, another, more powerful method, to be presented below, is much more useful.
Consider a commutative linear algebra, say L_j. Let L_j0 denote the equivalent algebra generated by B^1_{jj}, ..., B^{n_jj}_{jj}. Let the n_j characteristic roots of B^a_{jj}, arranged in a certain order to be explained below, be the elements of the row vector

(20)  δ_ja = (δ_ja,1, δ_ja,2, ..., δ_ja,n_j).

The order of the roots in (20) is as required by the following well-known

Theorem (Frobenius). Suppose there exist matrices B^1_{jj}, B^2_{jj}, ..., B^{n_jj}_{jj}, say, which are pairwise commutative, i.e. for which B^α_{jj} B^β_{jj} = B^β_{jj} B^α_{jj} for any permissible α and β. Then there exists an ordering (say, as in (20)) of the roots of the B^α_{jj}, α = 1, 2, ..., n_jj, such that the elements of the vector

     (δ_jα,1 + δ_jβ,1, δ_jα,2 + δ_jβ,2, ..., δ_jα,n_j + δ_jβ,n_j)

represent in some order the roots of (B^α_{jj} + B^β_{jj}); similarly, the elements of

     (δ_jα,1 δ_jβ,1, δ_jα,2 δ_jβ,2, ..., δ_jα,n_j δ_jβ,n_j)

represent in some order the roots of (B^α_{jj} B^β_{jj}).

A direct application of Frobenius' theorem shows that if B_j ∈ L_j0, i.e.

(21)  B_j = Σ_α b^α_{jj} B^α_{jj},

and δ_Bj denotes the row vector of the roots of B_j, then

     δ_Bj = Σ_{α=1}^{n_jj} b^α_{jj} δ_jα.
Suppose now that B_j is nonsingular, and consider the problem of the inversion of B_j. Let B_j^{-1} = C_j, and let the vector of the roots of C_j be denoted by δ_Cj. Further let

(22)  δ_Bj = (δ_Bj,1, δ_Bj,2, ..., δ_Bj,n_j),  δ_Cj = (δ_Cj,1, δ_Cj,2, ..., δ_Cj,n_j).

Then, since B_j C_j = I_n_j, Frobenius' theorem gives

(23)  δ_Cj,t = 1/δ_Bj,t,  t = 1, 2, ..., n_j.

Further, write

(24)  δ_Cj = Σ_{α=1}^{n_jj} c^α_{jj} δ_jα,

and

(25)  Δ_j = [δ_j1', δ_j2', ..., δ_j n_jj']',  b_j = (b^1_{jj}, b^2_{jj}, ..., b^{n_jj}_{jj}),  c_j = (c^1_{jj}, c^2_{jj}, ..., c^{n_jj}_{jj}),

so that Δ_j is an n_jj × n_j matrix, b_j is the known vector of coefficients for B_j, and c_j is the unknown vector, to be calculated for evaluating B_j^{-1}. (The vector c_j will be spoken of as being the inverse of b_j.) Then (21) and (24) can be rewritten as

(26)  δ_Bj = b_j Δ_j,  δ_Cj = c_j Δ_j.

Thus the solution for c_j is given by

(27)  c_j = δ_Cj Δ_j' (Δ_j Δ_j')^{-1}.
Consider the element in the cell (α, β) of (Δ_j Δ_j'). It clearly equals Σ_t δ_jα,t δ_jβ,t, which by Frobenius' theorem must equal tr(B^α_{jj} B^β_{jj}). Now I_n_j ∈ L_j, and if every object is defined to be a certain (say γ-th) associate of itself and of none else, then we have B^γ_{jj} = I_n_j, and all the diagonal elements of each B^{a'}_{jj} are zero for a' ≠ γ, so that tr B^{a'}_{jj} = 0 for a' ≠ γ. Hence, by (9),

     tr(B^α_{jj} B^β_{jj}) = p(j, j, γ; j, α, β) tr B^γ_{jj} = n_j p(j, j, γ; j, α, β).

The last term is clearly zero if α ≠ β, since distinct associate classes of any object do not have any common elements. Thus Δ_j Δ_j' is a diagonal matrix, and the element in the cell (α, α) is n_j p(j, j, γ; j, α, α) (= n_j n^α_{jj}), i.e.

(28)  Δ_j Δ_j' = n_j diag(n^1_{jj}, n^2_{jj}, ..., n^{n_jj}_{jj}),

which can be easily calculated.
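The whole chain (20)-(28) can be illustrated on a small commutative algebra. A sketch, using a cyclic 3-class scheme on 5 objects (a standard example, not one from the paper); Frobenius' ordering is obtained here by diagonalizing all the B^a in a common eigenbasis:

```python
import numpy as np

# Cyclic association scheme on 5 objects: class 1 = self,
# class 2 = distance 1 on the cycle, class 3 = distance 2.
n = 5
d = np.abs(np.subtract.outer(range(n), range(n)))
dist = np.minimum(d, n - d)
B = [np.eye(n), (dist == 1) * 1.0, (dist == 2) * 1.0]

# Common eigenbasis of the commutative algebra: eigenvectors of a
# generic member (the combination coefficients are arbitrary).
_, V = np.linalg.eigh(1.0 * B[0] + 2.0 * B[1] + 5.0 * B[2])
Delta = np.array([np.diag(V.T @ Ba @ V) for Ba in B])    # rows = root vectors (20)
assert np.allclose(Delta @ Delta.T, n * np.diag([1., 2., 2.]))   # checks (28)

b = np.array([3.0, 1.0, 0.5])            # B_j = sum b_a B^a, a hypothetical member
roots_B = b @ Delta                      # roots of B_j, as below (21)
roots_C = 1.0 / roots_B                  # Frobenius: roots of the inverse, as in (23)
c = roots_C @ Delta.T @ np.linalg.inv(Delta @ Delta.T)   # solution (27)

B_mat = sum(bi * Bi for bi, Bi in zip(b, B))
C_mat = sum(ci * Bi for ci, Bi in zip(c, B))
assert np.allclose(C_mat, np.linalg.inv(B_mat))
```

The inverse coefficient vector c is obtained from b by vector-matrix products and reciprocals only, which is the computational advantage claimed above.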
For the algebra L_j, therefore, the problem of the inversion of any B_j ∈ L_j0 is made very easy as soon as Δ_j is known. For then one just calculates δ_Bj from (26) (which involves only the multiplication of a vector by a matrix), obtains δ_Cj from δ_Bj by taking reciprocals, and finally obtains c_j by multiplying δ_Cj by the transpose of Δ_j and by (Δ_j Δ_j')^{-1}, as in (27).
A few remarks regarding the computation of Δ_j will not be out of place here. Corresponding to the algebra L_j0 generated by the B^a_{jj}, there is a standard representation, say L_j*, generated by certain (n_jj × n_jj) matrices which may be denoted by P_jα; the interested reader is referred to Bose and Mesner [3]. It is shown there that the nonzero roots of B^α_{jj} and of the corresponding P_jα are the same. Further, the multiplicities of these nonzero roots, as roots of B^α_{jj}, can be easily obtained by noting that tr(B^α_{jj}) = 0 for α ≠ γ. The ordering of the roots required by Frobenius' theorem is usually easily found by trial and error.
The above holds only due to commutativity. However, since the L_j's are sub-algebras of L, we shall in the next section use the above property to invert matrices in L.

If D is a matrix belonging to an algebra L, then D will be said to be a multidimensional (ordinary) partially balanced matrix if L is the algebra generated by a multidimensional (ordinary) partially balanced association scheme.
6. An algorithm for inverting multidimensional partially balanced matrices.

We shall first indicate the method for the cases m = 2 and 3, and later the generalization for higher m. Let m = 2, and let D (∈ L) be the matrix to be inverted. Suppose

(29)  D = Σ_{α,i,j} d^α_{ij} D^α_{ij},

and write D in the partitioned form

(30)  D = [ B_11  B_12 ]
          [ B_21  B_22 ],

where B_ij = Σ_α d^α_{ij} B^α_{ij}. Let M = D^{-1}, and suppose a partitioned form for M, similar to (30), is

(31)  M = [ M_11  M_12 ]
          [ M_21  M_22 ].

It can be easily checked that the M_ij are connected with the B_ij by the relations

(32)  M_22 = [B_22 − B_21 B_11^{-1} B_12]^{-1},
      M_12 = −B_11^{-1} B_12 M_22,  M_21 = M_12',
      M_11 = B_11^{-1} + B_11^{-1} B_12 M_22 B_21 B_11^{-1},

where, without loss of generality, we assume the matrices under the inversion sign to be nonsingular. This formula holds whatever the pattern in D may be.
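The relations (32) are the standard partitioned-inverse (Schur complement) formulae, and can be checked numerically with arbitrary blocks. A quick sketch (M_21 is computed by the general formula −M_22 B_21 B_11^{-1}, which reduces to M_12' when D is symmetric):

```python
import numpy as np

# Block inversion as in (32), for a matrix partitioned into four blocks.
def block_inverse(B11, B12, B21, B22):
    B11_inv = np.linalg.inv(B11)
    M22 = np.linalg.inv(B22 - B21 @ B11_inv @ B12)       # Schur complement
    M12 = -B11_inv @ B12 @ M22
    M21 = -M22 @ B21 @ B11_inv
    M11 = B11_inv + B11_inv @ B12 @ M22 @ B21 @ B11_inv
    return np.block([[M11, M12], [M21, M22]])

rng = np.random.default_rng(1)
D = rng.normal(size=(5, 5))
D = D @ D.T + 5 * np.eye(5)                              # safely nonsingular
M = block_inverse(D[:3, :3], D[:3, 3:], D[3:, :3], D[3:, 3:])
assert np.allclose(M, np.linalg.inv(D))
```

In the paper's setting, each block product in (32) stays inside the pattern of association matrices, which is what makes the formulae cheap there.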
Suppose now that

(33)  B_11^{-1} = Σ_β c^β_{11} B^β_{11}.

Then, by repeated use of Lemma 4.2, we have

(34)  B_21 B_11^{-1} B_12 = Σ_{γ'} { Σ_{α'} d^{α'}_{21} Q(α', γ') } B^{γ'}_{22},

where

(35)  Q(α', γ') = Σ_{β,β'} c^β_{11} d^{β'}_{12} Σ_γ p(2, 1, γ; 1, α', β) p(2, 2, γ'; 1, γ, β').

Let us also write

(36)  d_ij = (d^1_{ij}, d^2_{ij}, ..., d^{n_ij}_{ij}),  for all permissible i and j.

Hence we get

(37)  M_22^{-1} = B_22 − B_21 B_11^{-1} B_12 = Σ_{γ'} { d^{γ'}_{22} − Σ_{α'} d^{α'}_{21} Q(α', γ') } B^{γ'}_{22} = Σ_{γ'} d^{γ'}_{22,1} B^{γ'}_{22}, say.

Thus

(38)  M_22 = Σ_{γ'} m^{γ'}_{22} B^{γ'}_{22}, say,

could be easily obtained by the method of the last section. For M_12 and M_11 we go backwards. Thus

(39)  M_12 = −B_11^{-1} B_12 M_22 = Σ_β m^β_{12} B^β_{12}, say,

and

(40)  M_11 = B_11^{-1} + B_11^{-1} B_12 M_22 B_21 B_11^{-1} = Σ_β m^β_{11} B^β_{11}, say,

each product on the right being reduced to a linear function of the appropriate B^β by product formulae of the same form as (34) and (35), and the vectors m_11, m_12, m_22 being defined analogously to (36).
Next, using an obvious generalization of the above notation, consider (for the case m = 3) the problem of obtaining M = D^{-1}, where

(41)  M = [ M_11  M_12  M_13 ]      D = [ B_11  B_12  B_13 ]
          [ M_21  M_22  M_23 ],        [ B_21  B_22  B_23 ].
          [ M_31  M_32  M_33 ]         [ B_31  B_32  B_33 ]

Since

(42)  B_ij = Σ_α b^α_{ij} B^α_{ij},  for all permissible i and j,

we can denote D in short by a 3 × 3 matrix D* whose elements are row vectors, such that the element in the (i, j) cell is

(43)  (b^1_{ij}, b^2_{ij}, ..., b^{n_ij}_{ij})_{ij}.

The suffix (ij) at the end of the bracket indicates the two sets S_i and S_j involved. We shall consider the expression (43) to be identical with (42), except for an abbreviation. D* will be called the coefficient matrix of D.

The computation proceeds as indicated in the partitioning of D. At the first stage, we obtain M_1, the inverse of B_11. Then, proceeding as for the case m = 2, one computes M_2, where

     M_2 = [ B_11  B_12 ]^{-1}  =  [ M_2.11  M_2.12 ]
           [ B_21  B_22 ]          [ M_2.21  M_2.22 ], say.

The computations can all be neatly and briefly expressed by using the vectors (43). This will be seen in an example (Sec. 8), where we illustrate the working of the algorithm.
From the formulae (32) one sees that we must next obtain

(44)  M_33 = [ B_33 − (B_31, B_32) M_2 (B_13', B_23')' ]^{-1}.

Using computations similar to those at (34), one finds that each term in the last bracket is a linear function of the matrices B^α_{33}, so that the matrix inverted in (44) belongs to L_3, and hence can be easily inverted. Then, to get M, we further use (32) to obtain

     M_13 = −(M_2.11 B_13 + M_2.12 B_23) M_33,
     M_23 = −(M_2.21 B_13 + M_2.22 B_23) M_33,

each term of which could be computed using product formulae of the form

(45)  B_ij B_jk = Σ_γ q^γ_{ik} B^γ_{ik},

where

(46)  q^γ_{ik} = Σ_{α,β} b^α_{ij} b^β_{jk} p(i, k, γ; j, α, β).

The remaining submatrices of M can be similarly computed using (32).
The method for inverting D remains the same for general values of m. There are m² submatrices B_ij of D. At the r-th stage, we invert the top-left large submatrix of D, viz.

     D_r = [ B_11  ...  B_1r ]
           [  ...  ...  ...  ]
           [ B_r1  ...  B_rr ].

Formulae of the form (45) suffice for all the computations. The algorithm is carried on for r = 1, 2, ..., m, at the end of which D^{-1} is obtained.

The above algorithm offers a speedy method of inverting any multidimensional partially balanced matrix, in particular one of the form (7), which occurs in the analysis of an MDPB design. The algorithm has the further advantage that, at every stage, the relation D_r D_r^{-1} = I provides a check on the calculations.

The method of linear associative algebras appears to be a very powerful tool for inverting several large classes of patterned matrices. Such matrices arise in various areas of research, particularly the design of experiments. In the next section we consider an important application to the latter case.
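The staged procedure just described can be sketched as follows (a generic blockwise routine following the paper's scheme of stages; the pattern-exploiting product formulae (45) are replaced here by plain matrix products, and the block sizes are hypothetical):

```python
import numpy as np

def staged_inverse(blocks):
    """blocks[i][j] = B_ij; invert stage by stage via the formulae (32)."""
    M = np.linalg.inv(blocks[0][0])                      # stage r = 1
    for r in range(1, len(blocks)):
        C = np.vstack([blocks[i][r] for i in range(r)])  # column (B_1r; ...; B_{r-1,r})
        R = np.hstack([blocks[r][j] for j in range(r)])  # row (B_r1, ..., B_{r,r-1})
        S = np.linalg.inv(blocks[r][r] - R @ M @ C)      # inverse Schur complement
        TL = M + M @ C @ S @ R @ M
        M = np.block([[TL, -M @ C @ S], [-S @ R @ M, S]])
    return M

m, k = 3, 2                                              # m blocks of size k each
rng = np.random.default_rng(2)
D = rng.normal(size=(m * k, m * k)) + 10 * np.eye(m * k)
blocks = [[D[i*k:(i+1)*k, j*k:(j+1)*k] for j in range(m)] for i in range(m)]
Dinv = staged_inverse(blocks)
assert np.allclose(Dinv @ D, np.eye(m * k))              # the check D_r D_r^{-1} = I
```

In the partially balanced case, every block above lives in the algebra L, so each stage reduces to operations on short coefficient vectors rather than on full matrices.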
7. Applications to the analysis of partially balanced factorial fractions.

The theory that we have developed so far has an important application to the analysis of factorial fractions having some balance or symmetry in them. Due to lack of space, we shall not be able to demonstrate this in complete generality. We shall also restrict ourselves to fractions from 2^m1 × 3^m2 factorials, firstly in order to make the ideas clearer and the presentation elaborate, and secondly to avoid certain complications that arise when some factors have more than 3 levels. However, the generalization for the latter case is not too difficult, and we shall give some hints in that direction.

The factors at two levels each may be denoted by a_1, a_2, ..., a_m1, and those at 3 levels by b_1, b_2, ..., b_m2. The corresponding symmetrical case is obtainable by taking m_1 or m_2 as zero. We shall suppose that interest lies in estimating the general mean, all the main effects and all the two-factor interactions. For an elaborate definition of the various effects, see [2], to which we shall constantly refer.
As usual, let the general mean be denoted by μ, the main effects by A_i (i = 1, 2, ..., m_1) and by B_j, B_j² (j = 1, 2, ..., m_2), the pure interactions by A_i A_i' (i < i') and by B_j B_j', B_j B_j'², B_j² B_j'² (j < j'), and the mixed ones by A_i B_j and A_i B_j² (i = 1, 2, ..., m_1; j = 1, 2, ..., m_2). The total number of effects is

     ν = 1 + m_1 + 2m_2 + ½ m_1(m_1 − 1) + 2 m_2(m_2 − 1) + 2 m_1 m_2,

and these can be grouped in an obvious manner into 10 sets, viz. (i) S_1 = {μ}, containing μ alone; (ii) S_2 = {A_i}, S_3 = {B_j} and S_4 = {B_j²}, with m_1, m_2 and m_2 elements respectively; (iii) S_5 = {A_i A_j}, S_6 = {B_i B_j} and S_7 = {B_i² B_j²}, with ½ m_1(m_1 − 1), ½ m_2(m_2 − 1) and ½ m_2(m_2 − 1) elements respectively; (iv) S_8 = {B_i B_j²}, with m_2(m_2 − 1) elements; and (v) S_9 = {A_i B_j} and S_10 = {A_i B_j²}, with m_1 m_2 elements each.
Now, in
2
(A B }
1 j
with m m
l 2
elements each.
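The grouping above can be tallied mechanically; the following sketch (my own check, not in the paper) lists the ten set sizes and sums them to ν.

```python
# Check (mine) of the ten sets of effects for a 2^m1 x 3^m2 factorial,
# using the sizes listed in the text above.
def effect_set_sizes(m1, m2):
    return [1,                      # S1  = {mu}
            m1,                     # S2  = {A_i}
            m2, m2,                 # S3  = {B_j},  S4 = {B_j^2}
            m1 * (m1 - 1) // 2,     # S5  = {A_i A_j}
            m2 * (m2 - 1) // 2,     # S6  = {B_i B_j}
            m2 * (m2 - 1) // 2,     # S7  = {B_i^2 B_j^2}
            m2 * (m2 - 1),          # S8  = {B_i B_j^2}, ordered pairs
            m1 * m2, m1 * m2]       # S9  = {A_i B_j},  S10 = {A_i B_j^2}

nu = sum(effect_set_sizes(4, 3))    # e.g. m1 = 4, m2 = 3 gives nu = 53
```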
Now, in [2], we have shown that if we are interested in the above effects, the main problem in the analysis of a factorial fraction (T, say) reduces to the inversion of a certain ν x ν matrix, denoted there by EE'. Further, each row and each column of EE' corresponds to one and only one among these ν effects. Thus if θ and φ are any two such effects, then there is a unique cell of EE' corresponding to them, and the element in this cell is denoted by ε(θ, φ).
Now take any set, say S_2. Corresponding to this there is an m_1 x m_1 submatrix of EE', say M_22, whose diagonal elements are of the form ε(A_i, A_i) and off-diagonal ones of the form ε(A_i, A_j). Similarly, if we take the rows corresponding to S_2 and the columns corresponding to S_4, we get an m_1 x m_2 submatrix M_24 of EE', whose elements are of the form ε(A_i, B_j^2). Thus EE' is subdivided into 100 submatrices M_ij (i, j = 1,2,...,10), where M_ij corresponds to the sets S_i and S_j.
We wish to apply the properties of the linear algebras developed in the earlier sections to the inversion of EE'. For this purpose we define an association scheme between and within the different sets. The following table gives, for a typical element of each set, its various associates in the same set.
TABLE 1.

Set   Typical element   1st           2nd                       3rd                     4th                   5th
 1    μ                 μ
 2    A_i               A_i           A_j
 3    B_i               B_i           B_j
 4    B_i^2             B_i^2         B_j^2
 5    A_iA_j            A_iA_j        A_iA_k, A_jA_k            A_kA_l
 6    B_iB_j            B_iB_j        B_iB_k, B_jB_k            B_kB_l
 7    B_i^2B_j^2        B_i^2B_j^2    B_i^2B_k^2, B_j^2B_k^2    B_k^2B_l^2
 8    B_iB_j^2          B_iB_j^2      B_jB_i^2                  B_iB_k^2, B_kB_j^2      B_jB_k^2, B_kB_i^2    B_kB_l^2
 9    A_iB_j            A_iB_j        A_iB_k                    A_kB_j                  A_kB_l
10    A_iB_j^2          A_iB_j^2      A_iB_k^2                  A_kB_j^2                A_kB_l^2
In the above table, the subscripts i, j, k and l are all unequal whenever they occur in interactions belonging to the same set. Thus in S_5 the second associates of A_1A_2 are A_1A_3, A_1A_4, etc., i.e. those interactions between A-factors which involve one of A_1 and A_2 but not both. Similarly the third associates are those which do not involve either A_1 or A_2.
It can be shown, from considerations of the symmetry of the factors involved, that for each set S_j (j = 1,2,...,10) the association relation defined above satisfies the conditions of an ordinary partially balanced scheme.
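That the Table 1 relation on a single set is an ordinary PB scheme can be spot-checked by brute force; the sketch below (mine, for S_5 with an illustrative m_1 = 5) verifies that the counts p^k_ij of common associates depend only on the associate class of the fixed pair of elements.

```python
# Brute-force check (mine) that the "share a factor" relation on
# S5 = {A_i A_j} is a PB association scheme for m1 = 5.
from itertools import combinations

m1 = 5
pairs = list(combinations(range(m1), 2))

def cls(p, q):
    """Associate class of two pairs: 0 self, 1 share one factor, 2 disjoint."""
    return {2: 0, 1: 1, 0: 2}[len(set(p) & set(q))]

params = {}
for p in pairs:
    for q in pairs:
        k = cls(p, q)
        # count elements that are 1st (resp. 2nd) associates of both p and q
        tally = (sum(1 for r in pairs if cls(p, r) == 1 and cls(q, r) == 1),
                 sum(1 for r in pairs if cls(p, r) == 1 and cls(q, r) == 2))
        params.setdefault(k, set()).add(tally)

# PB condition: the tally is constant within each class k
assert all(len(v) == 1 for v in params.values())
```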
Consider now the diagonal sub-matrices M_jj of (EE'). Suppose that (EE') is a balanced matrix, i.e. is symmetrical with respect to all the factors involved. Such a matrix arises, in particular, when T is a partially balanced array of strength 4, as defined by Chakravarti [4].
Then it can be shown (using the theory developed in [2]) that if θ, φ_1 and φ_2 are any three elements of S_j (j ≠ 8) such that (θ, φ_1) are the same associates (under Table 1) as (θ, φ_2), then in EE' we have ε(θ, φ_1) = ε(θ, φ_2). This implies that M_jj (j ≠ 8) belongs to the commutative linear algebra L_j generated by the association scheme for S_j. On the other hand, even for a balanced matrix (EE'), it can be shown that in general the corresponding relation (49) does not hold, where i, j, k are all unequal. This implies that in general M_88 does not belong to L_8. In order that M_88 ∈ L_8, the condition (49) is necessary and sufficient.
This condition can be expressed more directly in terms of the fraction T itself, when it is equivalent to (50), where A^{rst}_{ijk} denotes the number of treatment combinations in T in each of which the factors B_i, B_j, B_k occur respectively at levels r, s and t, for all permissible i, j and k. A balanced fraction for which this easily verifiable condition is satisfied will be called a commutative balanced fraction.
In order that we may be able to use the properties of the algebra L_j for inverting the matrices M_jj ∈ L_j, we must know the roots of the association matrices B^j_i of L_j, or equivalently the matrices Δ_j (defined for an L_j in sec. 5). These matrices are now presented below for L_2, L_5, L_8 and L_9, since the rest are covered under these cases:
(51)

(i)   Δ_2 = [ 1            J'_{m_1-1}       ]
            [ (m_1-1)      (-1)J'_{m_1-1}   ]

(ii)  Δ_5 = [ 1                      J'_{m_1-1}           J'_{m'_1}      ]
            [ 2(m_1-2)               (m_1-4)J'_{m_1-1}    (-2)J'_{m'_1}  ]
            [ (1/2)(m_1-2)(m_1-3)    -(m_1-3)J'_{m_1-1}   J'_{m'_1}      ] ,

      where m'_1 = (1/2)m_1(m_1-3),

(iii) Δ_8 =
      [ 1                 J'_{m_2-1}              J'_{m_2-1}          J'_{m'_2}      J'_{m''_2}      ]
      [ 1                 J'_{m_2-1}              (-1)J'_{m_2-1}      J'_{m'_2}      (-1)J'_{m''_2}  ]
      [ 2(m_2-2)          (m_2-4)J'_{m_2-1}       (m_2-2)J'_{m_2-1}   (-2)J'_{m'_2}  (-2)J'_{m''_2}  ]
      [ 2(m_2-2)          (m_2-4)J'_{m_2-1}       -(m_2-2)J'_{m_2-1}  (-2)J'_{m'_2}  2J'_{m''_2}     ]
      [ (m_2-2)(m_2-3)    (-2)(m_2-3)J'_{m_2-1}   (0)J'_{m_2-1}       2J'_{m'_2}     (0)J'_{m''_2}   ] ,

      where m'_2 = (1/2)m_2(m_2-3) and m''_2 = (1/2)(m_2-1)(m_2-2).
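The rows of Δ_5 can be spot-checked numerically; the sketch below (my check, not the paper's) applies the "share one factor" association matrix of the triangular scheme, for an illustrative m_1 = 6, to explicit vectors in two of its eigenspaces.

```python
# Spot-check (mine) of the Delta_5 row for B^1 of the triangular scheme:
# its eigenvalues should be 2(m1-2), (m1-4) and -2.
from itertools import combinations

m1 = 6
pairs = list(combinations(range(m1), 2))
idx = {p: t for t, p in enumerate(pairs)}

def B1_times(f):
    """Apply the 'share exactly one symbol' association matrix to vector f."""
    return [sum(f[idx[q]] for q in pairs if len(set(p) & set(q)) == 1)
            for p in pairs]

ones = [1.0] * len(pairs)                       # eigenvector for the valency 2(m1-2)
assert B1_times(ones) == [2.0 * (m1 - 2)] * len(pairs)

g = [m1 - 1] + [-1] * (m1 - 1)                  # contrast summing to zero
f = [float(g[i] + g[j]) for (i, j) in pairs]    # vector in the (m1-1)-dim eigenspace
assert B1_times(f) == [(m1 - 4) * x for x in f]
```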
(52)

(iv)  Δ'_9 =
      [ 1                  J'_{m_1-1}           J'_{m_2-1}           J'_{(m_1-1)(m_2-1)}      ]
      [ (m_2-1)            (m_2-1)J'_{m_1-1}    (-1)J'_{m_2-1}       (-1)J'_{(m_1-1)(m_2-1)}  ]
      [ (m_1-1)            (-1)J'_{m_1-1}       (m_1-1)J'_{m_2-1}    (-1)J'_{(m_1-1)(m_2-1)}  ]
      [ (m_1-1)(m_2-1)     -(m_2-1)J'_{m_1-1}   -(m_1-1)J'_{m_2-1}   J'_{(m_1-1)(m_2-1)}      ] ,

where J'_r denotes a row vector of length r with every element unity.
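The entries of Δ'_9 can be spot-checked in the same spirit; this sketch (mine, with m_1 = 4 and m_2 = 3) applies the "same A-factor" association matrix of the scheme on S_9 to explicit eigenvectors.

```python
# Spot-check (mine) of two eigenvalues in the Delta'_9 row for the
# "same A-factor, different B-factor" matrix of the scheme on S9 = {A_i B_j}.
m1, m2 = 4, 3
cells = [(i, j) for i in range(m1) for j in range(m2)]
idx = {c: t for t, c in enumerate(cells)}

def same_A(f):
    """Apply the 'same A, different B' association matrix to vector f."""
    return [sum(f[idx[(i, l)]] for l in range(m2) if l != j) for (i, j) in cells]

g = [m1 - 1] + [-1] * (m1 - 1)      # contrast among A-levels, sums to zero
h = [m2 - 1] + [-1] * (m2 - 1)      # contrast among B-levels, sums to zero

fA  = [float(g[i]) for (i, j) in cells]          # (m1-1)-dim eigenspace: eigenvalue m2-1
fAB = [float(g[i] * h[j]) for (i, j) in cells]   # (m1-1)(m2-1)-dim eigenspace: eigenvalue -1
assert same_A(fA)  == [(m2 - 1) * x for x in fA]
assert same_A(fAB) == [-1.0 * x for x in fAB]
```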
Using the above matrices and the results of sec. 5, it is a very simple matter to invert the M_jj. If the fraction is balanced but not commutative, then also (by using the properties of a linear algebra, or otherwise) the inversion of M_88 can be reduced (see [13]) to the inversion of matrices of very low order. The formulae, however, are more complicated and less elegant and explicit than in the commutative case, and will not be presented here due to lack of space.
The algorithm developed in sec. 6 can be gainfully employed for the inversion of (EE') from any balanced fraction T. If T is commutative, a simplification may be effected in the following way. We rearrange the row blocks and column blocks of EE', so that now they correspond in order to S_8, S_1, S_2, ..., S_7, S_9 and S_10. This will mean that in the new EE' (say (EE')*) the top left hand submatrix is M_88 instead of M_11. The algorithm can now be started (for (EE')*) by inverting M_88 first, and then continuing with the rest. In fact, writing (EE')* in the partitioned form with M_88 as the leading block, it can be shown that P = (M - Q' M_88^{-1} Q), which one has to invert (see (32)) to get (EE')*^{-1}, belongs to an algebra L which has as subalgebras L_1, ..., L_7, L_9 and L_10. The multidimensional PB association scheme to which L corresponds is detailed below.
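The block step just invoked is the usual partitioned-matrix (Schur complement) move; a minimal sketch (mine, on toy data, not the paper's matrices) of the one fresh inversion it requires:

```python
# Generic sketch (mine) of the block step behind (32): for N = [[A, B], [B', D]]
# with A^{-1} known, only the Schur complement P = D - B' A^{-1} B is new work.
from fractions import Fraction

def mat(rows):  return [[Fraction(x) for x in r] for r in rows]
def mul(X, Y):  return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                         for j in range(len(Y[0]))] for i in range(len(X))]
def sub(X, Y):  return [[X[i][j] - Y[i][j] for j in range(len(X[0]))]
                        for i in range(len(X))]

A  = mat([[2, 0], [0, 2]])                       # toy leading block
Ai = mat([[Fraction(1, 2), 0], [0, Fraction(1, 2)]])
B  = mat([[1], [1]])
D  = mat([[3]])
Bt = [[B[j][i] for j in range(2)] for i in range(1)]
P  = sub(D, mul(Bt, mul(Ai, B)))                 # Schur complement, here [2]
assert P == [[Fraction(2)]]
```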
Since the association scheme within each S_i is already exhibited in Table 1, we consider the association between the elements of S_i and S_j for i ≠ j. The motivation behind the scheme is this. Let θ ∈ S_i, and let φ_1 and φ_2 ∈ S_j. Then φ_1 and φ_2 are defined to be the same associates of θ only if

(53)    ε(θ, φ_1) = ε(θ, φ_2).
To be more specific, suppose that between θ and φ_1 there are exactly p_1 A-factors and q_1 B-factors common (0 ≤ p_1 + q_1 ≤ 2). Similarly for θ and φ_2, let these numbers be respectively p_2 and q_2. Let A be one of the A-factors common between θ and φ_1, and let the index of A in θ be denoted by x_11 and in φ_1 by x'_11; this gives us one ordered pair (x_11, x'_11) corresponding to the common factor A. Similarly, if a B-factor (say B) is common, and the indices of B in θ and φ_1 are respectively y_11 and y'_11, then we get the ordered pair (y_11, y'_11). We shall thus get p_1 + q_1 ordered pairs (x_1i, x'_1i), (y_1j, y'_1j), i = 1,...,p_1; j = 1,...,q_1. Between θ and φ_2 we shall similarly get p_2 + q_2 pairs (x_2i, x'_2i), (y_2j, y'_2j).
Then it has been found that (53) holds if

(54)  (i)   p_1 = p_2 and q_1 = q_2;
      (ii)  the set of ordered pairs (x_1i, x'_1i), i = 1,...,p_1, is the same as the set (x_2i, x'_2i), i = 1,...,p_2; and
      (iii) the sets (y_1j, y'_1j) and (y_2j, y'_2j), j = 1,...,q_1, are the same.

Hence, in accordance with the above, we shall say that φ_1 and φ_2 are in the same associate class in S_j generated by θ ∈ S_i, if and only if (54) is true.
This means, for example, that (i) all elements of S_5 are the same associates of μ (∈ S_1); (ii) the different associates of B_j (∈ S_3) in the set S_6 are B_jB_k (say 1st associates) and B_kB_l, k, l ≠ j (2nd associates); (iii) in S_10, A_iB_j (∈ S_9) has A_iB_j^2, A_iB_k^2, A_kB_j^2 and A_kB_l^2 as its four different kinds of associates; etc.
It can be shown that, leaving S_8 aside, the association scheme defined by Table 1 and (54) (to be called the factorial association scheme) is of the MDPB type. The proof, which follows from considerations of symmetry between the factors, will be omitted here.
The case when some factors, say C_1, C_2, ..., C_{m_3}, are at s (> 3) levels each can be dealt with in a similar manner. There are s-1 sets of main effects {C_i^x}, x = 1,...,s-1, and s-1 sets of interactions of the form {C_i^x C_j^x}, x = 1,2,...,s-1, which give rise to submatrices in the diagonal of EE' that can be inverted by using one of the algebras L_2 or L_5. Also, the process of inversion of submatrices which correspond to sets of the form {C_i^x C_j^y} (x ≠ y) is exactly the same as for M_88 (which arises out of {B_iB_j^2}), except that for commutativity other conditions are needed.
The above theory permits many further applications. As mentioned earlier, the main problem in analyzing a fraction and examining its properties is to invert its (EE'). In many cases, although the fraction may not be balanced, it may be such that EE' breaks up into two or more matrices of smaller size, which in turn are invertible by using the algorithm of sec. 6, or just the algebras L_j. All such fractions will be called partially balanced. Some examples of these will be found in [2]. Construction of good partially balanced fractions will be discussed in separate papers, wherein the above theory which inspired them will be greatly exemplified.
8. An example.

We shall now illustrate the preceding theory by considering the inversion of a (22 x 22) matrix O, which arises in the analysis of a 2^7 factorial fraction discussed in Example 2 of [2]. In this fraction the factors are divided into two groups A_1, A_2, A_3, A_4 and B_1, B_2, B_3. The rows and columns of O then correspond, in order, to the effects (A_1A_2, A_1A_3, A_1A_4, A_2A_3, A_2A_4, A_3A_4; B_1, B_2, B_3; A_1B_1, A_2B_1, A_3B_1, A_4B_1, A_1B_2, A_2B_2, A_3B_2, A_4B_2, A_1B_3, A_2B_3, A_3B_3, A_4B_3; μ). The four sets of effects, separated from each other by semicolons, may be denoted by S_1, S_2, S_3 and S_4.
An association scheme within each set may be defined as in Table 1, sec. 7. If, for distinction, the sets in the table are denoted by S'_i instead of S_i, then clearly the sets S_1 and S_2 are of the types S'_5 and S'_2, and similarly S_4 and S_3 of the types S'_1 and S'_9 respectively.
In addition to the above, a factorial association scheme (defined in the last section) also exists between the different sets. Thus the total scheme between and within S_1, S_2, S_3 and S_4 is of the MDPB type. This can be checked directly as well from the corresponding association matrices B^α_ij. To save space, we shall indicate only the matrices B^α_ij (i ≠ j), since the others can be immediately written down using Table 1:
B^1_12 = J_{6,3} ,

B^1_13 = [Q  Q  Q] ,   where   Q = [ 1 1 0 0 ]
                                   [ 1 0 1 0 ]
                                   [ 1 0 0 1 ]
                                   [ 0 1 1 0 ]
                                   [ 0 1 0 1 ]
                                   [ 0 0 1 1 ] ,

B^2_13 = J_{6,12} - B^1_13 ,

B^1_23 = [ O_{1,4}  J_{1,4}  J_{1,4} ]
         [ J_{1,4}  O_{1,4}  J_{1,4} ]
         [ J_{1,4}  J_{1,4}  O_{1,4} ] ,        B^2_23 = J_{3,12} - B^1_23 .

In the above, J_{m,n} is an m x n matrix with unity everywhere, and O_{m,n} is of the same size with zero everywhere.
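The matrix Q and the products quoted in (59) can be cross-checked mechanically; this sketch (my own reconstruction check) multiplies B^1_13 = [Q Q Q] by the "same A-factor" association matrix B^1_33 of S_3 and confirms that the result is (2,0)_13, i.e. 2B^1_13.

```python
# Check (mine): with Q the 6x4 pair-vs-factor incidence matrix and
# B1_33 = "same A-factor, different B-factor" on the 12 effects A_kB_l
# (B-major order), the product B1_13 * B1_33 equals 2 * B1_13.
from itertools import combinations

pairs = list(combinations(range(4), 2))                      # rows: A_iA_j
Q = [[1 if k in p else 0 for k in range(4)] for p in pairs]  # shares factor A_k?
B1_13 = [row * 3 for row in Q]                               # [Q Q Q], columns B-major

B1_33 = [[1 if (c1 % 4 == c2 % 4 and c1 // 4 != c2 // 4) else 0
          for c2 in range(12)] for c1 in range(12)]          # same A, different B

prod = [[sum(B1_13[r][k] * B1_33[k][c] for k in range(12)) for c in range(12)]
        for r in range(6)]
assert prod == [[2 * x for x in row] for row in B1_13]
```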
As at (42), (43), we shall write (for all i and j)

(55)    (a_0, a_1, ..., a_{n_ij})_{ij} = a_0 B^0_ij + a_1 B^1_ij + ... ,

where n_ij is the number of associate classes between S_i and S_j (the zero-th, or self, class being present only when i = j). The number of associate classes within S_1, S_2, S_3 and S_4 respectively is 3, 2, 4 and 1; these include the zero-th or the self associate class. For convenience we shall make a departure from (55) for S_2, and write

(56)    (a, b)_22 = a I_3 + b J_{3,3} .

Since B^0_22 = I_3 and B^1_22 = (J_{3,3} - I_3), this essentially implies working in terms of (I_3, J_{3,3}) in place of the usual basis (B^0_22, B^1_22).
With the above notation at hand, the blocks of O (with O_ij = O'_ji) are

    O_11 = (11,1,-1)_11 ,    O_21 = (-1)_21 ,     O_22 = (10,1)_22 ,
    O_31 = (0,2)_31 ,        O_32 = (0,2)_32 ,    O_33 = (11,1,1,-1)_33 ,
    O_43 = (0)_43 ,

together with the remaining blocks in the row and column corresponding to μ.
In order to apply the algorithm of sec. 6, we use the partitioning of O into its leading block submatrices. The successive square matrices indicated by this partitioning will be denoted by O_1, O_2, O_3 and O_4 (= O) respectively.

To invert O_1 = (11,1,-1)_11 we must obviously use algebra L_5 (with m_1 = 4) of (51)(ii), together with (27). Here, say (in the notation as at (27)),

    b' = ℓ'O_1 = (14, 12J'_3, 8J'_2) ,    ℓ'O_1^{-1} = ((1/14), (1/12)J'_3, (1/8)J'_2) ,

and b'_0 b_0 = diag(6, 24, 6). Hence

    O_1^{-1} = (2/21, -1/112, 1/84)_11 = (1/336)(32, -3, 4)_11 ,
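The inversion of O_1 can be verified exactly; the sketch below (mine) builds O_1 = (11,1,-1)_11 on the six pairs A_iA_j, forms a candidate inverse from the eigenvalues 14, 12, 8 given by Δ_5 with m_1 = 4, and multiplies the two out.

```python
# Exact check (mine) that O_1 = 11*B0 + 1*B1 - 1*B2 on the six pairs A_iA_j
# is inverted by the combination with eigenvalues 1/14, 1/12, 1/8.
from fractions import Fraction
from itertools import combinations

pairs = list(combinations(range(4), 2))
def cls(p, q):
    return {2: 0, 1: 1, 0: 2}[len(set(p) & set(q))]   # self / share one / disjoint

coef = [Fraction(11), Fraction(1), Fraction(-1)]
O1 = [[coef[cls(p, q)] for q in pairs] for p in pairs]

# coefficients solving the three eigenvalue equations
# c0 + 4c1 + c2 = 1/14,  c0 - c2 = 1/12,  c0 - 2c1 + c2 = 1/8
c1 = Fraction(-1, 112)
c2 = Fraction(1, 84)
c0 = Fraction(1, 12) + c2
inv_coef = [c0, c1, c2]
O1_inv = [[inv_coef[cls(p, q)] for q in pairs] for p in pairs]

prod = [[sum(O1[i][k] * O1_inv[k][j] for k in range(6)) for j in range(6)]
        for i in range(6)]
assert all(prod[i][j] == (1 if i == j else 0) for i in range(6) for j in range(6))
```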
Proceeding with the algorithm we next invert O_2 using (32). Since O_21 = (-1)_21 and O_22 = (10,1)_22, it can be easily checked that

(58)    O_22 - O_21 O_1^{-1} O_12 = 10 I_3 + (4/7) J_{3,3} .

The inverse of the last matrix is easily obtainable. Supposing

    O_2^{-1} = [ X_1   Y_1 ]
               [ Y'_1  Z_1 ] ,

we have, using (32) with (58),

    Z_1 = (1/410)(41 I_3 - 2 J_{3,3}) ,   Y_1 = (1/164) J_{6,3} ,   X_1 = O_1^{-1} + (3/2296) J_{6,6} .
Next, to invert O_3, we need the following results:

(59)    B^1_32 B^1_23 = (2,1,2,1)_33 ,    B^2_32 B^2_23 = (1,0,1,0)_33 ,
        B^2_32 B^1_23 = B^1_32 B^2_23 = (0,1,0,1)_33 ,
        B^1_13 B^1_33 = (2,0)_13 ,        B^2_13 B^1_33 = (0,2)_13 ,
        B^1_13 B^2_33 = (1,2)_13 ,        B^2_13 B^2_33 = (2,1)_13 ,
        B^1_13 B^3_33 = (2,4)_13 ,        B^2_13 B^3_33 = (4,2)_13 .
Then, for the first formula in (32), we compute, using (59),

    Z_2^{-1} = (11,1,1,-1)_33 - [O_31  O_32] O_2^{-1} [O_13]
                                                      [O_23]
             = (1/(41 x 30)) (11682, -126, 202, -1766)_33 .

To find Z_2, we use algebra L_9 (with m_1 = 4, m_2 = 3) of (52)(iv). Using notation as at (27), we have here

    b' = ℓ'Z_2^{-1} = ((48/41), 8J'_6, (144/10)J'_2, 12J'_3) ,
    ℓ'Z_2 = ((41/48), (1/8)J'_6, (10/144)J'_2, (1/12)J'_3) ,

and b'_0 b_0 = diag(12, 24, 36, 72), so that

    Z_2 = (1/(12 x 144)) (287, 95, 95, 119)_33 .
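The pair Z_2^{-1}, Z_2 can be checked exactly on the 4 x 3 grid scheme of S_3; the following sketch (my verification, assuming the class order self / same A-factor / same B-factor / neither) multiplies the two patterned matrices.

```python
# Exact check (mine) that (1/1728)(287, 95, 95, 119)_33 inverts
# (1/1230)(11682, -126, 202, -1766)_33 on the 4x3 grid scheme.
from fractions import Fraction

cells = [(i, j) for i in range(4) for j in range(3)]
def cls(c, d):
    if c == d:         return 0     # self
    if c[0] == d[0]:   return 1     # same A-factor
    if c[1] == d[1]:   return 2     # same B-factor
    return 3                        # neither

a = [Fraction(v, 1230) for v in (11682, -126, 202, -1766)]   # Z_2^{-1}
b = [Fraction(v, 1728) for v in (287, 95, 95, 119)]          # Z_2
M  = [[a[cls(c, d)] for d in cells] for c in cells]
Mi = [[b[cls(c, d)] for d in cells] for c in cells]

n = len(cells)
prod = [[sum(M[i][k] * Mi[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
assert all(prod[i][j] == (1 if i == j else 0) for i in range(n) for j in range(n))
```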
Writing now

    O_3^{-1} = [ X_2   Y_2 ]
               [ Y'_2  Z_2 ] ,

we have

    Y_2 = - O_2^{-1} [ (0,2)_13 ] Z_2 .
                     [ (0,2)_23 ]

Using (59), this can be quickly reduced to

    Y_2 = -(1/(12 x 144)) [ (123, 147)_13 ]
                          [ (130, 154)_23 ] .

Similarly,

    X_2 = O_2^{-1} - Y_2 [ (0,2)_31   (0,2)_32 ] O_2^{-1}
        = (1/(12 x 144)) [ (327, 135, 159)_11    (162)_12     ]
                         [ (162)_21              (192, 140)_22 ] .

To obtain O^{-1} = O_4^{-1}, the above process is repeated once more. Since the matrices which border O_3 are of the type k J_{1,r}, where k is a constant, the calculation is easy and will not be reproduced.
REFERENCES

[12] Shrikhande, S. S. (1950). Some combinatorial problems in the design of experiments. Unpublished doctoral dissertation, University of North Carolina, Chapel Hill, N. C.

[13] Srivastava, J. N. (1961). Contributions to the construction and analysis of designs. Institute of Statistics, Mimeo Series No. 301, Univ. of North Carolina, Chapel Hill, N. C.

[14] Yates, Frank (1937). The design and analysis of factorial experiments. Imperial Bureau of Soil Science, Technical Communication No. 35.

[15] Youden, W. J. (1937). Use of incomplete block replications in estimating tobacco mosaic virus. Contributions from Boyce Thompson Institute 9, 317-326.