
Linear Algebra and its Applications 430 (2009) 2781–2797

Binary operations and canonical forms for factorial and related models

Vera de Jesus^a, João Tiago Mexia^a, Miguel Fonseca^{a,*}, Roman Zmyślony^b

^a Department of Mathematics, Faculty of Science and Technology, New University of Lisbon, Monte da Caparica 2829-516, Caparica, Portugal
^b Faculty of Mathematics, Computer Science and Econometrics, University of Zielona Góra, Podgórna 50, 65-246 Zielona Góra, Poland
A R T I C L E   I N F O

Article history:
Received 6 December 2007
Accepted 9 January 2009
Available online 14 March 2009
Submitted by P.H. Styan

Keywords:
Commutative Jordan algebras
Binary operations
Aggregation
Disaggregation
Factorial models

A B S T R A C T

Binary operations on commutative Jordan algebras are used to extend the grouping of treatments in blocks and the taking of fractional replicates to models where factors have an arbitrary number of levels. Up to now these techniques had been restricted to models whose factors have a prime or a power of a prime number of levels. Moreover, the binary operations will enable the use of simple models as building blocks for more complex designs.
© 2009 Elsevier Inc. All rights reserved.
1. Introduction
Factorial designs in which the factors have a prime or a power of a prime number of levels enjoy interesting properties. Thus, see for instance [2,5,7], we can group the treatments in blocks to get a better control of the experimental error. Moreover, by considering only the treatments in one of the blocks we have a fractional replicate, in which there is only a fraction of the treatments corresponding to the combinations of factor levels.
The theory yielding these results is based on Galois fields, which are available when the number of factor levels is a prime or a power of a prime. To overcome this limitation we will use commutative Jordan algebras, CJA. These algebras are linear spaces constituted by symmetric matrices that commute, containing the squares of their matrices. Each CJA $\mathcal{A}$ has (see [9]) a unique basis, the principal

Corresponding author.
E-mail addresses: veramjesus@gmail.com (V. de Jesus), fmig@fct.unl.pt (M. Fonseca), r.zmyslony@wmie.uz.zgora.pl (R. Zmyslony).
0024-3795/$ - see front matter © 2009 Elsevier Inc. All rights reserved.
doi:10.1016/j.laa.2009.01.013
basis $pb(\mathcal{A})$, constituted by pairwise orthogonal, orthogonal projection matrices. As we shall see, the use of CJA enables us to present the models we study as associated to CJA. The canonical form of a fixed effects model associated to a CJA $\mathcal{A}$ will be
$$y = \sum_{j=1}^{w+1} A_j \tilde{\eta}_j, \qquad (1)$$
where the matrices $Q_j = A_j A_j^\top$, $j = 1,\dots,w$, constitute the principal basis of $\mathcal{A}$, the $\tilde{\eta}_j$ have mean vectors $\eta_j$, $j = 1,\dots,w+1$, and covariance matrices $\sigma^2 I_{g_j}$, $j = 1,\dots,w+1$, with $\eta_{w+1} = 0$.
Binary operations on CJAs enable us (see [3]) to consider models obtained:
1. by crossing, i.e., we get a new model whose treatments are all the combinations of the treatments in the initial models;
2. by nesting every treatment in a model inside each treatment of another model.
Moreover, merging factors gives, as we shall see, models associated to sub-algebras. In this way, we will overcome the requirement that the number of levels must be a prime or a power of a prime.
We will start by presenting the main results we use on CJA, binary operations and sub-algebras. Next we consider inference for models associated to CJA. These two sections give us the framework for the study of factorial and related models. We start with prime basis factorials before considering their fractional replicates. We will see how, through crossing and nesting, complex models may be derived. Lastly we will consider factor merging which, as we mentioned, leads to models where factors may have an arbitrary number of levels, and the inverse possibility of disaggregation of model terms, which leads to models associated to larger algebras.
Our results will give us great flexibility in designing experiments. We may start by deciding whether or not we have nesting. If we decide there will be nesting, we decide which factors will be considered for each nesting tier. For each tier with more than one factor we will have factor crossing. Besides deciding what factors should be considered for each tier, we must decide how many levels each of them shall have. The techniques of factor merging and of disaggregating model terms give us great freedom in the choice of the numbers of levels. Applying this approach, any tier will be the crossing of prime basis factorials. This leads to the possibility of applying fractional replication extensively, thus controlling the size of the final experiment. Moreover, through confounding, the full experiment may be broken up into blocks, with a better control of the experimental error.
The flexibility we achieved rests on:
• writing the sub-models in their canonical form:
$$y = \sum_{j=1}^{w} A_j \eta_j + e. \qquad (2)$$
Then, the matrices $Q_j = A_j A_j^\top$, $j = 1,\dots,w$, and $Q_{w+1} = I_n - \sum_{j=1}^{w} Q_j$ constitute the principal basis of the CJA to which the sub-models are associated;
• applying the binary operations corresponding to model crossing and nesting to obtain a CJA associated to the final model; these operations are described in Section 2.
We point out that the sub-models we considered were prime basis factorials and their fractional replicates. For these sub-models, as we show in Sections 4 and 5, the canonical form arises naturally from the initial formulations of such models.
It may be interesting to point out that the range spaces $R(Q_j)$, $j = 1,\dots,w$, and $R(Q_\perp)$ of the matrices $Q_j$, $j = 1,\dots,w$, and $Q_\perp$ constitute an orthogonal partition of the sample space
$$\mathbb{R}^n = \bigoplus_{j=1}^{w} R(Q_j) \oplus R(Q_\perp), \qquad (3)$$
where $\oplus$ represents the direct sum of subspaces. Now the $R(Q_j)$ will be associated to effects and interactions and, when $\eta_j = 0$, the mean vector $\mu = \sum_{j=1}^{w} A_j \eta_j$ will span the orthogonal complement $R(Q_j)^\perp$ of $R(Q_j)$, $j = 1,\dots,w$. Thus the hypothesis of absence of effects and/or interactions can naturally be written as
$$H_{0,j}:\ \eta_j = 0, \quad j = 1,\dots,w. \qquad (4)$$
Moreover, as we shall show, $\psi$ is an estimable vector if it may be written as
$$\psi = C \sum_{j=1}^{w} A_j \eta_j. \qquad (5)$$
Thus, the parameters $\eta_j$ are easy to interpret.
2. Commutative Jordan algebras

Jordan algebras were introduced (see [4]) to provide an algebraic foundation for Quantum Mechanics. Later on these structures were applied (for instance see [8–13]) to carry out linear statistical inference. In these studies Jordan algebras were called quadratic subspaces, since they are linear subspaces constituted by square matrices, containing the squares of their matrices. For priority's sake we will use their first name. We are interested in commutative Jordan algebras, CJA, where matrices commute. If $pb(\mathcal{A}) = \{Q_1,\dots,Q_w\}$ and $M \in \mathcal{A}$, $M = \sum_{j=1}^{w} a_j Q_j$. When $M$ is an orthogonal projection matrix, OPM, it is idempotent. Because the $Q_1,\dots,Q_w$ are pairwise orthogonal and idempotent, $a_j = 0$ or $a_j = 1$, $j = 1,\dots,w$. With $C = \{j : a_j = 1\}$, we will have
$$M = \sum_{j \in C} Q_j, \qquad \operatorname{rank}(M) = \sum_{j \in C} \operatorname{rank}(Q_j). \qquad (6)$$
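As a numerical illustration of (6) (our own sketch, not part of the original paper; NumPy is assumed available), take the complete and regular CJA with principal basis $\{\frac{1}{n}J_n,\, I_n - \frac{1}{n}J_n\}$ and check pairwise orthogonality, idempotence and rank additivity:

```python
import numpy as np

n = 6
J = np.ones((n, n)) / n            # rank-one OPM averaging all coordinates
Q1, Q2 = J, np.eye(n) - J          # principal basis of a complete regular CJA

assert np.allclose(Q1 @ Q2, 0)                     # pairwise orthogonal
assert np.allclose(Q1 @ Q1, Q1)                    # idempotent ...
assert np.allclose(Q2 @ Q2, Q2)                    # ... hence OPMs

# an OPM in the algebra is a 0/1 combination of the Q_j; ranks add, Eq. (6)
M = Q1 + Q2                                        # here C = {1, 2}, so M = I_n
rank = np.linalg.matrix_rank
assert rank(M) == rank(Q1) + rank(Q2) == n
```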
Thus, with $\mathcal{B}$ a sub-algebra of $\mathcal{A}$, the matrices in $pb(\mathcal{B})$ will be sums of matrices in $pb(\mathcal{A})$. We also see that if a rank one OPM belongs to $\mathcal{A}$, it also belongs to $pb(\mathcal{A})$. Namely, if $\frac{1}{n}J_n = \frac{1}{n}1_n 1_n^\top \in \mathcal{A}$, we have that $\frac{1}{n}J_n \in pb(\mathcal{A})$ and take $Q_1 = \frac{1}{n}J_n$. We then say that $\mathcal{A}$ is regular. Moreover, if
$$\sum_{j=1}^{w} Q_j = I, \qquad (7)$$
we say that $\mathcal{A}$ is complete. Otherwise, we may add
$$Q_{w+1} = Q_\perp = I - \sum_{j=1}^{w} Q_j \qquad (8)$$
to $pb(\mathcal{A})$ to obtain $pb(\bar{\mathcal{A}})$, with $\bar{\mathcal{A}}$ the completion of $\mathcal{A}$. A complete and regular sub-CJA will be a principal sub-CJA. Consequently, $\bar{\mathcal{A}}$ will also be regular and complete.
Given the families $T_1$ and $T_2$, $T_1 \otimes T_2$ will be the family of the Kronecker products of the matrices of $T_1$ by those of $T_2$. Then (see [3]), the CJA $\mathcal{A}_1 \otimes \mathcal{A}_2$ with
$$pb(\mathcal{A}_1 \otimes \mathcal{A}_2) = pb(\mathcal{A}_1) \otimes pb(\mathcal{A}_2) \qquad (9)$$
will be associated to models obtained crossing models associated to $\mathcal{A}_1$ with models associated to $\mathcal{A}_2$. The product of principal sub-CJAs also gives principal sub-CJAs.
We point out that (see [3])
$$\mathcal{A}_1 \otimes (\mathcal{A}_2 \otimes \mathcal{A}_3) = (\mathcal{A}_1 \otimes \mathcal{A}_2) \otimes \mathcal{A}_3. \qquad (10)$$
This associative property will be useful when we want to cross more than two models, namely, to obtain, through sub-CJA condensation, models with factors that, as we shall see, have an arbitrary number of levels.
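Eq. (9) can be checked numerically. The following sketch (ours; the two toy one-factor CJAs are an assumption chosen for illustration) builds $pb(\mathcal{A}_1) \otimes pb(\mathcal{A}_2)$ with `np.kron` and verifies that the result is again a family of pairwise orthogonal OPMs summing to the identity:

```python
import numpy as np
from itertools import product

def pb(n):
    # principal basis {J_n/n, I_n - J_n/n} of a complete regular one-factor CJA
    J = np.ones((n, n)) / n
    return [J, np.eye(n) - J]

n1, n2 = 2, 3
crossed = [np.kron(Q1, Q2) for Q1, Q2 in product(pb(n1), pb(n2))]  # Eq. (9)

assert np.allclose(sum(crossed), np.eye(n1 * n2))  # the crossed CJA is complete
for i, Qi in enumerate(crossed):
    for j, Qj in enumerate(crossed):
        target = Qi if i == j else np.zeros_like(Qi)
        assert np.allclose(Qi @ Qj, target)        # pairwise orthogonal OPMs
```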
Now, if $\mathcal{A}_1$ and $\mathcal{A}_2$ are complete and regular CJAs associated to models, the CJA $\mathcal{A}_1 \ast \mathcal{A}_2$ with
$$pb(\mathcal{A}_1 \ast \mathcal{A}_2) = \left\{ \tfrac{1}{n_1}J_{n_1} \otimes \tfrac{1}{n_2}J_{n_2},\ \dots,\ Q_{1,w_1} \otimes \tfrac{1}{n_2}J_{n_2},\ I_{n_1} \otimes Q_{2,2},\ \dots,\ I_{n_1} \otimes Q_{2,w_2} \right\} \qquad (11)$$
is (see [3]) associated to the model obtained by nesting the treatments of the second model inside the treatments of the first one.
The restricted Kronecker product of CJAs is also (see [3]) associative, which may be useful in deriving complex models.
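The principal basis (11) can be sketched the same way (again our own illustration, with two toy one-factor CJAs):

```python
import numpy as np

def pb(n):
    # principal basis {J_n/n, I_n - J_n/n} of a complete regular one-factor CJA
    J = np.ones((n, n)) / n
    return [J, np.eye(n) - J]

n1, n2 = 2, 3
B1, B2 = pb(n1), pb(n2)
# Eq. (11): basis matrices of A_1 paired with the averaging matrix of A_2,
# plus I_{n1} paired with the remaining basis matrices of A_2
nested = [np.kron(Q, B2[0]) for Q in B1] + [np.kron(np.eye(n1), Q) for Q in B2[1:]]

assert np.allclose(sum(nested), np.eye(n1 * n2))   # the nested CJA is complete
for i, Qi in enumerate(nested):
    for j, Qj in enumerate(nested):
        target = Qi if i == j else np.zeros_like(Qi)
        assert np.allclose(Qi @ Qj, target)        # pairwise orthogonal OPMs
```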
There is an interesting relation between CJAs and orthogonal matrices. When $pb(\mathcal{A}) = \{Q_1,\dots,Q_w\}$, we have
$$Q_j = A_j A_j^\top, \quad j = 1,\dots,w, \qquad (12)$$
the column vectors of $A_j$ constituting an orthonormal basis for $R(Q_j)$, $j = 1,\dots,w$. Then
$$P = \begin{bmatrix} A_1 & \cdots & A_w \end{bmatrix} \qquad (13)$$
will be an orthogonal matrix associated to $\mathcal{A}$. There is not a unique orthogonal matrix associated to a CJA. With $P_j$, $j = 1,\dots,w$, orthogonal matrices, and
$$B_j = A_j P_j, \quad j = 1,\dots,w, \qquad (14)$$
the column vectors of $B_j$ will also constitute an orthonormal basis for $R(Q_j)$, $j = 1,\dots,w$, so
$$R = \begin{bmatrix} B_1 & \cdots & B_w \end{bmatrix} \qquad (15)$$
will also be an orthogonal matrix associated to $\mathcal{A}$.
This relation between CJAs and orthogonal matrices will be useful when we consider prime basis factorials and their fractional replicates.
In what follows we consider models with $r$ observations per treatment. If, when $r = 1$, the model is associated to the CJA $\mathcal{A}$, whatever $r > 1$ the model will (see [3]) be associated to $\mathcal{A} \otimes \mathcal{A}(r)$, with
$$pb(\mathcal{A}(r)) = \left\{ \tfrac{1}{r}J_r,\ \bar{J}_r \right\}, \qquad (16)$$
where $\bar{J}_r = I_r - \tfrac{1}{r}J_r$. It is interesting to point out that $\mathcal{A}(r)$ is complete and regular with $\dim(\mathcal{A}(r)) = 2$. When deriving complex models, we will assume that the simple models we are considering have $r = 1$ replicates. Then, the complex model will initially also have $r = 1$ replicates. The previous observations show how to obtain a CJA associated to the complex model when $r > 1$.
If $pb(\mathcal{A}) = \{Q_1,\dots,Q_w\}$, with $Q_j = A_j A_j^\top$, $j = 1,\dots,w$, the matrices in $pb(\mathcal{A} \otimes \mathcal{A}(r))$ will be
$$Q_j \otimes \tfrac{1}{r}J_r = \left( A_j \otimes \tfrac{1}{\sqrt{r}}1_r \right)\left( A_j \otimes \tfrac{1}{\sqrt{r}}1_r \right)^\top, \quad j = 1,\dots,w, \qquad (17)$$
and
$$Q_\perp = I_n \otimes \bar{J}_r = (I_n \otimes T_r)(I_n \otimes T_r)^\top, \qquad (18)$$
where $T_r$ is obtained deleting the first column, equal to $\frac{1}{\sqrt{r}}1_r$, of an $r \times r$ orthogonal matrix.
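A matrix $T_r$ of this kind can be produced numerically, e.g. by a QR factorization whose first column is a multiple of $1_r$ (our sketch; the construction via `np.linalg.qr` is an assumption of ours, any orthogonal completion works):

```python
import numpy as np

r = 4
rng = np.random.default_rng(0)
# QR of [1_r | random columns]: an r x r orthogonal matrix whose first
# column spans 1_r; deleting that column leaves T_r
Q = np.linalg.qr(np.column_stack([np.ones(r), rng.normal(size=(r, r - 1))]))[0]
T = Q[:, 1:]

assert np.allclose(T.T @ T, np.eye(r - 1))                    # orthonormal columns
assert np.allclose(T @ T.T, np.eye(r) - np.ones((r, r)) / r)  # T_r T_r' = J-bar_r, Eq. (18)
```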
3. Associated models

We will consider fixed effects models with $r$ observations for each of the $n$ treatments, associated to the complete CJA $\tilde{\mathcal{A}} \otimes \mathcal{A}(r)$. If $pb(\tilde{\mathcal{A}}) = \{\tilde{Q}_1,\dots,\tilde{Q}_w\}$ we will have $pb(\tilde{\mathcal{A}} \otimes \mathcal{A}(r)) = \{Q_1,\dots,Q_{w+1}\}$ with $Q_j = \tilde{Q}_j \otimes \tfrac{1}{r}J_r$, $j = 1,\dots,w$, and $Q_{w+1} = I_n \otimes \bar{J}_r = (I_n \otimes T_r)(I_n \otimes T_r)^\top$.
Moreover, if $\tilde{Q}_j = \tilde{A}_j \tilde{A}_j^\top$, $j = 1,\dots,w$, we have $Q_j = A_j A_j^\top$ with $A_j = \tilde{A}_j \otimes \tfrac{1}{\sqrt{r}}1_r$, $j = 1,\dots,w$. We will also have $Q_{w+1} = A_{w+1} A_{w+1}^\top$ with $A_{w+1} = I_n \otimes T_r$.
Initially the model may be written as
$$y = \sum_{j=1}^{w} A_j \eta_j + e, \qquad (19)$$
where the $\eta_1,\dots,\eta_w$ are fixed and $e$ is an error vector with null mean vector and variance–covariance matrix $\sigma^2 I_{\bar{n}}$ ($\bar{n} = nr$). Thus, the mean vector will be
$$\mu = \sum_{j=1}^{w} A_j \eta_j, \qquad (20)$$
and the variance–covariance matrix $\sigma^2 I_{\bar{n}}$.
To obtain the canonical form of the model we observe that
$$I_{\bar{n}} = \sum_{j=1}^{w+1} Q_j = \sum_{j=1}^{w+1} A_j A_j^\top, \qquad (21)$$
so that
$$y = \sum_{j=1}^{w+1} A_j \tilde{\eta}_j, \qquad (22)$$
with
$$\begin{cases} \tilde{\eta}_j = A_j^\top y = \eta_j + A_j^\top e, & j = 1,\dots,w,\\ \tilde{\eta}_{w+1} = A_{w+1}^\top y = A_{w+1}^\top e, \end{cases} \qquad (23)$$
thus the $\tilde{\eta}_1,\dots,\tilde{\eta}_w$ [$\tilde{\eta}_{w+1}$] will have mean vectors $\eta_1,\dots,\eta_w$ [$0$] and variance–covariance matrices $\sigma^2 I_{g_j}$ with $g_j = \operatorname{rank}(A_j) = \operatorname{rank}(Q_j)$, $j = 1,\dots,w+1$.
Thus, the mean vector of $y$ will be
$$\mu = \sum_{j=1}^{w} A_j \eta_j = A\eta \qquad (24)$$
with $A = [A_1 \cdots A_w]$ and $\eta = [\eta_1^\top \cdots \eta_w^\top]^\top$. Moreover, the orthogonal projection matrix on the space spanned by $\mu$ will be, see [6],
$$T = A(A^\top A)^{+} A^\top, \qquad (25)$$
and
$$\psi = C\mu \qquad (26)$$
is estimable if and only if $C = WA$, with LSE (Least Squares Estimators) given by
$$\tilde{\psi} = C(A^\top A)^{+} A^\top y = WA(A^\top A)^{+} A^\top y = WTy. \qquad (27)$$
Now, we also have $T = \sum_{j=1}^{w} Q_j = \sum_{j=1}^{w} A_j A_j^\top$, so
$$\tilde{\psi} = W \sum_{j=1}^{w} A_j \tilde{\eta}_j, \qquad (28)$$
and since $\psi = W \sum_{j=1}^{w} A_j \eta_j$ we notice the importance of the parameters $\eta_j$, $j = 1,\dots,w$, and of their estimators $\tilde{\eta}_j$, $j = 1,\dots,w$, in treating estimable vectors.
The cross-covariance matrices of the $\tilde{\eta}_j$, $j = 1,\dots,w+1$, are null, so that if we assume the normality of $y$ the $\tilde{\eta}_1,\dots,\tilde{\eta}_{w+1}$ will be normal and independent. Thus $S_j(\eta_{j,0}) = \|\tilde{\eta}_j - \eta_{j,0}\|^2$ will be the product by $\sigma^2$ of a chi-square with $g_j$ degrees of freedom and non-centrality parameter $\delta_j(\eta_{j,0}) = \frac{1}{\sigma^2}\|\eta_j - \eta_{j,0}\|^2$, $j = 1,\dots,w$. Moreover, $S_j(\eta_{j,0})$, $j = 1,\dots,w$, will be independent from $S = \|\tilde{\eta}_{w+1}\|^2$, which will be the product by $\sigma^2$ of a central chi-square with $g = n(r-1) = g_{w+1}$ degrees of freedom. Thus for testing
$$H_0(\eta_{j,0}):\ \eta_j = \eta_{j,0}, \quad j = 1,\dots,w, \qquad (29)$$
we have an F test with statistic
$$F_j(\eta_{j,0}) = \frac{g}{g_j}\,\frac{S_j(\eta_{j,0})}{S}, \quad j = 1,\dots,w. \qquad (30)$$
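For a balanced one-way layout the statistic (30) can be sketched as follows (our simulation, not the paper's; the data are random and the matrices $A_2$ and $A_3 = A_{w+1}$ are built from the orthonormal completions described in Section 2):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 3, 4                         # treatments and replicates (toy values)
y = rng.normal(size=n * r)          # simulated observation vector

def completion(m):
    # orthonormal completion of (1/sqrt(m)) 1_m: the T_m of Section 2
    Q = np.linalg.qr(np.column_stack([np.ones(m), rng.normal(size=(m, m - 1))]))[0]
    return Q[:, 1:]

A2 = np.kron(completion(n), np.ones((r, 1)) / np.sqrt(r))  # treatment-effect space
A3 = np.kron(np.eye(n), completion(r))                     # error space, A_{w+1}

S2 = np.sum((A2.T @ y) ** 2)        # S_j(0), for testing H_0 : eta_j = 0
S = np.sum((A3.T @ y) ** 2)         # error sum of squares, g = n(r-1) d.f.
g2, g = n - 1, n * (r - 1)
F2 = (g / g2) * S2 / S              # the statistic of Eq. (30)

# the orthogonal pieces recover the total sum of squares
grand = np.sum(y) ** 2 / (n * r)
assert np.isclose(grand + S2 + S, y @ y)
```

Under $H_0$, `F2` would be compared with the quantile of the F distribution with $g_2$ and $g$ degrees of freedom.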
The distribution of $F_j(\eta_{j,0})$, $j = 1,\dots,w$, will be the F distribution with $g_j$, $j = 1,\dots,w$, and $g$ degrees of freedom and non-centrality parameter $\delta_j(\eta_{j,0})$, with $\delta_j(\eta_{j,0}) = 0$ when and only when $H_0(\eta_{j,0})$ holds. This test will be strictly unbiased.
The hypothesis $H_0(\eta_{j,0})$ generalizes the usual hypothesis
$$H_0(0):\ \eta_j = 0. \qquad (31)$$
This extension is interesting since it leads to a new property of the F tests. Thus, $\breve{S}_j = \|\tilde{\eta}_j - \eta_j\|^2$ is the product by $\sigma^2$ of a central chi-square with $g_j$ degrees of freedom independent from $S$, $j = 1,\dots,w$, and so
$$\breve{F}_j = \frac{g}{g_j}\,\frac{\breve{S}_j}{S}, \quad j = 1,\dots,w, \qquad (32)$$
will have the central F distribution with $g_j$ and $g$ degrees of freedom, $j = 1,\dots,w$. If the quantile of that distribution for probability $1-q$ is $f_{1-q,g_j,g}$, $j = 1,\dots,w$,
$$\operatorname{pr}\left( \|\tilde{\eta}_j - \eta_j\|^2 \leq g_j\, f_{1-q,g_j,g}\, \frac{S}{g} \right) = \operatorname{pr}\left( \breve{F}_j \leq f_{1-q,g_j,g} \right) = 1 - q, \quad j = 1,\dots,w. \qquad (33)$$
The first of these inequalities defines a $1-q$ level confidence hypersphere for $\eta_j$. Now, the $q$ level F test for $H_0(\eta_{j,0})$ does not reject that hypothesis if and only if $\eta_{j,0}$ is contained in the $1-q$ level confidence hypersphere, $j = 1,\dots,w$. Thus these tests enjoy duality.
Moreover, once the matrices $A_j$, $j = 1,\dots,w$, are obtained, the analysis of these models is straightforward. So, in the next sections, we will concentrate on the derivation of these matrices.
4. Prime basis factorials

Let us assume that $N$ factors with $p$ (prime) levels cross. Usually (for instance, see [5], Chapters 6 and 9) we take $p = 2$ or $p = 3$. Then we have a $p^N$ factorial with that number of level combinations. These level combinations will be called treatments.
To obtain a CJA $\mathcal{A}(p^N)$ associated to these models we number the factor levels from $0$ to $p-1$. This will enable us to work with vector spaces over Galois fields. We put $[p] = \{0,\dots,p-1\}$ and $[p]_0 = \{1,\dots,p-1\}$. Using modulo $p$ arithmetic we have the Galois field $G[p]$ with support $[p]$. Moreover, the set $[p]^N$ of dimension $N$ vectors with components in $[p]$ will be the support of a dimension $N$ vector space $G[p]^N$ over $G[p]$. The dual $L = L[p]^N$ of $G[p]^N$ is constituted by the linear applications
$$l(x|a) = \left( \sum_{j=1}^{N} a_j x_j \right)_{(p)}, \qquad (34)$$
where $x, a \in G[p]^N$ and $(p)$ indicates the use of modulo $p$ arithmetic. With $c_1,\dots,c_u \in [p]$, we put
$$\sum_{i=1}^{u} c_i\, l(x|a_i) = l\left( x \,\Big|\, \sum_{i=1}^{u} (c_i a_i)_{(p)} \right). \qquad (35)$$
Writing $\phi(a) = l(x|a)$, we define an isomorphism between $G[p]^N$ and $L[p]^N$. Then, the $l(x|a_i)$, $i = 1,\dots,u$, will be linearly independent if and only if the $a_1,\dots,a_u$ are linearly independent. Both $G[p]^N$ and $L[p]^N$ have dimension $N$, and
$$\#(G[p]^N) = \#(L[p]^N) = p^N. \qquad (36)$$
The null vectors of $G[p]^N$ and $L[p]^N$ are $0$ and $l_0(x) = l(x|0)$.
Putting $l(x|a_1) \sim l(x|a_2)$ when $a_2 = (c\,a_1)_{(p)}$, with $c \in [p]_0$, we define an equivalence relation in $L[p]^N$. Now, $l_0(x)$ will be isolated in its equivalence class while non null linear applications are grouped in classes with $p-1$ elements. Thus there will be
$$k_N(p) = \frac{p^N - 1}{p - 1} \qquad (37)$$
such classes. Each of these classes contains one and only one application whose first non null coefficient is $1$. Such linear applications are called reduced and their family is represented by $L[p]^N_r$, with $\#(L[p]^N_r) = k_N(p)$. The reduced homologue of a linear application is the reduced linear application $\sim$-equivalent to it. Applications $l(x|a_1),\dots,l(x|a_u)$ are linearly independent if and only if their reduced homologues are linearly independent. Thus, we may assume that a basis $L = \{l_1,\dots,l_u\}$ of a linear subspace $\mathcal{L}_1 = \mathcal{L}_1(L)$ is constituted by reduced applications. The subspace $\mathcal{L}_1$ is $\sim$-saturated and contains $p^u$ applications. Thus, with $L_{1,r}$ the family of reduced applications belonging to $\mathcal{L}_1$, we have $\#(L_{1,r}) = k_u(p)$.
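The count (37) is immediate to compute (a small sketch of ours):

```python
def k(N, p):
    # Eq. (37): number of equivalence classes of non-null linear applications
    return (p ** N - 1) // (p - 1)

assert k(3, 2) == 7      # the 7 effects and interactions of a 2^3 factorial
assert k(2, 5) == 6      # reappears in Example 2 as k_{3-1}(5) = 6
assert k(4, 5) == 156
```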
Writing $x_1 \equiv_{L_1} x_2$ when, whatever $l \in \mathcal{L}_1$, $l(x_1) = l(x_2)$, we define an equivalence relation in $G[p]^N$ whose equivalence classes are the blocks
$$[L|b] = \{x : l_i(x) = b_i;\ i = 1,\dots,u\}. \qquad (38)$$
There are $p^u$ such blocks, each with cardinal $p^{N-u}$.
Let us order the reduced applications giving the first indexes to those in $L_{1,r}$. If we also order the vectors in $G[p]^N$ we may define the matrix $C(l_h)$, with elements
$$c_{i,j}(l_h) = \begin{cases} 1 & \text{when } l_h(x_j) = i-1,\\ 0 & \text{when } l_h(x_j) \neq i-1, \end{cases} \quad i = 1,\dots,p;\ j = 1,\dots,p^N. \qquad (39)$$
We now establish

Lemma 1. We have $C(l_h)C(l_h)^\top = p^{N-1} I_p$, $h = 1,\dots,k_N(p)$, and $C(l_h)C(l_k)^\top = p^{N-2} J_p$ when $h \neq k$.

Proof. For each column of $C(l_h)$ we have one element equal to $1$, the remaining being null, and for each row of $C(l_h)$ we have $p^{N-1}$ elements equal to $1$, the remaining being null, so $C(l_h)C(l_h)^\top = p^{N-1} I_p$, $h = 1,\dots,k_N(p)$. Moreover, given a row of $C(l_h)$ and a row of $C(l_k)$ there will be $p^{N-2} = \#([l_h, l_k | b_1, b_2])$ matchings between non null elements on both rows, $b_1, b_2 = 0,\dots,p-1$, thus $C(l_h)C(l_k)^\top = p^{N-2} J_p$ when $h \neq k$, and the proof is complete. □
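Lemma 1 can be verified numerically for a small case (our own sketch; $p = 3$, $N = 2$ are assumed toy values):

```python
import numpy as np
from itertools import product

p, N = 3, 2
points = list(product(range(p), repeat=N))   # G[p]^N, in a fixed order

def C(a):
    # incidence matrix of l(x|a), Eq. (39): row i flags the x with l(x) = i - 1
    M = np.zeros((p, p ** N))
    for j, x in enumerate(points):
        M[sum(ai * xi for ai, xi in zip(a, x)) % p, j] = 1
    return M

Ch, Ck = C((1, 0)), C((1, 1))                # two non-equivalent reduced applications
assert np.allclose(Ch @ Ch.T, p ** (N - 1) * np.eye(p))        # Lemma 1, first part
assert np.allclose(Ch @ Ck.T, p ** (N - 2) * np.ones((p, p)))  # Lemma 1, second part
```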
We may now establish

Proposition 1. The matrix
$$P(p^N) = \begin{bmatrix} \frac{1}{\sqrt{p^N}} 1_{p^N} & A(l_1) & \cdots & A(l_{k_N(p)}) \end{bmatrix},$$
where $A(l_h) = \frac{1}{\sqrt{p^{N-1}}}\, C(l_h)^\top T_p$, $h = 1,\dots,k_N(p)$, is an orthogonal matrix associated to the CJA $\mathcal{A}(p^N)$ with
$$pb(\mathcal{A}(p^N)) = \left\{ \tfrac{1}{p^N} J_{p^N},\ Q(l_1),\ \dots,\ Q(l_{k_N(p)}) \right\},$$
where $Q(l_h) = A(l_h)A(l_h)^\top$.
Proof. Reasoning as to establish Lemma 1, we get $C(l_h)1_{p^N} = p^{N-1} 1_p$, thus $A(l_h)^\top 1_{p^N} = \sqrt{p^{N-1}}\, T_p^\top 1_p = 0$, $h = 1,\dots,k_N(p)$. Moreover, $A(l_h)^\top A(l_h) = \frac{1}{p^{N-1}} T_p^\top C(l_h)C(l_h)^\top T_p = T_p^\top T_p = I_{p-1}$, $h = 1,\dots,k_N(p)$. Lastly, with $h \neq k$, $A(l_h)^\top A(l_k) = \frac{1}{p} T_p^\top J_p T_p = \frac{1}{p} T_p^\top 1_p 1_p^\top T_p = 0_{(p-1)\times(p-1)}$. Thus, $P(p^N)$ is orthogonal. The rest of the proof is straightforward. □
We now order the reduced applications so that $l_1,\dots,l_{k_u(p)}$ take fixed values in the blocks. Thus, if we order the blocks, we may define the matrices $D(l_h)$ with elements
$$d_{i,j}(l_h) = \begin{cases} 1 & \text{when } l_h(x_j) = i-1,\\ 0 & \text{when } l_h(x_j) \neq i-1, \end{cases} \quad i = 1,\dots,p;\ j = 1,\dots,p^u;\ h = 1,\dots,k_u(p). \qquad (40)$$
Assuming that we have only one replicate and representing by $z$ the vector of block totals, we have
$$D(l_h)z = C(l_h)y, \quad h = 1,\dots,k_u(p). \qquad (41)$$
Then, with
$$B(l_h) = \frac{1}{\sqrt{p^{u-1}}}\, D(l_h)^\top T_p, \quad h = 1,\dots,k_u(p), \qquad (42)$$
we have
$$B(l_h)^\top z = \sqrt{p^{N-u}}\, A(l_h)^\top y, \quad h = 1,\dots,k_u(p). \qquad (43)$$
Moreover, we can reason as above to show that the matrix
$$R(p^u) = \begin{bmatrix} \frac{1}{\sqrt{p^u}} 1_{p^u} & B(l_1) & \cdots & B(l_{k_u(p)}) \end{bmatrix} \qquad (44)$$
is orthogonal. Thus, with $T$ the sum of all observations,
$$\|z\|^2 - \frac{T^2}{p^u} = \sum_{h=1}^{k_u(p)} \|B(l_h)^\top z\|^2 = p^{N-u} \sum_{h=1}^{k_u(p)} S(l_h), \qquad (45)$$
where
$$S(l_h) = \|A(l_h)^\top y\|^2 = \|Q(l_h)y\|^2, \quad h = 1,\dots,k_u(p). \qquad (46)$$
Thus, with
$$A(L_1) = \begin{bmatrix} A(l_1) & \cdots & A(l_{k_u(p)}) \end{bmatrix}, \qquad Q(L_1) = A(L_1)A(L_1)^\top = \sum_{h=1}^{k_u(p)} Q(l_h), \qquad (47)$$
we have
$$S(L_1) = \frac{1}{p^{N-u}}\|z\|^2 - \frac{T^2}{p^N} = \|A(L_1)^\top y\|^2 = \sum_{h=1}^{k_u(p)} S(l_h). \qquad (48)$$
Moreover, since $P(p^N)$ is orthogonal, we have
$$\|y\|^2 - \frac{T^2}{p^N} = \sum_{h=1}^{k_N(p)} S(l_h) = S(L_1) + \sum_{h=k_u(p)+1}^{k_N(p)} S(l_h). \qquad (49)$$
When we have $r$ replicates we must take
$$S(l_h) = \left\| \left( A(l_h) \otimes \tfrac{1}{\sqrt{r}} 1_r \right)^\top y \right\|^2, \quad h = 1,\dots,k_N(p), \qquad (50)$$
as well as
$$S(L_1) = \left\| \left( A(L_1) \otimes \tfrac{1}{\sqrt{r}} 1_r \right)^\top y \right\|^2 \qquad (51)$$
to get, assuming that the observations are grouped according to treatments,
$$\|y\|^2 - \frac{T^2}{p^N r} = \sum_{h=1}^{k_N(p)} S(l_h) + SSE = S(L_1) + \sum_{h=k_u(p)+1}^{k_N(p)} S(l_h) + SSE, \qquad (52)$$
where
$$SSE = \left\| (I_{p^N} \otimes T_r)^\top y \right\|^2 \qquad (53)$$
is the usual sum of squares for the error. We then have a first case in which we replace matrices in the principal basis of a CJA $\mathcal{A}(p^N)$ by their sum, thus obtaining the principal basis $\left\{ \tfrac{1}{p^N} J_{p^N},\ Q(L_1),\ Q_{k_u(p)+1},\ \dots,\ Q_{k_N(p)} \right\}$ of a sub-CJA, $\mathcal{A}(p^N/L_1)$.
In practice, this merging corresponds to the grouping of the treatments in the blocks. A better control of the experimental error is then achieved at the price of not being able to consider separately the $S(l_h)$, $h = 1,\dots,k_u(p)$.
The order of $l \in L[p]^N_r$, in the family of reduced applications in $L[p]^N$, is the number of its non null coefficients minus $1$. If the order is null, the sole non null coefficient will be $1$ and the application is related to the effects of the corresponding factor. These applications will be called effects. Otherwise the applications will be factorial interactions between the factors for which they have non null coefficients. Usually, $L_1$ is chosen so that $L_{1,r}$ does not contain effects and contains as few as possible low order factorial interactions. For $p = 2$ and $p = 3$ this problem has been studied (for instance, see [1]).
This technique of grouping treatments in blocks is known as confounding. The reason for this is, as shown in (41), that the totals of the different levels of $l \in L_{1,r}$ are the sums of block totals. Thus, the differences between levels of $l \in L_{1,r}$ are confounded with differences between blocks.
We now relate our results with the usual approach for balanced models with $N$ factors that cross. Given $D \subseteq F = \{1,\dots,N\}$, let $X(D)$ be the family of linear applications whose non null coefficient indexes are in $D$ and $X_r(D)$ be the family of reduced linear applications whose non null coefficient indexes are in $D$. $X(D)$ is $\sim$-saturated. We put
$$Q(D) = \sum_{l \in X_r(D)} Q(l), \quad D \subseteq F, \qquad (54)$$
thus getting the matrices that, with $\tfrac{1}{p^N} J_{p^N}$, constitute the principal basis of a CJA $\mathcal{A}(p^N/D)$. This CJA is related to the partitions
$$\begin{cases} \|y\|^2 - \dfrac{T^2}{p^N} = \sum_{D \subseteq F} S(D),\\[4pt] \|y\|^2 - \dfrac{T^2}{p^N r} = \sum_{D \subseteq F} S(D) + SSE, \end{cases} \qquad (55)$$
where
$$S(D) = \sum_{l \in X_r(D)} S(l), \quad D \subseteq F. \qquad (56)$$
These partitions correspond to the usual definition of factors and interactions. When $\#(D) = 1$, $S(D)$ will be the sum of squares for the effects of the corresponding factor, and when $\#(D) > 1$, $S(D)$ will be the sum of squares for the interaction between the factors with indexes in $D$. When $\#(D) > 1$, the sums of squares for the factorial interactions between the factors with indexes in $D$ are merged into the sum of squares for their interaction. For an alternative treatment of this subject, see [7], Sections 2.3 and 7.1.
4.1. Example 1

Let us consider an example of confounding. We assume that $N = 4$, that $p = 5$ and that the reduced applications used to define the blocks are
$$\begin{cases} l_1(x) = x_1 + x_2 + x_3 + x_4,\\ l_2(x) = x_1 + 2x_2 + 3x_3 + 4x_4. \end{cases}$$
Then the other reduced linear applications that are confounded will be
$$\begin{cases} 3(l_1(x) + l_2(x)) = x_1 + 4x_2 + 2x_3,\\ 2(l_1(x) + 2l_2(x)) = x_1 + 4x_3 + 3x_4,\\ 4(l_1(x) + 3l_2(x)) = x_1 + 3x_2 + 2x_4,\\ 4(l_1(x) + 4l_2(x)) = x_2 + 2x_3 + 3x_4. \end{cases}$$
It is interesting to point out that no effects and no first order interactions are confounded, and that the presence of the four factors in the confounded factorial interactions is balanced, since each of them is present in five of the six interactions, each pair in four of them and each triplet in three of them.
Moreover, the system of equations
$$\begin{cases} x_1 + x_2 + x_3 + x_4 = b_1\ (= 0,\dots,4),\\ x_1 + 2x_2 + 3x_3 + 4x_4 = b_2\ (= 0,\dots,4), \end{cases} \qquad (57)$$
gives
$$\begin{cases} x_3 = [4(b_1 + b_2)]_{(5)} + 2x_1 + 3x_2,\\ x_4 = [2(b_1 + 3b_2)]_{(5)} + 2x_1 + x_2, \end{cases} \qquad (58)$$
so it is easy to obtain the $5^2 = 25$ blocks.
We conclude by pointing out that we have $24 = 25 - 1$ degrees of freedom for blocks and $6$ confounded factorial interactions. Of these, $2$ have order $3$ and $4$ have order $2$. For an alternative treatment of this subject, see [7], Sections 2.4 and 7.2.
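The block construction of this example can be sketched in a few lines (our own illustration, not the paper's): each treatment of the $5^4$ factorial is assigned to the block labelled by the pair $(l_1(x), l_2(x))$:

```python
from itertools import product

p = 5
blocks = {}
for x in product(range(p), repeat=4):
    b1 = sum(x) % p                                   # l_1(x)
    b2 = (x[0] + 2*x[1] + 3*x[2] + 4*x[3]) % p        # l_2(x)
    blocks.setdefault((b1, b2), []).append(x)

assert len(blocks) == 25                              # 5^2 = 25 blocks ...
assert all(len(B) == 25 for B in blocks.values())     # ... of 5^{4-2} treatments each
```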
5. Fractional replicates

We now consider only the treatments $x_1,\dots,x_{p^{N-u}}$ in a chosen block $[L|b]$ in order to have a fractional replicate $\frac{1}{p^u}\, p^N$. We give to the chosen treatments the first $p^{N-u}$ indexes. To study these models we introduce in $L[p]^N$ an equivalence relation $\sim_{L_1}$, putting $l \sim_{L_1} g$ if $g = cl + l^{\circ}$, with $c \in [p]_0$ and $l^{\circ} \in \mathcal{L}_1$, where $\mathcal{L}_1$ is the linear subspace of $L[p]^N$ spanned by $L$. Since $l^{\circ}$ will take a fixed value $b^{\circ}$ for all chosen treatments, we will have $g(x_j) = (c\, l(x_j) + b^{\circ})_{(p)}$, $j = 1,\dots,p^{N-u}$. Thus, with $C^{\circ}(l)$ the sub-matrix of $C(l)$ constituted by the first $p^{N-u}$ columns, we see that the rows of $C^{\circ}(g)$ are obtained reordering the rows of $C^{\circ}(l)$. Likewise, with
$$A^{\circ}(l) = \frac{1}{\sqrt{p^{N-u-1}}}\, C^{\circ}(l)^\top T_p, \qquad (59)$$
we see that the columns of $A^{\circ}(g)$ are obtained reordering the columns of $A^{\circ}(l)$. Now, when $r = 1$, we have the sums of squares $S^{\circ}(l) = \|A^{\circ}(l)^\top y\|^2$ and, when $r > 1$, $S^{\circ}(l) = \left\| \left( A^{\circ}(l) \otimes \tfrac{1}{\sqrt{r}} 1_r \right)^\top y \right\|^2$, so that, in both cases,
$$S^{\circ}(l) = S^{\circ}(g). \qquad (60)$$
This fact leads to the choice, in every $\sim_{L_1}$ equivalence class, of a reduced application, to which is attributed the difference between the groups of treatments that correspond to the different values taken by the applications. As we saw, if $l \sim_{L_1} g$, the groups defined by $l$ are the same as those defined by $g$. Thus, $L$ must be chosen in such a way that effects and as many as possible low order interactions are isolated in their $\sim_{L_1}$ equivalence classes. For $p = 2$ and $p = 3$, this problem has been studied (for instance, see [1]).
To study confounding in the case of fractional replicates we complete $L_1 = \{l_1,\dots,l_u\}$ to obtain a basis $\{l_1,\dots,l_u,\dots,l_v,\dots,l_N\}$ for $L[p]^N$. Taking $L_2 = \{l_{u+1},\dots,l_v\}$ and $L_3 = \{l_{v+1},\dots,l_N\}$, as well as $\mathcal{L}_j = \mathcal{L}(L_j)$, $j = 1, 2, 3$, we have the sub-spaces $\mathcal{L}_1 \oplus \mathcal{L}_2$ and $\mathcal{L}_2 \oplus \mathcal{L}_3$ given by the direct sum of $\mathcal{L}_1$ and $\mathcal{L}_2$ and of $\mathcal{L}_2$ and $\mathcal{L}_3$. With
$$\begin{cases} g_1 = \sum_{j=1}^{N} a_{1,j}\, l_j,\\[2pt] g_2 = \sum_{j=1}^{N} a_{2,j}\, l_j, \end{cases} \qquad (61)$$
we have $g_1 \sim_{L_1} g_2$ if and only if $a_{2,j} = (c\, a_{1,j})_{(p)}$, $j = u+1,\dots,N$, with $c \in [p]_0$. Let us establish
Proposition 2. $\mathcal{L}_1$ is a $\sim_{L_1}$ equivalence class. Moreover, there are $k_{N-u}(p)$ classes distinct from $\mathcal{L}_1$, each containing $p^u(p-1)$ applications. The $\sim_{L_1}$ equivalence classes are $\sim$-saturated and $(\mathcal{L}_1 \oplus \mathcal{L}_2)\setminus\mathcal{L}_1$ is the union of $k_{v-u}(p)$ such classes.
Proof. If $g_1 \in \mathcal{L}_1$ we have $g_1 \sim_{L_1} g_2$ if and only if $g_2 \in \mathcal{L}_1$ and, since when $g_1, g_2 \in \mathcal{L}_1$, $g_1 \sim_{L_1} g_2$, we see that $\mathcal{L}_1$ is a $\sim_{L_1}$ equivalence class. Moreover, if $g_1 = g_{1,1} + g_{1,2}$ and $g_2 = g_{2,1} + g_{2,2}$ with $g_{1,1}, g_{2,1} \in \mathcal{L}_1$ and $g_{1,2}, g_{2,2} \in \mathcal{L}_2 \oplus \mathcal{L}_3$ we have, as we saw, $g_1 \sim_{L_1} g_2$ if and only if $g_{1,2} \sim g_{2,2}$. So, the $\sim_{L_1}$ equivalence classes distinct from $\mathcal{L}_1$ will contain one and only one application from $(\mathcal{L}_2 \oplus \mathcal{L}_3)_r$. Thus, there will be $k_{N-u}(p)$ $\sim_{L_1}$ equivalence classes distinct from $\mathcal{L}_1$. Since $\#(L[p]^N \setminus \mathcal{L}_1) = p^N - p^u$, we see that they will each contain $p^u(p-1)$ applications. Lastly, if $g_1 \in \mathcal{L}_1 \oplus \mathcal{L}_2$, $g_1 \sim_{L_1} g_2$ when and only when $g_2 \in \mathcal{L}_1 \oplus \mathcal{L}_2$, so $\mathcal{L}_1 \oplus \mathcal{L}_2$ will be $\sim_{L_1}$-saturated. The rest of the proof is straightforward. □
Let us give the indexes $1,\dots,k_u(p)$ [$k_u(p)+1,\dots,k_u(p)+k_{v-u}(p)$; $k_u(p)+1,\dots,k_u(p)+k_{N-u}(p)$] to the reduced applications in $\mathcal{L}_1$ [$\mathcal{L}_2$; $\mathcal{L}_2 \oplus \mathcal{L}_3$]. The remaining reduced applications will receive the indexes from $k_u(p)+k_{N-u}(p)$ to $k_N(p)$. If $mo(l)$ is a minimum order application $\sim_{L_1}$ equivalent to $l$, we have
$$S(mo(l)) = S(l), \quad l \in (\mathcal{L}_1 \oplus \mathcal{L}_2)_r. \qquad (62)$$
Reasoning as in the preceding section, we show that
$$P\!\left(\tfrac{1}{p^u}\, p^N\right) = \begin{bmatrix} \frac{1}{\sqrt{p^{N-u}}} 1_{p^{N-u}} & A^{\circ}(l);\ l \in (\mathcal{L}_2 \oplus \mathcal{L}_3)_r \end{bmatrix} \qquad (63)$$
is an orthogonal matrix associated to the CJA $\mathcal{A}\!\left(\tfrac{1}{p^u}\, p^N\right)$ with principal basis constituted by $\tfrac{1}{p^{N-u}} J_{p^{N-u}}$ and the $A^{\circ}(l_j)A^{\circ}(l_j)^\top$, $j = k_u(p)+1,\dots,k_u(p)+k_{N-u}(p)$. For a more general discussion of this problem see [7].
Putting $x_1 \equiv_{L_2} x_2$ if $x_1$ and $x_2$ are chosen treatments and, whatever $l \in \mathcal{L}_2$, $l(x_1) = l(x_2)$, we will have an equivalence relation defined in $[L_1|b_1]$ whose equivalence classes are the sub-blocks
$$[L_1, L_2 | b_1, b_2] = [L_1|b_1] \cap [L_2|b_2]. \qquad (64)$$
Moreover, we can reason as above to show that, if the vector $z$ is now the vector of totals of sub-blocks, we have
$$S(L_2) = \frac{1}{p^{N-v}}\|z\|^2 - \frac{T^2}{p^{N-u}} = \sum_{h=k_u(p)+1}^{k_u(p)+k_{v-u}(p)} S(l_h). \qquad (65)$$
When $r = 1$ and no confounding is carried out, we have the partition of sums of squares
$$\|y\|^2 - \frac{T^2}{p^{N-u}} = \sum_{h=k_u(p)+1}^{k_u(p)+k_{N-u}(p)} S(l_h) = \sum_{h=k_u(p)+1}^{k_u(p)+k_{N-u}(p)} S(mo(l_h)), \qquad (66)$$
and, also with no confounding but with $r > 1$, the partition will be
$$\|y\|^2 - \frac{T^2}{r p^{N-u}} = \sum_{h=k_u(p)+1}^{k_u(p)+k_{N-u}(p)} S(l_h) + SSE = \sum_{h=k_u(p)+1}^{k_u(p)+k_{N-u}(p)} S(mo(l_h)) + SSE. \qquad (67)$$
Besides this, if there is confounding, with
$$A^{\circ}(L_2) = \begin{bmatrix} A^{\circ}(l_{k_u(p)+1}) & \cdots & A^{\circ}(l_{k_u(p)+k_{v-u}(p)}) \end{bmatrix} \qquad (68)$$
there is a CJA with principal basis constituted by $\tfrac{1}{p^{N-u}} J_{p^{N-u}}$, $Q^{\circ}(L_2) = A^{\circ}(L_2)A^{\circ}(L_2)^\top$ and the $Q^{\circ}(l_h) = A^{\circ}(l_h)A^{\circ}(l_h)^\top$, $h = k_u(p)+k_{v-u}(p)+1,\dots,k_u(p)+k_{N-u}(p)$. Then we will also have the partition, when $r = 1$,
$$\|y\|^2 - \frac{T^2}{p^{N-u}} = S^{\circ}(L_2) + \sum_{h=k_u(p)+k_{v-u}(p)+1}^{k_u(p)+k_{N-u}(p)} S^{\circ}(l_h) = S^{\circ}(L_2) + \sum_{h=k_u(p)+k_{v-u}(p)+1}^{k_u(p)+k_{N-u}(p)} S^{\circ}(mo(l_h)), \qquad (69)$$
and, when $r > 1$,
$$\|y\|^2 - \frac{T^2}{r p^{N-u}} = S^{\circ}(L_2) + \sum_{h=k_u(p)+k_{v-u}(p)+1}^{k_u(p)+k_{N-u}(p)} S^{\circ}(l_h) + SSE = S^{\circ}(L_2) + \sum_{h=k_u(p)+k_{v-u}(p)+1}^{k_u(p)+k_{N-u}(p)} S^{\circ}(mo(l_h)) + SSE. \qquad (70)$$
Let $X^{\circ}(D) = X(D) \cap (\mathcal{L}_2 \oplus \mathcal{L}_3)_r$. Then we have the partitions
$$\begin{cases} \|y\|^2 - \dfrac{T^2}{p^{N-u}} = \sum_{D \subseteq F} S^{\circ}(D),\\[4pt] \|y\|^2 - \dfrac{T^2}{r p^{N-u}} = \sum_{D \subseteq F} S^{\circ}(D) + SSE, \end{cases} \qquad (71)$$
where
$$S^{\circ}(D) = \sum_{l \in X^{\circ}(D)} S^{\circ}(l) = \sum_{l \in X^{\circ}(D)} S^{\circ}(mo(l)); \quad D \subseteq F. \qquad (72)$$
These last partitions correspond to the usual definition of factors and interactions, as was the case of the complete factorials in the preceding sections. If $P$ is the family of sets $D$ such that $X^{\circ}(D) \neq \emptyset$, we have the CJA $\mathcal{A}\!\left(\tfrac{1}{p^u}\, p^N / P\right)$ with principal basis constituted by $\tfrac{1}{p^{N-u}} J_{p^{N-u}}$ and the
$$Q^{\circ}(D) = \sum_{l \in X^{\circ}(D)} Q^{\circ}(l); \quad D \in P. \qquad (73)$$
5.1. Example 2

We now consider an example of fractional replication with $N = 3$ and $p = 5$, in which we use
$$l(x) = x_1 + x_2 + 3x_3 \qquad (74)$$
to generate the chosen block. Since
$$x_1 + x_2 + 3x_3 = b\ (= 0,\dots,4) \qquad (75)$$
gives
$$x_3 = (2b)_{(5)} + 3x_1 + 3x_2, \qquad (76)$$
it is easy to generate that block. Moreover, the sole reduced application in $L_1$ is $l(x)$ and there are
$$k_{3-1}(5) = \frac{5^{3-1} - 1}{5 - 1} = 6$$
$\sim_{L_1}$ classes distinct from $\mathcal{L}_1$. These classes are
$$\begin{cases} \{x_1;\ x_1 + 3x_2 + 4x_3;\ x_1 + 4x_2 + 2x_3;\ x_1 + 2x_2 + x_3;\ x_2 + 3x_3\},\\ \{x_2;\ x_1 + 2x_2 + 3x_3;\ x_1 + 4x_2 + 3x_3;\ x_1 + 3x_2 + 3x_3;\ x_1 + 3x_3\},\\ \{x_3;\ x_1 + x_2 + 4x_3;\ x_1 + x_2 + x_3;\ x_1 + x_2;\ x_1 + x_2 + 2x_3\},\\ \{x_1 + 2x_2;\ x_1 + 4x_2 + 4x_3;\ x_1 + 3x_2 + 2x_3;\ x_1 + x_3;\ x_2 + 2x_3\},\\ \{x_1 + 2x_3;\ x_1 + 3x_2;\ x_1 + 4x_2 + x_3;\ x_1 + 2x_2 + 4x_3;\ x_2 + x_3\},\\ \{x_2 + 4x_3;\ x_1 + 2x_2 + 2x_3;\ x_1 + 4x_2;\ x_1 + 3x_2 + x_3;\ x_1 + 4x_3\}. \end{cases}$$
We may choose the first application in each of these classes as its representative. In this way, we may test the three effects and the three first order interactions, one for each pair of factors. Let $l_{i,j}(\mathbf{x})$ be the $j$th application in the $i$th class with, for instance, $l_{2,3}(\mathbf{x})=x_{1}+4x_{2}+3x_{3}$. It is easy to check that
$$l_{i,j}(\mathbf{x})=c_{i,j}\big(l_{i,1}(\mathbf{x})+j\,l(\mathbf{x})\big),\quad j=2,3,4,5;\ i=1,2,3,4,5,6,\quad(77)$$
with, for instance, $c_{2,3}=2$. For the treatments in the block $[l|b]$, we have
$$l_{i,j}(\mathbf{x})=c_{i,j}\big(l_{i,1}(\mathbf{x})+jb\big),\quad j=2,3,4,5;\ i=1,2,3,4,5,6,\quad(78)$$
which shows how the values of the representative applications determine the values of the other reduced applications in the same $\sim_{L_{1}}$ class.
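Generating the chosen block from (76) is a one-liner. The sketch below (our own illustration, with an arbitrary block label $b$) lists the $5^{2}$ treatments of $[l|b]$ and verifies that each satisfies (75):

```python
from itertools import product

p, b = 5, 2          # illustrative choice of the block label b

# block [l|b]: treatments with x1 + x2 + 3*x3 = b (mod 5),
# generated through x3 = 2*b + 3*x1 + 3*x2 (mod 5), cf. (76)
block = [(x1, x2, (2 * b + 3 * x1 + 3 * x2) % p)
         for x1, x2 in product(range(p), repeat=2)]

assert len(block) == p ** 2                      # a 1/5 fraction of the 5^3 treatments
assert all((x1 + x2 + 3 * x3) % p == b for x1, x2, x3 in block)
print(len(block))
```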
V. de Jesus et al. / Linear Algebra and its Applications 430 (2009) 27812797 2793
6. Crossing and nesting
Let the $l^{\ast}_{h,1}=mo(l_{h,1}),\ldots,l^{\ast}_{h,k_{h}}=mo(l_{h,k_{h}})$ be the selected applications for the models $\frac{1}{p_{h}^{u_{h}}}p_{h}^{N_{h}}$, $h=1,\ldots,z$, their
$$\left\{\frac{1}{p_{h}^{N_{h}-u_{h}}}\mathbf{J}_{p_{h}^{N_{h}-u_{h}}},\ \mathbf{Q}^{\ast}(l^{\ast}_{h,1}),\ldots,\mathbf{Q}^{\ast}(l^{\ast}_{h,k_{h}})\right\},\quad h=1,\ldots,z,$$
the principal bases, and $[L_{1,h}|b_{1,h}]$ the blocks constituted by the chosen treatments, $h=1,\ldots,z$. If $u_{h}=0$, all the treatments are chosen, so that $[L_{1,h}|b_{1,h}]=G[p_{h}]^{N_{h}}$.
When we cross the models, the space of the new treatments will be the cartesian product $\times_{h=1}^{z}[L_{h}|b_{h}]$, and we have a model associated to the CJA $\bigotimes_{h=1}^{z}\mathcal{A}\big(\frac{1}{p_{h}^{u_{h}}}p_{h}^{N_{h}}\big)$. With
$$\Gamma=\{h=(h_{1},\ldots,h_{z}):\ h_{i}=0,\ldots,k_{i},\ i=1,\ldots,z\}\quad(79)$$
the principal basis of this CJA will be constituted by the
$$\mathbf{Q}(h)=\bigotimes_{i=1}^{z}\mathbf{Q}^{\ast}(i,h_{i}),\quad h\in\Gamma,\quad(80)$$
where $\mathbf{Q}^{\ast}(i,0)=\frac{1}{p_{i}^{N_{i}-u_{i}}}\mathbf{J}_{p_{i}^{N_{i}-u_{i}}}$ and $\mathbf{Q}^{\ast}(i,h_{i})=\mathbf{Q}^{\ast}(l_{i,h_{i}})$, $h_{i}=1,\ldots,k_{v_{i}-u_{i}}(p_{i})$, $i=1,\ldots,z$.
If we take $r>1$ replicates, we will have the CJA $\bigotimes_{h=1}^{z}\mathcal{A}\big(\frac{1}{p_{h}^{u_{h}}}p_{h}^{N_{h}}\big)\otimes\mathcal{A}(r)$.
Taking $\mathbf{T}_{i,0}=\mathbf{I}_{1}$ and $\mathbf{T}_{i,h_{i}}=\mathbf{T}_{p_{i}}$, $h_{i}=1,\ldots,k_{i}$, $i=1,\ldots,z$, we have
$$\mathbf{Q}(h)=\mathbf{A}(h)\mathbf{A}(h)^{\top},\quad h\in\Gamma,\quad(81)$$
where
$$\mathbf{A}(h)=\bigotimes_{i=1}^{z}\mathbf{A}^{\ast}(i,h_{i})=\bigotimes_{i=1}^{z}\left(\frac{1}{\sqrt{p_{i}^{N_{i}-u_{i}}}}\,\mathbf{C}^{\ast}(i,h_{i})^{\top}\otimes\mathbf{T}_{i,h_{i}}\right)=\bigotimes_{i=1}^{z}\mathbf{C}^{\ast}(i,h_{i})^{\top}\otimes\bigotimes_{i=1}^{z}\mathbf{T}_{i,h_{i}}=\mathbf{C}(h)^{\top}\otimes\mathbf{T}(h),\quad h\in\Gamma,\quad(82)$$
with
$$\begin{cases}\mathbf{C}(h)=\bigotimes_{i=1}^{z}\mathbf{C}^{\ast}(i,h_{i}),& h\in\Gamma,\\[2mm]\mathbf{T}(h)=\bigotimes_{i=1}^{z}\mathbf{T}(i,h_{i}),& h\in\Gamma.\end{cases}\quad(83)$$
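The Kronecker structure in (81)–(83) can be checked numerically on a small crossed model. The sketch below (numpy; the `contrasts` helper and the orthonormal-rows convention are our own assumptions, standing in for the $\mathbf{C}^{\ast}$ matrices) builds $\mathbf{Q}$ for the interaction component of a crossed 2-level by 3-level model and verifies that it is an orthogonal projection matrix of the expected rank:

```python
import numpy as np

def contrasts(p):
    """Orthonormal rows spanning the contrast space of a p-level factor
    (the part orthogonal to the constant vector); an assumed stand-in for C*."""
    m = np.eye(p) - np.full((p, p), 1.0 / p)   # centering projector
    w, v = np.linalg.eigh(m)                   # eigenvectors of a symmetric matrix
    return v[:, w > 0.5].T                     # (p-1) x p, orthonormal rows

# crossed 2-level x 3-level model: A is a Kronecker product, as in (82)
A1, A2 = contrasts(2), contrasts(3)
A = np.kron(A1, A2)                 # 2 x 6, orthonormal rows
Q = A.T @ A                         # the corresponding OPM (transposed convention)

assert np.allclose(Q, Q.T)          # symmetric
assert np.allclose(Q @ Q, Q)        # idempotent
assert round(np.trace(Q)) == (2 - 1) * (3 - 1)   # rank 1 * 2 = 2
print(round(np.trace(Q)))
```

The rank of the Kronecker product is the product of the ranks, which is what makes the degrees-of-freedom bookkeeping of the later sections work.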
We may point out that, if the treatments in the initial models are grouped in blocks $[L_{1,h}|b_{h}]$, $h=1,\ldots,z$, the blocks in the product model will be the cartesian product of the blocks in the original models. If confounding is not used in one or more of the initial models, all the treatments in those models will constitute a unique block. Moreover, we can reason as in Section 4 to define an orthogonal matrix $\mathbf{P}(p^{v_{h}-u_{h}})$ associated to the confounding. If $u_{h}=v_{h}$ there will be no confounding for the corresponding initial model and $\mathbf{P}(1)=\mathbf{I}_{1}$. With
$$\mathbf{P}=\bigotimes_{h=1}^{z}\mathbf{P}(p^{v_{h}-u_{h}}),\quad(84)$$
and $\mathbf{z}$ the vector of block totals, we have the sum of squares for blocks given by
$$S\left(\times_{h=1}^{z}L_{2,h}\right)=\frac{1}{r\prod_{h=1}^{z}p_{h}^{N_{h}-v_{h}}}\,\|\mathbf{P}\mathbf{z}\|^{2}-\frac{T^{2}}{r\prod_{h=1}^{z}p_{h}^{N_{h}-u_{h}}},\quad(85)$$
where, as before, $T$ is the grand total. An alternative expression for this sum of squares will be given by
$$S\left(\times_{h=1}^{z}L_{2,h}\right)=\left\|\mathbf{Q}\left(\times_{h=1}^{z}L_{2,h}\right)\mathbf{y}\right\|^{2},\quad(86)$$
where $\mathbf{Q}\big(\times_{h=1}^{z}L_{2,h}\big)$ is obtained adding the $\bigotimes_{h=1}^{z}\mathbf{Q}^{\ast}(h)$, where $\mathbf{Q}^{\ast}(h)=\frac{1}{p_{h}^{N_{h}-u_{h}}}\mathbf{J}_{p_{h}^{N_{h}-u_{h}}}$ or $\mathbf{Q}^{\ast}(l_{h,i})$, $k_{u_{h}}(p_{h})<i\le k_{u_{h}}(p_{h})+k_{v_{h}-u_{h}}(p_{h})$, with the exception of $\bigotimes_{h=1}^{z}\frac{1}{p_{h}^{N_{h}-u_{h}}}\mathbf{J}_{p_{h}^{N_{h}-u_{h}}}$. We now have the sub-CJA $\mathcal{A}\big(\times_{h=1}^{z}\big(\frac{1}{p_{h}^{u_{h}}}p_{h}^{N_{h}}\big)\big/\big(\times_{h=1}^{z}L_{2,h}\big)\big)$ of $\mathcal{A}\big(\times_{h=1}^{z}\big(\frac{1}{p_{h}^{u_{h}}}p_{h}^{N_{h}}\big)\big)=\bigotimes_{h=1}^{z}\mathcal{A}\big(\frac{1}{p_{h}^{u_{h}}}p_{h}^{N_{h}}\big)$.
When we consider for the initial models the sub-CJA $\mathcal{A}\big(\frac{1}{p_{h}^{u_{h}}}p_{h}^{N_{h}}\big/D_{h}\big)$, corresponding to the classic partitions in sums of squares for factors and interactions, for the final model we will have the CJA
$$\mathcal{A}\left(\times_{h=1}^{z}\left(\frac{1}{p_{h}^{u_{h}}}p_{h}^{N_{h}}\right)\Big/\times_{h=1}^{z}D_{h}\right)=\bigotimes_{h=1}^{z}\mathcal{A}\left(\frac{1}{p_{h}^{u_{h}}}p_{h}^{N_{h}}\Big/D_{h}\right).$$
While we had to discuss a number of details about the crossing of models, model nesting is much more straightforward: we can apply directly the results of Sections 2 and 3.
6.1. Example 3
We may use the models of the previous two examples, nesting one of them inside the other. This may be done in a straightforward way, since the matrices $\mathbf{A}$ for the final model may be obtained directly from those of the initial models. It is interesting to point out that crossing the two models would not be convenient, since we would be using a second order factorial interaction for selecting the treatments. In the next section, we give an example in which model crossing is used.
7. Aggregation and disaggregation
A first case of factor merging occurs in prime basis factorials $p^{N}$. If, with $F=\{1,\ldots,N\}$, we have a disjoint partition
$$F=\bigcup_{j=1}^{w}C_{j},\quad(87)$$
we can merge the factors in each of the $C_{j}$, $j=1,\ldots,w$, into a factor with $p^{\#(C_{j})}$ levels, $j=1,\ldots,w$. Each of the levels of one of the new factors will correspond to a combination of levels of the merged factors. With $V\subseteq W=\{1,\ldots,w\}$, let $D(V)$ be the set of reduced applications with at least a nonnull coefficient for the factors with indexes in each of the $C_{j}$ with $j\in V$, and null coefficients in the remaining $C_{j}$. If $\#(V)=1$, then $V=\{j\}$ and the only nonnull coefficients of $l\in D(V)$ will be for factors in $C_{j}$, so either $l$ gives the effects of a sub-factor of the $j$th new factor or a factorial interaction between sub-factors of that factor. If $\#(V)>1$, $l$ will be a factorial interaction between sub-factors of the new factors with indexes in $V$. Then we have
$$\mathbf{Q}(V)=\sum_{l\in D(V)}\mathbf{Q}(l)=\mathbf{A}(V)\mathbf{A}(V)^{\top},\quad V\subseteq W,\quad(88)$$
where
$$\mathbf{A}(V)=[\mathbf{A}(l);\ l\in D(V)],\quad(89)$$
and we can apply the results in Section 3.
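The smallest instance of (88) is merging two 2-level factors into one 4-level factor: the merged factor's OPM is the sum of the projections for $x_{1}$, $x_{2}$ and $x_{1}+x_{2}$. A numeric sketch (numpy; the Kronecker construction of the individual projections is our own, standard for $2^{2}$ factorials):

```python
import numpy as np

J2 = np.full((2, 2), 0.5)       # (1/2) J_2, averaging over a 2-level factor
K2 = np.eye(2) - J2             # contrast projector of a 2-level factor

# projections for x1, x2 and the interaction x1 + x2 in a 2^2 factorial
Q_x1   = np.kron(K2, J2)
Q_x2   = np.kron(J2, K2)
Q_x1x2 = np.kron(K2, K2)

# by (88), the merged 4-level factor's OPM is the sum of the three projections,
# i.e. the projector onto all contrasts of the 4 combined levels
Q_merged = Q_x1 + Q_x2 + Q_x1x2
expected = np.eye(4) - np.full((4, 4), 0.25)

assert np.allclose(Q_merged, expected)
assert np.allclose(Q_merged @ Q_merged, Q_merged)
print(round(np.trace(Q_merged)))
```

The three degrees of freedom of the merged factor are exactly the $1+1+1$ degrees of freedom of the two effects and their interaction.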
When we merge factors in a fractional replicate, the procedure is the same. The only difference is that we must work with the distinct $mo(l_{h})$, $h=k_{u}(p)+1,\ldots,k_{u}(p)+k_{N-u}(p)$, or, if we also carried out confounding, the $mo(l_{h})$, $h=k_{u}(p)+k_{v-u}(p)+1,\ldots,k_{u}(p)+k_{N-u}(p)$.
Up to now we have merged factors with the same (prime) number of levels. To overcome this limitation, we may merge factors in models obtained through crossing, thus obtaining factors whose number of levels is the product of the numbers of levels of the factors in the initial models. Now, some or all of the factors to be merged could themselves have been the result of merging, thus having numbers of levels that are powers of prime numbers. In this way we may obtain factors with an arbitrary number of levels. The relevant sub-CJA would be obtained using the procedure of condensation described in Section 2. Moreover, part or all of the initial models could be fractional replicates.
Lastly, given a model associated to a CJA $\mathcal{A}$, we carry out disaggregation when we replace two or more matrices in the principal basis of $\mathcal{A}$ by sums of pairwise orthogonal OPMs. Usually, this is the last operation to be applied. Thus, once the models to cross and nest are obtained, we carry out the aggregations and afterwards, if such is the case, we carry out disaggregation.
7.1. Example 4
We shall assume that we have two models, each with two factors. In the first [second] model the factors have 2 [3] levels. Assuming that we cross these models and that we aggregate the first with the third and the second with the fourth factors, we get a model with two factors with six levels each.
The principal bases for the two initial models are $\{\mathbf{Q}(l_{0,1});\ \mathbf{Q}(x_{1});\ \mathbf{Q}(x_{2});\ \mathbf{Q}(x_{1}+x_{2})\}$ and $\{\mathbf{Q}(l_{0,2});\ \mathbf{Q}(z_{1});\ \mathbf{Q}(z_{2});\ \mathbf{Q}(z_{1}+z_{2});\ \mathbf{Q}(z_{1}+2z_{2})\}$.
In the final model we have a principal basis constituted by the matrices
$$\begin{aligned}
\mathbf{Q}_{0}&=\mathbf{Q}(l_{0,1})\otimes\mathbf{Q}(l_{0,2})\otimes\tfrac{1}{r}\mathbf{J}_{r},\\
\mathbf{Q}(1)&=\mathbf{Q}(l_{0,1})\otimes\mathbf{Q}(z_{1})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(x_{1})\otimes\mathbf{Q}(l_{0,2})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(x_{1})\otimes\mathbf{Q}(z_{1})\otimes\tfrac{1}{r}\mathbf{J}_{r},\\
\mathbf{Q}(2)&=\mathbf{Q}(l_{0,1})\otimes\mathbf{Q}(z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(x_{2})\otimes\mathbf{Q}(l_{0,2})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(x_{2})\otimes\mathbf{Q}(z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r},\\
\mathbf{Q}(1{\times}2)&=\mathbf{Q}(l_{0,1})\otimes\mathbf{Q}(z_{1}+z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(l_{0,1})\otimes\mathbf{Q}(z_{1}+2z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r}\\
&\quad+\mathbf{Q}(x_{1})\otimes\mathbf{Q}(z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(x_{1})\otimes\mathbf{Q}(z_{1}+z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(x_{1})\otimes\mathbf{Q}(z_{1}+2z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r}\\
&\quad+\mathbf{Q}(x_{2})\otimes\mathbf{Q}(z_{1})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(x_{2})\otimes\mathbf{Q}(z_{1}+z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(x_{2})\otimes\mathbf{Q}(z_{1}+2z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r}\\
&\quad+\mathbf{Q}(x_{1}+x_{2})\otimes\mathbf{Q}(l_{0,2})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(x_{1}+x_{2})\otimes\mathbf{Q}(z_{1})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(x_{1}+x_{2})\otimes\mathbf{Q}(z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r}\\
&\quad+\mathbf{Q}(x_{1}+x_{2})\otimes\mathbf{Q}(z_{1}+z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r}+\mathbf{Q}(x_{1}+x_{2})\otimes\mathbf{Q}(z_{1}+2z_{2})\otimes\tfrac{1}{r}\mathbf{J}_{r}.
\end{aligned}$$
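The rank bookkeeping of this principal basis can be verified numerically. The sketch below (numpy; the explicit `Javg`/`K` projectors and the choice $r=2$ are our own illustrative assumptions) builds $\mathbf{Q}(1)$ for the $2^{2}\times 3^{2}\times r$ design and checks that it is an OPM of rank 5:

```python
import numpy as np

def Javg(n):                       # (1/n) J_n, averaging projector
    return np.full((n, n), 1.0 / n)

def K(n):                          # contrast projector of an n-level factor
    return np.eye(n) - Javg(n)

r = 2
# first model: 2 x 2 factorial (x1, x2); second model: 3 x 3 factorial (z1, z2)
Q_l01, Q_l02 = Javg(4), Javg(9)                   # overall-mean projectors
Q_x1 = np.kron(K(2), Javg(2))                     # main effect of x1, rank 1
Q_z1 = np.kron(K(3), Javg(3))                     # main effect of z1, rank 2

# Q(1) for the first merged six-level factor, as in the display above
Q1 = (np.kron(np.kron(Q_l01, Q_z1), Javg(r))
      + np.kron(np.kron(Q_x1, Q_l02), Javg(r))
      + np.kron(np.kron(Q_x1, Q_z1), Javg(r)))

assert np.allclose(Q1 @ Q1, Q1)                   # an orthogonal projection matrix
assert round(np.trace(Q1)) == 5                   # g(1) = 2 + 1 + 2 = 5
print(round(np.trace(Q1)))
```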
These orthogonal projection matrices have ranks
$$\begin{aligned}
g_{0}&=1\times 1=1,\\
g(1)&=1\times 2+1\times 1+1\times 2=5,\\
g(2)&=1\times 2+1\times 1+1\times 2=5,\\
g(1{\times}2)&=1\times 2+1\times 2+1\times 2+1\times 2+1\times 2+1\times 2+1\times 2+1\times 2\\
&\quad+1\times 1+1\times 2+1\times 2+1\times 2+1\times 2=25.
\end{aligned}$$
Thus the sums of squares for the two six level factors will, as expected, have five degrees of freedom and be given by
$$\begin{aligned}
S(1)&=\|\mathbf{Q}(1)\mathbf{y}\|^{2}\\
&=\left\|\left(\mathbf{A}(l_{0,1})^{\top}\otimes\mathbf{A}(z_{1})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2}+\left\|\left(\mathbf{A}(x_{1})^{\top}\otimes\mathbf{A}(l_{0,2})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2}+\left\|\left(\mathbf{A}(x_{1})^{\top}\otimes\mathbf{A}(z_{1})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2},\\
S(2)&=\|\mathbf{Q}(2)\mathbf{y}\|^{2}\\
&=\left\|\left(\mathbf{A}(l_{0,1})^{\top}\otimes\mathbf{A}(z_{2})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2}+\left\|\left(\mathbf{A}(x_{2})^{\top}\otimes\mathbf{A}(l_{0,2})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2}+\left\|\left(\mathbf{A}(x_{2})^{\top}\otimes\mathbf{A}(z_{2})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2}.
\end{aligned}\quad(90)$$
Likewise, we will have the sum of squares $S(1{\times}2)$ for the interaction, with 25 degrees of freedom. While $S(1)$ and $S(2)$ are sums of three terms, $S(1{\times}2)$ is the sum of thirteen terms. If we assume the models to have fixed effects, we can treat globally the hypotheses of absence of effects and interactions. Then, with $S$ the residual sum of squares and $g=36(r-1)$ its number of degrees of freedom, we will have the F test statistics
$$\begin{cases}
F(1)=\dfrac{36(r-1)}{5}\,\dfrac{S(1)}{S},\\[3mm]
F(2)=\dfrac{36(r-1)}{5}\,\dfrac{S(2)}{S},\\[3mm]
F(1{\times}2)=\dfrac{36(r-1)}{25}\,\dfrac{S(1{\times}2)}{S},
\end{cases}\quad(91)$$
with 5 and $g$, 5 and $g$, and 25 and $g$ degrees of freedom, respectively.
We could also consider the hypothesis of absence of effects and interactions (for the six level factors) as the intersection of hypotheses to be tested separately. For instance, the hypothesis of absence of effects of the first (six level) factor holds if and only if
$$\|\mathbf{Q}(1)\boldsymbol{\mu}\|^{2}=\left\|\left(\mathbf{Q}(l_{0,1})\otimes\mathbf{Q}(z_{1})\otimes\tfrac{1}{r}\mathbf{J}_{r}\right)\boldsymbol{\mu}\right\|^{2}+\left\|\left(\mathbf{Q}(x_{1})\otimes\mathbf{Q}(l_{0,2})\otimes\tfrac{1}{r}\mathbf{J}_{r}\right)\boldsymbol{\mu}\right\|^{2}+\left\|\left(\mathbf{Q}(x_{1})\otimes\mathbf{Q}(z_{1})\otimes\tfrac{1}{r}\mathbf{J}_{r}\right)\boldsymbol{\mu}\right\|^{2}=0.$$
Thus, this hypothesis holds if and only if the hypotheses
$$H_{0,1}(1):\left(\mathbf{A}(l_{0,1})^{\top}\otimes\mathbf{A}(z_{1})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\boldsymbol{\mu}=\mathbf{0},$$
$$H_{0,2}(1):\left(\mathbf{A}(x_{1})^{\top}\otimes\mathbf{A}(l_{0,2})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\boldsymbol{\mu}=\mathbf{0},$$
$$H_{0,3}(1):\left(\mathbf{A}(x_{1})^{\top}\otimes\mathbf{A}(z_{1})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\boldsymbol{\mu}=\mathbf{0},$$
where $\boldsymbol{\mu}$ is the mean vector, simultaneously hold. The ranks of the matrices that define these hypotheses are 2, 1, and 2, so the corresponding F test statistics will be
$$\begin{cases}
F_{1}(1)=\dfrac{36(r-1)}{2}\,\dfrac{\left\|\left(\mathbf{A}(l_{0,1})^{\top}\otimes\mathbf{A}(z_{1})^{\top}\otimes\frac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2}}{S},\\[4mm]
F_{2}(1)=\dfrac{36(r-1)}{1}\,\dfrac{\left\|\left(\mathbf{A}(x_{1})^{\top}\otimes\mathbf{A}(l_{0,2})^{\top}\otimes\frac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2}}{S},\\[4mm]
F_{3}(1)=\dfrac{36(r-1)}{2}\,\dfrac{\left\|\left(\mathbf{A}(x_{1})^{\top}\otimes\mathbf{A}(z_{1})^{\top}\otimes\frac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2}}{S},
\end{cases}\quad(92)$$
so that
$$F(1)=\frac{2F_{1}(1)+F_{2}(1)+2F_{3}(1)}{5}.\quad(93)$$
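The weighted-average identity (93) is a pure bookkeeping fact and is easy to confirm with arbitrary data. The sketch below (numpy; the explicit projectors, $r=2$, a random response and a placeholder residual sum of squares are all our own assumptions) computes the three component statistics and the global one:

```python
import numpy as np

def Javg(n): return np.full((n, n), 1.0 / n)    # (1/n) J_n
def K(n): return np.eye(n) - Javg(n)            # contrast projector

r = 2
# the three mutually orthogonal components of Q(1), ranks 2, 1 and 2
parts = [np.kron(np.kron(Javg(4), np.kron(K(3), Javg(3))), Javg(r)),
         np.kron(np.kron(np.kron(K(2), Javg(2)), Javg(9)), Javg(r)),
         np.kron(np.kron(np.kron(K(2), Javg(2)), np.kron(K(3), Javg(3))), Javg(r))]

rng = np.random.default_rng(0)
y = rng.normal(size=72)
S_res = 1.0                        # placeholder residual sum of squares
g = 36 * (r - 1)

SS = [y @ Q @ y for Q in parts]    # ||Q y||^2, since each Q is an OPM
F1 = (g / 2) * SS[0] / S_res
F2 = (g / 1) * SS[1] / S_res
F3 = (g / 2) * SS[2] / S_res
F = (g / 5) * sum(SS) / S_res      # the global statistic F(1)

assert np.isclose(F, (2 * F1 + F2 + 2 * F3) / 5)
```

Whatever the data, the global statistic is the rank-weighted mean of the component statistics, which is exactly (93).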
Thus, testing the sub-hypotheses separately, we may detect significant results that would not be found through global testing.
Let us now assume that we had used the reduced linear applications $x_{1}+x_{2}$ and $z_{1}+2z_{2}$ to generate blocks. For each pair of values $(b_{1},b_{2})$, $b_{1}=0,1$ and $b_{2}=0,1,2$, we will have a block. The composition of the blocks will be
b1 b2 | x1 x2 | z1 z2        b1 b2 | x1 x2 | z1 z2
0  0  | 0  0  | 0  0         1  0  | 0  1  | 0  0
      | 0  0  | 1  1               | 0  1  | 1  1
      | 0  0  | 2  2               | 0  1  | 2  2
      | 1  1  | 0  0               | 1  0  | 0  0
      | 1  1  | 1  1               | 1  0  | 1  1
      | 1  1  | 2  2               | 1  0  | 2  2
0  1  | 0  0  | 0  2         1  1  | 0  1  | 0  2
      | 0  0  | 1  0               | 0  1  | 1  0
      | 0  0  | 2  1               | 0  1  | 2  1
      | 1  1  | 0  2               | 1  0  | 0  2
      | 1  1  | 1  0               | 1  0  | 1  0
      | 1  1  | 2  1               | 1  0  | 2  1
0  2  | 0  0  | 0  1         1  2  | 0  1  | 0  1
      | 0  0  | 1  2               | 0  1  | 1  2
      | 0  0  | 2  0               | 0  1  | 2  0
      | 1  1  | 0  1               | 1  0  | 0  1
      | 1  1  | 1  2               | 1  0  | 1  2
      | 1  1  | 2  0               | 1  0  | 2  0
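The table can be regenerated directly from the two blocking applications. A short sketch (plain Python, our own illustration):

```python
from itertools import product

# blocks generated by x1 + x2 (mod 2) and z1 + 2*z2 (mod 3), as in the table
blocks = {}
for x1, x2, z1, z2 in product(range(2), range(2), range(3), range(3)):
    key = ((x1 + x2) % 2, (z1 + 2 * z2) % 3)     # the block label (b1, b2)
    blocks.setdefault(key, []).append((x1, x2, z1, z2))

assert len(blocks) == 6                          # 2 x 3 blocks
assert all(len(v) == 6 for v in blocks.values()) # six treatments in each block
print(sorted(blocks[(0, 0)]))
```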
The sum of squares for blocks will have 5 degrees of freedom and be given by
$$SQBl=\left\|\left(\mathbf{A}(x_{1}+x_{2})^{\top}\otimes\mathbf{A}(l_{0,2})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2}+\left\|\left(\mathbf{A}(l_{0,1})^{\top}\otimes\mathbf{A}(z_{1}+2z_{2})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2}+\left\|\left(\mathbf{A}(x_{1}+x_{2})^{\top}\otimes\mathbf{A}(z_{1}+2z_{2})^{\top}\otimes\tfrac{1}{\sqrt{r}}\mathbf{1}_{r}^{\top}\right)\mathbf{y}\right\|^{2},\quad(94)$$
these terms being deleted from $S(1{\times}2)$, which now has only 20 degrees of freedom.
As before, we may test sub-hypotheses for the effects of the two six level factors and their interaction. For the interaction we had 13 sub-hypotheses and now we only have 10.
All the examples of designs of experiments are described in http://pessoa.fct.unl.pt/fmig/papers/laa1.
References
[1] W.G. Cochran, G.M. Cox, Experimental Designs, second ed., Wiley, New York, 1992 (first corr. printing).
[2] A. Dey, R. Mukerjee, Fractional Factorial Plans, Wiley, New York, 1999.
[3] M. Fonseca, J.T. Mexia, R. Zmyślony, Binary operations on Jordan algebras and orthogonal normal models, Linear Algebra Appl. 417 (1) (2006) 75–86.
[4] P. Jordan, J. von Neumann, E.P. Wigner, On an algebraic generalization of the quantum mechanical formalism, Ann. of Math. 35 (2) (1934) 29–64.
[5] Douglas C. Montgomery, Design and Analysis of Experiments, sixth ed., John Wiley & Sons, Hoboken, 2005.
[6] J.T. Mexia, Duality of tests F and the Scheffé multiple comparison method in presence of controlled heteroscedasticity, best linear unbiased estimates, Comput. Statist. Data Anal. (31) (1990) 271–281.
[7] Rahul Mukerjee, C.F.J. Wu, A Modern Theory of Factorial Designs, Springer, New York, 2006.
[8] J. Seely, Linear spaces and unbiased estimation. An application to the mixed linear model, Ann. Math. Stat. (41) (1970) 1735–1748.
[9] J. Seely, Quadratic subspaces and completeness, Ann. Math. Stat. (42) (1971) 710–721.
[10] J. Seely, G. Zyskind, Linear spaces and minimum variance unbiased estimation, Ann. Math. Stat. (42) (1971) 691–703.
[11] J. Seely, Minimal sufficient statistics and completeness for multivariate normal families, Sankhyā Ser. A 39 (1977) 170–185.
[12] D.M. VanLeeuwen, D.S. Birkes, J.F. Seely, Balance and orthogonality in designs for mixed classification models, Ann. Stat. 27 (6) (1999) 1927–1947.
[13] D.M. van Leeuwen, J.F. Seely, D.S. Birkes, Sufficient conditions for orthogonal designs in mixed linear models, J. Statist. Plann. Inference 73 (1–2) (1998) 373–389.