Tin-Yau Tam
Department of Mathematics and Statistics
221 Parker Hall
Auburn University
AL 36849, USA
tamtiny@auburn.edu
Some portions are from B.Y. Wang's Foundation of Multilinear Algebra (1985, in Chinese).
Chapter 1
Review of Linear Algebra
Im T = {T v : v ∈ V } ⊆ W,   Ker T = {v ∈ V : T v = 0} ⊆ V
T v = α_1 w_1 + α_2 w_2 = T (α_1 T^{-1} w_1 + α_2 T^{-1} w_2),
Problems
1. Show that dim Hom (V, W ) = dim V · dim W .
2. Let T ∈ Hom (V, W ). Prove that rank T ≤ min{dim V, dim W }.
3. Show that if T ∈ Hom (V, U ), S ∈ Hom (U, W ), then rank ST ≤ min{rank S, rank T }.
4. Show that the inverse of T ∈ Hom (V, W ) is unique, if it exists. Moreover T is invertible if and only if rank T = dim V = dim W .
5. Show that V and W are isomorphic if and only if dim V = dim W .
Theorem 1.2.1. Let T ∈ Hom (V, W ) and S ∈ Hom (W, U ) with bases E = {e_1, . . . , e_n} for V , F = {f_1, . . . , f_m} for W and G = {g_1, . . . , g_l} for U . Then
[ST ]^G_E = [S]^G_F [T ]^F_E
and in particular
[T v]_F = [T ]^F_E [v]_E,   v ∈ V.
Proof. Let A = [T ]^F_E and B = [S]^G_F, i.e., T e_j = ∑_{i=1}^m a_{ij} f_i, j = 1, . . . , n, and S f_i = ∑_{k=1}^l b_{ki} g_k, i = 1, . . . , m. So
ST e_j = ∑_{i=1}^m a_{ij} S f_i = ∑_{k=1}^l ( ∑_{i=1}^m b_{ki} a_{ij} ) g_k = ∑_{k=1}^l (BA)_{kj} g_k.
So [ST ]^G_E = BA = [S]^G_F [T ]^F_E.
Consider I := I_V ∈ End V . The matrix [I]^{E'}_E is called the transition matrix from E to E' since
[v]_{E'} = [Iv]_{E'} = [I]^{E'}_E [v]_E,   v ∈ V,
i.e., [I]^{E'}_E transforms the coordinate vector [v]_E with respect to E into [v]_{E'} with respect to E'. From Theorem 1.2.1, [I]^E_{E'} = ([I]^{E'}_E)^{-1}.
Two operators S, T ∈ End V are said to be similar if there is an invertible P ∈ End V such that S = P^{-1} T P . Similarity is an equivalence relation and is denoted by ∼.
Theorem 1.2.2. Let T ∈ End V with dim V = n and let E and E' be bases of V . Then A := [T ]^E_E and B := [T ]^{E'}_{E'} are similar, i.e., there is an invertible P ∈ C_{n×n} such that P^{-1} A P = B. Conversely, similar matrices are matrix representations of the same operator with respect to different bases.
Proof. Let I = I_V . By Theorem 1.2.1, [I]^E_{E'} [I]^{E'}_E = [I]^E_E = I_n, where I_n is the n × n identity matrix. Denote by P := [I]^E_{E'} (the transition matrix from E' to E) so that P^{-1} = [I]^{E'}_E. Thus
[T ]^{E'}_{E'} = [IT I]^{E'}_{E'} = [I]^{E'}_E [T ]^E_E [I]^E_{E'} = P^{-1} [T ]^E_E P.
Problems
1. Let E = {e_1, . . . , e_n} and F = {f_1, . . . , f_m} be bases for V and W . Show that the matrix representation map φ := [·]^F_E : Hom (V, W ) → C_{m×n} is an isomorphism.
2. Let E and F be bases of V and W respectively. Show that if T ∈ Hom (V, W ) is invertible, then [T^{-1}]^E_F = ([T ]^F_E)^{-1}.
3. Let E and F be bases of V and W respectively. Let T ∈ Hom (V, W ) and A = [T ]^F_E. Show that rank A = rank T .
Proof. Let
e_1 = v_1 / ‖v_1‖,
e_2 = (v_2 − (v_2, e_1)e_1) / ‖v_2 − (v_2, e_1)e_1‖,
...
e_n = (v_n − (v_n, e_1)e_1 − · · · − (v_n, e_{n−1})e_{n−1}) / ‖v_n − (v_n, e_1)e_1 − · · · − (v_n, e_{n−1})e_{n−1}‖.
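The displayed construction is the Gram–Schmidt process. A minimal NumPy sketch of it (the sample vectors are arbitrary and chosen only for illustration):

import numpy as np

def gram_schmidt(vectors):
    # orthonormalize a list of linearly independent vectors (classical Gram-Schmidt)
    basis = []
    for v in vectors:
        w = v - sum((v @ e.conj()) * e for e in basis)   # remove components along e_1, ..., e_{k-1}
        basis.append(w / np.linalg.norm(w))
    return basis

V = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
E = np.array(gram_schmidt(V))
print(np.allclose(E @ E.conj().T, np.eye(3)))   # the resulting set is orthonormal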
is called the standard inner product on C^n. The induced norm is called the 2-norm and is denoted by ‖·‖_2.
Problems
2. Suppose that S = {v_1, . . . , v_n} is an orthogonal set of nonzero vectors. With ∑_{i=1}^n a_i v_i = 0, a_j (v_j, v_j) = (∑_{i=1}^n a_i v_i, v_j) = 0 so that a_j = 0 for all j since (v_j, v_j) ≠ 0. Thus S is linearly independent.
5. Straightforward computation.
1.4 Adjoints
Let V, W be inner product spaces. For each T ∈ Hom (V, W ), the adjoint of T is the S ∈ Hom (W, V ) such that (T v, w)_W = (v, Sw)_V for all v ∈ V , w ∈ W , and it is denoted by T^*. Clearly (T^*)^* = T .
Theorem 1.4.1. Let W, V be inner product spaces. Each T ∈ Hom (V, W ) has a unique adjoint.
For any v ∈ V , by Theorem 1.3.3, v = ∑_{i=1}^n (v, e_i)_V e_i so that
(v, Sw)_V = (v, ∑_{i=1}^n (w, T e_i)_W e_i)_V = ∑_{i=1}^n (T e_i, w)_W (v, e_i)_V
= (T ∑_{i=1}^n (v, e_i)_V e_i, w)_W = (T v, w)_W.
So [T^*]^E_F = A^* = ([T ]^F_E)^*.
Problems
1. Let S, T Hom (V, W ) where V, W are inner product spaces. Prove that
[T^*]^G_G = P^{-1} [T^*]^E_E P = P^{-1} ([T ]^E_E)^* P
= P^{-1} (P [T ]^G_G P^{-1})^* P = (P^* P)^{-1} ([T ]^G_G)^* (P^* P),
i.e., [T^*]^G_G and ([T ]^G_G)^* are similar.
Remark: If G is an orthonormal basis, then P is unitary and thus [T^*]^G_G = ([T ]^G_G)^*, a special case of Theorem 1.4.2.
5. Follows from Theorem 1.4.2, Problem 2.3 and rank A = rank A^*, where A is a matrix.
The unitary matrices in C_{n×n} form a group, denoted by U_n(C). One can see immediately that A is unitary if and only if A has orthonormal columns (equivalently orthonormal rows, since AA^* = I as well, from Problem 1.8).
Theorem 1.5.1. (Schur triangularization theorem) Let A ∈ C_{n×n}. There is U ∈ U_n(C) such that U^* A U is upper triangular.
Proof. Let λ_1 be an eigenvalue of A with unit eigenvector x_1, i.e., Ax_1 = λ_1 x_1 with ‖x_1‖_2 = 1. Extend via Theorem 1.3.4 to an orthonormal basis {x_1, . . . , x_n} for C^n and set Q = [x_1 · · · x_n], which is unitary. Since Q^* A x_1 = λ_1 Q^* x_1 = λ_1 (1, 0, . . . , 0)^T,
A' := Q^* A Q = ( λ_1   ∗
                   0    A_1 )
where A_1 ∈ C_{(n−1)×(n−1)}. By induction, there is U_1 ∈ U_{n−1}(C) such that U_1^* A_1 U_1 is upper triangular. Set
P := ( 1   0
       0   U_1 )
which is unitary. So
P^* A' P = ( λ_1   ∗
              0    U_1^* A_1 U_1 )
is upper triangular. Set U = QP .
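The proof is constructive and the recursion can be imitated numerically. A rough NumPy sketch (only an illustration of the argument; the helper name schur_triangularize is ours):

import numpy as np

def schur_triangularize(A):
    # return unitary U with U* A U upper triangular, following the proof of Theorem 1.5.1
    n = A.shape[0]
    if n == 1:
        return np.eye(1, dtype=complex)
    lam, vecs = np.linalg.eig(A)
    x1 = vecs[:, [0]] / np.linalg.norm(vecs[:, 0])      # unit eigenvector for lambda_1
    Q, _ = np.linalg.qr(np.hstack([x1, np.random.default_rng(1).standard_normal((n, n - 1))]))
    Q = Q.astype(complex)
    Q[:, [0]] = x1                                       # keep x1 itself as the first column
    Aprime = Q.conj().T @ A @ Q                          # block form [[lambda_1, *], [0, A1]]
    U1 = schur_triangularize(Aprime[1:, 1:])
    P = np.eye(n, dtype=complex)
    P[1:, 1:] = U1
    return Q @ P

A = np.random.default_rng(0).standard_normal((4, 4))
U = schur_triangularize(A)
T = U.conj().T @ A @ U
print(np.allclose(np.tril(T, -1), 0, atol=1e-8))         # strictly lower part vanishes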
Theorem 1.5.2. Let T ∈ End V , where V is an inner product space. Then there is an orthonormal basis E of V such that [T ]^E_E is upper triangular.
Proof. For any orthonormal basis E' = {e'_1, . . . , e'_n} of V let A := [T ]^{E'}_{E'}. By Theorem 1.5.1 there is U ∈ U_n(C), where n = dim V , such that U^* A U is upper triangular. Since U is unitary, E = {e_1, . . . , e_n} is also an orthonormal basis, where e_j = ∑_{i=1}^n u_{ij} e'_i, j = 1, . . . , n (check!), and [I]^{E'}_E = U . Hence
[T ]^E_E = [I]^E_{E'} [T ]^{E'}_{E'} [I]^{E'}_E = U^{-1} A U = U^* A U since U^{-1} = U^*.
Lemma 1.5.3. Let A ∈ C_{n×n} be normal and let r be fixed. Then a_{rj} = 0 for all j ≠ r if and only if a_{ir} = 0 for all i ≠ r.
Proof. From a_{rj} = 0 for all j ≠ r and AA^* = A^* A,
|a_{rr}|^2 = ∑_{j=1}^n |a_{rj}|^2 = (AA^*)_{rr} = (A^* A)_{rr} = |a_{rr}|^2 + ∑_{i≠r} |a_{ir}|^2.
Problems
1. Show that "upper triangular" in Theorem 1.5.1 may be replaced by "lower triangular".
2. Prove that A ∈ End V is unitary if and only if ‖Av‖ = ‖v‖ for all v ∈ V .
3. Show that A ∈ C_{n×n} is psd (pd) if and only if A = B^*B for some (invertible) matrix B. In particular B may be chosen lower or upper triangular.
4. Let V be an inner product space. Prove that (i) (Av, v) = 0 for all v ∈ V if and only if A = 0, (ii) (Av, v) ∈ R for all v ∈ V if and only if A is Hermitian. What happens if the underlying field C is replaced by R?
5. Show that if A is psd, then A is Hermitian. What happens if the underlying field C is replaced by R?
6. Prove Theorem 1.5.5.
7. Prove that if T ∈ Hom (V, W ) where V and W are inner product spaces, then T^*T ≥ 0 and T T^* ≥ 0. If T is invertible, then T^*T > 0 and T T^* > 0.
(T u, v) = hu, vi = hv, ui = (T v, u) = (T u, v)
The next theorem shows that in the above manner inner products are in
one-one correspondence with pd matrices.
Clearly
(T u, v) = ∑_{i=1}^n ⟨u, e_i⟩ (e_i, v) = ⟨u, ∑_{i=1}^n (v, e_i) e_i⟩ = ⟨u, v⟩.
Problems
1. Let E = {e_1, . . . , e_n} be a basis of V . For any u = ∑_{i=1}^n a_i e_i and v = ∑_{i=1}^n b_i e_i, show that (u, v) := ∑_{i=1}^n a_i \overline{b_i} is the unique inner product on V so that E is an orthonormal basis.
3. Let V be an inner product space. Show that for each A ∈ C_{n×n} there are u_1, . . . , u_n, v_1, . . . , v_n ∈ C^n such that a_{ij} = (u_i, v_j), 1 ≤ i, j ≤ n.
2.
3.
Hence T hi = (S )1 T () S .
(a) (T |W ) = T |W .
So (T |W ) = T |W .
(b) Follows from Problem 7.3.
Problems
So V = Im P_1 ⊕ · · · ⊕ Im P_m (Problem 8.1).
Conversely if V = W_1 ⊕ · · · ⊕ W_m, then for any v ∈ V there is a unique factorization v = w_1 + · · · + w_m, w_i ∈ W_i, i = 1, . . . , m. Define P_i ∈ End V by P_i v = w_i. It is easy to see that each P_i is a projection, Im P_i = W_i and P_1 + · · · + P_m = I_V . For the uniqueness, if there are projections Q_1, . . . , Q_m such that Q_1 + · · · + Q_m = I_V and Im Q_i = W_i for all i, then for each v ∈ V , from the unique factorization Q_i v = P_i v so that P_i = Q_i for all i.
Problems
1. Show that if V = W_1 + · · · + W_m, then V = W_1 ⊕ · · · ⊕ W_m if and only if dim V = dim W_1 + · · · + dim W_m.
2. Let P_1, . . . , P_m ∈ End V be projections on V and P_1 + · · · + P_m = I_V . Show that P_i P_j = 0 whenever i ≠ j.
3. Let P_1, . . . , P_m ∈ End V be projections on V . Show that P_1 + · · · + P_m is a projection if and only if P_i P_j = 0 whenever i ≠ j.
1.
3. P_1 + · · · + P_m being a projection means
∑_i P_i + ∑_{i≠j} P_i P_j = ∑_i P_i^2 + ∑_{i≠j} P_i P_j = (P_1 + · · · + P_m)^2 = P_1 + · · · + P_m.
7.
⟨T^*(φ), v⟩ = ⟨φ, T (v)⟩
where the bracket ⟨f, v⟩ := f (v) is the duality pairing of V with its dual space, and that on the right is the duality pairing of W with its dual. This identity characterizes the dual map T^*, and is formally similar to the definition of the adjoint (if V and W are inner product spaces). We remark that we use the same notation for the adjoint and the dual map of T ∈ Hom (V, W ). But it should be clear from the context, since the adjoint requires inner products while the dual map does not.
so that u := ∑_{i=1}^n \overline{f(e_i)} e_i. Uniqueness follows from the positive definiteness of the inner product: if (v, u') = (v, u) for all v ∈ V , then (v, u − u') = 0 for all v. Pick v = u − u' to have u = u'.
The map φ : V → V^* defined by φ(u) = (·, u) is clearly bijective by the previous statement.
m
i Vi = V1 Vm = {(v1 , . . . , vm ) : vi Vi , i = 1, . . . , m}
Problems
1. Show that Theorem 1.9.3 remains true if the inner product is replaced by a nondegenerate bilinear form B(·, ·) on a real vector space. Nondegeneracy means that B(u, v) = 0 for all v ∈ V implies that u = 0.
So T^* f_j^* = ∑_i a_{ji} e_i^*, i.e., [T^*]^{E^*}_{F^*} = A^T, i.e., ([T ]^F_E)^T = [T^*]^{E^*}_{F^*}. This explains why T^* is also called the transpose of T .
(Roy) Suppose E = {e_1, . . . , e_n} and F = {f_1, . . . , f_m}. Let [T ]^F_E = (a_{ij}) ∈ C_{m×n} and [T^*]^{E^*}_{F^*} = (b_{pq}) ∈ C_{n×m}. By definition, T e_j = ∑_{i=1}^m a_{ij} f_i and T^* f_q^* = ∑_{p=1}^n b_{pq} e_p^*. Since e_j^*(e_i) = δ_{ij} and f_j^*(f_i) = δ_{ij}, the definition (T^* f_j^*) e_i = f_j^*(T e_i) implies that b_{ij} = a_{ji}. Thus [T^*]^{E^*}_{F^*} = ([T ]^F_E)^T.
is a basis of V1 Vm (check!).
1.10 Notations
Denote by S_m the symmetric group on {1, . . . , m}, which is the group of all bijections on {1, . . . , m}. Each σ ∈ S_m is represented by
σ = ( 1,    . . . , m
      σ(1), . . . , σ(m) ).
Proof.
det C[α|β] = ∑_{σ∈S_m} ε(σ) ∏_{i=1}^m c_{α(i),βσ(i)}
= ∑_{σ∈S_m} ε(σ) ∏_{i=1}^m ∑_{j=1}^n a_{α(i),j} b_{j,βσ(i)}
= ∑_{σ∈S_m} ε(σ) ∑_{γ∈Γ_{m,n}} ∏_{i=1}^m a_{α(i),γ(i)} b_{γ(i),βσ(i)}
= ∑_{γ∈Γ_{m,n}} ∏_{i=1}^m a_{α(i),γ(i)} ∑_{σ∈S_m} ε(σ) ∏_{i=1}^m b_{γ(i),βσ(i)}
= ∑_{γ∈Γ_{m,n}} ∏_{i=1}^m a_{α(i),γ(i)} det B[γ|β]
= ∑_{γ∈D_{m,n}} ∏_{i=1}^m a_{α(i),γ(i)} det B[γ|β]
= ∑_{ω∈Q_{m,n}} ∑_{σ∈S_m} ∏_{i=1}^m a_{α(i),ωσ(i)} det B[ωσ|β]
= ∑_{ω∈Q_{m,n}} ( ∑_{σ∈S_m} ε(σ) ∏_{i=1}^m a_{α(i),ωσ(i)} ) det B[ω|β]
= ∑_{ω∈Q_{m,n}} det A[α|ω] det B[ω|β].
X X X m
Y nm
Y
s() s()
= (1) ()()(1) a(i)(i) a0 (j)0 (j)
Qm,n Sm Snm i=1 j=1
X m
Y n
Y
= (1)s() ()(1)m(m+1)/2 a(i)(i) a0 (im)(i)
Sn i=1 i=m+1
X n
Y
= ()() a(i)(i)
Sn i=1
= det A.
In particular
| det U [|]| = | det U (|)|. (1.11)
Proof. Because det I[|] = and U [|] = U [|]T , apply Cauchy-Binet
theorem on In = U U to yield
X
= det U [|] det U [|]. (1.12)
Qm,n
Multiplying both sides by U [|] and sum over Sm , the left side becomes
X
det U det U [|] = det U det U [|]
Qm,n
Problems
1. Prove equation (1.2).
2. Prove that ∏_{i=1}^m ∑_{j=1}^n a_{ij} = ∑_{γ∈Γ_{m,n}} ∏_{i=1}^m a_{iγ(i)}.
3. Prove a general Laplace expansion theorem: Let A ∈ C_{n×n}, α ∈ Q_{m,n}, and 1 ≤ m ≤ n. Then
det A = ∑_{β∈Q_{m,n}} (−1)^{s(α)+s(β)} det A[α|β] det A(α|β).
4. Show that
|Γ(n_1, . . . , n_m)| = ∏_{i=1}^m n_i,   |Γ_{m,n}| = n^m,   |G_{m,n}| = (n+m−1 choose m),
|D_{m,n}| = (n choose m) m!,   |Q_{m,n}| = (n choose m).
3.
4. The i-th slot of each sequence in Γ(n_1, . . . , n_m) has n_i choices. So |Γ(n_1, . . . , n_m)| = ∏_{i=1}^m n_i, and |Γ_{m,n}| = n^m follows by taking n_i = n for all i. |G_{m,n}| = (n+m−1 choose m). Now |Q_{m,n}| is the number of ways of picking m distinct numbers from 1, . . . , n, so |Q_{m,n}| = (n choose m) is clear, and |D_{m,n}| = |Q_{m,n}| · m! = (n choose m) m!.
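These counts are easy to confirm by brute force for small m and n; a short Python check (illustrative only):

from itertools import product
from math import comb, factorial

m, n = 3, 4
Gamma = list(product(range(1, n + 1), repeat=m))                    # Gamma_{m,n}
G = [a for a in Gamma if all(a[i] <= a[i + 1] for i in range(m - 1))]  # nondecreasing
D = [a for a in Gamma if len(set(a)) == m]                             # distinct entries
Q = [a for a in Gamma if all(a[i] <  a[i + 1] for i in range(m - 1))]  # strictly increasing

assert len(Gamma) == n ** m
assert len(G) == comb(n + m - 1, m)
assert len(D) == comb(n, m) * factorial(m)
assert len(Q) == comb(n, m)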
Chapter 2
Group representation
theory
pi,(j) = i,(j) , i, j = 1, . . . , n
σ(i_t) = i_{t+1},   t = 1, . . . , k − 1,
σ(i_k) = i_1,
σ(i) = i,   i ∉ {i_1, . . . , i_k},
and we write σ = (i_1, . . . , i_k). Two cycles (i_1, . . . , i_k) and (j_1, . . . , j_m) are said to be disjoint if there is no intersection between the two sets {i_1, . . . , i_k} and {j_1, . . . , j_m}. From the definition, we may express the cycle as
For example,
σ = ( 1 2 3 4 5 6 7
      2 5 1 7 3 6 4 ) = (1 2 5 3)(4 7)(6) = (1 2 5 3)(4 7).
σ = ( 1 2 3 4 5
      2 5 4 3 1 ) = (1 2 5)(3 4) = (3 4)(1 2 5) = (3 4)(5 1 2).
So c(σ) = 2.
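A small Python helper (ours, purely for illustration) that produces the disjoint cycle decomposition of a permutation given in one-line notation, applied to the first example above:

def cycles(perm):
    # disjoint cycle decomposition; perm[i] is the image of i+1, fixed points appear as 1-cycles
    n, seen, out = len(perm), set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cyc, j = [], start
        while j not in seen:
            seen.add(j)
            cyc.append(j)
            j = perm[j - 1]
        out.append(tuple(cyc))
    return out

sigma = [2, 5, 1, 7, 3, 6, 4]                          # the first example above
print(cycles(sigma))                                   # [(1, 2, 5, 3), (4, 7), (6,)]
print(sum(1 for c in cycles(sigma) if len(c) > 1))     # 2 nontrivial cycles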
Cycles of length 2, i.e., (i j), are called transpositions.
Theorem 2.1.3. Each element of S_n can be written (not uniquely) as a product of transpositions.
Though the decomposition is not unique, the parity (even or odd) of the number of transpositions for each σ ∈ S_n is unique (see Merris p.56-57).
Problems
1. Prove that each element of S_n is a product of transpositions. (Hint: (i_1 i_2 · · · i_k) = (i_1 i_2)(i_2 i_3) · · · (i_{k−1} i_k) = (i_1 i_k)(i_1 i_{k−1}) · · · (i_1 i_2).)
2. Show that S_n is generated by the transpositions (12), (23), . . . , (n−1, n). (Hint: express each transposition as a product of (12), (23), . . . , (n−1, n).)
3. Express the following permutations as a product of nonintersecting cycles:
(a) (a b)(a i1 . . . ir b j1 . . . jr ).
(b) (a b)(a i1 . . . ir b j1 . . . jr )(a b).
(c) (a b)(a i1 . . . ir ).
(d) (a b)(a i1 . . . ir )(a b).
4. Express = (24)(1234)(34)(356)(56) S7 as a product of disjoint cycles
and find c().
T (σ_1 σ_2) = T (σ_1) T (σ_2),   σ_1, σ_2 ∈ G.
A(σ_1 σ_2) = [T (σ_1 σ_2)]^E_E = [T (σ_1) T (σ_2)]^E_E = [T (σ_1)]^E_E [T (σ_2)]^E_E = A(σ_1) A(σ_2).
B(σ) = [T (σ)]^F_F = [I]^F_E [T (σ)]^E_E [I]^E_F = P^{-1} A(σ) P,   σ ∈ G,
where P := [I]^E_F, i.e., A and B are isomorphic. In other words, the matrix representations with respect to different bases of a linear representation are isomorphic. Similarly the representations S : G → GL(V ) and T : G → GL(W ) are said to be isomorphic if there is an invertible L ∈ Hom (V, W ) such that T (σ) = L^{-1} S(σ) L for all σ ∈ G. In other words, isomorphic representations have the same matrix representation by choosing appropriate bases accordingly.
Given any representation T : G GL(V ) with dim V = n, a choice of basis
in V identifies with Cn (i.e., isomorphic to via the coordinate vector) and call
this isomorphism L. Thus we have a new representation S : G GLn (C)
which is isomorphic to T . So any representation of G of degree n is isomorphic
to a matrix representation on Cn . The vector space representation does not
lead to new representation, but it is a basis free approach; while at the same
time the matrix representation may give us some concrete computation edge.
So we often use both whenever it is convenient to us.
The basic problem of representation theory is to classify all representations
of G, up to isomorphism. We will see that character theory is a key player.
Example 2.2.1. (a) The principal representation: 7 1 for all G,
is of degree 1.
(e) S3 has 6 elements: e, (23), (12), (123), (132), (13). Define T : S3 GL2 (C)
by setting their images accordingly as
1 1 0 1 0 1 1 1 1 0
I2 , , , , ,
0 1 1 0 1 1 1 0 1 1
T |W () = T ()|W .
The following theorem asserts that reducibility and completely reducibility co-
incide for finite group representations.
Theorem 2.2.3. (Maschke) Let A be a matrix representation of the finite group
G. If A is reducible, then A is completely reducible.
Proof. Since A is reducible, we may assume that
A(σ) = ( A_1(σ)   0
         C(σ)    A_2(σ) ),   for all σ ∈ G.
Set Q := (1/|G|) ∑_{σ∈G} C(σ) A_1(σ)^{-1}. Then
Problems
1. Prove that the degree 3 representation of S_3: σ ↦ (δ_{iσ(j)}) ∈ GL_3(C) is reducible (Hint: find a 1-dimensional invariant subspace).
2. Prove that the degree 2 representation in Example 2.2.1(e) is irreducible.
3. Let T : G → GL_n(C) be a representation of a finite group G. Prove that there is an inner product such that T (σ) is unitary for all σ ∈ G, i.e., for any u, v ∈ V and σ ∈ G we have (T (σ)u, T (σ)v) = (u, v). (Hint: Let ⟨·, ·⟩ be any arbitrary inner product. Then consider its average (u, v) := (1/|G|) ∑_{σ∈G} ⟨T (σ)u, T (σ)v⟩.)
4. From Problem 3.
1 1
5. e 7 I3 , 7 1 , 2 7 1. It is reducible using the
1 1
argument of Problem 1 (or observe that C3 is abelian so all irreducible
representations must be of degree 1 from Problem 3.1).
6. With respect to some inner product, each A() is unitary by Theorem
2.2.5, if A : G GLn (V ) is reducible, i.e., there is a nontrivial invariant
subspace W . For each v W , we have (A()v, A()w) = (v, w) = 0 for
all w W . But A()W = W so that (A()v, W ) = {0}, i.e., A()W
W for all A. So W is invariant under A() for all G.
Im L = {Lv : v V } W, Ker L = {v V : Lv = 0} V.
For any G,
X X
A()(M ) = A( 1 )M B() = A( 1 )M B( ) = (M )B().
G G
|G|
So cst = n st and substitute it into (2.5) to yield the desired result.
Theorem 2.3.5. Let A() = (aij ()) and B() = (bij ()) be two irreducible
matrix representations of G of degree n and m respectively. Then for any G,
(
X |G|
1
ais ( )btj () = n aij ()st if A = B
G
0 if A and B not isomorphic
P
i.e., G ais ( 1 )atk () = |G|
n aik ()st .
Similarly multiply both sides of (2.6) by bjk () and sum over j to have the
second part of the theorem.
Each entry of A() can be viewed as a function G C. The totality of
A() as runs over G is viewed as a |G|-dimensional complex vector.
Theorem 2.3.6. Let A() = (aij ()) be an irreducible degree n matrix repre-
sentation of G. Then the n2 functions aij : G C are linearly independent.
Problems
Pr
3. (Roy) The second claim t=1 n2t |G| follows from the first one, because
dim C[G] = |G|. Suppose there are ctij C such that
r X
X
ctij atij () = 0, for all G.
t=1 i,j
Multiply (on the right) both sides by aspq ( 1 ) and sum over , by The-
orem 2.3.5
r X
X X |G| X s s |G| X s s
0= ctij atij ()aspq ( 1 ) = cij aiq ()jp = c a ()
t=1 i,j
ns i,j ns i ip iq
G
for all G. Since the above equation holds for all s = 1, . . . , r and
1 p, q ns , we conclude by Theorem 2.3.6 again that csij = 0 for all
1 i, j ns . Thus all csij s are zero by induction hypothesis.
2.4 Characters
In this section, we assume that G is a finite group.
Let A be a (matrix or linear) representation of G. The function χ : G → C defined by
χ(σ) := tr A(σ)
is called a character of G. If A is irreducible, we call χ an irreducible character. Characters are class functions on G, i.e., χ(τστ^{-1}) is constant for τ ∈ G, where
[σ] = {τστ^{-1} : τ ∈ G}
denotes the conjugacy class in G containing σ ∈ G. So characters are independent of the choice of basis in the representation. Though non-similar matrices could have the same trace, we will see in this section that characters completely determine the representation up to isomorphism, i.e., representations have the same characters if and only if they are isomorphic.
Clearly (e) = tr A(e) = tr In = n where n = (e) is the degree of the
representation. When (e) = 1, is said to be linear; otherwise it is nonlinear.
Theorem 2.4.2. (Orthogonality relation of the first kind) Let χ, μ ∈ I(G). Then
(1/|G|) ∑_{σ∈G} χ(σ) μ(σ^{-1}) = 1 if χ = μ, and 0 if χ ≠ μ.   (2.8)
(1/|G|) ∑_{σ∈G} χ(σ) χ(σ^{-1}) = (1/|G|) ∑_{i,j} (|G|/n) δ_{ij} = 1.
G
Proof. Similar to the proof of Theorem 2.4.2 and use Theorem 2.3.5.
Theorem 2.4.4. For any character χ of G, χ(σ^{-1}) = \overline{χ(σ)} for all σ ∈ G.
Proof. From Problem 2.3, χ is obtained from a unitary representation, i.e., A(σ^{-1}) = A(σ)^{-1} = A(σ)^*. So
Proof. Exercise.
We sometimes denote it as (, )G . With respect to this inner product, from
Theorem 2.4.2 and Theorem 2.4.4 the irreducible characters of G are orthonor-
mal in C[G] and thus are linearly independent. So we have finitely many irre-
ducible characters of G. Denote them by
I(G) = {1 , . . . , k }.
From (2.4.2)
(i , j ) = ij . (2.10)
A function f : G C is said to be a class function if f is constant on each
conjugacy class of G. Denote by C(G) C[G] the set of all class functions. It
is clear that C(G) is a vector space. Since G is a disjoint union of its conjugacy
classes, dim C(G) is the number of conjugacy classes in G.
Xl t
X t
X
A () = tr ( Ai ()) = ri tr Di () = ri i () (2.11)
i=1 i=1 i=1
Similarly
t
X
B () = si i ().
i=1
From A = B we have
t
X
(ri si )i () = 0, G.
i=1
Now χ is irreducible, i.e., A is irreducible, if and only if exactly one m_i = 1 and the other m_i are zero. In other words, χ is irreducible if and only if (χ, χ) = 1.
The regular representation Q : G GLn (C), where n = |G|, contains all
the irreducible representations of G as sub-repesentations. We now study its
character structure.
Theorem 2.4.9. The multiplicity of χ_i in χ_Q is χ_i(e), i.e., the degree of χ_i:
χ_Q = ∑_{i=1}^k χ_i(e) χ_i.   (2.12)
We have the following important relation between the order of G and the
degrees of all irreducible characters.
Theorem 2.4.10. |G| = ∑_{i=1}^k χ_i(e)^2 = ∑_{χ∈I(G)} χ(e)^2.
Proof. Use (2.13) and (2.12) to have |G| = χ_Q(e) = ∑_{i=1}^k χ_i(e)^2.
Immediately we have an upper bound for the degree of any irreducible character χ which is not the principal character: χ(e) ≤ (|G| − 1)^{1/2}.
Example 2.4.11. |S3 | = 6 so there are three irreducible characters: the prin-
cipal character, the alternating character and an irreducible character of degree
2 (see Example 2.2.1(e)). It is because 6 = 1 + 1 + 22 .
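The orthogonality relations for this character table of S_3 are easy to check numerically. A NumPy sketch (the class sizes and character values used are the standard ones):

import numpy as np

# conjugacy classes of S3: identity, the 3 transpositions, the 2 three-cycles
class_sizes = np.array([1, 3, 2])
chars = np.array([[1,  1,  1],    # principal character
                  [1, -1,  1],    # alternating (sign) character
                  [2,  0, -1]])   # the degree-2 irreducible character

gram = (chars * class_sizes) @ chars.T / class_sizes.sum()
print(gram)                        # identity matrix: orthogonality of the first kind
print((chars[:, 0] ** 2).sum())    # 1 + 1 + 4 = 6 = |S3|, as in Theorem 2.4.10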
{atij : i, j = 1, . . . , n, t = 1, . . . , k}
is a basis of C[G]. Let f C(G) C[G]. Then for some ctij , i, j = 1, . . . , n and
t = 1, . . . , k,
k X
X nt
f () = ctij atij (), for all G.
t=1 i,j=1
a linear combination of 1 , . . . , k .
Example 2.4.13. Since |S4 | = 24 and there are 5 conjugacy classes, there are
5 irreducible characters of G: principal character, alternating character, two
irreducible characters of degree 3 and one irreducible character of degree 2 as
24 = 1 + 1 + 22 + 32 + 32 (the principal and alternating characters are the only
linear characters).
Let [] be the conjugacy class in G containing .
Theorem 2.4.14. (Orthogonal relation of the second kind)
(
|[]| X 1 if [] = []
()() = (2.14)
|G| 0 if [] 6= []
I(G)
Problems
(a) (
X |G| if 1
() =
G
0 otherwise
P
(b) G |()|2 = |G|.
P |G|
3. Prove that I(G) |()|2 = |[]| for any G.
8. Find all the irreducible characters of the alternating group A3 (A3 is iso-
morphic to the cyclic group C3 ).
for all i = 1, . . . , m, i.e., T is linear while restricted on the ith coordinate for all
i. Indeed the condition can be weakened to
T (v1 +v10 , v2 +v20 ) = T (v1 , v2 )+T (v10 , v20 ) = T (v1 , 0)+T (0, v2 )+T (v10 , 0)+T (0, v20 ).
Since is multilinear,
(v1 +v10 , v2 +v20 ) = (v1 , v2 +v20 )+(v10 , v2 +v20 ) = (v1 , v2 )+(v1 , v20 )+(v10 , v2 )+(v10 , v20 ).
In particular (v1 , 0) = (0, v2 ) = 0, but T (v1 , 0) and T (0, v2 ) are not necessar-
ily zero.
Example 3.1.1. The following maps are multilinear.
(a) f : C × C → C defined by f (x, y) = xy.
(b) φ : V^* × V → C defined by φ(f, v) = f (v).
Let E_i = {e_{i1}, . . . , e_{in_i}} be a basis of V_i, i = 1, . . . , m. So each v_i ∈ V_i can be written as v_i = ∑_{j=1}^{n_i} a_{ij} e_{ij}, i = 1, . . . , m. Let φ : V_1 × · · · × V_m → W be multilinear. From the definition,
φ(v_1, . . . , v_m) = φ( ∑_{j_1=1}^{n_1} a_{1j_1} e_{1j_1}, . . . , ∑_{j_m=1}^{n_m} a_{mj_m} e_{mj_m} )
= ∑_{j_1=1}^{n_1} · · · ∑_{j_m=1}^{n_m} a_{1j_1} · · · a_{mj_m} φ(e_{1j_1}, . . . , e_{mj_m})
= ∑_{γ∈Γ} a_{1γ(1)} · · · a_{mγ(m)} φ(e_{1γ(1)}, . . . , e_{mγ(m)})
= ∑_{γ∈Γ} a_γ φ(e_γ)   (3.1)
where
a_γ := ∏_{i=1}^m a_{iγ(i)} ∈ C,
e_γ := (e_{1γ(1)}, . . . , e_{mγ(m)}) ∈ ×_{i=1}^m V_i,   (3.2)
for all γ ∈ Γ := Γ(n_1, . . . , n_m). The values φ(e_γ) in (3.1) completely determine the multilinear map φ.
So φ = ψ.
Let us point out some more differences between linear and multilinear maps. When T : V → W is linear, T is completely determined by the n := dim V values T (e_1), . . . , T (e_n), where E = {e_1, . . . , e_n} is a basis of V . But for a multilinear map φ, we need |Γ| = ∏_{i=1}^m dim V_i values. It is much more than dim(V_1 × · · · × V_m) = ∑_{i=1}^m dim V_i.
Recall Example 3.1.1(d) with m = n = 2, i.e., φ : C^2 × C^2 → C_{2×2} defined by φ(x, y) = xy^T. Since rank (xy^T) ≤ min{rank x, rank y^T} ≤ 1, we have det(φ(x, y)) = 0. But if x_1 := (1, 0)^T and x_2 := (0, 1)^T, then
det(φ(x_1, x_1) + φ(x_2, x_2)) = det I_2 = 1.
rank φ = dim⟨Im φ⟩.
Clearly rank φ ≤ ∏_{i=1}^m dim V_i. The multilinear map φ is called a tensor map if rank φ = ∏_{i=1}^m dim V_i. In other words, a tensor map is a multilinear map with maximal image span. Example 3.1.1(a) is a trivial tensor map.
Theorem 3.1.3. The multilinear map φ : V_1 × · · · × V_m → P is a tensor map if and only if the set {φ(e_γ) : γ ∈ Γ} is linearly independent, where e_γ is given in (3.2).
Proof. From (3.1), ⟨φ(e_γ) : γ ∈ Γ⟩ = ⟨Im φ⟩ and |Γ| = ∏_{i=1}^m dim V_i.
Theorem 3.1.4. Tensor map of V1 , . . . , Vm exists, i.e., there are W and :
V1 Vm W such that is a tensor map.
Qm
Proof. By Theorem 3.1.2, simply pick W so that dim W = i=1 dim Vi and let
{w : } be a basis so that w ( ) determine the multilinear which
is obviously a tensor map.
Clearly tensor maps on V1 Vm are not unique.
The study of multilinear maps is reduced to the study of certain linear maps (not unique) via a tensor map. A multilinear map φ : V_1 × · · · × V_m → P is said to have the universal factorization property if for any multilinear map ψ : V_1 × · · · × V_m → W , there is T ∈ Hom (P, W ) such that ψ = T ◦ φ.
(Commutative diagram: φ : V_1 × · · · × V_m → P , ψ : V_1 × · · · × V_m → W , and T : P → W with ψ = T ◦ φ.)
dimhIm i dimhIm i.
Qm
So rank = i=1 dim Vi and is a tensor map.
Problems
1. Suppose that : W1 Wm W is multilinear and Ti : Vi Wi is
linear, i = 1, . . . , m. Define : V1 Vm W by (v1 , . . . , vm ) =
(T1 v1 , . . . , Tm vm ). Show that is multilinear.
2. Prove that if : V1 Vm W is multilinear and T : V W is
linear, then T is multilinear.
3. Show that when n > 1, the determinant function det : Cn Cn C
is not a tensor map.
4. Suppose that the multilinear map : V1 Vm P has universal
factorization property. Show that the linear map T is unique if and only
if hIm i = P .
Solutions to Problems 3.1
1.
(v1 , . . . , vi + cvi0 , . . . , vm )
= (T1 v1 , . . . Ti (vi + cvi0 ), . . . , Tm vm )
= (T1 v1 , . . . , Ti (vi ) + cTi (vi0 ), . . . , Tm vm )
= (T1 v1 , . . . , Ti (vi ), . . . , Tm vm ) + c(T1 v1 , . . . , Ti (vi0 ), . . . , Tm vm )
= (v1 , . . . , vi , . . . , vm ) + c(v1 , . . . , vi0 , . . . , vm ).
2.
T (v1 , . . . , vi + cvi0 , . . . , vm )
= T ((v1 , . . . , vi , . . . , vm ) + c(v1 , . . . , vi0 , . . . , vm ))
= T (v1 , . . . , vi , . . . , vm ) + cT (v1 , . . . , vi0 , . . . , vm )
The elements of m
i=1 Vi are called tensors. The tensors of the form
(v1 , . . . , vm ) = v1 vm
{e : }, {e : }
(v1 , . . . , vm ) = T (v1 , . . . , vm ) = T v .
Problems
1. Show that if some vi = 0, then v1 vm = 0.
Pk
2. Let z U V so that z can be represented as z = i=1 ui vi . Prove that
if k is the smallest natural number among all such representations, then
{u1 , . . . , uk } and {v1 , . . . , vk } are linear independent sets respectively.
(v1 , . . . , vi + avi0 , . . . , vm )
= T ((v1 , . . . , vi , . . . , vm ) + a(v1 , . . . , vi0 , . . . , vm ))
= T ((v1 , . . . , vi , . . . , vm )) + aT ((v1 , . . . , vi0 , . . . , vm ))
= (v1 , . . . , vi , . . . , vm ) + a(v1 , . . . , vi0 , . . . , vm )
Qm
Thus is multilinear. Notice that rank = rank = i=1 dim Vi , since
T is invertible. Therefore, is a tensor map and Q is the corresponding
tensor space.
(Daniel) By Theorem 3.1.5, in order to show that is a tensor map,
we need to show that has universal factorization property. Let f :
m m
i=1 Vi R be any multilinear map. Since : i=1 Vi P is a tensor
map and P = i=1 Vi , there is f : P R such that f = f . Set
m
f = f T 1 : Q R. Then
f = f = f T 1 T = f .
(u1 , . . . , um ) + + (v1 , . . . , vm )
= T (u1 um ) + + T (v1 vm )
= T (u1 um + + v1 vm )
= 0.
(u1 , . . . , um ) = (v1 , . . . , vm ).
Suppose that uk and vk are not linearly dependent for some k. Then there is
Vk V such that fk (vk ) = 1 Q
and fk (uk ) = 0. For other i 6= k, choose fi Vi
m
such that f (vi ) = 1. Set := i=1 fi . Then
m
Y
(v1 , . . . , vm ) = fi (vi ) = 1
i=1
and
m
Y
(u1 , . . . , um ) = fi (ui ) = 0
i=1
E := {e
:= e1(1) em(m) : }
(e
, e ) = , .
Such inner product is unique from Theorem 1.6.3. From Problem 1.6 #1
X
(u, v) := a b (3.3)
P
P
where u = a e , v
= b e m
i=1 Vi . It appears depending on the
choice of basis but it does not.
Theorem 3.3.5. Let Vi be inner product space with orthonormal basis Ei =
{ei1 , . . . , eini }, i = 1, . . . , m. The inner product obtained from (3.3) satisfies
m
Y
(u1 um , v1 vm ) = (ui , vi )i (3.4)
i=1
ui , vi Vi for all i.
Proof. Since ui , vi Vi we have
ni
X ni
X
ui = aij eij , vi = bij eij , i = 1, . . . , m.
j=1 j=1
and
(V1 Vk ) (Vk+1 Vm ) = V1 Vm . (3.7)
Proof. By (3.5) the tensor map that satisfies (3.6) exists and is unique. From
hIm i = hv1 vm : vi Vi i = m
i=1 Vi , (3.7) follows. See Problem 3.2 #4
for details.
We also write for in Theorem 3.3.6. So (3.7) can be written as
(V1 Vk ) (Vk+1 Vm ) = V1 Vm
(v1 vk ) (vk+1 vm ) = v1 vm .
Problems
1. Suppose that
Pkv1 , . . . , vk V are linearly independent and u1 , . . . , uk U .
Prove that i=1 ui vi = 0 if and only if u1 = = uk = 0.
2. Let v1 , . . . , vk V and A Ckk . Suppose AAT = Ik and uj =
Pk Pk Pk
i=1 aij vi , j = 1, . . . , k. Prove that i=1 ui ui = i=1 vi vi .
: (V1 Vk ) (Vk+1 Vm ) V1 Vm
(v1 vk , vk+1 vm ) = v1 vm .
Pk
5. Let z = i=1 ui vi wi U V W . Prove that if {u1 , . . . , uk } and
{v1 , . . . , vk } are linearly independent and wi 6= 0 for all k, then k is the
smallest length of z.
Pk Pk
1. By Theorem 3.3.1 i=1 ui vi = 0 implies that i=1 (ui , vi ) = 0 for
any multilinear map . Since v1 , . . . , vk are linearly independent, we can
choose f V such that f (vi ) = 1 for all i. If some ui 6= 0, then there
would be g U such that g(ui ) = 1 and g(uj ) = 0 for j 6= i. The map
Pk
f g : U V C is multilinear so that i=1 f g(ui , vi ) = 1, a contradiction.
2.
k k
k ! k !!
X X X X
uj uj = aij vi aij vi
j=1 j=1 i=1 i=1
k
k k !
X XX
= aij a`j vi v`
j=1 `=1 i=1
k X
X k Xk
= aij a`j vi v`
`=1 i=1 j=1
k X
X k k
X
= i` vi v` = vj vj
`=1 i=1 j=1
(v1 , . . . , vm ) := T1 v1 Tm vm .
T (v1 vm ) = T1 v1 Tm vm .
Proof.
(m m
i=1 Si )(i=1 Ti )(v1 vm )
= (mi=1 Si )(T1 v1 Tm vm )
= Si Ti v1 Sm Tm vm
= mi=1 (Si Ti )(v1 vm ).
Proof. Let rank Ti = ki for all i. So there is a basis {ei1 , . . . , eiki , eiki +1 , . . . , eini }
for Vi such that T ei1 , . . . , T eiki are linearly independent in Wi and T eiki +1 =
T eini = 0, i = 1, . . . , m. Notice that {e : (n1 , . . . , nm )} is a basis for
m i=1 Vi . Moreover
(m
i=1 Ti )e = T e1(1) Tm em(m)
(m
i=1 Ti )e , (k1 , . . . , km )
Theorem 3.4.3. Let Ti Hom (Vi , Wi ), where Vi , Wi are inner product spaces,
i = 1, . . . , m. Then
(m m
i=1 Ti ) = i=1 Ti .
Proof. We use the same notation (, ) for all the inner products on Vi , Wi .
(m
i=1 Ti v , w ) = (T1 v1 Tm vm , w1 wm )
Ym
= (Ti vi , wi )
i=1
Ym
= (vi , Ti wi )
i=1
= (v1 vm , T1 w1 Tm
wm )
m
= (v , i=1 Ti w )
Since m
i=1 Vi is spanned by decomposable tensors, we have the desired result.
Problems
1. Prove that
Qm
2. (Roy) If Ti = ci Si with i=1 ci = 1, then clearly
T1 Tm (v1 vm ) = T1 v1 Tm vm
= c1 S1 v1 cm Sm vm
Ym
= ( ci )S1 v1 Sm vm
i=1
= S1 Sm (v1 vm )
So m m m m
i=1 Ti = i=1 Si . Conversely if i=1 Ti = i=1 Si 6= 0, then
T1 v1 Tm vm = S1 v1 Sm vm 6= 0
4.
E = {e
: (n1 , . . . , nm )}, F = {f : (k1 , . . . , km )}
i = 1, . . . , m. That is
ki
X
Ti eit = aist fis , t = 1, . . . , ni , i = 1, . . . , m.
s=1
F Fi
[m m m
i=1 Ti ]E = i=1 [Ti ]Ei = i=1 Ai .
In general A ⊗ B ≠ B ⊗ A.
Notice that
Γ(m, p) = {(1, 1), . . . , (1, p), (2, 1), . . . , (2, p), . . . , (m, 1), . . . , (m, p)}
and
Γ(n, q) = {(1, 1), . . . , (1, q), (2, 1), . . . , (2, q), . . . , (n, 1), . . . , (n, q)}.
That is, the block (A ⊗ B)[1, . . . , p | 1, . . . , q] = a_{11} B. Similarly for the other blocks.
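NumPy's kron realizes exactly this block description; a two-line illustration (matrices chosen arbitrarily):

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

# A (x) B is the block matrix whose (i, j) block is a_ij * B
print(np.kron(A, B))
print(np.array_equal(np.kron(A, B), np.kron(B, A)))   # False: in general A (x) B != B (x) A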
Problems
1. Let Ai , Bi Cki ,ni , i = 1, . . . , m. Prove the following:
(i) m
i=1 Ai = 0 if and only if some Ai = 0.
m
(ii) Q m
i=1 Ai = i=1 Bi 6= 0 if and only if Ai = ci Bi 6= 0 for all i and
m
i=1 ci = 1.
Qm
(iii) rank (m i=1 Ai ) = i=1 rank Ai .
(iv) (m m
i=1 Ai ) = i=1 Ai .
X m
Y Ym m X
Y ti
( a(i)(i) )( bi(i)(i) )
i
= ( ai(i)j bij(i) )
(t1 ,...,tm ) i=1 i=1 i=1 j=1
X m
Y m
Y
( ai(i)(i) )( bi(i)(i) )
(t1 ,...,tm ) i=1 i=1
X m
Y
= (ai(i)(i) )bi(i)(i) )
(t1 ,...,tm ) i=1
m X
Y ti
= ( ai(i)j bij(i) )
i=1 j=1
3. Recall that
m
Y
(m
i=1 Ai ), = ai(i)(i) ,
i=1
i = 1, . . . , m.
(a) Suppose that A_i is upper triangular with diag A_i = (λ_{i1}, . . . , λ_{in_i}). Then [⊗_{i=1}^m T_i]^F_E = ⊗_{i=1}^m A_i is also upper triangular with (γ, γ)-diagonal entry ∏_{i=1}^m λ_{iγ(i)}, γ ∈ Γ.
(b) If λ_{ij}, j = 1, . . . , n_i, are the eigenvalues of T_i, then the eigenvalues of ⊗_{i=1}^m T_i are ∏_{i=1}^m λ_{iγ(i)}, γ ∈ Γ.
(c) If s_{ij}, j = 1, . . . , n_i, are the singular values of T_i, then the singular values of ⊗_{i=1}^m T_i are ∏_{i=1}^m s_{iγ(i)}, γ ∈ Γ.
(d) tr ⊗_{i=1}^m T_i = ∏_{i=1}^m tr T_i.
(e) det ⊗_{i=1}^m T_i = ∏_{i=1}^m (det T_i)^{n/n_i}, where n := ∏_{i=1}^m n_i.
Proof. (a) Use Problem 3.4 #3 for a matrix approach. The following is the linear operator approach. Since each A_i is upper triangular, for any γ ∈ Γ(n_1, . . . , n_m) we have
(⊗_{i=1}^m T_i) e^⊗_γ = T_1 e_{1γ(1)} ⊗ · · · ⊗ T_m e_{mγ(m)}
= (λ_{1γ(1)} e_{1γ(1)} + v_1) ⊗ · · · ⊗ (λ_{mγ(m)} e_{mγ(m)} + v_m)
= ∏_{i=1}^m λ_{iγ(i)} e^⊗_γ + w
(d) tr (⊗_{i=1}^m T_i) = ∑_{γ∈Γ} ∏_{i=1}^m λ_{iγ(i)} = ∏_{i=1}^m ∑_{j=1}^{n_i} λ_{ij} = ∏_{i=1}^m tr T_i.
(e) Since the determinant is equal to the product of the eigenvalues,
det ⊗_{i=1}^m T_i = ∏_{γ∈Γ} ∏_{i=1}^m λ_{iγ(i)} = ∏_{i=1}^m ∏_{γ∈Γ} λ_{iγ(i)}.
Since γ(i) is chosen from 1, . . . , n_i, and each such value appears |Γ|/n_i times among the n := |Γ| elements of Γ,
∏_{γ∈Γ} λ_{iγ(i)} = ( ∏_{j=1}^{n_i} λ_{ij} )^{|Γ|/n_i} = ( ∏_{j=1}^{n_i} λ_{ij} )^{n/n_i}
and hence
det(⊗_{i=1}^m T_i) = ∏_{i=1}^m ( ∏_{j=1}^{n_i} λ_{ij} )^{n/n_i} = ∏_{i=1}^m (det T_i)^{n/n_i}.
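Parts (b), (d) and (e) are easy to confirm numerically for a small example (NumPy sketch; A is 3 × 3 and B is 2 × 2, so the exponents in (e) are n/n_1 = 2 and n/n_2 = 3):

import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
K = np.kron(A, B)

print(np.isclose(np.trace(K), np.trace(A) * np.trace(B)))             # tr(A (x) B) = trA * trB
print(np.isclose(np.linalg.det(K),
                 np.linalg.det(A) ** 2 * np.linalg.det(B) ** 3))       # det = (detA)^2 (detB)^3
eigK = np.sort_complex(np.linalg.eigvals(K))
prods = np.sort_complex([a * b for a in np.linalg.eigvals(A) for b in np.linalg.eigvals(B)])
print(np.allclose(eigK, prods))                                        # eigenvalues are all products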
(m m m m m m m m
i=1 Ti ) (i=1 Ti ) = (i=1 Ti )(i=1 Ti ) = i=1 Ti Ti = i=1 Ti Ti = (i=1 Ti )(i=1 Ti ) .
So m
i=1 Ti is normal.
(b) If all Ti are Hermitian, then
(m m m
i=1 Ti ) = i=1 Ti = i=1 Ti
(m m m m
i=1 Ti ) (i=1 Ti ) = i=1 Ti Ti = i=1 IVi = Im
i=1 Vi
.
(a) If m
i=1 Ti 6= 0 is normal, then all Ti 6= 0 are normal.
(b) If
Qm mi=1 Ti 6= 0 is Hermitian, then Ti = ci Hi where Hi are Hermitan and
i=1 ci R.
E
which is an orthonormal basis for m m
i=1 Vi . From Theorem 3.6.1, [i=1 Ti ]E is
E
upper triangular. On the other hand [m
i=1 Ti ]E is a normal matrix. By Lemma
E
1.5.3 [m
i=1 Ti ]E must be diagonal. now
E E1 Em
[m
i=1 Ti ]E = [T1 ]E1 [Tm ]Em 6= 0.
From Problem 3.5 #3, each [Ti ]E Ei is diagonal. Hence Ti is normal for all i.
i
m
Y
ak := i(i) 6= 0
i6=k
m
Y
0 6= m
i=1 Ti = ( ck )(m
k=1 Hk ).
k=1
Qm
Since m m
i=1 Ti and k=1 Hk are Hermitian, we have k=1 ck R.
Problems
1. Prove
Qthat if m
i=1 Ti 6= 0 is pd (psd), then Ti = ci Hi and Hi is pd (psd)
m
and i=1 ci > 0.
3.
(U I)(A In In A)(U I)
= (U I) (A In In A)(U I)
= T I I T
5.
E = {e
: (n1 , . . . , nm )}, F = {f : (k1 , . . . , km )}
1 := (n1 , . . . , nm ), 2 := (k1 , . . . , km )
i
Define Tjq Hom (Vi , Wi ) such that
i
Tjq eip = pq fij ,
Then
1 m
T e = T(1)(1) e1(1) T(m)(m) em(m)
= (1)(1) f1(1) (m)(m) fm(m)
= f .
So
X X X X X
c T e = c f = c f = Se
.
2 1 2 1 2
Since {e m
: 1 } is a basis of i=1 Vi we have
X X
S= c T .
2 1
In other words, S is a linear combination of induced maps T ( 2 ,
1 ).
Theorem 3.7.2. There exist a unique tensor map such that Hom (m m
i=1 Vi , i=1 Wi )
is the tensor space of Hom (V1 , W1 ), . . . , Hom (Vm , Wm ), i.e.,
Hom (m m
i=1 Vi , i=1 Wi ) = Hom (V1 , W1 ) Hom (Vm , Wm ).
hIm i = hm m m
i=1 Ti : Ti Hom (Vi , Wi )i = Hom (i=1 Vi , i=1 Wi ).
Moreover
i.e., Hom (m m
i=1 Vi , i=1 Wi ) is a tensor product of Hom (V1 , W1 ), . . . , Hom (Vm , Wm ).
Problems
1. Define the bilinear map : V V V V by (v1 , v2 ) = v2 v1 so
that there is T End (V V ) such that T = , i.e., T (v1 , v2 ) = v2 v1 .
Prove that when dim V = 1, T is not an induced map, i.e., there are no
T1 , T2 End (V ) such that T = T1 T2 .
2. Let {e1 , . . . , en }Pbe a P
basis for V . Define Tij End V such that Tij ek =
n n
kj ei . Let T = i=1 j=1 Tij Tji . Show that T (v1 v2 ) = v2 v1 for
all v1 , v2 V (cf. Theorem 3.7.1 and notice that the T in Problem 1 is a
linear combination of induced maps).
3. Prove that in the proof of Theorem 3.7.1
i
(a) (Daniel) {Tjp : j = 1, . . . , ki , q = 1, . . . , ni } is a basis of Hom (Vi , Wi ).
(b) {T : 2 , 1 } is a basis of Hom (m m
i=1 Vi , i=1 Wi ).
1.
2.
3.
4.
X m
Y
f= f (e ) f(i)
m,n i=1
X m
Y
c f(i) = 0.
m,n i=1
Then
X m
Y
0= c f(i) (e ) = c , m,n .
m,n i=1
p q
z }| { z }| {
Vqp = V V V V
is called a tensor space of type (p, q) (with covariant type of degree p and
with contra-variant type of degree q). Analogous to the previous treatment,
under some tensor map, M (V , . . . , V , V, . . . , V ; C) (p copies of V and q copies
of V ) is a model of Vqp :
Let E = {e1 , . . . , en } be a basis of V and let E = {f1 , . . . , fn } be the dual
basis of V . Then
p
Y q
Y
{ e(i) f(j) : p,n , q,n }
i=1 j=1
is a basis for
M (V , . . . , V , V, . . . , V ; C) (p copy of V, q copy of V )
Define : V V V V M (V , . . . , V , V, . . . , V ; C) by
p
Y q
Y
(e(1) , . . . , e(p) , f(1) , . . . , f(q) ) = e(i) f(j) .
i=1 j=1
is a basis of Vqp .
Problems
1. Define a simple tensor map : m V (m V ) such that m V =
(V ) .
2. Let M (V1 , . . . , Vm ; W ) be the set of all multilinear maps
Qm from V1 Vm
to W . Prove that dim M (V1 , . . . , Wm ; W ) = dim W i=1 dim Vi .
P ()e
= P ()e(1) e(m)
= e(1 (1)) e(1 (m))
= e
1
X
= ,1 e
, m,n .
m,n
P ()e
= e 1 (4.1)
vk := e1 + ke2 , k = 1, . . . , m.
Problems
1. Suppose m > 1. Prove that unless = e or dim V = 1, P () is not an
induced operator, i.e., there are no Ti End V such that P () = m
i=1 Ti .
It is because
1 (v(1) , . . . , v(m) )
1 X
= (v(1) , . . . , v(m) )
m!
Sm
1 X
= (v (1) , . . . , v (m) ) ( = )
m!
Sm
= 1 (v1 , . . . , vm ),
1 X
(v(1) , . . . , v(m) ) = (v1 , . . . , vm ) for all v1 , . . . , vm V. (4.3)
m!
Sm
1 X
( 1 )(v(1) , . . . , v(m) ) = (v1 , . . . , vm ). (4.5)
m!
Sm
1 X
(v1 , . . . , vm ) = ( 1 )(v(1) , . . . , v(m) ).
m!
Sm
It is because
(v(1) , . . . , v(m) )
1 X
= ( 1 )(v(1) , . . . , v(m) )
m!
Sm
1 X
= ( 1 )(v (1) , . . . , v (m) ) ( = )
m!
Sm
1 X
= ( 1 )()(v (1) , . . . , v (m) ) (by ( 1 ) = ( 1 )())
m!
Sm
= () (v1 , . . . , vm ).
i.e., = 1 + .
When m > 2, the decomposition of may have more than two summands.
The reason for m = 2 case having two summands is that
I(S2 ) = {1, }
(e) X
(v1 , . . . , vm ) := ( 1 )(v(1) , . . . , v(m) )
|G|
G
Proof. We now show that satisfies (4.7) by using the irreducible character
orthogonal relation of the first kind:
(v1 , . . . , vm )
(e) X
= ( 1 ) (v(1) , . . . , v(m) )
|G|
G
(e) X (e) X
= ( 1 ) ( 1 )(v(1) , . . . , v(m) )
|G| |G|
G G
!
2 X X
(e) 1 1
= ( )( ) (v (1) , . . . , v (m) ) ( = )
|G|2
G G
(e)2 X |G| 1
= ( ) (v (1) , . . . , v (m) ) (by Theorem 2.4.2)
|G|2 (e)
G
= (v1 , . . . , vm ). (4.8)
Proof.
X
(v1 , . . . , vm )
I(G)
X (e) X
= ( 1 )(v(1) , . . . , v(m) )
|G|
I(G) G
X X
= 1 (e)( 1 ) (v(1) , . . . , v(m) )
|G|
G I(G)
X
= e,1 (v(1) , . . . , v(m) ) (by Corollary 2.4.15)
G
= (v1 , . . . , vm ). (4.10)
P
Since : m V m V is multilinear, = I(G) follows immediately.
φ_χ = T (G, χ) ◦ ⊗,   (4.11)
i.e., the diagram ×^m V → ⊗^m V → ⊗^m V (first ⊗, then T (G, χ)) commutes, and thus
φ_χ(v_1, . . . , v_m) = T (G, χ) v^⊗,
T (G, χ) v^⊗ = φ_χ(v_1, . . . , v_m)
= (χ(e)/|G|) ∑_{σ∈G} χ(σ^{-1}) ⊗(v_{σ(1)}, . . . , v_{σ(m)})
= (χ(e)/|G|) ∑_{σ∈G} χ(σ^{-1}) v^⊗_σ
= (χ(e)/|G|) ∑_{σ∈G} χ(σ^{-1}) P (σ^{-1}) v^⊗
= (χ(e)/|G|) ∑_{σ∈G} χ(σ) P (σ) v^⊗.
So
T (G, χ) = (χ(e)/|G|) ∑_{σ∈G} χ(σ) P (σ) ∈ End (⊗^m V )   (4.12)
and it is called the symmetrizer associated with G and χ since, from (4.11), it turns ⊗ into φ_χ, which is symmetric with respect to G and χ. We now see some basic properties of T (G, χ).
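For G = S_2 acting on V ⊗ V with V = C^n, the two symmetrizers are (I ± S)/2, where S is the swap operator. A NumPy sketch verifying the projection properties discussed in this section (the index convention is the usual row-major Kronecker one; the construction is ours, for illustration):

import numpy as np

n = 3
I = np.eye(n * n)
# P((12)) on C^n (x) C^n : the swap operator sending e_i (x) e_j to e_j (x) e_i
S = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        S[j * n + i, i * n + j] = 1.0

sym  = (I + S) / 2    # T(S2, principal character)
skew = (I - S) / 2    # T(S2, sign character)

print(np.allclose(sym @ sym, sym), np.allclose(skew @ skew, skew))   # both are projections
print(np.allclose(sym @ skew, 0))                                    # orthogonal pieces
print(np.allclose(sym + skew, I))                                    # the projections sum to I
print(np.trace(sym), np.trace(skew))    # dims n(n+1)/2 = 6 and n(n-1)/2 = 3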
= P (e)
= Im V .
P
(c) By Theorem 2.4.3, G ()( 1 ) = 0 if 6= . So
T (G, )T (G, )
(e)(e) X X
= ()()P ()
|G|2
G G
!
(e)(e) X X 1
= ()( ) P ( )
|G|2
G G
= 0.
The following asserts that the sum is indeed an orthogonal sum with respect to
the induced inner product on n V .
Theorem 4.2.4. The tensor space m V is an orthogonal sum of V (G),
I(G), i.e.,
m V =I(G) V (G).
In other words
m V = V1 (G) Vk (G),
where I(G) = {1 , . . . , k } is the set of irreducible characters of G.
Proof. From Theorem 4.2.3(a) and (c), T (G, ) is an orthogonal projection.
The elements in Vm (G) of the form
v1 vm := T (G, )(v1 vm )
are called decomposable symmetrized tensors. Such notation does not
reflect its dependence on G and . From Theorem 4.2.4, v1 vm is the
piece of v1 vm in V (G). So V (G) is spanned by the decomposable
symmetrized tensors v1 vm .
As we saw before, when G = S_2 we have I(G) = {1, ε}. Then
v_1 ∗ v_2 = T (S_2, 1)(v_1 ⊗ v_2) = (1/2)(v_1 ⊗ v_2 + v_2 ⊗ v_1)
and
v_1 ∧ v_2 = T (S_2, ε)(v_1 ⊗ v_2) = (1/2)(v_1 ⊗ v_2 − v_2 ⊗ v_1).
Clearly
v_1 ⊗ v_2 = v_1 ∗ v_2 + v_1 ∧ v_2.
Moreover
(v_1 ∗ v_2, v_1 ∧ v_2)
= (1/4)(v_1 ⊗ v_2 + v_2 ⊗ v_1, v_1 ⊗ v_2 − v_2 ⊗ v_1)
= (1/4)((v_1 ⊗ v_2, v_1 ⊗ v_2) − (v_1 ⊗ v_2, v_2 ⊗ v_1) + (v_2 ⊗ v_1, v_1 ⊗ v_2) − (v_2 ⊗ v_1, v_2 ⊗ v_1))
= (1/4)((v_1, v_1)(v_2, v_2) − (v_1, v_2)(v_2, v_1) + (v_2, v_1)(v_1, v_2) − (v_2, v_2)(v_1, v_1))
= 0,
verifying that
V ⊗ V = V_1(S_2) ⊕ V_ε(S_2)
as indicated in Theorem 4.2.4.
The following is the unique factorization property of the symmetrizer map ∗ : ×^m V → V_χ(G).
Theorem 4.2.5. Let ψ : ×^m V → W be a multilinear map symmetric with respect to G and χ. There exists a unique T ∈ Hom (V_χ(G), W ) such that ψ = T ◦ ∗, i.e.,
ψ(v_1, . . . , v_m) = T (v_1 ∗ · · · ∗ v_m).
Proof. Uniqueness follows immediately since the decomposable symmetrized
tensors v span V (G).
For the existence, from the factorization property of (Theorem 3.2.3),
m VF / m V
FF x x
FF x
F
FF xxxT
" x| x
W
(v1 , . . . , vm ) = (v1 , . . . , vm )
(e) X
= ( 1 )(v(1) , . . . , v(m) )
|G|
G
(e) X
= ( 1 )T v ( = T )
|G|
G
!
(e) X
= T ( )P ( ) v
1 1
|G|
G
= T T (G, )v
= T v
(v1 , . . . , vm ) = T v .
Problems
1. Prove that if G, then P ()T (G, ) = T (G, )P (). In addition, if
(e) = 1, then P ()T (G, ) = ( 1 )T (G, ). (Hint: If is linear, then
() = ()())
2. Prove P ( 1 )v = v . In addition, if (e) = 1, then v = ()v .
3. Show that if is linear, then : m V W is symmetric with respect
to G and if and only if (v(1) , . . . , v(m) ) = ()(v1 , . . . , vm ) for all
G.
P Qm
4. Let f1 , . . . , fm V . Prove that : m V C defined by = G () t=1 f(t)
is symmetric with respect to G and .
5. Prove that the multilinear map : m V C is skew symmetric with re-
spect to Sm if and only if vi = vj implies that (v1 , . . . , vm ) = 0 whenever
i 6= j. (Hint: is a product of transpositions).
2. By Problem 1,
(e) X
( 1 )(v(1) , . . . , v(m) )
|G|
G
(e) X
= ( 1 )()(v1 , . . . , vm )
|G|
G
4. (Roy)
(e) X
( 1 )(v(1) , . . . , v(m) )
|G|
G
m
!
(e) X 1
X Y
= ( ) () f(t) (v(t) )
|G| t=1
G G
m
(e) X Y
= ( 1 )( ) f (t) (v(t) ) ( = )
|G| t=1
, G
! m
(e) X X 1
Y
= ( )( ) f (t) (vt ) (( ) = ( ))
|G| t=1
G G
Y m
(e) X |G|
= ( ) f (t) (vt ) (Theorem 2.4.3)
|G| (e) t=1
G
X m
Y
= ( ) f (t) (vt )
G t=1
= (v1 , . . . , vm )
0 = (v1 , . . . , vi + vj , . . . , vi + vj , . . . , vm )
= (v1 , . . . , vi , . . . , vi , . . . , vm ) + (v1 , . . . , vj , . . . , vj , . . . , vm )
+(v1 , . . . , vi , . . . , vj , . . . , vm ) + (v1 , . . . , vj , . . . , vi , . . . , vm )
(u1 , . . . , um ) + + (v1 , . . . , vm ) = 0.
(u1 , . . . , um ) = T u , . . . , (v1 , . . . , vm ) = T v .
So
(u1 , . . . , um ) + + (v1 , . . . , vm ) = T (u + + v ) = 0.
Notice that when G = {e}, the symmetric multilinear function with respect
to {e} and 1 simply means that is simply multilinear. So Theorem 4.3.1
is a generalization of Theorem 3.3.1.
Theorem 4.3.2. If v1 vm = 0, then v1 , . . . , vm are linearly dependent.
Proof. Suppose on the contrary that v1 vm = 0 and v1 , . . . , vm were linearly
independent. Extend it to a basis {v1 , . . . , vm , vm+1 , . . . , vn } of V . So
Qm
Now for each m,n , t=1 f(t) : m V C is multilinear. So by the unique
factorization property of , there is T (m V ) such that
m
Y
f(t) = T .
t=1
In particular
m
Y m
Y
T e
= T e = ( f(t) )e = f(t) (e(t) ) = .
t=1 t=1
So m,k (otherwise some (t) > k so that f(t) (v(t) ) = 0 from (4.14)).
From (4.15),
m
(e) X Y
0 6= T v = T u = ( 1 ) f(t) (u(t) ) = 0,
|G| t=1
G
Let
G := { G : = } < G
be the stabilizer of m,n .
Theorem 4.3.5. (a) Let E = {e1 , . . . , en } be an orthonormal basis of V .
Then
(e) X
(e , e ) = (), , , m,n .
|G|
G
(b)
(e) X
ke k2 = (), m,n .
|G|
G
Problems
4. Suppose
P is an irreducible character of G and m,n . Prove that
1
|G | G () is a nonnegative integer (Hint: consider the restriction
of onto G ).
2. By Theorem 4.3.5(a),
3.
= {(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 2, 1), (2, 2, 2)}.
G := { : G }
|{ G : = j }| = |{ G : j1 = }| = |{ G : = }| = |G |
The subspace
O := he : Gi V (G)
is called the orbital subspace associated with m,n . The following gives
some relation between orbital spaces.
the inner product (see Chapter 2) of the principal character and the character
|G (not necessarily irreducible), the restriction of onto G (it is a character
of G because the restriction of the representation A : G GLn (C) to G
is a representation). Notice that (|G , 1) is a nonnegative integer since it
denotes the multiplicity of the principal character in |G by Theorem 2.4.7
and Theorem 2.4.12.
Theorem 4.4.5. Let E = {e1 , . . . , en } be a basis of V . Then
V (G) =
he : Gi. (4.20)
(if
P the transposition (i, i + 1) 6 G, then we would have G = {e}, contradicting
G () = 0). Then G = Sm by Problem 2.1 #2.
Let := (i, i + 1) G. Then G = {e, } so that
X
(e) + ( ) = () = 0.
G
So
() = tr A() = (1)r k = (1 ) (r )k = k()
for all G, i.e., = k. Since is irreducible, we must have k = 1.
Let X
:= { m,n : () 6= 0}.
G
= and
Note that
=
=
{ : G}. (4.21)
(e) X
s = () = (e)(|G , 1).
|G |
G
G = G 1 G k
Then E := {e
1 , . . . , ek } is a basis of W but {e1 , . . . , ek } may not be
a basis for O .
Since W is invariant under P () and thus invariant under T (G, ), the
restriction T (G, ) = T (G, )|W of T (G, ) is a linear map on W and is a
projection (Problem 1.8 #8).
Let C = (cij ) := [T (G, )]E E Ckk (C depends on and s := dim O
k). From
We now compute C:
T (G, )e
j = ej = T (G, )e
j
(e) X
= ()e
j 1
|G|
G
(e) X
= ( 1 j )e
( = j 1 )
|G|
G
k
(e) X X
= ( 1 j )e
(G = ki=1 G i )
|G| i=1
G i
k
X (e) X
= (i1 1 j )e
i ( = i )
i=1
|G|
G
for j = 1, . . . , k. So
(e) X
cij = (i1 j ), i, j = 1, . . . , k.
|G|
G
k
X k
X (e) X (e) X
s = tr C = cii = () = ().
i=1 i=1
|G| |G |
G G
= {1 , . . . , s ; 1 , . . . , s ; . . . }
Although Δ̂ is not unique, it does not depend on the choice of the basis E, since the orbits and the matrix C do not depend on E; i.e., if F = {f_1, . . . , f_n} is another basis of V , then {f^∗_α : α ∈ Δ̂} is still a basis of V_χ(G).
Example: Consider m = n = 3 and G = S3 . Recall that there are three
irreducible characters of S3 , namely, the principal character 1, the alternating
character and the character with degree 2:
{(1, 1, 1)};
{(1, 1, 2), (1, 2, 1), (2, 1, 1)};
{(1, 1, 3), (1, 3, 1), (3, 1, 1)};
{(1, 2, 2), (2, 1, 2), (2, 1, 1)};
{(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)};
{(1, 3, 3), (3, 1, 3), (3, 3, 1)};
{(2, 2, 2)};
{(2, 2, 3), (2, 3, 2), (3, 2, 2)};
{(2, 3, 3), (3, 2, 3), (3, 3, 2)};
{(3, 3, 3)}.
= {(1, 1, 2), (1, 1, 3), (1, 2, 2), (1, 2, 3), (1, 3, 3), (2, 2, 3), (2, 3, 3)}.
(e) X
s = () = (e) + ((1 2)) = 2.
|G |
G
From
(e) X 1
cij = (i1 j ) = ((i1 j ) + (i1 (12)j ))
|G | 3
G
we have
2
3 13 13
C= 1 2
13
3 3
13 13 2
3
Now we consider the larger orbit. For = (1, 2, 3), we have G = {e} so
that
(e) X
s = () = 2(e) = 4
|G |
G
= {(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)} = {1 , . . . , 6 }.
From
(e) X 1
cij = (i1 j ) = (i1 j ))
|G | 3
G
we have
2
3 0 0 13 31 0
0 2
31 0 0 13
3
0 13 2
0 0 13
C=
1
3
13 0 0 2
3 13 0
0 0 13 2
0
3 3
0 13 31 0 0 2
3
Notice that rank C = 4 (for example the columns 1, 2, 3, 4 or columns 1, 2, 3, 5
are linearly independent. So we can take 1 = = (1, 2, 3), 2 = (1, 3, 2),
3 = (2, 1, 3) and 4 = (2, 3, 1). Hence
=
{(1, 1, 2), (1, 2, 1);
(1, 1, 3), (1, 3, 1);
(1, 2, 2), (2, 1, 2);
(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1);
(1, 3, 3), (3, 1, 3);
(2, 2, 3), (2, 3, 2);
(2, 3, 3), (3, 2, 3)}
in which the order is not lexicographic.
It is known (Problem 6) that Δ̄ = Δ̂ if and only if χ is linear. In such cases, {e^∗_α : α ∈ Δ̂} is an orthogonal basis of V_χ(G) if E = {e_1, . . . , e_n} is an orthogonal basis of V . If χ(e) > 1, can we choose Δ̂ such that {e^∗_α : α ∈ Δ̂} is an orthogonal basis for V_χ(G)? When G = D_m (the dihedral group) such a Δ̂ exists for every χ ∈ I(G) if and only if m is a power of 2 (Wang and Gong (1991) [34, 35], Holmes and Tam (1992) [4]). On the other hand, every doubly transitive subgroup G < S_m has an irreducible character χ for which no such Δ̂ exists (Holmes (1995) [3]).
We give several common examples of symmetry classes of tensors and in-
duced operators.
Example 4.4.8. Assume 1 m n, G = Sm , and = . Then V (G) is
the mth exterior space m V , = = Qm,n , the set of strictly increasing
sequences in m,n , = Gm,n , the set of nondecreasing sequences in m,n . See
Problem 6 and Theorem 4.4.6(b).
Example 4.4.9. Assume G = Sm and 1, the principal character. Then
=
V1 (G) is the mth completely symmetric space m V , = = Gm,n . See
Problem 7.
Example 4.4.10. Assume G = {e} < Sm ( 1). Then V (G) = m V ,
=
= = m,n . See Problem 7.
dim V_χ(G) = |Δ̂| = (χ(e)/|G|) ∑_{σ∈G} χ(σ) n^{c(σ)}.
The following gives another expression for |Δ̂| in terms of the s_ω.
Theorem 4.4.11.
dim V_χ(G) = |Δ̂| = ∑_{ω∈Δ̄} s_ω = ∑_{ω∈Δ̄} (χ(e)/|G_ω|) ∑_{σ∈G_ω} χ(σ) = (χ(e)/|G|) ∑_{γ∈Γ_{m,n}} ∑_{σ∈G_γ} χ(σ).
Proof. It suffices to establish the last equality. By the proofs of Theorem 4.4.3
and Theorem 4.4.4
X X X 1 X X
() = ()
|G |
m,n G G G
X 1 X X
= () (G = 1 G )
|G |
G G
X 1 X
= |G| ().
|G |
G
Problems
and
1 X c()
dim V1 (Sm ) = n .
m!
Sm
is a projection.
k
X
(C 2 )ij = cis csj
s=1
k
(e)2 X X X
= 2
(i1 s ) (s1 j )
|G| s=1
G G
2 k
X X X
(e)
= (i1 s ) (s1 1 j ) ( = 1 )
|G|2 s=1 G G
k
2 X X
(e)
= (i1 s )(s1 j )
|G|2 s=1 ,G
(e) X X
2
= (i1 )( 1 j ) ( = s )
|G|2
G G
!
(e)2 X X 1 1
= ()( i j )
|G|2
G G
(e) 2 X ( 1 j )
i
= by Theorem 2.4.3
|G| (e)
G
= cij .
So C is a projection matrix.
/ V (G)
m VH
HH vv
HH vv
HH vv
HH v
# zvv K(T )
V (G)
satisfying
K(T )v1 vm = T v1 T vm .
Such K(T ) is called the induced operator of T on V (G). Notice that K(T )
depends on G, , m, n. We now discuss some basic properties of K(T ).
K(T )v = T v1 T vm = T (G, )T v1 T vm
= T (G, )(m T )v = (m T )T (G, )v
= (m T )v
So
m,r }
{e = K(T )v :
is a subset of the basis E of V (G) so that it is a linearly independent set.
When 6 m,r , there is some i such that (i) > r. Then T v(i) = 0 so
that K(T )v = 0. So rank K(T ) = | m,r |.
Proof. Since A = [T ]E
E is upper triangular,
X
T ej = j ej + aij ei , j = 1, . . . , n,
i<j
there is G such
where 1 , . . . , n are the eigenvalues of T . For each ,
that = 1 . Then
where
1 < = 1 .
So [K(T )]E
E is upper triangular, where E = {e : } in the orbital order.
Qm
In Theorem 4.5.5, the eigenvalues of K(T ) are evidently i=1 (i) ,
counting multiplicity. However the order of these eigenvalues apparently
,
depends on the order of 1 , . . . , n ; but it is merely a deception.
Theorem 4.5.5. Let 1 , . . . , n and s1 , . . . , sn be the eigenvalues and the sin-
gular values of T End V , respectively. Then
Qm counting multiplicities.
1. The eigenvalues of K(T ) are i=1 (i) , ,
Moreover for any Sn ,
m
Y
(i) ,
i=1
f (1 , . . . , n ) = f ( (1) , . . . , (n) ), Sn .
Qm counting multiplicities.
2. The singular values of K(T ) are i=1 s(i) , ,
Moreover for any Sn ,
m
Y
s (i) ,
i=1
nq = ∑_{t=1}^n q_t = ∑_{t=1}^n ∑_{α∈Δ̂} m_t(α) = ∑_{α∈Δ̂} ∑_{t=1}^n m_t(α) = m|Δ̂|.
So
q = (m/n)|Δ̂|.
Then
det K(T ) = ∏_{t=1}^n λ_t^q = ( ∏_{t=1}^n λ_t )^{(m/n)|Δ̂|} = (det T )^{(m/n)|Δ̂|}.
is an orthonormal basis of V (G) (Problem 6). Thus if (e) = 1, then the matrix
representation of K(T ) with respect to E0 follows from the following theorem.
Theorem
q 4.5.8. Suppose (e) = 1 and E = {e1 , . . . , en } is a basis of V (so
E = { G|G|
0 is a basis of V (G) according to the lexicographic
e : }
|
X Ym
1
K(A), = p () a(t),(t) , , (4.26)
|G ||G | G t=1
Since the induced matrix K(A) is the matrix representation of K(T ), we have
the following properties.
Theorem 4.5.9. Let A Cnn with eigenvalues 1 , . . . , n and singular values
s1 sn 0. Suppose is linear and let K(A) be the induced matrix of
A. Then
(a) K(I) = I|| = 1 P
, where ||
c()
|G| G ()n .
m
(h) det K(A) = (det A) n || .
(i) K(A ) = K(A) .
(j) If A is normal, Hermitian, psd, pd, or unitary, so is K(A).
For (e) > 1, see [18] for the construction of orthonormal basis of V (G),
the matrix representation of K(T ), where T End V , and the induced matrix
K(A).
We now provide an example of K(T ) with nonlinear irreducible character.
Assume that G = S3 and is the degree 2 irreducible character. We have
3,2 = {(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2)}
= {(1, 1, 1), (1, 1, 2), (1, 2, 2), (2, 2, 2)}
=
{(1, 1, 2), (1, 2, 2)}
=
{(1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 1, 2)}.
E = {e^∗_α : α ∈ Δ̂} = {e^∗_{(1,1,2)}, e^∗_{(1,2,1)}, e^∗_{(1,2,2)}, e^∗_{(2,1,2)}}
(e^∗_{(1,1,2)}, e^∗_{(1,2,1)}) = (e^∗_{(1,2,2)}, e^∗_{(2,1,2)}) = −1/3.
For example, by Theorem 4.3.5,
(e^∗_{(1,1,2)}, e^∗_{(1,2,1)}) = (χ(e)/|S_3|) ∑_{σ∈S_3} χ(σ) δ_{(1,1,2),(1,2,1)σ} = (2/6)(χ((23)) + χ((123))) = −1/3.
By direct computation,
a2 d abc 0 abd b2 c 0
0 a d abc abd b2 c
2
b2 c abd
[K(T )]E
=
acd bc 2
E 0 ad2 bcd 0
2 2
acd bc bc acd 0 ad2 bcd
For example
K(T )e(1,1,2) = T e1 T e1 T e2
= (ae1 + ce2 ) (ae1 + ce2 ) (be1 + de2 )
= a2 be(1,1,1) + a2 de(1,1,2) + acbe(1,2,1) + acde(1,2,2)
+cabe(2,1,1) + cade(2,1,2) + c2 be(2,2,1) + c2 be(2,2,2)
= a2 de(1,1,2) + acbe(1,2,1) + acde(1,2,2)
+cabe(2,1,1) + cade(2,1,2) + c2 be(2,2,1) by (4.26)
2
= (a d abc)e(1,1,2) + 0e(1,2,1) + (acd bc2
)e(1,2,2) + (acd bc2 )e(2,1,2) .
The computations for K(T )e(1,2,1) , K(T )e(1,2,2) , K(T )e(2,1,2) are similar.
Furthermore, one can define the derivation operator DK (T ) of T by
d
DK (T ) = K(I + tT )t=0 ,
dt
Problems
1. Prove that (m T )P () = P ()(m T ) and (m T )T (G, ) = T (G, )(m T ).
2. If V (G) 6= 0, prove that K(T ) is invertible if and only if T is invertible.
Moreover, K(T 1 ) = K(T )1 .
3. Let S, T End V are psd. Show that K(S + T ) K(S) + K(T ), i.e.,
K(S + T ) K(S) + K(T ) is psd. Hence S T implies K(S) K(T ).
(Hint: Use Problem 3.6 #2)
4. Prove that if G = Sm , = , rank T < m, then K(T ) = 0.
5. Suppose T End V is positive semidefinte with eigenvalues 1
n . Prove that K(T ) on V1 (G) has largest eigenvalue m
1 and smallest
eigenvalues m
n .
6. Suppose that (e) = 1 and E = {e1 , .q
. . , en } is an orthonormal basis of the
inner product V . Prove that E = { |G| e : }
0 is an orthonormal
|G |
basis for V (G).
7. Suppose that (e) = 1 and E = {e1 , . . . , en } is a basis of V . Let T End V
and [T ]E
E = A = (aij ). Prove that
m
1 X Y
([K(T )]E
E ), =
() a(t)(t) , , .
|G | i=1
G
P
8. Let P
m,n and be linear. Prove that if G () = 0, i.e., 6 ,
then G ()() = 0, where : m,n W is any map.
4. Let r = rank T .
rank K(T ) (Theorem 4.5.2)
= |m,r |
= |m,r Qm,n | (Theorem 4.4.6 since G = Sm , = )
= 0
since all elements of m,r has some repeated entries as r < m so that
m,r Qm,n = .
5. By Theorem 4.5.5, since = m,n , the eigenvalues are Qm (i) ,
i=1
m,n . Since 1 n 0, the largest eigenvalue of K(T ) is m
1 and
the smallest eigenvalue is mn .
6. By Theorem 4.3.5(a), E0 is an orthogonal basis. It remains to show that
each vector in E0 has unit length. By Theorem 4.3.5(b)
s
|G| 2 |G| 2 (e) X 1 X
e = ke k = () = () = (|G , 1)
|G | |G | |G | |G |
G G
7.
8.
9.
Qm Qm
10. Notice that t=1 (t) = t=1 (t) for all G, s = (e)(|G , 1)
and G = 1 G , for all m,n and G. So
m
XY X m
Y
tr K(T ) = (t) = (e) (|G , 1) (t)
t=1
t=1
X m
Y
= (e) (|G , 1) (t) .
t=1
d^χ_G(A) = ∑_{σ∈G} χ(σ) ∏_{t=1}^m a_{tσ(t)}   (4.28)
d^ε_{S_m}(A) = det A := ∑_{σ∈S_m} ε(σ) ∏_{t=1}^m a_{tσ(t)}.
d^χ_G(A^T) = ∑_{σ∈G} χ(σ) ∏_{t=1}^m a_{σ(t)t} = ∑_{σ∈G} χ(σ^{-1}) ∏_{t=1}^m a_{tσ(t)} = d^{χ̄}_G(A).
Theorem 4.3.4 and Theorem 4.5.7 can be restated respectively using gener-
alized matrix function as in the next two theorems.
m
(e) X Y
(K(T )e , e ) = () a(t)(t) (Theorem 4.5.7)
|G| t=1
G
(e)
= d (A[|])
|G| G
(e) T
= d (A [|]) (Theorem 4.6.1(c))
|G| G
(e) (e) T
(K(T )e1 en , e1 en ) = d (A) = d (A ).
|G| G |G| G
We will use the above two theorems to establish some inequalities and iden-
tities. Notice that is irreducible if and only if is irreducible. Clearly
(e) = (e). So statements about dG (A) apply to dG (A).
Proof. Let ui = A(i) and vj = B(j) , where A(i) denotes the ith row of A. Then
(e)
(u , v ) = d ((ui , vj )) (Theorem 4.6.2)
|G| G
(e)
= d (AB ) ((u, v) := v u)
|G| G
Then apply Cauchy-Schwarz inequality
|(u , v )| (u , u )(v , v )
to have (4.29).
Theorem 4.6.5. Let A, B Cmm . Then
Applying (4.30),
Proof. This is indeed a special case of Theorem 4.6.4. Let G < Sm be the group
consisting of , where is a permutation on {1, . . . , p}, and is a permutation
on {p+1, . . . , m}. Let () = ()(). Then is a linear irreducible character
of G (check!) and dG (A) is the right side of (4.32).
See Horn and Johnson's Matrix Analysis, p. 478, for a matrix proof of Theorem 4.6.7.
$$h(A) = \frac{1}{|G|}\sum_{\chi\in I(G)}\chi(e)\,d_\chi^G(A).$$
In particular, if $A \ge 0$, then
$$h(A) \ge \frac{\chi(e)}{|G|}\, d_\chi^G(A), \qquad h(A) \ge \frac{1}{m!}\operatorname{per} A.$$
$$\begin{aligned}
\frac{1}{|G|}\sum_{\chi\in I(G)}\chi(e)\,d_\chi^G(A) &= \sum_{\chi\in I(G)} (u_1*\cdots*u_m,\; v_1*\cdots*v_m) \qquad\text{(by Theorem 4.6.2)}\\
&= \sum_{\chi\in I(G)} \big(T(G,\chi)(u_1\otimes\cdots\otimes u_m),\; T(G,\chi)(v_1\otimes\cdots\otimes v_m)\big)\\
&= \Big(\sum_{\chi\in I(G)} T(G,\chi)(u_1\otimes\cdots\otimes u_m),\; v_1\otimes\cdots\otimes v_m\Big) \qquad\text{(by Theorem 4.2.3)}\\
&= \prod_{i=1}^m (u_i, v_i) \qquad\text{(by Theorem 4.2.3(b))}\\
&= h(A).
\end{aligned}$$
When $A \ge 0$, by Theorem 4.6.6 each summand $\chi(e)\,d_\chi^G(A)$ in $\frac{1}{|G|}\sum_{\chi\in I(G)}\chi(e)\,d_\chi^G(A)$ is nonnegative, so we have
$$h(A) \ge \frac{\chi(e)}{|G|}\, d_\chi^G(A)$$
for all $\chi\in I(G)$. In particular, with $G = S_m$ and $\chi \equiv 1$, we have $h(A) \ge \frac{1}{m!}\operatorname{per} A$.
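The last inequality is easy to test numerically. A minimal sketch, assuming only a brute-force permanent helper per (our name, not a library routine), draws a random positive semidefinite $A$ and compares $h(A) = \prod_i a_{ii}$ with $\operatorname{per}A/m!$:

    from itertools import permutations
    from math import factorial
    import numpy as np

    def per(A):
        # brute-force permanent
        m = A.shape[0]
        return sum(np.prod([A[t, s[t]] for t in range(m)]) for s in permutations(range(m)))

    m = 4
    B = np.random.randn(m, m) + 1j * np.random.randn(m, m)
    A = B @ B.conj().T                              # positive semidefinite
    h = np.prod(np.diag(A)).real                    # h(A) = product of diagonal entries
    print(h >= per(A).real / factorial(m))          # expected True for psd A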
$\langle u_1,\dots,u_m\rangle = \langle v_1,\dots,v_m\rangle.$
So there is $A\in\mathbb{C}^{m\times m}$ such that $v_i = \sum_{j=1}^m a_{ij}u_j$, $i = 1,\dots,m$. Equip $V$ with an inner product so that $\{u_1,\dots,u_m\}$ is an orthonormal set. Then
$$(v^*, u^*) = (u^*, u^*) = (v^*, v^*) \qquad (4.33)$$
so that
$$|(v^*, u^*)|^2 = (u^*, u^*)(v^*, v^*)$$
with $u^* \neq 0$. By the equality case of the Cauchy-Schwarz inequality, $v^* = cu^*$ for some constant $c$. Then substitute into (4.33) to get $c = 1$.
Theorem 4.6.10. For all psd $A, B\in\mathbb{C}^{m\times m}$,
$$d_\chi^G(A+B) \ge d_\chi^G(A) + d_\chi^G(B).$$
$$d_\chi^G\big((AB)[\alpha|\beta]\big) = \frac{\chi(e)}{|G|}\sum_{\omega\in\Gamma_{m,n}} d_\chi^G(A[\alpha|\omega])\, d_\chi^G(B[\omega|\beta]).$$
$$\frac{1}{|H|}\, d_\chi^H(A) \;\ge\; \frac{1}{|G|}\, d_\chi^G(A).$$
Proof.
$$\begin{aligned}
T(G,\chi)T(H,\chi) &= \frac{\chi(e)^2}{|G||H|}\sum_{\sigma\in G}\sum_{\tau\in H}\chi(\sigma)\chi(\tau)P(\sigma\tau)\\
&= \frac{\chi(e)^2}{|G||H|}\sum_{\pi\in G}\sum_{\tau\in H}\chi(\pi\tau^{-1})\chi(\tau)P(\pi) \qquad (\pi = \sigma\tau)\\
&= \frac{\chi(e)}{|G|}\sum_{\pi\in G}\chi(\pi)P(\pi) \qquad\text{(Theorem 2.4.3, since } \chi(\pi\tau^{-1}) = \chi(\pi)\chi(\tau^{-1})\text{)}\\
&= T(G,\chi).
\end{aligned}$$
Since $T(G,\chi)T(H,\chi) = T(G,\chi)$, apply the above inequality to $T(H,\chi)z$,
dG (A) (e)per A.
Problems
1. Show that if $A, B\in\mathbb{C}^{m\times m}$ are psd, then $\det(A+B) \ge \det A + \det B$ and $\operatorname{per}(A+B) \ge \operatorname{per} A + \operatorname{per} B$.
2. (Hadamard) Prove that if $A\in\mathbb{C}^{m\times m}$, then $|\det A|^2 \le \prod_{i=1}^m\sum_{j=1}^m |a_{ij}|^2$. Hint: $\det(AA^*) \le h(AA^*)$.
3. Prove that when $\chi$ is linear, the Cauchy-Binet formula takes the form: for any $\alpha, \beta\in\Gamma_{m,n}$,
$$d_\chi^G\big((AB)[\alpha|\beta]\big) = \sum_{\omega\in\bar\Delta}\frac{1}{|G_\omega|}\, d_\chi^G(A[\alpha|\omega])\, d_\chi^G(B[\omega|\beta]).$$
2. By Theorem 4.6.6, $|\det A|^2 = \det(AA^*) \le h(AA^*)$ since $AA^*$ is psd. But $h(AA^*) = \prod_{i=1}^m\sum_{j=1}^m|a_{ij}|^2$.
3. From Theorem 4.6.11, since $\chi(e) = 1$,
$$\begin{aligned}
d_\chi^G\big((AB)[\alpha|\beta]\big) &= \frac{1}{|G|}\sum_{\omega\in\Gamma_{m,n}} d_\chi^G(A[\alpha|\omega])\, d_\chi^G(B[\omega|\beta])\\
&= \frac{1}{|G|}\sum_{\omega\in\bar\Delta}\sum_{\sigma\in G} d_\chi^G(A[\alpha|\omega\sigma])\, d_\chi^G(B[\omega\sigma|\beta]) \qquad (\gamma = \omega\sigma)
\end{aligned}$$
$$v^{\wedge} = v_1\wedge\cdots\wedge v_m$$
Proof. The proofs of (a), (b), (c) are obtained in Chapter 4. Notice that $\varepsilon(e) = 1$ and, for $\gamma\in Q_{m,n}$, $G_\gamma = \{e\}$.
$$v_1\wedge\cdots\wedge v_m = \sum_{t\neq j} c_t\, v_1\wedge\cdots\wedge v_{j-1}\wedge v_t\wedge v_{j+1}\wedge\cdots\wedge v_m = 0$$
$$\begin{aligned}
u_1\wedge\cdots\wedge u_m &= \Big(\sum_{j=1}^m a_{1j}v_j\Big)\wedge\cdots\wedge\Big(\sum_{j=1}^m a_{mj}v_j\Big)\\
&= \sum_{\gamma\in\Gamma_{m,m}}\Big(\prod_{i=1}^m a_{i\gamma(i)}\Big)\, v_\gamma^{\wedge}\\
&= \sum_{\gamma\in D_{m,m}}\Big(\prod_{i=1}^m a_{i\gamma(i)}\Big)\, v_\gamma^{\wedge} \qquad\text{((d) and } D_{m,m} = \{(\sigma(1),\dots,\sigma(m)) : \sigma\in S_m\}\text{)}\\
&= \sum_{\sigma\in S_m}\Big(\prod_{i=1}^m a_{i\sigma(i)}\Big)\, v_{\sigma(1)}\wedge\cdots\wedge v_{\sigma(m)}\\
&= \sum_{\sigma\in S_m}\varepsilon(\sigma)\Big(\prod_{i=1}^m a_{i\sigma(i)}\Big)\, v_1\wedge\cdots\wedge v_m\\
&= \det(a_{ij})\, v_1\wedge\cdots\wedge v_m \qquad\text{(Problem 1)}
\end{aligned}$$
$\langle u_1,\dots,u_m\rangle = \langle v_1,\dots,v_m\rangle$. Then $u_i = \sum_{j=1}^m a_{ij}v_j$, $i = 1,\dots,m$. Use (e) to get $\det(a_{ij}) = 1$ (a special case of Theorem 4.6.9).
Hence
$$C_m(A)_{\alpha,\beta} = \det A[\alpha|\beta], \qquad\text{for all } \alpha,\beta\in Q_{m,n}.$$
For example, if $n = 3$ and $m = 2$, then
$$C_2(A) = \begin{pmatrix}
\det A[1,2|1,2] & \det A[1,2|1,3] & \det A[1,2|2,3]\\
\det A[1,3|1,2] & \det A[1,3|1,3] & \det A[1,3|2,3]\\
\det A[2,3|1,2] & \det A[2,3|1,3] & \det A[2,3|2,3]
\end{pmatrix}.$$
By Theorem 5.1.2,
$$\det (AB)[\alpha|\beta] = \sum_{\omega\in Q_{m,n}} \det A[\alpha|\omega]\,\det B[\omega|\beta].$$
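Both the construction of $C_m(A)$ and the multiplicativity it inherits from the formula above are easy to check numerically. A minimal Python sketch (the helper name compound is ours) lists $Q_{m,n}$ in lexicographic order and builds the matrix of $m\times m$ minors:

    from itertools import combinations
    import numpy as np

    def compound(A, m):
        # mth compound: the matrix (det A[alpha|beta]), alpha, beta in Q_{m,n}, lexicographic order
        rows = list(combinations(range(A.shape[0]), m))
        cols = list(combinations(range(A.shape[1]), m))
        return np.array([[np.linalg.det(A[np.ix_(a, b)]) for b in cols] for a in rows])

    n, m = 4, 2
    A, B = np.random.randn(n, n), np.random.randn(n, n)
    print(np.allclose(compound(A @ B, m), compound(A, m) @ compound(B, m)))  # C_m(AB) = C_m(A) C_m(B)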
The compound matrix is a very powerful tool and the following two results are applications.
Then
$$|\lambda_1| = \Big|\sum_{j=1}^n a_{sj}\frac{x_j}{x_s}\Big| \le \sum_{j=1}^n \Big|\frac{x_j}{x_s}\Big|\,|a_{sj}| \le \sum_{j=1}^n |a_{sj}| = R_s \le R_{[1]}.$$
$$\begin{aligned}
&\le \sum_{\gamma\in D_{m,n}}\prod_{i=1}^m |a_{\alpha(i)\gamma(i)}| \qquad (D_{m,n} = \{\alpha\sigma : \alpha\in Q_{m,n},\ \sigma\in S_m\})\\
&\le \sum_{\gamma\in\Gamma_{m,n}}\prod_{i=1}^m |a_{\alpha(i)\gamma(i)}|\\
&= \prod_{i=1}^m\sum_{j=1}^n |a_{\alpha(i)j}| = \prod_{i=1}^m R_{\alpha(i)} \le \prod_{i=1}^m R_{[i]}
\end{aligned}$$
$$|\lambda_n| \;\ge\; \frac{|\det A|}{\prod_{i=1}^{n-1} R_{[i]}} \qquad (5.3)$$
since
$$\det A = \prod_{i=1}^n \lambda_i.$$
$$A^*A = R^*Q^*QR = R^*R.$$
so that
$$\prod_{i=1}^n s_i = |\det A| = |\det R| = \prod_{i=1}^n a_{[i]}.$$
Remark: The converse of Theorem 5.1.5 is true, i.e., given positive numbers $a_1,\dots,a_n$ and $s_1\ge\cdots\ge s_n > 0$, if (5.4) holds, then there exists $A\in\mathbb{C}^{n\times n}$ with singular values $s_1,\dots,s_n$ and $\operatorname{diag} R = (a_1,\dots,a_n)$, where $A = QR$ is the QR decomposition of $A$. This result is known as Kostant's convexity theorem [11]. Indeed Kostant's result is set in the broader context of real semisimple Lie groups.
For $A\in\mathbb{C}^{n\times n}$, $t\in\mathbb{C}$, we write
$$C_m(I_n + tA) = I_{\binom{n}{m}} + t D_m^{(1)}(A) + t^2 D_m^{(2)}(A) + \cdots + t^m D_m^{(m)}(A).$$
The matrix $D_m^{(r)}(A)$ is called the $r$th derivative of $C_m(A)$. In particular $D_m^{(1)}(A)$ is the directional derivative of $C_m(\cdot)$ at the identity $I_n$ in the direction $A$. It is viewed as the linearization of $C_m(A)$.
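A numerical illustration of this definition: approximate $D_m^{(1)}(A)$ by a central difference of $C_m(I+tA)$ in $t$, and compare its eigenvalues with the sums $\lambda_{\alpha(1)}+\cdots+\lambda_{\alpha(m)}$, $\alpha\in Q_{m,n}$, described in Theorem 5.1.7 below. This is only a Python sketch; the compound helper restates the minors construction and all names are ours.

    from itertools import combinations
    import numpy as np

    def compound(A, m):
        idx = list(combinations(range(A.shape[0]), m))
        return np.array([[np.linalg.det(A[np.ix_(a, b)]) for b in idx] for a in idx])

    n, m, t = 4, 2, 1e-5
    B = np.random.randn(n, n)
    A = (B + B.T) / 2                                   # symmetric, so the eigenvalues are real
    I = np.eye(n)
    D1 = (compound(I + t * A, m) - compound(I - t * A, m)) / (2 * t)   # approximates D_m^(1)(A)

    lam = np.linalg.eigvalsh(A)
    sums = sorted(lam[list(a)].sum() for a in combinations(range(n), m))
    print(np.allclose(np.sort(np.linalg.eigvals(D1).real), sums, atol=1e-4))   # expected True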
(r)
Theorem 5.1.6. If A Cnn is upper triangular, then Dm (A) is upper tri-
angular.
Theorem 5.1.7. (a) $D_m^{(r)}(SAS^{-1}) = C_m(S)\, D_m^{(r)}(A)\, C_m(S)^{-1}$ for all $1\le r\le m$. So, if $A$ and $B$ are similar, then $C_m(A)$ and $C_m(B)$ are similar.
In particular $\lambda_{\alpha(1)} + \cdots + \lambda_{\alpha(m)}$, $\alpha\in Q_{m,n}$, are the eigenvalues of $D_m^{(1)}(A)$.
(c) $D_m^{(r)}(A^*) = D_m^{(r)}(A)^*$.
Proof. (a)
$$\begin{aligned}
C_m(I_n + tSAS^{-1}) &= C_m\big(S(I_n + tA)S^{-1}\big)\\
&= C_m(S)\,C_m(I_n + tA)\,C_m(S^{-1})\\
&= C_m(S)\,C_m(I_n + tA)\,C_m(S)^{-1}\\
&= C_m(S)\big(I_{\binom{n}{m}} + tD_m^{(1)}(A) + t^2 D_m^{(2)}(A) + \cdots + t^m D_m^{(m)}(A)\big)C_m(S)^{-1}\\
&= I_{\binom{n}{m}} + tC_m(S)D_m^{(1)}(A)C_m(S)^{-1} + t^2 C_m(S)D_m^{(2)}(A)C_m(S)^{-1} + \cdots + t^m C_m(S)D_m^{(m)}(A)C_m(S)^{-1}.
\end{aligned}$$
$$D_m^{(r)}(A) = C_m(U)\, D_m^{(r)}(T)\, C_m(U)^*$$
and hence $D_m^{(r)}(A)$ and $D_m^{(r)}(T)$ have the same eigenvalues. From Theorem 5.1.6, $D_m^{(r)}(T)$ is upper triangular so the diagonal entries are the eigenvalues of
$A$. Now
$$\begin{aligned}
C_m(I + tT)_{\alpha,\alpha} &= \det (I_n + tT)[\alpha|\alpha]\\
&= \prod_{i=1}^m \big(1 + t\lambda_{\alpha(i)}\big)\\
&= 1 + t\big(\lambda_{\alpha(1)} + \cdots + \lambda_{\alpha(m)}\big) + t^2\sum_{\nu\in Q_{2,m}}\prod_{i=1}^2 \lambda_{\alpha(\nu(i))} + \cdots + t^r\sum_{\nu\in Q_{r,m}}\prod_{i=1}^r \lambda_{\alpha(\nu(i))} + \cdots
\end{aligned}$$
for all $\alpha\in Q_{m,n}$. So the eigenvalues of $D_m^{(r)}(A)$ are $\sum_{\nu\in Q_{r,m}}\prod_{i=1}^r \lambda_{\alpha(\nu(i))}$, $\alpha\in Q_{m,n}$.
(c) For any $t\in\mathbb{R}$,
$$\begin{aligned}
I_{\binom{n}{m}} + tD_m^{(1)}(A^*) + t^2 D_m^{(2)}(A^*) + \cdots + t^m D_m^{(m)}(A^*) &= C_m(I + tA^*)\\
&= C_m(I + tA)^*\\
&= I_{\binom{n}{m}} + tD_m^{(1)}(A)^* + t^2 D_m^{(2)}(A)^* + \cdots + t^m D_m^{(m)}(A)^*.
\end{aligned}$$
Hence $D_m^{(r)}(A^*) = D_m^{(r)}(A)^*$ for all $r$, as desired.
Let $A\in\mathbb{C}^{n\times n}$ and set $\Delta_m(A) := D_m^{(1)}(A)$. It is called the $m$th additive compound of $A$.
Theorem 5.1.8. $\Delta_m : \mathbb{C}^{n\times n} \to \mathbb{C}^{\binom{n}{m}\times\binom{n}{m}}$ is linear, i.e., $\Delta_m(A+B) = \Delta_m(A) + \Delta_m(B)$ for all $A, B\in\mathbb{C}^{n\times n}$ and $\Delta_m(cA) = c\,\Delta_m(A)$.
Proof. By Theorem 5.1.3(a). Now
$$C_m(I_n + tA)\,C_m(I_n + tB) = \big(I_{\binom{n}{m}} + t\Delta_m(A) + O(t^2)\big)\big(I_{\binom{n}{m}} + t\Delta_m(B) + O(t^2)\big)$$
and
$$\det(VAV^{-1} + UBU^{-1}) = \det\big(V(A + V^{-1}UBU^{-1}V)V^{-1}\big) = \det V\,\det(A + V^{-1}UBU^{-1}V)\,\det V^{-1} = \det(A + V^{-1}UBU^{-1}V),$$
Thus
$$\det(A + UBU^{-1}) = C_n\big((A\ \ I_n)\big)\,C_n\!\begin{pmatrix} I_n\\ UBU^{-1}\end{pmatrix} = C_n\big((A\ \ I_n)\big)\,C_n\!\Big((U\oplus U)\begin{pmatrix} I_n\\ B\end{pmatrix}U^{-1}\Big).$$
Problems
1. Show that for $\wedge^m V$ ($m \le n = \dim V$), $\Delta = D_{m,n}$.
2. Let $T\in\operatorname{End} V$ and $\dim V = n$. Prove that for all $v_1,\dots,v_n\in V$,
$$C_n(T)\, v_1\wedge\cdots\wedge v_n = (\det T)\, v_1\wedge\cdots\wedge v_n.$$
3. Suppose $V_\chi(G)\neq 0$ and $v_1*\cdots*v_m = 0$ whenever $v_1,\dots,v_m$ are linearly dependent. Prove that $V_\chi(G) = \wedge^m V$, i.e., $G = S_m$ and $\chi = \varepsilon$.
4. (Cartan Lemma) Let $e_1,\dots,e_k$ be linearly independent and $\sum_{i=1}^k e_i\wedge v_i = 0$. Prove that there is a symmetric matrix $A\in\mathbb{C}^{k\times k}$ such that $v_i = \sum_{j=1}^k a_{ij}e_j$, $i = 1,\dots,k$. Hint: Extend $e_1,\dots,e_k$ to a basis $e_1,\dots,e_n$ of $V$. Notice that $\{e_i\wedge e_j : 1\le i < j\le n\}$ is a basis of $\wedge^2 V$ and $\sum_{i,j=1}^n a_{ij}\,e_i\wedge e_j = \sum_{i<j}(a_{ij} - a_{ji})\,e_i\wedge e_j$.
5. Let $A\in\mathbb{C}^{n\times n}$, let $E_i = \big(\sum_{j=1}^n |a_{ij}|^2\big)^{1/2}$, and use notation similar to that in Theorem 5.1.4. Prove that
$$\prod_{i=1}^m |\lambda_i| \le \sqrt{\binom{n}{m}}\,\prod_{i=1}^m E_{[i]}, \qquad m = 1,\dots,n$$
and
$$|\lambda_n| \ge \frac{|\det A|}{\sqrt{n}\,\prod_{i=1}^{n-1} E_{[i]}}.$$
Hint: Use Problem 4.6 #2.
$$\operatorname{Sep}(A) = \min_{i\neq j} |\lambda_i - \lambda_j|.$$
Set $L := A\otimes I_n - I_n\otimes A$. Prove
$$\operatorname{Sep}(A) \ge \left(\frac{|\operatorname{tr} C_{n^2-n}(L)|}{\prod_{i=1}^{n^2-n-2} R_{[i]}(L)}\right)^{1/2}.$$
Solutions
1.
2.
3.
4.
5.
6.
7.
then
$$a_\alpha = c\, b_\alpha, \qquad\text{for all } \alpha\in Q_{m,n},$$
for some constant $c\neq 0$. The $\binom{n}{m}$ scalars $a_\alpha$ are called the Plücker coordinates of $v^{\wedge}$. The concept was introduced by Julius Plücker in the 19th century for studying geometry. Clearly not every choice of $\binom{n}{m}$ scalars gives the Plücker coordinates of some $v^{\wedge}$, since not all vectors in $\wedge^m V$ are decomposable. We are going to give a necessary and sufficient condition for Plücker coordinates.
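Numerically, the Plücker coordinates of $v_1\wedge\cdots\wedge v_m$ are the $m\times m$ minors $\det A[1,\dots,m|\alpha]$, $\alpha\in Q_{m,n}$, of the coefficient matrix (Theorem 5.1.1(g), used below), and the proportionality statement above can be observed by changing the basis of the spanned subspace. A small Python sketch (the helper name pluecker is ours):

    from itertools import combinations
    import numpy as np

    def pluecker(V):
        # rows of V are v_1,...,v_m in C^n; returns (det V[:, alpha]) for alpha in Q_{m,n}
        m, n = V.shape
        return np.array([np.linalg.det(V[:, list(a)]) for a in combinations(range(n), m)])

    m, n = 2, 4
    V = np.random.randn(m, n)              # rows span an m-dimensional subspace
    S = np.random.randn(m, m)              # change of basis (invertible almost surely)
    a, b = pluecker(V), pluecker(S @ V)    # coordinates of two bases of the same subspace
    print(np.allclose(b, np.linalg.det(S) * a))   # proportional: b_alpha = (det S) a_alpha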
By Theorem 5.1.1(c),
$$\sum_{\alpha\in Q_{m,n}} a_\alpha\, e_\alpha^{\wedge} = v^{\wedge} = \varepsilon(\sigma)\, v_\sigma^{\wedge} = \sum_{\alpha\in Q_{m,n}} \varepsilon(\sigma)\, a_\alpha\, e_{\alpha\sigma}^{\wedge}$$
so that
$$a_{\alpha\sigma} = \varepsilon(\sigma)\, a_\alpha.$$
So it is a necessary condition and it motivates the following definition.
Let $p : \Gamma_{m,n} \to \mathbb{C}$ be a function satisfying
The following result gives the relationship between decomposability and subdeterminants.
Theorem 5.2.1. Let $1\le m\le n$. If $p$ satisfies (5.7) and is not identically zero, then the following are equivalent.
1. $z = \sum_{\alpha\in Q_{m,n}} p(\alpha)\, e_\alpha^{\wedge}$ is decomposable.
Proof.
(a) $\Rightarrow$ (b). Suppose $z = \sum_{\alpha\in Q_{m,n}} p(\alpha)e_\alpha^{\wedge}$ is decomposable, i.e., $z = v_1\wedge\cdots\wedge v_m$. Write $v_i = \sum_{j=1}^n a_{ij}e_j$, $i = 1,\dots,m$, where $E = \{e_1,\dots,e_n\}$ is a basis of $V$. By Theorem 5.1.1(g),
$$z = v^{\wedge} = \sum_{\alpha\in Q_{m,n}} \det A[1,\dots,m|\alpha]\, e_\alpha^{\wedge}$$
is decomposable.
(b) $\Rightarrow$ (c). For any $A\in\mathbb{C}^{m\times n}$ and $\alpha, \beta\in Q_{m,n}$, let
$$S := A[1,\dots,m|\alpha] = \big(A^{(\alpha(1))}\ \cdots\ A^{(\alpha(m))}\big)\in\mathbb{C}^{m\times m},$$
where $A^{(i)}$ denotes the $i$th column of $A$. Similarly let
$$T := A[1,\dots,m|\beta] = \big(A^{(\beta(1))}\ \cdots\ A^{(\beta(m))}\big)\in\mathbb{C}^{m\times m}.$$
If we define $X\in\mathbb{C}^{m\times m}$ by
$$x_{ij} = \det\big(A^{(\alpha(1))}\ \cdots\ A^{(\alpha(i-1))}\ A^{(\beta(j))}\ A^{(\alpha(i+1))}\ \cdots\ A^{(\alpha(m))}\big), \qquad 1\le i, j\le m,$$
then by Cramer's rule
$$SX = (\det S)\, T.$$
Taking determinants on both sides,
$$\det S\,\det X = \det T\,(\det S)^m.$$
If $S$ is invertible, then
$$\det X = \det T\,(\det S)^{m-1}. \qquad (5.8)$$
If $S$ is not invertible, then replace $S$ by $S + tI_m$. Since $\det B$ is a polynomial function of its entries $b_{ij}$, we obtain (5.8) by a continuity argument.
Suppose that (b) is true. Then
$$\det S = \det A[1,\dots,m|\alpha] = p(\alpha), \qquad \det T = \det A[1,\dots,m|\beta] = p(\beta)$$
and
$$\begin{aligned}
x_{ij} &= \det\big(A^{(\alpha(1))}\ \cdots\ A^{(\alpha(i-1))}\ A^{(\beta(j))}\ A^{(\alpha(i+1))}\ \cdots\ A^{(\alpha(m))}\big)\\
&= \det A[1,\dots,m|\alpha(1),\dots,\alpha(i-1),\beta(j),\alpha(i+1),\dots,\alpha(m)]\\
&= p\big(\alpha(1),\dots,\alpha(i-1),\beta(j),\alpha(i+1),\dots,\alpha(m)\big)\\
&= p\big(\alpha[i, j : \beta]\big).
\end{aligned}$$
Suppose (c) is true. Since $p$ is not a zero polynomial, there is some $\alpha\in Q_{m,n}$ such that $p(\alpha)\neq 0$. Define $A\in\mathbb{C}^{m\times n}$ by
$$a_{ij} = p(\alpha)^{\frac{1}{m} - 1}\, p\big(\alpha(1),\dots,\alpha(i-1),\, j,\, \alpha(i+1),\dots,\alpha(m)\big), \qquad 1\le i\le m,\ 1\le j\le n.$$
$$P(\alpha,\beta)\,P(\beta,\gamma) = p(\beta)\,P(\alpha,\gamma), \qquad \alpha,\beta,\gamma\in\Gamma_{m,n}$$
is also a necessary and sufficient condition (but we will not discuss it in detail).
Let Dm,n . Denote by Im denotes the set of the components of . We
say that , Dm,n differ by k components if |Im Im | = m k. For
example = (1, 3, 5) and = (3, 5, 6) differ by one component. We say that
1 , . . . , k Qm,n form a chain if all consecutive sequences differ by one com-
ponent. The following is an important necessary condition for decomposability.
Theorem 5.2.2. Let $z = \sum_{\alpha\in Q_{m,n}} p(\alpha)e_\alpha^{\wedge}\in\wedge^m V$ (with $\dim V = n$) be decomposable. If $p(\alpha)\neq 0$, $p(\beta)\neq 0$ and $\alpha$ and $\beta$ differ by $k$ components, then there are $k-1$ different sequences $\omega_1,\dots,\omega_{k-1}\in Q_{m,n}$ such that $p(\omega_i)\neq 0$, $i = 1,\dots,k-1$, and $\alpha, \omega_1,\dots,\omega_{k-1}, \beta$ form a chain. In other words, any two nonzero coordinates of a decomposable $v^{\wedge}$ are connected by a nonzero chain.
Proof. When $k = 1$, the result is trivial. Suppose $k > 1$. Use (5.7) to extend the domain of $p$ to $\Gamma_{m,n}$. By Theorem 5.2.1
so one term in the expansion of the left side must be nonzero, i.e., there is $\sigma\in S_m$ such that
$$\prod_{i=1}^m p\big(\alpha[i, \sigma(i) : \beta]\big) \neq 0.$$
Theorem 5.2.4. Let $v_1\wedge\cdots\wedge v_m = \sum_{\alpha\in Q_{m,n}} p(\alpha)e_\alpha^{\wedge} \neq 0$. If $p(\beta)\neq 0$, let $u_i = \sum_{j=1}^n p(\beta[i, j])\,e_j$, $i = 1,\dots,m$. Then $u_1\wedge\cdots\wedge u_m = p(\beta)^{m-1}\, v_1\wedge\cdots\wedge v_m$. Thus $\langle u_1,\dots,u_m\rangle = \langle v_1,\dots,v_m\rangle$. Here
By Theorem 5.2.1,
$$z = e_1\wedge e_2\wedge e_3 + 2\,e_1\wedge e_2\wedge e_4 - e_1\wedge e_3\wedge e_4 - 3\,e_2\wedge e_3\wedge e_4 \in \wedge^3 V$$
Problems
1. Let $v_1\wedge\cdots\wedge v_m = \sum_{\omega\in Q_{m,n}} p(\omega)e_\omega^{\wedge} \neq 0$. Prove that for any $\beta\in\Gamma_{m,n}$,
$$u_i = \sum_{j=1}^n p(\beta[i, j])\,e_j \in \langle v_1,\dots,v_m\rangle, \qquad i = 1,\dots,m.$$
plays an important role for the completely symmetric space, as the determinant does for the exterior space.
(e) If $u_1*\cdots*u_m = v_1*\cdots*v_m \neq 0$, then there is $\sigma\in S_m$ and $d_i\neq 0$ such that $u_i = d_i v_{\sigma(i)}$, $i = 1,\dots,m$, and $\prod_{i=1}^m d_i = 1$.
(f) If $v_i = \sum_{j=1}^n a_{ij}e_j$, $i = 1,\dots,m$, then
$$v_1*\cdots*v_m = \sum_{\omega\in G_{m,n}} \frac{1}{\nu(\omega)}\operatorname{per} A[1,\dots,m|\omega]\, e_\omega^{*}.$$
$\bigcup_{i=1}^m v_i^{\perp}$ of the finitely many hyperplanes $v_i^{\perp}$ is not dense in $V$. So there is $w\in V$ such that
$$\prod_{t=1}^m (w, v_t) \neq 0.$$
On the other hand, for any $w\in V$,
$$(w*\cdots*w,\; v_1*\cdots*v_m) = \big(T(G,1)(w\otimes\cdots\otimes w),\; v_1*\cdots*v_m\big) = (w\otimes\cdots\otimes w,\; v_1\otimes\cdots\otimes v_m) = \prod_{t=1}^m (w, v_t).$$
So $v_1*\cdots*v_m = 0$ would imply $\prod_{t=1}^m (w, v_t) = 0$ for all $w\in V$. Thus $v_1*\cdots*v_m \neq 0$.
(e) If $u_1*\cdots*u_m = v_1*\cdots*v_m \neq 0$, then for any $w\in V$,
$$\prod_{t=1}^m (w, u_t) = (w\otimes\cdots\otimes w,\; u_1*\cdots*u_m) = (w\otimes\cdots\otimes w,\; v_1*\cdots*v_m) = \prod_{t=1}^m (w, v_t). \qquad (5.9)$$
Apply the claim in the proof of (d) to the space $v_1^{\perp}$ to get $y_i = 0$ for some $i$, i.e., $u_i = x_i\in\langle v_1\rangle$. So there is $d_i\neq 0$ such that $u_i = d_i v_1$. Substitute it into (5.9):
$$0 = (w, v_1)\Big(\prod_{t=2}^m (w, v_t) - d_i\prod_{t\neq i}(w, u_t)\Big). \qquad (5.10)$$
Then apply a continuity argument, so that the above equality is valid for all $w\in V$. Then use induction.
(f)
$$\begin{aligned}
v_1*\cdots*v_m &= \Big(\sum_{j=1}^n a_{1j}e_j\Big)*\cdots*\Big(\sum_{j=1}^n a_{mj}e_j\Big)\\
&= \sum_{\gamma\in\Gamma_{m,n}}\prod_{i=1}^m a_{i\gamma(i)}\, e_\gamma^{*}\\
&= \sum_{\omega\in G_{m,n}}\frac{1}{\nu(\omega)}\sum_{\sigma\in S_m}\prod_{i=1}^m a_{i\omega\sigma(i)}\, e_\omega^{*} \qquad\text{(Theorem 4.4.3)}\\
&= \sum_{\omega\in G_{m,n}}\frac{1}{\nu(\omega)}\operatorname{per} A[1,\dots,m|\omega]\, e_\omega^{*}.
\end{aligned}$$
Problems
1. Prove the Cauchy-Binet formula for the permanent: Let $A, B\in\mathbb{C}^{n\times n}$. For any $\alpha, \beta\in G_{m,n}$,
$$\operatorname{per}\big((AB)[\alpha|\beta]\big) = \sum_{\omega\in G_{m,n}} \frac{1}{\nu(\omega)}\operatorname{per}(A[\alpha|\omega])\operatorname{per}(B[\omega|\beta]).$$
3. Let $G < S_m$. Check which properties of Theorem 5.3.1 remain true for $V_1(G)$.
4. Prove that the set $\{v*\cdots*v : v\in V\}$ generates $V^{(m)} = V_1(S_m)$.
$$\frac{d}{dt}\operatorname{per} A(t) = \sum_{i,j=1}^n \frac{d\,a_{ij}(t)}{dt}\operatorname{per} A(i|j).$$
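This differentiation formula can be verified numerically with $A(t) = A_0 + tA_1$, so that $a_{ij}'(t) = (A_1)_{ij}$. A minimal Python sketch (brute-force permanent; helper names are ours):

    from itertools import permutations
    import numpy as np

    def per(A):
        m = A.shape[0]
        return sum(np.prod([A[t, s[t]] for t in range(m)]) for s in permutations(range(m)))

    n, h = 4, 1e-6
    A0, A1 = np.random.randn(n, n), np.random.randn(n, n)          # A(t) = A0 + t*A1
    lhs = (per(A0 + h * A1) - per(A0 - h * A1)) / (2 * h)          # numerical d/dt per A(t) at t = 0
    rhs = sum(A1[i, j] * per(np.delete(np.delete(A0, i, 0), j, 1)) # sum_{i,j} a_ij'(0) per A(i|j)
              for i in range(n) for j in range(n))
    print(np.isclose(lhs, rhs, atol=1e-4))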
Chapter 6
Research topics
(1) X (1) X
(e , e ) = ( 1 ) = ( 1 ),
|G| |G|
G G
the first equality from [1, p. 339] and the second from the observations that
G 1 = G and ( 1 ) = ( 1 1 ).
We have $\dim V_\gamma^* = \chi(1)\,(\chi, 1)_{G_\gamma}$.
The group $G$ is 2-transitive if, with respect to the componentwise action, it is transitive on the set of pairs $(i, j)$, with $i, j = 1,\dots,m$, $i\neq j$. Note that if $G$ is 2-transitive, then for any $i = 1,\dots,m$, the subgroup $\{\sigma\in G \mid \sigma(i) = i\}$ of $G$ is transitive on the set $\{1,\dots,m\}\setminus\{i\}$.
Proof. By the remarks above, it is enough to show that $V_\chi(G)$ does not have an o-basis for some $\chi\in I(G)$.
Let $H = \{\sigma\in G \mid \sigma(m) = m\} < G$ and denote by $\theta$ the induced character $(1_H)^G$, so that $\theta(\sigma) = |\{i \mid \sigma(i) = i\}|$ for $\sigma\in G$ (see [4, p. 68]).
Let $\sigma\in G\setminus H$ and for $i = 1,\dots,m$, set $R_i = \{\tau\in H \mid \tau\sigma(i) = i\}$. Clearly, $R_i = \emptyset$ if $i\in\{m, \sigma^{-1}(m)\}$. Assume $i\notin\{m, \sigma^{-1}(m)\}$. Since $H$ acts transitively
$$\sum_{\tau\in H}\theta(\tau\sigma) = \sum_{i=1}^m |R_i| = \sum_{i\neq m,\,\sigma^{-1}(m)} |R_i| = \frac{m-2}{m-1}\,|H|.$$
Problems
1. Show that if $G$ is abelian, then $V_\chi(G)$ has an o-basis.
2. Give a proof of Corollary 6.1.2 by showing that (i) $S_m$ is 2-transitive for all $m$ and (ii) the alternating group $A_m$ is 2-transitive if $m \ge 4$.
3. Show that if $G$ is the dihedral group $D_m \le S_n$, then $V_\chi(G)$ has an o-basis if and only if $m$ is a power of 2.
Bibliography
[1] A.C. Aitken, The normal form of compound and induced matrices, Proc. London Math. Soc. 38 (1934), 354–376.
[2] M. Fiedler, Bounds for the determinant of the sum of hermitian matrices, Proc. Amer. Math. Soc. 30 (1971), 27–31.
[3] R.R. Holmes, Orthogonal bases of symmetrized tensor spaces, Linear and Multilinear Algebra 39 (1995), 241–243.
[4] R.R. Holmes and T.Y. Tam, Symmetry classes of tensors associated with certain groups, Linear and Multilinear Algebra 32 (1992), 21–31.
[5] R.R. Holmes, C.K. Li, T.Y. Tam, Numerical range of the derivation of an induced operator, J. Aust. Math. Soc. 82 (2007), 325–343.
[6] C. Gamas, Conditions for a symmetrized decomposable tensor to be zero, Linear Algebra Appl. 108 (1988), 83–119.
[7] A. Horn, On the eigenvalues of a matrix with prescribed singular values, Proc. Amer. Math. Soc. 5 (1954), 4–7.
[8] I.M. Isaacs, Character Theory of Finite Groups, Academic Press, New York,
1976.
[9] G. James and A. Kerber, The Representation Theory of the Symmetric
Group, Encyclopedia of Mathematics and Its Applications, Volume 16,
Addison-Wesley Publishing Company, 1981.
[10] G. James and M. Liebeck, Representations and Characters of Groups, Cam-
bridge Mathematical Textbooks, Cambridge University Press, 1993.
[11] B. Kostant, On convexity, the Weyl group and Iwasawa decomposition, Ann. Sci. École Norm. Sup. (4) 6 (1973), 413–460.
[12] C.K. Li and T.Y. Tam, Operator properties of T and K(T), Linear Algebra Appl. 401 (2005), 173–191.
[13] D.E. Littlewood, On induced and compound matrices, Proc. London Math. Soc. 40 (1936), 370–380.
[18] R. Merris, The structure of higher degree symmetry classes of tensors II, Linear and Multilinear Algebra 6 (1978), 171–178.
[20] R. Merris, Multilinear Algebra, Gordon and Breach Science Publishers, Am-
sterdam, 1997.
[21] U. Prells, M.I. Friswell, S.D. Garvey, Use of geometric algebra: Compound matrices and the determinant of the sum of two matrices, Proc. R. Soc. Lond. A 459 (2003), 273–285.
[23] J.A. Dias da Silva, Colorings and equality of tensors, Linear Algebra Appl. 342 (2002), 79–91.
[24] J.A. Dias da Silva and M.M. Torres, On the orthogonal dimension of orbital sets, Linear Algebra Appl. 401 (2005), 77–107.
[25] J.A. Dias da Silva and H.F. da Cruz, Equality of immanantal decomposable tensors, Linear Algebra Appl. 401 (2005), 29–46.
[26] J.P. Serre, Linear Representations of Finite Groups, Springer, New York,
1977.
[27] T.Y. Tam, A. Horn's result on matrices with prescribed singular values and eigenvalues, Electronic Journal of Linear Algebra 21 (2010), 25–27.
[28] T.Y. Tam and M.C. Thompson, Determinant and Pfaffian of sum of skew symmetric matrices, Linear Algebra Appl. 433 (2010), 412–423.
[29] T.Y. Tam and M.C. Thompson, Determinants of sum of orbits under com-
pact Lie group, to appear in Linear Algebra Appl.
[30] T.Y. Tam and M.C. Thompson, Determinants of sums of two real matrices
and their extensions, submitted for publication.
[31] J.H. van Lint, Notes on Egoritsjev's proof of the van der Waerden conjecture, Linear Algebra Appl. 39 (1981), 1–8.
[32] J.H. van Lint, The van der Waerden conjecture: two proofs in one year, Math. Intelligencer 4 (1982), 72–77.