In each of the following exercises, interpret the subset W of R^n geometrically by sketching a graph for W.

13. W = {x: x = [x1; x2], x1 = -3x2}

14. W = {w: w = [a; b], b any real number}

18. W = {x: x = [a; b], a > 0}

20. W = {w: w = [a; b], a and b any real numbers}

21. W = {u: u = [a; b; c], a^2 + b^2 + c^2 = 1}

In Exercises 27-30, give a set-theoretic description of the given points as a subset W of R^3.

27. The points on the plane x + y - 2z = 0

28. The points on the line with parametric equations x = 2t, y = -3t, and z = t

29. The points in the yz-plane

30. The points in the plane y = 2
168 Chapter 3 The Vector Space R^n

If
x = [x1; x2] and y = [y1; y2],
then the difference x - y is given by
x - y = [x1 - y1; x2 - y2].
In the context of R^n, scalars are always real numbers. In particular, throughout this chapter, the term scalar always means a real number.
The following theorem gives the arithmetic properties of vector addition and scalar multiplication. Note that the statements in this theorem are already familiar from Section 1.6, which discusses the arithmetic properties of matrix operations (a vector in R^n is an (n x 1) matrix, and hence the properties of matrix addition and scalar multiplication listed in Section 1.6 are inherited by vectors in R^n).
As we will see in Chapter 5, any set that satisfies the properties of Theorem 1 is called a vector space; thus for each positive integer n, R^n is an example of a vector space.

THEOREM 1  If x, y, and z are vectors in R^n and a and b are scalars, then the following properties hold.
THEOREM 2  A subset W of R^n is a subspace of R^n if and only if the following conditions are met:
(s1)* The zero vector, θ, is in W.
(s2) x + y is in W whenever x and y are in W.
(s3) ax is in W whenever x is in W and a is any scalar.

*The usual statement of Theorem 2 lists only conditions (s2) and (s3) but assumes that the subset W is nonempty. Thus (s1) replaces the assumption that W is nonempty. The two versions are equivalent (see Exercise 34).
Let u and v be vectors in W,
u = [u1; u2; u3] and v = [v1; v2; v3],
so that
u1 = u2 - u3 and v1 = v2 - v3.  (1)
The sum u + v and the scalar multiple au are given by
u + v = [u1 + v1; u2 + v2; u3 + v3] and au = [au1; au2; au3].
From (1),
u1 + v1 = (u2 - u3) + (v2 - v3) = (u2 + v2) - (u3 + v3).
Thus if the components of u and v satisfy the condition x1 = x2 - x3, then so do the components of the sum u + v. This argument shows that condition (s2) is met by W. Similarly, from (1),
au1 = a(u2 - u3) = au2 - au3,
so the components of au satisfy the condition as well, and condition (s3) is met by W.

[Figure: the plane x1 = x2 - x3 through the origin, containing the points (0, 1, 1) and (1, 1, 0).]
The next example illustrates again the use of the procedure described above to verify that a subset W of R^n is a subspace.
EXAMPLE 2  Let W be the subset of R^3 defined by
W = {x: x = [x1; x2; x3], x2 = 2x1, x3 = 3x1, x1 any real number}.
Verify that W is a subspace of R^3 and give a geometric interpretation of W.
Solution  For clarity in this initial example, we explicitly number the five steps used to verify that W is a subspace.
1. The algebraic conditions for x to be in W are
   x2 = 2x1 and x3 = 3x1.  (4)
2. The zero vector, θ, clearly satisfies (4), so condition (s1) is met by W.
3. Let u and v be two vectors in W,
   u = [u1; u2; u3] and v = [v1; v2; v3].
   Then, by (4),
   u2 = 2u1 and u3 = 3u1,  (5a)
   v2 = 2v1 and v3 = 3v1.  (5b)
4. Next, check whether the sum, u + v, is in W. (That is, does the vector u + v satisfy Eq. (4)?) Now, the sum u + v is given by
   u + v = [u1 + v1; u2 + v2; u3 + v3].
   Using (5a) and (5b) gives u2 + v2 = 2(u1 + v1) and u3 + v3 = 3(u1 + v1). Therefore, u + v is in W.
5. Similarly, the scalar multiple au is given by
   au = [au1; au2; au3].
   Using (5a) gives au2 = a(2u1) = 2(au1) and au3 = a(3u1) = 3(au1). Therefore, au is in W whenever u is in W (see Eq. (4)).
Thus, by Theorem 2, W is a subspace of R^3. Geometrically, W is a line through the origin with parametric equations
x = x1
y = 2x1
z = 3x1.
The graph of the line is given in Fig. 3.7.
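The five-step verification above is mechanical enough to spot-check numerically. The following sketch is an illustration of ours, not part of the text: it encodes the membership condition (4) and confirms closure under addition and scalar multiplication for a few sample vectors of the form (x1, 2x1, 3x1).

```python
from fractions import Fraction

def in_W(x):
    # Membership condition (4): x2 = 2*x1 and x3 = 3*x1.
    x1, x2, x3 = x
    return x2 == 2 * x1 and x3 == 3 * x1

def add(x, y):
    return [a + b for a, b in zip(x, y)]

def scale(a, x):
    return [a * c for c in x]

# (s1): the zero vector satisfies condition (4).
zero = [0, 0, 0]

# Two sample members of W, taken with x1 = 1 and x1 = 1/2.
u = [1, 2, 3]
v = [Fraction(1, 2), 1, Fraction(3, 2)]
```

Exact rational arithmetic (`Fraction`) keeps the equality tests in `in_W` free of floating-point round-off.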
Exercise 29 shows that any line in three-space through the origin is a subspace of R^3, and Example 3 of Section 3.3 shows that in three-space any plane through the origin is a subspace. Also note that for each positive integer n, R^n is a subspace of itself and {θ} is a subspace of R^n. We conclude this section with examples of subsets that are not subspaces.
[Figure 3.7: the line with parametric equations x = x1, y = 2x1, z = 3x1, passing through the origin and the point (1, 2, 3).]

EXAMPLE 3  Let W be the subset of R^3 defined by
W = {x: x = [x1; x2; 1], x1 and x2 any real numbers}.
Show that W is not a subspace of R^3.
Solution  To show that W is not a subspace of R^3, we need only verify that at least one of the properties (s1)-(s3) of Theorem 2 fails. Note that geometrically W can be interpreted as the plane z = 1, which does not contain the origin. In other words, the zero vector, θ, is not in W. Because condition (s1) of Theorem 2 is not met, W is not a subspace of R^3. Although it is not necessary to do so, in this example we can also show that both conditions (s2) and (s3) of Theorem 2 fail. To see this, let x and y be in W, where
x = [x1; x2; 1] and y = [y1; y2; 1].
Then x + y is given by
x + y = [x1 + y1; x2 + y2; 2].
In particular, x + y is not in W because the third component of x + y does not have the value 1. Similarly,
ax = [ax1; ax2; a],
so ax is not in W unless a = 1.
EXAMPLE 4  Let W be the subset of R^2 consisting of all vectors with integer components. Show that W is not a subspace of R^2.
Solution  In this case θ is in W, and it is easy to see that if x and y are in W, then so is x + y. If we set
x = [1; 1]
and a = 1/2, then x is in W but ax is not. Therefore, condition (s3) of Theorem 2 is not met by W, and hence W is not a subspace of R^2.
EXAMPLE 5  Let W be the subset of R^2 defined by
W = {x: x = [x1; x2], where either x1 = 0 or x2 = 0}.
Show that W is not a subspace of R^2.
Solution  Let x and y be defined by
x = [1; 0] and y = [0; 1].
Then x and y are in W, but
x + y = [1; 1]
is not in W, so W is not a subspace of R^2. Note that θ is in W, and for any vector x in W and any scalar a, ax is again in W. Geometrically, W is the set of points in the plane that lie either on the x-axis or on the y-axis. Either of these axes alone is a subspace of R^2, but, as this example demonstrates, their union is not a subspace.
EXERCISES
In Exercises 1-8, W is a subset of R^2 consisting of vectors of the form
x = [x1; x2].
In each case determine whether W is a subspace of R^2. If W is a subspace, then give a geometric description of W.

1. W = {x: x1 = 2x2}
2. W = {x: x1 - x2 = 2}
3. W = {x: x1 = x2 or x1 = -x2}
4. W = {x: x1 and x2 are rational numbers}
7. W = {x: x1 + x2 = 1}
8. W = {x: x1x2 = 0}

In Exercises 9-17, W is a subset of R^3 consisting of vectors of the form
x = [x1; x2; x3].
In each case, determine whether W is a subspace of R^3. If W is a subspace, then give a geometric description of W.

9. W = {x: x3 = 2x1 - x2}
21. Let a and b be fixed vectors in R^3, and let W be the subset of R^3 defined by
W = {x: a^T x = 0 and b^T x = 0}.
Prove that W is a subspace of R^3.

In Exercises 22-25, W is the subspace of R^3 defined in Exercise 21. For each choice of a and b, give a geometric description of W.

30. If U and V are subsets of R^n, then the set U + V is defined by
U + V = {x: x = u + v, u in U and v in V}.
Prove that if U and V are subspaces of R^n, then U + V is a subspace of R^n.

31. Let U and V be subspaces of R^n. Prove that the intersection, U ∩ V, is also a subspace of R^n.

32. Let U and V be the subspaces of R^3 defined by
U = {x: a^T x = 0} and V = {x: b^T x = 0},
where a and b are nonzero vectors in R^3.
a) Show that the union, U ∪ V, satisfies properties (s1) and (s3) of Theorem 2.
b) If neither U nor V is a subset of the other, show that U ∪ V does not satisfy condition (s2) of Theorem 2. [Hint: Choose vectors u and v such that u is in U but not in V and v is in V but not in U. Assume that u + v is in either U or V and reach a contradiction.]

34. Let W be a nonempty subset of R^n that satisfies conditions (s2) and (s3) of Theorem 2. Prove that θ is in W and conclude that W is a subspace of R^n. (Thus property (s1) of Theorem 2 can be replaced with the assumption that W is nonempty.)
EXAMPLES OF SUBSPACES

In this section we introduce several important and particularly useful examples of subspaces of R^n.

THEOREM 3  If v1, ..., vr are vectors in R^n, then the set W consisting of all linear combinations of v1, ..., vr is a subspace of R^n.

Proof  To show that W is a subspace of R^n, we must verify that the three conditions of Theorem 2 are satisfied. Now θ is in W because
θ = 0v1 + ... + 0vr.
Next, suppose that y and z are in W. Then there exist scalars a1, ..., ar, b1, ..., br such that
y = a1v1 + ... + arvr and z = b1v1 + ... + brvr.
Thus,
y + z = (a1 + b1)v1 + ... + (ar + br)vr,
so y + z is a linear combination of v1, ..., vr; that is, y + z is in W. Also, for any scalar c,
cy = (ca1)v1 + ... + (car)vr.
In particular, cy is in W. It follows from Theorem 2 that W is a subspace of R^n.
If S = {v1, ..., vr} is a subset of R^n, then the subspace W consisting of all linear combinations of v1, ..., vr is called the subspace spanned by S and is denoted by
Sp(S) or Sp{v1, ..., vr}.
For example, let v be the vector in R^3 given by
v = [1; 2; 3].
Then
Sp{v} = {rv: r any real number}.
Thus Sp{v} is the line with parametric equations
x = r
y = 2r
z = 3r.
Equivalently, Sp{v} is the line that passes through the origin and through the point with coordinates 1, 2, and 3 (see Fig. 3.9).
If u and v are noncollinear geometric vectors, then
Sp{u, v} = {au + bv: a, b any real numbers}
is the plane containing u and v (see Fig. 3.10). The following example illustrates this case with a subspace of R^3.
[Figure 3.10: the plane Sp{u, v}; the vector au + bv is the diagonal of the parallelogram with sides au and bv.]
EXAMPLE 1  Let u and v be the three-dimensional vectors
u = [2; 1; 0] and v = [0; 1; 2].
Determine W = Sp{u, v} and give a geometric interpretation of W.
Solution  Let y be an arbitrary vector in R^3,
y = [y1; y2; y3].
Then y is in W if and only if there exist scalars x1 and x2 such that
y = x1u + x2v.  (1)
That is, y is in W if and only if there exist scalars x1 and x2 such that
y1 = 2x1
y2 = x1 + x2  (2)
y3 = 2x2.
The augmented matrix for linear system (2) is
[2 0 y1; 1 1 y2; 0 2 y3],
and this matrix is row equivalent to
[1 0 (1/2)y1; 0 1 y2 - (1/2)y1; 0 0 (1/2)y3 + (1/2)y1 - y2]  (3)
in echelon form. Therefore, linear system (2) is consistent if and only if (1/2)y1 - y2 + (1/2)y3 = 0, or equivalently, if and only if
y1 - 2y2 + y3 = 0.  (4)
Thus W is the subspace given by
W = {y: y = [y1; y2; y3], y1 - 2y2 + y3 = 0}.  (5)
It also follows from Eq. (5) that geometrically W is the plane in three-space with equation x - 2y + z = 0 (see Fig. 3.11).

[Figure 3.11: a portion of the plane x - 2y + z = 0.]
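The algebraic specification (5) is easy to sanity-check: every combination x1*u + x2*v must satisfy y1 - 2*y2 + y3 = 0, and any vector violating that equation lies outside W. A small illustrative script of ours (the helper names are not from the text):

```python
U = (2, 1, 0)   # the vector u of the example
V = (0, 1, 2)   # the vector v of the example

def in_W(y):
    # Condition (4)/(5): y1 - 2*y2 + y3 == 0.
    y1, y2, y3 = y
    return y1 - 2 * y2 + y3 == 0

def combo(x1, x2):
    # Form the linear combination x1*u + x2*v.
    return [x1 * a + x2 * b for a, b in zip(U, V)]
```

Because the condition is linear, checking it on u, on v, and on one arbitrary combination already exercises the whole specification.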
DEFINITION 1  Let A be an (m x n) matrix. The null space of A [denoted N(A)] is the set of vectors in R^n defined by
N(A) = {x: Ax = θ, x in R^n}.

In words, the null space consists of all those x such that Ax is the zero vector. The next theorem shows that the null space of an (m x n) matrix A is a subspace of R^n.

THEOREM 4  If A is an (m x n) matrix, then N(A) is a subspace of R^n.
Proof  First, note that
Aθ = θ,  (6)
and so θ is in N(A). (Note: In Eq. (6), the left θ is in R^n but the right θ is in R^m.) Now let u and v be vectors in N(A). Then u and v are in R^n and
Au = θ and Av = θ.  (7)
Thus
A(u + v) = Au + Av = θ + θ = θ,
and therefore u + v is in N(A). Similarly, for any scalar a, it follows from Eq. (7) that
A(au) = aAu = aθ = θ,
so au is in N(A). By Theorem 2, N(A) is a subspace of R^n.
EXAMPLE 2  Describe N(A), where A is the (3 x 4) matrix
A = [1 1 3 1; 2 1 5 4; 1 2 4 -1].
Solution  N(A) is determined by solving the homogeneous system
Ax = θ.  (8)
This is accomplished by reducing the augmented matrix [A | θ] to echelon form. It is easy to verify that [A | θ] is row equivalent to
[1 0 2 3 0; 0 1 1 -2 0; 0 0 0 0 0].
Solving the corresponding reduced system yields
x1 = -2x3 - 3x4
x2 = -x3 + 2x4
as the solution to Eq. (8). Thus a vector x in R^4,
x = [x1; x2; x3; x4],
is in N(A) if and only if it can be written in the form
x = [-2x3 - 3x4; -x3 + 2x4; x3; x4] = x3[-2; -1; 1; 0] + x4[-3; 2; 0; 1],
where x3 and x4 are arbitrary; that is,
N(A) = {x: x = x3[-2; -1; 1; 0] + x4[-3; 2; 0; 1], x3 and x4 any real numbers}.
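As a quick check on the description just derived, one can verify directly that A annihilates the two generating vectors and hence every combination of them. A minimal sketch, assuming the matrix A as printed in the example (the helper names are ours):

```python
A = [[1, 1, 3, 1],
     [2, 1, 5, 4],
     [1, 2, 4, -1]]

def matvec(M, x):
    # Multiply the matrix M by the vector x.
    return [sum(row[i] * x[i] for i in range(len(x))) for row in M]

u1 = [-2, -1, 1, 0]
u2 = [-3, 2, 0, 1]

def element_of_null_space(x3, x4):
    # The general element x3*u1 + x4*u2 of N(A).
    return [x3 * a + x4 * b for a, b in zip(u1, u2)]
```

Since N(A) is closed under linear combinations, testing one arbitrary pair (x3, x4) alongside u1 and u2 themselves is already convincing.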
3.3 Examples of Subspaces 181
As the next example demonstrates, the fact that N(A) is a subspace can be used to show that in three-space every plane through the origin is a subspace.

EXAMPLE 3  Verify that any plane through the origin in R^3 is a subspace of R^3.
Solution  The equation of a plane in three-space through the origin is
ax + by + cz = 0,  (9)
where a, b, and c are specified constants not all of which are zero. Now, Eq. (9) can be written as
Ax = θ,
where A is the (1 x 3) matrix A = [a b c] and x = [x; y; z]. Thus x is on the plane defined by Eq. (9) if and only if x is in N(A). Since N(A) is a subspace of R^3 by Theorem 4, any plane through the origin is a subspace of R^3.
DEFINITION 2  Let A be an (m x n) matrix. The range of A [denoted R(A)] is the set of vectors in R^m defined by
R(A) = {y: y = Ax for some x in R^n}.

In words, the range of A consists of the set of all vectors y in R^m such that the linear system
Ax = y
is consistent. We saw in Section 1.5 (see Theorem 5) that if the (m x n) matrix A has columns A1, A2, ..., An and if
x = [x1; x2; ...; xn],
then the matrix equation
Ax = y
is equivalent to the vector equation
x1A1 + x2A2 + ... + xnAn = y.

THEOREM 5  If A is an (m x n) matrix and if R(A) is the range of A, then R(A) is a subspace of R^m.

The next example illustrates a way to give an algebraic specification for R(A).
EXAMPLE 4  Describe the range of A, where A is the (3 x 4) matrix
A = [1 1 3 1; 2 1 5 4; 1 2 4 -1].
Solution  Let b be an arbitrary vector in R^3,
b = [b1; b2; b3].
Then b is in R(A) if and only if the system of equations
Ax = b
is consistent. The augmented matrix for the system is
[A | b] = [1 1 3 1 b1; 2 1 5 4 b2; 1 2 4 -1 b3],
which is equivalent to
[1 0 2 3 b2 - b1; 0 1 1 -2 2b1 - b2; 0 0 0 0 -3b1 + b2 + b3].
It follows that Ax = b has a solution [or equivalently, b is in R(A)] if and only if -3b1 + b2 + b3 = 0, or b3 = 3b1 - b2, where b1 and b2 are arbitrary. Thus
R(A) = {b: b = [b1; b2; 3b1 - b2], b1 and b2 any real numbers}.
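The consistency condition -3*b1 + b2 + b3 = 0 doubles as a constructive membership test: when it holds, one solution of Ax = b can be read directly off the echelon form by setting x3 = x4 = 0. A hedged sketch of that check (the function names are ours, not the text's):

```python
A = [[1, 1, 3, 1],
     [2, 1, 5, 4],
     [1, 2, 4, -1]]

def matvec(M, x):
    return [sum(row[i] * x[i] for i in range(len(x))) for row in M]

def in_range(b):
    # b is in R(A) exactly when -3*b1 + b2 + b3 == 0.
    b1, b2, b3 = b
    return -3 * b1 + b2 + b3 == 0

def particular_solution(b):
    # With x3 = x4 = 0 the echelon form gives x1 = b2 - b1, x2 = 2*b1 - b2.
    b1, b2, b3 = b
    return [b2 - b1, 2 * b1 - b2, 0, 0]
```

Producing an explicit solution, rather than just testing the condition, mirrors the definition of R(A) as the set of attainable right-hand sides.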
The subspace R(A) is also called the column space of A; if A has columns A1, A2, ..., An, then
R(A) = Sp{A1, A2, ..., An}.
In a similar fashion, the rows of A can be regarded as vectors a1, a2, ..., am in R^n, and the row space of A is defined to be
Sp{a1, a2, ..., am}.
For example, if
A = [1 2 3; 1 0 1],
then the row space of A is Sp{a1, a2}, where
a1 = [1 2 3] and a2 = [1 0 1].
The following theorem shows that row-equivalent m8lrices have the same row space.
TUEOI\(\\ b Let A be an (m x n) matrix. and suppose thai A is row equivalent to the (m x 11)
matrix B. Then A and B have the same row space. -
The proof of Theorem 6 is given at the end of this section. To illustrate Theorem 6,
let A be the (3 x 3) matrix
1 -1
A= [ ~ -: :]
184 Chapter 3 The Vector Space R-
B~
1 0
0
J]
1 2
[
o 0 0
By Theorem 6, matrices A and B have the same row space. Clearly the zero row of B contributes nothing as an element of the spanning set, so the row space of B is Sp{b1, b2}, where
b1 = [1 0 3] and b2 = [0 1 2].
If the rows of A are denoted by a1, a2, and a3, then
Sp{a1, a2, a3} = Sp{b1, b2}.
More generally, given a subset S = {v1, ..., vm} of R^n, Theorem 6 allows us to obtain a "nicer" subset T = {w1, ..., wk} of R^n such that Sp(S) = Sp(T). The next example illustrates this.
EXAMPLE 5  Let S = {v1, v2, v3, v4} be a subset of R^3, where
v1 = [1; 2; 1], v2 = [3; 5; 6], v3 = [1; 4; -5], and v4 = [2; 5; -1].
Show that there exists a set T = {w1, w2} consisting of two vectors in R^3 such that Sp(S) = Sp(T).
Solution  Let A be the (3 x 4) matrix
A = [v1, v2, v3, v4];
that is,
A = [1 3 1 2; 2 5 4 5; 1 6 -5 -1].
Then
A^T = [1 2 1; 3 5 6; 1 4 -5; 2 5 -1],
and the row vectors of A^T are precisely the vectors v1^T, v2^T, v3^T, and v4^T. It is straightforward to see that A^T reduces to the matrix
B^T = [1 0 7; 0 1 -3; 0 0 0; 0 0 0].
So, by Theorem 6, A^T and B^T have the same row space. Thus A and B have the same column space, where
B = [1 0 0 0; 0 1 0 0; 7 -3 0 0].
In particular, Sp(S) = Sp(T), where T = {w1, w2} and
w1 = [1; 0; 7] and w2 = [0; 1; -3].
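With T in hand, checking that Sp(S) is contained in Sp(T) is elementary: because w1 and w2 each have a leading pattern of zeros, a vector v can equal a*w1 + b*w2 only with a = v1 and b = v2, so membership in Sp(T) reduces to one arithmetic test on the third component. A sketch, using the vectors of this example as reconstructed here:

```python
w1 = [1, 0, 7]
w2 = [0, 1, -3]

def in_span_T(v):
    # v = a*w1 + b*w2 forces a = v[0] and b = v[1]; check the remaining entry.
    a, b = v[0], v[1]
    return [a * p + b * q for p, q in zip(w1, w2)] == list(v)

# The spanning set S of the example.
S = [[1, 2, 1], [3, 5, 6], [1, 4, -5], [2, 5, -1]]
```

The reverse containment holds because w1 and w2 are themselves combinations of the rows of the reduced matrix, hence of the vectors in S.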
Proof of Theorem 6 (Optional)
Assume that A and B are row-equivalent (m x n) matrices. Then there is a sequence of matrices
A = A1, A2, ..., A_{k-1}, Ak = B
such that each matrix in the sequence is obtained from the preceding one by a single elementary row operation. It therefore suffices to show that a single elementary row operation does not change the row space. Write A in terms of its rows,
A = [a1; a2; ...; am].
Clearly the order of the rows is immaterial; that is, if B is obtained by interchanging the jth and kth rows of A,
B = [a1; ...; ak; ...; aj; ...; am],
then
Sp{a1, ..., aj, ..., ak, ..., am} = Sp{a1, ..., ak, ..., aj, ..., am}.
Next, suppose that B is obtained by performing the row operation Rk + cRj on A; thus
B = [a1; ...; aj; ...; ak + caj; ...; am].
If the vector x is in the row space of A, then there exist scalars b1, ..., bm such that
x = b1a1 + ... + bjaj + ... + bkak + ... + bmam.  (10)
The vector equation (10) can be rewritten as
x = b1a1 + ... + (bj - cbk)aj + ... + bk(ak + caj) + ... + bmam,  (11)
and hence x is in the row space of B. Conversely, if the vector y is in the row space of B, then there exist scalars d1, ..., dm such that
y = d1a1 + ... + djaj + ... + dk(ak + caj) + ... + dmam.  (12)
But Eq. (12) can be rearranged as
y = d1a1 + ... + (dj + cdk)aj + ... + dkak + ... + dmam,
so y is in the row space of A. Therefore A and B have the same row space. The remaining row operation, multiplying a row of A by a nonzero scalar, is treated in Exercise 54.
EXERCISES

Exercises 1-11 refer to the vectors a, b, c, d, and e in R^2 displayed in Eq. (14). In Exercises 1-11, either show that Sp(S) = R^2 or give an algebraic specification for Sp(S). If Sp(S) ≠ R^2, then give a geometric description of Sp(S).

1. S = {a}    2. S = {b}    3. S = {c}
4. S = {a, b}    5. S = {a, d}    6. S = {a, c}
7. S = {b, e}    8. S = {a, b, d}    9. S = {b, c, d}
10. S = {a, b, e}    11. S = {a, c, e}
Exercises 12-25 refer to the vectors v, w, x, y, and z in R^3 displayed in Eq. (15).

In Exercises 12-19, either show that Sp(S) = R^3 or give an algebraic specification for Sp(S). If Sp(S) ≠ R^3, then give a geometric description of Sp(S).

12. S = {v}    13. S = {w}
14. S = {v, w}    15. S = {v, x}
16. S = {v, w, x}    17. S = {w, x, z}
18. S = {v, w, z}    19. S = {w, x, y}

20. Let S be the set given in Exercise 14. For each vector that follows, determine whether the vector is in Sp(S). Express those vectors that are in Sp(S) as a linear combination of v and w.

23. Determine which of the vectors listed in Eq. (14) is in the null space of the given matrix A.

24. Determine which of the vectors listed in Eq. (15) is in the null space of the matrix
A = [-2 1 1].

25. Determine which of the vectors listed in Eq. (15) is in the null space of the matrix
A = [1 -1 0; 2 -1 1; 3 -5 -2].

In Exercises 26-37, give an algebraic specification for the null space and the range of the given matrix A.

26. A = [1 -2; -3 6]    27. A = [-1 3; 2 -6]

38. Let A be the matrix given in Exercise 26.
a) For each vector b that follows, determine whether b is in R(A).
39. Repeat Exercise 38 for the matrix A given in Exercise 27.

40. Let A be the matrix given in Exercise 34.
a) For each vector b that follows, determine whether b is in R(A).
b) If b is in R(A), then exhibit a vector x in R^3 such that Ax = b.
c) If b is in R(A), then write b as a linear combination of the columns of A.

41. Repeat Exercise 40 for the matrix A given in Exercise 35.

42. Let
W = {y: y = [2x1 - 3x2 + x3; -x1 + 4x2 - 2x3; 2x1 + x2 + 4x3], x1, x2, x3 real}.
Exhibit a (3 x 3) matrix A such that W = R(A). Conclude that W is a subspace of R^3.

43. Let
W = {x: x = [x1; x2; x3], 3x1 - 4x2 + 2x3 = 0}.
Exhibit a (1 x 3) matrix A such that W = N(A). Conclude that W is a subspace of R^3.

44. Let S be the set of vectors given in Exercise 16. Exhibit a matrix A such that Sp(S) = R(A).

45. Let S be the set of vectors given in Exercise 17. Exhibit a matrix A such that Sp(S) = R(A).

In Exercises 46-49, use the technique illustrated in Example 5 to find a set T = {w1, w2} consisting of two vectors such that Sp(S) = Sp(T).

50. Identify the range and the null space for each of the following.
a) The (n x n) identity matrix
b) The (n x n) zero matrix
c) Any (n x n) nonsingular matrix A

51. Let A and B be (n x n) matrices. Verify that N(A) ∩ N(B) ⊆ N(A + B).

52. Let A be an (m x r) matrix and B an (r x n) matrix.
a) Show that N(B) ⊆ N(AB).
b) Show that R(AB) ⊆ R(A).

53. Let W be a subspace of R^n, and let A be an (m x n) matrix. Let V be the subset of R^m defined by
V = {y: y = Ax for some x in W}.
Prove that V is a subspace of R^m.

54. Let A be an (m x n) matrix, and let B be obtained by multiplying the kth row of A by the nonzero scalar c. Prove that A and B have the same row space.
BASES FOR SUBSPACES

The first part of this section is devoted to developing the definition of a basis, and in the latter part of the section, we present techniques for obtaining bases for the subspaces introduced in Section 3.3. We will consider the concept of dimension in Section 3.5.
An example from R^2 will serve to illustrate the transition from geometry to algebra. We have already seen that each vector v in R^2,
v = [a; b],  (1)
can be interpreted geometrically as the point with coordinates a and b. Recall that in R^2 the vectors e1 and e2 are defined by
e1 = [1; 0] and e2 = [0; 1].
Clearly the vector v in (1) can be expressed as a linear combination of e1 and e2:
v = ae1 + be2.  (2)
As we will see later, the set {e1, e2} is an example of a basis for R^2 (indeed, it is called the natural basis for R^2). In Eq. (2), the vector v is determined by the coefficients a and b (see Fig. 3.12). Thus the geometric concept of characterizing a point by its coordinates can be interpreted algebraically as determining a vector by its coefficients when the vector is expressed as a linear combination of "basis" vectors. (In fact, the coefficients obtained are often referred to as the coordinates of the vector. This idea will be developed further in Chapter 5.) We turn now to the task of making these ideas precise in the context of an arbitrary subspace W of R^n.

[Figure 3.12: v = ae1 + be2, the point with coordinates (a, b).]

Spanning Sets
Let W be a subspace of R^n, and let S be a subset of W. The discussion above suggests that the first requirement for S to be a basis for W is that each vector in W be expressible as a linear combination of the vectors in S. This leads to the following definition.
DEFINITION 3  Let W be a subspace of R^n, and let S = {w1, ..., wm} be a subset of W. We say that S is a spanning set for W, or simply that S spans W, if every vector w in W can be expressed as a linear combination of vectors in S:
w = a1w1 + ... + amwm.
EXAMPLE 1  Let S = {u1, u2, u3} be the subset of R^3 defined by
u1 = [1; -1; 0], u2 = [-2; 3; 1], and u3 = [1; 2; 4].
Show that S is a spanning set for R^3.
Solution  Let v be an arbitrary vector in R^3,
v = [a; b; c].  (3)
We must show that the vector equation
x1u1 + x2u2 + x3u3 = v,  (4)
where v is the vector in (3), always has a solution. The vector equation (4) is equivalent to the (3 x 3) linear system with the matrix equation
Ax = v,  (5)
where A is the (3 x 3) matrix A = [u1, u2, u3]. The augmented matrix for Eq. (5) is
[A | v] = [1 -2 1 a; -1 3 2 b; 0 1 4 c],
and this matrix is row equivalent to
[1 0 0 10a + 9b - 7c; 0 1 0 4a + 4b - 3c; 0 0 1 -a - b + c].
Consequently,
x1 = 10a + 9b - 7c
x2 = 4a + 4b - 3c
x3 = -a - b + c
is the solution of Eq. (4). In particular, Eq. (4) always has a solution, so S is a spanning set for R^3.
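The closed-form solution of Eq. (4) can be verified by substituting it back into Ax = v; since (a, b, c) is arbitrary, checking several right-hand sides exercises the whole formula. A minimal sketch of ours, assuming the matrix A as reconstructed above:

```python
A = [[1, -2, 1],
     [-1, 3, 2],
     [0, 1, 4]]

def matvec(M, x):
    return [sum(row[i] * x[i] for i in range(len(x))) for row in M]

def solve(v):
    # The solution of Ax = v read off the reduced echelon form.
    a, b, c = v
    return [10*a + 9*b - 7*c, 4*a + 4*b - 3*c, -a - b + c]
```

Checking the three standard basis vectors in particular confirms the formula for every v, by linearity.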
EXAMPLE 2  Let S = {v1, v2, v3} be the subset of R^3 defined by
v1 = [1; 2; -3], v2 = [-1; 0; 7], and v3 = [2; 7; 0].
Does S span R^3?
Solution  Let v be the vector given in Eq. (3). As before, the vector equation
x1v1 + x2v2 + x3v3 = v  (6)
is equivalent to the matrix equation
Ax = v,  (7)
where A = [v1, v2, v3]. The augmented matrix for Eq. (7) is
[A | v] = [1 -1 2 a; 2 0 7 b; -3 7 0 c],
and the matrix [A | v] is row equivalent to
[1 0 7/2 b/2; 0 1 3/2 -a + (1/2)b; 0 0 0 -7a + 2b - c].
It follows that Eq. (6) has a solution if and only if -7a + 2b - c = 0. In particular, S does not span R^3. Indeed, any vector
w = [a; b; c]
with -7a + 2b - c ≠ 0 is in R^3 but is not in Sp(S); that is, such a w cannot be expressed as a linear combination of v1, v2, and v3.
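Failure to span is captured here by a single linear condition: every combination of v1, v2, and v3 satisfies -7a + 2b - c = 0, so nothing outside that plane is reachable. A sketch, using the vectors of this example as reconstructed here (helper names are ours):

```python
v1, v2, v3 = [1, 2, -3], [-1, 0, 7], [2, 7, 0]

def on_plane(y):
    # Sp(S) lies inside the plane -7*y1 + 2*y2 - y3 = 0.
    y1, y2, y3 = y
    return -7 * y1 + 2 * y2 - y3 == 0

def combo(x1, x2, x3):
    # Form x1*v1 + x2*v2 + x3*v3.
    return [x1*a + x2*b + x3*c for a, b, c in zip(v1, v2, v3)]
```

Because the plane condition is linear, verifying it on v1, v2, and v3 individually already proves it for every combination.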
The next example illustrates a procedure for constructing a spanning set for the null space, N(A), of a matrix A.

EXAMPLE 3  Let A be the (3 x 4) matrix
A = [1 1 3 1; 2 1 5 4; 1 2 4 -1].
Exhibit a spanning set for N(A).
Solution  It was shown in Section 3.3 that
N(A) = {x: x = [-2x3 - 3x4; -x3 + 2x4; x3; x4], x3 and x4 any real numbers}.
Thus a vector x in N(A) is totally determined by the unconstrained parameters x3 and x4. Separating those parameters gives a decomposition of x:
x = [-2x3 - 3x4; -x3 + 2x4; x3; x4] = [-2x3; -x3; x3; 0] + [-3x4; 2x4; 0; x4] = x3[-2; -1; 1; 0] + x4[-3; 2; 0; 1].  (8)
Let u1 and u2 be the vectors
u1 = [-2; -1; 1; 0] and u2 = [-3; 2; 0; 1].
By Eq. (8), a vector x is in N(A) if and only if x is a linear combination of u1 and u2; that is, {u1, u2} is a spanning set for N(A).
The remaining subspaces introduced in Section 3.3 were either defined or characterized by a spanning set. If S = {v1, ..., vr} is a subset of R^n, for instance, then obviously S is a spanning set for Sp(S). If A is an (m x n) matrix with columns A1, ..., An,
then, as we saw in Section 3.3, {A1, ..., An} is a spanning set for R(A), the range of A. Finally, if
A = [a1; a2; ...; am],
where ai is the ith-row vector of A, then, by definition, {a1, ..., am} is a spanning set for the row space of A.
To illustrate, let u be the vector in R^2 given by
u = [1; 1].
The set S = {e1, e2, u} is a spanning set for R^2. Indeed, for an arbitrary vector v in R^2,
v = [a; b],
we have v = (a - c)e1 + (b - c)e2 + cu, where c is any real number whatsoever. But the subset {e1, e2} already spans R^2, so the vector u is unnecessary.
Recall that a set {v1, ..., vm} of vectors in R^n is linearly independent if the vector equation
x1v1 + ... + xmvm = θ  (9)
has only the trivial solution x1 = ... = xm = 0; if Eq. (9) has a nontrivial solution, then the set is linearly dependent. The set S = {e1, e2, u} is linearly dependent because
e1 + e2 - u = θ.
Our next example illustrates that a linearly dependent set is not an efficient spanning set; that is, fewer vectors will span the same space.
EXAMPLE 4  Let S = {v1, v2, v3} be the subset of R^3 defined by
v1 = [1; 1; 1], v2 = [2; 3; 1], and v3 = [3; 5; 1].
Show that S is a linearly dependent set, and exhibit a subset T of S such that T contains only two vectors but Sp(T) = Sp(S).
Solution  The vector equation
x1v1 + x2v2 + x3v3 = θ  (10)
is equivalent to the homogeneous system with augmented matrix
A = [1 2 3 0; 1 3 5 0; 1 1 1 0].
Matrix A is row equivalent to
B = [1 0 -1 0; 0 1 2 0; 0 0 0 0]
in echelon form. Solving the system with augmented matrix B gives
x1 = x3
x2 = -2x3.
Because Eq. (10) has nontrivial solutions, the set S is linearly dependent. Taking x3 = 1, for example, gives x1 = 1, x2 = -2. Therefore,
v1 - 2v2 + v3 = θ.  (11)
Equation (11) allows us to express v3 as a linear combination of v1 and v2, namely v3 = 2v2 - v1. It follows that Sp(S) = Sp(T), where T = {v1, v2}.
On the other hand, if B is a linearly independent spanning set for W and if a vector is removed from B, this smaller set cannot be a spanning set for W (in particular, the vector removed from B is in W but cannot be expressed as a linear combination of the vectors retained). In this sense a linearly independent spanning set is a minimal spanning set and hence represents the most efficient way of characterizing the subspace. This idea leads to the following definition.

DEFINITION 4  Let W be a subspace of R^n. A basis for W is a linearly independent spanning set for W.

Note that the zero subspace of R^n, W = {θ}, contains only the vector θ. Although it is the case that {θ} is a spanning set for W, the set {θ} is linearly dependent. Thus the concept of a basis is not meaningful for W = {θ}.
Let B = {v1, v2, ..., vp} be a basis for a subspace W of R^n, and let x be a vector in W. Because B is a spanning set, x can be written as
x = a1v1 + a2v2 + ... + apvp.  (12)
Because B is also a linearly independent set, we can show that the representation of x in Eq. (12) is unique. That is, if we have any representation of the form x = b1v1 + b2v2 + ... + bpvp, then a1 = b1, a2 = b2, ..., ap = bp. To establish this uniqueness, suppose that b1, b2, ..., bp are any scalars such that
x = b1v1 + b2v2 + ... + bpvp.
Subtracting the preceding equation from Eq. (12), we obtain
θ = (a1 - b1)v1 + (a2 - b2)v2 + ... + (ap - bp)vp.
Then, using the fact that {v1, v2, ..., vp} is linearly independent, we see that a1 - b1 = 0, a2 - b2 = 0, ..., ap - bp = 0. This discussion of uniqueness leads to the following remark.

Remark  Let B = {v1, v2, ..., vp} be a basis for W, where W is a subspace of R^n. If x is in W, then x can be represented uniquely in terms of the basis B. That is, there are unique scalars a1, a2, ..., ap such that
x = a1v1 + a2v2 + ... + apvp.
As we see later, these scalars are called the coordinates of x with respect to the basis.
Examples of Bases
It is easy to show that the unit vectors
e1 = [1; 0; 0], e2 = [0; 1; 0], and e3 = [0; 0; 1]
constitute a basis for R^3. In general, the n-dimensional vectors e1, e2, ..., en form a basis for R^n, frequently called the natural basis.
In Exercise 30, the reader is asked to use Theorem 13 of Section 1.7 to prove that any linearly independent subset B = {v1, v2, v3} of R^3 is actually a basis for R^3. Thus, for example, any set of three linearly independent vectors in R^3 is a basis for R^3.
EXAMPLE 5  Let A be the (3 x 4) matrix given in Example 4 of Section 3.3. Use the algebraic specification of R(A) derived in that example to obtain a basis for R(A).
Solution  In Example 4 of Section 3.3, the range of A was determined to be
R(A) = {b: b = [b1; b2; 3b1 - b2], b1 and b2 any real numbers}.
Thus b1 and b2 are unconstrained variables, and a vector b in R(A) can be decomposed as
b = [b1; b2; 3b1 - b2] = [b1; 0; 3b1] + [0; b2; -b2] = b1[1; 0; 3] + b2[0; 1; -1].  (13)
If u1 and u2 are defined by
u1 = [1; 0; 3] and u2 = [0; 1; -1],
then u1 and u2 are in R(A). One can easily check that {u1, u2} is a linearly independent set, and it is evident from Eq. (13) that R(A) is spanned by u1 and u2. Therefore, {u1, u2} is a basis for R(A).
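Because u1 and u2 carry the unconstrained variables b1 and b2 in their first two components, the coordinates of any b in R(A) with respect to the basis {u1, u2} can be read off directly, which also illustrates the uniqueness remark above. A small illustrative sketch (the function names are ours):

```python
u1 = [1, 0, 3]
u2 = [0, 1, -1]

def coordinates(b):
    # For b = [b1; b2; 3*b1 - b2] in R(A), the coordinates are (b1, b2).
    return b[0], b[1]

def from_coordinates(c1, c2):
    # Reassemble c1*u1 + c2*u2.
    return [c1 * p + c2 * q for p, q in zip(u1, u2)]
```

Round-tripping a vector through `coordinates` and `from_coordinates` recovers it exactly, as the uniqueness of representation guarantees.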
The previous example illustrates how to obtain a basis for a subspace W, given an algebraic specification for W. The last two examples of this section illustrate two different techniques for constructing a basis for W from a spanning set.
EXAMPLE 6  Let W be the subspace of R^4 spanned by the set S = {v1, v2, v3, v4, v5}, where
v1 = [1; 1; 2; -1], v2 = [1; 2; 1; 1], v3 = [1; 4; -1; 5], v4 = [1; 0; 4; -1], and v5 = [2; 5; 0; 2].
Find a subset of S that is a basis for W.
Solution  The procedure is to solve the dependence relation
x1v1 + x2v2 + x3v3 + x4v4 + x5v5 = θ  (14)
and then determine which of the vj's can be eliminated. If V is the (4 x 5) matrix V = [v1, v2, v3, v4, v5], then Eq. (14) is equivalent to the homogeneous system Vx = θ, and V is row equivalent to the matrix
[1 0 -2 0 1; 0 1 3 0 2; 0 0 0 1 -1; 0 0 0 0 0]  (15)
in reduced echelon form. The solution of Eq. (14) is therefore
x1 = 2x3 - x5
x2 = -3x3 - 2x5  (16)
x4 = x5,
where x3 and x5 are unconstrained variables. In particular, the set S is linearly dependent. Moreover, taking x3 = 1 and x5 = 0 yields x1 = 2, x2 = -3, and x4 = 0. Thus Eq. (14) becomes
2v1 - 3v2 + v3 = θ,  (17)
and hence v3 = -2v1 + 3v2. Similarly, taking x3 = 0 and x5 = 1 yields x1 = -1, x2 = -2, and x4 = 1, so
v5 = v1 + 2v2 - v4.
Since both v3 and v5 are in Sp{v1, v2, v4}, it follows (as in Example 4) that v1, v2, and v4 span W.
To see that the set {v1, v2, v4} is linearly independent, note that the dependence relation
x1v1 + x2v2 + x4v4 = θ  (18)
is just Eq. (14) with v3 and v5 removed. Thus the augmented matrix [v1, v2, v4 | θ] for Eq. (18) reduces to
[1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0],  (19)
which is matrix (15) with the third and fifth columns removed. From matrix (19), it is clear that Eq. (18) has only the trivial solution; so {v1, v2, v4} is a linearly independent set and therefore a basis for W.
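The two dependence relations extracted from (16) can be confirmed by direct arithmetic; a sketch, assuming the vectors v1, ..., v5 as reconstructed in this example:

```python
v1 = [1, 1, 2, -1]
v2 = [1, 2, 1, 1]
v3 = [1, 4, -1, 5]
v4 = [1, 0, 4, -1]
v5 = [2, 5, 0, 2]

def lincomb(coeffs, vectors):
    # Form c1*w1 + ... + ck*wk for vectors in R^4.
    return [sum(c * w[i] for c, w in zip(coeffs, vectors)) for i in range(4)]
```

The first assertion below is relation (17); the second is v5 = v1 + 2*v2 - v4 rewritten as a dependence on θ.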
The procedure demonstrated in the preceding example can be outlined as follows:
1. A spanning set S = {v1, ..., vm} for a subspace W of R^n is given.
2. Solve the dependence relation x1v1 + ... + xmvm = θ. If it has only the trivial solution, then S is linearly independent and hence is itself a basis for W.
3. If the relation has nontrivial solutions, use them to express some of the vectors in S in terms of the others; remove those vectors and verify (as in Example 6) that the remaining vectors form a basis for W.

THEOREM 7  If the nonzero matrix A is row equivalent to the matrix B in echelon form, then the nonzero rows of B form a basis for the row space of A.

Proof  By Theorem 6, A and B have the same row space. It follows that the nonzero rows of B span the row space of A. Since the nonzero rows of an echelon matrix are linearly independent vectors, it follows that the nonzero rows of B form a basis for the row space of A.
EXAMPLE 7  Let W be the subspace of R^4 given in Example 6. Use Theorem 7 to construct a basis for W.
Solution  W can be viewed as the row space of the matrix V^T, where
V^T = [1 1 2 -1; 1 2 1 1; 1 4 -1 5; 1 0 4 -1; 2 5 0 2].
The matrix V^T is row equivalent to
B^T = [1 0 0 -9; 0 1 0 4; 0 0 1 2; 0 0 0 0; 0 0 0 0]
in echelon form, so it follows from Theorem 7 that the nonzero rows of B^T form a basis for the row space of V^T. Consequently the nonzero columns of
B = [1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; -9 4 2 0 0]
are a basis for W. Specifically, the set {u1, u2, u3} is a basis of W, where
u1 = [1; 0; 0; -9], u2 = [0; 1; 0; 4], and u3 = [0; 0; 1; 2].

The procedure used in the preceding example can be summarized as follows:
1. A spanning set S = {v1, ..., vm} for a subspace W of R^n is given.
2. Let V be the (n x m) matrix V = [v1, ..., vm]. Use elementary row operations to transform V^T to a matrix B^T in echelon form.
3. The nonzero columns of B are a basis for W.
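The three-step procedure is easy to automate with exact arithmetic: row-reduce V^T and keep the nonzero rows. The sketch below is our own helper, built on Python's `fractions` module; applied to the vectors of Example 6 as reconstructed here, it reproduces the basis {u1, u2, u3}.

```python
from fractions import Fraction

def rref(rows):
    # Reduced row echelon form with exact rational arithmetic.
    M = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(M[0])):
        row = next((r for r in range(pivot, len(M)) if M[r][col] != 0), None)
        if row is None:
            continue                       # no pivot in this column
        M[pivot], M[row] = M[row], M[pivot]
        M[pivot] = [x / M[pivot][col] for x in M[pivot]]
        for r in range(len(M)):
            if r != pivot and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[pivot])]
        pivot += 1
    return M

# Rows of V^T are the spanning vectors v1, ..., v5 of Example 6.
V_T = [[1, 1, 2, -1],
       [1, 2, 1, 1],
       [1, 4, -1, 5],
       [1, 0, 4, -1],
       [2, 5, 0, 2]]

basis = [row for row in rref(V_T) if any(row)]
```

Exact `Fraction` arithmetic matters here: floating-point row reduction can turn an exact zero row into tiny nonzero noise, which would corrupt the basis.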
EXERCISES
In Exercises 1-8. let W be the subspace of R4 consisting
of vectors of the fonn
1I.A~[: 1 0
2
Xl-X' =0 3 -I ]
5 8 -2
8,XI- X 2 - X 3 - X 4=0
X2+X) =0 2
9. Let IV be the subspace described in Exercise I. For
each veclor Xthai follows, determine if x is in \V. If
x is in \V, then express x as a linear combination of
[I 1
12, A = I I
:]
i]
the basis vectors found in Exercise I. 2 3
~ i] U
2
0) , [ b) ,= [ - 1 0] 5 3 -I
13. A= o 2 2
1-I 1
il
16..-1.= 22 2I 1
[
2 3 0
-. Cse the technique illustrated in Example 7 10 obtain " =[ '0 =[ =:J "d
J
,
a basis for the range of A. where A is the matrix
given in Exercise 11
Repeat Exercise 17 for the matrix given in Exer-
cise 12.
19. Repeat Exercise 17 for the matrix given in Exer-
'0 = [ -J
Show thaI S is a linearly dependent sel. and verify
cise 13.
thatSp{\'J. \'2· \'31 = Sp{vl,v2l,
~. Repeat Exercise 17 for the matrix gi\'en in Exer-
28, Let S = {v]. \'2, v31. where
cise 14.
~_SUCh that
Exercises 21-24 for the given set 5:
.' Find a subset of 5 that is a basis for Sp(S) using Ihe
" = [ l ~ '0 = [ ~ l "d
.,=[ -: l
technique illustrated in Example 6.
e
b' Find a basis for Sp(S) using the technique
eOfAth31 illustrated in Example 7.
Ol.h![: H~]l
• A .of Find every subset of S thaI is a basis for R2.
A. asa
29. Let 5 = (VI. \'2, V3, ".. 1, .... here
'J [ ~ ~] I I 0]
b) [ I I 0
esc Theorem 13 of Section 1.7 to show thaI B is a
linearly independent set.)
In Exercises 32-35, detennine whether the given set S
is a basis for R3.
~]
![J [: H=n!
<J [
26. Find a basis for the range of each matrix in Exer- 32. h
cise 25.
36. Find a vector w in R^3 such that w is not a linear combination of v1 and v2.

38. Show that any spanning set for R^n must contain at least n vectors. Proceed by showing that if u1, u2, ..., up are vectors in R^n, and if p < n, then there is a nonzero vector v in R^n such that v^T u_i = 0, 1 <= i <= p. [Hint: Write the constraints as a (p x n) system and use Theorem 4 of Section
3.5 DIMENSION
In this section we translate the geometric concept of dimension into algebraic terms.
Clearly R^2 and R^3 have dimension 2 and 3, respectively, since these vector spaces are
simply algebraic interpretations of two-space and three-space. It would be natural to
extrapolate from these two cases and declare that R^n has dimension n for each positive
integer n; indeed, we have earlier referred to elements of R^n as n-dimensional vectors.

But if W is a subspace of R^n, how is the dimension of W to be determined? An
examination of the subspace W of R^3 defined by

W = {x: x = [x1, x1 - 2x3, x3]^T, x1 and x3 any real numbers}

suggests a natural answer: vectors in W have exactly two unconstrained components, so W
ought to have dimension 2. Note also that a subspace W may have many different bases. In fact,
Exercise 30 of Section 3.4 shows that any set of three linearly independent vectors in R^3 is a
basis for R^3. Therefore, for the concept of dimension to make sense, we must show that all
bases for a given subspace W contain the same number of vectors. This fact will be an easy
consequence of the following theorem.
THEOREM 8   Let W be a subspace of R^n, and let B = {w1, w2, ..., wp} be a spanning set for W containing p vectors. Then any set of p + 1 or more vectors in W is linearly dependent.

Proof   Let {s1, s2, ..., sm} be any set of m vectors in W, where m > p. To show that this set is linearly dependent, we first express each s_j in terms of the spanning set B:

s1 = a11 w1 + a21 w2 + ... + ap1 wp
s2 = a12 w1 + a22 w2 + ... + ap2 wp   (1)
As an immediate corollary of Theorem 8, we can show that all bases for a subspace
contain the same number of vectors.

COROLLARY   Let W be a subspace of R^n, and let B = {w1, w2, ..., wp} be a basis for W containing p vectors. Then every basis for W contains p vectors.

Proof   Let Q = {u1, u2, ..., ur} be any basis for W. Since Q is a spanning set for W, by
Theorem 8 any set of r + 1 or more vectors in W is linearly dependent. Since B is a
linearly independent set of p vectors in W, we know that p <= r. Similarly, since B is
a spanning set of p vectors for W, any set of p + 1 or more vectors in W is linearly
dependent. By assumption, Q is a set of r linearly independent vectors in W; so r <= p.
Now, since we have p <= r and r <= p, it must be that r = p.
DEFINITION 5   Let W be a subspace of R^n. If W has a basis B = {w1, w2, ..., wp} of p vectors, then we say that W is a subspace of dimension p, and we write dim(W) = p.

In Exercise 30, the reader is asked to show that every nonzero subspace of R^n does
have a basis. Thus a value for dimension can be assigned to any subspace of R^n, where
for completeness we define dim(W) = 0 if W is the zero subspace.

Since R^3 has a basis {e1, e2, e3} containing three vectors, we see that dim(R^3) = 3.
In general, R^n has a basis {e1, e2, ..., en} that contains n vectors; so dim(R^n) = n.

EXAMPLE 1   Let W be the subspace of R^3 defined by

W = {x: x = [x1, x2, x3]^T, x1 = -2x3, x2 = x3, x3 arbitrary}.

Exhibit a basis for W and determine dim(W).

Solution   A vector x in W can be written in the form

x = [-2x3, x3, x3]^T = x3 [-2, 1, 1]^T.

Therefore, the set {u} is a basis for W, where

u = [-2, 1, 1]^T.
It follows that dim(W) = 1. Geometrically, W is the line through the origin and through
the point with coordinates (-2, 1, 1), so again the definition of dimension coincides with
our geometric intuition.

The next example illustrates the importance of the corollary to Theorem 8.
EXAMPLE 2   Use the techniques illustrated in Examples 5, 6, and 7 of Section 3.4 to find three different bases for W. Give the dimension of W.

Solution
(a) The technique used in Example 5 consisted of finding a basis for W by using the algebraic specification for W; in particular, a vector b in R^3 is in W precisely when its components satisfy that specification. From this description it follows that W has a basis {v1, v2}.

(b) Following the technique of Example 6, the specification gives

x2 = -x3 - (3/2)x4,

where x3 and x4 are arbitrary. Therefore, the vectors u3 and u4 can be deleted from the spanning set for W, leaving {u1, u2} as a basis for W.

(c) Let U be the (3 x 4) matrix whose columns span W, U = [u1, u2, u3, u4]. Following the technique of Example 7, reduce U^T to the matrix

C^T =
[ 1  0  4 ]
[ 0  1 -2 ]
[ 0  0  0 ]
[ 0  0  0 ],

so that

C =
[ 1  0  0  0 ]
[ 0  1  0  0 ]
[ 4 -2  0  0 ].

The nonzero columns of C form a basis for W; that is, {w1, w2} is a basis for W, where

w1 = [1, 0, 4]^T and w2 = [0, 1, -2]^T.

In each case the basis obtained for W contains two vectors, so dim(W) = 2. Indeed,
viewed geometrically, W is the plane with equation -4x + 2y + z = 0.
It is clear from Eq. (7) that any vector in W can be expressed as a linear combination of
u1, u2, ..., up, so the given linearly independent set also spans W. Therefore, the set is
a basis. The proof of property 4 is left as an exercise.

EXAMPLE 3   Let W be the subspace of R^3 given in Example 2, and let {v1, v2, v3} be a subset of W. Determine which of the subsets {v1}, {v2}, {v1, v2}, {v1, v3}, {v2, v3}, and {v1, v2, v3} is a basis for W.

Solution   In Example 2, the subspace W was described as the plane with equation -4x + 2y + z = 0.
-Ex"" \\PLI:: I Find the rank. nullity, and dimension of the row space for the matrix A, where
A ~ -~
[ o
1
1 2]
2 -3 .
4 8 5
Solution To find the dimension of the row space of A, observe that A is row equivalent to the
matrix
B= [i o
o
I
-2
3
0 n
and B is in echelon form. Since the nonzero rows of B fonn a basis for the row space
of A, the row space of A has dimension 3.
To find the nullity of A, we must detennine the dimension of the null space. Since
the homogeneous system Ax = 6 is equivalent to Bx = 6, the null space of A can be
detennined by solving Bx = 6. This gives
Xl = 2x)
X2 = -3X3
X~ = O.
Thus N(A) can be described by
-3x)2Xl]
N(A) ~ Ixo> = OXl' Xl ,"y ",I ,umbe'l·
[
It now follows that the nullity of A is 1 because the vector
v ~ -r][
inlent 10 the
C~[~
fonn a basis for neAl. Thus the rank of A is 3.
0 0
I
0
0
I n
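The counting in an example like this is easy to check by machine. The sketch below is ours, not the text's; it counts the pivots produced by forward elimination, with the entries of A taken from the example as reconstructed here.

```python
def rank(matrix, tol=1e-12):
    """Count pivots produced by forward elimination (= nonzero rows of B)."""
    rows = [list(map(float, r)) for r in matrix]
    pivot_row = 0
    for col in range(len(rows[0])):
        pivot = next((r for r in range(pivot_row, len(rows))
                      if abs(rows[r][col]) > tol), None)
        if pivot is None:
            continue
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        for r in range(pivot_row + 1, len(rows)):
            m = rows[r][col] / rows[pivot_row][col]
            rows[r] = [a - m * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return pivot_row

A = [[1, 1, 1, 2], [-1, 0, 2, -3], [2, 4, 8, 5]]
n = len(A[0])
print(rank(A), n - rank(A))   # 3 1  (rank and nullity)
```

The nullity falls out for free as n minus the number of pivots, which is exactly the informal argument made below.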
Note in the previous example that the row space of A is a subspace of R^4, whereas
the column space (or range) of A is a subspace of R^3. Thus they are entirely different
subspaces; even so, the dimensions are the same, and the next theorem states that this is
always the case.
THEOREM 10   If A is an (m x n) matrix, then the rank of A is equal to the rank of A^T.

The proof of Theorem 10 will be given at the end of this section. Note that the range
of A^T is equal to the column space of A^T. But the column space of A^T is precisely the
row space of A, so the following corollary is actually a restatement of Theorem 10.

COROLLARY   If A is an (m x n) matrix, then the row space and the column space of A have the same dimension.

This corollary provides a useful way to determine the rank of a matrix A. Specifically,
if A is row equivalent to a matrix B in echelon form, then the number, r, of nonzero
rows in B equals the rank of A.
The null space of an (m x n) matrix A is determined by solving the homogeneous
system of equations Ax = θ. Suppose the augmented matrix [A | θ] for the system is
row equivalent to the matrix [B | θ], which is in echelon form. Then clearly A is row
equivalent to B, and the number, r, of nonzero rows of B equals the rank of A. But r
is also the number of nonzero rows of [B | θ]. It follows from Theorem 3 of Section 1.3
that there are n - r free variables in a solution for Ax = θ. But the number of vectors
in a basis for N(A) equals the number of free variables in the solution for Ax = θ
(see Example 3 of Section 3.4); that is, the nullity of A is n - r. Thus we have shown,
informally, that the following formula holds.

Remark   If A is an (m x n) matrix, then n = rank(A) + nullity(A).
Example 4 illustrates the argument preceding the remark: for the (3 x 4) matrix A of
Example 4 we found rank(A) = 3 and nullity(A) = 1, and indeed 3 + 1 = 4 = n.
But we already know that Ax = b is consistent if and only if b is in the column space of
A. It follows that Ax = b is consistent if and only if the subspaces given in Eq. (9) and
Eq. (10) are equal and consequently have the same dimension.

Our final theorem in this section shows that rank can be used to determine nonsingular
matrices.

THEOREM 12   An (n x n) matrix A is nonsingular if and only if the rank of A is n.

Proof   Suppose that A = [A1, A2, ..., An]. The proof of Theorem 12 rests on the observation
that the range of A is given by

R(A) = Sp{A1, A2, ..., An}.   (11)

Conversely, suppose that A has rank n; that is, R(A) has dimension n. It is an
immediate consequence of Eq. (11) and Theorem 9, property 4, that {A1, A2, ..., An}
is a basis for R(A). In particular, the columns of A are linearly independent, so, by
Theorem 12 of Section 1.7, A is nonsingular.
Proof of Theorem 10   Suppose that A = [A1, A2, ..., An], where

Aj = [a1j, a2j, ..., amj]^T,

and let a1, a2, ..., am denote the rows of A. Suppose that A^T has rank k. Since the
columns of A^T are a1^T, a2^T, ..., am^T, it follows that if

W = Sp{a1, a2, ..., am},

then dim(W) = k. Therefore, W has a basis {w1, w2, ..., wk} and, by Theorem 9,
property 2, m >= k. For 1 <= j <= k, suppose that wj is the (1 x n) vector
wj = [wj1, wj2, ..., wjn]. Each row of A is a combination of this basis:

a_i = [ai1, ai2, ..., ain] = c_i1 w1 + c_i2 w2 + ... + c_ik wk.   (12)

Equating the jth component of the left side of system (12) with the jth component
of the right side yields

[a1j, a2j, ..., amj]^T = w1j [c11, c21, ..., cm1]^T + w2j [c12, c22, ..., cm2]^T + ... + wkj [c1k, c2k, ..., cmk]^T   (13)

for 1 <= j <= n. For 1 <= i <= k, define c_i to be the (m x 1) column vector

c_i = [c1i, c2i, ..., cmi]^T.

Then Eq. (13) shows that each column of A is a linear combination of c1, c2, ..., ck.
Consequently the subspace

V = Sp{c1, c2, ..., ck}

has dimension k, at most. By Exercise 32, dim[R(A)] <= dim(V) <= k; that is,
rank(A) <= rank(A^T).

Since (A^T)^T = A, the same argument implies that rank(A^T) <= rank(A). Thus
rank(A) = rank(A^T).
3.5 EXERCISES

Exercises 1-14 refer to the vectors in (15).

In Exercises 1-6, determine by inspection why the given set S is not a basis for R^2.
(That is, either S is linearly dependent or S does not span R^2.)

1. S = {u1}    2. S = {u2}
3. S = {u1, u2, u3}    4. S = {u2, u3, u5}
7. S = {v1, v2}    8. S = {v1, v3}
9. S = {v1, v2, v3, v4}

In Exercises 10-14, use Theorem 9, property 3, to determine whether the given set is a basis for the indicated vector space.

10. S = {u1, u2} for R^2
11. S = {u2, u3} for R^2
12. S = {v1, v2, v3} for R^3
13. S = {v1, v2, v4} for R^3

In Exercises 15-20, determine dim(W) when the components of x satisfy the given conditions.

15. x1 - 2x2 + x3 - x4 = 0
16. x1 - 2x3 = 0

In Exercises 21-24, find a basis for N(A) and give the nullity and the rank of A.
In Exercises 25 and 26, find a basis for R(A) and give the nullity and the rank of A.

27. Let W be a subspace, and let S be a spanning set for W. Find a basis for W, and calculate dim(W) for each set S.

28. Let W be the subspace defined by W = {x: v^T x = 0}. Calculate dim(W) for the given vector v.

29. Let W be the subspace defined by W = {x: a^T x = 0 and b^T x = 0 and c^T x = 0}. Calculate dim(W) for the given vectors a, b, and c.

30. Let W be a nonzero subspace of R^n. Show that W has a basis. [Hint: Let w1 be any nonzero vector in W. If {w1} is a spanning set for W, then we are done. If not, there is a vector w2 in W such that {w1, w2} is linearly independent. Why? Continue by asking whether this is a spanning set for W. Why must this process eventually stop?]

31. Suppose that {u1, u2, ..., up} is a basis for a subspace W, and suppose that x is in W with x = a1u1 + a2u2 + ... + apup. Show that this representation for x in terms of the basis is unique; that is, if x = b1u1 + b2u2 + ... + bpup, then b1 = a1, b2 = a2, ..., bp = ap.

33. In each case, find the largest possible value for the rank of A and the smallest possible value for the nullity of A.
a) A is (3 x 3)    b) A is (3 x 4)    c) A is (5 x 4)

34. If A is a (3 x 4) matrix, prove that the columns of A are linearly dependent.

35. If A is a (4 x 3) matrix, prove that the rows of A are linearly dependent.

36. Let A be an (m x n) matrix. Prove that rank(A) <= m and rank(A) <= n.

37. Let A be a (2 x 3) matrix with rank 2. Show that the (2 x 3) system of equations Ax = b is consistent for every choice of b in R^2.

38. Let A be a (3 x 4) matrix with nullity 1. Prove that the (3 x 4) system of equations Ax = b is consistent for every choice of b in R^3.
39. Prove that an (n x n) matrix A is nonsingular if and only if the nullity of A is zero.

40. Let A be an (m x m) nonsingular matrix, and let B be an (m x n) matrix. Prove that N(AB) = N(B) and conclude that rank(AB) = rank(B).

41. Prove property 4 of Theorem 9 as follows: Assume that dim(W) = p and let S = {w1, ..., wp} be a set of p vectors that spans W. To see that S is linearly independent, suppose that c1w1 + ... + cpwp = θ. If cj ≠ 0, show that W = Sp{w1, ..., w(j-1), w(j+1), ..., wp}. Finally, use Theorem 8 to reach a contradiction.

42. Suppose that S = {u1, u2, ..., up} is a set of linearly independent vectors in a subspace W, where dim(W) = m and m > p. Prove that there is a vector u(p+1) in W such that {u1, u2, ..., up, u(p+1)} is linearly independent. Use this proof to show that a basis including all the vectors in S can be constructed for W.
3.6 Orthogonal Bases for Subspaces

The idea of orthogonality is a generalization of the vector geometry concept of perpendicularity.
If u and v are two vectors in R^2 or R^3, then we know that u and v are
perpendicular if u^T v = 0 (see Theorem 7 in Section 2.3).

In general, for vectors in R^n, we use the term orthogonal rather than the term
perpendicular. Specifically, if u and v are vectors in R^n, we say that u and v are
orthogonal if

u^T v = 0.

We will also find the concept of an orthogonal set of vectors to be useful.
DEFINITION 6   Let S = {u1, u2, ..., up} be a set of vectors in R^n. The set S is said to be an orthogonal set if each pair of distinct vectors from S is orthogonal; that is, u_i^T u_j = 0 when i ≠ j.
EXAMPLE 1   Verify that the given set S = {u1, u2, u3} of vectors in R^4 is an orthogonal set.

Solution   Computing the three scalar products gives

u1^T u2 = 1 + 0 - 1 + 0 = 0
u1^T u3 = 1 - 0 - 1 + 0 = 0
u2^T u3 = 1 - 2 + 1 + 0 = 0.

Therefore, S = {u1, u2, u3} is an orthogonal set of vectors in R^4.
An important property of an orthogonal set S is that S is necessarily linearly independent
(so long as S does not contain the zero vector).

THEOREM 13   Let S = {u1, u2, ..., up} be a set of nonzero vectors in R^n. If S is an orthogonal set of vectors, then S is a linearly independent set of vectors.

Proof   Let c1, c2, ..., cp be any scalars that satisfy

c1u1 + c2u2 + ... + cpup = θ.   (1)

Form the scalar product of both sides with u1.
DEFINITION 7   Let W be a subspace of R^n, and let B = {u1, u2, ..., up} be a basis for W. If B is an orthogonal set of vectors, then B is called an orthogonal basis for W. Furthermore, if ||u_i|| = 1 for 1 <= i <= p, then B is said to be an orthonormal basis for W.

The word orthonormal suggests both orthogonal and normalized. Thus an orthonormal
basis is an orthogonal basis consisting of vectors having length 1, where a
vector of length 1 is a unit vector or a normalized vector. Observe that the unit vectors
e1, e2, ..., en form an orthonormal basis for R^n.
EXAMPLE 2   Verify that the set B = {v1, v2, v3} is an orthogonal basis for R^3, where

v1 = [1, 2, 1]^T,  v2 = [3, -1, -1]^T,  and  v3 = [1, -4, 7]^T.

Solution   Computing the scalar products gives

v1^T v2 = 3 - 2 - 1 = 0
v1^T v3 = 1 - 8 + 7 = 0
v2^T v3 = 3 + 4 - 7 = 0.

Now, R^3 has dimension 3. Thus, since B is a set of three vectors and is also a linearly
independent set (see Theorem 13), it follows that B is an orthogonal basis for R^3.
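A quick machine check of such a verification is often worthwhile. The Python sketch below is ours, with the basis vectors as we read them from this example; each pair of distinct vectors must have zero scalar product.

```python
def dot(x, y):
    """Scalar product of two vectors given as plain lists."""
    return sum(a * b for a, b in zip(x, y))

v1, v2, v3 = [1, 2, 1], [3, -1, -1], [1, -4, 7]

# each pair of distinct vectors must have zero scalar product
print([dot(v1, v2), dot(v1, v3), dot(v2, v3)])   # [0, 0, 0]
```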
These observations are stated formally in the following corollary of Theorem 13.
COROLLARY   Let W be a subspace of R^n, where dim(W) = p. If S is an orthogonal set of p nonzero vectors and is also a subset of W, then S is an orthogonal basis for W.
Orthonormal Bases
If B = {UI, U2' ...• upl is an orthogonal set. then C = l{/j Ul. {/2U2 ....• apu p } is also
(2)
an orthogonal set for any scalars al. a2 ..... a p ' If B contains only nonzero vectors and
c = O. if wc define the scalars (/, by
:h U , we see that
.:Iependent set of aj = Ju, u,
T .'
•
XlOrs from a p_ then C is an orthol/ormal set. That is. we can convert an orthogonal set of nonzero
.:Iependent subset vectors into an orthonormal set by dividing each vector by its length.
bot!-onal basis. In
\-.- vII = JvTv.
EXAMPLE 3   Recall that the set B in Example 2 is an orthogonal basis for R^3. Modify B so that it is an orthonormal basis.

Solution   Given that B = {v1, v2, v3} is an orthogonal basis for R^3, we can modify B to be an
orthonormal basis by dividing each vector by its length. In particular (see Example 2),
the lengths of v1, v2, and v3 are

||v1|| = sqrt(6),  ||v2|| = sqrt(11),  and  ||v3|| = sqrt(66).

Therefore, the set C = {w1, w2, w3} is an orthonormal basis for R^3, where

w1 = (1/sqrt(6)) v1 = [1/sqrt(6), 2/sqrt(6), 1/sqrt(6)]^T,
w2 = (1/sqrt(11)) v2 = [3/sqrt(11), -1/sqrt(11), -1/sqrt(11)]^T,  and
w3 = (1/sqrt(66)) v3 = [1/sqrt(66), -4/sqrt(66), 7/sqrt(66)]^T.
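The normalization step is a one-liner in code. A small Python sketch (ours, using the vectors of Example 2) confirms that each rescaled vector has length 1 up to roundoff:

```python
import math

def normalize(v):
    """Divide a vector (a plain list) by its Euclidean length."""
    length = math.sqrt(sum(a * a for a in v))
    return [a / length for a in v]

B = [[1, 2, 1], [3, -1, -1], [1, -4, 7]]   # the orthogonal basis of Example 2
C = [normalize(v) for v in B]

# the squared length of each normalized vector should be 1
print([round(sum(a * a for a in w), 10) for w in C])   # [1.0, 1.0, 1.0]
```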
Determining Coordinates

Suppose that W is a p-dimensional subspace of R^n, and B = {w1, w2, ..., wp} is a
basis for W. If v is any vector in W, then v can be written uniquely in the form

v = a1w1 + a2w2 + ... + apwp.   (3)

(In Eq. (3), the fact that the scalars a1, a2, ..., ap are unique is proved in Exercise 31 of
Section 3.5.) The scalars a1, a2, ..., ap in Eq. (3) are called the coordinates of v with
respect to the basis B.

As we will see, it is fairly easy to determine the coordinates of a vector with respect to
an orthogonal basis. To appreciate the savings in computation, consider how coordinates
are found when the basis is not orthogonal. For instance, for a set B1 = {v1, v2, v3}
in which v1^T v3 ≠ 0, B1 is not an orthogonal basis. Suppose we wish
to express some vector v in R^3, say v = [5, -5, -1]^T, in terms of B1. We must solve
the (3 x 3) system a1v1 + a2v2 + a3v3 = v; in matrix terms, the coordinates a1, a2, and
a3 are found by solving the equation [v1, v2, v3] a = v.
EXAMPLE 4   Express the vector v in terms of the orthogonal basis B = {w1, w2, w3}, where

v = [12, -3, 6]^T,  w1 = [1, 2, 1]^T,  w2 = [3, -1, -1]^T,  and  w3 = [1, -4, 7]^T.

Solution   Beginning with the equation

v = a1w1 + a2w2 + a3w3

and forming the scalar product of each side with w1, w2, and w3 in turn gives

w1^T v = a1(w1^T w1), or 12 = 6a1
w2^T v = a2(w2^T w2), or 33 = 11a2
w3^T v = a3(w3^T w3), or 66 = 66a3.
Thus a1 = 2, a2 = 3, and a3 = 1. Therefore, as can be verified directly, v = 2w1 +
3w2 + w3.

In general, let W be a subspace of R^n, and let B = {w1, w2, ..., wp} be an
orthogonal basis for W. If v is any vector in W, then v can be expressed uniquely in the
form

v = a1w1 + a2w2 + ... + apwp,   (5a)

where

a_i = (w_i^T v) / (w_i^T w_i),  1 <= i <= p.   (5b)
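Eq. (5b) is essentially one line of code. A Python sketch (ours; the orthogonal basis and the vector v are those of Example 4 as reconstructed here):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def coordinates(v, basis):
    """Eq. (5b): a_i = (w_i^T v) / (w_i^T w_i), valid for an orthogonal basis."""
    return [dot(w, v) / dot(w, w) for w in basis]

basis = [[1, 2, 1], [3, -1, -1], [1, -4, 7]]   # orthogonal, as in Example 2
v = [12, -3, 6]
print(coordinates(v, basis))   # [2.0, 3.0, 1.0]
```

Three scalar products and three divisions replace solving a (3 x 3) linear system, which is exactly the savings promised above.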
THEOREM 14 (Gram-Schmidt)   Let W be a p-dimensional subspace of R^n, and let {w1, w2, ..., wp} be any basis for W. Then the set of vectors {u1, u2, ..., up} is an orthogonal basis for W, where

u1 = w1
u2 = w2 - [(u1^T w2)/(u1^T u1)] u1
u3 = w3 - [(u1^T w3)/(u1^T u1)] u1 - [(u2^T w3)/(u2^T u2)] u2,

and where, in general,

u_i = w_i - sum over k = 1, ..., i-1 of [(u_k^T w_i)/(u_k^T u_k)] u_k,  2 <= i <= p.   (6)

The proof of Theorem 14 is somewhat technical, and we defer it to the end of this section.

In Eq. (6) we have explicit expressions that can be used to generate an orthogonal
set of vectors {u1, u2, ..., up} from a given set of linearly independent vectors. These
explicit expressions are especially useful if we have reason to implement the Gram-Schmidt process on a computer.
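For readers who do want to implement it, here is a minimal Python sketch of Eq. (6); the function names are ours, and plain lists stand in for vectors.

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(basis):
    """Eq. (6): u_i = w_i - sum over k < i of ((u_k^T w_i)/(u_k^T u_k)) u_k."""
    orthogonal = []
    for w in basis:
        u = list(map(float, w))
        for uk in orthogonal:
            coeff = dot(uk, w) / dot(uk, uk)
            u = [a - coeff * b for a, b in zip(u, uk)]
        orthogonal.append(u)
    return orthogonal

U = gram_schmidt([[1, 1, 2], [1, 2, 1], [0, 1, 1]])
# all pairwise scalar products should vanish (up to roundoff)
checks = [dot(U[i], U[j]) for i in range(3) for j in range(i + 1, 3)]
print(all(abs(c) < 1e-9 for c in checks))   # True
```

The input vectors here are an arbitrary linearly independent set of ours, not one of the text's examples.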
However, for hand calculations, it is not necessary to memorize formula (6). All we
need to remember is the form or the general pattern of the Gram-Schmidt process. In
particular, the Gram-Schmidt process starts with a basis {w1, w2, ..., wp} and generates
new vectors u1, u2, u3, ... according to the following pattern:

u1 = w1
u2 = w2 + a u1
u3 = w3 + b u1 + c u2
u4 = w4 + d u1 + e u2 + f u3
...

In this sequence, the scalars can be determined in a step-by-step fashion from the
orthogonality conditions.

For instance, to determine the scalar a in the definition of u2, we use the condition
u1^T u2 = 0:

0 = u1^T u2 = u1^T w2 + a u1^T u1, so a = -(u1^T w2)/(u1^T u1).   (7)

To determine the two scalars b and c in the definition of u3, we use the two conditions
u1^T u3 = 0 and u2^T u3 = 0. In particular,

0 = u1^T u3 = u1^T w3 + b u1^T u1 + c u1^T u2
            = u1^T w3 + b u1^T u1   (since u1^T u2 = 0 by Eq. (7)).

Therefore, b = -(u1^T w3)/(u1^T u1). Similarly, c = -(u2^T w3)/(u2^T u2).
EXAMPLE 5   Let W be the subspace of R^3 defined by W = Sp{w1, w2}, where w1 = [1, 1, 2]^T. Use the Gram-Schmidt process to find an orthogonal basis for W.

Solution   Set u1 = w1 and

u2 = w2 + a u1,

where the scalar a is found from the condition u1^T u2 = 0. Now, u1 = [1, 1, 2]^T and thus
u1^T u2 is given by

u1^T u2 = u1^T (w2 + a u1) = u1^T w2 + a u1^T u1 = -6 + 6a.

Therefore, to have u1^T u2 = 0, we need a = 1. With a = 1, u2 is given by u2 = w2 + u1,
and {u1, u2} is an orthogonal basis for W.
For convenience in hand calculations, we can always eliminate fractional components
in a set of orthogonal vectors. Specifically, if x and y are orthogonal, then so are
ax and y for any scalar a:

If x^T y = 0, then (ax)^T y = a(x^T y) = 0.

We will make use of this observation in the following example.
EXAMPLE 6   Use the Gram-Schmidt orthogonalization process to generate an orthogonal basis for W = Sp{w1, w2, w3}.

Solution   First we should check to be sure that {w1, w2, w3} is a linearly independent set. A
calculation shows that the vectors are linearly independent. (Exercise 27 illustrates what
happens when the Gram-Schmidt algorithm is applied to a linearly dependent set.)

To generate an orthogonal basis {u1, u2, u3} from {w1, w2, w3}, we first set

u1 = w1.

Carrying the process through then produces the orthogonal basis {v1, v2, v3}.

(Note: In Example 6, we could have also eliminated fractional components in the middle
of the Gram-Schmidt process. That is, we could have redefined u2 to be the vector
u2 = [0, -1, 1, -1]^T and then calculated u3 with this new, redefined multiple of u2.)
As a final example, we use MATLAB to construct orthogonal bases. For a (3 x 5) matrix A:

>> orth(A)

ans =
    0.3841   -0.1173   -0.9158
    0.7682    0.5908    0.2466
    0.5121   -0.7983    0.3170

>> null(A)

ans =
   -0.7528   -0.0690
   -0.2063    0.1800
   -0.1069   -0.9047
    0.5736   -0.0U9
   -0.2243    0.3772

Figure 3.14   The MATLAB command orth(A) produces an orthonormal basis for the range of A. The command null(A) gives an orthonormal basis for the null space of A.
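Readers working without MATLAB can reproduce the effect of orth by applying the Gram-Schmidt process to the columns of A; in the SciPy ecosystem, scipy.linalg.orth and scipy.linalg.null_space play the same roles as the two MATLAB commands. A self-contained sketch, ours rather than the text's, using a small example matrix of our own:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def orthonormal_range_basis(A, tol=1e-10):
    """Gram-Schmidt on the columns of A; dependent columns reduce to ~0 and are dropped."""
    cols = [[row[j] for row in A] for j in range(len(A[0]))]
    basis = []
    for c in cols:
        u = list(map(float, c))
        for q in basis:                      # the q's are already unit length
            coeff = dot(q, c)
            u = [a - coeff * b for a, b in zip(u, q)]
        length = math.sqrt(dot(u, u))
        if length > tol:
            basis.append([a / length for a in u])
    return basis

A = [[1, 2, 3],
     [0, 1, 1],
     [1, 3, 4]]                              # third column = first + second
Q = orthonormal_range_basis(A)
print(len(Q))   # 2, the rank of A
```

Because the third column is dependent, only two orthonormal vectors survive, matching the dimension of R(A).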
of w1 and w2. Proceeding inductively, suppose that u1, u2, ..., u(i-1) have been generated
by Eq. (6), and suppose that each u_k has the form

u_k = w_k - c1 w1 - c2 w2 - ... - c(k-1) w(k-1).

From this equation, each u_k is nonzero, and it follows that Eq. (6) is a well-defined
expression [since u_k^T u_k > 0 for 1 <= k <= (i - 1)]. Finally, since each u_k in Eq. (6) is a
linear combination of w1, w2, ..., wk, we see that u_i is a nontrivial linear combination
of w1, w2, ..., wi; and therefore u_i is nonzero.

All that remains to be proved is that the vectors generated by Eq. (6) are orthogonal.
Clearly u1^T u2 = 0. Proceeding inductively again, suppose that u_j^T u_k = 0 for any j and
k, where j ≠ k and 1 <= j, k <= i - 1. From (6) we have

u_j^T u_i = u_j^T w_i - sum over k = 1, ..., i-1 of [(u_k^T w_i)/(u_k^T u_k)] (u_j^T u_k)
          = u_j^T w_i - [(u_j^T w_i)/(u_j^T u_j)] (u_j^T u_j) = 0.

Thus u_i is orthogonal to u_j for 1 <= j <= i - 1. Having this result, we have shown
that {u1, u2, ..., up} is an orthogonal set of p nonzero vectors. So, by the corollary of
Theorem 13, the vectors u1, u2, ..., up are an orthogonal basis for W.
3.6 EXERCISES

In Exercises 1-4, verify that {u1, u2, u3} is an orthogonal set for the given vectors.

In Exercises 5-8, find values a, b, and c such that {u1, u2, u3} is an orthogonal set.

In Exercises 9-12, express the given vector v in terms of the orthogonal basis B = {u1, u2, u3}, where u1, u2, and u3 are as in Exercise 7.

In Exercises 13-18, use the Gram-Schmidt process to generate an orthogonal set from the given linearly independent vectors.

In Exercises 19 and 20, find a basis for the null space and the range of the given matrix. Then use Gram-Schmidt to obtain orthogonal bases.
21. Argue that any set of four or more nonzero vectors in R^3 cannot be an orthogonal set.

22. Let S = {u1, u2, u3} be an orthogonal set of nonzero vectors in R^3. Define the (3 x 3) matrix A by A = [u1, u2, u3]. Show that A is nonsingular and A^T A = D, where D is a diagonal matrix. Calculate the diagonal matrix D when A is created from the orthogonal vectors in Exercise 1.

23. Let W be a p-dimensional subspace of R^n. If v is a vector in W such that v^T w = 0 for every w in W, show that v = θ. [Hint: Consider w = v.]

24. The Cauchy-Schwarz inequality. Let x and y be vectors in R^n. Prove that |x^T y| <= ||x|| ||y||. [Hint: Observe that ||x - cy||^2 >= 0 for any scalar c. If y ≠ θ, let c = x^T y / y^T y and expand (x - cy)^T (x - cy) >= 0. Also treat the case y = θ.]

25. The triangle inequality. Let x and y be vectors in R^n. Prove that ||x + y|| <= ||x|| + ||y||. [Hint: Expand ||x + y||^2 and use Exercise 24.]

26. Let x and y be vectors in R^n. Prove that | ||x|| - ||y|| | <= ||x - y||. [Hint: For one part consider ||x + (y - x)|| and Exercise 25.]

27. If the hypotheses for Theorem 14 were altered so that {w_i, i = 1, ..., p-1} is linearly independent and {w_i, i = 1, ..., p} is linearly dependent, use Exercise 23 to show that Eq. (6) yields u_p = θ.

28. Let B = {u1, u2, ..., up} be an orthonormal basis for a subspace W. Let v be any vector in W, where v = a1u1 + a2u2 + ... + apup. Show that

||v||^2 = a1^2 + a2^2 + ... + ap^2.
3.7 Linear Transformations from R^n to R^m

The notation F: V -> W will denote a function F whose domain is the subspace V and whose range is contained
in W. Furthermore, for v in V we write

w = F(v)

to indicate that F maps v to w. To illustrate, let F: R^3 -> R^2 be defined by

F(x) = [x1 - x2, x2 + x3]^T,

where x = [x1, x2, x3]^T.

Similarly, if A is a (3 x 2) matrix, then Eq. (1) defines a function T: R^2 -> R^3
with the formula T(x) = Ax.
Returning to the general case in which A is an (m x n) matrix, note that the function
T defined by Eq. (1) satisfies the following linearity properties:

T(v + w) = A(v + w) = Av + Aw = T(v) + T(w)
T(cv) = A(cv) = cAv = cT(v),   (2)

where v and w are any vectors in R^n and c is an arbitrary scalar. We next define a
linear transformation to be a function that satisfies the two linearity properties given in
Eq. (2).

DEFINITION 8   Let V and W be subspaces of R^n and R^m, respectively, and let T be a function from V to W, T: V -> W. We say that T is a linear transformation if for all u and v in V and for all scalars a,

T(u + v) = T(u) + T(v)   (3)

and

T(au) = aT(u).
It is apparent from Eq. (2) that the function T defined in Eq. (1) by matrix multiplication
is a linear transformation. Conversely, if T: R^n -> R^m is a linear transformation,
then (see Theorem 15 on page 232) there is an (m x n) matrix A such that T is defined
by Eq. (1). Thus linear transformations from R^n to R^m are precisely those functions that
can be defined by matrix multiplication as in Eq. (1). The situation is not so simple for
linear transformations on arbitrary vector spaces or even for linear transformations on
subspaces of R^n. Thus the concept of a linear transformation is a convenient and useful
generalization to arbitrary subspaces of matrix functions defined as in Eq. (1).
EXAMPLE 1   Define F: R^3 -> R^2 by

F(x) = [x1 - x2, x2 + x3]^T, where x = [x1, x2, x3]^T.

Determine whether F is a linear transformation.

Solution   We must determine whether the two linearity properties in Eq. (3) are satisfied by F.
Thus let u and v be in R^3,

u = [u1, u2, u3]^T and v = [v1, v2, v3]^T.

Then

F(u + v) = [(u1 + v1) - (u2 + v2), (u2 + v2) + (u3 + v3)]^T
         = [u1 - u2, u2 + u3]^T + [v1 - v2, v2 + v3]^T
         = F(u) + F(v).

Similarly,

F(cu) = [cu1 - cu2, cu2 + cu3]^T = c [u1 - u2, u2 + u3]^T = cF(u),

so F is a linear transformation.

Note that F can also be defined as F(x) = Ax, where A is the (2 x 3) matrix

A = [ 1 -1  0 ]
    [ 0  1  1 ].
EXAMPLE 2   Define H: R^2 -> R^2 by

H(x) = [x1 - x2 + 1, 3x2]^T, where x = [x1, x2]^T.

Determine whether H is a linear transformation.

Solution   Let u and v be in R^2,

u = [u1, u2]^T and v = [v1, v2]^T.

Then

H(u + v) = [(u1 + v1) - (u2 + v2) + 1, 3(u2 + v2)]^T,

while

H(u) + H(v) = [u1 - u2 + 1, 3u2]^T + [v1 - v2 + 1, 3v2]^T
            = [(u1 + v1) - (u2 + v2) + 2, 3(u2 + v2)]^T.

Thus we see that H(u + v) ≠ H(u) + H(v). Therefore, H is not a linear transformation.
Although it is not necessary, it can also be verified easily that if c ≠ 1, then H(cu) ≠
cH(u).
EXAMPLE 3   Let W be a subspace of R^n such that dim(W) = p, and let S = {w1, w2, ..., wp} be an orthonormal basis for W. Define T: R^n -> W by

T(v) = (v^T w1)w1 + (v^T w2)w2 + ... + (v^T wp)wp.   (4)

It can be shown similarly that T(cu) = cT(u) for each scalar c, so T is a linear
transformation.

The vector T(v) defined by Eq. (4) is called the orthogonal projection of v onto
W and will be considered further in Sections 3.8 and 3.9. As a specific illustration of
Example 3, let W be the subspace of R^3 consisting of all vectors of the form

x = [x1, x2, 0]^T.

Thus W is the xy-plane, and the set {e1, e2} is an orthonormal basis for W. For x in R^3,

x = [x1, x2, x3]^T,

the formula in Eq. (4) gives

T(x) = [x1, x2, 0]^T.

This transformation is illustrated geometrically by Fig. 3.15, which shows the point
(x1, x2, x3) projected down to T(x) = (x1, x2, 0) in the xy-plane.
J" , E\ \\\PLE 4- Let \V be a subspace of R~. and lei a be a scalar. Define T: lV --,. \V by T(w) = aw.
Demonstrate that T is a linear transformation.
Solution If \' and 14' are in tV. then
t:\.\ \\1111:> Let \V be a subspace of R~. and let (J be the zero vector in R"'. Define T: IV --+ R'" by
T(w) = 9 for each w in IV. Show that T is a linear transfonnation.
so T is a linear transfonnation. •
The linear transfonnation T defined in Example 5 is called the :uo trallS/ormatioll.
Later in this section we will consider other examples when",e study a particular class
of linear tralLSfonnations from R" to R 2. For the present. we tum to further properties
of linear transfonnations.
Inductively we can extend Eq. (5) to any finite subset of V. That is, if v1, v2, ..., vr are
vectors in V and if c1, c2, ..., cr are scalars, then

T(c1v1 + c2v2 + ... + crvr) = c1T(v1) + c2T(v2) + ... + crT(vr).   (6)

The following example illustrates an application of Eq. (6).
EXAMPLE 6  Let W be the subspace of R^3 defined by
    W = {x: x = [x2 + 2x3, x2, x3]^T, x2 and x3 any real numbers}.
Then {w1, w2} is a basis for W, where
    w1 = [1, 1, 0]^T  and  w2 = [2, 0, 1]^T.
Suppose that T: W → R^2 is a linear transformation such that T(w1) = u1 and T(w2) = u2
for given vectors u1 and u2 in R^2. Let the vector w be given by
    w = [-1, 3, -2]^T.
Show that w is in W, express w as a linear combination of w1 and w2, and use Eq. (6)
to determine T(w).

Solution  It follows from the description of W that w is in W (take x2 = 3 and x3 = -2).
Furthermore, it is easy to see that
    w = 3w1 - 2w2.
By Eq. (6),
    T(w) = 3T(w1) - 2T(w2) = 3u1 - 2u2.
Example 6 illustrates that the action of a linear transformation T on a subspace W
is completely determined once the action of T on a basis for W is known. Our next
example provides yet another illustration of this fact.
232 Chapter 3 The Vector Space Rn
    T(e1) = [1, 2]^T,  T(e2) = [-1, 1]^T,  and  T(e3) = [2, 3]^T.
For an arbitrary vector x in R^3,
    x = [x1, x2, x3]^T = x1e1 + x2e2 + x3e3.
Thus,
    T(x) = x1T(e1) + x2T(e2) + x3T(e3)
         = x1[1, 2]^T + x2[-1, 1]^T + x3[2, 3]^T
         = [x1 - x2 + 2x3,  2x1 + x2 + 3x3]^T.
Continuing with the notation of the preceding example, let A be the (2 x 3) matrix
with columns T(e1), T(e2), T(e3); thus,
    A = [ 1  -1   2 ]
        [ 2   1   3 ].
It is an immediate consequence of Eq. (7) and Theorem 5 of Section 1.5 that T(x) = Ax.
Thus Example 7 illustrates the following theorem.
THEOREM 15  Let T: R^n → R^m be a linear transformation, and let e1, e2, ..., en be the unit vectors in
R n. If A is the (m x n) matrix defined by
A = [T(el)' T(e2),"" T(e n )],
then T(x) = Ax for all x in Rn.
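Theorem 15 gives a mechanical recipe for the matrix of a linear transformation: apply T to each unit vector and use the images as columns. The following sketch (not from the text; the particular T is a sample choice) verifies the recipe numerically.

```python
import numpy as np

def T(x):
    # A sample linear transformation from R^3 to R^2, chosen for illustration.
    x1, x2, x3 = x
    return np.array([x1 - x2 + 2*x3, 2*x1 + x2 + 3*x3])

# Theorem 15: the j-th column of A is T(e_j).
A = np.column_stack([T(e) for e in np.eye(3)])

x = np.array([1.0, 2.0, 3.0])
assert np.allclose(A @ x, T(x))   # T(x) = Ax
print(A)
```

The assertion holds for every x, since both sides are linear in x and agree on the unit vectors.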
Proof  If x is a vector in R^n,
    x = [x1, x2, ..., xn]^T,
then x can be expressed in the form
    x = x1e1 + x2e2 + ... + xnen.
One can easily verify that T(x) = x1T(e1) + x2T(e2) + ... + xnT(en) = Ax.
DEFINITION 9  Let V and W be subspaces, and let T: V → W be a linear transformation.
The null space of T, denoted by N(T), is the subset of V given by
    N(T) = {v: v is in V and T(v) = θ}.
The range of T, denoted by R(T), is the subset of W defined by
    R(T) = {w: w is in W and w = T(v) for some v in V}.
That N(T) and R(T) are subspaces will be proved in the more general context of
Chapter 5. If T maps R^n into R^m, then by Theorem 15 there exists an (m x n) matrix
A such that T(x) = Ax. In this case it is clear that the null space of T is the null space
of A and the range of T coincides with the range of A.
As with matrices, the dimension of the null space of a linear transformation T is
called the nullity of T, and the dimension of the range of T is called the rank of T. If
T is defined by matrix multiplication, T(x) = Ax, then the transformation T and the
matrix A have the same nullity and the same rank. Moreover, if T: R^n → R^m, then A
is an (m x n) matrix, so it follows from the remark in Section 3.5 that
    rank(T) + nullity(T) = n.    (9)
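The rank-nullity relation can be checked numerically: the rank is computed from the matrix of the transformation, and the nullity is the number of zero singular values of that matrix. A brief sketch (the matrix below is a sample choice, not one from the text):

```python
import numpy as np

A = np.array([[ 1.0,  2.0],
              [-1.0,  1.0],
              [ 2.0, -1.0]])        # sample (3 x 2) matrix: m = 3, n = 2

n = A.shape[1]
rank = np.linalg.matrix_rank(A)

# Nullity = dimension of the solution set of Ax = 0
#         = number of (numerically) zero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(s < 1e-10))

assert rank + nullity == n          # Eq. (9)
print(rank, nullity)                # 2 0
```

Here the two columns are independent, so the rank is 2 and the null space is trivial.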
EXAMPLE 9  Let F be the linear transformation given in Example 1, F: R^3 → R^2. Describe the null
space and the range of F, and determine the nullity and the rank of F.

Solution  It follows from Theorem 15 that F(x) = Ax, where A is the (2 x 3) matrix whose columns
are F(e1), F(e2), and F(e3). Solving the homogeneous system Ax = θ shows that the
solution set is spanned by a single vector u; thus {u} is a basis for N(F), so F has
nullity 1. By Eq. (9),
    rank(F) = n - nullity(F) = 3 - 1 = 2.
Thus R(F) is a two-dimensional subspace of R^2, and hence R(F) = R^2.
Alternatively, note that the system of equations Ax = b has a solution for each b in
R^2, so R(F) = R(A) = R^2.
EXAMPLE 10  Let T: R^2 → R^3 be the linear transformation given in Example 8. Describe the null
space and the range of T, and determine the nullity and the rank of T.

Solution  In Example 8 it was shown that T(x) = Ax, where A is the (3 x 2) matrix
    A = [  1   2 ]
        [ -1   1 ]
        [  2  -1 ].
If b is the (3 x 1) vector
    b = [b1, b2, b3]^T,
then the augmented matrix [A | b] for the linear system Ax = b is row equivalent to
    [ 1  0  (1/3)b1 - (2/3)b2        ]
    [ 0  1  (1/3)b1 + (1/3)b2        ]    (10)
    [ 0  0  (-1/3)b1 + (5/3)b2 + b3  ].
Therefore, T(x) = Ax = b can be solved if and only if 0 = (-1/3)b1 + (5/3)b2 + b3.
The range of T can thus be described as
    R(T) = R(A) = {b: b = [b1, b2, (1/3)b1 - (5/3)b2]^T, b1 and b2 any real numbers}.
A basis for R(T) is {u1, u2}, where
    u1 = [1, 0, 1/3]^T  and  u2 = [0, 1, -5/3]^T.
Thus T has rank 2, and by Eq. (9),
    nullity(T) = n - rank(T) = 2 - 2 = 0.
It follows that T has null space {θ}. Alternatively, it is clear from matrix (10), with
b = θ, that the homogeneous system of equations Ax = θ has only the trivial solution.
Therefore, N(T) = N(A) = {θ}.
    ||T(v)|| = ||v||    (11)
for all v in R^2. Transformations that satisfy Eq. (11) are called orthogonal transforma-
tions. We begin by giving some examples of orthogonal transformations.
EXAMPLE 11  Let θ be a fixed angle, and let T: R^2 → R^2 be the linear transformation defined by
T(v) = Av, where A is the (2 x 2) matrix
    A = [ cos θ  -sin θ ]
        [ sin θ   cos θ ].
Give a geometric interpretation of T, and show that T is an orthogonal transformation.
Solution  Let v = [a, b]^T and write T(v) = [c, d]^T, so that by the definition of T,
    c = a cos θ - b sin θ,  d = a sin θ + b cos θ.    (12)
We proceed now to show that T(v) is obtained geometrically by rotating the vector v
through the angle θ. To see this, let φ be the angle between v and the positive x-axis
(see Fig. 3.17), and set r = ||v||. Then the coordinates a and b can be written as
    a = r cos φ,  b = r sin φ.    (13)
Making the substitution (13) for a and b in (12) yields
    c = r cos φ cos θ - r sin φ sin θ = r cos(φ + θ)    (14)
and
    d = r cos φ sin θ + r sin φ cos θ = r sin(φ + θ).
Therefore, c and d are the coordinates of the point obtained by rotating the point (a, b)
through the angle θ. Clearly then, ||T(v)|| = ||v||, and T is an orthogonal linear
transformation.
Figure 3.17  Rotation through the angle θ
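Equations (12)-(14) are easy to confirm numerically: multiplying by the rotation matrix advances the polar angle by θ and leaves the length unchanged. A small sketch (the angle and vector below are arbitrary choices, not from the text):

```python
import numpy as np

theta = np.pi / 3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([2.0, 1.0])
w = A @ v

# Rotation preserves length ...
assert np.isclose(np.linalg.norm(w), np.linalg.norm(v))
# ... and advances the polar angle phi by exactly theta, as in Eq. (14).
phi_v = np.arctan2(v[1], v[0])
phi_w = np.arctan2(w[1], w[0])
assert np.isclose(phi_w - phi_v, theta)
print(w)
```

Both assertions hold for any v and any θ for which the rotated angle does not wrap past π.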
More generally, if A is a matrix of the form
    A = [ a  -b ]
        [ b   a ],
where a² + b² = 1, then the linear transformation T(v) = Av is the rotation through
the angle θ, 0 ≤ θ < 2π, where cos θ = a and sin θ = b.
Next, consider a matrix of the form
    A = [ cos θ   sin θ ]
        [ sin θ  -cos θ ].
The linear transformation T(v) = Av reflects a vector v about the line l, where (1/2)θ
is the angle between l and the positive x-axis (see Fig. 3.18). Any such transformation is
called a reflection. Note that a reflection T is also an orthogonal linear transformation.

Figure 3.18  Reflection about a line l
EXAMPLE 13  Let T: R^2 → R^2 be defined by T(v) = Av, where A is the (2 x 2) matrix
    A = [ 1/2    √3/2 ]
        [ √3/2  -1/2  ].
Proof  If T is an orthogonal transformation, then ||T(v)|| = ||v|| for every vector v in R^2.
In particular, ||T(e1)|| = ||e1|| = 1, and similarly ||T(e2)|| = 1. Set u1 = T(e1),
u2 = T(e2), and v = e1 + e2. Then T(v) = u1 + u2 and ||T(v)||² = ||v||² = 2.
Thus,
    2 = ||u1 + u2||²
      = (u1 + u2)^T(u1 + u2)
      = (u1^T + u2^T)(u1 + u2)
      = u1^T u1 + u1^T u2 + u2^T u1 + u2^T u2
      = ||u1||² + 2u1^T u2 + ||u2||²
      = 2 + 2u1^T u2.
It follows that u1^T u2 = 0; that is, u1 and u2 are orthogonal unit vectors.
Writing u1 = [a, b]^T, the unit vectors orthogonal to u1 are
    u2 = [-b, a]^T  or  u2 = [b, -a]^T.
In either case, it follows from Theorem 15 that T is defined by T(v) = Av, where A is
the (2 x 2) matrix A = [u1, u2]. Thus if
    u2 = [-b, a]^T,
then
    A = [ a  -b ]
        [ b   a ],
so T is a rotation. If
    u2 = [b, -a]^T,
then
    A = [ a   b ]
        [ b  -a ],
and T is a reflection. In either case note that A^T A = I, so A^T = A^(-1) (see Exercise 48).
An (n x n) real matrix with the property that A^T A = I is called an orthogonal matrix.
Thus we have shown that an orthogonal transformation on R^2 is defined by
    T(x) = Ax,
where A is an orthogonal matrix.
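The condition A^T A = I is mechanical to check. The sketch below (an illustration, not part of the text) confirms it for a rotation and a reflection built at the same arbitrary angle:

```python
import numpy as np

theta = 0.7                      # any angle
c, s = np.cos(theta), np.sin(theta)

rotation   = np.array([[c, -s],
                       [s,  c]])
reflection = np.array([[c,  s],
                       [s, -c]])

for A in (rotation, reflection):
    assert np.allclose(A.T @ A, np.eye(2))      # A is an orthogonal matrix
    assert np.allclose(A.T, np.linalg.inv(A))   # hence A^T = A^(-1)
print("both matrices are orthogonal")
```

This is exactly the content of Exercise 48 below, verified at one sample angle.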
3.7 EXERCISES
"" I. Define T: R2 ...... R1 by Which of the following vectors are in the null space
ofT?
~[ 2x, - 3x, ].
aj T ([
T ([ ;; ])
~])
-Xl
bj T ([ : ])
+ X2
oj [n -n bj [
,j [ ; ] dj [ -:/2 ]
,j T ([ ~]) d) T ([ -~ ]) I -1/2
2. !kline T: R" -+ R1 by T(x) = Ax. where 4. lei T: R" -+ R 1 be the fUJ1Clion defined in Exer-
aj T ([ ~])
-3
bj T ([ : ])
3
.
b~[ -a
5. Let T: R^2 → R^2 be the function given in Exercise 1. Show that for each b in R^2,
there is an x in R^2 such that T(x) = b.
T
-- R"
([ x:X']) = [
.f>
be the linear lransfonnation
b ~[ : ] F ([ :: ]) ~ 2x, + lx, .J T
9. F: R 2 -+ R 2 defined by
IV ~ I" ~ [;:] ~ ~ x, x) 0).
~ x~:, ]
Give a geometric interpretation of IV, v. and T (\').
F ([ :: ]) [ 19. Let T: R 2 -+ R 3 be a linear transformation such
that T(el) = U\ and T(e2) = U2, where
12. F: R 3 -+ R 2 defined by
F ([ :: ]) ~ [ -Xl
Xl - X2
+ 3X2 -
+ XJ ]
2x J
", ~ U] ood ", ~ ~
[ ]
~ [J
Find each of the roIlO'o\'ing.
T ([ - : ] )
aJ T ([ : ])
~[
'" ,J
:onal
~[ ~]
~
T ([ _: ])
T ([ :: ]) [ ::::: ]
~ ~d
T •
~u.:h
11. T ([ : ]) [ :] 27. T: R" - R defined by
T ([ -: ]) ~ [ n T ( [ : : ]) ~ 3" + 'x,
28. T: R 3 -.. RJ defined by
X']) [X' + x, ]
~ ~l
TX2=X)
([ x~ Xl
D T ([ ]) =[
29. T: R J .....,. R2 defined by
-l]) ~[n
......
T([
T ([ :J) = 2<, -x,+4x;
U T ([ ~ ]) = [ -:]
31. Let a be a real number, and define f: R → R by f(x) = ax for each x in R. Show
that f is a linear transformation.
32. Let T: R → R be a linear transformation, and suppose that T(1) = a. Show that
T(x) = ax for each x in R.

33. Let T: R^2 → R^2 be the function that maps each point in R^2 to its reflection with
respect to the x-axis. Give a formula for T and show that T is a linear transformation.

34. Let T: R^2 → R^2 be the function that maps each point in R^2 to its reflection with
respect to the line y = x. Give a formula for T and show that T is a linear
transformation.

35. Let V and W be subspaces, and let F: V → W and G: V → W be linear
transformations. Define F + G: V → W by [F + G](v) = F(v) + G(v) for each v in V.
Prove that F + G is a linear transformation.

36. Let F: R^3 → R^2 and G: R^3 → R^2 be defined by
    F([x1, x2, x3]^T) = [2x1 - 3x2 + x3, 4x1 + 2x2 - 5x3]^T
and
    G([x1, x2, x3]^T) = [-x1 + 4x2 + 2x3, -2x1 + x2 + 3x3]^T.
a) Give a formula for the linear transformation F + G (see Exercise 35).
b) Find matrices A, B, and C such that F(x) = Ax, G(x) = Bx, and (F + G)(x) = Cx.
c) Verify that C = A + B.

37. Let V and W be subspaces, and let T: V → W be a linear transformation. If a is a
scalar, define aT: V → W by [aT](v) = a[T(v)] for each v in V. Show that aT is a
linear transformation.

38. Let T: R^3 → R^2 be the linear transformation defined in Exercise 29. The linear
transformation [3T]: R^3 → R^2 is defined in Exercise 37.
a) Give a formula for the transformation 3T.
b) Find matrices A and B such that T(x) = Ax and [3T](x) = Bx.
c) Verify that B = 3A.

39. Let U, V, and W be subspaces, and let F: U → V and G: V → W be linear
transformations. Prove that the composition G∘F: U → W of F and G, defined by
[G∘F](u) = G(F(u)) for each u in U, is a linear transformation.

40. Let F: R^3 → R^2 and G: R^2 → R^3 be linear transformations defined by
    F([x1, x2, x3]^T) = [-x1 + 2x2 - 4x3, 2x1 + 5x2 + x3]^T
and
    G([x1, x2]^T) = [ ~:::~:: ].
a) By Exercise 39, G∘F: R^3 → R^3 is a linear transformation. Give a formula for G∘F.
b) Find matrices A, B, and C such that F(x) = Ax, G(x) = Bx, and (G∘F)(x) = Cx.
c) Verify that C = BA.

41. Let B be an (m x n) matrix, and let T: R^n → R^m be defined by T(x) = Bx for each
x in R^n. If A is the matrix for T given by Theorem 15, show that A = B.

42. Let F: R^n → R^p and G: R^p → R^m be linear transformations, and assume that
Theorem 15 yields matrices A and B, respectively, for F and G. Show that the matrix
for the composition G∘F (see Exercise 39) is BA. [Hint: Show that (G∘F)(x) = BAx
for x in R^n and then apply Exercise 41.]

43. Let I: R^n → R^n be the identity transformation. Determine the matrix A such that
I(x) = Ax for each x in R^n.

44. Let a be a real number and define T: R^n → R^n by T(x) = ax (see Example 4).
Determine the matrix A such that T(x) = Ax for each x in R^n.

Exercises 45-49 are based on the optional material.

45. Let T: R^2 → R^2 be a rotation through the angle θ. In each of the following cases,
exhibit the matrix for T. Also represent v and T(v) geometrically, where
    v = [ : ]
a) θ = π/2   b) θ = π/3   c) θ = 2π/3

46. Let T: R^2 → R^2 be the reflection with respect to the line l. In each of the following
cases, exhibit the matrix for T. Also represent e1, e2, T(e1), and T(e2) geometrically.
a) l is the x-axis.   b) l is the y-axis.
c) l is the line with equation y = x.
d) l is the line with equation y = √3 x.

48. Let T: R^2 → R^2 be an orthogonal linear transformation, and let A be the
corresponding (2 x 2) matrix. Show that A^T A = I. [Hint: Use Theorem 16.]

49. Let A = [A1, A2] be a (2 x 2) matrix such that
3.8 LEAST-SQUARES SOLUTIONS TO INCONSISTENT
SYSTEMS, WITH APPLICATIONS TO DATA FITTING
When faced with solving a linear system of the form Ax = b, our procedure has been to
describe all solutions if the system is consistent but merely to say "there are no solutions"
if the system is inconsistent. We now want to go a step further with regard to inconsistent
systems. If a given linear system Ax = b has no solution, then we would like to do
the next best thing: find a vector x* such that the residual vector, r = Ax* - b, is as
small as possible. In terms of practical applications, we shall see that any technique for
minimizing the residual vector can also be used to find best least-squares fits to data.
A common source of inconsistent systems is overdetermined systems (that is,
systems with more equations than unknowns). The system that follows is an example of
an overdetermined system:
    x1 + 4x2 = -2
    x1 + 2x2 =  6
    2x1 + 3x2 = 1.
Overdetermined systems are often inconsistent, and the preceding example is no excep-
tion. Given that the above system has no solution, a reasonable goal is to find values for
x1 and x2 that come as close as possible to satisfying all three equations. Methods for
achieving that goal are the subject of this section.
Least-Squares Solutions to Ax = b
Consider the linear system Ax = b, where A is (m x n). If x is a vector in R^n, then the
vector r = Ax - b is called a residual vector. A vector x* in R^n that yields the smallest
possible residual vector is called a least-squares solution to Ax = b. More precisely, x*
is a least-squares solution to Ax = b if
    ||Ax* - b|| ≤ ||Ax - b||,  for all x in R^n.
Figure 3.20  y* is the closest vector in R(A) to b
Thus, the geometry of the (3 x 2) system, as illustrated in Fig. 3.20, suggests that
we can find least-squares solutions by solving the associated system (1):
    A^T Ax = A^T b.
As the following theorem asserts, this solution procedure is indeed valid.
We will give the proof of Theorem 17 in the next section. For now, we will illustrate
the use of Theorem 17 and also give some examples showing the connections between
data-fitting problems and least-squares solutions of inconsistent systems. (In parts (a)
and (b) of Theorem 17, the associated equations A^T Ax = A^T b are called the normal
equations.)
EXAMPLE 1  Find the least-squares solutions to the overdetermined system
    x1 + 4x2 = -2
    x1 + 2x2 =  6
    2x1 + 3x2 = 1.
Solution  By Theorem 17, we can find the least-squares solutions by solving the normal equations
A^T Ax = A^T b, where
    A = [ 1  4 ]         b = [ -2 ]
        [ 1  2 ]   and       [  6 ]
        [ 2  3 ]             [  1 ].
Now,
    A^T A = [  6  12 ]   and   A^T b = [ 6 ]
            [ 12  29 ]                 [ 7 ].
Solving the system A^T Ax = A^T b, we find the least-squares solution
    x* = [3, -1]^T.
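The normal-equation computation in Example 1 takes only a few lines to reproduce. This sketch uses NumPy in place of the hand computation:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [1.0, 2.0],
              [2.0, 3.0]])
b = np.array([-2.0, 6.0, 1.0])

# Solve the normal equations A^T A x = A^T b.
x_star = np.linalg.solve(A.T @ A, A.T @ b)
print(x_star)          # [ 3. -1.]
```

Here A^T A is (2 x 2) and invertible, so the least-squares solution is unique.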
Least-Squares Fits to Data
One of the major applications for least-squares solutions is to the problem of determining
best least-squares fits to data. To introduce this important topic, consider a table of data
such as the one displayed next.
Table 3.1
    t    t0   t1   t2   ...   tm
    y    y0   y1   y2   ...   ym
Suppose, when we plot the data in Table 3.1, that it has a distribution such as the one
shown in Fig. 3.21. When we examine Fig. 3.21, it appears that the data points nearly
fall along a line of the form y = mt + c. A logical question is: "What is the best line
that we can draw through the data, one that comes closest to representing the data?"
Figure 3.21  Data points that nearly fall along a line
In order to answer this question, we need a way to quantify the terms best and
closest. There are many different methods we might use to measure best, but one of the
most useful is the least-squares criterion:
    Find m and c to minimize the sum, for i = 0 to m, of [(mti + c) - yi]².    (2)
The particular linear polynomial y = mt + c that minimizes the sum of squares in
Eq. (2) is called the best least-squares linear fit to the data in Table 3.1. (We see in the
next section that best least-squares linear fits always exist and are unique.)
Similarly, suppose the set of data points from Table 3.1 has a distribution such as the
one displayed in Fig. 3.22. In this case, it appears that the data might nearly fall along
the graph of a quadratic polynomial y = at² + bt + c. As in Eq. (2), we can use a
least-squares criterion to choose the best least-squares quadratic fit:
    Find a, b, and c to minimize the sum, for i = 0 to m, of [(ati² + bti + c) - yi]².    (3)
In a like manner, we can consider fitting data in a least-squares sense with polynomials
of any appropriate degree.
Figure 3.22  Data points that nearly fall along the graph of a quadratic polynomial
    mt0 + c = y0
    mt1 + c = y1
      .
      .
    mtm + c = ym.
In matrix terms, this overdetermined system can be expressed as Ax = b, where
    A = [ t0  1 ]
        [ t1  1 ]
        [  .  . ]        x = [ m ]        b = [ y0 ]
        [ tm  1 ],           [ c ],  and      [ y1 ]
                                              [  . ]
                                              [ ym ].
Then
    ||Ax - b||² = the sum, for i = 0 to m, of [(mti + c) - yi]².
Comparing the equation above with the least-squares criterion (2), we see that the best
least-squares linear fit, y = m*t + c*, can be determined by finding the least-squares
solution of Ax = b.
EXAMPLE 2  Consider the table of data
    t    1   4   8   11
    y    1   2   4    5
Find the least-squares linear fit to the data.
Solution  For the function defined by y = mt + c, the data lead to the overdetermined system
     m + c = 1
    4m + c = 2
    8m + c = 4
   11m + c = 5.
In matrix terms, the system is Ax = b, where A has columns [1, 4, 8, 11]^T and
[1, 1, 1, 1]^T, x = [m, c]^T, and b = [1, 2, 4, 5]^T. Thus
    A^T A = [ 202  24 ]   and   A^T b = [ 96 ]
            [  24   4 ]                 [ 12 ].
There is a unique solution to A^T Ax = A^T b because A has rank 2. The solution is
    x* = [ 12/29 ]
         [ 15/29 ].
Thus the least-squares linear fit is defined by
    y = (12/29)t + 15/29.
The data points and the linear fit are sketched in Fig. 3.23.

Figure 3.23  The least-squares linear fit for Example 2
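Example 2 can be double-checked with a least-squares solver; NumPy is used here as a stand-in for the MATLAB computations discussed below, and the slope and intercept agree with the hand computation m = 12/29, c = 15/29:

```python
import numpy as np

t = np.array([1.0, 4.0, 8.0, 11.0])
y = np.array([1.0, 2.0, 4.0, 5.0])

# Columns of A are [t_i, 1], as in the overdetermined system above.
A = np.column_stack([t, np.ones_like(t)])
(m, c), *_ = np.linalg.lstsq(A, y, rcond=None)
print(m, c)     # approximately 12/29 and 15/29
```

Internally, lstsq solves the problem via an orthogonal factorization rather than the normal equations, but the answer is the same here.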
MATLAB has several reliable alternatives for finding least-squares solutions
to inconsistent systems; these methods do not depend on solving the normal equations.
If A is not square, the simple MATLAB command x = A\b produces a least-
squares solution to Ax = b using a QR-factorization of A. (In Chapter 7, we give a
thorough discussion of how to find least-squares solutions using QR-factorizations and
Householder transformations.) If A is square but inconsistent, then the command x =
A\b results in a warning but does not return a least-squares solution. If A is not square,
a warning is also issued when A does not have full rank. In the next section we will
give more details about these matters and about using MATLAB to find least-squares
solutions.
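Readers working outside MATLAB can observe the analogous behavior elsewhere; in NumPy, for instance, np.linalg.lstsq plays the role of A\b for rectangular systems and also handles the square-but-singular case directly. A hedged aside, not part of the original text:

```python
import numpy as np

# A square, singular, inconsistent system: no exact solution exists.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, 0.0])

x, residual, rank, s = np.linalg.lstsq(A, b, rcond=None)
print(rank)                              # 1: A is rank deficient
print(x)                                 # a least-squares solution
assert np.allclose(A @ x, [1.0, 1.0])    # A x* is the projection of b onto R(A)
```

Unlike MATLAB's backslash on a singular square matrix, lstsq returns a least-squares solution here (in fact the minimum norm one).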
EXAMPLE 3  Lubricating characteristics of oils deteriorate at elevated temperatures, and the amount
of bearing wear, y, is normally a linear function of the operating temperature, t. That is,
y = mt + b. By weighing bearings before and after operation at various temperatures,
the following table was constructed:

    Operating temperature, t          120  148  175  204  232  260  288  316  343  371
    Wear, y (gm/10,000 hr)              3    4    5  5.5    6  7.5  8.8   10 11.1   12

Determine the least-squares linear fit from these readings and use it to determine an
operating temperature that should limit bearing wear to 7 gm/10,000 hr of operation.
Solution  For the system Ax = b, we see that A and b are given by
    A = [ 120  1 ]
        [ 148  1 ]
        [ 175  1 ]
        [ 204  1 ]
        [ 232  1 ]
        [ 260  1 ]
        [ 288  1 ]
        [ 316  1 ]
        [ 343  1 ]
        [ 371  1 ]
and b = [3, 4, 5, 5.5, 6, 7.5, 8.8, 10, 11.1, 12]^T. The least-squares solution to
Ax = b is found from the MATLAB commands in Fig. 3.24.
>> A = [120 148 175 204 232 260 288 316 343 371;
        1 1 1 1 1 1 1 1 1 1]';
>> b = [3 4 5 5.5 6 7.5 8.8 10 11.1 12]';
>> x = A\b
x =
    0.0362
   -1.6151
Thus the least-squares linear fit is
    y = 0.0362t - 1.6151.
Setting y = 7 and solving 7 = 0.0362t - 1.6151 gives t ≈ 238, so an operating
temperature of about 238 degrees should limit bearing wear to 7 gm/10,000 hr.
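The MATLAB session in Fig. 3.24 translates directly to NumPy (a sketch, not part of the original text); the fitted coefficients match to the printed precision, and inverting the fit recovers the operating-temperature estimate:

```python
import numpy as np

t = np.array([120, 148, 175, 204, 232, 260, 288, 316, 343, 371], dtype=float)
y = np.array([3, 4, 5, 5.5, 6, 7.5, 8.8, 10, 11.1, 12])

A = np.column_stack([t, np.ones_like(t)])
(m, c), *_ = np.linalg.lstsq(A, y, rcond=None)
print(round(m, 4), round(c, 4))      # 0.0362 -1.6151

t_for_7gm = (7 - c) / m              # temperature limiting wear to 7 gm/10,000 hr
print(round(t_for_7gm))              # 238
```

Note that the 4-decimal coefficients are rounded; the temperature is computed from the unrounded values.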
Figure 3.25 Nonlinear data
As a measure for goodness of fit, we can ask for coefficients a0, a1, ..., an that
minimize the quantity Q(a0, a1, ..., an), where
    Q(a0, a1, ..., an) = the sum, for i = 0 to m, of [p(ti) - yi]²
                       = the sum, for i = 0 to m, of [(a0 + a1ti + ... + an ti^n) - yi]².    (4)
As can be seen by inspection, minimizing Q(a0, a1, ..., an) is the same as minimizing
||Ax - b||², where
    A = [ 1  t0  t0²  ...  t0^n ]
        [ 1  t1  t1²  ...  t1^n ]
        [ .   .   .         .   ]        x = [a0, a1, ..., an]^T,        (5)
        [ 1  tm  tm²  ...  tm^n ],
and b = [y0, y1, ..., ym]^T.
As before, we can minimize ||Ax - b||² = Q(a0, a1, ..., an) by solving A^T Ax = A^T b.
The nth-degree polynomial p* that minimizes Eq. (4) is called the least-squares nth-
degree fit.
EXAMPLE 4  Find the least-squares quadratic fit to the following table of data:
    t   -2  -1   0   1   2
    y   12   5   3   2   4

Solution  For the quadratic p(t) = a0 + a1t + a2t², the least-squares criterion leads to minimizing
||Ax - b||², where
    A = [ 1  -2  4 ]
        [ 1  -1  1 ]
        [ 1   0  0 ]        x = [ a0 ]
        [ 1   1  1 ]            [ a1 ]   and   b = [12, 5, 3, 2, 4]^T.
        [ 1   2  4 ],           [ a2 ],
Forming the normal equations, we obtain
    A^T A = [  5   0  10 ]   and   A^T b = [  26 ]
            [  0  10   0 ]                 [ -19 ]
            [ 10   0  34 ]                 [  71 ].
The solution is x* = [87/35, -19/10, 19/14]^T, and hence the least-squares quadratic fit
is
    p(t) = (19/14)t² - (19/10)t + 87/35.
A graph of y = p(t) and the data points are sketched in Fig. 3.26.
Figure 3.26  y = (19/14)t² - (19/10)t + 87/35
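The quadratic fit can be confirmed by solving the normal equations stated above; the exact fractions 87/35, -19/10, and 19/14 come out to machine precision (a sketch, not part of the text):

```python
import numpy as np

ATA = np.array([[ 5.0,  0.0, 10.0],
                [ 0.0, 10.0,  0.0],
                [10.0,  0.0, 34.0]])
ATb = np.array([26.0, -19.0, 71.0])

a0, a1, a2 = np.linalg.solve(ATA, ATb)
print(a0, a1, a2)     # = 87/35, -19/10, 19/14
```

The middle equation decouples because the odd powers of the symmetric t-values sum to zero, which is why a1 = -19/10 exactly.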
The same principles apply when we decide to fit data with any linear combination
of functions. For example, suppose y = f(t) is defined by
Therefore, when A is rank deficient, there is an infinite family of least-squares
solutions to Ax = b. Such an example is given next. This example is worked using
MATLAB, and we note that MATLAB produces only a single least-squares solution but
does give a warning that A is rank deficient. In Section 3.9 we will discuss this topic in
more detail.
EXAMPLE 5  For A and b as given, the system Ax = b has no solution. Find all the least-squares
solutions, where
A ~ ~ ~ ~]
[
-I I -I
. b = [ -: ] .
0
-I 2 0 -3
Solution  The MATLAB calculation is displayed in Fig. 3.27(a). Notice that MATLAB warns us
that A is rank deficient, having rank two. In Exercise 18 we ask you to verify that A
does indeed have rank two.
Figure 3.27(a)  MATLAB warns that A is rank deficient and returns x = [0, -1.5000, 0.5000]'
Since A is rank deficient, there are infinitely many least-squares solutions to the
inconsistent system Ax = b. MATLAB returned just one of these solutions, namely
x = [0, -1.5, 0.5]^T. We can find all the solutions by solving the normal equations
A^T Ax = A^T b.
Fig. 3.27(b) shows the result of using MATLAB to solve the normal equations for
the original system (since A and b have already been defined in Fig. 3.27(a), MATLAB
>> rref(NormEqn)
ans =
     1     0     2     1
     0     1     1    -1
     0     0     0     0

Figure 3.27(b)  The normal equations solved with rref
makes it very easy to define the augmented matrix for A^T Ax = A^T b). The complete
solution is x1 = 1 - 2x3, x2 = -1 - x3, or in vector form:
    x* = [ 1 - 2x3 ]   [  1 ]        [ -2 ]
         [ -1 - x3 ] = [ -1 ] + x3   [ -1 ]
         [    x3   ]   [  0 ]        [  1 ].
As can be seen from the complete solution just displayed, the particular MATLAB least-
squares solution can be recovered by setting x3 = 0.5.
3.8 EXERCISES
In Exercises 1-6, find all vectors x* that minimize
||Ax - b||, where A and b are as given. Use the procedure
suggested by Theorem 17, as illustrated in Examples 1
and 5.
3.A=
[ -1
354.
I -4
b= 3
0
In Exercises 7-10, find the least-squares linear fit to the
given data. In each exercise, plot the data points and the
linear approximation.
7, ~ -1 0 1 2 g,(I,)
1
g,(I,) ]
x~ [::
Y 0 I 2 4 A= 81~(1) 81(f2)
"d
t -2 0 I 2 [
8, ;:l
Y 2 I 0 0
81 (1m ) 81Um)
-I o 2
9. ;r-:.:;
f
2 3 b --
y, ]
",
.. .
[
2 3 Y.
10. t 0
y -2 3 7 10
in Exercises II-I·t find the least-squares quadratic fit 16. Let 81(r) = ..Ii and 82(t) = COSJU. and consider
to the given data. In each exercise. plot the data poinls the data points (t;. J;). I ::: j ::: 4. defined by
IILompl~
and the quadratic approximation. 1 4 9 16
-2 -I 2 y 0 2 4 5
11.
y 2 o 2
As in Eq. (6). lei Q(al. (/2) = L::.I(aI81(t;)-
_
f _,-0_,-1_.::2_.::3
(/282(t;) - -,",f......here 8t(t,)= JT, and 82(t,) =
12. cos Jrt,.
, 0 0 I 2
\8/, a) Use the resull of Exercise 15 to determine A. x.
and b so that Q(al. (2) = IIAx - b 12.
•f _ _-.=2_.::-:.:'_.::0~~
0. - b) Find the coefficients for f(l) = (/I..Ii .,..
y -3 -1 0 3 a2 cos;rt thai minimize Q(al. a~).
U,
:J -2 0 I 2 11. Consider the ((m - I) x (n - I)] matri-.: A in Eq. (5).
\\here m ~ n. Show that A has rank" - 1. (Hint:
y 5 I I 5 Suppose that Ax = 8. \\here x = lao. al ..... an)T.
15. Consider the following table of data: What can you say about the polynomial pet) =
ao -l- alt + ... + a.t n?)
~ "
f.
yin )'2
,.
.'
18. Find the rank of the matrix A in Example 5.
(b) Use the theory to explain some of the technical language associated with least
squares so that we can become comfortable using computational packages such
as MATLAB for least-squares problems.
Let x = [x1, x2, ..., xn]^T and y = [y1, y2, ..., yn]^T be vectors in R^n.
We define the distance between two vectors x and y to be the length of the vector x - y;
recall that the length of x - y is the number ||x - y||, where
    ||x - y|| = sqrt((x - y)^T(x - y))
             = sqrt((x1 - y1)² + (x2 - y2)² + ... + (xn - yn)²).
The problem we wish to consider is stated next.

    Given a subspace W of R^n and a vector v in R^n, find the vector w* in W
    that is closest to v.

That is, among all vectors w in W, we want to find the special vector w* in W that
is closest to v. Although this problem can be extended to some very complicated and
abstract settings, examination of the geometry of a simple special case will exhibit a
fundamental principle that extends to all such problems.
Consider the special case where W is a two-dimensional subspace of R^3. Geomet-
rically, we can visualize W as a plane through the origin (see Fig. 3.28). Given a point v
not on W, we wish to find the point in the plane, w*, that is closest to v. The geometry
of this problem seems to insist (see Fig. 3.28) that w* is characterized by the fact that
the vector v - w* is perpendicular to the plane W.
The next theorem shows that Fig. 3.28 is not misleading. That is, if v - w* is
orthogonal to every vector in W, then w* is the best least-squares approximation to v.
THEOREM 18  Let W be a p-dimensional subspace of R^n, and let v be a vector in R^n. Suppose there
is a vector w* in W such that (v - w*)^T w = 0 for every vector w in W. Then w* is the
best least-squares approximation to v.
Figure 3.28  w* is the closest point in the plane W to v
Proof  Let w be any vector in W and consider the following calculation for the distance from v
to w:
    ||v - w||² = ||(v - w*) + (w* - w)||²
               = (v - w*)^T(v - w*) + 2(v - w*)^T(w* - w)    (1)
                 + (w* - w)^T(w* - w).
Since w* - w is in W, the hypothesis gives (v - w*)^T(w* - w) = 0. Therefore,
    ||v - w||² = ||v - w*||² + ||w* - w||² ≥ ||v - w*||²,
and w* is the best least-squares approximation to v.
THEOREM 19  Let W be a p-dimensional subspace of R^n, and let {u1, u2, ..., up} be a basis for W.
Let v and w* be vectors in R^n and W, respectively. Then (v - w*)^T w = 0 for all w in
W if and only if
    (v - w*)^T ui = 0,  i = 1, 2, ..., p.    (2)
THEOREM 20  Let W be a p-dimensional subspace of R^n and let v be a vector in R^n. Then there is one
and only one best least-squares approximation in W to v.
Proof  The proof of existence is based on finding a solution to the system of Eq. (2). Now,
system (2) is easiest to analyze and solve if we assume the basis vectors are orthogonal.
In particular, let {u1, u2, ..., up} be an orthogonal basis for W (in Section 3.6 we
observed that every subspace of R^n has an orthogonal basis). Let w* be a vector in W,
where
    w* = a1u1 + a2u2 + ... + apup.    (3)
Using Eq. (3), the equations in system (2) become
    (v - a1u1 - a2u2 - ... - apup)^T ui = 0,  for i = 1, 2, ..., p.
Then, because the basis vectors are orthogonal, the preceding equations simplify con-
siderably:
    v^T ui - ai ui^T ui = 0,  for i = 1, 2, ..., p.
Solving for the coefficients ai, we obtain
    ai = (v^T ui)/(ui^T ui).
Note that the preceding expression for ai is well defined, since ui is a basis vector and
hence the denominator ui^T ui cannot be zero.
Having solved the system (2), we can write down an expression for a vector w*
such that (v - w*)^T w = 0 for all w in W. By Theorem 18, this vector w* is a best
approximation to v:
    w* = (v^T u1 / u1^T u1)u1 + (v^T u2 / u2^T u2)u2 + ... + (v^T up / up^T up)up.    (4)
Having established the existence of best approximations with formula (4), we turn
now to the question of uniqueness. To begin, suppose w is any best approximation to
v, and w* is the best approximation defined by Eq. (4). Since the vector v - w* was
constructed so as to be orthogonal to every vector in W, we can make a calculation
similar to the one in Eq. (1) and conclude the following:
EXAMPLE 1  Let W be the subspace of R^3 defined by
    W = {x: x = [-x2 + 3x3, x2, x3]^T, x2 and x3 any real numbers},
and let v = [1, -2, -4]^T. Find the best least-squares approximation to v in W.

Solution  A vector x in W can be written as
    x = [-x2 + 3x3, x2, x3]^T = x2[-1, 1, 0]^T + x3[3, 0, 1]^T.
Therefore, a natural basis for W consists of the two vectors w1 = [-1, 1, 0]^T and
w2 = [3, 0, 1]^T.
We now use the Gram-Schmidt process to derive an orthogonal basis {u1, u2} from
the natural basis {w1, w2}. In particular, set u1 = w1 and u2 = w2 + au1, where a is
chosen so that u1^T u2 = 0:
    u1^T u2 = u1^T(w2 + au1) = u1^T w2 + a u1^T u1 = -3 + 2a,
so a = 1.5. Having found a, we calculate the second vector in the orthogonal basis for W, finding
    u2 = w2 + 1.5u1 = [3, 0, 1]^T + 1.5[-1, 1, 0]^T = [1.5, 1.5, 1]^T.
Next, let w* = a1u1 + a2u2 denote the best approximation, and determine the
coefficients of w* using Eq. (4):
    a1 = (v^T u1)/(u1^T u1) = -3/2 = -1.5
    a2 = (v^T u2)/(u2^T u2) = -5.5/5.5 = -1.
Thus w* = -1.5u1 - u2 = [0, -3, -1]^T. (As a check for the calculations, we can form
v - w* = [1, 1, -3]^T and verify that v - w* is orthogonal to each of the original basis
vectors, w1 = [-1, 1, 0]^T and w2 = [3, 0, 1]^T.)
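The arithmetic in this example is easy to double-check numerically. The sketch below (an illustration, not part of the original text) redoes the Gram-Schmidt step and formula (4) for v = [1, -2, -4]^T and verifies the orthogonality check at the end:

```python
import numpy as np

w1 = np.array([-1.0, 1.0, 0.0])
w2 = np.array([ 3.0, 0.0, 1.0])
v  = np.array([ 1.0, -2.0, -4.0])

# Gram-Schmidt: orthogonalize {w1, w2}.
u1 = w1
u2 = w2 - (u1 @ w2) / (u1 @ u1) * u1      # = w2 + 1.5*u1 here

# Formula (4): best approximation in W = span{w1, w2}.
w_star = (v @ u1) / (u1 @ u1) * u1 + (v @ u2) / (u2 @ u2) * u2
print(w_star)                              # [ 0. -3. -1.]

# v - w* is orthogonal to the original basis vectors:
assert np.isclose((v - w_star) @ w1, 0.0)
assert np.isclose((v - w_star) @ w2, 0.0)
```

The same two-line projection works for any subspace once an orthogonal basis has been computed.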
Figure 3.30  The matrix A defines a function y = Ax from R^n to R^m; y* is the best approximation in R(A) to b
In Fig. 3.30, we think of the (m x n) matrix A as defining a function of the
form y = Ax from R^n to R^m. The subspace R(A) represents the range of A; it is a
p-dimensional subspace of R^m. We have drawn the vector b so that it is not in R(A),
illustrating the case where the system Ax = b is inconsistent. The vector y* represents
the (unique) best approximation in R(A) to b.
Proof of Theorem 17  Because y* is in R(A), there must be vectors x in R^n such that Ax = y*. In addition,
because y* is the closest point in R(A) to b, we can say:
    x* is a least-squares solution to Ax = b if and only if Ax* = y*.    (6)
In order to locate y*, we note that y* is characterized by w^T(y* - b) = 0 for
any vector w in R(A). Then, since the columns of A form a spanning set for R(A), y*
can be characterized by the conditions:
    Ai^T(y* - b) = 0,  for i = 1, 2, ..., n.
The orthogonality conditions above can be rewritten in matrix/vector terms as
    A^T(y* - b) = θ.    (7)
Finally, since y* is in R(A), finding y* to solve Eq. (7) is the same as finding vectors x
in R^n that satisfy the normal equations:
    A^T(Ax - b) = θ.    (8)
We can now complete the proof of Theorem 17 by making the observation that
Eq. (6) and Eq. (8) are equivalent in the following sense: A vector x in R^n satisfies
Eq. (8) if and only if the vector y* satisfies Eq. (6), where y* = Ax.
To establish part (a) of Theorem 17, we note that Eq. (6) is consistent, and hence
the normal equations given in Eq. (8) are consistent as well. Part (b) of Theorem
17 follows from rule (5) and the equivalence of equations (6) and (8). Part (c) of
Theorem 17 follows because Ax = y* has a unique solution if and only if the columns
of A are linearly independent.
Just as ||x|| measures the size of a vector x, ||A||_F measures the size of a matrix A.
Now, let A be an (m x n) matrix. The pseudoinverse of A, denoted A+, is the (n x m)
matrix that minimizes ||AX - I||_F, where I denotes the (m x m) identity matrix. It can
be shown that such a minimizing matrix always exists and is always unique. As can be
seen from the definition of the pseudoinverse, it is the closest thing (in a least-squares
sense) to an inverse for a rectangular matrix. In the event that A is square and invertible,
then the pseudoinverse coincides with the usual inverse, A^(-1). It can be shown that the
minimum norm least-squares solution of Ax = b can be found from
    x* = A+ b.
An actual calculation of the pseudoinverse is usually made with the aid of another type
of decomposition, the singular-value decomposition. A discussion of the singular-value
decomposition would lead us too far afield, and so we ask the interested reader to consult
a reference, such as Golub and Van Loan, Matrix Computations.
(a) If A is (m x n) with m ≠ n, then the MATLAB command A\b returns a
least-squares solution to Ax = b. If A happens to be rank deficient, then
MATLAB selects a least-squares solution with no more than p nonzero entries
(where p denotes the rank of A). The least-squares solution is calculated using
a QR-factorization for A (see Chapter 7).
(b) If A is square and the system Ax = b is inconsistent, then the MATLAB command
A\b will produce a warning that A is singular or nearly singular, but will not give
a least-squares solution. One way to use MATLAB to find a least-squares solution
for a square but inconsistent system is to set up and solve the normal equations.
(c) Whether A is square or rectangular, the MATLAB command x = pinv(A)*b
will give the minimum norm least-squares solution; the command pinv(A)
generates the pseudoinverse A+.
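For readers working outside MATLAB, the same behavior is available in NumPy; the sketch below (the matrices are illustrative choices, not from the text) shows that the pseudoinverse reduces to the ordinary inverse for an invertible matrix, and that for a rank-deficient matrix `pinv(A) @ b` yields the minimum norm least-squares solution, mirroring remark (c).

```python
import numpy as np

# For an invertible matrix, the pseudoinverse coincides with the inverse.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
assert np.allclose(np.linalg.pinv(A), np.linalg.inv(A))

# Rank-deficient case: B has rank 1, so Bx = b has infinitely many
# least-squares solutions; pinv(B) @ b selects the one of minimum norm.
B = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 3.0])
x = np.linalg.pinv(B) @ b
print(x)  # [1. 1.]
```

Any vector of the form [1 + t, 1 - t] is also a least-squares solution here; the pseudoinverse picks t = 0, the choice of smallest Euclidean norm.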
Example 2  The following sample values from the function z = f(x, y) were obtained from
experimental observations:

f(1, 1) = -1.1    f(1, 2) = 0.9
f(2, 1) = 0.2     f(2, 2) = 2.0
f(3, 1) = 0.9     f(3, 2) = 3.1

We would like to approximate the surface z = f(x, y) by a plane of the form z =
ax + by + c. Use a least-squares criterion to choose the parameters a, b, and c.

Solution  The conditions implied by the experimental observations are

a + b + c = -1.1
2a + b + c = 0.2
3a + b + c = 0.9
a + 2b + c = 0.9
2a + 2b + c = 2.0
3a + 2b + c = 3.1.

A least-squares solution is found for the system Ax = b, where

A = [ 1  1  1          b = [ -1.1
      2  1  1                 0.2
      3  1  1                 0.9
      1  2  1  ,              0.9
      2  2  1                 2.0
      3  2  1 ]               3.1 ].
264 Chapler 3 The 'ector Space R"
>> A = [1,1,1; 2,1,1; 3,1,1; 1,2,1; 2,2,1; 3,2,1];
>> b = [-1.1, .2, .9, .9, 2., 3.1]';
>> x = A\b
x =
    1.0500
    2.0000
   -4.1000

Figure 3.31  The results of Example 2
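The MATLAB session above can be reproduced in NumPy; the following sketch fits the same plane z = ax + by + c to the six observations of Example 2 by least squares and recovers the same coefficients.

```python
import numpy as np

# Each row of A is (x, y, 1) for one observation; b holds the observed z.
A = np.array([[1, 1, 1],
              [2, 1, 1],
              [3, 1, 1],
              [1, 2, 1],
              [2, 2, 1],
              [3, 2, 1]], dtype=float)
b = np.array([-1.1, 0.2, 0.9, 0.9, 2.0, 3.1])

coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
print(coeffs)  # [ 1.05  2.   -4.1 ], matching MATLAB's A\b
```

So the least-squares plane is z = 1.05x + 2y - 4.1, in agreement with Fig. 3.31.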
Solution  The results are shown in Fig. 3.32(a). Note that MATLAB has issued a rank deficient
warning and concluded that A has rank 2. Because A is not full rank, least-squares solu-
tions to Ax = b are not unique. Since A has rank 2, the MATLAB command A\b selects
a solution with no more than 2 nonzero components, namely x1 = [0, -0.8, 1]^T.
As an alternative, we can use the pseudoinverse to calculate the minimum-norm
least-squares solution (see Fig. 3.32(b)). As can be seen from Fig. 3.32(b), the MATLAB
command pinv(A)*b has produced the least-squares solution x2 = [0.6, -0.2, 0.4]^T.
A calculation shows that ||x1|| = 1.2806, while the minimum norm solution in
Fig. 3.32(b) has ||x2|| = 0.7483.
Finally, to complete this example, we can find all possible least-squares solutions
by solving the normal equations. We find, using the MATLAB command rref(B),
where B is the augmented matrix for the normal equations, that B reduces to

[ 1  0  1  1
  0  1  1  0.2
  0  0  0  0 ]

Figure 3.32  (a) The solution produced by A\b  (b) The minimum norm solution produced by pinv(A)*b

Thus, the set of all least-squares solutions is found from x = [1 - x3, 0.2 - x3, x3]^T =
[1, 0.2, 0]^T + x3[-1, -1, 1]^T.
Example 4  As a final example, to illustrate how MATLAB treats inconsistent square systems, find a
least-squares solution of Ax = b, where

A = [ 2  3  5          b = [ 1
      1  0  3                1
      3  3  8 ],             1 ].

Solution  The results are given in Fig. 3.33 where, for clarity, we used the rational format to display
the calculations. As can be seen, the MATLAB command A\b results in a warning that
A may be ill conditioned and may have a solution vector with very large components.
Then, a least-squares solution is calculated using the pseudoinverse. The least-
squares solution found is x = [2/39, -2/13, 8/39]^T.

>> A = [2,3,5; 1,0,3; 3,3,8];
>> b = [1,1,1]';
>> x = A\b
x =
  -6755399441055744
   ...
   ...
>> x = pinv(A)*b
x =
    2/39
   -2/13
    8/39

Figure 3.33  The results for an inconsistent square system
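The same computation carries over to NumPy. The sketch below checks that this A is indeed rank deficient (its third row is the sum of the first two, so Ax = b is inconsistent) and that the pseudoinverse produces the minimum norm least-squares solution [2/39, -2/13, 8/39]^T reported above.

```python
import numpy as np

# A is singular: row 3 = row 1 + row 2, but b3 != b1 + b2,
# so Ax = b is inconsistent and has no exact solution.
A = np.array([[2.0, 3.0, 5.0],
              [1.0, 0.0, 3.0],
              [3.0, 3.0, 8.0]])
b = np.array([1.0, 1.0, 1.0])

assert np.linalg.matrix_rank(A) == 2     # rank deficient

x = np.linalg.pinv(A) @ b                # minimum norm least-squares solution
print(x)                                 # approximately [2/39, -2/13, 8/39]
```

Unlike MATLAB's A\b on a singular square system, which warns and returns a vector with enormous components, the pseudoinverse route gives the well-behaved minimum norm answer directly.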
3.9 EXERCISES
Exercises 1-16 refer to the following subspaces:
a) W = {x: x = [x1, x2, x3]^T, ...}
b) W = R(B), B = [...]
c) W = R(B), B = [-1 2; 0 4; -2 1]
d) W = {x: x = [x1, x2, x3]^T, x1 + x2 + x3 = 0 and x1 - x3 = 0}
In the following exercises, find the best approximation w* for the given vector v.
1. W given by (a), v = [1, 2, 6]^T
2. W given by (a), v = [3, 0, 3]^T
7. W given by (c), v = [2, 0, 4]^T
8. W given by (c), v = [4, 0, -1]^T
9. W given by (d), v = [1, 3, 1]^T
10. W given by (d), v = [3, 4, 0]^T
1. Let
W = {x: x = [x1, x2]^T, x1 x2 = 0}.
Verify that W satisfies properties (s1) and (s3) of Theorem 2. Illustrate by example that
W does not satisfy (s2).
2. Let
W = {x: x = [x1, x2]^T, x1 ≥ 0, x2 ≥ 0}.
Let
A = [ 1 -1  1
      2  4 -1
      2  1  ... ]
and
W = {x: x = [x1, x2, x3]^T, Ax = 3x}.
then show that Sp(S) = Sp(T). [Hint: Obtain an algebraic specification for each of
Sp(S) and Sp(T).]
5. Let
A = [ 1 -1  2  5  4  3
      2 -2  ...
      1 -1  0  7  ... ].
Find a basis for N(A).
6. Let S = {v1, v2, v3}, where
v1 = [...], v2 = [...], v3 = [...],
and suppose that T: R^3 -> R^2 is a linear transformation defined by T(x) = Ax, where
A is a (2 x 3) matrix such that the augmented matrix [A b] reduces to
[ 1  0   8 | -5a + 3b
  0  1  -3 | 2a - b ].
a) Find a subset of S that is a basis for Sp(S).
b) Find a basis for Sp(S) by setting A = [v1, v2, v3] and reducing A^T to echelon form.
c) Give an algebraic specification for Sp(S), and use that specification to obtain a
basis for Sp(S).
7. Let A be the (m x n) matrix defined by
A = [ n+1    n+2    ...  2n-1      2n
      2n+1   2n+2   ...  3n-1      3n
      ...
      mn+1   mn+2   ...  (m+1)n-1  (m+1)n ].
c) T: R^3 -> R^3 and the rank of T is 3.
9. In a)-c), use the given information to determine the rank of T.
a) T: R^3 -> R^2 and the nullity of T is 2.
b) T: R^3 -> R^3 and the nullity of T is 1.
c) T: R^2 -> R^3 and the nullity of T is 0.
10. Let B = {x1, x2} be a basis for R^2, and let T: R^2 -> R^2 be a linear
transformation such that
T([x1, x2]^T) = [...]
a) Find vectors x1 and x2 in R^3 such that T(x1) = e1 and T(x2) = e2, where e1 and
e2 are the unit vectors in R^2.
b) Exhibit a nonzero vector x3 in R^3 such that x3 is in N(T).
c) Show that B = {x1, x2, x3} is a basis for R^3.
d) Express each of the unit vectors e1, e2, e3 of R^3 as a linear combination of the
vectors in B. Now calculate T(ei), i = 1, 2, 3, and determine the matrix A.
In Exercises 12-18, b = [a, b, c, d]^T, T: R^6 -> R^4 is
a linear transformation defined by T(x) = Ax, and A is
a (4 x 6) matrix such that the augmented matrix [A b]
reduces to

[ 1  0   2  0  -3  ... | -2a + b - 2c
  0  1  -1  0   2   2  | a + 5b - 7c
  0  0   0  1  -1  -2  | -5a - 2b + 3c
  0  0   0  0   0   0  | -18a - 7b + 9c + d ].

12. Exhibit a basis for the row space of A, and determine
the rank and the nullity of A.
13. Determine which of the following vectors are in
R(T). Explain how you can tell.
w1 = [...], w2 = [...]
16. Suppose that A = [A1, A2, A3, A4, A5, A6].
a) For each vector wi, i = 1, 2, 3, 4, listed in Exercise 13, if wi is in the column
space of A, then express wi as a linear combination of the columns of A.
b) Find a subset of {A1, A2, A3, A4, A5, A6} that is a basis for the column space of A.
c) For each column Aj of A that does not appear in the basis obtained in part b),
express Aj as a linear combination of the basis vectors.
d) Let b = [1, -2, 1, -7]^T. Show that b is in the column space of A, and express b
as a linear combination of the basis vectors found in part b).
e) If x = [2, 3, 1, -1, 1, 1]^T, then express Ax as a linear combination of the basis
vectors found in part b).
17. a) Give an algebraic specification for R(T), and use that specification to determine
a basis for R(T).
CONCEPTUAL EXERCISES
In Exercises 1-12, answer true or false. Justify your answer by providing a
counterexample if the statement is false or an outline of a proof if the statement is true.
1. If W is a subspace of R^n and x and y are vectors in R^n such that x + y is in W,
then x is in W and y is in W.
2. If W is a subspace of R^n and ax is in W, where a is a nonzero scalar and x is in
R^n, then x is in W.
3. If S = {x1, ..., xk} is a subset of R^n and k ≤ n, then S is a linearly independent set.
4. If S = {x1, ..., xk} is a subset of R^n and k > n, then S is a linearly dependent set.
5. If S = {x1, ..., xk} is a subset of R^n and k < n, then S is not a spanning set for R^n.
6. If S = {x1, ..., xk} is a subset of R^n and k ≥ n, then S is a spanning set for R^n.
7. If S1 and S2 are linearly independent subsets of R^n, then the set S1 ∪ S2 is also
linearly independent.
8. If W is a subspace of R^n, then W has exactly one basis.
23. x + 2y - 2z = 17
25. x = 4 - t, y = 5 + t, z = t
37., 39., 41. (vector sketches)
43. 4√6 square units
45. 3√11 square units
47. 24 cubic units
49. not coplanar
AN11  Answers to Selected Odd-Numbered Exercises
Exercises 3.2, p. 174
1. W is a subspace. W is the set of points on the line with equation x = 2y.
3. W is not a subspace.
5. W is the subspace consisting of the points on the y-axis.
7. W is not a subspace.
9. W is the subspace consisting of the points on the plane 2x - y - z = 0.
11. W is not a subspace.
13. W is not a subspace.
15. W is the subspace consisting of the points on the line with parametric equations
x = 2t, y = -t, z = t.
17. W is the subspace consisting of the points on the x-axis.
19. W is the set of points on the plane x + 2y - 3z = 0.
21. W is the set of points on the upper half of the sphere x^2 + y^2 + z^2 = 1.
23. W is the line formed by the two intersecting planes x + 2y + 2z = 0 and
x + 3y = 0. The line has parametric equations x = -6t, y = 2t, z = t.
25. W is the set of points on the plane x - z = 0.

Exercises 3.3, p. 186
1. Sp(S) = {x: x1 + x2 = 0}; Sp(S) is the line with equation x + y = 0.
3. Sp(S) = {θ}; Sp(S) is the point (0, 0).
5. Sp(S) = R^2
7. Sp(S) = {x: 3x1 + 2x2 = 0}; Sp(S) is the line with equation 3x + 2y = 0.
9. Sp(S) = R^2
11. Sp(S) = {x: x1 + x2 = 0}; Sp(S) is the line with equation x + y = 0.
13. Sp(S) = {x: x2 + x3 = 0 and x1 = 0}; Sp(S) is the line through (0, 0, 0) and
(0, -1, 1). The parametric equations for the line are x = 0, y = -t, z = t.
15. Sp(S) = {x: 2x1 - x2 + x3 = 0}; Sp(S) is the plane with equation 2x - y + z = 0.
17. Sp(S) = R^3
19. Sp(S) = {x: x2 + x3 = 0}; Sp(S) is the plane with equation y + z = 0.
21. The vectors u in b), c), and e) are in Sp(S); for b), u = x; for c), u = y;
for e), u = 3y - 4x.
23. d and e
25. x and y
27. N(A) = {x in R^2: -x1 - 2x2 = 0}; R(A) = {x in R^2: 2x1 + x2 = 0}
29. N(A) = {θ}; R(A) = R^2
31. N(A) = {x in R^3: x1 + 2x2 = 0 and x3 = 0}; R(A) = R^2
33. N(A) = {x in R^2: x2 = 0}; R(A) = {x in R^3: x2 = 2x1 and x3 = 3x1}
35. N(A) = {x in R^3: x1 = -7x3 and x2 = 2x3}; R(A) = {x in R^3: -4x1 + 2x2 + x3 = 0}
37. N(A) = {θ}; R(A) = R^2
39. a) The vectors b in ii), v), and vi) are in R(A).
b) For ii), x = [1, 0]^T is one choice; for v), x = [0, 1]^T is one choice; for vi),
x = [0, 0]^T is one choice.
c) For ii), b = A1; for v), b = A2; for vi), b = 0A1 + 0A2.
41. a) The vectors b in i), iii), v), and vi) are in R(A).
b) For i), x = [-1, 1, 0]^T is one choice; for iii), x = [-2, 3, 0]^T is one choice;
for v), x = [-2, 1, 0]^T is one choice; for vi), x = [0, 0, 0]^T is one choice.
c) For i), b = -A1 + A2; for iii), b = -2A1 + 3A2; for v), b = -2A1 + A2;
for vi), b = 0A1 + 0A2 + 0A3.
47. w1 = [-2, 1, 3]^T, w2 = [0, 3, 2]^T
49. w1 = [1, 2, 2]^T, w2 = [0, 3, 1]^T

Exercises 3.4, p. 200
1. {[1, 0, 1, 0]^T, [-1, 1, 0, 1]^T}
3. {[1, 1, 0, 0]^T, [-1, 0, 1, 0]^T, [3, 0, 0, 1]^T}
5. {[-1, 1, 0, 0]^T, [0, 0, 1, 0]^T, [0, 0, 0, 1]^T}
7. {[2, 1, -1, 0]^T, [-1, 0, 0, 1]^T}
9. a) x = 2[...] - [...]  b) [...]  c) x = -3[...] + [...]  d) x = 2[...] + [...]
11. a) B = [ 1  0  1   1
             0  1  1  -1
             0  0  0   0 ]
b) A basis for N(A) is {[-1, -1, 1, 0]^T, [-1, 1, 0, 1]^T}.
c) {A1, A2} is a basis for the column space of A; A3 = A1 + A2 and A4 = A1 - A2.
13. a) B = [ 1  0  -1   2
             0  1   1  -1
             0  0   0   0
             0  0   0   0 ]
b) A basis for N(A) is {[1, -1, 1, 0]^T, [-2, 1, 0, 1]^T}.
c) {A1, A2} is a basis for the column space of A; A3 = -A1 + A2 and A4 = 2A1 - A2.
d) {[1, 0, -1, 2], [0, 1, 1, -1]} is a basis for the row space of A.
15. a) B = [ 1  2  0
             0  0  1
             0  0  0 ]
b) A basis for N(A) is {[-2, 1, 0]^T}.
c) {A1, A3} is a basis for the column space of A; A2 = 2A1.
d) {[1, 2, 0], [0, 0, 1]} is a basis for the row space of A.
17. {[1, 3, 1]^T, [0, -1, -1]^T} is a basis for R(A).
19. {[1, 1, 2, 0]^T, [0, 1, -2, 1]^T} is a basis for R(A).
21. {[-2, 1]^T} is a basis for N(A); nullity(A) = 1; rank(A) = 1.
23. {[-5, -2, 1]^T} is a basis for N(A); nullity(A) = 1; rank(A) = 2.
25. {[1, -1, 1]^T, [0, 2, 3]^T} is a basis for R(A); rank(A) = 2; nullity(A) = 1.
27. a) {[1, 1, -2]^T, [0, -1, 1]^T, [0, 0, 1]^T} is a basis for W; dim(W) = 3.
b) {[1, 2, -1, 1]^T, [0, 1, -1, 1]^T, [0, 0, -1, 4]^T} is a basis for W; dim(W) = 3.
29. dim(W) = 2
33. a) rank(A) = 3 and nullity(A) = 0.
b) rank(A) = 3 and nullity(A) = 1.
c) rank(A) = 4 and nullity(A) = 0.

Exercises 3.5, p. 212
1. S does not span R^2.
3. S is linearly dependent.
5. S is linearly dependent and does not span R^2.
7. S does not span R^3.
9. S is linearly dependent.
11. S is a basis.
13. S is not a basis.
15. dim(W) = 3
19. For the null space: [...]
27. -2v1 - 3v2 + v3 = θ, so S is linearly dependent. Since v3 = 2v1 + 3v2, if
v = a1v1 + a2v2 + a3v3 is in Sp{v1, v2, v3}, then v = (a1 + 2a3)v1 + (a2 + 3a3)v2.
Therefore v is in Sp{v1, v2}.
29. The subsets are {v1, v2, v3}, {v1, v2, v4}, {v1, v3, v4}.
33. S is not a basis.
35. S is not a basis.

Exercises 3.6, p. 224
5. u1^T u3 = 0 requires a - b + c = 0; u2^T u3 = 0 requires 2a + 2b - 4c = 0;
therefore c = 0 and a + b = 0.
13. u1 = [...], u2 = [...], u3 = [...]
15. u1 = [...], u2 = [...], u3 = [...]
17. u1 = [...], u2 = [...], u3 = [...]
Exercises 3.7, p. 239
3. c) is not, but a), b), and d) are.
9. F is a linear transformation.
11. F is not a linear transformation.
13. F is a linear transformation.
15. F is a linear transformation.
17. F is not a linear transformation.
19. a) [...]  b) [...]
21. T([x1, x2]^T) = [x1 + x2, ...]^T
25. A = [...]; N(T) = {θ}; R(T) = R^2; rank(T) = 2; nullity(T) = 0
27. A = [3 2]; N(T) = {x in R^2: 3x1 + 2x2 = 0}; nullity(T) = 1
29. A = [...]; N(T) = {x in R^3: x1 = ...}; nullity(T) = 1

Exercises 3.9
1. w* = [...]
5. w* = [...]
7. w* = [...]
9. w* = [...]
11. w* = [...]
13. w* = [...]
15. w* = [...]
1. x* = [-5/13, 7/13]^T
3. x* = [(28/74) - 3x3, (27/74) + x3, x3]^T, x3 arbitrary

CHAPTER 4
3. λ = 1, x = [...], a ≠ 0; λ = 3, x = [...], a ≠ 0