
ENGINEERING MATH 1, Fall 2009

VECTOR SPACES
A vector space, more specifically, a real vector space (as opposed to a complex
one or some even stranger ones) is any set that is closed under an operation of
addition and under multiplication by real numbers. To be a bit more precise, if
a set V is to be a vector space, then
1. One has to be able to add any two elements of V to get another element
of V . Specifically, an operation + has to be defined so that if x, y ∈ V ,
one can form x + y and x + y ∈ V . This operation should have the usual
properties, namely it has to be associative and commutative, there should
be a zero element (denoted by 0) that has no effect when added to another
element (if x ∈ V , then x + 0 = x) and one should also be able to subtract
in V ; that is, if x ∈ V then x has an additive inverse -x which when
added to x results in 0. Subtraction is then defined by x - y = x + (-x's counterpart); that is, x - y = x + (-y).
2. If x ∈ V and c is a real number, it should make sense to multiply x by c
to get cx ∈ V . The usual properties should hold, specifically,
c(x + y) = cx + cy if c ∈ R, x, y ∈ V .
(c + d)x = cx + dx if c, d ∈ R, x ∈ V .
c(dx) = (cd)x = d(cx) if c, d ∈ R, x ∈ V .
1x = x if x ∈ V .
The last property may seem a bit strange, but one possibly could define
some weird product in which 1 times x is not x, and one wants to be sure
nobody goes around saying he or she has a vector space in that case.
The elements of V are then called vectors, the real numbers are then called
scalars.
For us the main examples are:
1. For every pair of positive integers m, n, the set of m × n matrices, with
the usual operations, is a vector space. Of particular interest are the cases
m = 1; we then have row matrices or vectors. This space can be
identified with Rn , n-tuples of real numbers.
n = 1; we then have column matrices or vectors. This space can be
identified with Rm , m-tuples of real numbers.
The 0 element of this space is the 0 matrix. Actually, the most important
case is the set Rn of all n-tuples of real numbers, with the usual operations.
The 0 element is the vector with all components equal to 0.
2. If I is an interval in R; that is, I is one of the following sets
I = (a, b) for some a, b, a < b (a = -∞ or b = ∞ being allowed), or
I = [a, b) for some a, b, -∞ < a < b (b = ∞ being allowed), or
I = (a, b] for some a, b, a < b < ∞ (a = -∞ a possibility), or
I = [a, b] for some a, b, -∞ < a < b < ∞,
then the set V of all real valued functions on I is a vector space. If
f (x), g(x) are defined for x ∈ I and if c ∈ R, we define (f + g)(x), (cf )(x) in
the usual and obvious way:

(f + g)(x) = f (x) + g(x),    (cf )(x) = cf (x),    (x ∈ I).

The 0 element of this vector space is the identically 0 function; the function
f defined by f (x) = 0 for all x ∈ I.
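These pointwise operations can be modeled directly in code; here is a minimal sketch in Python (the helper names add, scale, zero are illustrative, not from the notes):

```python
# Pointwise addition and scalar multiplication of real-valued functions:
# (f + g)(x) = f(x) + g(x)  and  (c f)(x) = c f(x).

def add(f, g):
    """Return the pointwise sum f + g."""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """Return the pointwise scalar multiple c*f."""
    return lambda x: c * f(x)

def zero(x):
    """The 0 element: the identically zero function."""
    return 0.0

f = lambda x: x * x   # f(x) = x^2
g = lambda x: 2 * x   # g(x) = 2x

h = add(f, scale(3.0, g))   # h(x) = x^2 + 6x
print(h(2.0))               # 4 + 12 = 16.0
```

Adding zero to any function leaves it unchanged pointwise, mirroring the role of the 0 element above.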

Linear Combinations. In every vector space one can combine vectors with
scalars to get linear combinations. The precise definition is:
Suppose v1 , v2 , . . . , vn are vectors. A linear combination of these vectors is
any vector of the form v = c1 v1 + c2 v2 + ... + cn vn , where c1 , c2 , . . . , cn are
scalars. The scalars c1 , . . . , cn are the coefficients of the linear combination.
Examples.
1. In R3 consider the vectors v1 = (1, 1, 0), v2 = (2, 3, 0) and v3 = (0, 5, 0).
Here are a few linear combinations of these vectors:

2v1 - 7v2 + 4v3 = (2, 2, 0) + (-14, -21, 0) + (0, 20, 0) = (-12, 1, 0),
3v1 + 2v2 - v3 = (3, 3, 0) + (4, 6, 0) + (0, -5, 0) = (7, 4, 0),
-v1 + v2 + 0v3 = (-1, -1, 0) + (2, 3, 0) = (1, 2, 0), (if in a combination a vector
        appears with coefficient 0, we just omit it; for
        example, write -v1 + v2 instead of -v1 + v2 + 0v3 ),
2v2 = 0v1 + 2v2 + 0v3 = (4, 6, 0),
-10v1 + 5v2 = (-10, -10, 0) + (10, 15, 0) = (0, 5, 0) = v3 ,
v2 = 0v1 + 1v2 + 0v3 .
We can notice a few facts of general interest. The last computation shows
v2 is itself a linear combination of v1 , v2 , v3 . This is of course generally
true; every vector of a set of vectors is a linear combination of this set;
just take all coefficients equal to 0, except the one corresponding to the
vector in question, which you take equal to 1.
More interestingly, the fifth example shows that v3 is a linear combination
of the other vectors of the bunch; that is, a combination of v1 and v2 . This
means we don't need it for the purpose of forming linear combinations; in
every linear combination we can just replace v3 by -10v1 + 5v2 and collect
terms to get a linear combination of v1 , v2 alone. For example, in the first
computation we get (-12, 1, 0) as a linear combination of just v1 , v2 by

(-12, 1, 0) = 2v1 - 7v2 + 4v3 = 2v1 - 7v2 + 4(-10v1 + 5v2 )
            = 2v1 - 7v2 - 40v1 + 20v2 = -38v1 + 13v2 .
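This elimination of v3 is easy to check numerically; a quick sketch using plain tuples (the helper comb is illustrative, not from the notes):

```python
# Verify that 2v1 - 7v2 + 4v3 equals -38v1 + 13v2, after using
# v3 = -10v1 + 5v2 to eliminate v3 from the combination.

def comb(coeffs, vecs):
    """Linear combination sum(c * v) of 3-tuples."""
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vecs)) for i in range(3))

v1, v2, v3 = (1, 1, 0), (2, 3, 0), (0, 5, 0)

assert comb([-10, 5], [v1, v2]) == v3     # v3 depends on v1, v2
lhs = comb([2, -7, 4], [v1, v2, v3])      # original combination
rhs = comb([-38, 13], [v1, v2])           # combination with v3 eliminated
print(lhs, rhs)
assert lhs == rhs == (-12, 1, 0)
```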

Here is an important definition: A set v1 , . . . , vn of vectors is linearly
independent if no vector from the set is a linear combination of the other
vectors from the set.
We just saw that v1 , v2 , v3 as given was not linearly independent. Is the
set consisting of v1 , v2 linearly independent? This is the same as asking:
is one of v1 , v2 a linear combination of the other one; that is, is v1 = cv2
for some constant c, or is v2 = cv1 for some constant c? It is easy to see that
the answer is NO. If v1 = cv2 , then c must satisfy (1, 1, 0) = (2c, 3c, 0),
thus 2c = 1 and 3c = 1, only possible if 2 = 3, and there is good evidence
to conclude that 2 ≠ 3. Similarly one sees that v2 = cv1 is impossible.
Usually, when one vector of a bunch is a linear combination of others in
the bunch, there are other vectors in the bunch with the same property.
For example, in our case, we can ask ourselves whether v2 is a linear
combination of v1 , v3 . This is the same as asking whether there are scalars
a, b such that v2 = av1 + bv3 or
(2, 3, 0) = (a, a, 0) + (0, 5b, 0) = (a, a + 5b, 0).
This leads to the system of equations a = 2, a + 5b = 3, with the immediate
solution a = 2, b = 1/5. Thus v2 = 2v1 + (1/5)v3 . We can then write the
vector of the first computation above without using v2 :

(-12, 1, 0) = 2v1 - 7v2 + 4v3 = 2v1 - 7(2v1 + (1/5)v3 ) + 4v3 = -12v1 + (13/5)v3 .

Exercise. Show that v1 is a linear combination of v2 , v3 and write
(-12, 1, 0) as a linear combination of v2 , v3 .
We'll do one more thing with these vectors: describe the set of all their
linear combinations. We don't need all three; since v3 depends on v1 , v2 , it
suffices to describe all the linear combinations of v1 , v2 . This will consist
of all vectors (a, b, c) such that one can find coefficients c1 , c2 such that
c1 v1 + c2 v2 = (a, b, c). If we equate components, we are asking for all
vectors (a, b, c) such that

c1 + 2c2 = a
c1 + 3c2 = b
0c1 + 0c2 = c.
Obviously, c = 0. The other two equations are easy to solve. By Gauss,
Gauss-Jordan, or otherwise, c1 = 3a - 2b, c2 = b - a. In other words, no
matter what a, b are, every vector of the form (a, b, 0) is a linear
combination of v1 , v2 , hence also of v1 , v2 , v3 . For later reference we
remark that the set of vectors that are combinations of v1 , v2 , v3 forms a
set closed under addition and scalar multiplication containing the 0 vector
of the space.
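The coefficient formulas c1 = 3a - 2b, c2 = b - a can be spot-checked numerically; a short sketch in plain Python:

```python
# For any (a, b, 0), the coefficients c1 = 3a - 2b and c2 = b - a
# should reproduce (a, b, 0) as c1*v1 + c2*v2.

v1, v2 = (1, 1, 0), (2, 3, 0)

for a, b in [(1, 0), (0, 1), (5, -7), (2.5, 4.0)]:
    c1, c2 = 3*a - 2*b, b - a
    v = tuple(c1*x + c2*y for x, y in zip(v1, v2))
    assert v == (a, b, 0)   # the span of v1, v2 is the plane c = 0
print("all checks passed")
```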
2. Here is a similar example in R4 . Consider the vectors v1 = (1, 2, -1, 3), v2 =
(1, 0, 2, 4) and v3 = (1, 1, -1, -1).
We want to answer two questions:
(a) Are they linearly independent?
(b) What is the set of all linear combinations of these vectors; i.e., when
is a vector (a, b, c, d) of R4 a linear combination of these vectors?
It would be very tedious to answer the first one by checking first whether
v1 depends on v2 , v3 , then if v2 depends on v1 , v3 , finally if v3 depends
on v1 , v2 . There is a better way to check for independence of vectors
v1 , . . . , vn in a vector space. One considers the equation

(1)    c1 v1 + ... + cn vn = 0    (the 0 element of the space).

If one can show there is a solution with at least one ci ≠ 0, then one has
one's answer. Suppose, just as an example, that there is a solution of (1)
with c2 ≠ 0. Then we can solve for
v2 = -(c1 /c2 )v1 - (c3 /c2 )v3 - ... - (cn /c2 )vn ,
and the system is not linearly independent. But if no such solution exists,
then the system is linearly independent.
Let us try it with our vectors. The vector equation c1 v1 + c2 v2 + c3 v3 = 0
results in a system of 4 linear equations in the unknowns c1 , c2 , c3 :

c1 + c2 + c3 = 0
2c1 + c3 = 0
-c1 + 2c2 - c3 = 0
3c1 + 4c2 - c3 = 0.

To solve it we row reduce the augmented matrix of the system:

[  1  1  1 | 0 ]
[  2  0  1 | 0 ]
[ -1  2 -1 | 0 ]
[  3  4 -1 | 0 ]

The canonical row reduced form is

[ 1 0 0 | 0 ]
[ 0 1 0 | 0 ]
[ 0 0 1 | 0 ]
[ 0 0 0 | 0 ]

showing that the only solution is c1 = 0, c2 = 0, c3 = 0. The system is
linearly independent.
Exercise. Verify that the stated row reduced form is the right one.
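One way to do this check mechanically, a sketch assuming the sympy library is available (the matrix entries are the system's coefficients from above):

```python
# Row reduce the coefficient matrix of the homogeneous system with sympy;
# a pivot in every column means the only solution is the trivial one.
from sympy import Matrix

A = Matrix([[1, 1, 1],
            [2, 0, 1],
            [-1, 2, -1],
            [3, 4, -1]])   # columns are v1, v2, v3

R, pivots = A.rref()
print(R)                     # identity rows on top, a zero row below
assert pivots == (0, 1, 2)   # a pivot in every column
assert A.nullspace() == []   # only the trivial solution: independence
```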
To answer the second question, we need to determine all vectors (a, b, c, d)
for which the system

c1 + c2 + c3 = a
2c1 + c3 = b
-c1 + 2c2 - c3 = c
3c1 + 4c2 - c3 = d

has a solution. The augmented matrix is now

[  1  1  1 | a ]
[  2  0  1 | b ]
[ -1  2 -1 | c ]
[  3  4 -1 | d ]
and the exact same row operations as before take it to the canonical form:

[ 1 0 0 | (-2a + 3b + c)/3 ]
[ 0 1 0 | (a + c)/3        ]
[ 0 0 1 | (4a - 3b - 2c)/3 ]
[ 0 0 0 | 2a - 4b - 3c + d ]

The conclusion is that a vector v = (a, b, c, d) is a linear combination of
the given three vectors v1 , v2 , v3 if and only if 2a - 4b - 3c + d = 0; that
is, d = -2a + 4b + 3c, in which case v = c1 v1 + c2 v2 + c3 v3 with

c1 = (-2a + 3b + c)/3,    c2 = (a + c)/3,    c3 = (4a - 3b - 2c)/3.

For example, consider the vector (4, -3, 2, -14). Here a = 4, b = -3,
c = 2, d = -14. We see that the equation d = -2a + 4b + 3c holds, so this
vector is a linear combination of the given v1 , v2 , v3 . Taking

c1 = (-2a + 3b + c)/3 = -5,    c2 = (a + c)/3 = 2,    c3 = (4a - 3b - 2c)/3 = 7,

we verify that indeed

(4, -3, 2, -14) = -5(1, 2, -1, 3) + 2(1, 0, 2, 4) + 7(1, 1, -1, -1).
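This verification can also be carried out in a few lines of plain Python:

```python
# Check the membership test d = -2a + 4b + 3c and the coefficient
# formulas on the example vector.

v1 = (1, 2, -1, 3)
v2 = (1, 0, 2, 4)
v3 = (1, 1, -1, -1)
a, b, c, d = 4, -3, 2, -14

assert d == -2*a + 4*b + 3*c      # the vector lies in the span
c1 = (-2*a + 3*b + c) / 3         # = -5
c2 = (a + c) / 3                  # = 2
c3 = (4*a - 3*b - 2*c) / 3        # = 7
v = tuple(c1*x + c2*y + c3*z for x, y, z in zip(v1, v2, v3))
print(c1, c2, c3, v)
assert v == (a, b, c, d)
```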
Here is something we learned while going through this example:
A set v1 , . . . , vn of vectors is linearly independent if and only if the equation
c1 v1 + ... + cn vn = 0
(the 0 here being the 0 element of the vector space) has the only solution
c1 = c2 = ... = cn = 0.
(This is frequently given as a definition in most textbooks.)
3. Let V be the set of all functions defined on (-∞, ∞). We consider the
functions as vectors, the 0 vector being the identically zero function.
Consider f1 , f2 , f3 , f4 where

f1 (x) = x,    f2 (x) = sin x,    f3 (x) = cos x,    f4 (x) = sin(x + π/6).
Are these functions linearly independent? If we remember well our
trigonometry it is easy to see that the answer is no, because

sin(x + π/6) = sin x cos(π/6) + cos x sin(π/6) = (√3/2) sin x + (1/2) cos x;
that is, f4 = (√3/2)f2 + (1/2)f3 . What if we remove f4 ; are f1 , f2 , f3
linearly independent? We may later on see an easier way of deciding this;
for now we do a brute force attack. We suppose that c1 f1 + c2 f2 + c3 f3 = 0
and try to either solve this equation for some triple c1 , c2 , c3 not all zero, or
show that the only solution is c1 = c2 = c3 = 0. Because the 0 element is
the identically zero function, the equation c1 f1 + c2 f2 + c3 f3 = 0 is
equivalent to

c1 x + c2 sin x + c3 cos x = 0 for all values of x.

Let's give values to x and see what happens. For example, if x = 0 we should
have c1 · 0 + c2 sin 0 + c3 cos 0 = 0; in other words, c3 = 0. With c3 = 0, we
look for c1 , c2 so that c1 x + c2 sin x = 0 for all x. Take x = π; since
sin π = 0 we get c1 π = 0, thus c1 = 0. Finally, if c2 sin x = 0 for all x, we
must have c2 = 0.
The answer to the last question is yes: f1 , f2 , f3 are linearly independent.
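The evaluation trick above can be phrased as a rank computation, sketched here with numpy (the choice of sample points is illustrative): if a linear combination vanished identically it would vanish at the sample points, so a full-rank sample matrix certifies independence.

```python
# Sample f1(x)=x, f2(x)=sin x, f3(x)=cos x at a few points and check
# that the matrix of sampled values has full column rank.
import math
import numpy as np

xs = [0.0, math.pi / 2, math.pi]   # roughly the points used in the text
M = np.array([[x, math.sin(x), math.cos(x)] for x in xs])

rank = np.linalg.matrix_rank(M)
print(rank)                        # 3
assert rank == 3                   # f1, f2, f3 are linearly independent
```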
Subspaces. If V is a vector space, a subspace is any subset that contains the
0 element and is closed under addition and scalar multiplication. A subspace of
a vector space is a vector space in its own right, with the operations of V .
Examples. In each case, verify it is, or isn't, a subspace of the indicated
vector space.
1. V = R3 and W the set of all vectors of the form (a, b, 0) (triples of real
numbers with third component 0). W is a subspace of V . Why? Because
(a) The 0 vector of V is (0, 0, 0), of the form (a, b, 0) with a = b = 0.
(b) If v, w are vectors in W , then (say) v = (a, b, 0), w = (c, d, 0), so that
v + w = (a + c, b + d, 0) is again a vector with third component 0,
hence in W .
(c) If v = (a, b, 0) is in W and c is a scalar, then cv = (ac, bc, 0) W .
2. Let V = R2 and consider the set W consisting of all vectors (a, b) with
a = b. It is a subspace.
3. Let V = R2 and consider the set W consisting of all vectors (a, b) with
a2 = b2 . It is NOT a subspace. Why? Because, for example, the vectors
(1, 1) and (1, -1) are in W , but (1, 1) + (1, -1) = (2, 0) is not in W .
4. Here is what could be our main example: Let V be any vector space and
let v1 , . . . , vn be vectors in V . The set W of ALL linear combinations of
the vectors v1 , . . . , vn is a subspace of V . In fact,
(a) 0 = 0v1 + ... + 0vn , so 0 is a linear combination of the vectors and
hence in W .
(Note: In the equation 0 = 0v1 + ... + 0vn , the zero on the left hand
side is the zero element of the vector space; the zeros on the right
hand side are the real number 0. One usually knows, or should know,
from the context which is which.)
(b) Suppose v, w are in W . Then we can write v = c1 v1 + ... + cn vn ,
w = d1 v1 + ... + dn vn for scalars (real numbers) c1 , . . . , cn , d1 , . . . , dn .
Then
v + w = (c1 + d1 )v1 + ... + (cn + dn )vn
also is a linear combination of v1 , . . . , vn , hence in W .
(c) Suppose v is in W and c is a scalar. We can write v = c1 v1 + ... + cn vn
for scalars (real numbers) c1 , . . . , cn ; then
cv = (cc1 )v1 + ... + (ccn )vn
also is a linear combination of v1 , . . . , vn , hence in W .

5. Two silly examples: If V is a vector space, then it itself is a subspace of
itself. Also, the set consisting only of 0, the 0 element of V by itself, is a
subspace of V .
6. Suppose that I = (a, b) is an open interval of real numbers and let
V be the set of ALL real valued functions of domain I,
W the set of all continuous functions on I,
X the set of all differentiable functions on I.
It should be clear that W is a subset of V , and X of W . It is also easy
to see that W is a subspace of the vector space V ; and X is a subspace of
the vector space W . It is sort of evident that a subspace X of a subspace
W of V is also a subspace of V .
7. An important example: Let A = [aij ] be an m × n matrix. Consider the
homogeneous system of equations, Ax = 0; that is, the system written
in less abbreviated form as

a11 x1 + a12 x2 + ... + a1n xn = 0
a21 x1 + a22 x2 + ... + a2n xn = 0
. . .
am1 x1 + am2 x2 + ... + amn xn = 0.
We will identify here an n-tuple (b1 , . . . , bn ) with the n × 1 column
vector whose entries are b1 , . . . , bn .
In this way we can talk of a solution of the system as being an n-tuple
(x1 , . . . , xn ) satisfying the equations, or a column vector x such that
Ax = 0. For example, for the system

2x1 - 3x2 + x3 = 0
x1 + 2x2 - 3x3 = 0,

the vector (1, 1, 1) is a solution. We identify this vector with the 3 × 1
column matrix with all entries equal to 1; it is a solution because if we
replace x1 by 1, x2 by 1 and x3 by 1 in the equations, the equations are
satisfied. Or we can say that it is a solution because

[ 2 -3  1 ] [ 1 ]   [ 0 ]
[ 1  2 -3 ] [ 1 ] = [ 0 ]    (the 2 × 1 zero matrix).
            [ 1 ]
As n-tuples, the solutions of the homogeneous system of m linear equations
in n unknowns are n-tuples, thus the set of ALL solutions is a subset of
Rn . Here is the important fact: The set of all solutions of a homogeneous
system of linear equations is a subspace of Rn , n=number of unknowns.
In fact, given such a system, the 0 vector in Rn , that is, the vector
(0, . . . , 0) (n zeroes),
is ALWAYS a solution. If VECTORS (column vectors) x, y are solutions,
then Ax = 0, Ay = 0 so that A(x + y) = Ax + Ay = 0 + 0 = 0, and
x + y is also a solution. Finally, if x is a solution and c is a scalar,
then A(cx) = cAx = c0 = 0, so cx is also a solution. This shows that
the solution space of a system of m homogeneous linear equations in n
unknowns is a subspace of Rn .
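The closure argument just given can be checked on the 2 × 3 example with numpy:

```python
# The solution set of A x = 0 is closed under addition and scalar
# multiplication; here (1, 1, 1) solves the example system above.
import numpy as np

A = np.array([[2, -3, 1],
              [1, 2, -3]])
x = np.array([1, 1, 1])

assert np.array_equal(A @ x, np.zeros(2))          # x is a solution
y = 5 * x                                          # a scalar multiple
assert np.array_equal(A @ y, np.zeros(2))          # still a solution
assert np.array_equal(A @ (x + y), np.zeros(2))    # so is the sum
```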

Bases and Dimension. Given a vector space we can ask for a minimal set of
vectors that span the space. We say the space is spanned by vectors v1 , . . . , vn
if and only if every vector of the space is a linear combination of v1 , . . . , vn .
Examples.
1. The vectors (1, 1, 2), (1, 2, 3), (1, 0, 4), (2, 2, 2) span R3 . How do we verify
that? To say these vectors span is equivalent to saying that given any
vector of R3 ; in other words any triple of numbers (a, b, c), we can find
coefficients c1 , c2 , c3 , c4 such that the equation
(a, b, c) = c1 (1, 1, 2) + c2 (1, 2, 3) + c3 (1, 0, 4) + c4 (2, 2, 2)
is solvable. In terms of components, this means that the system of equations
c1 + c2 + c3 + 2c4 = a
c1 + 2c2 + 2c4 = b
2c1 + 3c2 + 4c3 + 2c4 = c

can be solved for all choices of a, b, c. To see whether this is or isn't so,
we consider the augmented matrix and row reduce:

[ 1 1 1 2 | a ]        [ 1 1  1  2 | a      ]
[ 1 2 0 2 | b ]   ->   [ 0 1 -1  0 | b - a  ]
[ 2 3 4 2 | c ]        [ 0 1  2 -2 | c - 2a ]

       [ 1 0  2  2 | 2a - b    ]        [ 1 0 0 10/3 | (8a - b - 2c)/3  ]
  ->   [ 0 1 -1  0 | b - a     ]   ->   [ 0 1 0 -2/3 | (-4a + 2b + c)/3 ]
       [ 0 0  3 -2 | c - a - b ]        [ 0 0 1 -2/3 | (-a - b + c)/3   ]

And we are done. This shows that we can select c4 at will; any value will
do. And then

c1 = -(10/3)c4 + (8a - b - 2c)/3,
c2 = (2/3)c4 + (-4a + 2b + c)/3,
c3 = (2/3)c4 + (-a - b + c)/3.
We get several conclusions from these computations. The first one is that
the vectors (1, 1, 2), (1, 2, 3), (1, 0, 4), (2, 2, 2) span R3 . But they are not a
minimal set. Since c4 can be chosen any way we wish, one choice is c4 = 0,
and that means that the fourth vector is superfluous.
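The formulas with the free parameter c4 can be spot-checked numerically; a short sketch in plain Python:

```python
# For any target (a, b, c) and any choice of c4, the coefficients below
# should reproduce (a, b, c) as c1*v1 + c2*v2 + c3*v3 + c4*v4.
v1, v2, v3, v4 = (1, 1, 2), (1, 2, 3), (1, 0, 4), (2, 2, 2)

ok = True
for (a, b, c), c4 in [((1, 0, 0), 0.0), ((3, -2, 7), 1.5), ((0, 1, 0), -4.0)]:
    c1 = -10/3 * c4 + (8*a - b - 2*c) / 3
    c2 = 2/3 * c4 + (-4*a + 2*b + c) / 3
    c3 = 2/3 * c4 + (-a - b + c) / 3
    v = tuple(c1*p + c2*q + c3*r + c4*s
              for p, q, r, s in zip(v1, v2, v3, v4))
    ok = ok and all(abs(vi - wi) < 1e-12 for vi, wi in zip(v, (a, b, c)))
print(ok)
assert ok
```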
2. In the previous example we saw a set of vectors spanning R3 . It is not the
most obvious set. For any n the vectors
e1 = (1, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en = (0, . . . , 0, 1)
span Rn . The vector ej for any j in the range 1 to n is the n-tuple having a
1 in the j-th place, all other entries equal to 0. It is clear (is it?) that if
x = (x1 , . . . , xn ) ∈ Rn , then
x = x1 e1 + x2 e2 + ... + xn en
is a linear combination of these vectors.

3. Consider the set of all linear combinations of the functions e^x , e^(-x) ;
this is a vector space, and a subspace of the vector space of all real valued
functions defined in the interval (-∞, ∞). Show that cosh x, sinh x are in
the span of these functions; in fact, show that the spans of e^x , e^(-x) and
of cosh x, sinh x are identical.
Solution. By definition

cosh x = (1/2)e^x + (1/2)e^(-x) ,    sinh x = (1/2)e^x - (1/2)e^(-x) ,
so cosh x, sinh x are linear combinations of e^x , e^(-x) and every linear
combination of cosh x, sinh x is also one of e^x , e^(-x) . In fact, we will have

c1 cosh x + c2 sinh x = c1 ((1/2)e^x + (1/2)e^(-x)) + c2 ((1/2)e^x - (1/2)e^(-x))
                      = ((c1 + c2)/2) e^x + ((c1 - c2)/2) e^(-x) .

The span of cosh x, sinh x is thus contained in that of e^x , e^(-x) . But,
conversely,

e^x = cosh x + sinh x,    e^(-x) = cosh x - sinh x,

so that e^x , e^(-x) are linear combinations of cosh x, sinh x and the span of
e^x , e^(-x) is included in that of cosh x, sinh x; in other words, the two
spans are equal.
Here are some important facts and definitions.
Definition. A set of vectors v1 , . . . , vn of a vector space V is a basis of the
vector space if (and only if) it is a minimal spanning set. That is:
1. Every vector of V is a linear combination of the vectors v1 , . . . , vn .
2. No proper subset of the set v1 , . . . , vn spans all of V ; in particular, if we
remove the vector v1 , then v1 cannot be obtained as a linear combination
of v2 , . . . , vn ; same for v2 , etc. In other words, no vector from the set can
be a linear combination of the remaining vectors.
An equivalent definition, the one usually found in textbooks is: A set of
vectors v1 , . . . , vn is a basis of the vector space V if (and only if) the set is
linearly independent and spans V .
A property of a basis, which can also be used as a definition is: A set
v1 , . . . , vn is a basis of V if and only if every vector of V can be written uniquely
as a linear combination of the vectors v1 , . . . , vn .
For example, in our first example above, with v1 = (1, 1, 2), v2 = (1, 2, 3),
v3 = (1, 0, 4), v4 = (2, 2, 2), the set v1 , v2 , v3 , v4 is not a basis of R3
because we have the freedom of choosing c4 any way we wish. However, once we
put c4 = 0 (i.e., exclude v4 ) all freedom of choice is gone; the vector
(a, b, c) can be written in the form c1 v1 + c2 v2 + c3 v3 in only one way;
the only choice of c1 , c2 , c3 that works is given by what we found above,
namely (to repeat):

c1 = (8a - b - 2c)/3,    c2 = (-4a + 2b + c)/3,    c3 = (-a - b + c)/3.

For example, (1, 3, 5) can be written as a linear combination of v1 , v2 , v3
in the form

(1, 3, 5) = -(5/3)v1 + (7/3)v2 + (1/3)v3 .
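Since v1 , v2 , v3 form a basis, the coordinates of any vector can also be found by solving one 3 × 3 system; a sketch with numpy:

```python
# Coordinates of (1, 3, 5) with respect to the basis v1, v2, v3,
# obtained by solving B c = (1, 3, 5) where the columns of B are the vi.
import numpy as np

B = np.array([[1, 1, 1],
              [1, 2, 0],
              [2, 3, 4]], dtype=float)   # columns are v1, v2, v3
coords = np.linalg.solve(B, np.array([1.0, 3.0, 5.0]))
print(coords)                            # approximately [-5/3, 7/3, 1/3]
assert np.allclose(coords, [-5/3, 7/3, 1/3])
```

Uniqueness of the coordinates corresponds to B being invertible, which is exactly the basis property.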

Exercise. Write the vectors (1, 0, 0), (0, 1, 0) and (0, 0, 1) as linear
combinations of v1 , v2 , v3 .
Definition. A vector space is finite dimensional if it has a finite basis.
We come to a very important result:
Suppose V is a finite dimensional vector space. Then any two bases have the
same number of elements. That number is called the dimension of the vector
space.
Examples and more.
1. We saw that (1, 1, 2), (1, 2, 3), (1, 0, 4) is a basis of R3 . But the obvious
basis is (1, 0, 0), (0, 1, 0), (0, 0, 1). In general, for every n the set of n vectors
e1 = (1, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en = (0, . . . , 0, 1)
is easily seen to be linearly independent and (as mentioned above) to span
Rn , showing Rn has dimension n.

To be continued in class. Complete notes may eventually appear.
