VECTOR SPACES
A vector space, more specifically, a real vector space (as opposed to a complex
one or some even stranger ones) is any set that is closed under an operation of
addition and under multiplication by real numbers. To be a bit more precise, if
a set V is to be a vector space, then
1. One has to be able to add any two elements of V to get another element
of V . Specifically, an operation + has to be defined so that if x, y ∈ V ,
one can form x + y and x + y ∈ V . This operation should have the usual
properties, namely it has to be associative and commutative, there should
be a zero element (denoted by 0) that has no effect when added to another
element (if x ∈ V , then x + 0 = x) and one should also be able to subtract
in V ; that is, if x ∈ V then x has an additive inverse −x which when
added to x results in 0. Subtraction is then defined by x − y = x + (−y).
2. If x ∈ V and c is a real number, it should make sense to multiply x by c
to get cx ∈ V . The usual properties should hold, specifically,
c(x + y) = cx + cy if c ∈ R, x, y ∈ V .
(c + d)x = cx + dx if c, d ∈ R, x ∈ V .
c(dx) = (cd)x = d(cx) if c, d ∈ R, x ∈ V .
1x = x if x ∈ V .
The last property may seem a bit strange, but one possibly could define
some weird product in which 1 times x is not x, and one wants to be sure
nobody goes around saying he or she has a vector space in that case.
The elements of V are then called vectors; the real numbers are called
scalars.
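As a quick sanity check, the axioms above can be verified mechanically for 3-tuples of real numbers with componentwise operations. This is only an illustrative sketch; the helper names add and scale are not from the text.

```python
# Componentwise operations on tuples, modeling vectors in R^3.
def add(x, y):
    """The sum x + y, taken componentwise."""
    return tuple(a + b for a, b in zip(x, y))

def scale(c, x):
    """The scalar multiple cx."""
    return tuple(c * a for a in x)

x, y = (1.0, -2.0, 3.0), (0.5, 4.0, -1.0)
zero = (0.0, 0.0, 0.0)

assert add(x, y) == add(y, x)                                # commutativity
assert add(x, zero) == x                                     # zero element
assert add(x, scale(-1, x)) == zero                          # additive inverse
assert scale(2, add(x, y)) == add(scale(2, x), scale(2, y))  # c(x+y) = cx+cy
assert scale(1, x) == x                                      # 1x = x
```

Of course, a few numerical checks do not prove the axioms; they hold here because the real numbers themselves satisfy them componentwise.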
For us the main examples are:
1. For every pair of positive integers m, n, the set of m × n matrices, with
the usual operations, is a vector space. Of particular interest are the cases
m = 1: we then have row matrices or row vectors. This space can be
identified with Rn , the n-tuples of real numbers.
n = 1: we then have column matrices or column vectors. This space can
be identified with Rm , the m-tuples of real numbers.
The 0 element of this space is the 0 matrix. Actually, the most important
case is the set Rn of all n-tuples of real numbers, with the usual operations.
The 0 element is the vector with all components equal to 0.
2. If I is an interval in R; that is, I is one of the following sets
I = (a, b) for some a, b, a < b (a = −∞ or b = ∞ being allowed), or
I = [a, b) for some a, b, −∞ < a < b (b = ∞ being allowed), or
I = (a, b] for some a, b, a < b < ∞ (a = −∞ a possibility), or
I = [a, b] for some a, b, −∞ < a < b < ∞,
then the set V of all real valued functions on I is a vector space. If
f (x), g(x) are defined for x ∈ I, and if c ∈ R, we define (f + g)(x), (cf )(x) in
the usual and obvious way:
(f + g)(x) = f (x) + g(x),    (cf )(x) = c f (x)    (x ∈ I).
The 0 element of this vector space is the identically 0 function; the function
f defined by f (x) = 0 for all x ∈ I.
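The pointwise operations just defined can be modeled directly with functions as values. A minimal sketch; f_plus_g, c_times_f and zero_fn are illustrative names, not from the text.

```python
import math

def f_plus_g(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def c_times_f(c, f):
    """Pointwise scalar multiple: (cf)(x) = c f(x)."""
    return lambda x: c * f(x)

zero_fn = lambda x: 0.0   # the 0 element: identically zero on I

f = math.sin
g = lambda x: x ** 2
h = f_plus_g(f, g)

assert h(2.0) == math.sin(2.0) + 4.0
assert c_times_f(3.0, f)(1.0) == 3.0 * math.sin(1.0)
assert f_plus_g(f, zero_fn)(0.7) == f(0.7)   # adding 0 has no effect
```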
Linear Combinations. In every vector space one can combine vectors with
scalars to get linear combinations. The precise definition is:
Suppose v1 , v2 , . . . , vn are vectors. A linear combination of these vectors is
any vector of the form v = c1 v1 + c2 v2 + · · · + cn vn , where c1 , c2 , . . . , cn are
scalars. The scalars c1 , . . . , cn are the coefficients of the linear combination.
Examples.
1. In R3 consider the vectors v1 = (1, 1, 0), v2 = (2, 3, 0) and v3 = (0, 5, 0).
Here are a few linear combinations of these vectors:
2v1 − 7v2 + 4v3 = (−12, 1, 0)
3v1 + 2v2 − v3 = (7, 4, 0)
v1 + v2 + 0v3 = (3, 4, 0)
0v1 + 2v2 + 0v3 = 2v2 = (4, 6, 0)
−10v1 + 5v2 = (0, 5, 0) = v3
0v1 + 1v2 + 0v3 = v2
We can notice a few facts of general interest. The last computation shows
v2 is itself a linear combination of v1 , v2 , v3 . This is of course generally
true; every vector of a set of vectors is a linear combination of that set:
just take all coefficients equal to 0, except the one corresponding to the
vector in question, which you take equal to 1.
More interestingly, the fifth example shows that v3 is a linear combination
of the other vectors of the bunch; that is, a combination of v1 and v2 . This
means we don't need it for the purpose of forming linear combinations; in
every linear combination we can just replace v3 by −10v1 + 5v2 and collect
terms, to get a linear combination of v1 , v2 alone. For example, in the first
computation we get (−12, 1, 0) as a linear combination of just v1 , v2 by
(−12, 1, 0) = 2v1 − 7v2 + 4v3 = 2v1 − 7v2 + 4(−10v1 + 5v2 ) = −38v1 + 13v2 .
Exercise. Show that v1 is a linear combination of v2 , v3 and write
(−12, 1, 0) as a linear combination of v2 , v3 .
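These combination computations are easy to check mechanically. A small sketch, using the relation v3 = −10v1 + 5v2 stated in the text; the helper name comb is illustrative.

```python
def comb(coeffs, vectors):
    """Linear combination sum_i c_i v_i of 3-tuples."""
    return tuple(sum(c * v[k] for c, v in zip(coeffs, vectors))
                 for k in range(3))

v1, v2, v3 = (1, 1, 0), (2, 3, 0), (0, 5, 0)

# v3 is a combination of v1, v2 alone:
assert comb((-10, 5), (v1, v2)) == v3
# The first computation, 2v1 - 7v2 + 4v3:
assert comb((2, -7, 4), (v1, v2, v3)) == (-12, 1, 0)
# Eliminating v3 yields the same vector from v1, v2 alone:
assert comb((-38, 13), (v1, v2)) == (-12, 1, 0)
```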
We'll do one more thing with these vectors: describe the set of all their
linear combinations. We don't need all three; since v3 depends on v1 , v2 , it
suffices to describe all the linear combinations of v1 , v2 . This will consist
of all vectors (a, b, c) such that one can find coefficients c1 , c2 such that
c1 v1 + c2 v2 = (a, b, c). If we equate components, we are asking for all
vectors (a, b, c) such that
c1 + 2c2 = a
c1 + 3c2 = b
0c1 + 0c2 = c.
The first two equations can always be solved for c1 , c2 , whatever a, b may
be, but the third forces c = 0. The set of all linear combinations of v1 , v2 , v3
is therefore the set of all vectors of the form (a, b, 0).
Linear independence. Vectors v1 , . . . , vn are called linearly independent if
the only solution of the equation
(1)    c1 v1 + · · · + cn vn = 0
is c1 = c2 = · · · = cn = 0.
If one can show there is a solution with at least one ci ≠ 0, then one has
one's answer. For example, just as an example, suppose that there is a
solution of (1) with c2 ≠ 0. Then we can solve for v2 = −(c1 /c2 )v1 −
(c3 /c2 )v3 − · · · − (cn /c2 )vn , and the system is not linearly independent.
But if no such solution exists, then the system is linearly independent.
Let us try it with the vectors v1 = (1, 2, −1, 3), v2 = (1, 0, 2, 4),
v3 = (1, 1, −1, −1) of R4 . The vector equation c1 v1 + c2 v2 + c3 v3 = 0
results in a system of 4 linear equations in the unknowns c1 , c2 , c3 :
c1 + c2 + c3 = 0
2c1 + c3 = 0
−c1 + 2c2 − c3 = 0
3c1 + 4c2 − c3 = 0.
The augmented matrix of this system is
[  1  1  1 | 0 ]
[  2  0  1 | 0 ]
[ −1  2 −1 | 0 ]
[  3  4 −1 | 0 ]
and its row reduced form is
[ 1 0 0 | 0 ]
[ 0 1 0 | 0 ]
[ 0 0 1 | 0 ]
[ 0 0 0 | 0 ],
showing that the only solution is c1 = 0, c2 = 0, c3 = 0. The system is
linearly independent.
Exercise. Verify that the stated row reduced form is the right one.
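The row reduction can be checked in exact arithmetic with rational numbers. A sketch, with the signs of the entries as reconstructed from the system above; rref is an illustrative helper, not a library routine.

```python
from fractions import Fraction

def rref(M):
    """Gauss-Jordan reduction of a matrix of Fractions to reduced echelon form."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]          # move pivot row up
        M[r] = [a / M[r][c] for a in M[r]]       # scale pivot to 1
        for i in range(rows):
            if i != r and M[i][c] != 0:          # clear the pivot column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

F = Fraction
A = [[F(1), F(1), F(1)],
     [F(2), F(0), F(1)],
     [F(-1), F(2), F(-1)],
     [F(3), F(4), F(-1)]]

# Full pivots in every column: only the trivial solution of Ax = 0 exists.
assert rref(A) == [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]
```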
To answer the second question, we need to determine all vectors (a, b, c, d)
for which the system
c1 + c2 + c3 = a
2c1 + c3 = b
−c1 + 2c2 − c3 = c
3c1 + 4c2 − c3 = d
is solvable. The augmented matrix of this system is
[  1  1  1 | a ]
[  2  0  1 | b ]
[ −1  2 −1 | c ]
[  3  4 −1 | d ]
and the exact same row operations as before take it to the canonical form:
[ 1 0 0 | (−2a + 3b + c)/3 ]
[ 0 1 0 | (a + c)/3 ]
[ 0 0 1 | (4a − 3b − 2c)/3 ]
[ 0 0 0 | 2a − 4b − 3c + d ]
The conclusion is that a vector v = (a, b, c, d) is a linear combination of
the given three vectors v1 , v2 , v3 if and only if 2a − 4b − 3c + d = 0; that
is, d = −2a + 4b + 3c, in which case v = c1 v1 + c2 v2 + c3 v3 with
c1 = (−2a + 3b + c)/3 ,  c2 = (a + c)/3 ,  c3 = (4a − 3b − 2c)/3 .
For example, the vector v = (14, 17, −8, 16) satisfies 2a − 4b − 3c + d =
28 − 68 + 24 + 16 = 0, and the formulas give
c1 = (−2a + 3b + c)/3 = (−28 + 51 − 8)/3 = 5,
c2 = (a + c)/3 = (14 − 8)/3 = 2,
c3 = (4a − 3b − 2c)/3 = (56 − 51 + 16)/3 = 7,
so that v = 5v1 + 2v2 + 7v3 .
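The coefficient formulas can be tested numerically against the vectors themselves. A sketch in exact arithmetic, with the signs of the vectors and formulas as reconstructed from the equations above.

```python
from fractions import Fraction as F

v1 = (1, 2, -1, 3)
v2 = (1, 0, 2, 4)
v3 = (1, 1, -1, -1)

def coeffs(a, b, c):
    """The coefficient formulas read off from the row reduction."""
    return (F(-2 * a + 3 * b + c, 3), F(a + c, 3), F(4 * a - 3 * b - 2 * c, 3))

a, b, c, d = 14, 17, -8, 16
assert 2 * a - 4 * b - 3 * c + d == 0        # the membership condition
c1, c2, c3 = coeffs(a, b, c)
assert (c1, c2, c3) == (5, 2, 7)
# The combination really reproduces (a, b, c, d):
combo = tuple(c1 * x + c2 * y + c3 * z for x, y, z in zip(v1, v2, v3))
assert combo == (a, b, c, d)
```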
Consider now the functions
f2 (x) = sin x,    f3 (x) = cos x,    f4 (x) = sin(x + π/6).
Are these functions linearly independent? If we remember our trigonometry
well, it is easy to see that the answer is no, because
sin(x + π/6) = sin x cos(π/6) + cos x sin(π/6) = (√3/2) sin x + (1/2) cos x;
that is, f4 = (√3/2)f2 + (1/2)f3 .
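The trigonometric identity behind this dependence can be spot-checked at a few sample points:

```python
import math

sqrt3 = math.sqrt(3)
for x in (0.0, 0.5, 1.3, -2.0, math.pi / 4):
    lhs = math.sin(x + math.pi / 6)
    rhs = (sqrt3 / 2) * math.sin(x) + 0.5 * math.cos(x)
    # The two sides agree up to floating-point rounding.
    assert abs(lhs - rhs) < 1e-12
```

A finite number of sample points does not prove a dependence between functions, but here the identity holds for every x by the addition formula for sine.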
5. Two silly examples: If V is a vector space, then V itself is a subspace of
itself. Also, the set consisting only of 0, the 0 element of V , by itself is a
subspace of V .
6. Suppose that I = (a, b) is an open interval of real numbers and let
V be the set of ALL real valued functions of domain I,
W the set of all continuous functions on I,
X the set of all differentiable functions on I.
It should be clear that W is a subset of V , and X of W . It is also easy
to see that W is a subspace of the vector space V ; and X is a subspace of
the vector space W . It is sort of evident that a subspace X of a subspace
W of V is also a subspace of V .
7. An important example: Let A = [aij ] be an m × n matrix. Consider the
homogeneous system of equations Ax = 0; that is, the system written
in less abbreviated form as
a11 x1 + a12 x2 + · · · + a1n xn = 0
a21 x1 + a22 x2 + · · · + a2n xn = 0
...................................
am1 x1 + am2 x2 + · · · + amn xn = 0.
In this way we can talk of a solution of the system as being an n-tuple
(x1 , . . . , xn ) satisfying the equations, or a column vector x such that
Ax = 0. For example, for the system
2x1 − 3x2 + x3 = 0
x1 + 2x2 − 3x3 = 0,
the vector (1, 1, 1) is a solution. We identify this vector with the column
matrix
[ 1 ]
[ 1 ]
[ 1 ];
it is a solution because if we replace x1 by 1, x2 by 1 and x3 by 1 in the
equations, the equations are satisfied. Or we can say that it is a solution
because
                        [ 1 ]
[ 2 −3  1 ]             [ 0 ]
[ 1  2 −3 ]  [ 1 ]  =   [ 0 ]   (the 2 × 1 zero matrix).
                        [ 1 ]
The solutions of a homogeneous system of m linear equations in n unknowns
are n-tuples, thus the set of ALL solutions is a subset of Rn . Here is the
important fact: The set of all solutions of a homogeneous system of linear
equations is a subspace of Rn , n = number of unknowns.
In fact, given such a system, the 0 vector in Rn ; that is, the vector
(0, . . . , 0)   (n zeroes)
is ALWAYS a solution. If VECTORS (column vectors) x, y are solutions,
then Ax = 0, Ay = 0 so that A(x + y) = Ax + Ay = 0 + 0 = 0, and
x + y is also a solution. Finally, if x is a solution and c is a scalar,
then A(cx) = cAx = c0 = 0, so cx is also a solution. This shows that
the solution space of a system of m homogeneous linear equations in n
unknowns is a subspace of Rn .
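The closure argument above can be run concretely on the example system. A sketch; the matrix signs are the ones reconstructed for the example, and apply is an illustrative helper.

```python
def apply(A, x):
    """Matrix-vector product Ax, with A given as a list of rows."""
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

A = [(2, -3, 1), (1, 2, -3)]    # the system from the example
x = (1, 1, 1)                   # the solution given in the text
assert apply(A, x) == (0, 0)

y = (2, 2, 2)                   # another solution (a multiple of x)
assert apply(A, y) == (0, 0)

# Closure under addition: x + y is again a solution.
s = tuple(xi + yi for xi, yi in zip(x, y))
assert apply(A, s) == (0, 0)

# Closure under scalar multiplication: 5x is again a solution.
cx = tuple(5 * xi for xi in x)
assert apply(A, cx) == (0, 0)
```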
Bases and Dimension. Given a vector space we can ask for a minimal set of
vectors that span the space. We say the space is spanned by vectors v1 , . . . , vn
if and only if every vector of the space is a linear combination of v1 , . . . , vn .
Examples.
1. The vectors (1, 1, 2), (1, 2, 3), (1, 0, 4), (2, 2, 2) span R3 . How do we verify
that? To say these vectors span is equivalent to saying that given any
vector of R3 , in other words any triple of numbers (a, b, c), we can find
coefficients c1 , c2 , c3 , c4 such that the equation
(a, b, c) = c1 (1, 1, 2) + c2 (1, 2, 3) + c3 (1, 0, 4) + c4 (2, 2, 2)
is solvable. In terms of components, this means that the system of equations
c1 + c2 + c3 + 2c4 = a
c1 + 2c2 + 2c4 = b
2c1 + 3c2 + 4c3 + 2c4 = c
can be solved for all choices of a, b, c. To see whether this is or isn't so,
we consider the augmented matrix and row reduce.
[ 1 1 1 2 | a ]      [ 1 1  1  2 | a      ]
[ 1 2 0 2 | b ]  →   [ 0 1 −1  0 | b − a  ]
[ 2 3 4 2 | c ]      [ 0 1  2 −2 | c − 2a ]

     [ 1 0  2  2 | 2a − b     ]      [ 1 0 0 10/3  | (8a − b − 2c)/3  ]
 →   [ 0 1 −1  0 | b − a      ]  →   [ 0 1 0 −2/3  | (−4a + 2b + c)/3 ]
     [ 0 0  3 −2 | c − a − b  ]      [ 0 0 1 −2/3  | (−a − b + c)/3   ]
And we are done. This shows that we can select c4 at will; any value will
do. And then
c1 = −(10/3)c4 + (8a − b − 2c)/3,
c2 = (2/3)c4 + (−4a + 2b + c)/3,
c3 = (2/3)c4 + (−a − b + c)/3.
We get several conclusions from these computations. The first one is that
the vectors (1, 1, 2), (1, 2, 3), (1, 0, 4), (2, 2, 2) span R3 . But they are not a
minimal set. Since c4 can be chosen any way we wish, one choice is c4 = 0,
and that means that the fourth vector is superfluous.
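With c4 = 0, the formulas read off from the row reduction can be verified for any target vector. A sketch in exact arithmetic; the signs of the formulas are the ones reconstructed above.

```python
from fractions import Fraction as F

v1, v2, v3 = (1, 1, 2), (1, 2, 3), (1, 0, 4)

def coeffs(a, b, c):
    """Coefficients with c4 = 0, as read off from the row reduction."""
    return (F(8 * a - b - 2 * c, 3),
            F(-4 * a + 2 * b + c, 3),
            F(-a - b + c, 3))

for (a, b, c) in [(1, 0, 0), (0, 1, 0), (2, -3, 5)]:
    c1, c2, c3 = coeffs(a, b, c)
    combo = tuple(c1 * p + c2 * q + c3 * r for p, q, r in zip(v1, v2, v3))
    assert combo == (a, b, c)    # the three vectors reach every (a, b, c)
```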
2. In the previous example we saw a set of vectors spanning R3 . It is not the
most obvious set. For any n the vectors
e1 = (1, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en = (0, . . . , 0, 1)
span Rn . The vector ej for any j in the range 1 to n is the n-tuple having a
1 in the j-th place, all other entries equal to 0. It is clear (is it?) that if
x = (x1 , . . . , xn ) ∈ Rn , then
x = x1 e1 + x2 e2 + · · · + xn en
is a linear combination of these vectors.
3. Consider the set of all linear combinations of the functions e^x , e^(−x) ;
this is a vector space, and a subspace of the vector space of all real valued
functions defined in the interval (−∞, ∞). Show that cosh x, sinh x are in
the span of these functions; in fact, show that the spans of e^x , e^(−x) and
of cosh x, sinh x are identical.
Solution. By definition
cosh x = (1/2)e^x + (1/2)e^(−x) ,    sinh x = (1/2)e^x − (1/2)e^(−x) ,
so cosh x, sinh x are linear combinations of e^x , e^(−x) and every linear
combination of cosh x, sinh x is also one of e^x , e^(−x). In fact, we will have
c1 cosh x + c2 sinh x = c1 ((1/2)e^x + (1/2)e^(−x)) + c2 ((1/2)e^x − (1/2)e^(−x))
                      = ((c1 + c2 )/2) e^x + ((c1 − c2 )/2) e^(−x) .
The span of cosh x, sinh x is thus contained in that of e^x , e^(−x). But,
conversely,
e^x = cosh x + sinh x,    e^(−x) = cosh x − sinh x,
so that e^x , e^(−x) are linear combinations of cosh x, sinh x and the span of
e^x , e^(−x) is included in that of cosh x, sinh x; in other words, the two spans
are equal.
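Both directions of this argument rest on identities that can be spot-checked numerically:

```python
import math

for x in (-1.5, 0.0, 0.3, 2.0):
    ex, emx = math.exp(x), math.exp(-x)
    # cosh and sinh as combinations of e^x, e^(-x):
    assert abs(math.cosh(x) - (ex + emx) / 2) < 1e-12
    assert abs(math.sinh(x) - (ex - emx) / 2) < 1e-12
    # e^x and e^(-x) as combinations of cosh, sinh:
    assert abs(ex - (math.cosh(x) + math.sinh(x))) < 1e-9
    assert abs(emx - (math.cosh(x) - math.sinh(x))) < 1e-9
```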
Here are some important facts and definitions.
Definition. A set of vectors v1 , . . . , vn of a vector space V is a basis of the
vector space if (and only if) it is a minimal spanning set. That is:
1. Every vector of V is a linear combination of the vectors v1 , . . . , vn .
2. No proper subset of the set v1 , . . . , vn spans all of V ; in particular, if we
remove the vector v1 , then v1 cannot be obtained as a linear combination
of v2 , . . . , vn ; same for v2 , etc. In other words, no vector from the set can
be a linear combination of the remaining vectors.
An equivalent definition, the one usually found in textbooks is: A set of
vectors v1 , . . . , vn is a basis of the vector space V if (and only if) the set is
linearly independent and spans V .
A property of a basis, which can also be used as a definition is: A set
v1 , . . . , vn is a basis of V if and only if every vector of V can be written uniquely
as a linear combination of the vectors v1 , . . . , vn .
For example, in our first example above, with v1 = (1, 1, 2), v2 = (1, 2, 3),
v3 = (1, 0, 4), v4 = (2, 2, 2), the set v1 , v2 , v3 , v4 is not a basis of R3 because
we have the freedom of choosing c4 any way we wish. However, once we put
c4 = 0 (i.e., exclude v4 ) all freedom of choice is gone; the vector (a, b, c) can
be written in the form c1 v1 + c2 v2 + c3 v3 in only one way; the only choice of
c1 , c2 , c3 that works is given by what we found above, namely (to repeat):
c1 = (8a − b − 2c)/3 ,  c2 = (−4a + 2b + c)/3 ,  c3 = (−a − b + c)/3 .
Exercise. Write the vectors (1, 0, 0), (0, 1, 0) and (0, 0, 1) as linear combinations
of v1 , v2 , v3 .
Definition. A vector space is finite dimensional if it has a finite basis.
We come to a very important result:
Suppose V is a finite dimensional vector space. Then any two bases have the
same number of elements. That number is called the dimension of the vector
space.
Examples and more.
1. We saw that (1, 1, 2), (1, 2, 3), (1, 0, 4) is a basis of R3 . But the obvious
basis is (1, 0, 0), (0, 1, 0), (0, 0, 1). In general, for every n the n vectors
e1 = (1, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en = (0, . . . , 0, 1)
are easily seen to be linearly independent and (as mentioned above) span
Rn , showing Rn has dimension n.
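The decomposition x = x1 e1 + · · · + xn en can be checked directly; the helper name e is illustrative.

```python
def e(j, n):
    """The j-th standard basis vector of R^n (1-indexed)."""
    return tuple(1 if k == j - 1 else 0 for k in range(n))

n = 5
x = (3, -1, 0, 7, 2)
# Sum x_j e_j, computed componentwise, recovers x itself.
combo = tuple(sum(x[j] * e(j + 1, n)[k] for j in range(n)) for k in range(n))
assert combo == x
assert e(2, 4) == (0, 1, 0, 0)
```

Since every x has this expression and the coefficients are forced to be the components of x, the expression is unique, which is exactly the basis property.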