In this chapter, we will discuss the elementary operations of addition, subtraction, multiplication, and inversion, the matrix algebra analogue of division of scalar numbers. Subsequently, we will turn our attention to powers of matrices and to operations involving scalar numbers and matrices, scalar numbers and vectors, and vectors and matrices. The operations on matrices differ from the corresponding operations of scalar algebra in several respects. Matrix algebra operations are, in general, not commutative, and attention must be paid to whether the matrices are conformable with respect to the intended operation. It must also be noted whether a given operation pertains to matrix elements or to matrices.
Addition of Matrix Elements In this operation, the elements of matrix A are added to their corresponding elements in matrix B and stored as the elements of matrix C. Obviously, all three matrices must be of the same order, or size. Notice that the plus sign in the above equation is enclosed in parentheses to indicate addition of matrix elements, as contrasted with addition of matrices, to be discussed in a section to follow. An example of addition of matrix elements is schematized below
and illustrated as
Subtraction of Matrix Elements The operation of subtracting matrix elements can be schematized as
and illustrated as
In sum, the above matrix elements can be added and subtracted if and only if the matrices are of the same order (identical in the number of rows and columns). Matrices upon which an operation is permissible are said to conform to the operation.
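The elementwise operations above can be sketched in a few lines of Python. This is a minimal illustration using nested lists; the function name `elementwise` is my own, not from the text, and the numbers are made up:

```python
def elementwise(op, A, B):
    """Apply op to corresponding elements of two same-order matrices."""
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrices must be of the same order")
    return [[op(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_add = elementwise(lambda a, b: a + b, A, B)  # A (+) B
C_sub = elementwise(lambda a, b: a - b, A, B)  # A (-) B
print(C_add)  # [[6, 8], [10, 12]]
print(C_sub)  # [[-4, -4], [-4, -4]]
```

The explicit order check mirrors the conformability requirement: attempting the operation on matrices of different sizes raises an error rather than producing a result.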
Addition of Matrices To add two matrices A and B and store the result in matrix C, the two matrices must be of the same order; in other words, the matrices must be conformable to matrix addition. The resulting matrix C will have the same number of rows and columns as the matrices being added. For example, if matrix A is a 3x2 matrix, matrix B must also be a 3x2 matrix, and the resulting matrix C will be a 3x2 matrix. The schematic representation of matrix addition is
illustrated as
Subtraction of Matrices To subtract matrix B from matrix A and store the result in matrix C, again, the matrices must be conformable to the operation: they must be of the same order. The resulting matrix C will have the same number of rows and columns as A and B. The schematic representation of matrix subtraction is shown below
illustrated as
Note that both matrices are of the same order; if the first matrix is a 2x3 matrix, the second matrix must also be a 2x3 matrix, and the resulting matrix is a 2x3 matrix as well.
Multiplication of Matrix Elements In this operation, each element of matrix A is multiplied by its corresponding element in matrix B and the result is stored in matrix C. To perform this operation, both matrices must be of the same size, or one of the operands must be a scalar. The schematic representation of multiplication of matrix elements is
In the special case of multiplication of a matrix by a constant, it is customary to omit the (.) sign.
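Both elementwise multiplication and multiplication by a constant can be sketched as follows. This is a hypothetical illustration; the names `hadamard` and `scale` are mine, and the numbers are made up:

```python
def hadamard(A, B):
    """Multiply corresponding elements of two same-order matrices: A (.) B."""
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(k, A):
    """Multiply every element of A by the scalar k; the (.) sign is omitted, kA."""
    return [[k * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(hadamard(A, B))  # [[5, 12], [21, 32]]
print(scale(2, A))     # [[2, 4], [6, 8]]
```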
Multiplication of Matrices
Unlike matrix addition, matrix multiplication is not commutative: in general, AB does not equal BA, and even when both products exist they need not be equal. Also, to premultiply matrix B by matrix A
the matrices must be conformable to matrix multiplication. The number of columns in matrix A must equal the number of rows in matrix B. The product matrix C will have the number of rows of the first matrix and the number of columns of the second matrix. The schematic representation of matrix multiplication is
illustrated as
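The row-by-column rule can be sketched in Python. This is a minimal illustration with made-up numbers; each element c[i][j] is the sum of products of row i of A with column j of B:

```python
def matmul(A, B):
    """Premultiply B by A: C = AB. Columns of A must equal rows of B."""
    if len(A[0]) != len(B):
        raise ValueError("matrices are not conformable to multiplication")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]      # a 2x3 matrix
B = [[1, 2],
     [3, 4],
     [5, 6]]         # a 3x2 matrix
C = matmul(A, B)     # the product is a 2x2 matrix
print(C)  # [[22, 28], [49, 64]]
```

Note that `matmul(B, A)` is also defined here but yields a 3x3 matrix, a concrete reminder that the operation is not commutative.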
Premultiplication of a matrix by a diagonal matrix is equivalent to scalar multiplication of each row of the matrix by the corresponding diagonal element of the diagonal matrix. In schematic representation
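The row-scaling property can be checked numerically. This is a small sketch with made-up numbers, computing the product DA in full and then row by row:

```python
# D is diagonal; premultiplying A by D scales each row of A
# by the corresponding diagonal element of D.
D = [[2, 0],
     [0, 3]]
A = [[1, 2, 3],
     [4, 5, 6]]

# Full matrix product DA, by the row-by-column rule:
DA = [[sum(D[i][k] * A[k][j] for k in range(len(A)))
       for j in range(len(A[0]))]
      for i in range(len(D))]
print(DA)  # [[2, 4, 6], [12, 15, 18]]

# The same result, scaling each row of A by D's diagonal element:
rows_scaled = [[D[i][i] * a for a in A[i]] for i in range(len(A))]
assert DA == rows_scaled
```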
Division of Matrix Elements In this operation, corresponding elements of matrices A and B form a fraction that is stored in C. All three matrices must be of the same size, or the divisor must be a scalar number. The division sign is enclosed in parentheses to indicate division of matrix elements. An example of division of matrix elements is schematized below
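Elementwise division can be sketched the same way as the other elementwise operations. A hypothetical illustration with made-up numbers:

```python
def elementwise_div(A, B):
    """Divide corresponding elements of A by those of B: A (/) B."""
    return [[a / b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[2.0, 4.0], [6.0, 8.0]]
B = [[2.0, 2.0], [3.0, 4.0]]
print(elementwise_div(A, B))  # [[1.0, 2.0], [2.0, 2.0]]
```

As in scalar arithmetic, no element of the divisor matrix may be zero.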
Inverse
The analog of matrix division is multiplication by a reciprocal. The reciprocal of a matrix is called its inverse. Only square matrices can be inverted; however, some square matrices, called singular matrices, do not have inverses. The inverse of a matrix is denoted by a -1 superscript. To invert a matrix, we must first determine that the matrix is invertible by computing its determinant. If the determinant is not zero, the matrix is invertible. This is analogous to the restriction in scalar arithmetic that a number cannot be divided by zero. Consider a special case of a two-by-two matrix A
with determinant equal to ad-bc. For the special case of two-by-two matrices, the inverse of the above matrix can be calculated by switching the elements in the principal diagonal, changing the signs of the remaining two elements, and dividing each element by the determinant ad-bc.
Consider a matrix A
First, we must determine whether this matrix is invertible. The determinant is computed as (1)(4) - (2)(3) and equals -2. Since the determinant does not equal zero, the matrix is invertible. To invert the matrix, switch the elements in the principal diagonal, change the signs of the remaining two elements, and divide each element by the determinant,
This analogy also suggests the nature of matrix singularity. In arithmetic, the only number without a reciprocal is 0. In matrix algebra, singular matrices, i.e., matrices with a zero determinant, do not have an inverse. A numerical illustration of the above equation is
The procedures discussed here apply only to two-by-two matrices. Inverting a matrix larger than two-by-two is laborious by hand and is best done with the help of a computer.
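The two-by-two rule can be sketched directly, using the same matrix A as the worked example above. The function name `inverse_2x2` is illustrative, not from the text:

```python
def inverse_2x2(M):
    """Invert a 2x2 matrix [[a, b], [c, d]]: switch the diagonal elements,
    negate the off-diagonal elements, and divide by the determinant."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; it has no inverse")
    return [[d / det, -b / det],
            [-c / det, a / det]]

A = [[1, 2], [3, 4]]        # determinant: (1)(4) - (2)(3) = -2
A_inv = inverse_2x2(A)
print(A_inv)  # [[-2.0, 1.0], [1.5, -0.5]]
```

The zero-determinant check reproduces the singularity restriction: a matrix such as [[1, 2], [2, 4]] raises an error instead of returning an inverse.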
In schematic representation
A numerical example is
provides for obtaining roots of matrix elements and a matrix for obtaining reciprocals of roots of matrix elements.
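Roots and reciprocals of roots of matrix elements are again elementwise operations. A hypothetical sketch with made-up numbers; the function names are mine:

```python
import math

def elementwise_sqrt(A):
    """Square root of each matrix element."""
    return [[math.sqrt(a) for a in row] for row in A]

def elementwise_rsqrt(A):
    """Reciprocal of the square root of each matrix element."""
    return [[1 / math.sqrt(a) for a in row] for row in A]

A = [[1.0, 4.0], [9.0, 16.0]]
print(elementwise_sqrt(A))   # [[1.0, 2.0], [3.0, 4.0]]
print(elementwise_rsqrt(A))  # [[1.0, 0.5], [0.333..., 0.25]]
```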
Powers of Matrices
The square of a matrix is the matrix multiplied by itself; note that only a square matrix can be multiplied by itself, so only square matrices can be raised to a power
In schematic representation
The results of the illustrative example can be compared with the previous results to stress that a power of a matrix is not equal to the power of its matrix elements.
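The distinction can be demonstrated with a small sketch (made-up numbers; the `matmul` helper implements the ordinary row-by-column matrix product):

```python
def matmul(A, B):
    """Matrix product AB by the row-by-column rule."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
A_squared = matmul(A, A)                         # A^2 = AA, a matrix power
A_elem_sq = [[a * a for a in row] for row in A]  # squares of the elements
print(A_squared)  # [[7, 10], [15, 22]]
print(A_elem_sq)  # [[1, 4], [9, 16]]
assert A_squared != A_elem_sq
```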