
Matrix Algebra Operations

In this chapter, we will discuss the elementary operations of addition, subtraction, multiplication, and inversion, a matrix algebra analogue of division of scalar numbers. Subsequently, we will turn our attention to powers of matrices and to operations involving scalar numbers and matrices, scalar numbers and vectors, and vectors and matrices. The operations on matrices differ from similar operations of scalar algebra in several respects. Matrix algebra operations are, in general, not commutative, and attention must be paid to whether the matrices are conformable with respect to the intended operation. It must also be noted whether a given operation pertains to matrix elements or to matrices.

Addition and Subtraction of Matrix Elements


Addition of Matrix Elements To add elements of matrices A and B and store the result as a matrix C,

C = A (+) B

elements of matrix A are added to their corresponding elements in matrix B and stored as elements of matrix C. Obviously, all three matrices must be of the same order, or size. Notice that the plus sign in the above equation is enclosed in parentheses to indicate addition of matrix elements, as contrasted with addition of matrices, to be discussed in a section to follow. An example of addition of matrix elements is schematized below

and illustrated as

Subtraction of Matrix Elements The operation of subtracting matrix elements can be schematized as

C = A (-) B

and illustrated as

In sum, the above matrix elements can be added and subtracted if and only if the matrices are of the same order (identical in the number of rows and columns). Matrices upon which an operation is permissible are said to conform to the operation.
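As a minimal sketch of the two element-wise operations above (the matrices here are hypothetical examples, not taken from the text):

```python
# Element-wise addition and subtraction: corresponding elements of A and B
# are combined and stored in C. The matrices must be of the same order.

def elementwise(op, A, B):
    """Apply a binary operation to corresponding elements of A and B."""
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrices must be of the same order")
    return [[op(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[6, 5, 4],
     [3, 2, 1]]

C_plus = elementwise(lambda a, b: a + b, A, B)   # A (+) B
C_minus = elementwise(lambda a, b: a - b, A, B)  # A (-) B
```

Passing matrices of different orders raises an error, mirroring the conformability requirement.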

Addition and Subtraction of Matrices


Textbooks on matrix algebra, while routinely describing major and minor vector products, do not suggest analogous operations for the major and minor sums and differences of summands, minuends, and subtrahends. These operations are easy to imagine and are not discussed because most of their potential applications can be accomplished equally well by multiplications with unit vectors. However, on close scrutiny, the matrix algebra operations of addition and subtraction (of vectors, not elements of vectors; of matrices, not elements of matrices) can be used for concise expression of several key algorithms of statistical theory and the theory of probability. The novel matrix algebra operations were first formulated in the following two articles:
Krus, D. J., & Ceurvorst, R. W. (1979). Dominance, information, and hierarchical scaling of variance space. Applied Psychological Measurement, 3, 515-527.
Krus, D. J., & Wilkinson, S. M. (1986). Matrix differencing as a concise expression of variance. Educational and Psychological Measurement, 46, 179-183.

Addition of Matrices To add two matrices A and B and store the results in matrix C,

C = A + B

the number of columns in matrix A must equal the number of rows in matrix B; in other words, the matrices must be conformable to matrix addition. The resulting matrix C will have the number of rows of the first matrix and the number of columns of the second matrix. For example, if matrix A is a 3x2 matrix and matrix B is a 2x3 matrix, the resulting matrix will be a 3x3 matrix. The schematic representation of matrix addition is

illustrated as
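This operation can be sketched in code, assuming matrix addition is defined as the exact analogue of matrix multiplication with each product a_ik * b_kj replaced by the sum a_ik + b_kj; this definition is an assumption on my part, and the two Krus articles cited above are the authoritative source. The numerical values are hypothetical.

```python
# Matrix addition (assumed definition): c_ij = sum over k of (a_ik + b_kj).
# Conformability: columns of A must equal rows of B; C has the rows of A
# and the columns of B, as in the 3x2 by 2x3 example in the text.

def matrix_add(A, B):
    p = len(B)              # rows of B
    if len(A[0]) != p:      # columns of A
        raise ValueError("matrices not conformable to matrix addition")
    return [[sum(A[i][k] + B[k][j] for k in range(p))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [3, 4],
     [5, 6]]          # 3x2
B = [[1, 0, 2],
     [0, 1, 2]]       # 2x3
C = matrix_add(A, B)  # 3x3 result
```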

Subtraction of Matrices To subtract two matrices A and B and store the results in matrix C,

C = A - B

again, the matrices must be conformable: the number of columns in matrix A must equal the number of rows in matrix B. The resulting matrix C will have the number of rows of the first matrix and the number of columns of the second matrix. The schematic representation of matrix subtraction is shown below

illustrated as

Note that the first matrix is a 2x3 matrix and the second matrix is a 3x2 matrix, so the resulting matrix is a 2x2 matrix.
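Matrix subtraction can be sketched the same way, again assuming the definition is the analogue of matrix multiplication with each product replaced by the difference a_ik - b_kj (an assumption; see the cited articles). The 2x3 and 3x2 matrices here are hypothetical.

```python
# Matrix subtraction (assumed definition): c_ij = sum over k of (a_ik - b_kj).

def matrix_sub(A, B):
    p = len(B)
    if len(A[0]) != p:
        raise ValueError("matrices not conformable to matrix subtraction")
    return [[sum(A[i][k] - B[k][j] for k in range(p))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]       # 2x3
B = [[1, 1],
     [2, 2],
     [3, 3]]          # 3x2
C = matrix_sub(A, B)  # 2x2 result, matching the note above
```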

Multiplication of Matrix Elements


To multiply elements of two matrices A and B and store the results in matrix C,

C = A (.) B

each element of matrix A is multiplied by its corresponding element in matrix B and the result is stored in matrix C. To perform this operation, both matrices must be of the same size, or one of them must be a scalar number. The schematic representation of multiplication of matrix elements is

illustrated by a special case of multiplication of a matrix by a scalar number as

In the special case of multiplication of a matrix by a constant, it is customary to omit the (.) sign.
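A minimal sketch of element-wise multiplication and its scalar special case, with hypothetical values:

```python
# Element-wise multiplication A (.) B: corresponding elements are multiplied.
# Multiplying every element by a constant is the scalar special case.

def elementwise_mul(A, B):
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(c, A):
    return [[c * a for a in row] for row in A]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

C = elementwise_mul(A, B)  # A (.) B
D = scalar_mul(2, A)       # 2A
```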

Multiplication of Matrices
Matrix multiplication is analogous to matrix addition and, like matrix addition, this operation is, in general, not commutative. To premultiply matrix B by matrix A,

C = AB

the matrices must be conformable to matrix multiplication: the number of columns in matrix A must equal the number of rows in matrix B. The product matrix C will have the number of rows of the first matrix and the number of columns of the second matrix. The schematic representation of matrix multiplication is

illustrated as

The postmultiplication of matrix B by matrix A,

C = BA

using the same numerical example, is illustrated as
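A sketch of ordinary matrix multiplication, showing that premultiplication (AB) and postmultiplication (BA) generally give different results; the numerical values are hypothetical, not the example from the text.

```python
# Ordinary matrix multiplication: c_ij = sum over k of a_ik * b_kj.
# The number of columns of A must equal the number of rows of B.

def matmul(A, B):
    if len(A[0]) != len(B):
        raise ValueError("matrices not conformable to matrix multiplication")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

AB = matmul(A, B)  # premultiplication of B by A
BA = matmul(B, A)  # postmultiplication of B by A
```

Comparing AB with BA makes the non-commutativity concrete.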

Premultiplication of a matrix A by a diagonal matrix T,

C = TA

is equivalent to multiplying the elements of each row of the matrix by the corresponding diagonal element of the diagonal matrix. In schematic representation

The following example illustrates rearrangement of the columns of a matrix
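Both effects can be sketched as follows; the diagonal matrix T and the permutation matrix P used to rearrange columns are illustrative assumptions, not the matrices from the lost example.

```python
# Premultiplying A by a diagonal matrix T scales each row of A by the
# corresponding diagonal element; postmultiplying A by a permutation
# matrix P rearranges the columns of A.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

T = [[2, 0],
     [0, 3]]           # diagonal matrix
A = [[1, 2, 3],
     [4, 5, 6]]
TA = matmul(T, A)      # row 1 scaled by 2, row 2 scaled by 3

P = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]        # permutation matrix
AP = matmul(A, P)      # columns of A rearranged
```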

Division of Matrix Elements


To divide elements of two matrices A and B and store the results in matrix C,

C = A (÷) B

corresponding elements of matrices A and B form a fraction that is stored in C. All three matrices must be of the same size, or the divisor must be a scalar number. The division sign is enclosed in parentheses to indicate division of matrix elements. An example of division of matrix elements is schematized below

and illustrated, using as an example a division of a matrix by a scalar number
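A minimal sketch of element-wise division and of division of a matrix by a scalar number; the values are hypothetical and chosen to divide evenly.

```python
# Element-wise division: each element of A is divided by its corresponding
# element in B. Dividing every element by a constant is the scalar case.

def elementwise_div(A, B):
    return [[a / b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_div(A, c):
    return [[a / c for a in row] for row in A]

A = [[2, 4],
     [6, 8]]
B = [[1, 2],
     [3, 4]]

C = scalar_div(A, 2)       # A divided by the scalar 2
D = elementwise_div(A, B)  # A (÷) B
```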

Inverse

The analogue of matrix division is multiplication by a reciprocal. The reciprocal of a matrix is called its inverse. Only square matrices can be inverted; however, some square matrices, called singular matrices, do not have inverses. The inverse of a matrix is denoted by a -1 superscript. To invert a matrix, we must first determine that the matrix is invertible by computing its determinant. If the determinant is not zero, the matrix is invertible. This is analogous to the restriction on division of scalar numbers, which cannot be divided by zero. Consider a special case of a two-by-two matrix A

with determinant equal to ad - bc. For the special case of two-by-two matrices, the inverse of the above matrix can be calculated by switching the elements in the principal diagonal,

changing signs of the off-diagonal elements,

and by dividing all elements by the determinant.
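The three steps above amount to the familiar closed-form inverse of a two-by-two matrix, sketched here:

```python
# Inverse of a 2x2 matrix [[a, b], [c, d]]: swap the principal diagonal,
# negate the off-diagonal elements, and divide by the determinant ad - bc.

def inverse_2x2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: determinant is zero, no inverse")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]
```

For the matrix with rows (1, 2) and (3, 4) used in the worked example below, the determinant is -2 and the inverse has rows (-2, 1) and (1.5, -0.5).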

Consider a matrix A

1 2
3 4

First, we must determine whether this matrix is invertible. The determinant is computed as (1)(4) - (2)(3) and equals -2. The determinant does not equal zero, which means that the matrix is invertible. To invert the matrix, switch the elements in the principal diagonal,

4 2
3 1

change the signs of the off-diagonal elements,

4 -2
-3 1

and divide all elements by the determinant. Thus, the matrix inverse equals

-2 1
1.5 -0.5

Analogous to scalar multiplication of a number by its reciprocal,

a (1/a) = 1

an identity matrix I equals

I = AA^-1

This analogy also suggests the nature of matrix singularity. In arithmetic, the only number without a reciprocal is 0. In matrix algebra, singular matrices, i.e., matrices with a zero determinant, do not have inverses. A numerical illustration of the above equation is
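The relation can be checked numerically with the two-by-two example from the text, whose determinant is (1)(4) - (2)(3) = -2:

```python
# Multiplying A by its inverse should reproduce the 2x2 identity matrix I.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
A_inv = [[-2.0, 1.0],
         [1.5, -0.5]]

I = matmul(A, A_inv)  # expected to equal the identity matrix
```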

The procedures discussed for matrix inversion apply only to two-by-two matrices. Inverting a matrix larger than two-by-two is laborious and is best done with the help of a computer.

Powers of Matrix Elements


The squares of the matrix elements are written as

In schematic representation

A numerical example is

A matrix raised to a fractional power provides for obtaining roots of matrix elements, and a matrix raised to a negative fractional power provides for obtaining reciprocals of roots of matrix elements.
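Powers of matrix elements can be sketched as follows; the matrix of perfect squares is a hypothetical example:

```python
# Powers of matrix elements: every element is raised to the given power.
# Power 2 gives squares of the elements; power 0.5 gives their square roots.

def elementwise_pow(A, p):
    return [[a ** p for a in row] for row in A]

A = [[1, 4],
     [9, 16]]

squares = elementwise_pow(A, 2)   # squares of the elements
roots = elementwise_pow(A, 0.5)   # square roots of the elements
```

A negative fractional power, such as -0.5, would likewise yield the reciprocals of the roots.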

Powers of Matrices
The square of a matrix is the product of the matrix with itself,

C = AA

In schematic representation

this operation is illustrated as

The results of this illustrative example can be compared with the previous results to stress that a power of a matrix is not equal to the power of its matrix elements.
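The contrast can be made concrete with a hypothetical two-by-two matrix:

```python
# The square of a matrix (the matrix multiplied by itself) differs from
# the matrix of squared elements.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [3, 4]]

A_squared = matmul(A, A)                                # power of the matrix
squared_elements = [[a * a for a in row] for row in A]  # powers of elements
```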

