
Anti-diagonal matrix

In mathematics, an anti-diagonal matrix is a matrix where all the entries are zero except those
on the diagonal going from the lower left corner to the upper right corner (↗), known as the anti-
diagonal.

More precisely, an n-by-n matrix A is an anti-diagonal matrix if the (i, j) element is zero for all i, j ∈
{1, …, n} with i + j ≠ n + 1.

An example of an anti-diagonal matrix is

All anti-diagonal matrices are also persymmetric.

The product of two anti-diagonal matrices is a diagonal matrix.
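As an illustrative sketch (not taken from the source, with made-up entries), the following numpy snippet builds two anti-diagonal matrices and checks that their product is diagonal:

```python
import numpy as np

# Two anti-diagonal matrices, built by flipping diagonal matrices left-right.
A = np.fliplr(np.diag([1, 2, 3]))   # anti-diagonal entries 1, 2, 3
B = np.fliplr(np.diag([4, 5, 6]))

P = A @ B
print(P)                                      # only the main diagonal is non-zero
print(np.allclose(P, np.diag(np.diag(P))))    # True: the product is diagonal
```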

Augmented matrix
In linear algebra, the augmented matrix of a matrix is obtained by appending the columns of another matrix to it.

Given the matrices A and B, where:

Then, the augmented matrix (A|B) is written as:

This is useful when solving systems of linear equations; the augmented matrix may also be used to find the inverse of a matrix by combining it with the identity matrix.
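A brief hedged sketch of both uses (the numbers are invented): the matrix (A|b) is formed for a linear system, and (A|I) is the starting point for computing an inverse by row reduction.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([[5.0],
              [10.0]])

aug = np.hstack([A, b])               # the augmented matrix (A | b)
x = np.linalg.solve(A, b)             # solution of A x = b
aug_id = np.hstack([A, np.eye(2)])    # (A | I); row-reducing it gives (I | A^-1)

print(aug)
print(x)
print(np.linalg.inv(A))               # what the reduced right half would be
```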
Band matrix
In mathematics, particularly matrix theory, a band matrix is a sparse matrix whose non-zero
entries are confined to a diagonal band, comprising the main diagonal and zero or more
diagonals on either side.

Formally, an n×n matrix A = (ai,j) is a band matrix if all matrix elements are zero outside a
diagonally bordered band whose range is determined by constants k1 and k2: ai,j = 0 whenever
j < i − k1 or j > i + k2, with k1, k2 ≥ 0.

The quantities k1 and k2 are the left and right half-bandwidth, respectively. The bandwidth of
the matrix is k1 + k2 + 1 (in other words, the smallest number of adjacent diagonals to which
the non-zero elements are confined).

A band matrix with k1 = k2 = 0 is a diagonal matrix; a band matrix with k1 = k2 = 1 is


a tridiagonal matrix; when k1 = k2 = 2 one has a pentadiagonal matrix and so on. If one
puts k1 = 0,k2 = n−1, one obtains the definition of an upper triangular matrix; similarly,
for k1 = n−1, k2 = 0 one obtains a lower triangular matrix.
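The following small sketch (matrix invented) measures k1 and k2 for a given matrix so the bandwidth k1 + k2 + 1 can be read off; the helper name half_bandwidths is ours, not from the source.

```python
import numpy as np

def half_bandwidths(A):
    """Return (k1, k2): how far non-zeros reach below / above the main diagonal."""
    i, j = np.nonzero(A)
    k1 = int(np.max(i - j, initial=0))
    k2 = int(np.max(j - i, initial=0))
    return k1, k2

T = np.array([[1, 2, 0, 0],
              [3, 4, 5, 0],
              [0, 6, 7, 8],
              [0, 0, 9, 1]])
print(half_bandwidths(T))   # (1, 1): a tridiagonal matrix, bandwidth 3
```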

Conjugate transpose
"Adjoint matrix" redirects here. An adjugate matrix is sometimes called a "classical adjoint matrix".

In mathematics, the conjugate transpose, Hermitian transpose, or adjoint matrix of an m-by-n matrix A with complex entries is the n-by-m matrix A* obtained from A by taking the transpose and then taking the complex conjugate of each entry (i.e. negating their imaginary parts but not their real parts). The conjugate transpose is formally defined by (A*)i,j = conj(Aj,i),

where the subscripts denote the i,j-th entry, for 1 ≤ i ≤ n and 1 ≤ j ≤ m, and conj denotes the scalar complex conjugate. (The complex conjugate of a + bi, where a and b are reals, is a − bi.)

This definition can also be written as A* = (conj(A))T = conj(AT), where AT denotes the transpose and conj(A) denotes the matrix with complex conjugated entries.

Other names for the conjugate transpose of a matrix are Hermitian conjugate,
or transjugate. The conjugate transpose of a matrix A can be denoted by any of these
symbols:
 A* or AH, commonly used in linear algebra
 A† (sometimes pronounced "A dagger"), universally used in quantum mechanics
 A+, although this symbol is more commonly used for the Moore-Penrose
pseudoinverse

In some contexts, an overbar is used instead to denote the matrix with complex conjugated entries, and the conjugate transpose is then written by combining that conjugate with the transpose.

Example

If

then
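Since the example matrices above are not reproduced here, a small hedged numpy sketch (with an invented matrix) shows the operation: transpose, then conjugate every entry.

```python
import numpy as np

A = np.array([[1 + 2j, 3j],
              [4 + 0j, 5 - 1j],
              [0 + 0j, 2 + 2j]])     # a 3-by-2 complex matrix

A_star = A.conj().T                   # the 2-by-3 conjugate transpose A*
print(A_star)
print(np.allclose(A_star, np.conjugate(A.T)))   # same operation, written the other way
```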

Diagonal matrix
In linear algebra, a diagonal matrix is a square matrix in which the entries outside the main
diagonal (↘) are all zero. The diagonal entries themselves may or may not be zero. Thus, the
matrix D = (di,j) with n columns and n rows is diagonal if di,j = 0 whenever i ≠ j.

For example, the following matrix is diagonal:

The term diagonal matrix may sometimes refer to a rectangular diagonal matrix,
which is an m-by-n matrix with only the entries of the form di,i possibly non-zero.
Hermitian matrix
A Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries which is
equal to its own conjugate transpose – that is, the element in the ith row and jth column is equal
to the complex conjugate of the element in the jth row and ith column, for all indices i and j:

If the conjugate transpose of a matrix is denoted by A*, then the Hermitian property can
be written concisely as A = A*.

Hermitian matrices can be understood as the complex extension of real symmetric matrices.

Examples

For example,

is a Hermitian matrix
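A minimal numerical check (the matrix is invented, not the one pictured above): a matrix is Hermitian exactly when it equals its conjugate transpose.

```python
import numpy as np

A = np.array([[2, 2 + 1j],
              [2 - 1j, 3]])

print(np.allclose(A, A.conj().T))   # True: A is Hermitian
print(np.diag(A))                   # the diagonal entries are necessarily real
```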

Identity matrix
In linear algebra, the identity matrix or unit matrix of size n is the n-by-n square matrix with
ones on the main diagonal and zeros elsewhere. It is denoted by In, or simply by I if the size is
immaterial or can be trivially determined by the context. (In some fields, such as quantum
mechanics, the identity matrix is denoted by a boldface one, 1; otherwise it is identical to I.)

Some mathematics books use U and E to represent the identity matrix (meaning "unit matrix" and "elementary matrix", or from the German "Einheitsmatrix", respectively),[1] although I is considered more universal.
The important property of the identity matrix under matrix multiplication is that for any m-by-n matrix A, Im A = A In = A.

Invertible matrix
In linear algebra, an n-by-n (square) matrix A is
called invertible or nonsingular or nondegenerate if there exists an n-by-n matrix B such that AB = BA = In,

where In denotes the n-by-n identity matrix and the multiplication used is ordinary matrix
multiplication. If this is the case, then the matrix B is uniquely determined by A and is called
the inverse of A, denoted by A−1. It follows from the theory of matrices that if AB = I

for square matrices A and B, then also BA = I.[1]

Non-square matrices (m-by-n matrices for which m ≠ n) do not have an inverse.
However, in some cases such a matrix may have a left inverse or right inverse.
If A is m-by-n and the rank of A is equal to n, then A has a left inverse: an n-by-m matrix B such that BA = I. If A has rank m, then it has a right inverse: an n-by-m matrix B such that AB = I.

While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any commutative ring.

A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is 0. Singular matrices are rare in the sense that if you pick a random square matrix, it will almost surely not be singular.

Matrix inversion is the process of finding the matrix B that satisfies the prior
equation for a given invertible matrix A.
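A hedged sketch of matrix inversion with numpy (values invented): the inverse is computed and verified, and a singular matrix is recognised by its zero determinant.

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
B = np.linalg.inv(A)
print(np.allclose(A @ B, np.eye(2)) and np.allclose(B @ A, np.eye(2)))  # True

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])                 # rank 1, hence singular
print(np.isclose(np.linalg.det(S), 0.0))   # True: S has no inverse
```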
Involutory matrix
In mathematics, an involutory matrix is a matrix that is its own inverse. That is, matrix A is an involution if and only if A2 = I. One of the three classes of elementary matrix is involutory, namely the row-interchange elementary matrix. A special case of another class of elementary matrix, that which represents multiplication of a row or column by −1, is also involutory; it is in fact a trivial example of a signature matrix, all of which are involutory.

Involutory matrices are all square roots of the identity matrix. This is simply a consequence of the fact that any nonsingular matrix multiplied by its inverse is the identity. If A is an n × n matrix, then A is involutory if and only if ½(A + I) is idempotent.

An involutory matrix which is also symmetric is an orthogonal matrix, and thus represents
an isometry (a linear transformation which preserves Euclidean distance). A reflection matrix is an
example of an involutory matrix.
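As a small sketch of the statements above (matrix chosen by us): a row-interchange elementary matrix is involutory, and ½(A + I) is then idempotent.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])    # swaps two rows when applied from the left
I = np.eye(2)
P = (A + I) / 2

print(np.allclose(A @ A, I))  # A is its own inverse
print(np.allclose(P @ P, P))  # ½(A + I) is idempotent
```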

Irregular matrix
An irregular matrix, or ragged matrix, can be described as a matrix that has a different
number of elements in each row. Ragged matrices are not used in linear algebra, since
standard matrix transformations cannot be performed on them, but they are useful as
arrays in computing. Irregular matrices are typically stored using Iliffe vectors.

For example, the following is an irregular matrix:
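The original example is not reproduced here; as an illustration (with made-up data), an irregular matrix is typically held as a list of rows of differing lengths, the array-of-arrays layout mentioned above.

```python
# A ragged structure: each row has its own length, so it is stored as an
# array of arrays rather than as a rectangular block.
ragged = [
    [1, 2, 3, 4],
    [5, 6],
    [7, 8, 9],
]
for row in ragged:
    print(len(row), row)
```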


List of matrices

Organization of a matrix

This page lists some important classes of matrices used in mathematics, science and
engineering. A matrix (plural matrices, or less commonly matrixes) is a rectangular array
of numbers called entries, as shown at the right. Matrices have a long history of both
study and application, leading to diverse ways of classifying matrices. A first group is
matrices satisfying concrete conditions of the entries, including constant matrices. An
important example is the identity matrix given by

Further ways of classifying matrices are according to their eigenvalues or by imposing conditions on the product of the matrix with other matrices. Finally, many domains, both in mathematics and other sciences including physics and chemistry, have particular matrices that are applied chiefly in these areas.

Matrices with explicitly constrained entries

The following lists matrices whose entries are subject to certain conditions. Many of
them apply to square matrices only, that is matrices with the same number of columns
and rows. The main diagonal of a square matrix is the diagonal joining the upper left
corner and the lower right one or equivalently the entries ai,i. The other diagonal is called
anti-diagonal (or counter-diagonal).
Name | Explanation | Notes, References
(0,1)-matrix | A matrix with all elements either 0 or 1. | Synonym for binary matrix and logical matrix.
Alternant matrix | A matrix in which successive columns have a particular function applied to their entries. |
Anti-diagonal matrix | A square matrix with all entries off the anti-diagonal equal to zero. |
Anti-Hermitian matrix | | Synonym for skew-Hermitian matrix.
Anti-symmetric matrix | | Synonym for skew-symmetric matrix.
Arrowhead matrix | A square matrix containing zeros in all entries except for the first row, first column, and main diagonal. |
Band matrix | A square matrix whose non-zero entries are confined to a diagonal band. |
Bidiagonal matrix | A matrix with elements only on the main diagonal and either the superdiagonal or subdiagonal. | Sometimes defined differently, see article.
Binary matrix | A matrix whose entries are all either 0 or 1. | Synonym for (0,1)-matrix and logical matrix. [1]
Bisymmetric matrix | A square matrix that is symmetric with respect to its main diagonal and its main cross-diagonal. |
Block-diagonal matrix | A block matrix with entries only on the diagonal. |
Block matrix | A matrix partitioned in sub-matrices called blocks. |
Block tridiagonal matrix | A block matrix which is essentially a tridiagonal matrix but with submatrices in place of scalar elements. |
Cauchy matrix | A matrix whose elements are of the form 1/(xi + yj) for (xi), (yj) injective sequences (i.e., taking every value only once). |
Centrosymmetric matrix | A matrix symmetric about its center; i.e., aij = an−i+1,n−j+1. |
Conference matrix | A square matrix with zero diagonal and +1 and −1 off the diagonal, such that CTC is a multiple of the identity matrix. |
Complex Hadamard matrix | A matrix with all rows and columns mutually orthogonal, whose entries are unimodular. |
Copositive matrix | A square matrix A with real coefficients, such that f(x) = xTAx is nonnegative for every nonnegative vector x. |
Diagonally dominant matrix | A matrix with |aii| > Σj≠i |aij| for every i. |
Diagonal matrix | A square matrix with all entries off the main diagonal equal to zero. |
Elementary matrix | A square matrix derived by applying an elementary row operation to the identity matrix. |
Equivalent matrix | A matrix that can be derived from another matrix through a sequence of elementary row or column operations. |
Frobenius matrix | A square matrix in the form of an identity matrix but with arbitrary entries in one column below the main diagonal. |
Generalized permutation matrix | A square matrix with precisely one nonzero element in each row and column. |
Integer matrix | A matrix whose entries are all integers. |
Hadamard matrix | A square matrix with entries +1, −1 whose rows are mutually orthogonal. |
Hankel matrix | A matrix with constant skew-diagonals; also an upside down Toeplitz matrix. | A square Hankel matrix is symmetric.
Hermitian matrix | A square matrix which is equal to its conjugate transpose, A = A*. |
Hessenberg matrix | An "almost" triangular matrix, for example, an upper Hessenberg matrix has zero entries below the first subdiagonal. |
Hollow matrix | A square matrix whose main diagonal comprises only zero elements. |
Logical matrix | A matrix with all entries either 0 or 1. | Synonym for (0,1)-matrix or binary matrix. Can be used to represent a k-adic relation.
Metzler matrix | A matrix whose off-diagonal entries are non-negative. |
Monomial matrix | A square matrix with exactly one non-zero entry in each row and column. | Synonym for generalized permutation matrix.
Moore matrix | A row consists of 1, a, aq, aq², etc., and each row uses a different variable. |
Nonnegative matrix | A matrix with all nonnegative entries. |
Partitioned matrix | A matrix partitioned into sub-matrices, or equivalently, a matrix whose entries are themselves matrices rather than scalars. | Synonym for block matrix.
Pentadiagonal matrix | A matrix with the only nonzero entries on the main diagonal and the two diagonals just above and below the main one. |
Permutation matrix | A matrix representation of a permutation, a square matrix with exactly one 1 in each row and column, and all other elements 0. |
Persymmetric matrix | A matrix that is symmetric about its northeast-southwest diagonal, i.e., aij = an−j+1,n−i+1. |
Polynomial matrix | A matrix whose entries are polynomials. |
Positive matrix | A matrix with all positive entries. |
Sign matrix | A matrix whose entries are either +1, 0, or −1. |
Signature matrix | A diagonal matrix where the diagonal elements are either +1 or −1. |
Skew-Hermitian matrix | A square matrix which is equal to the negative of its conjugate transpose, A* = −A. |
Skew-symmetric matrix | A matrix which is equal to the negative of its transpose, AT = −A. |
Skyline matrix | A rearrangement of the entries of a banded matrix which requires less space. |
Sparse matrix | A matrix with relatively few non-zero elements. | Sparse matrix algorithms can tackle huge sparse matrices that are utterly impractical for dense matrix algorithms.
Sylvester matrix | A square matrix whose entries come from the coefficients of two polynomials. | The Sylvester matrix is nonsingular if and only if the two polynomials are coprime to each other.
Symmetric matrix | A square matrix which is equal to its transpose, A = AT (ai,j = aj,i). |
Toeplitz matrix | A matrix with constant diagonals. |
Triangular matrix | A matrix with all entries above the main diagonal equal to zero (lower triangular) or with all entries below the main diagonal equal to zero (upper triangular). |
Tridiagonal matrix | A matrix with the only nonzero entries on the main diagonal and the diagonals just above and below the main one. |
Unitary matrix | A square matrix whose inverse is equal to its conjugate transpose, A−1 = A*. |
Vandermonde matrix | A row consists of 1, a, a², a³, etc., and each row uses a different variable. |
Walsh matrix | A square matrix, with dimensions a power of 2, the entries of which are +1 or −1. |
Z-matrix | A matrix with all off-diagonal entries less than zero. |

Constant matrices

The list below comprises matrices whose elements are constant for any given dimension
(size) of matrix. The matrix entries will be denoted aij. The table below uses the
Kronecker delta δij for two integers i and j, which is 1 if i = j and 0 otherwise.

Name | Explanation | Symbolic description of the entries | Notes
Exchange matrix | A binary matrix with ones on the anti-diagonal, and zeroes everywhere else. | aij = δn+1−i,j | A permutation matrix.
Hilbert matrix | | aij = (i + j − 1)−1 | A Hankel matrix.
Identity matrix | A square diagonal matrix, with all entries on the main diagonal equal to 1, and the rest 0. | aij = δij |
Lehmer matrix | | aij = min(i,j) ÷ max(i,j) | A positive symmetric matrix.
Matrix of ones | A matrix with all entries equal to one. | aij = 1 |
Pascal matrix | A matrix containing the entries of Pascal's triangle. | |
Pauli matrices | A set of three 2 × 2 complex Hermitian and unitary matrices. When combined with the I2 identity matrix, they form an orthogonal basis for the 2 × 2 complex Hermitian matrices. | |
Redheffer matrix | | aij is 1 if i divides j or if j = 1; otherwise, aij = 0 | A (0, 1)-matrix.
Shift matrix | A matrix with ones on the superdiagonal or subdiagonal and zeroes elsewhere. | aij = δi+1,j or aij = δi−1,j | Multiplication by it shifts matrix elements by one position.
Zero matrix | A matrix with all entries equal to zero. | aij = 0 |

Matrices with conditions on eigenvalues or eigenvectors


Name | Explanation | Notes
Companion matrix | A matrix whose eigenvalues are equal to the roots of the polynomial. |
Defective matrix | A square matrix that does not have a complete basis of eigenvectors, and is thus not diagonalisable. |
Diagonalizable matrix | A square matrix similar to a diagonal matrix. | It has an eigenbasis, that is, a complete set of linearly independent eigenvectors.
Hurwitz matrix | A matrix whose eigenvalues have strictly negative real part. A stable system of differential equations may be represented by a Hurwitz matrix. |
Positive-definite matrix | A Hermitian matrix with every eigenvalue positive. |
Stability matrix | | Synonym for Hurwitz matrix.
Stieltjes matrix | A real symmetric positive definite matrix with nonpositive off-diagonal entries. | Special case of an M-matrix.

Matrices satisfying conditions on products or inverses

A number of matrix-related notions are about properties of products or inverses of the given matrix. The matrix product of an m-by-n matrix A and an n-by-k matrix B is the m-by-k matrix C given by ci,j = Σr ai,r br,j, the sum running over r = 1, …, n.

This matrix product is denoted AB. Unlike the product of numbers, matrix products are
not commutative, that is to say AB need not be equal to BA. A number of notions are
concerned with the failure of this commutativity. An inverse of a square matrix A is a
matrix B (necessarily of the same dimension as A) such that AB = 1. Equivalently, BA =
1. An inverse need not exist. If it exists, B is uniquely determined, and is also called the
inverse of A, denoted A−1.

Name | Explanation | Notes
Congruent matrix | Two matrices A and B are congruent if there exists an invertible matrix P such that PTAP = B. | Compare with similar matrices.
Idempotent matrix | A matrix that has the property A² = AA = A. |
Invertible matrix | A square matrix having a multiplicative inverse, that is, a matrix B such that AB = BA = I. | Invertible matrices form the general linear group.
Involutory matrix | A square matrix which is its own inverse, i.e., AA = I. | Signature matrices have this property.
Nilpotent matrix | A square matrix satisfying Aq = 0 for some positive integer q. | Equivalently, the only eigenvalue of A is 0.
Normal matrix | A square matrix that commutes with its conjugate transpose: AA∗ = A∗A. | They are the matrices to which the spectral theorem applies.
Orthogonal matrix | A matrix whose inverse is equal to its transpose, A−1 = AT. | They form the orthogonal group.
Orthonormal matrix | A matrix whose columns are orthonormal vectors. |
Similar matrix | Two matrices A and B are similar if there exists an invertible matrix P such that P−1AP = B. | Compare with congruent matrices.
Singular matrix | A square matrix that is not invertible. |
Unimodular matrix | An invertible matrix with entries in the integers (integer matrix). | Necessarily the determinant is +1 or −1.
Unipotent matrix | A square matrix with all eigenvalues equal to 1. | Equivalently, A − I is nilpotent. See also unipotent group.
Totally unimodular matrix | A matrix for which every non-singular square submatrix is unimodular. This has some implications in the linear programming relaxation of an integer program. |
Weighing matrix | A square matrix the entries of which are in {0, 1, −1}, such that AAT = wI for some positive integer w. |

Matrices with specific applications


Name | Explanation | Used in | Notes
Adjugate matrix | The matrix containing minors of a given square matrix. | Calculating inverse matrices via Laplace's formula. |
Alternating sign matrix | A square matrix with entries 0, 1 and −1 such that the sum of each row and column is 1 and the nonzero entries in each row and column alternate in sign. | Dodgson condensation to calculate determinants. |
Augmented matrix | A matrix whose rows are concatenations of the rows of two smaller matrices. | Calculating inverse matrices. |
Bézout matrix | A square matrix which may be used as a tool for the efficient location of polynomial zeros. | Control theory, stable polynomials. |
Carleman matrix | A matrix that converts composition of functions to multiplication of matrices. | |
Cartan matrix | A matrix representing a non-semisimple finite-dimensional algebra, or a Lie algebra (the two meanings are distinct). | |
Circulant matrix | A matrix where each row is a circular shift of its predecessor. | Systems of linear equations, discrete Fourier transform. |
Cofactor matrix | A matrix containing the cofactors, i.e., signed minors, of a given matrix. | |
Commutation matrix | A matrix used for transforming the vectorized form of a matrix into the vectorized form of its transpose. | |
Coxeter matrix | A matrix related to Coxeter groups, which describe symmetries in a structure or system. | |
Distance matrix | A square matrix containing the distances, taken pairwise, of a set of points. | Computer vision, network analysis. | See also Euclidean distance matrix.
Duplication matrix | A linear transformation matrix used for transforming half-vectorizations of matrices into vectorizations. | |
Elimination matrix | A linear transformation matrix used for transforming vectorizations of matrices into half-vectorizations. | |
Euclidean distance matrix | A matrix that describes the pairwise distances between points in Euclidean space. | | See also distance matrix.
Fundamental matrix (linear differential equation) | A matrix containing the fundamental solutions of a linear ordinary differential equation. | |
Generator matrix | A matrix whose rows generate all elements of a linear code. | Coding theory. |
Gramian matrix | A matrix containing the pairwise angles of given vectors in an inner product space. | Testing linear independence of vectors, including ones in function spaces. | They are real symmetric.
Hessian matrix | A square matrix of second partial derivatives of a scalar-valued function. | Detecting local minima and maxima of scalar-valued functions in several variables; blob detection (computer vision). |
Householder matrix | A transformation matrix widely used in matrix algorithms. | QR decomposition. |
Jacobian matrix | A matrix of first-order partial derivatives of a vector-valued function. | Implicit function theorem; smooth morphisms (algebraic geometry). |
Payoff matrix | A matrix in game theory and economics that represents the payoffs in a normal form game where players move simultaneously. | |
Pick matrix | A matrix that occurs in the study of analytical interpolation problems. | |
Random matrix | A matrix whose entries consist of random numbers from some specified random distribution. | |
Rotation matrix | A matrix representing a rotational geometric transformation. | Special orthogonal group, Euler angles. |
Seifert matrix | A matrix in knot theory, primarily for the algebraic analysis of topological properties of knots and links. | Alexander polynomial. |
Shear matrix | An elementary matrix whose corresponding geometric transformation is a shear transformation. | |
Similarity matrix | A matrix of scores which express the similarity between two data points. | Sequence alignment. |
Symplectic matrix | A square matrix preserving a standard skew-symmetric form. | Symplectic group, symplectic manifold. |
Totally positive matrix | A matrix with determinants of all its square submatrices positive. | Generating the reference points of Bézier curves in computer graphics. |
Transformation matrix | A matrix representing a linear transformation, often from one co-ordinate space to another to facilitate a geometric transform or projection. | |

• Derogatory matrix — a square n×n matrix whose minimal polynomial is of order less
than n.
• Moment matrix — a symmetric matrix whose elements are the products of common
row/column index dependent monomials.
• X-Y-Z matrix — a generalisation of the (rectangular) matrix to a cuboidal form (a 3-
dimensional array of entries).

• Substitution matrix — a matrix of scores used in sequence alignment to rate the substitution of one character for another.

Main diagonal
In linear algebra, the main diagonal (sometimes leading diagonal or primary diagonal)
of a matrix A is the collection of cells Ai,j where i is equal to j.

The main diagonal of a square matrix is the diagonal which runs from the top left corner
to the bottom right corner. For example, the following matrix has 1s down its main
diagonal:

A square matrix like the above in which the entries outside the main diagonal are all zero
is called a diagonal matrix. The sum of the entries on the main diagonal of a matrix is
known as the trace of that matrix.
The main diagonal of a rectangular matrix is the diagonal which runs from the top left
corner and steps down and right, until the right edge is reached.

The other diagonal is called the antidiagonal, counterdiagonal, secondary diagonal, or minor diagonal.

Modal matrix
In linear algebra, the modal matrix is used in the diagonalization process involving
eigenvalues and eigenvectors.

Assume a linear system of the following form:

where X is n×1, A is n×n, and B is n×1. X typically represents the state vector, and U the
system input.

Specifically, the modal matrix M is the n×n matrix formed with the eigenvectors of A as
columns in M. It is utilized in the similarity transformation D = M−1AM,

where D is an n×n diagonal matrix with the eigenvalues of A on the main diagonal of D
and zeros elsewhere. (Note that the eigenvalues should appear left→right, top→bottom in the
same order as their eigenvectors are arranged left→right into M.)

This process is also known as the similarity transform.
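A short hedged sketch of the diagonalization described above (A is invented): numpy's eig returns the eigenvalues and a modal matrix M whose columns are the matching eigenvectors.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, M = np.linalg.eig(A)          # columns of M are eigenvectors of A
D = np.linalg.inv(M) @ A @ M           # the similarity transform M^-1 A M

print(np.allclose(D, np.diag(eigvals)))   # True (up to round-off)
```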


Nilpotent matrix
In linear algebra, a nilpotent matrix is a square matrix N such that Nk = 0 for some positive integer k. The smallest such k is sometimes called the degree of N.

More generally, a nilpotent transformation is a linear transformation L of a vector space such that Lk = 0 for some positive integer k. Both of these concepts are special cases of a more general concept of nilpotence that applies to elements of rings.

Examples

The matrix

is nilpotent, since M2 = 0. More generally, any triangular matrix with 0's along the main
diagonal is nilpotent. For example, the matrix

is nilpotent, with

Though the examples above have a large number of zero entries, a typical nilpotent
matrix does not. For example, the matrices
both square to zero, though neither matrix has zero entries.
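A brief sketch (matrix invented, not one of the examples pictured above): a strictly upper triangular matrix is nilpotent, here of degree 3.

```python
import numpy as np

N = np.array([[0, 1, 2],
              [0, 0, 3],
              [0, 0, 0]])

print(np.linalg.matrix_power(N, 2))   # not yet zero
print(np.linalg.matrix_power(N, 3))   # the zero matrix, so N has degree 3
```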

Normal matrix
A complex square matrix A is a normal matrix if

A*A=AA*

where A* is the conjugate transpose of A. That is, a matrix is normal if it commutes with
its conjugate transpose.

If A is a real matrix, then A*=AT; it is normal if ATA = AAT.

Normality is a convenient test for diagonalizability: every normal matrix can be converted to a diagonal matrix by a unitary transform, and every matrix which can be made diagonal by a unitary transform is also normal, but finding the desired transform requires much more work than simply testing to see whether the matrix is normal.

Orthogonal matrix
In linear algebra, an orthogonal matrix is a square matrix with real entries whose
columns (or rows) are orthogonal unit vectors (i.e., orthonormal). Because the columns
are unit vectors in addition to being orthogonal, some people use the term orthonormal
to describe such matrices.

Equivalently, a matrix Q is orthogonal if its transpose is equal to its inverse, QT = Q−1; alternatively, QTQ = QQT = I.

As a linear transformation, an orthogonal matrix preserves the dot product of vectors, and
therefore acts as an isometry of Euclidean space, such as a rotation or reflection. In other
words, it is a unitary transformation.
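As a hedged illustration (angle and vector chosen arbitrarily), a plane rotation matrix is orthogonal: its transpose is its inverse and it preserves lengths.

```python
import numpy as np

theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([3.0, 4.0])
print(np.allclose(Q.T @ Q, np.eye(2)))           # Q^T = Q^-1
print(np.linalg.norm(v), np.linalg.norm(Q @ v))  # both 5.0: the isometry property
```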

Pentadiagonal matrix
In linear algebra, a pentadiagonal matrix is a matrix that is nearly diagonal; to be exact,
it is a matrix in which the only nonzero entries are on the main diagonal, and the first two
diagonals above and below it. So it is of the form

It follows that a pentadiagonal matrix has at most 5n − 6 nonzero entries, where n is the
size of the matrix. Hence, pentadiagonal matrices are sparse. This makes them useful in
numerical analysis.

Polynomial matrix
A polynomial matrix or matrix polynomial is a matrix whose elements are univariate
or multivariate polynomials.

A univariate polynomial matrix P of degree p is defined as P(x) = A(0) + A(1)x + A(2)x2 + … + A(p)xp, where A(i) denotes a matrix of constant coefficients, and A(p) is non-zero. Thus a polynomial matrix is the matrix-equivalent of a polynomial, with each element of the matrix satisfying the definition of a polynomial of degree p.

An example 3×3 polynomial matrix, degree 2:

We can express this by saying that for a ring R, the rings Mn(R[X]) and (Mn(R))[X] are
isomorphic.

Rank (linear algebra)


The column rank of a matrix A is the maximal number of linearly independent columns
of A. Likewise, the row rank is the maximal number of linearly independent rows of A.

Since the column rank and the row rank are always equal, they are simply called the rank
of A. More abstractly, it is the dimension of the image of A. For the proofs, see, e.g.,
Murase (1960)[1], Andrea & Wong (1960)[2], Williams & Cater (1968)[3], Mackiw
(1995)[4]. It is commonly denoted by either rk(A) or rank A.

The rank of an m × n matrix is at most min(m, n). A matrix that has a rank as large as possible
is said to have full rank; otherwise, the matrix is rank deficient.

Equivalence of definitions

A common approach is to reduce the matrix to a simpler form, generally row-echelon form, by row operations: row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row-echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots and the number of non-zero rows.
Equivalence of the determinantal definition (the rank is the order of the largest non-vanishing minor) is generally proved separately. It is a generalization of the statement that if the span of n
vectors has dimension p, then p of those vectors span the space: one can choose a
spanning set that is a subset of the vectors. For determinantal rank, the statement is that if
the row rank (column rank) of a matrix is p, then one can choose a p × p submatrix that is
invertible: a subset of the rows and a subset of the columns simultaneously define an
invertible submatrix. It can be alternatively stated as: if the span of n vectors has
dimension p, then p of these vectors span the space and there is a set of p coordinates on
which they are linearly independent.

A non-vanishing p-minor (p × p submatrix with non-vanishing determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward.

Applications

One useful application of calculating the rank of a matrix is the computation of the
number of solutions of a system of linear equations. The system is inconsistent if the rank
of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other
hand, ranks of these two matrices are equal, the system must have at least one solution.
The solution is unique if and only if the rank equals the number of variables. Otherwise
the general solution has k free parameters where k is the difference between the number
of variables and the rank. This theorem is due to Rouché and Capelli.

In control theory, the rank of a matrix can be used to determine whether a linear system is
controllable, or observable.
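A small sketch of the Rouché-Capelli test described above (the systems are invented): the ranks of the coefficient matrix and of the augmented matrix are compared.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                 # rank 1
b_ok = np.array([[3.0], [6.0]])            # consistent right-hand side
b_bad = np.array([[3.0], [7.0]])           # inconsistent right-hand side

r = np.linalg.matrix_rank(A)
print(r, np.linalg.matrix_rank(np.hstack([A, b_ok])))    # 1 1 -> at least one solution
print(r, np.linalg.matrix_rank(np.hstack([A, b_bad])))   # 1 2 -> inconsistent
```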

Row vector
In linear algebra, a row vector or row matrix is a 1 × n matrix, that is, a matrix
consisting of a single row:[1]

The transpose of a row vector is a column vector:


The set of all row vectors forms a vector space which is the dual space to the set of all column vectors.

Skew-Hermitian matrix
In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or
antihermitian if its conjugate transpose is equal to its negative.[1] That is, the matrix A is
skew-Hermitian if it satisfies the relation A* = −A,

where A* denotes the conjugate transpose of the matrix. In component form, this means that ai,j = −conj(aj,i)

for all i and j, where ai,j is the i,j-th entry of A, and conj denotes complex conjugation.

Skew-Hermitian matrices can be understood as the complex versions of real skew-symmetric matrices, or as the matrix analogue of the purely imaginary numbers.[2] The concept can be generalized to include linear transformations of any complex vector space with a sesquilinear norm.

Skew-symmetric matrix
In linear algebra, a skew-symmetric (or antisymmetric or antimetric[1]) matrix is a
square matrix A whose transpose is also its negative; that is, it satisfies the equation AT = −A,
or in component form, aij = −aji for all i and j.

For example, the following matrix is skew-symmetric:

Compare this with a symmetric matrix, whose transpose is the same as the matrix (AT = A), or with an orthogonal matrix, the transpose of which is equal to its inverse (AT = A−1).

Symmetric matrix
In linear algebra, a symmetric matrix is a square matrix, A, that is equal to its transpose, A = AT.

The entries of a symmetric matrix are symmetric with respect to the main diagonal (top
left to bottom right). So if the entries are written as A = (aij), then aij = aji
for all indices i and j. The following 3×3 matrix is symmetric:

A matrix is called skew-symmetric or antisymmetric if its transpose is the same as its negative. The following 3×3 matrix is skew-symmetric:
The following matrix is neither symmetric nor skew-symmetric:

Every diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly,
each diagonal element of a skew-symmetric matrix must be zero, since each is its own
negative.

In linear algebra, a real symmetric matrix represents a self-adjoint operator over a real
inner product space. The corresponding object for a complex inner product space is a
Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose.
Therefore, in linear algebra over the complex numbers, it is generally assumed that a
symmetric matrix refers to one which has real-valued entries. Symmetric matrices
appear naturally in a variety of applications, and typical numerical linear algebra software
makes special accommodations for them.

Transpose
This article is about the Matrix Transpose operator. For other uses, see
Transposition

In linear algebra, the transpose of a matrix A is another matrix AT (also written A′, Atr or tA) created by any one of the following equivalent actions:

• write the rows of A as the columns of AT


• write the columns of A as the rows of AT
• reflect A by its main diagonal (which starts from the top left) to obtain AT

Formally, the (i,j) element of AT is the (j,i) element of A.

[AT]ij = [A]ji
If A is an m × n matrix then AT is an n × m matrix. The transpose of a scalar is the same
scalar.

Triangular matrix
In the mathematical discipline of linear algebra, a triangular matrix is a special kind of
square matrix where the entries either below or above the main diagonal are zero.
Because matrix equations with triangular matrices are easier to solve, they are very
important in numerical analysis. The LU decomposition gives an algorithm to decompose
any invertible matrix A into a normed lower triangular matrix L and an upper triangular
matrix U.

A matrix of the form

is called lower triangular matrix or left triangular matrix, and analogously a matrix of
the form

is called upper triangular matrix or right triangular matrix.

The standard operations on triangular matrices conveniently preserve the triangular form:
the sum and product of two upper triangular matrices is again upper triangular. The
inverse of an upper triangular matrix is also upper triangular, and of course we can
multiply an upper triangular matrix by a constant and it will still be upper triangular. This
means that the upper triangular matrices form a subalgebra of the ring of square matrices
for any given size. The analogous result holds for lower triangular matrices. Note,
however, that the product of a lower triangular with an upper triangular matrix does not
preserve triangularity.
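A minimal sketch of why triangular systems are easy to solve (U and b are invented, and back_substitute is our own helper, not a library routine): an upper triangular system is solved by back substitution.

```python
import numpy as np

def back_substitute(U, b):
    """Solve U x = b for an upper triangular U by back substitution."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([5.0, 8.0, 8.0])
x = back_substitute(U, b)
print(np.allclose(U @ x, b))   # True
```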

Tridiagonal matrix
In linear algebra, a tridiagonal matrix is a matrix that is "almost" a diagonal matrix. To
be exact: a tridiagonal matrix has nonzero elements only in the main diagonal, the first
diagonal below this, and the first diagonal above the main diagonal.

For example, the following matrix is tridiagonal:

A determinant formed from a tridiagonal matrix is known as a continuant.[1]

Unitary matrix
In mathematics, a unitary matrix is an n by n complex matrix U satisfying the condition
U*U = UU* = In, where In is the identity matrix in n dimensions and U* is the conjugate transpose (also
called the Hermitian adjoint) of U. Note this condition says that a matrix U is unitary if
and only if it has an inverse which is equal to its conjugate transpose, U−1 = U*.

A unitary matrix in which all entries are real is an orthogonal matrix. Just as an
orthogonal matrix G preserves the (real) inner product of two real vectors, ⟨Gx, Gy⟩ = ⟨x, y⟩,

so also a unitary matrix U satisfies ⟨Ux, Uy⟩ = ⟨x, y⟩

for all complex vectors x and y, where ⟨·,·⟩ now stands for the standard inner product on Cn.

If U is an n by n matrix then the following are all equivalent conditions:

1. U is unitary
2. U* is unitary
3. the columns of U form an orthonormal basis of Cn with respect to this inner product
4. the rows of U form an orthonormal basis of Cn with respect to this inner product
5. U is an isometry with respect to the norm from this inner product
6. U is a normal matrix with eigenvalues lying on the unit circle.
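A hedged numerical check of a few of these conditions (the matrix is invented): U*U = I and every eigenvalue has modulus 1.

```python
import numpy as np

U = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

print(np.allclose(U.conj().T @ U, np.eye(2)))   # condition 1: U* U = I
print(np.abs(np.linalg.eigvals(U)))             # condition 6: all moduli equal 1
```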

X–Y–Z matrix
An X–Y–Z matrix is a generalization of the concept of matrix to three dimensions.

An X–Y–Z matrix A will thus have components Ai,j,k where 1 ≤ i ≤ M, 1 ≤ j ≤ N, 1 ≤ k ≤ P for some positive integers M, N, P.

Such matrices are helpful for example when considering grids in three dimensions, as in
computer simulations of three-dimensional problems.
Zero matrix
In mathematics, particularly linear algebra, a zero matrix is a matrix with all its entries
being zero. Some examples of zero matrices are

The set of m×n matrices with entries in a ring K forms a ring, Km×n. The zero matrix
in Km×n is the matrix with all entries equal to 0K, where 0K is the additive
identity in K.

The zero matrix is the additive identity in Km×n. That is, for all matrices A in Km×n it satisfies A + 0 = 0 + A = A.

There is exactly one zero matrix of any given size m×n having entries in a given ring, so
when the context is clear one often refers to the zero matrix. In general the zero element
of a ring is unique and typically denoted as 0 without any subscript indicating the parent
ring. Hence the examples above represent zero matrices over any ring.

The zero matrix represents the linear transformation sending all vectors to the zero
vector.
