
Appendix F: MATRIX CALCULUS

TABLE OF CONTENTS

§F.1. Introduction
§F.2. The Derivatives of Vector Functions
  §F.2.1. Derivative of Vector with Respect to Vector
  §F.2.2. Derivative of a Scalar with Respect to Vector
  §F.2.3. Derivative of Vector with Respect to Scalar
  §F.2.4. Jacobian of a Variable Transformation
§F.3. The Chain Rule for Vector Functions
§F.4. The Derivative of Scalar Functions of a Matrix
  §F.4.1. Functions of a Matrix Determinant
§F.5. The Matrix Differential

§F.1. Introduction
In this Appendix we collect some useful formulas of matrix calculus that often appear in finite
element derivations.

§F.2. The Derivatives of Vector Functions

Let x and y be vectors of orders n and m respectively:


x  y 
1 1
 x2   y2 
x= 
 ..  , y=  ..
,
 (F.1)
. .
xn ym
where each component yi may be a function of all the x j , a fact represented by saying that y is a
function of x, or
y = y(x). (F.2)
If n = 1, x reduces to a scalar, which we call x. If m = 1, y reduces to a scalar, which we call y.
Various applications are studied in the following subsections.

§F.2.1. Derivative of Vector with Respect to Vector


The derivative of the vector y with respect to vector x is the n × m matrix
\frac{\partial y}{\partial x} \;\overset{\text{def}}{=}\;
\begin{bmatrix}
\frac{\partial y_1}{\partial x_1} & \frac{\partial y_2}{\partial x_1} & \cdots & \frac{\partial y_m}{\partial x_1} \\
\frac{\partial y_1}{\partial x_2} & \frac{\partial y_2}{\partial x_2} & \cdots & \frac{\partial y_m}{\partial x_2} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial y_1}{\partial x_n} & \frac{\partial y_2}{\partial x_n} & \cdots & \frac{\partial y_m}{\partial x_n}
\end{bmatrix}    (F.3)

§F.2.2. Derivative of a Scalar with Respect to Vector


If y is a scalar,

\frac{\partial y}{\partial x} \;\overset{\text{def}}{=}\;
\begin{bmatrix}
\frac{\partial y}{\partial x_1} \\
\frac{\partial y}{\partial x_2} \\
\vdots \\
\frac{\partial y}{\partial x_n}
\end{bmatrix}    (F.4)

§F.2.3. Derivative of Vector with Respect to Scalar


If x is a scalar,
\frac{\partial y}{\partial x} \;\overset{\text{def}}{=}\;
\begin{bmatrix}
\frac{\partial y_1}{\partial x} & \frac{\partial y_2}{\partial x} & \cdots & \frac{\partial y_m}{\partial x}
\end{bmatrix}    (F.5)


Remark F.1. Many authors, notably in statistics and economics, define the derivatives as the transposes of
those given above.1 This has the advantage of better agreement of matrix products with composition schemes
such as the chain rule. Evidently the notation is not yet stable.

Example F.1. Given



y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, \qquad
x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}    (F.6)

and

y_1 = x_1^2 - x_2, \qquad y_2 = x_3^2 + 3 x_2,    (F.7)

the partial derivative matrix ∂y/∂x is computed as follows:


\frac{\partial y}{\partial x} =
\begin{bmatrix}
\frac{\partial y_1}{\partial x_1} & \frac{\partial y_2}{\partial x_1} \\
\frac{\partial y_1}{\partial x_2} & \frac{\partial y_2}{\partial x_2} \\
\frac{\partial y_1}{\partial x_3} & \frac{\partial y_2}{\partial x_3}
\end{bmatrix}
=
\begin{bmatrix}
2 x_1 & 0 \\
-1 & 3 \\
0 & 2 x_3
\end{bmatrix}    (F.8)
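This result is easy to verify numerically. Below is a minimal NumPy sketch (the helper names and the sample point are illustrative, not part of the text) that compares the matrix of (F.8) against central finite differences, laid out in the n × m convention of (F.3):

```python
import numpy as np

def y_func(x):
    # Example F.1: y1 = x1^2 - x2,  y2 = x3^2 + 3 x2
    return np.array([x[0]**2 - x[1], x[2]**2 + 3*x[1]])

def dydx_analytic(x):
    # The n x m matrix of (F.8): row i holds the partials with respect to x_i
    return np.array([[2*x[0], 0.0],
                     [-1.0,   3.0],
                     [0.0,    2*x[2]]])

def dydx_numeric(f, x, h=1e-6):
    # Central differences; entry (i, j) approximates dy_j / dx_i
    J = np.zeros((len(x), len(f(x))))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        J[i] = (f(x + e) - f(x - e)) / (2*h)
    return J

x0 = np.array([1.0, 2.0, 3.0])
assert np.allclose(dydx_analytic(x0), dydx_numeric(y_func, x0), atol=1e-5)
```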

§F.2.4. Jacobian of a Variable Transformation


In multivariate analysis, if x and y are of the same order, the determinant of the square matrix ∂x/∂y,
that is

J = \left| \frac{\partial x}{\partial y} \right|    (F.9)

is called the Jacobian of the transformation determined by y = y(x). The inverse determinant is

J^{-1} = \left| \frac{\partial y}{\partial x} \right|.    (F.10)

Example F.2. The transformation from spherical to Cartesian coordinates is defined by

x = r sin θ cos ψ, y = r sin θ sin ψ, z = r cos θ (F.11)


where r > 0, 0 < θ < π and 0 ≤ ψ < 2π. To obtain the Jacobian of the transformation, let
x \equiv x_1, \quad y \equiv x_2, \quad z \equiv x_3, \qquad
r \equiv y_1, \quad \theta \equiv y_2, \quad \psi \equiv y_3.    (F.12)

Then

J = \left| \frac{\partial x}{\partial y} \right| =
\begin{vmatrix}
\sin y_2 \cos y_3 & \sin y_2 \sin y_3 & \cos y_2 \\
y_1 \cos y_2 \cos y_3 & y_1 \cos y_2 \sin y_3 & -y_1 \sin y_2 \\
-y_1 \sin y_2 \sin y_3 & y_1 \sin y_2 \cos y_3 & 0
\end{vmatrix}
= y_1^2 \sin y_2 = r^2 \sin \theta.    (F.13)
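The determinant r^2 sin θ can likewise be spot-checked numerically. A short NumPy sketch (function names and the sample point are illustrative) that differentiates (F.11) by central differences and compares the Jacobian determinant with r^2 sin θ:

```python
import numpy as np

def cart(y):
    # (F.11) with r = y1, theta = y2, psi = y3
    r, th, ps = y
    return np.array([r*np.sin(th)*np.cos(ps),
                     r*np.sin(th)*np.sin(ps),
                     r*np.cos(th)])

def jacobian_det(y, h=1e-6):
    # Central-difference approximation of the 3 x 3 matrix dx/dy, then its determinant
    J = np.zeros((3, 3))
    for i in range(3):
        e = np.zeros(3); e[i] = h
        J[i] = (cart(y + e) - cart(y - e)) / (2*h)
    return np.linalg.det(J)

y0 = np.array([2.0, 0.7, 1.2])          # a sample point (r, theta, psi)
assert np.isclose(jacobian_det(y0), y0[0]**2 * np.sin(y0[1]), atol=1e-6)
```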

The foregoing definitions can be used to obtain the derivatives of many frequently used expressions,
including quadratic and bilinear forms.

1 One author puts it this way: “When one does matrix calculus, one quickly finds that there are two kinds of people in this
world: those who think the gradient is a row vector, and those who think it is a column vector.”


Example F.3. Consider the quadratic form

y = x^T A x    (F.14)

where A is a square matrix of order n. Using the definition (F.3) one obtains

\frac{\partial y}{\partial x} = A x + A^T x    (F.15)

and if A is symmetric,

\frac{\partial y}{\partial x} = 2 A x.    (F.16)

We can of course continue the differentiation process:

\frac{\partial^2 y}{\partial x^2} = \frac{\partial}{\partial x}\left( \frac{\partial y}{\partial x} \right) = A + A^T,    (F.17)

and if A is symmetric,

\frac{\partial^2 y}{\partial x^2} = 2 A.    (F.18)
∂x2

The following table collects several useful vector derivative formulas.

        y          ∂y/∂x
        Ax         A^T
        x^T A      A
        x^T x      2x
        x^T A x    Ax + A^T x
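The scalar-valued entries of this table can be verified numerically. A minimal NumPy sketch (the helper `grad` and the random test data are illustrative) using central differences and the column-gradient convention of (F.4):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

def grad(f, x, h=1e-6):
    # Finite-difference gradient, laid out as a column vector per (F.4)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2*h)
    return g

# x^T x  ->  2x
assert np.allclose(grad(lambda v: v @ v, x), 2*x, atol=1e-5)
# x^T A x  ->  A x + A^T x
assert np.allclose(grad(lambda v: v @ A @ v, x), A @ x + A.T @ x, atol=1e-5)
```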

§F.3. The Chain Rule for Vector Functions


Let

x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad
y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_r \end{bmatrix} \qquad \text{and} \qquad
z = \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_m \end{bmatrix}    (F.19)

where z is a function of y, which is in turn a function of x. Using the definition (F.3), we can write

\left( \frac{\partial z}{\partial x} \right)^T =
\begin{bmatrix}
\frac{\partial z_1}{\partial x_1} & \frac{\partial z_1}{\partial x_2} & \cdots & \frac{\partial z_1}{\partial x_n} \\
\frac{\partial z_2}{\partial x_1} & \frac{\partial z_2}{\partial x_2} & \cdots & \frac{\partial z_2}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial z_m}{\partial x_1} & \frac{\partial z_m}{\partial x_2} & \cdots & \frac{\partial z_m}{\partial x_n}
\end{bmatrix}    (F.20)

Each entry of this matrix may be expanded as


 
\frac{\partial z_i}{\partial x_j} = \sum_{q=1}^{r} \frac{\partial z_i}{\partial y_q} \frac{\partial y_q}{\partial x_j},
\qquad i = 1, 2, \ldots, m, \quad j = 1, 2, \ldots, n.    (F.21)


Then

\left( \frac{\partial z}{\partial x} \right)^T =
\begin{bmatrix}
\sum_q \frac{\partial z_1}{\partial y_q} \frac{\partial y_q}{\partial x_1} & \sum_q \frac{\partial z_1}{\partial y_q} \frac{\partial y_q}{\partial x_2} & \cdots & \sum_q \frac{\partial z_1}{\partial y_q} \frac{\partial y_q}{\partial x_n} \\
\sum_q \frac{\partial z_2}{\partial y_q} \frac{\partial y_q}{\partial x_1} & \sum_q \frac{\partial z_2}{\partial y_q} \frac{\partial y_q}{\partial x_2} & \cdots & \sum_q \frac{\partial z_2}{\partial y_q} \frac{\partial y_q}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\sum_q \frac{\partial z_m}{\partial y_q} \frac{\partial y_q}{\partial x_1} & \sum_q \frac{\partial z_m}{\partial y_q} \frac{\partial y_q}{\partial x_2} & \cdots & \sum_q \frac{\partial z_m}{\partial y_q} \frac{\partial y_q}{\partial x_n}
\end{bmatrix}

= \begin{bmatrix}
\frac{\partial z_1}{\partial y_1} & \frac{\partial z_1}{\partial y_2} & \cdots & \frac{\partial z_1}{\partial y_r} \\
\frac{\partial z_2}{\partial y_1} & \frac{\partial z_2}{\partial y_2} & \cdots & \frac{\partial z_2}{\partial y_r} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial z_m}{\partial y_1} & \frac{\partial z_m}{\partial y_2} & \cdots & \frac{\partial z_m}{\partial y_r}
\end{bmatrix}
\begin{bmatrix}
\frac{\partial y_1}{\partial x_1} & \frac{\partial y_1}{\partial x_2} & \cdots & \frac{\partial y_1}{\partial x_n} \\
\frac{\partial y_2}{\partial x_1} & \frac{\partial y_2}{\partial x_2} & \cdots & \frac{\partial y_2}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial y_r}{\partial x_1} & \frac{\partial y_r}{\partial x_2} & \cdots & \frac{\partial y_r}{\partial x_n}
\end{bmatrix}

= \left( \frac{\partial z}{\partial y} \right)^T \left( \frac{\partial y}{\partial x} \right)^T
= \left( \frac{\partial y}{\partial x} \, \frac{\partial z}{\partial y} \right)^T.    (F.22)
On transposing both sides, we finally obtain

∂z ∂y ∂z
= , (F.23)
∂x ∂x ∂y

which is the chain rule for vectors. If all vectors reduce to scalars,

∂z ∂ y ∂z ∂z ∂ y
= = , (F.24)
∂x ∂x ∂y ∂y ∂x

which is the conventional chain rule of calculus. Note, however, that when we are dealing with
vectors, the chain of matrices builds “toward the left.” For example, if w is a function of z, which
is a function of y, which is a function of x,

∂w ∂y ∂z ∂w
= . (F.25)
∂x ∂x ∂y ∂z

On the other hand, in the ordinary chain rule one can build the product either to the right or to
the left because scalar multiplication is commutative.
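The matrix chain rule (F.23) can be checked numerically. A NumPy sketch (the two maps and the sample point are illustrative) comparing the directly differenced composite with the product built toward the left:

```python
import numpy as np

# Illustrative maps: x in R^2 -> y in R^3 -> z in R^2
def y_of_x(x):
    return np.array([x[0]*x[1], x[0]**2, np.sin(x[1])])

def z_of_y(y):
    return np.array([y[0] + y[1]*y[2], y[2]**2])

def deriv(f, x, h=1e-6):
    # Entry (i, j) approximates d f_j / d x_i, the convention of (F.3)
    fx = f(x)
    D = np.zeros((len(x), len(fx)))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        D[i] = (f(x + e) - f(x - e)) / (2*h)
    return D

x0 = np.array([0.8, -0.3])
lhs = deriv(lambda x: z_of_y(y_of_x(x)), x0)          # dz/dx directly
rhs = deriv(y_of_x, x0) @ deriv(z_of_y, y_of_x(x0))   # (dy/dx)(dz/dy), built toward the left
assert np.allclose(lhs, rhs, atol=1e-4)
```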

§F.4. The Derivative of Scalar Functions of a Matrix

Let X = (x_ij) be a matrix of order (m × n) and let

y = f (X), (F.26)

be a scalar function of X. The derivative of y with respect to X, denoted by

∂y
, (F.27)
∂X


is defined as the following matrix of order (m × n):


G = \frac{\partial y}{\partial X} =
\begin{bmatrix}
\frac{\partial y}{\partial x_{11}} & \frac{\partial y}{\partial x_{12}} & \cdots & \frac{\partial y}{\partial x_{1n}} \\
\frac{\partial y}{\partial x_{21}} & \frac{\partial y}{\partial x_{22}} & \cdots & \frac{\partial y}{\partial x_{2n}} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial y}{\partial x_{m1}} & \frac{\partial y}{\partial x_{m2}} & \cdots & \frac{\partial y}{\partial x_{mn}}
\end{bmatrix}
= \left[ \frac{\partial y}{\partial x_{ij}} \right]
= \sum_{i,j} \mathbf{E}_{ij} \frac{\partial y}{\partial x_{ij}},    (F.28)

where E_ij denotes the elementary matrix* of order (m × n). This matrix G is also known as a
gradient matrix.

Example F.4. Find the gradient matrix if y is the trace of a square matrix X of order n, that is


y = \operatorname{tr}(X) = \sum_{i=1}^{n} x_{ii}.    (F.29)

Obviously all non-diagonal partials vanish whereas the diagonal partials equal one, thus

G = \frac{\partial y}{\partial X} = I,    (F.30)

where I denotes the identity matrix of order n.
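A numerical sketch of the gradient matrix (F.28) applied to this example (the helper `grad_matrix` is illustrative, not part of the text); each entry is obtained by perturbing one component of X, exactly as the elementary matrices E_ij suggest:

```python
import numpy as np

def grad_matrix(f, X, h=1e-6):
    # Entry (i, j) approximates df/dx_ij, i.e. the gradient matrix G of (F.28)
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X); E[i, j] = h   # elementary perturbation, cf. E_ij
            G[i, j] = (f(X + E) - f(X - E)) / (2*h)
    return G

X = np.random.default_rng(1).standard_normal((3, 3))
# Example F.4: the gradient of tr(X) is the identity, Eq. (F.30)
assert np.allclose(grad_matrix(np.trace, X), np.eye(3), atol=1e-6)
```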

§F.4.1. Functions of a Matrix Determinant


An important family of derivatives with respect to a matrix involves functions of the determinant
of a matrix, for example y = |X| or y = |AX|. Suppose that we have a matrix Y = [y_ij] whose
components are functions of a matrix X = [x_rs], that is y_ij = f_ij(x_rs), and set out to build the
matrix

\frac{\partial |Y|}{\partial X}.    (F.31)

Using the chain rule we can write

\frac{\partial |Y|}{\partial x_{rs}} = \sum_i \sum_j \frac{\partial |Y|}{\partial y_{ij}} \frac{\partial y_{ij}}{\partial x_{rs}}.    (F.32)

But

|Y| = \sum_j y_{ij} Y_{ij},    (F.33)

where Y_ij is the cofactor of the element y_ij in |Y|. Since the cofactors Y_i1, Y_i2, ... are independent
of the element y_ij, we have

\frac{\partial |Y|}{\partial y_{ij}} = Y_{ij}.    (F.34)

It follows that

\frac{\partial |Y|}{\partial x_{rs}} = \sum_i \sum_j Y_{ij} \frac{\partial y_{ij}}{\partial x_{rs}}.    (F.35)

* The elementary matrix E_ij of order m × n has all zero entries except for the (i, j) entry, which is one.


There is an alternative form of this result which is occasionally useful. Define

a_{ij} = Y_{ij}, \quad A = [a_{ij}], \qquad b_{ij} = \frac{\partial y_{ij}}{\partial x_{rs}}, \quad B = [b_{ij}].    (F.36)

Then it can be shown that

\frac{\partial |Y|}{\partial x_{rs}} = \operatorname{tr}(A B^T) = \operatorname{tr}(B^T A).    (F.37)

Example F.5. If X is a nonsingular square matrix and Z = |X| X^{-1} its adjugate (the transpose of
the cofactor matrix),

G = \frac{\partial |X|}{\partial X} = Z^T.    (F.38)

If X is also symmetric,

G = \frac{\partial |X|}{\partial X} = 2\, Z^T - \operatorname{diag}(Z^T).    (F.39)

§F.5. The Matrix Differential


For a scalar function f(x), where x is an n-vector, the ordinary differential of multivariate calculus
is defined as

df = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i} \, dx_i.    (F.40)

In harmony with this formula, we define the differential of an m × n matrix X = [x_ij] to be

dX \;\overset{\text{def}}{=}\;
\begin{bmatrix}
dx_{11} & dx_{12} & \cdots & dx_{1n} \\
dx_{21} & dx_{22} & \cdots & dx_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
dx_{m1} & dx_{m2} & \cdots & dx_{mn}
\end{bmatrix}.    (F.41)
This definition complies with the multiplicative and associative rules

d(αX) = α dX, d(X + Y) = dX + dY. (F.42)

If X and Y are product-conforming matrices, it can be verified that the differential of their product
is

d(XY) = (dX)\,Y + X\,(dY),    (F.43)

which is an extension of the well known rule d(xy) = y dx + x dy for scalar functions.

Example F.6. Let X = [x_ij] be a square nonsingular matrix of order n and denote Z = |X| X^{-1}. Find the
differential of the determinant of X:

d|X| = \sum_{i,j} \frac{\partial |X|}{\partial x_{ij}} \, dx_{ij} = \sum_{i,j} X_{ij} \, dx_{ij}
= \operatorname{tr}\big( (|X|\, X^{-1})\, dX \big) = \operatorname{tr}(Z\, dX),    (F.44)

where X_ij denotes the cofactor of x_ij in |X|.
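A numerical sanity check of this differential, which is Jacobi's formula d|X| = |X| tr(X^{-1} dX); the matrix size, the diagonal shift, and the perturbation scale below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4)) + 4*np.eye(4)   # well-conditioned sample matrix
dX = 1e-7 * rng.standard_normal((4, 4))         # a small perturbation
Z = np.linalg.det(X) * np.linalg.inv(X)         # Z = |X| X^{-1}

lhs = np.linalg.det(X + dX) - np.linalg.det(X)  # actual change of the determinant
rhs = np.trace(Z @ dX)                          # first-order prediction d|X| = tr(Z dX)
assert np.isclose(lhs, rhs, rtol=1e-3, atol=1e-9)
```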


Example F.7. With the same assumptions as above, find d(X^{-1}). The quickest derivation follows by
differentiating both sides of the identity X^{-1} X = I:

d(X^{-1})\, X + X^{-1}\, dX = 0,    (F.45)

from which

d(X^{-1}) = -X^{-1}\, dX\, X^{-1}.    (F.46)
If X reduces to the scalar x we have

d\left( \frac{1}{x} \right) = -\frac{dx}{x^2}.    (F.47)
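The matrix result can be confirmed numerically by comparing the actual change of the inverse with the first-order prediction (the sample matrix and perturbation scale below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((4, 4)) + 4*np.eye(4)   # well-conditioned sample matrix
dX = 1e-6 * rng.standard_normal((4, 4))         # a small perturbation
Xi = np.linalg.inv(X)

lhs = np.linalg.inv(X + dX) - Xi                # actual change of the inverse
rhs = -Xi @ dX @ Xi                             # first-order prediction, Eq. (F.46)
assert np.allclose(lhs, rhs, atol=1e-10)
```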
