Cholesky Decomposition
Cholesky decomposition is the most efficient method to check
whether a real symmetric matrix is positive definite.
From: Computer Aided Chemical Engineering, 2013
Michael Parker, in Digital Signal Processing 101 (Second Edition), 2017
https://www.sciencedirect.com/topics/engineering/cholesky-decomposition 1/19
7/5/2020 Cholesky Decomposition - an overview | ScienceDirect Topics
The Cholesky decomposition maps matrix A into the product A = L · LH, where L is a lower triangular matrix and LH is its transposed, complex-conjugate (Hermitian) counterpart, and therefore upper triangular (Fig. 13.6). This is possible because A is a square, conjugate-symmetric, positive definite matrix. Solving for L requires square root and inverse square root operators. The great majority of the computation in Cholesky decomposition is in finding the matrix L, which is done by expanding the vector dot-product equations for each element of L and solving recursively. The product L · LH is then substituted for A, after which x is solved for using a substitution method. The equations are introduced first, followed by a [4 × 4] example to better illustrate the method.
The first nonzero element in each column is a diagonal element, and can be found from

Diagonal Elements of L

$$L_{jj} = \sqrt{a_{jj} - \sum_{k=1}^{j-1} L_{jk} L_{jk}^{*}} \tag{13.1}$$

The remaining elements below the diagonal are found from

$$L_{ij} = \frac{1}{L_{jj}} \left( a_{ij} - \sum_{k=1}^{j-1} L_{ik} L_{jk}^{*} \right) \tag{13.2}$$

where i and j are the row and column indices of the matrix.
Eqs. (13.1) and (13.2) are the equations used to find Ljj and Lij. Solving for each element requires computing a vector dot product whose length is proportional to the matrix size.
Although matrices A and L may be complex, the diagonal elements must be real by definition. Therefore, the square root is taken of a real number, and the denominator of the divide is also real.
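As a concrete sketch (not the book's implementation), the recursions of Eqs. (13.1) and (13.2) can be coded directly; the function name cholesky_lower is an illustrative choice:

```python
import numpy as np

def cholesky_lower(A):
    """Compute L such that A = L @ L.conj().T, column by column,
    using the dot-product recursions of Eqs. (13.1) and (13.2)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=complex)
    for j in range(n):
        # Diagonal element: square root of a real quantity (Eq. 13.1)
        s = A[j, j] - np.vdot(L[j, :j], L[j, :j])
        L[j, j] = np.sqrt(s.real)
        # Elements below the diagonal (Eq. 13.2): dot product of length j
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j].conj()) / L[j, j]
    return L
```

Note how the dot product in column j has length j, matching the observation that the work per element is proportional to the matrix size.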
Once L is computed, substitute L · LH for A and solve the lower triangular system L · y = b by forward substitution:

$$y_i = \frac{1}{L_{ii}} \left( b_i - \sum_{k=1}^{i-1} L_{ik} \, y_k \right) \tag{13.3}$$

Eq. (13.3) is almost the same as Eq. (13.2). If we treat b as an extension of A and y as an extension of L, the process of solving for y is the same as solving for L. The only difference is that, in the multiply operation, the second operand is not conjugated (this consideration may be important for hardware implementations, allowing computational units to be shared).
After y is computed, x can be found by backward substitution in LH · x = y (Fig. 13.7). LH is an upper triangular matrix; therefore, x must be solved in reverse order, from bottom to top, which is why this is called backward substitution. Solving for x is thus a separate process from the Cholesky decomposition and the forward-substitution solver.
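The two substitution passes can be sketched as follows (a generic illustration, not the book's hardware-oriented implementation; cholesky_solve is an illustrative name):

```python
import numpy as np

def cholesky_solve(L, b):
    """Solve A x = b given the factor L with A = L @ L.conj().T.
    Forward substitution solves L y = b from top to bottom;
    backward substitution solves L^H x = y from bottom to top."""
    n = L.shape[0]
    y = np.zeros(n, dtype=complex)
    for i in range(n):
        # Same recursion as Eq. (13.2), but without conjugation (Eq. 13.3)
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    LH = L.conj().T                    # upper triangular
    x = np.zeros(n, dtype=complex)
    for i in range(n - 1, -1, -1):     # reverse order: bottom to top
        x[i] = (y[i] - LH[i, i + 1:] @ x[i + 1:]) / LH[i, i]
    return x
```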
The algorithm steps and data dependencies are more easily illustrated using a
small [4 × 4] matrix example.
Algorithm 21.6
Incomplete LU Decomposition

function ilub(A)
  % Compute an incomplete LU decomposition
  % Input: n × n matrix A
  for j = 1:n−1 do
    for i = j+1:n do
      if aij ≠ 0 then
        aij = aij / ajj
        for k = j+1:n do
          if aik ≠ 0 then
            aik = aik − aij ajk
          end if
        end for
      end if
    end for
  end for
  U = upper triangular portion of A
  L = portion of A below the main diagonal
  for i = 1:n do
    lii = 1
  end for
end function
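A dense-storage sketch of the same idea (illustrative only; a practical implementation would use a sparse format): elimination updates are applied only where A already has a nonzero, so the factors never fill in beyond the original pattern.

```python
import numpy as np

def ilu0(A):
    """Incomplete LU: Gaussian elimination restricted to the
    original sparsity pattern of A (zero fill-in)."""
    a = A.astype(float).copy()
    nz = A != 0                          # fixed sparsity pattern
    n = a.shape[0]
    for j in range(n - 1):
        for i in range(j + 1, n):
            if nz[i, j]:
                a[i, j] /= a[j, j]       # multiplier stored in place
                for k in range(j + 1, n):
                    if nz[i, k]:
                        a[i, k] -= a[i, j] * a[j, k]
    U = np.triu(a)
    L = np.tril(a, -1) + np.eye(n)       # unit lower triangular
    return L, U
```

For a matrix with no zero entries, the pattern restriction never triggers and ILU reduces to the exact LU factorization.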
The 903 × 903 nonsymmetric matrix DK01R in Figure 21.11 was used to solve a computational fluid dynamics problem. DK01R was obtained from the University of Florida Sparse Matrix Collection. A right-hand side, b_DK01R, and an approximate solution, x_DK01R, were supplied with the matrix. The approximate condition number of the matrix is on the order of 2 × 10^8, so it is ill-conditioned. Using m = 300, tol on the order of 10^−15, and niter = 20, a solution was obtained using gmresb and mpregmres. Table 21.1 gives the results of comparing the solutions from mpregmres and gmresb to x_DK01R (columns: iter, ‖r‖, and time).
SOLUTION OF EQUATIONS
M.V.K. Chari, S.J. Salon, in Numerical Methods in Electromagnetism, 2000
Numerical Example
Consider the 6 × 6 system
(11.54)
The matrix is symmetric and positive definite. In the Cholesky decomposition, the l11 term is the square root of a11, or 2.646. To find l21 we note that the product l11l21 = −2. Because we know l11 = 2.646, we find l21 = −0.756. Similarly, the first column of L (and therefore the first row of LT) is found by dividing the first column of A by l11. To find l22 we note that l21² + l22² = a22; the only unknown here is l22, which we now solve for. We leave it as an exercise to find the remaining elements of the decomposition as
(11.55)
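The first-column arithmetic above can be checked numerically. The full 6 × 6 matrix of Eq. (11.54) is not reproduced in this extract, so the check below uses only the entries recoverable from the text (a11 = 7, since l11 = √a11 ≈ 2.646, and a21 = −2):

```python
import math

a11, a21 = 7.0, -2.0       # recovered from l11 = sqrt(a11) and l11*l21 = -2
l11 = math.sqrt(a11)
l21 = a21 / l11            # first column of L = first column of A over l11
print(round(l11, 3), round(l21, 3))   # 2.646 -0.756
```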
Note that the entries in Equation 2.48 are in the order in which the calculations are performed (row-wise, from left to right). We may generalize and write the following expressions for the case of a positive definite symmetric n × n matrix (1 ≤ i ≤ n, 1 ≤ j ≤ n):

$$l_{jj} = \sqrt{a_{jj} - \sum_{k=1}^{j-1} l_{jk}^{2}}, \qquad l_{ij} = \frac{1}{l_{jj}} \left( a_{ij} - \sum_{k=1}^{j-1} l_{ik} \, l_{jk} \right), \quad i > j \tag{2.50}$$
Example 2.13
Solve the problem in Example 2.12 using Cholesky decomposition.
Solution:
Step 1 Determine L
We note the elements of the coefficient matrix. We shall first verify that the coefficient matrix is positive definite: the quadratic form xᵀAx is positive for all x, and the matrix is seen to be diagonally dominant; hence the matrix is positive definite. From Expressions 2.49 we get the elements of L.
Thus we have (entries are shown rounded to four digits after the decimal point)
Forward substitution is used to obtain the y’s. From the first equation we
have
Backward substitution is used to obtain the x’s. From the fourth equation we
have
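The system from Example 2.12 is not reproduced in this extract, so the sketch below substitutes a hypothetical diagonally dominant symmetric 4 × 4 system to illustrate the same three steps: factor, forward-substitute for y, back-substitute for x.

```python
import numpy as np

# Hypothetical stand-in for the coefficient matrix of Example 2.12
A = np.array([[4., -1., 0., 0.],
              [-1., 4., -1., 0.],
              [0., -1., 4., -1.],
              [0., 0., -1., 4.]])
b = np.array([3., 2., 2., 3.])

# Diagonal dominance (with a positive diagonal) implies positive definiteness
assert all(A[i, i] > np.sum(np.abs(A[i])) - np.abs(A[i, i]) for i in range(4))

L = np.linalg.cholesky(A)      # Step 1: A = L L^T
y = np.linalg.solve(L, b)      # Step 2: forward substitution, L y = b
x = np.linalg.solve(L.T, y)    # Step 3: backward substitution, L^T x = y
assert np.allclose(A @ x, b)
```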
A sparse matrix is a matrix in which the majority of the elements are zeros. Such matrices occur frequently in engineering practice. Instead of storing the entire matrix, only the nonzero entries can be stored. A simple scheme is to store, for each nonzero entry, its row index, its column index, and its value.
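The triplet (coordinate, or COO) scheme can be sketched as follows, using a small illustrative matrix (the original example matrix is not reproduced in this extract):

```python
import numpy as np

# Coordinate (COO) storage: keep only (row, column, value) triplets.
A = np.array([[4., 0., 0., -1.],
              [0., 5., 0., 0.],
              [0., 0., 6., 0.],
              [-1., 0., 0., 7.]])
triplets = [(i, j, A[i, j]) for i in range(4) for j in range(4) if A[i, j] != 0]
print(len(triplets))           # 6 entries stored instead of 16

# The full matrix is recoverable from the triplets alone.
B = np.zeros((4, 4))
for i, j, v in triplets:
    B[i, j] = v
assert (A == B).all()
```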
The computational effort for treating sparse matrices, especially large ones, can be reduced considerably by employing special techniques.
We shall consider a special form of sparse matrix, the tridiagonal matrix, and discuss methods for solving it.
Least-Squares Problems
William Ford, in Numerical Linear Algebra with Applications, 2015
The MATLAB function cholsolve in the software distribution solves the linear
system Ax = b, where A is a positive definite matrix.
Remark 13.6
If matrix A is tridiagonal and positive definite, it is more efficient to use the
algorithm tridiagLU to factor the matrix.
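tridiagLU itself belongs to the book's software distribution; the function below is a generic sketch of the same idea (a single O(n) elimination sweep, often called the Thomas algorithm), not the book's code:

```python
import numpy as np

def tridiag_lu_solve(l_diag, d, u_diag, b):
    """Solve a tridiagonal system in O(n): eliminate the sub-diagonal
    in one forward sweep, then back-substitute.
    l_diag: sub-diagonal (length n-1), d: diagonal (length n),
    u_diag: super-diagonal (length n-1)."""
    n = len(d)
    dp = d.astype(float).copy()
    bp = b.astype(float).copy()
    for i in range(1, n):                 # forward elimination
        m = l_diag[i - 1] / dp[i - 1]
        dp[i] -= m * u_diag[i - 1]
        bp[i] -= m * bp[i - 1]
    x = np.zeros(n)
    x[-1] = bp[-1] / dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (bp[i] - u_diag[i] * x[i + 1]) / dp[i]
    return x
```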
3 Reformulation Strategies
The reformulation strategies based on Sylvester's criterion, proposed earlier by
Blanco and Bandoni (2007), and Cholesky decomposition, proposed here, are
applied to problems (2) and (4). These strategies enable the use of off-the-shelf
solvers to tackle eigenvalue optimization problems rather than relying on
specialized solvers.
One method for solving for SI is the QR decomposition, which will be used here. Another popular method is the Cholesky decomposition, since the interference covariance matrix is Hermitian (conjugate symmetric).
Perform the substitution SI = Q · R, a product of two matrices.
Q and R can be computed from SI using one of several methods, such as Gram–Schmidt, the Householder transformation, or Givens rotations. By the nature of this decomposition, R turns out to be an upper triangular matrix, and Q an orthonormal matrix, that is, a matrix composed of orthogonal vectors of unit length. Orthonormal matrices have the key property QH Q = I, so the inverse of Q is simply its conjugate transpose.
z is a complex scalar, which is then fed into the detection threshold process. Over the 16 STAP processes, the values can be integrated for each of the range and steering-vector locations.
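The property QH Q = I is what makes the QR route convenient: a system SI w = b reduces to the triangular system R w = QH b, solvable by backward substitution. A sketch with stand-in data (the radar covariance data itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in Hermitian positive definite "covariance" matrix
X = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
S = X.conj().T @ X
b = rng.standard_normal(4) + 1j * rng.standard_normal(4)

Q, R = np.linalg.qr(S)               # S = Q R: Q orthonormal, R upper triangular
assert np.allclose(Q.conj().T @ Q, np.eye(4))   # key property: Q^H Q = I
# S w = b  =>  R w = Q^H b, a triangular solve
w = np.linalg.solve(R, Q.conj().T @ b)
assert np.allclose(S @ w, b)
```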
Shown in Fig. 21.7 is a plot of the inverted covariance matrix. In this case, there is an interfering signal at 60 degrees azimuth, and a target of interest at 45 degrees, at a range of 1723 m and a normalized Doppler of 0.11. Note that this is an airborne radar scanning the ground for slow-moving targets, hence the longer range.
Notice the very small values, on the order of −80 dB, present at 60 degrees. The STAP filtering process is detecting the correlation associated with the interfering signal direction at 60 degrees. By inverting the covariance matrix, this jammer is severely attenuated. Notice also the diagonal clutter line. This is a side-looking airborne radar, so the ground clutter has positive Doppler in the forward direction (angle) and negative Doppler in the backward direction. This ground clutter is attenuated at about −30 dB, proportionally less severely than the more powerful interfering signal.
The target is not present in this plot. Recall that the estimated covariance matrix is
determined in range bins surrounding but not at the expected range of the target.
But in any case, it would not likely be visible anyway. However, the use of STAP
with the appropriate target steering vector can make a dramatic difference, as
shown in Fig. 8. The top plot shows the high return of the peak ground clutter at
range of 1000 m with magnitude of ∼0.01, and noise floor of about ∼0.0005.
With STAP processing, the noise floor is pushed down to ∼0.1 × 10−6 and the
target signal at about 1.5 × 10−6 is now easily detected. It is also clear that floating-
point numerical representation and processing will be needed for adequate
performance of the STAP algorithm.
The STAP method described is known as the power-domain method: the covariance matrix estimation squares the radar data, so the algorithm operates on signal power. This also increases the dynamic range required of the algorithm, but that is easily managed since floating point is used for all processing steps.
In order to perform the wind vibration analysis, the wind speed time history must first be obtained; the harmonic superposition method [247] is adopted to simulate the wind velocity history. The calculation process is as follows:
1. Take the Cholesky decomposition of the cross-spectral density matrix,
(8.2)
2. Generate the random phases with the required correlation characteristics; the random phase can be expressed as
(8.4)
3. The wind speed time history of the simulated points can then be calculated using the following equation, according to the Shinozuka theory.
(8.5)
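A minimal single-point sketch of the harmonic-superposition idea follows. The cross-spectral matrix of Eq. (8.2), its Cholesky factor, and the spectrum used in the book are not reproduced in this extract, so the spectrum S(w) below is a placeholder assumption, and the multi-point (Cholesky-coupled) step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512
dw = 0.02
w = dw * (np.arange(N) + 0.5)            # frequency grid
S = 1.0 / (1.0 + w**2)                   # assumed one-sided spectrum (placeholder)
phi = rng.uniform(0.0, 2.0 * np.pi, N)   # independent uniform random phases
t = np.linspace(0.0, 60.0, 600)
# Superpose harmonics: v(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k)
v = np.sqrt(2.0 * S * dw) @ np.cos(np.outer(w, t) + phi[:, None])
```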
Figure 8.10. Simulation of the time-history curves of the wind velocity. (A) Time-
history curve of wind velocity of point 1 and (B) time-history curve of wind velocity
of point 23.
The skyline storage scheme reduces the memory needed to store matrices as well as the number of calculations during the factorization process. The skyline scheme, when used together with a direct solving procedure, is well suited to FEM calculations, because FEM calculations generate sparse matrices with a symmetric structure and because the skyline structure is preserved during LU factorization. However, this method implies extra calculations, mostly during factorization, and the storage scheme has a clear advantage over fully populated matrices only when the matrices are strongly sparse. The amount of stored data can be greatly reduced by using programs that produce fill-reducing orderings. Many algorithms presented in the literature seem to give better results than the ones currently implemented in ATILA. Note that while these algorithms could improve the calculation speed in ATILA for harmonic and transient analysis, they are not very efficient when the eigenfrequencies of a transducer are to be found. Transducer models use coupled electric-mechanical-fluid equations. The electric constitutive relationships generate null entries in the mass matrix, so the mass matrix cannot be easily factorized using a Gaussian elimination procedure. Currently, a static condensation of the electric equations is used in ATILA: the electric equations are eliminated using the electro-mechanical coupling equations. This significantly increases the amount of storage, because the structure of the matrix is changed, which makes reordering algorithms inefficient. There are two ways to solve this problem: either change the storage scheme or reorganize the way the electric equations are accounted for. Many other storage schemes are presented in the literature; most of them are efficient only when used together with an incremental solving procedure. As discussed above, we consider the direct solving procedure well suited here, owing to the symmetric structure of the matrices involved in FEM calculations.
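A minimal sketch of the skyline (envelope) idea for a symmetric matrix, not ATILA's implementation: each column stores only the entries from its first nonzero row down to the diagonal, together with a column-height pointer.

```python
import numpy as np

# Symmetric matrix: only the upper triangle needs to be stored.
A = np.array([[9., 1., 0., 2.],
              [1., 8., 3., 0.],
              [0., 3., 7., 4.],
              [2., 0., 4., 6.]])
n = A.shape[0]
# For each column, the row of its first nonzero (the "skyline height")
first = [min(np.nonzero(A[:, j])[0][0], j) for j in range(n)]
# Store each column from its first nonzero down to the diagonal,
# including any zeros inside the envelope
columns = [A[first[j]:j + 1, j].copy() for j in range(n)]

def entry(i, j):
    """Read A[i, j] back from skyline storage (upper triangle, i <= j)."""
    return columns[j][i - first[j]] if i >= first[j] else 0.0

assert all(entry(i, j) == A[i, j] for j in range(n) for i in range(j + 1))
```

Entries above each column's skyline are implicitly zero, which is exactly the structure that direct LU factorization preserves.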
In order to reorganize the way data are managed in the code, it is interesting to
note that we are generally looking for the natural mode of the structure. Hence it is
not necessary to hold electric equations during the eigenmode calculation – they
must be accounted for during the factorization procedure. The new method is to
organize equations in order to take into account the null entries in the mass matrix
during the matrix factorization by eliminating electric equations during the
eigenmode calculations. Using this method, it is possible to keep the symmetric
sparse structure of the matrix and then efficiently use the ordering procedure to
reduce the matrix filling. The new version of ATILA, called ATILA++, is written in C++.