
NONAME00.CPP, May 8, 2020

void Venka(){ // Cholesky square-root method; S.P. Venkateshan, Prasanna Swaminathan,
    // Computational Methods in Engineering, 2014
    // Cholesky Decomposition - an overview | ScienceDirect Topics, page 7
    //*****************************************************************************************
    cout << "See Molanos Method STATIC CONDENSATION SQRT BANDED SYMMETRIC MATRIX-> " << endl;
    cout << " linkedin copyright 2020 or SCRIBD" << endl;
    cout << endl;
    long xmax, i, j, k, i1, ma, mr, l;
    //*****************************************************************************************
    /* Alternative 7x7 banded test system (right-hand side stored in column 0):
    xmax = 7L;
    float **a = new float*[xmax+1L];
    for(i = 1L; i <= xmax; i++) a[i] = new float[xmax+1L];
    for(i = 1L; i <= xmax; i++)
        for(j = i; j <= xmax; j++) a[i][j] = 0.0f;
    a[1][1] = 9 ; a[1][2] = 6; a[1][3] = 0; a[1][4] = 3;
    a[2][2] = 8 ; a[2][3] = 0; a[2][4] = 0; a[2][5] = 4;
    a[3][3] = 16; a[3][4] = 4; a[3][5] = -4; a[3][6] = -20;
    a[4][4] = 28; a[4][5] = 2; a[4][6] = 0; a[4][7] = 0;
    a[5][5] = 15; a[5][6] = 9; a[5][7] = 3;
    a[6][6] = 63; a[6][7] = 13;
    a[7][7] = 14;
    a[1][0] = 33; a[2][0] = 42; a[3][0] = -76; a[4][0] = 137; a[5][0] = 154;
    a[6][0] = 454; a[7][0] = 191;
    */
    xmax = 4L;
    float **a = new float*[xmax+1L];      // rows 1..xmax are used; column 0 holds b
    for (i = 1L; i <= xmax; i++) a[i] = new float[xmax+1L];
    //*****************************************************************************************
    for (i = 1L; i <= xmax; i++)          // clear the upper triangle
        for (j = i; j <= xmax; j++) a[i][j] = 0.0f;
    //*****************************************************************************************
    a[1][1] = 4; a[1][2] = 1; a[1][3] = 1; a[1][4] = 1;
    a[2][2] = 4; a[2][3] = 1; a[2][4] = 1;
    a[3][3] = 4; a[3][4] = 1;
    a[4][4] = 4;
    a[1][0] = 15.9f; a[2][0] = 17.7f; a[3][0] = 13.2f; a[4][0] = 9.9f;
    //*****************************************************************************************
    for (i = 1L; i <= xmax; i++){         // Cholesky decomposition, upper triangular factor
        for (j = i, i1 = i - 1L, k = 1L; k <= i1; k++) a[i][j] -= a[k][i]*a[k][j];
        for (a[i][j] = sqrt(a[i][j]), ++j; j <= xmax; a[i][j++] /= a[i][i])
            for (k = 1L; k <= i1; k++) a[i][j] -= a[k][i]*a[k][j];
    }
    //*****************************************************************************************
    for (i = 1L; i <= xmax; i++){
        for (j = i; j <= xmax; j++)
            cout << "c[" << i << "][" << j << "]= " << a[i][j] << " ";
        cout << endl;
    }
    //*****************************************************************************************
    for (i = 1L; i <= xmax; i++){         // forward substitution
        for (i1 = i - 1L, k = 1L; k <= i1; k++) a[i][0L] -= a[k][i]*a[k][0L];
        a[i][0L] /= a[i][i];              // (the original a[i++][0]/=a[i][i] was unsequenced)
    }
    //*****************************************************************************************
    for (i = 1L; i <= xmax; i++)
        cout << "y[" << i << "]= " << a[i][0L] << " ";
    cout << endl;
    //*****************************************************************************************
    for (i = xmax; i >= 1L; i--){         // backward substitution
        mr = xmax - (i + 1L) + 2L;
        ma = mr > xmax ? xmax : mr;
        for (l = 2L, j = i + 1L; l <= ma; l++, j++)
            a[i][0L] -= a[i][i - 1L + l]*a[j][0L];
        a[i][0L] /= a[i][i];
    }
    //*****************************************************************************************
    for (i = 1L; i <= xmax; i++)
        cout << "x[" << i << "]= " << a[i][0L] << " ";
    cout << endl;
    for (i = 1L; i <= xmax; i++) delete[] a[i];   // release the matrix
    delete[] a;
    getch();  // requires <conio.h>; use cin.get() on non-Windows compilers
}
7/5/2020 Cholesky Decomposition - an overview | ScienceDirect Topics

Cholesky Decomposition
Cholesky decomposition is the most efficient method to check
whether a real symmetric matrix is positive definite.
From: Computer Aided Chemical Engineering, 2013

Related terms:

Covariance Matrix, Eigenvalue, Eigenvector, Lower Triangular Matrix, Positive


Definite Matrix, Symmetric Matrix, Symmetric Positive Definite

Matrix Inversion
Michael Parker, in Digital Signal Processing 101 (Second Edition), 2017

13.2 Cholesky Decomposition


The Cholesky decomposition is used in the special case when A is a square,
conjugate symmetric matrix. This makes the problem a lot simpler. Recall that a
conjugate symmetric matrix is one where the element Ajk equals the element Akj
conjugated. This is shown as Ajk = Akj∗. If Ajk is a real value (not complex), then
Ajk = Akj.
Note: The conjugate is the complex value with the sign of the imaginary
component reversed. For example, the conjugate of 5 + j12 is 5 − j12. And by
definition, the diagonal elements must be real (not complex), since Ajj = Ajj∗, or
more simply, only a real number can be equal to its conjugate.
The number of floating-point math operations for the Cholesky decomposition of
an [N × N] matrix is generally estimated as:

However, the actual computational rate and efficiency depend on implementation


details and the architecture details of the computing device used (CPU, FPGA,
GPU, DSP…).
The problem statement is A · x = b, where A is an [N × N] complex symmetric
matrix, x is an unknown complex [N × 1] vector, and b is a known complex [N × 1]
vector. The solution is x = A−1 · b, which requires the inversion of matrix A
(Fig. 13.5). As directly computing the inverse of a large matrix is difficult, there is
an alternate technique using a transform to make this problem easier and require
fewer computations.

https://www.sciencedirect.com/topics/engineering/cholesky-decomposition 1/19

Figure 13.5. Problem to solve: find x.

The Cholesky decomposition maps matrix A into the product of A = L · LH where L
is the lower triangular matrix and LH is the transposed, complex conjugate or
Hermitian, and therefore of upper triangular form (Fig. 13.6). This is true because
of the special case of A being a square, conjugate symmetric matrix. The solution to
find L requires square root and inverse square root operators. The great majority of
the computations in Cholesky is spent computing the matrix L, which is found by
expanding the vector dot product equations for each element of L and solving
recursively. Then the product L · LH is substituted for A, after which x is solved
for using a substitution method. First the equations will be introduced, then an
example of the [4 × 4] case will be shown to better illustrate.

Figure 13.6. Matrix substitution.

(L′ is commonly used to indicate LT or LH if the matrix is complex)


Solving for the elements of L, where j is the column index of the matrix:

The first nonzero element in each column is the diagonal element, and it can be
found by

Diagonal elements of L:

\( L_{jj} = \sqrt{A_{jj} - \sum_{k=1}^{j-1} L_{jk} L_{jk}^{*}} \)   (13.1)

In particular, \( L_{11} = \sqrt{A_{11}} \).

Similarly, the subsequent elements in the column are related as follows:

\( A_{ij} = \sum_{k=1}^{j} L_{ik} L_{jk}^{*} \)

where i and j are the row and column indices of the matrix and \( L_{jk}^{*} \) is
the conjugate of \( L_{jk} \).


Off-diagonal elements of L:

\( L_{ij} = \dfrac{A_{ij} - \sum_{k=1}^{j-1} L_{ik} L_{jk}^{*}}{L_{jj}} \)   (13.2)

By substituting for Ljj the full recursion can be seen

Eqs. (13.1) and (13.2) are the equations that will be used to find Ljj and Lij. In
solving for one element, a vector dot product proportional to the matrix size must
be calculated.
Although matrices A and L may be complex, the diagonal elements must be real by
definition. Therefore, the square root is taken of a real number. The denominator
of the divide function is also real.
Once L is computed, perform the substitution for matrix A:

Next, we want to introduce an intermediate result, the vector y. The product of


matrix LH and vector x is defined as the vector y.

The vector y can be computed by a recursive substitution, called forward


substitution, because it is done from top to bottom. For example, y1 = b1/L11. Once
y1 is known, it is easy to solve for y2 and so on.
To restate, L and LH are known after the decomposition, and L · LH · x = b. Then
we define LH · x = y and substitute. Then we are solving L · y = b. L is a lower
triangular matrix, y and b are column vectors, and b is known. The values yj can be
found from top to bottom using forward substitution. The equation to find y is

Intermediate result y:

\( y_{j} = \dfrac{b_{j} - \sum_{k=1}^{j-1} L_{jk}\, y_{k}}{L_{jj}} \)   (13.3)

Eq. (13.3) is almost the same as Eq. (13.2). If we treat b as an extension of A and y
as an extension of L, the process of solving y is the same as solving L. The only
difference is that, in the multiply operation, the second operand is not conjugated (this
consideration may be important for hardware implementation, allowing
computational units to be shared).
After y is computed, x can be solved by backward substitution in LH · x = y
(Fig. 13.7). LH is an upper triangular matrix, therefore, x has to be solved in reverse
order—from bottom to top. That is why it is called backward substitution.
Therefore, solving x is a separate process from the Cholesky decomposition and
forward substitution solver.


Figure 13.7. Solving for y and then for x.

In addition, y has to be completely known before solving x. The equation to solve x


is shown below, where VS = N is the length of vectors x and y.
Desired result x:

\( x_{j} = \dfrac{y_{j} - \sum_{k=j+1}^{N} L_{kj}^{*}\, x_{k}}{L_{jj}} \)   (13.4)

The algorithm steps and data dependencies are more easily illustrated using a
small [4 × 4] matrix example.



URL: https://www.sciencedirect.com/science/article/pii/B9780128114537000135

Krylov Subspace Methods


William Ford, in Numerical Linear Algebra with Applications, 2015

21.7.1 Preconditioned GMRES


In the same way that we used incomplete Cholesky decomposition to precondition
A when A is positive definite, we can use the incomplete LU decomposition to
precondition a general matrix. Compute factors L and U so that if element aij ≠ 0
then the element at index (i, j) of A − LU is zero. To do this, compute the entries of L
and U at location (i, j) only if aij ≠ 0. It is hoped that if M = LU, then M−1A will have a
smaller condition number than A. Algorithm 21.6 describes the incomplete LU
decomposition. Rather than using vectorization, it is convenient for the algorithm
to use a triply nested loop. For more details see Ref. [64, pp. 287-296].

Algorithm 21.6
Incomplete LU Decomposition

function ilub(A)
% Compute an incomplete LU decomposition
% Input: n × n matrix A
% Output: lower triangular matrix L and upper triangular matrix U.
for i = 2:n do
    for j = 1:i-1 do
        if aij ≠ 0 then
            aij = aij / ajj
            for k = j+1:n do
                if aik ≠ 0 then
                    aik = aik − aij · ajk
                end if
            end for
        end if
    end for
end for
U = upper triangular portion of A
L = portion of A below the main diagonal
for i = 1:n do
    lii = 1
end for
end function

NLALIB: The function ilub implements Algorithm 21.6.


Proceeding as we did with incomplete Cholesky, there results

and

where
(21.29)

The function pregmres in the software distribution approximates the solution to Ax


= b using Equation 21.29.
Remark 21.5
Algorithm 21.6 will fail if there is a zero on the diagonal of U. In this case, it is
necessary to use Gaussian elimination with partial pivoting. We will not discuss
this, but the interested reader will find a presentation in Ref. [64, pp. 287-320]. The
software distribution contains a function mpregmres that computes the
incomplete LU decomposition with partial pivoting by using the MATLAB function
ilu. It returns a decomposition such that , so . It is
recommended that, in practice, mpregmres be used rather than pregmres.
Example 21.9

The 903 × 903 nonsymmetric matrix, DK01R, in Figure 21.11 was used to solve a
computational fluid dynamics problem. DK01R was obtained from the University of
Florida Sparse Matrix Collection. A right-hand side, b_DK01R, and an approximate
solution, x_DK01R, were supplied with the matrix. The approximate condition
number of the matrix is approximately 2 × 10^8, so it is ill-conditioned. Using
m = 300, tol = 1 × 10^−15, and niter = 20, the solution
was obtained using gmresb and mpregmres. Table 21.1 gives the results of
comparing the solutions from mpregmres and gmresb to x_DK01R.

Figure 21.11. Large nonsymmetric matrix.

Table 21.1. Comparing gmresb and mpregmres

                    iter           r               Time
Solution supplied   −              6.29 × 10^−16   −      −
gmresb              −1 (failure)   5.39 × 10^−10   6.63   9.93 × 10^−11
mpregmres           1              1.04 × 10^−15   0.91   5.20 × 10^−17

In a second experiment, the function gmresb required 13.56 s and 41 iterations to
attain a residual of approximately 8 × 10^−16. Clearly, preconditioned GMRES is
superior to normal GMRES for this problem.



URL: https://www.sciencedirect.com/science/article/pii/B9780123944351000211


SOLUTION OF EQUATIONS
M.V.K. Chari, S.J. Salon, in Numerical Methods in Electromagnetism, 2000

Numerical Example
Consider the 6 × 6 system
(11.54)

The matrix is symmetric and positive definite. In the Cholesky decomposition the
l11 term is the square root of a11. To find l21 we note that the
product l11·l21 = −2. Because we know l11 = 2.646, we now find l21 = −0.756.
Similarly, the first column of L (and therefore the first row of LT) is found by
dividing the first column of A by l11. To find l22 we note that l21² + l22² = a22; the
only unknown here is l22, which we can now find. We leave it
as an exercise to find the remaining elements of the decomposition as
(11.55)



URL: https://www.sciencedirect.com/science/article/pii/B9780126157604500122

Solution of Linear Equations


S.P. Venkateshan, Prasanna Swaminathan, in Computational Methods in
Engineering, 2014

2.5.3 Cholesky decomposition


When the coefficient matrix is positive definite and symmetric it is possible to use
Cholesky decomposition. A positive definite symmetric matrix satisfies the
condition xT Ax > 0, and the elements of L and U are such that li,j = uj,i,
i.e. U = LT and hence A = LLT. A symmetric matrix is positive definite if all its
diagonal elements are positive and each diagonal element is larger than the sum of
the magnitudes of the other elements in its row (diagonal dominance, a sufficient
condition).
Consider as an example, a 4 × 4 symmetric positive definite matrix

(2.46)

This will be written as a product of L and U given by


(2.47)

Performing the indicated multiplication we then have A =


(2.48)

Using Equations 2.46 and 2.48, equating element by element and rearranging we


get the following:
(2.49)

Note that the entries in Equation 2.48 are in the order in which calculations are
performed (row-wise from left to right). We may generalize and write the following
expressions for the case of a positive definite symmetric n × n matrix (1 ≤ i ≤ n, 1 ≤ j
≤ n).
(2.50)

Cholesky decomposition needs to determine only one triangular matrix. For
large values of n, the cost of computing the solution using Cholesky
decomposition is of order n³/3 operations.

Example 2.13
Solve the problem in Example 2.12 using Cholesky decomposition.
Solution :
Step 1 Determine L
We note that the elements of the coefficient matrix are given by

We shall verify first that the coefficient matrix is positive definite. We have

This is positive for all x. The matrix is seen to be diagonal dominant. Hence
the matrix is positive definite. From Expressions 2.49 we get the elements of
L.

Thus we have (entries are shown rounded to four significant digits after
decimals)

The calculations themselves, however, are carried out retaining enough significant
digits after the decimal point.
Step 2 Forward substitution
The first part of the problem consists of solving for the vector y, where

Forward substitution is used to obtain the y’s. From the first equation we
have

From the second equation we get

From the third equation we obtain

From the fourth equation we obtain



Step 3 Backward substitution


The second part now consists of solving for the vector x, where

Backward substitution is used to obtain the x’s. From the fourth equation we
have

From the third equation we get

From the second equation we obtain

From the first equation we obtain

Thus the solution vector is given by

Program 2.3 has been applied to Example 2.13.


Program 2.3. Cholesky decomposition

The output of the above program is

A sparse matrix is a matrix in which the majority of the elements are zeros. Such
matrices occur frequently in engineering practice. Instead of storing the
entire matrix, only the nonzero entries can be stored. A simple scheme is to
store, for each nonzero entry, its row index, column index and value.


The computational effort for treating sparse matrices can be reduced greatly
by employing special techniques, especially for large matrices.
We shall consider a special form of sparse matrix, the tridiagonal matrix, and
discuss methods for solving systems with it.



URL: https://www.sciencedirect.com/science/article/pii/B9780124167025500028

Least-Squares Problems
William Ford, in Numerical Linear Algebra with Applications, 2015

Solving Overdetermined Problems Using the Normal Equations


If A has full rank, solving the normal equations ATAx = ATb by applying the Cholesky
decomposition to the symmetric positive definite matrix ATA yields a unique least-
squares solution. Despite its simplicity, this approach is almost never used, since
• There may be some loss of significant digits when computing ATA. It may even
be singular to working precision.
• The accuracy of the solution using the normal equations depends on the
square of the condition number of the matrix. If κ(A) is large, the results can be
seriously in error.



URL: https://www.sciencedirect.com/science/article/pii/B9780123944351000168

Important Special Systems


William Ford, in Numerical Linear Algebra with Applications, 2015

13.3.3 Solving Ax = b If A Is Positive Definite


If a matrix A is positive definite, it is very straightforward to solve Ax = b.
a. Use the Cholesky decomposition to obtain A = RTR.
b. Solve the lower-triangular system RTy = b.
c. Solve the upper-triangular system Rx = y.
This is a very efficient means of solving Ax = b. Recall that we showed in Section 9.3.1
that each of forward and back substitution requires approximately n² flops, so the
solution to Ax = b using the Cholesky decomposition requires approximately (n³/3)
+ 2n² flops. The standard LU decomposition requires (2n³/3) + 2n² flops. Because of
its increased speed, the Cholesky decomposition is preferred for a large positive
definite matrix.


The MATLAB function cholsolve in the software distribution solves the linear
system Ax = b, where A is a positive definite matrix.

Example 13.7

Let A be given, and compute B = ATA. Show that B is positive
definite, and solve Bx = b using cholsolve.


» B = A’*A
B =
27 24 14
24 26 26
14 26 62
» R = cholesky(B); % no complaint. R is positive definite
» b = [25 3 35]’;
» cholsolve(R,b)
ans =
15.0455
-18.8409
5.0682
» B\b
ans =
15.0455
-18.8409
5.0682

Remark 13.6
If matrix A is tridiagonal and positive definite, it is more efficient to use the
algorithm tridiagLU to factor the matrix.



URL: https://www.sciencedirect.com/science/article/pii/B9780123944351000132

23rd European Symposium on Computer Aided Process


Engineering
Danan S. Wicaksono, Wolfgang Marquardt, in Computer Aided Chemical
Engineering, 2013

3 Reformulation Strategies
The reformulation strategies based on Sylvester's criterion, proposed earlier by
Blanco and Bandoni (2007), and Cholesky decomposition, proposed here, are
applied to problems (2) and (4). These strategies enable the use of off-the-shelf
solvers to tackle eigenvalue optimization problems rather than relying on
specialized solvers.

3.1 Sylvester's criterion


Sylvester's criterion states that a real symmetric matrix is positive definite if and
only if all its leading principal minors are positive (Gilbert, 1991).
According to Sylvester's criterion, the constraints on the positive definiteness of the
corresponding matrix enforce that all leading principal minors det(PMi) of the
corresponding matrix are positive. A small positive constant ε is introduced to
facilitate such constraints. Thus, problems (2) and (4) can be reformulated
respectively as follows:
(5)

(6)

3.2 Cholesky decomposition


A real symmetric positive definite (n × n)-matrix X can be decomposed as X = LLT
where L, the Cholesky factor, is a lower triangular matrix with positive diagonal
elements (Golub and van Loan, 1996). Cholesky decomposition is the most efficient
method to check whether a real symmetric matrix is positive definite. Therefore,
the constraints on the positive definiteness of the corresponding matrix stipulate
that all diagonal elements diagi of the Cholesky factor L are positive. Again, a small
positive constant ε is introduced. Thus, problems (2) and (4) can be reformulated
respectively as follows:
(7)

(8)



URL: https://www.sciencedirect.com/science/article/pii/B9780444632340500828

Space Time Adaptive Processing (STAP) Radar


Michael Parker, in Digital Signal Processing 101 (Second Edition), 2017

21.4 Space Time Adaptive Processing Optimal Filter


Fortunately, the matrix inversion result can be used with multiple targets at the
same range. The steps are as follows:

One method for solving for SI is known as QR decomposition, which will be used
here. Another popular method is the Cholesky decomposition, as the interference
covariance matrix is Hermitian symmetric.
Perform the substitution SI = Q·R, or product of two matrices.

Q and R can be computed from SI using one of several methods, such as Gram–
Schmidt, Householder transformation, or Givens rotation. The nature of the
decomposition into two matrices is that R will turn out to be an upper triangular
matrix, and Q will be an orthonormal matrix, or a matrix composed of orthogonal
vectors of unit length. Orthonormal matrices have the following key property:

Therefore it is trivial to invert Q. Please refer to the chapter on matrix inversion


(Chapter 13) for more detail on QR decomposition.

Since R is an upper triangular matrix, u can be solved for by a process known as “back
substitution.” This starts with the bottom row, which has one nonzero element,
allowing the bottom element of u to be solved for. This result is then back-substituted
into the second-to-bottom row, which has two nonzero elements in the R matrix, and
the second-to-bottom element of u is solved for. This continues until the vector u is
completely solved. Notice that since the steering vector t is unique for each target,
the back-substitution computation must be done for each steering vector.
Then solve for the actual weighting vector h.
h = u/(tH·u∗), where the dot product (tH·u∗) is a weighting factor (this is a complex
scalar, not a vector).
Finally solve for the final detection result z by the dot product of h and the vector y
from the range bin of interest.

z is a complex scalar, which is then fed into the detection threshold process. Over
the 16 STAP processes, the values can be integrated for each of the range and
steering vector locations.
Shown in Fig. 21.7 is a plot of the inverted covariance matrix. In this case,
there is an interfering signal at 60 degrees azimuth angle, and a target of interest
at 45 degrees, with a range of 1723 m and a normalized Doppler of 0.11. Note that this
is for an airborne radar scanning the ground searching for slow-moving targets,
hence the longer range.

Figure 21.7. Logarithmic plot of inverted covariance matrix.

Notice the very small values, on the order of −80 dB, present at 60 degrees. The
STAP filtering process is detecting the correlation associated with the interfering
signal direction at 60 degrees. By inverting the covariance matrix, this jammer will
be severely attenuated. Notice also the diagonal clutter line. This is a side-looking
airborne radar, so the ground clutter has positive Doppler looking in the forward
direction or angle and negative Doppler in the backward direction or angle. This
ground clutter is being attenuated at about −30 dB, proportionally less severely
than the more powerful interfering signal.
The target is not present in this plot. Recall that the estimated covariance matrix is
determined in range bins surrounding but not at the expected range of the target.
But in any case, it would not likely be visible anyway. However, the use of STAP
with the appropriate target steering vector can make a dramatic difference, as
shown in Fig. 21.8. The top plot shows the high return of the peak ground clutter at
a range of 1000 m with magnitude of ∼0.01, and a noise floor of about ∼0.0005.

Figure 21.8. Space time adaptive processing (STAP) gain.

With STAP processing, the noise floor is pushed down to ∼0.1 × 10^−6 and the
target signal at about 1.5 × 10^−6 is now easily detected. It is also clear that floating-
point numerical representation and processing will be needed for adequate
performance of the STAP algorithm.
The STAP method described is known as the power domain method. It is called the
power domain method because the covariance matrix estimation results in squaring
of the radar data; hence, the algorithm is operating on signal power. This also
increases the dynamic range of the algorithm, but this is easily managed as floating
point is being used for all processing steps.



URL: https://www.sciencedirect.com/science/article/pii/B9780128114537000214

Example and Program Analysis


Zhao-Dong Xu, ... Fei-Hong Xu, in Intelligent Vibration Control in Civil
Engineering Structures, 2017

8.2.2 Wind Load Simulation

In order to perform the wind vibration analysis, the wind speed time history should
be obtained first; the harmonic superposition method [247] is adopted to
simulate the wind velocity history. The calculation process is as follows:
1. Take the Cholesky decomposition of the spectral density matrix:
(8.2)

where S(ω) is the spectral density matrix, H(ω) is a lower triangular
matrix, and H(ω)*T is the conjugate transpose matrix of H(ω); H(ω) can be
expressed as follows:
(8.3)
(8.3)

2. Solve for the random phases with the related characteristics; the random phase
can be expressed as:
(8.4)

3. The wind speed time history of the simulated points can be calculated using
the following equation, according to the Shinozuka theory:
(8.5)

where N is the number of sampling points, Δω is the frequency spacing, and the
phase angles are uniformly distributed.
According to the above three steps, the wind velocity time history can be
obtained if the target power spectrum is given. In this section, the Kaimal
horizontal fluctuating wind velocity spectrum [248] is adopted as the target power
spectrum. The main factors in the wind field simulation are the span, the effective
height of the girder from the ground, the surface roughness, the average wind
velocity at the girder U(z) = 30 m/s, the number of simulation points, the spacing
of the simulation points, the upper limit frequency, the number of frequency
divisions, and the sampling time interval.
The wind field can be simplified into three independent one-dimensional
multivariable random wind velocity fields, as shown in Table 8.5. Twenty-three
simulation points are distributed along the girder from left to right with a
spacing of 10 m, 5 simulation points are distributed along the main tower from the
bottom to the top with a spacing of 5 m, and the distribution of simulated points is
shown in Fig. 8.8. According to the above theory, the wind velocity time history is
obtained. Fig. 8.10 shows the simulation results for point 1 and point 23.

Table 8.5. One-dimensional wind velocity field of the cable-stayed bridge

Wind Field Number   Location      Direction      Simulation Points
2                   Left tower    Longitudinal   5
4                   Right tower   Longitudinal   5
6                   Main girder   Vertical       23

Figure 8.10. Simulation of the time-history curves of the wind velocity. (A) Time-
history curve of wind velocity of point 1 and (B) time-history curve of wind velocity
of point 23.



URL: https://www.sciencedirect.com/science/article/pii/B9780124058743000084

The capabilities of the new version of ATILA


J-C. Debus, in Applications of ATILA FEM Software to Smart Materials, 2013

2.2 The new version of ATILA


A newer version of ATILA with new solvers that result in significantly faster
computing times is proposed. A finite element calculation process solves systems
of equations. In ATILA, according to the type of analysis, three different solvers are
used: LU or Cholesky decomposition procedure for static and harmonic analysis,
Lanczos algorithm for modal analysis, and central difference, Newmark’s, or
Wilson’s methods for transient analysis. The main improvement is reduction of the
filling of the sparse matrices generated by the analysis. A skyline storage scheme is
used, which consists of storing the elements of the matrix in such a way that most of
the zero elements are not stored. This reduces the amount of memory used to store

matrices as well as reducing the number of calculations during the factorization
process. The skyline storing scheme when used together with a direct solving
procedure is well suited for FEM calculations. This is because FEM calculations
generate sparse matrices with a symmetric structure and also because the skyline
structure is preserved during the LU factorization. However, this method implies
extra calculations mostly during the factorization process and this storage scheme
has only a clear advantage over using fully populated matrices if these matrices are
strongly sparse. It is possible to greatly reduce the amount of stored data by using
programs for producing fill reducing orderings. Many algorithms presented in the
literature seem to give better results than the ones currently implemented in
ATILA. Note that if these algorithms could improve the calculation speed in ATILA
in the case of harmonic and transient analysis, they will not be very efficient if
eigenfrequencies of a transducer are to be found. Transducer models use coupled
electric-mechanical-fluid equations. Electric constitutive relationships generate null
entries in the mass matrix. Hence the mass matrix cannot be easily factorized using
a Gaussian elimination procedure. Currently, a static condensation of the electric
equations is used in ATILA. It consists in eliminating electric equations using the
electro-mechanical coupling equations. This significantly increases the amount of
storage because the structure of the matrix is changed and this makes reordering
algorithms inefficient. There are two ways to solve this problem: either by changing
the storage scheme or by reorganizing the way the electric equations are accounted for.
Many other storage schemes are presented in the literature; most of these are very
efficient when used together with an incremental solving procedure. As
discussed above, we think that the direct solving procedure is well suited because of
the symmetric structure of the matrices involved in FEM calculations.
In order to reorganize the way data are managed in the code, it is worth noting
that we are generally looking for the natural modes of the structure. Hence it is
not necessary to retain the electric equations during the eigenmode calculation,
though they must be accounted for during the factorization procedure. The new method
is to organize the equations so as to take into account the null entries in the mass
matrix during the matrix factorization, by eliminating the electric equations during
the eigenmode calculations. Using this method, it is possible to keep the symmetric
sparse structure of the matrix and then efficiently use the ordering procedure to
reduce the matrix filling. The new version of ATILA, called ATILA++, is written
in C++.



URL: https://www.sciencedirect.com/science/article/pii/B9780857090652500023

Copyright © 2020 Elsevier B.V. or its licensors or contributors.


ScienceDirect ® is a registered trademark of Elsevier B.V.

