
A. J. Volponi
H. DePold
R. Ganguli
Pratt & Whitney,
400 Main Street,
East Hartford, CT 06108

C. Daguang
Beijing University of Aeronautics and Astronautics,
Beijing, China

The Use of Kalman Filter and Neural Network Methodologies in Gas Turbine Performance Diagnostics: A Comparative Study
The goal of gas turbine performance diagnostics is to accurately detect, isolate, and assess the changes in engine module performance, engine system malfunctions, and instrumentation problems from knowledge of measured parameters taken along the engine's gas path. The method has been applied to a wide variety of commercial and military engines in the three decades since its inception as a diagnostic tool and has enjoyed a reasonable degree of success. During that time many methodologies and implementations of the basic concept have been investigated, ranging from statistically based methods to those employing elements from the field of artificial intelligence. The two most publicized methods involve the use of either Kalman filters or artificial neural networks (ANN) as the primary vehicle for the fault isolation process. The present paper makes a comparison of these two techniques. [DOI: 10.1115/1.1419016]

Contributed by the International Gas Turbine Institute (IGTI) of THE AMERICAN SOCIETY OF MECHANICAL ENGINEERS for publication in the ASME JOURNAL OF ENGINEERING FOR GAS TURBINES AND POWER. Paper presented at the International Gas Turbine and Aeroengine Congress and Exhibition, Munich, Germany, May 8-11, 2000; Paper 2000-GT-0547. Manuscript received by IGTI November 1999; final revision received by ASME Headquarters February 2000. Associate Editor: D. Wisler.

Introduction
The goal of gas turbine performance diagnostics is to accurately detect, isolate, and assess the changes in engine module performance, engine system malfunctions, and instrumentation problems from knowledge of measured parameters taken along the engine's gas path. Discernable shifts in engine speeds, temperatures, pressures, fuel flow, etc., provide the requisite information for determining the underlying shift in engine operation from a presumed nominal state. Historically, this type of analysis was performed through the use of a Kalman filter or one of its derivatives to simultaneously estimate a plurality of engine faults. In the past decade, artificial neural networks (ANN) have been employed as a pattern recognition device to accomplish the same task. Both methods have enjoyed a reasonable degree of success.

The purpose of this paper is to outline the two methodologies, discuss their relative merits and weaknesses, and provide a direct comparison of the two techniques via a controlled computer simulation study. In the sequel, we will provide a brief general description of both the Kalman filter and the ANN as applied to the diagnostic problem. For the purpose of conducting a comparison, we will limit the framework of the diagnostic system to the problem of isolating a single fault to the component level. The single faults under consideration will comprise engine module, engine system, and instrumentation faults.

Kalman Filter Approach

Kalman filter methods were introduced as a fault isolation and assessment technique for relative engine performance diagnostics in the late 1970s and early 1980s [1-3]. The success enjoyed in these early programs promoted the use of these techniques in subsequent years to become the central methodology utilized in many current engine performance analysis programs. In the sequel we will briefly describe this procedure; however, for a more detailed discussion we refer the reader to the following sources: [4-13].

The general approach taken for engine fault diagnostics typically involves the use of a linearized model approximation evaluated at a selected engine operating point. This provides a matrix relationship between changes in engine component performance (independent parameters) and the attendant changes in typically measured engine parameters such as spool speeds, internal temperatures and pressures, fuel flow, etc. (dependent parameters). This relationship may be succinctly represented as

Δz = H Δx + θ    (1)

where Δz is a vector of measured parameter deltas, Δx is a vector of fault deltas, H is a matrix of fault influence coefficients, and θ is a random vector representing the uncertainties inherent in the measurement process. In addition to the precision of the individual sensors, it has been customary to address the potential for sensor bias and drift. Consequently, the fault vector given in the model above is often configured to contain components directly related to sensor error in addition to engine fault deltas.

The fault vector Δx given in the model can be thought of as the concatenation of an engine fault vector (Δx_e) and a sensor error fault vector (Δx_s), i.e., Δx = [Δx_e | Δx_s]^T, where

Δx_e = [Δη_FAN, ΔΓ_FAN, ... , ΔA_5]^T  and  Δx_s = [N1_err, P3_err, W_f err, ... , T49_err]^T    (2)

We may rewrite Eq. (1) as

Δz = H_e Δx_e + H_s Δx_s + θ = [H_e | H_s] [Δx_e ; Δx_s] + θ = H Δx + θ    (3)

The matrix H has been partitioned into two parts: a matrix of engine fault influence coefficients (H_e) and a matrix of sensor fault influence coefficients (H_s). The generation of these matrices and their interpretation have been discussed in great detail in [4,14] and elsewhere in the literature and will not be pursued in this paper.
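As a computational illustration of Eqs. (1)-(3), the short Python sketch below assembles a partitioned influence matrix and simulates one noisy measurement-delta vector. The matrix sizes, the random influence coefficients, and the noise level are placeholder assumptions for illustration only; they are not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration: 8 measured parameters, 10 engine fault
# states (module efficiency / flow capacity deltas), 8 sensor-error states.
n_meas, n_eng, n_sns = 8, 10, 8

H_e = rng.normal(size=(n_meas, n_eng))  # engine fault influence coefficients (placeholder)
H_s = np.eye(n_meas)                    # sensor errors map one-to-one onto their readings
H = np.hstack([H_e, H_s])               # partitioned influence matrix [H_e | H_s]

dx_e = np.zeros(n_eng)
dx_e[3] = 1.0                           # a hypothetical 1 percent fault on one engine state
dx_s = np.zeros(n_sns)                  # no sensor error in this example
dx = np.concatenate([dx_e, dx_s])       # fault vector Delta-x = [Delta-x_e | Delta-x_s]

sigma = 0.25 * np.ones(n_meas)          # assumed measurement nonrepeatability (1-sigma)
theta = rng.normal(scale=sigma)         # random measurement uncertainty, theta

dz = H @ dx + theta                     # Eq. (1)/(3): Delta-z = H Delta-x + theta
print(dz)
```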


Table 1  Sample root cause influences

Root cause   T49C2 (°C)  WF (%)  N2C2 (%)  N1C2 (%)  P25Q2 (%)  T25C2 (°C)  T3C2 (°C)  P3Q2 (%)
FAN             3.86      0.70     0.30      0.68      2.00       1.95        1.58       0.03
LPC             4.54      0.66     0.29      0.14      1.18       0.11        2.62       0.01
HPC             6.80      0.80     0.06      0.05      0.83       0.71        3.66       0.17
HPT            10.88      1.29     0.57      0.08      1.29       1.14        4.03       1.26
LPT             1.19      0.96     0.63      0.98      3.40       3.45        1.42       0.11
2.5 BLD         3.07      0.49     0.16      0.00      1.04       0.85        0.86       0.00
FP14            1.22      0.21     0.07      0.24      0.67       0.73        0.15       0.01
FP8             0.61      1.39     0.17      0.64      1.06       1.31        2.62       1.09
2.9 BLD         4.22      1.06     0.29      0.06      0.68       0.63        0.60       0.02
TCC            17.75      2.10     0.90      0.12      2.14       1.86        4.38       1.11
HPCSVM          0.95      0.11     0.39      0.00      0.08       0.09        0.34       0.09
P49 Error       0.33      1.70     0.25      0.46      0.55       0.63        0.21       1.22

The model, as configured above, has often been used for the purpose of tracking slowly occurring changes in engine performance from revenue flight data, through the use of a Kalman filter-based methodology. An estimate for these performance shifts, Δx̂, would be given by

Δx̂ = [Δx̂_e ; Δx̂_s] = [Δx̄_e ; Δx̄_s] + [D_e ; D_s] (Δz - [H_e | H_s] [Δx̄_e ; Δx̄_s]) = Δx̄ + D (Δz - H Δx̄)

i.e., Δx̂ = prediction + GAIN × (residual) = prediction + correction    (4)

where Δx̄ represents an a priori estimate of the engine/sensor fault deltas and D is the Kalman gain matrix, referred to as the diagnostic matrix. The diagnostic matrix is computed as a function of several quantities: the engine/sensor influence coefficients H, the measurement covariance matrix R, and a positive semidefinite weighting matrix P_0. The diagnostic matrix is computed as

D = [D_e ; D_s] = P_0 H^T [H P_0 H^T + R]^(-1), with D_e = P_0^(e) H_e^T [H_e P_0^(e) H_e^T + R]^(-1) and D_s = P_0^(s) H_s^T [H_s P_0^(s) H_s^T + R]^(-1)    (5)

where P_0^(e) and P_0^(s) are weighting submatrices for the engine and sensor fault estimation, respectively. An in-depth report on the generation of the P_0 and R matrices can be found in references [4,5].
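The snapshot estimator of Eqs. (4) and (5) can be written in a few lines of code. The sketch below is a minimal rendering under assumed values of P_0, R, and the a priori estimate; these numbers are placeholders and not the configuration used in the study.

```python
import numpy as np

def kalman_estimate(dz, H, P0, R, dx_prior=None):
    """One-shot estimate per Eqs. (4)-(5): prediction + gain * residual."""
    n = H.shape[1]
    if dx_prior is None:
        dx_prior = np.zeros(n)                       # zero a priori fault estimate
    # Diagnostic (Kalman gain) matrix:  D = P0 H^T (H P0 H^T + R)^-1
    D = P0 @ H.T @ np.linalg.inv(H @ P0 @ H.T + R)
    return dx_prior + D @ (dz - H @ dx_prior)        # Eq. (4)

# Hypothetical example: 8 measurements, 5 fault states, placeholder numbers.
rng = np.random.default_rng(1)
H = rng.normal(size=(8, 5))
R = np.diag((0.25 ** 2) * np.ones(8))                # measurement covariance
P0 = np.eye(5)                                       # weighting matrix
dz = H @ np.array([2.0, 0.0, 0.0, 0.0, 0.0]) + rng.normal(scale=0.25, size=8)
print(kalman_estimate(dz, H, P0, R))
```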
The use of the predictor/corrector methods like the Kalman
filter to estimate sensor error has made possible a more reliable
and consistent gas turbine module performance analysis. The procedure outlined above can be applied in a snapshot analysis or as
a continuing recursive analysis as new engine data are made available over time. In either scenario, the simultaneous determination
of both engine faults and measurement errors by this methodology
has been successfully applied to a large number of commercial
and military families of engines with varying instrumentation
suites for two decades.
In the sequel, we shall alter our emphasis to consider the problem of fault isolation, given the premise that a fault event has been detected. The problem of detection becomes one of recognizing a step or rate change in a gas path parameter or a collection of parameters. The problems associated with fault detection and the mechanisms which can be applied to accomplish this task have been reported in [15]. The types of faults that will be considered in this discussion include engine performance faults, engine system faults, and instrumentation faults.

Single Fault Isolation. The Kalman filter can be configured to emulate a single fault isolator (SFI). This is a snapshot type of
analysis in the sense that it operates on a set of measurement
deltas without any a priori information or pre-history. The object
of the analysis is to flag a root cause on the basis of a single
measurement delta set. Root causes are single fault occurrences
and are pre-defined for the system. They consist of coupled faults
within the major modules of the engine, certain system faults such
as handling and ECS bleed leaks and failures, variable stator vane
malfunctions, TCC malfunctions as well as certain instrumentation faults. These faults are assumed to occur in isolation, i.e.,
there will be one and only one root cause occurrence at any given
time. The purpose of the single fault isolator is to identify the
correct root cause once a trend shift is detected.
Root Causes. Root causes can be thought of as state variables, x_1, x_2, ..., x_n. They are represented within this system as vectors of measurement deltas, Δz_1*, Δz_2*, ..., Δz_n*, which are calculated by applying influence coefficients. As an illustration, we can consider the following 12 root causes. These root causes may exist at varying levels. The state representation of the root cause will be in the form of a one percent fault, to be consistent with the other influences.

1.  FAN: coupled FAN (1 percent Δη, 1.25 percent ΔFC)
2.  LPC: coupled LPC (1 percent Δη, 1.10 percent ΔFC)
3.  HPC: coupled HPC (1 percent Δη, 0.80 percent ΔFC)
4.  HPT: coupled HPT (1 percent Δη, 0.75 percent ΔFP4)
5.  LPT: coupled LPT (1 percent Δη, 1.65 percent ΔFP45)
6.  2.5 BLD: stability bleed leak (one percent)
7.  2.9 BLD: start bleed (one percent)
8.  FP14: fan discharge area (one percent)
9.  FP8: core discharge area (one percent)
10. TCC: turbine case cooling on
11. HPCSVM: HPC stator vane misrigging
12. P49 Error: P49 indication problem (two percent)

An example of the influences for these root causes is depicted in Table 1.
The actual root cause may appear as some multiple of the influences represented in Table 1. For instance, a 2.5 bleed root cause may manifest itself as a stuck open bleed (a 15 percent 2.5 bleed fault) or a partial bleed leak (say, a two percent 2.5 bleed fault). These two faults will be treated as the same by the Kalman estimator. The difference between the two faults is one of magnitude. The magnitude is estimated by the Kalman filter. Its ability to estimate correctly will depend on the signal-to-noise ratio for the given fault. In this particular case, a two percent 2.5 bleed is sometimes confused with an LPC fault or a 2.9 bleed fault. On the other hand, a stuck 2.5 bleed (14.77 percent) has a significantly higher S/N ratio and is estimated correctly. Another factor which impacts the accuracy of the estimator is the number of measured parameters. We will consider systems that have between four and eight measurements in flight.

Given the definitions above for root cause influences (one percent), a typical set of possible single faults may be constructed. For the purposes of this study, we will consider the single fault definitions depicted in Table 2.

If we represent these 11 faults as x_1, x_2, ..., x_11, and the matrix of influences by H* (the 9×8 matrix depicted above; the row dimension of H* is 9 and not 11 since the 2.5 BLD and 2.9 BLD root causes are used to model the two different magnitude faults being considered), then a set of expected measurement delta vectors, Δz_1*, Δz_2*, ..., Δz_11*, would be calculated by

Δz_i* = (H*)^T x_i,   for i = 1, 2, ..., 11    (6)

Each Δz_i* would be an 8×1 vector of measurement deltas.
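For concreteness, the following sketch evaluates Eq. (6) for the 11 single faults of Table 2. The influence matrix here is a random placeholder standing in for the Table 1 values, and the sign convention of the implanted magnitudes is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
H_star = rng.normal(size=(9, 8))       # placeholder 9x8 root-cause influence matrix

# Fault magnitudes from Table 2 (multiples of the one percent root-cause states);
# the 2.5 and 2.9 bleed faults each reuse a single root-cause signature.
magnitudes = [2, 2, 2, 2, 2, 2, 14.77, 6.74, 15.45, 6, 2]
cause_index = [0, 1, 2, 3, 4, 5, 5, 6, 6, 7, 8]

signatures = []
for mag, idx in zip(magnitudes, cause_index):
    x_i = np.zeros(9)
    x_i[idx] = mag
    signatures.append(H_star.T @ x_i)  # Eq. (6): Delta-z_i* = (H*)^T x_i, an 8x1 vector
signatures = np.array(signatures)      # 11 expected measurement-delta vectors
print(signatures.shape)
```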


The general form of the discrete Kalman filter estimator at time k+1 is as follows:

State extrapolation:       x̂_{k+1|k} = Φ_{k+1} x̂_k
Covariance extrapolation:  P_{k+1|k} = Φ_{k+1} P_k Φ_{k+1}^T + Q_{k+1}
Kalman gain:               D_{k+1} = P_{k+1|k} H_{k+1}^T [H_{k+1} P_{k+1|k} H_{k+1}^T + R_{k+1}]^(-1)
State update:              x̂_{k+1} = x̂_{k+1|k} + D_{k+1} [z_{k+1} - H_{k+1} x̂_{k+1|k}]
Covariance update:         P_{k+1} = [I - D_{k+1} H_{k+1}] P_{k+1|k}
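The recursion above is the standard discrete Kalman filter; a direct transcription in Python is given below as a generic sketch, with no engine-specific quantities assumed.

```python
import numpy as np

def kf_step(x, P, z, Phi, Q, H, R):
    """One discrete Kalman filter step: extrapolation, gain, and update."""
    x_pred = Phi @ x                                   # state extrapolation
    P_pred = Phi @ P @ Phi.T + Q                       # covariance extrapolation
    S = H @ P_pred @ H.T + R
    D = P_pred @ H.T @ np.linalg.inv(S)                # Kalman gain
    x_new = x_pred + D @ (z - H @ x_pred)              # state update
    P_new = (np.eye(len(x)) - D @ H) @ P_pred          # covariance update
    return x_new, P_new
```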

Table 2  Single fault problem set

Fault                     Magnitude (percent)
FAN                       2
LPC                       2
HPC                       2
HPT                       2
LPT                       2
2.5 BLD (low and high)    2 and 14.77
2.9 BLD (low and high)    6.74 and 15.45
HPCSVM                    6
P49 Error                 2

For the single fault isolator we make the following adjustments:

Φ_{k+1} = I
Q_{k+1} = 0
H_{k+1} = M_c (H*)^T
M_c = measurement configuration matrix = diag(m_1, m_2, ..., m_8), a diagonal matrix assuming eight potential measurements, where m_j = 1 if the jth measurement is available and m_j = 0 otherwise.

The effect of the measurement configuration matrix is to zero the rows of the root cause influence matrix corresponding to the measurements that are NOT available. The resulting matrix of influences is 8×9 for this example.

The single fault isolation is obtained by processing the general Kalman equations iteratively to provide a snapshot analysis for each of the root causes under consideration (11 in this example). Each call to the Kalman filter will be made with a different P_0 matrix chosen to accentuate the kth root cause. Since these are snapshot analyses, the covariance update calculation is not required. The a priori state estimate is also assumed to be zero. An SFI analysis is typically performed after a trend shift has been detected in the measurement deltas at some discrete time, say, between time k and k+1. The delta-delta (ΔZ_{k+1} - ΔZ_k) constitutes the input measurement delta, ΔZ, to the SFI.

The SFI is evaluated iteratively for each single fault (11 in the above example). This process will yield estimates for each single fault under consideration. It is necessary to rank each of these estimates and determine the top two or three single faults. The measure used to compare estimates for this purpose of ranking is a normalized measurement error norm, which we will describe below. The single fault admitting the minimum error is deemed the most likely; the second smallest error, the next likely; and so forth.

As mentioned above, the normalized measurement error will take into consideration the measurement nonrepeatabilities of the system. It is assumed that these are known a priori or computed from data during initialization of the diagnostic system. However the values are obtained, they are assumed to be known and are passed to the Kalman filter in the form of a positive definite matrix R. The diagonal elements of this matrix represent the variances of the measurement deltas (corrected quantities). Thus, if we represent the measurement delta-delta vector between time k and k+1 (assuming that a trend shift has been detected during this time period) by ΔZ = [Δz_1, Δz_2, ..., Δz_8]^T, then diag(R) = [σ_1^2, σ_2^2, ..., σ_8^2] represents the individual variances. For definiteness, assume we have calculated the nth single fault estimate. The associated normalized measurement error norm, e_n, is calculated as

e_n = [ Σ_{k=1}^{8} (Δz_k - Δẑ*_k)^2 / σ_k^2 ]^(1/2)    (7)

where Δẑ*_k is the kth element of the vector M_c (H*)^T x̂_n and x̂_n is the SFI estimate for the nth root cause.
It is possible for the estimated values for some single faults to be opposite in polarity from what would reasonably be expected. (Although this is not a common occurrence, it is possible for an SF estimate indicating an improvement in performance to admit the minimum measurement error norm and hence win the isolation selection.) For example, an HPT SF of magnitude -2 percent might yield an estimated value of +0.5 percent for an LPC SFI. Given that a sudden shift in observed gas path parameters has taken place during engine operation, it is not likely that the condition of any given module has improved, and thus a positive shift in performance would not be given serious consideration. Thus, some preprocessing is mandated before the error ranking is performed. This takes the form of perusing each of the SFI estimates and considering only those which admit a reasonable polarity. Ordering the errors from minimum to maximum, we obtain

e_{i_1} ≤ e_{i_2} ≤ ... ≤ e_{i_n}    (8)

corresponding to the single fault estimates

x̂_{i_1}, x̂_{i_2}, ..., x̂_{i_n}    (9)

in order of likelihood. In most cases, the first of these ranked single faults is deemed to be the underlying fault and is reported to the user. In some instances, however, it may be prudent to report the first and second SFs, since there exists the possibility of an erroneous fault identification, especially if the associated error norms are close.

Depending on the circumstances, the SFI may experience confusion between two single faults. The aliasing of SFs depends on the single faults themselves, their relative magnitude with respect to the nonrepeatability of the measurements, and the number of measurements available for the analysis. All of these factors can contribute to a confounding of the underlying SF with another SF.
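Gathering the pieces above, the sketch below emulates the single fault isolator: each root cause is estimated in turn with a weighting matrix that accentuates it, the normalized error norm of Eq. (7) is evaluated, estimates of unexpected polarity are screened out, and the survivors are ranked. The weighting scheme, polarity convention, and all numerical settings are illustrative assumptions rather than the configuration used in the study.

```python
import numpy as np

def sfi_rank(dZ, H_star, sigma, mc_mask, expected_sign):
    """Rank candidate single faults for one measurement delta-delta vector dZ.

    dZ            : observed delta-delta vector (8,)
    H_star        : root-cause influence matrix, one row per root cause (n, 8)
    sigma         : measurement nonrepeatability, 1-sigma (8,)
    mc_mask       : 1/0 availability flags for the 8 measurements (M_c diagonal)
    expected_sign : expected polarity of each root-cause estimate (+1 or -1)
    """
    Mc = np.diag(mc_mask)
    H = Mc @ H_star.T                      # H = M_c (H*)^T, unavailable rows zeroed
    R = np.diag(sigma ** 2)
    ranked = []
    for n in range(H_star.shape[0]):
        # P0 chosen to accentuate the nth root cause (illustrative weighting)
        p0 = np.full(H_star.shape[0], 1e-4)
        p0[n] = 1.0
        P0 = np.diag(p0)
        D = P0 @ H.T @ np.linalg.inv(H @ P0 @ H.T + R)
        x_hat = D @ dZ                     # snapshot estimate, zero a priori state
        dz_star = H @ x_hat                # predicted measurement deltas
        e_n = np.sqrt(np.sum(mc_mask * (dZ - dz_star) ** 2 / sigma ** 2))  # Eq. (7)
        if x_hat[n] == 0 or np.sign(x_hat[n]) == expected_sign[n]:         # polarity screen
            ranked.append((e_n, n, x_hat[n]))
    return sorted(ranked)                  # smallest error norm = most likely single fault
```

The minimum-error entry corresponds to the reported first-choice fault; the next entries give the second and third choices.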


Table 3  SFI accuracy: Kalman filter, 8 measurements

Fault      1st      Top 2    Top 3
FAN        100%     100%     100%
LPC        90%      100%     100%
HPC        100%     100%     100%
HPT        100%     100%     100%
LPT        100%     100%     100%
2.5 BLD    85%      100%     100%
2.9 BLD    96.7%    100%     100%
HPCSVM     100%     100%     100%
P49err     100%     100%     100%
Total      96.9%    100%     100%

Table 4  SFI accuracy: Kalman filter, 4 measurements

Fault      1st      Top 2    Top 3
FAN        100%     100%     100%
LPC        90%      90%      100%
HPC        100%     100%     100%
HPT        100%     100%     100%
LPT        100%     100%     100%
2.5 BLD    50%      75%      95%
2.9 BLD    80.0%    100.0%   100%
HPCSVM     100%     100%     100%
P49err     100%     100%     100%
Total      91.1%    96.1%    99.4%

To mitigate the possibility of a false identification, some post-processing of the SFI results may be required. This will improve the overall accuracy of the system. The rules to be applied would be empirically motivated and would be suggested by computer simulation test cases. For the problem set considered in this study (Table 2), it was not necessary to undergo this type of post-processing.
Computer Simulation Results. Using Eq. (1), we can generate a set of hypothetical noisy measurements for each of the 11 single faults under consideration (x_i, i = 1, 2, ..., 11). These noisy measurement vectors (Δz_k) are then passed through the SFI process, and the results are ranked and tabulated for first, second, and third ranked faults. These results appear in Tables 3 and 4 for eight and four measurement set systems, respectively, where the four bleed leak faults have been combined into two faults. (The four measurement set used in this study consisted of the flight-required parameters T49, Wf, N1, and N2.) The entries indicate the percentage of time the implanted fault is correctly annunciated as first, second, and third choice.
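Accuracy entries of the kind shown in Tables 3 and 4 can be tabulated with a small Monte Carlo harness such as the sketch below; the ranking function, trial count, and noise level are assumptions supplied by the caller (for instance, the hypothetical sfi_rank sketch shown earlier).

```python
import numpy as np

def sfi_hit_rates(rank_faults, signatures, sigma, n_trials=100, seed=0):
    """Fraction of trials in which the implanted fault ranks 1st, in the top 2, in the top 3.

    rank_faults(dZ) must return candidate fault indices ordered most to least likely.
    signatures is the set of noise-free fault signatures (one per implanted fault).
    """
    rng = np.random.default_rng(seed)
    hits = np.zeros(3)
    for _ in range(n_trials):
        for true_idx, dz_true in enumerate(signatures):
            dZ = dz_true + rng.normal(scale=sigma)      # Eq. (1): add nonrepeatability
            order = rank_faults(dZ)
            for k in range(3):                          # credit 1st / top-2 / top-3
                if true_idx in order[:k + 1]:
                    hits[k] += 1
    return hits / (n_trials * len(signatures))
```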
The results demonstrate a 96.9 percent and 91.1 percent accuracy for the first choice fault isolation for the eight and four measurement systems, respectively. If we weaken the accuracy criterion to that of being correct within the top two choices, the relative precision of the isolation increases to 100 percent and 96.1 percent. In either case, the isolation accuracy is quite good.

The fact that we can obtain a 91 percent hit rate with only four measurements raises the question as to why one would ever consider adding more instrumentation. There are several answers to this question, one being that a multiple fault performance assessment would not perform as well with fewer measurements. But even within the constraints of this study (i.e., single fault annunciation), the "more is better" argument still holds if we consider the robustness of the system. One measure of robustness would be to vary the module couplings (flow capacity versus efficiency, a hard coupled relationship) in the implanted faults beyond that which is assumed in the numerical model (i.e., the influence coefficients H*). Figures 1 and 2 depict the impact on first choice accuracy for the eight and four measurement systems, respectively, when the coupling factor for the compression modules (FAN, LPC, HPC) is randomly varied. (The isolation accuracy of the turbine modules, HPT and LPT, was essentially unaffected and hence is not plotted.) The LPC and HPC clearly exhibit greater robustness to modeling assumptions in the 8 measurement system.

Fig. 1  Coupling factor impact on 8 measurement system

Fig. 2  Coupling factor impact on 4 measurement system
In the sequel, we will briefly discuss artificial neural networks (ANNs) and their usage in engine performance diagnostics. The discussion will be intentionally brief since there already exists ample documentation of this methodology in the literature, and the reader is directed to the references for additional detail. For the purpose of making a direct comparison with the Kalman filter methodology, we will confine much of the discussion to the single fault isolation problem introduced in the prelude. The identical computer simulation test will be used as the vehicle for the evaluation.

Artificial Neural Network Approach


The fault isolation problem can be considered to be a pattern
classification problem. N-dimensional vectors in an
N-dimensional space represent the system response. The system
response for different faults tends to be partitioned into different
regions of this space and can be regarded as patterns. Pattern
recognition involves learning these partitions, from simulated or
real data, so that a given system response can be classified as a
particular fault. Neural networks represent a powerful pattern recognition technique, and have been applied for fault detection of

complex systems such as aerospace vehicles [16], nuclear power plants [17], and chemical process facilities [18], among others. A key advantage of neural networks over other methods is their ability to recognize relationships between patterns despite the presence of noise contamination and/or partial information [19].
Most applications of ANN to fault diagnostics follow a common process. The ANN is trained off-line on fault signatures relating the changes in system measurements from a good baseline to system faults. Typically, faults are embedded into the
computer simulations, or real fault data is used, or a combination
of both. In case simulated data is used, noise must be added to
make the simulations realistic. Such a training process where the
ANN is presented input and output data by the system designer is
known as supervised learning. Once the ANN has been properly
trained using this process of supervised learning, it can analyze
data that are different from those it was originally exposed to
during the training sessions. When the trained ANN is placed
on-line, it recognizes a similar response from the actual system.

The discussion below involves the use of two types of neural networks for engine fault diagnostics: a feedforward ANN trained using the back-propagation (BP) algorithm, and a hybrid neural network.
Back-Propagation (BP) Algorithm. While there are several types of neural networks, multilayer feedforward networks trained using the back-propagation algorithm have emerged as the most widely used. Figure 3 illustrates the schematic of a feedforward neural network, which consists of an input layer, an output layer, and one or more hidden layers. The number of neurons in the input and output layers is determined by the number of input measurements and output parameters. The number of hidden layer nodes is selected based on the convergence criterion and the characteristics of the input-output mapping relationship.
A three-layer feedforward network is used for the present work, as shown in Fig. 3. The feedforward network is trained using supervised learning, which involves presenting input-output pairs to the ANN and then using the BP algorithm to learn the relationships between the inputs and outputs by minimizing the following error measure:

E = Σ_{k=1}^{N} E_k    (10)

in which E_k represents the root mean square error associated with the kth training sample and where N represents the number of samples that are used for training the network. The BP algorithm uses a gradient search to perform the nonlinear optimization needed to minimize the error. Further details about the BP algorithm are outlined in standard texts on neural networks [20].
To improve the ANN's ability to deal with data scatter, the input data were normalized using the following formula:

Y_i^n = Y_i^m / (Y_i^max σ_i)

where Y_i is the ith monitoring parameter; the superscripts n, m, and max denote the normalized, measured, and maximum possible value, respectively; and σ_i is the standard deviation of the ith monitoring parameter.
Table 5  SFI accuracy: ANN, 8 measurements

Fault      1st      Top 2    Top 3
FAN        100%     100%     100%
LPC        60%      80%      90%
HPC        100%     100%     100%
HPT        100%     100%     100%
LPT        100%     100%     100%
2.5 BLD    80%      85%      85%
2.9 BLD    86.7%    86.7%    86.7%
HPCSVM     90%      100%     100%
P49err     100%     100%     100%
Total      90.7%    94.6%    95.7%

Fig. 3  Architecture of three layer feedforward ANN

Table 6  SFI accuracy: ANN, 4 measurements

Fault      1st      Top 2    Top 3
FAN        100%     100%     100%
LPC        90%      90%      90%
HPC        90%      100%     100%
HPT        100%     100%     100%
LPT        100%     100%     100%
2.5 BLD    75%      75%      75%
2.9 BLD    86.7%    86.7%    86.7%
HPCSVM     100%     100%     100%
P49err     100%     100%     100%
Total      93.5%    94.6%    94.6%

The standard deviations and the influence coefficients used for
the ANN testing are the same as those used for the Kalman filter
described previously. For comparison of fault isolation results
with four inputs and eight inputs, 20 training cases and 50 testing
cases were generated. The training cases were used for the BP
algorithm to train the neural network. Once the neural network
was trained, the test cases were used to evaluate the performance
of the neural network. Data used for training was not used for
testing the neural network. The diagnostic results were considered
for the three highest outputs, which represent the three most likely
faults.
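To make the supervised training setup concrete, the sketch below trains a small single-hidden-layer feedforward network by plain gradient descent (a bare-bones stand-in for the BP algorithm) on synthetic fault signatures. Layer sizes, learning rate, noise level, and the randomly generated influence coefficients are all assumptions for illustration; they do not reproduce the networks or data used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic training set: noisy measurement-delta vectors (inputs) labeled with
# their implanted fault (one-hot outputs). Sizes are illustrative only.
n_meas, n_faults, n_hidden = 8, 9, 12
H_star = rng.normal(size=(n_faults, n_meas))          # placeholder influences
X, T = [], []
for _ in range(20):                                   # 20 training cases, as in the text
    fault = rng.integers(n_faults)
    X.append(H_star[fault] + rng.normal(scale=0.25, size=n_meas))
    T.append(np.eye(n_faults)[fault])
X, T = np.array(X), np.array(T)

# One hidden layer with sigmoid activations; linear output layer.
def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

W1 = rng.normal(scale=0.1, size=(n_meas, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_faults)); b2 = np.zeros(n_faults)

lr = 0.05
for epoch in range(2000):                             # gradient descent on squared error
    Hh = sigmoid(X @ W1 + b1)                         # hidden activations
    Y = Hh @ W2 + b2                                  # network outputs
    dY = (Y - T) / len(X)                             # d(error)/d(output)
    dW2 = Hh.T @ dY; db2 = dY.sum(axis=0)
    dH = (dY @ W2.T) * Hh * (1 - Hh)                  # back-propagate through sigmoid
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Diagnosis: the three highest outputs are taken as the three most likely faults.
test = H_star[4] + rng.normal(scale=0.25, size=n_meas)
scores = sigmoid(test @ W1 + b1) @ W2 + b2
print(np.argsort(scores)[::-1][:3])
```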
The test results for eight and four measurements using the BP ANN are shown in Tables 5 and 6, respectively. For both the eight and four measurement cases, the Kalman SFI results shown in Tables 3 and 4 are better than the BP ANN results. For the eight measurement case, the Kalman SFI has a 100 percent accuracy in fault isolation among the top three choices, compared to 95.7 percent for the BP ANN. For the four measurement case, the Kalman SFI has an accuracy of 99.4 percent in fault isolation among the top three choices, compared to 94.6 percent for the BP ANN.
Hybrid Neural Network Algorithm. The BP ANN is handicapped relative to the Kalman SFI in some ways. For example, while the Kalman SFI uses influence coefficients in the form of the H matrix to define the model, the BP ANN uses data generated from influence coefficients to learn the model.

A Hybrid NN is a network architecture where one or more ANN functions are replaced by an algorithm that includes domain knowledge [21]. The objective is to substitute features in the neural network architecture that are already analytically understood. This avoids the need for training the ANN to learn information which is already known. For example, instead of using training data based on influence coefficients, the Hybrid ANN uses the influence coefficients as part of the network model.

Table 7  SFI accuracy: hybrid NN, 8 measurements

Fault      1st      Top 2    Top 3
FAN        100%     100%     100%
LPC        90%      100%     100%
HPC        100%     100%     100%
HPT        100%     100%     100%
LPT        100%     100%     100%
2.5 BLD    90%      100%     100%
2.9 BLD    100%     100%     100%
HPCSVM     100%     100%     100%
P49err     100%     100%     100%
Total      97.8%    100%     100%

Table 8  SFI accuracy: hybrid NN, 4 measurements

Fault      1st      Top 2    Top 3
FAN        90%      100%     100%
LPC        90%      90%      100%
HPC        100%     100%     100%
HPT        100%     100%     100%
LPT        100%     100%     100%
2.5 BLD    70%      90%      100%
2.9 BLD    86.7%    86.7%    86.7%
HPCSVM     70%      100%     100%
P49err     100%     100%     100%
Total      91.1%    97.8%    100%

A Gaussian nearest neighbor function was substituted for the
ANN feature identification function and the fault mapping function was ignored for this Hybrid NN algorithm. The output of the
network was the root-sum-of-the-squares number of standard deviations from a perfect match of the fault pattern. The output of
the network was used directly to rank the faults.
A test was made to determine if the neural network mapping to
the faults could also be optimized. Up to 72 weightings were
available between the eight measurement features and the nine
faults. Thirty-six weightings were available with four measurements. While it was demonstrated that the weightings could be
optimized, the tests were run with all the weightings set to unity.
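One way to read this hybrid scheme in code is sketched below: each candidate fault signature is compared with the observed deltas, the output is the root-sum-square number of standard deviations from a perfect match, and the faults are ranked directly on that output. Unity weightings follow the text; the function signature and all other details are illustrative assumptions.

```python
import numpy as np

def hybrid_rank(dZ, signatures, sigma, weights=None):
    """Rank faults by root-sum-square standard-deviation distance to each signature.

    dZ         : observed measurement deltas (m,)
    signatures : expected deltas for each candidate fault (n_faults, m)
    sigma      : measurement standard deviations (m,)
    weights    : optional per-measurement weightings (unity in the reported tests)
    """
    if weights is None:
        weights = np.ones_like(sigma)
    # Gaussian nearest-neighbor style score: deviations in units of sigma
    scores = np.sqrt(((((dZ - signatures) / sigma) ** 2) * weights).sum(axis=1))
    return np.argsort(scores)              # smallest score = closest fault pattern
```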
The results from the Hybrid neural net are shown in Tables 7
and 8, for eight and four measurements, respectively. The accuracy of the Hybrid neural network compares favorably with the
Kalman SFI. For the eight measurements case, both the Kalman
SFI and the Hybrid ANN show a fault isolation accuracy among
the top three choices of 100 percent. For the four measurement
case, the Kalman SFI shows a fault isolation accuracy among the
top three choices of 99.4 percent compared to 100 percent for the
Hybrid ANN.
In general, the Hybrid ANN gives better results than the BP
ANN. This may be because the hybrid uses the influence coefficients for each fault within the network, whereas the BP ANN has
to learn the influence coefficients from the training data.

Kalman Filter and Neural Network Methods


Feedforward neural networks are typically made of interconnected nonlinear neurons and are particularly useful where the
input-output relationship is nonlinear. The Kalman filter and the
Hybrid neural network assume a linearized model of the system.
In the present application, the linear model takes the form of
influence coefficients. For a feedforward ANN, BP learning for
linear problems in engine fault diagnostics will generally result in
poorer performance compared to a Kalman filter or hybrid ANN
approach.
Learning in neural networks involves mapping an input to an output. The inputs and outputs can be generated from models, from real data, or a combination of both. ANNs can therefore also be model-free estimators, a quality that is very useful if modeling information such as influence coefficients is not available. Kalman filters, on the other hand, are model-based estimators, and are suitable for problems in engine performance diagnostics where influence coefficients are available as the model.
Neural networks are not limited to multilayer neurons with BP training. Self-organizing maps based on competitive learning [22], simulated annealing based on statistical thermodynamics [23], Boltzmann learning [24], and radial basis functions [25] are some of the other developments in ANN which may be applicable to engine diagnostics. Combining the Kalman filter approach with some of the ANN methods may yield superior results for the engine diagnostic problem than those obtainable with either methodology acting alone.

Concluding Remarks
Test results have suggested that the back-propagation neural
network, the hybrid neural network, and the Kalman filter method
are highly accurate for isolating single gas turbine fault symptoms. Furthermore, the results also indicate that these methodologies compare favorably in terms of accuracy, with a very slight
advantage going to the Kalman filter approach.
Each method has its own advantages and disadvantages.
The ANN is inherently nonlinear and can be used in applications where model information is scarce or lacking altogether. The
ANNs are, however, data driven and therefore must be trained.
The training is typically performed offline in a supervised fashion,
meaning that the input-output relationship is known. In the present
application this means that the underlying faults in the training
data are already known. This could be a drawback if real engine
data was used for training, since the precise disposition of the
fault may or may not be known. If the engine configuration and/or
instrumentation noise levels change, the ANN approach would
require that a re-training be performed. Once trained, the ANN
architecture provides a numerically simple, and hence fast diagnostic operator suitable for real-time application.
The Kalman filter is a linear model-based estimator and is suitable in those instances where a linear model is available and is known to be a reasonably accurate representation of the input-output relationship. In the engine performance diagnostics application, influence coefficients have, historically, been fairly accurate and robust linear models. The Kalman filter approach utilizes all model information available (a priori estimate information, measurement noise information, etc.) and can be easily configured to operate with different measurement suites and fault configurations: single fault, as in the present study, or multiple fault isolation systems. Adaptive measures are also available to allow real-time reconfiguration of the Kalman filter to changing measurement noise levels.
The hybrid neural network, like the Kalman filter, is a model-based method utilizing influence coefficients as the primary linear model in a neural network architecture framework. For the single-fault isolation problem referenced in this study, the hybrid NN closely resembles a weighted least squares solution. This explains the close agreement between the hybrid NN and the Kalman filter in the single-fault isolation computer simulation study.

Acknowledgment
The authors wish to thank Zhang Jin, Zhang Mingchuan, and
Zhu Zhili of the Department of Jet Propulsion and Power at the
Beijing University of Aeronautics and Astronautics as well as Lu
Pong-jeu and Hsu Tzu-cheng of the Institute of Aeronautics and
Astronautics, National Cheng Kung University in Taiwan for their
work in developing the ANN Fault Analysis Application under
contract with Pratt & Whitney.

Nomenclature

ANN = artificial neural network
FAN = fan module
LPC = low pressure compressor
HPC = high pressure compressor
HPT = high pressure turbine
LPT = low pressure turbine
2.5 BLD = stability bleed
2.9 BLD = start bleed
TCC = turbine case cooling
N1 = low spool speed
N2 = high spool speed
T49 = exhaust gas temperature
P3 = HPC exit pressure
P49 = exhaust pressure
HPCSVM = variable stator vane
SFI = single fault isolator
H = influence coefficient matrix
D = Kalman gain matrix
R = measurement covariance matrix
P = state covariance matrix
Φ = state transition matrix
x = state vector
z = measurement vector
η = efficiency
Γ = flow capacity
FC = flow capacity
A = turbine nozzle area

Subscripts

e = relating to engine parameters
s = relating to sensor parameters
0 = initial condition
k = discrete time k

Superscripts

T = matrix transpose
-1 = matrix inverse

References
[1] Urban, L. A., 1974, "Parameter Selection for Multiple Fault Diagnostics of Gas Turbine Engines," AGARD Conference Proceedings, No. 165, NATO, Neuilly-sur-Seine, France.
[2] Urban, L. A., 1972, "Gas Path Analysis Applied to Turbine Engine Condition Monitoring," AIAA/SAE Paper 72-1082.
[3] Volponi, A., 1983, "Gas Path Analysis: An Approach to Engine Diagnostics," Time-Dependent Failure Mechanisms and Assessment Methodologies, Cambridge University Press, Cambridge, UK.
[4] Volponi, A. J., and Urban, L. A., 1992, "Mathematical Methods of Relative Engine Performance Diagnostics," SAE Trans., 101, Journal of Aerospace, Technical Paper 922048.
[5] Doel, D. L., 1992, "TEMPER: A Gas Path Analysis Tool for Commercial Jet Engines," ASME Paper 92-GT-315.
[6] Doel, D. L., 1993, "An Assessment of Weighted-Least-Squares Based Gas Path Analysis," ASME Paper 93-GT-119.
[7] Stamatis, A., et al., 1991, "Jet Engine Fault Detection With Discrete Operating Points Gas Path Analysis," J. Propul. Power, 7, No. 6.
[8] Merrington, G. L., 1993, "Fault Diagnosis in Gas Turbines Using a Model Based Technique," ASME Paper 93-GT-13.
[9] Glenny, D. E., 1988, "Gas Path Analysis and Engine Performance Monitoring in a Chinook Helicopter," AGARD: Engine Condition Monitoring, NATO, Neuilly-sur-Seine, France.
[10] Winston, H., et al., 1991, "Integrating Numeric and Symbolic Processing for Gas Path Maintenance," AIAA Paper 91-0501.
[11] Luppold, R. H., et al., 1989, "Estimating In-Flight Engine Performance Variations Using Kalman Filter Concepts," AIAA Paper 89-2584.
[12] Gallops, G. W., et al., 1992, "In-Flight Performance Diagnostic Capability of an Adaptive Engine Model," AIAA Paper 92-3746.
[13] Kerr, L. J., et al., 1991, "Real-Time Estimation of Gas Turbine Engine Damage Using a Control Based Kalman Filter Algorithm," ASME Paper 91-GT-216.
[14] "Sensor Error Compensation in Engine Performance Diagnostics," 1994, ASME Paper 94-GT-058.
[15] DePold, H., and Gass, F. D., 1998, "The Application of Expert Systems and Neural Networks to Gas Turbine Prognostics and Diagnostics," ASME Paper 98-GT-101.
[16] McDuff, R. J., and Simpson, P. K., 1990, "An Investigation of Neural Networks for F-16 Fault Diagnosis," Proceedings of the SPIE Technical Symposium on Intelligent Information Systems, The International Society for Optical Engineering, Bellingham, WA.
[17] Guo, Z., and Uhrig, R. E., 1992, "Using Modular Neural Networks to Monitor Accident Conditions in Nuclear Power Plants," Proceedings of the SPIE Technical Symposium on Intelligent Information Systems, The International Society for Optical Engineering, Bellingham, WA.
[18] Watanabe, K., et al., 1989, "Incipient Fault Diagnosis of Chemical Processes via Artificial Neural Networks," AIChE J., 35, No. 11.
[19] Holmstrom, L., and Koistinen, P., 1992, "Using Additive Noise in Back-Propagation Training," IEEE Trans. Neural Netw., 3, No. 1.
[20] Haykin, S., 1994, Neural Networks: A Comprehensive Foundation, Macmillan, New York.
[21] Dasarathy, B. V., 1991, Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques, IEEE Computer Society Press, New York.
[22] Kohonen, T., 1990, "The Self-Organizing Map," Proc. IEEE, 78, IEEE, New York.
[23] Kirkpatrick, S., 1984, "Optimization by Simulated Annealing: Quantitative Studies," J. Stat. Phys., 34.
[24] Ackley, D. H., Hinton, G. E., and Sejnowski, T. J., 1985, "A Learning Algorithm for Boltzmann Machines," Cognitive Science, Vol. 9, Cognitive Science Society, Cincinnati, OH.
[25] Broomhead, D. S., and Lowe, D., 1988, "Multivariable Functional Interpolation and Adaptive Networks," Complex Systems, Vol. 2, Complex Systems Publications, Champaign, IL.
[26] Cybenko, G., 1989, "Approximation by Superpositions of a Sigmoidal Function," Mathematics of Control, Signals and Systems, Vol. 2.

